Physics Algebra

Contents

Preface

1 Motivation
1.1 Classical mechanics
1.2 Relativity theory
1.3 Statistical mechanics
1.4 Hamiltonian mechanics
1.5 Quantum mechanics
1.7 The Schrödinger picture
1.8 The Heisenberg picture
1.9 Outline of the book

2 The simplest quantum system
2.7 The Hamiltonian form of a Lie algebra
2.8 Atomic energy levels and unitary groups
2.9 Qubits and Bloch sphere
2.10 Polarized light and beam transformations
2.11 Spin and spin coherent states
2.12 Particles and detection probabilities
2.13 Photons on demand
2.14 Unitary representations of SU(2)

3.19 Casimirs
3.20 Unitary representations of the Poincaré group
3.21 Some representations of the Poincaré group
3.22 Elementary particles
3.23 The position operator

4 From the theoretical physics FAQ
4.1 To be done
4.8 What is a photon?

6 Spectral analysis
6.2 Probing the spectrum of a system
6.3 The early history of quantum mechanics
6.4 The spectrum of many-particle systems
6.5 Black body radiation
6.6 Derivation of Planck's law
6.7 Stefan's law and Wien's displacement law

Part II Statistical mechanics
7 Phenomenological thermodynamics

Part III
11 Lie algebras

Part IV Nonequilibrium thermodynamics
14 Markov Processes

Part VI
20.1 The classical harmonic oscillator
20.2 Quantizing the harmonic oscillator
20.3 Representations of the Heisenberg algebra
20.4 Bras and Kets
20.5 Boson Fock space
20.6 Bargmann–Fock representation
20.7 Coherent states for the harmonic oscillator
20.8 Monochromatic beams and coherent states
Preface
This book presents classical mechanics, quantum mechanics, and statistical mechanics in an
almost completely algebraic setting, thereby introducing mathematicians, physicists, and
engineers to the ideas relating classical and quantum mechanics with Lie algebras and Lie
groups.
The book should serve as an appetizer, inviting the reader to go more deeply into these
fascinating, interdisciplinary fields of science.
Much of the material covered here is not part of standard textbook treatments of classical or
quantum mechanics (or is only superficially treated there). For physics students who want
to get a broader view of the subject, this book may therefore serve as a useful complement
to standard treatments of quantum mechanics.
We motivate everything as far as possible by classical mechanics. This forced an approach to quantum mechanics close to Heisenberg's matrix mechanics, rather than the usual approach dominated by Schrödinger's wave mechanics. Although both approaches are formally equivalent, only the Heisenberg approach to quantum mechanics has any similarity with classical mechanics; and as we shall see, the similarity is quite close. Indeed, the present book emphasizes the closeness of classical and quantum mechanics, and the material is selected in a way to make this closeness as apparent as possible.
Almost without exception, this book is about precise concepts and exact results in classical mechanics, quantum mechanics, and statistical mechanics. The structural properties of mechanics are discussed independently of computational techniques for obtaining quantitatively correct numbers from the assumptions made. This allows us to focus attention on the simplicity and beauty of theoretical physics, which is often hidden in a jungle of techniques for estimating or calculating quantities of interest. The standard approximation machinery for calculating from first principles explicit thermodynamic properties of materials, or explicit cross sections for high energy experiments, can be found in many textbooks and is not repeated here.
Compared with the 2008 version, most of Chapters 2–3 and all of Chapters 14–18 are new;
the remaining chapters were slightly improved.
The book originated as course notes from a course given by the first author in fall 2007, written up by the second author, and expanded and polished by combined efforts, resulting in a uniform whole that stands for itself. Parts II and IV are mainly based on earlier work by the first author (including Neumaier [203, 205]); and large parts of Part I were added later. The second author acknowledges support by the Austrian FWF projects START-project Y-237 and IK 1008-N. Thanks go to Roger Balian, Clemens Elster, Martin Fuchs, Johann Kim, Mihaly Markot, Mike Mowbray, Hermann Schichl, Peter Schodl, and Tapio Schneider, who contributed through their comments on earlier versions of parts of the book.
The audience of the course consisted mainly of mathematics students shortly before finishing
their diploma or doctorate degree and a few postgraduates, mostly with only a limited
background knowledge in physics.
Thus we assume some mathematical background knowledge, but only a superficial acquaintance with physics, at the level of what is available to readers of the Scientific American,
say. It is assumed that the reader has a good command of matrix algebra (including complex numbers and eigenvalues) and knows basic properties of vector spaces, linear algebra,
groups, differential equations, topology, and Hilbert spaces. No background in Lie algebras, Lie groups, or differential geometry is assumed. Rudiments of differential geometry
would be helpful to expand on our somewhat terse treatment of it in Part V; most material,
however, is completely independent of differential geometry.
While we give precise definitions of all mathematical concepts encountered (except in Chapter 4, which is taken verbatim from the theoretical physics FAQ), and an extensive index
of concepts and notation, we avoid the deeper use of functional analysis and differential
geometry without being mathematically inaccurate, by concentrating on situations that
have no special topological difficulties and only need a single chart. But we mention where
one would have to be more careful about existence or convergence issues when generalizing
to infinite dimensions.
On the physics side, we usually first present the mathematical models for a physical theory
before relating these models to reality. This is adequate both for mathematically-minded
readers without much physics knowledge and for physicists who know already on a more
elementary level how to interpret the basic settings in terms of real life examples.
This is an open-ended book. It should whet the appetite for more, and lead the reader deeper into the subject.1 Thus many topics are discussed far too briefly for a comprehensive treatment, and often only the surface is scratched. A term has only so many hours, and our time to extend and polish the lectures after they were given was limited, too. We added some material, and would have liked to be more complete in many
respects. Nevertheless, we believe that the topics treated are the fundamental ones, whose
understanding gives a solid foundation to assess the wealth of material on other topics.
We usually introduce physical concepts by means of informal historical interludes, and only
discuss simple physical situations in which the relevant concepts can be illustrated. We
refer to the general situation only by means of remarks; however, after reading the book,
the reader should be able to go deeper into the original literature that treats these topics
in greater physical depth.
Part I is an invitation to quantum mechanics, concentrating on giving motivation and
background from history, from classical mechanics, and from simple, highly symmetric
quantum systems. The latter are used to introduce the most basic Lie algebras and Lie
groups. Part II gives a thorough treatment of the formal part of equilibrium statistical
mechanics, viewed in the present context as the common core of classical and quantum
1
Some general references for further reading: Barut & Raczka [28], Cornwell [68], Gilmore [104], and Sternberg [260] for the general theory of Lie algebras, Lie groups, and their representations from a physics point of view; Wybourne [296] and Fuchs & Schweigert [95] for a more application-oriented view of Lie algebras; Kac [145] and Neeb [200] for infinite-dimensional Lie algebras; Papoušek & Aliev [211] for quantum mechanics and spectroscopy; van der Waerden [276] for the history of quantum mechanics; and Weinberg [284] for a (somewhat) Lie algebra oriented treatment of quantum field theory.
mechanics, and discusses the interpretation of the theory in terms of models, statistics and
measurements. Part III introduces the basics about Lie algebras and Poisson algebras,
with an emphasis on the concepts most relevant to the conceptual side of physics. Part IV
discusses the dynamics of nonequilibrium phenomena, i.e., processes where the expectation
changes with time, in as far as no fields are involved. This results in a dissipative dynamics.
Part V introduces the relevant background from differential geometry and applies it to
classical Hamiltonian and Lagrangian mechanics, to a symplectic formulation of quantum
mechanics, and to Lie groups. Part VI applies the concepts to the study of quantum
oscillators (bosons) and spinning systems (fermions), and to the analysis of empirically
observed spectra, concentrating on the mathematical contents of these subjects. The book
concludes with numerous references and an index of all concepts and symbols introduced.
For a more detailed overview of the topics treated, see Section 1.9.
We hope that you enjoy reading this book!
Wien, August 7, 2011
Arnold Neumaier, Dennis Westra
Part I
An invitation to quantum mechanics
Chapter 1
Motivation
Part I is an invitation to quantum mechanics, concentrating on giving motivation and
background from history, from classical mechanics, and from 2-state quantum mechanics.
The first chapter is an introduction and serves as a motivation for the following chapters.
We shall go over different areas of physics and give a short glimpse on the mathematical
point of view.
The final section of the chapter outlines the content of the whole book.
For mathematicians, most of the folklore vocabulary of physicists may not be familiar, but later on in the book, precise definitions in mathematical language will be given. Therefore, there is no need to understand everything in the first chapter on first reading; we merely introduce informal names for certain concepts from physics and try to convey the impression that these have important applications to reality, and that there are many interesting solved and unsolved mathematical problems in many areas of theoretical physics.1
1.1
Classical mechanics
Classical mechanics is the part of physics whose development started around the time of
Isaac Newton (1642-1727).
It was in the period of Newton, Leibniz, and Galileo that classical mechanics was born,
mainly studying planetary motion. Newton wanted to understand why the earth seems to
circle around the sun, why the moon seems to circle around the earth, and why apples (and
other objects) fall down. By analyzing empirical data, he discovered a formula explaining
most of the observed phenomena involving gravity. Newton realized that the laws of physics
here on earth are the same as the laws of physics determining the motion of the planets.
This was a major philosophical breakthrough.
1
We encourage readers to investigate for themselves some of the abundant literature to get a better
feeling and more understanding than we can offer here.
The motion of a planet is described by its position and velocity at different times. With
the laws of Newton it was possible to deduce a set of differential equations involving the
positions and velocities of the different constituents of the solar system. Knowing exactly
all positions and velocities at a given time, one could in principle deduce the positions and
velocities at any other time.
Our solar system is a well-posed initial value problem (IVP). However, an initial error in positions and velocities at time t_0 = 0 grows exponentially, at time t > 0 by a factor of e^{λt}. The value of λ varies for different initial conditions; its maximum is called the maximal Lyapunov exponent. A system with maximal Lyapunov exponent λ = 0 is called integrable. If λ < 0, the solutions converge to each other, and if λ > 0, the solutions move away from each other. The solar system is apparently not quite integrable: according to numerical simulations, the maximal Lyapunov exponent for our solar system seems to be small but positive, with λ^{−1} being about five million years (Laskar [170, 171], Lissauer [177]).
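To make this concrete, here is a minimal numerical sketch (our illustration, not part of the original text) that estimates a maximal Lyapunov exponent by tracking the separation of two nearby trajectories; the Lorenz system serves as a stand-in chaotic system, since integrating the full solar system would be far more involved:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # classic chaotic system, standing in for a non-integrable IVP
    x, y, z = u
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

u0 = np.array([1.0, 1.0, 1.0])
d0 = 1e-8                                   # tiny initial error
v0 = u0 + np.array([d0, 0.0, 0.0])
T = 15.0                                    # short enough to avoid saturation
su = solve_ivp(lorenz, (0, T), u0, dense_output=True, rtol=1e-10, atol=1e-12)
sv = solve_ivp(lorenz, (0, T), v0, dense_output=True, rtol=1e-10, atol=1e-12)

dT = np.linalg.norm(su.sol(T) - sv.sol(T))
print(np.log(dT / d0) / T)   # crude estimate of lambda; ~0.9 for Lorenz
```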
Frequently, instead of considering separately time-dependent positions and velocities of
many objects (e.g., planets, atoms, or particles) in a system, it is more convenient to work
with single trajectories, paths parameterized by time, in a high-dimensional space called
the phase space of the system. In the case of the planetary system with N planets, the
points in phase space are described by vectors with 6N components, grouped into N pairs consisting of three components for position and three components for momentum (velocity multiplied by mass) of each particle. One reason one prefers momentum over velocity is that the total momentum of all particles is conserved, i.e., remains constant in time. A deeper reason, which will become apparent later, is that on the level of position and momentum, the similarity between classical and quantum mechanics is most apparent. For
a single particle moving in space, there are three spatial directions to specify its position
and three directions to specify the velocity. Hence the phase space of a (free) particle is
six-dimensional; for a system of N astronomical bodies, the dimension is 6N.
Low-dimensional phase spaces are well understood in general. Newton showed that the configuration of a single planet moving around the sun is stable (in fact, the system is integrable) and that the motion follows Kepler's laws, which were already known before, thus giving these a theoretical basis. Higher-dimensional phase spaces tend to cause problems. Indeed, for more planets (that is, more than 2 bodies in the system), deviations from elliptic motions are predicted, and the question of stability was open for a long time. The Swedish king Oskar II offered a large monetary reward to the scientist who could prove the stability of our solar system.
However, Poincaré showed that already three objects (one sun, two planets) cause big problems for a possible stability proof of our solar system, and received the prize in 1889. The numerical studies from the end of the last century (quoted above) strongly indicate that the solar system is unstable, though a mathematical proof is missing.
We now turn from celestial mechanics, where phase space is finite-dimensional, to continuum mechanics, which has to cope with infinite-dimensional phase spaces. For example,
to describe a fluid, one needs to give the distribution of mass and energy and the local ve-
locity for all (infinitely many) points in the fluid. The dynamics is now governed by partial
differential equations. In particular, fluid mechanics is dominated by the Navier–Stokes equations, which still present a lot of difficult mathematical problems. Showing that solutions exist for all times (not only short-time solutions) is one of the Clay Millennium problems (see, e.g., Ladyzhenskaya [166]), and will be rewarded by one million dollars.
The infinitely many dimensions of the phase space cause serious additional problems. The Lyapunov exponents now depend on where the fluid starts in phase space, and for fast-flowing fluids the maximal Lyapunov exponent is much larger than zero in most regions of phase space. This results in a phenomenon called turbulence, well known from the behavior of water. The notion of turbulence is still not well understood mathematically. Surprisingly enough, the problems encountered with turbulence are of the same kind as the problems encountered in quantum field theories (QFTs), one of the many instances where a problem in classical mechanics has an analogue in quantum physics.
Another area of continuum mechanics is elasticity theory, where solids are treated as continuous, nearly rigid objects, which deform slightly under external forces. The rigidity assumption is easily verified empirically; try to swim in metal at room temperature... Due to the rigidity assumption, the behavior is much better understood mathematically than in the fluid counterpart.
The configuration of a solid is close to equilibrium and the deviations from the equilibrium
position are strongly suppressed (this is rigidity). Hence the rigidity assumption implies
that linear Taylor approximations work well since the remaining terms of the Taylor series
are small, and the Lyapunov exponent is zero.
Elasticity theory is widely applied in engineering practice. Modern bridges and high rise
buildings would be impossible without the finite element analyses which determine their
stability and their vibration modes. In fact, the calculations are done in finite-dimensional
discretizations, where much of the physics is reducible to linear algebra. Indeed, all continuum theories are (and have to be) handled computationally in approximations with
only finitely many degrees of freedom; in most areas very successfully. The mathematical
difficulties are often related to establishing a valid continuum limit.
1.2
Relativity theory
In the period between 1900 and 1920, classical mechanics was enriched with special relativity theory (SRT) and general relativity theory (GRT). In SRT and GRT, space
and time merge into a four-dimensional manifold, called space-time.
In SRT, space-time is flat. Distances in space and time are measured with the Minkowski metric, an indefinite metric (discussed in more detail in Section 3.13) which turns space-time into a pseudo-Riemannian manifold. Different observers (in SRT) all see the same speed of light. But they see the same distances only when measured with the Minkowski metric, not with the Euclidean spatial or temporal metric (which also holds for the orthogonality mentioned above). It follows that the spatial separation and the temporal separation between localized systems (for example, a chicken laying an egg and an atom splitting) are different for different observers! But the difference is observable only when the two systems move at widely different velocities; hence Newton couldn't have noticed this deviation from his theory.
In classical mechanics, time is absolute in the sense that there exists a global time up to time shifts; the time difference between two events is the same in every coordinate system. The symmetry group of classical space-time is thus the group generated by time translations, space translations, and rotations. This group is the Galilean group. The experimental fact that the speed of light in vacuum is the same for all observers led Einstein to the conclusion that time is not absolute and that the Galilean group should be enlarged with transformations that rotate space and time coordinates into each other. The result was the theory of special relativity. Due to special relativistic effects in the quantum theory, the world indeed looks different; for example, without special relativity, gold would be white, and mercury would be solid at room temperature (Norrby [208]).
SRT is only valid if observers move at fixed velocities with respect to each other. To handle
observers whose relative velocities may vary requires the more general but also more complex
GRT. The metric now depends on the space-time point; it becomes a nondegenerate symmetric bilinear form on the space-time manifold. The transformations (diffeomorphisms)
relating the metric in one patch to the metric in another patch cannot change the signature.
Hence the signature is the same for all observers.
The changing metric has the effect that in GRT, space-time is no longer flat, but has curvature. That means that freely moving objects do not follow straight lines; in fact, the notion of what 'straight' means is blurred. The trajectory that an object follows in free fall, when no forces are exerted on it, is called a geodesic. The geodesics are determined by the geometry by means of a second-order differential equation. The preferred direction of time on a curved space-time is now no longer fixed, or as mathematicians say canonical, but is determined by the observer: The geodesic along the observer's 4-momentum vector defines the world line of the observer (e.g., a measuring instrument) and with it its time; the space-like surfaces orthogonal to the points on the world line define the observer's 3-dimensional space at each moment. When the observer also defines a set of spatial coordinates around its position, and a measure of time (along the observer's geodesic), one can say that a chart around the observer has been chosen.
When time becomes an observer-dependent quantity, so does energy. Local energy
conservation is still well defined, described by a conservation law for the resulting differential
equations. The differential equations are covariant, meaning that they make sense in any
coordinate system. For a large system in general relativity, the definition of a total energy
which is conserved, i.e., time-independent, is however problematic, and well-defined only if
the system satisfies appropriate boundary conditions such as asymptotic flatness, believed
to hold for the universe at large. Finally, if the system is dissipative, there is energy loss,
and the local conservation law is no longer valid. Not even the rate of energy loss is well
defined. Dissipative general relativity has not yet found its final mathematical form.
1.3
Statistical mechanics
where the integral indicates integration with respect to the so-called Liouville measure in
phase space.
In the quantum version of statistical mechanics, the density ρ gets replaced by a linear operator ρ on a Hilbert space, called the density matrix; the functions become linear operators, and we have again (1.1), except that the integral is now interpreted as the quantum integral

∫ f := tr f,  (1.2)
We shall see that the algebraic properties of the classical integral and the quantum integral
are so similar that using the same name and symbol is justified.
A deeper justification for the quantum integral becomes visible if we introduce the Lie product²

f ∠ g := {g, f} in the classical case,
f ∠ g := (i/ℏ) [f, g] in the quantum case,  (1.3)

where

{f, g} := ∂_q f ∂_p g − ∂_q g ∂_p f

is the Poisson bracket on the algebra E = C^∞(Ω) of smooth functions on phase space Ω = R^{3N} × R^{3N}, and

[f, g] := f g − g f

is the quantum commutator on the algebra E = Lin C^∞(R^{3N}) of linear operators on the space of smooth functions on configuration space. (Here i = √−1; complex numbers will figure prominently in this book! And ℏ is Planck's constant in the form introduced by Dirac [74]; Planck had used instead the constant h = 2πℏ, which caused many additional factors of 2π in formulas discovered later.) The Lie product is in both cases an antisymmetric bilinear map from E × E to E and satisfies the Jacobi identity; see Chapter 11 for precise definitions.

2
The symbol ∠, frequently used in the following, is interpreted as a stylized capital letter L and should be read as "Lie".
In the classical case, the fact that integration and differentiation are inverse operations implies that the integral of a derivative of a function vanishing at infinity is zero. The traditional definition of the Poisson bracket therefore implies that, for functions f, g vanishing at infinity,

∫ f ∠ g = 0.  (1.4)
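In the quantum case, (1.4) is simply the cyclicity of the trace: the quantum integral of any Lie product vanishes. A quick numerical spot-check (our illustration, with ℏ = 1 and random Hermitian matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, hbar = 5, 1.0

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

f, g = random_hermitian(n), random_hermitian(n)
lie_fg = 1j / hbar * (f @ g - g @ f)   # quantum Lie product (i/hbar)[f, g]
print(abs(np.trace(lie_fg)))           # ~1e-15: the integral of a Lie product is zero
```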
Thus we see that there is a very close parallel between the classical and the quantum case.
Indeed, statistical mechanics is the area where classical mechanics and quantum mechanics
meet most closely, and hence an area of central interest for our book. This field, growing out
of the desire to seek a more fundamental understanding of thermodynamics, was developed especially during the industrial revolution in England. Maxwell wrote many papers on a mathematical foundation of thermodynamics. With the establishment of a molecular world view, the thermodynamical machinery slowly got replaced by statistical mechanics, where macroscopic properties like heat capacity, entropy, and temperature were explained through considerations of the statistical properties of a big population of particles. The first definitive treatise on statistical thermodynamics is by Gibbs [102], who also invented much of the modern mathematical notation in physics, especially the notation for vector analysis.
Quantum mechanics and classical mechanics look almost the same when viewed in the context of statistical mechanics; indeed, Gibbs' account of statistical mechanics had to be altered very little after quantum mechanics revolutionized the whole of science. In this
course, we shall always emphasize the closeness of classical and quantum reasoning, and
put things in a way to make this closeness as apparent as possible.
1.4
Hamiltonian mechanics

In Hamiltonian mechanics, the time evolution in phase space is given by the differential equations

q̇ = ∂H/∂p,  ṗ = −∂H/∂q,  (1.5)

where H is the Hamiltonian function of the system. Important to note is that the Hamiltonian function determines the time evolution. If we can solve these differential equations, this defines an operator U(s, t) that maps objects at a time s to corresponding objects at a time t.
Clearly, the composition of the operators gives U(s, s′)U(s′, t) = U(s, t). If the Hamiltonian is independent of time (this amounts to assuming that there are no external forces acting on the system), the maps U form a so-called one-parameter group, since one can write

U(s, t) = e^{(t−s) ad_H},

with the associated Hamiltonian vector field ad_H, which is determined by H. The vector field ad_H generates shifts in time. In terms of ad_H, the multiplication is given by

e^{t ad_H} e^{s ad_H} = e^{(t+s) ad_H},

and the inverse is given by

U(r, s)^{−1} = U(s, r),  (e^{t ad_H})^{−1} = e^{−t ad_H}.
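These group relations are easy to verify numerically when ad_H is replaced by a finite matrix A, a simplifying assumption that sidesteps all functional-analytic issues (a sketch, not the book's construction):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))    # finite-dimensional stand-in for ad_H
U = lambda tau: expm(tau * A)      # one-parameter group U(tau) = e^{tau A}
s, t = 0.3, 0.7

print(np.allclose(U(t) @ U(s), U(t + s)))        # e^{tA} e^{sA} = e^{(t+s)A}
print(np.allclose(np.linalg.inv(U(t)), U(-t)))   # (e^{tA})^{-1} = e^{-tA}
```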
From a mathematical point of view, the physical Hamiltonian is just one of many Hamiltonian functions that can be used in the above discussion. Given another Hamiltonian function H′, we get another Hamiltonian vector field ad_{H′}, another one-parameter group, and a time parameter t′ with a different physical interpretation (if one exists). For example, if H′ is a component of the momentum vector (or the angular momentum vector), then t′ corresponds to a translation (or rotation) in the corresponding coordinate direction by a distance (or angle) t′. Combining these groups for all H′ for which the initial value problem determined by ad_{H′} is well posed, we get an infinite-dimensional Lie group. (See Sections 11.3 and 17.7 for a definition of Lie groups.) Thus, classical mechanics can be understood in terms of infinite-dimensional groups!
To avoid technical complications, we shall however mainly be concerned with the cases
where we can simplify the system such that the groups are finite-dimensional. In the
present case, to obtain a finite-dimensional group one either picks a nice subgroup (this
involves understanding the symmetries of the system) or one makes a partial discretization
of phase space.
Most of our discussions will be restricted to conservative systems, which can be described by
Hamiltonians. However, these only describe systems which can be regarded as isolated from
the environment, apart from influences that can be specified as external forces. Many real life systems (and strictly speaking all systems with the exception of the universe as a whole) interact with the environment; indeed, if it were not so, they would be unobservable!
Ignoring the environment is possible in so-called reduced descriptions, where only the variables of the system under consideration are kept. This usually results in differential equations which are dissipative. In a dissipative system, the energy dissipates, which means
that some energy is lost into the unmodelled environment. Due to the energy loss, going back in time is not well defined in infinite-dimensional spaces; the initial value problem is solvable only in the forward time direction. Hence we cannot find an inverse for the translation operators U(s, t), and these are defined only for s ≤ t. Therefore, in the most
general case of interest, the dissipative infinitesimal generators do not generate a group,
but only a semigroup. A well-known example is heat propagation, described by the heat
equation. Its solution forward in time is a well-posed initial value problem, whereas the solution backward in time suffers from uncontrollable instability. Actually, many dissipative
systems are not even described by a Hamiltonian dynamics, but the semigroup property of
the flow they generate still remains valid.
1.5
Quantum mechanics
with classical measurement equipment), one regards it as the distribution for finding the particle at the position where such a detector responds.
In particular, in the case of a single particle, the probability density for observing a detector response at x is |ψ(x)|². Physicists and chemists occasionally view a scaled version of the probability distribution |ψ|² as a charge density. The justification is that in a population of a great number of particles that are all subject to the same Schrödinger equation, the particles will distribute themselves more or less according to the probability distribution of a single particle.
In a first course on quantum mechanics one postulates the time-dependent Schrödinger equation

iℏ (d/dt) ψ = Hψ  (1.6)

for a single particle described by the wave function ψ, where H is the Hamiltonian, now an operator. The Schrödinger equation describes the dynamics of the wave function and thus of the particle. Given a solution ψ of the Schrödinger equation, normalized to satisfy ψ*ψ = 1 (where ψ* is the adjoint linear functional, in finite dimensions the conjugate transpose), one obtains a density operator ρ = ψψ*, which is a Hermitian, positive semidefinite rank-one operator of trace tr ρ = ψ*ψ = 1. This type of density operator characterizes so-called pure states; the nomenclature coincides here with that of the mathematical theory of C*-algebras.
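In finite dimensions, these properties of the pure-state density operator can be checked in a few lines (our sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)            # normalize: psi* psi = 1

rho = np.outer(psi, psi.conj())       # density operator rho = psi psi*
print(np.allclose(rho, rho.conj().T))       # Hermitian
print(np.isclose(np.trace(rho).real, 1.0))  # trace one
print(np.linalg.matrix_rank(rho))           # rank one
print(np.allclose(rho @ rho, rho))          # idempotent: a pure state
```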
In quantum mechanics, the classical functions are replaced by corresponding operators defined on a dense subspace of a suitable separable Hilbert space. For example, the momentum in the x-direction of a particle described by a wave function ψ(x) can be described by the operator −iℏ∂_x. As we shall see, this process of quantization has interesting connections to the representation theory of Lie algebras. Using the correspondence between classical functions and operators, one deduces the Hamiltonian for an electron of the hydrogen atom, the basis for an explanation of atomic physics and the periodic system of elements.
The hydrogen atom is the quantum version of the 2-body problem of Newton and is the simplest of a large class of problems for which one can explicitly get the solutions of the Schrödinger equation; it is integrable in a sense paralleling the classical notion. Unfortunately, integrable systems are not very frequent; they seem to exist only for problems with finitely many degrees of freedom, for quantum fields living on a 2-dimensional space-time, and for noninteracting theories in higher dimensions. (Whether there are exactly solvable interacting local 4-dimensional field theories is an unsolved problem.) Nevertheless, the hydrogen atom and other integrable systems are very important since one can study in these simple models the features which in some approximate way still hold for more complicated physical systems.
1.7
The Schrödinger picture
The Hamiltonian H has the spectral decomposition

H = Σ_k E_k φ_k φ_k*,  (1.7)

where the E_k are the eigenvalues and the φ_k the corresponding orthonormal eigenvectors, the solutions of the time-independent Schrödinger equation

Hφ = Eφ,  (1.8)

and the ground state is the solution of minimal energy E_0. With our normalization of energies, E_0 = 0 and hence Hφ_0 = 0, implying that the ground state is a time-independent solution of the time-dependent Schrödinger equation (1.6). The other eigenvectors φ_k lead to time-dependent solutions φ_k(t) = e^{−itE_k/ℏ} φ_k, which oscillate with the angular frequency ω_k = E_k/ℏ. This gives Planck's basic relation

E = ℏω  (1.9)

relating energy and angular frequency. (In terms of the ordinary frequency ν = ω/2π and Planck's original constant h = 2πℏ, this reads E = hν.) The completeness of the spectrum and the superposition principle now imply that for a nondegenerate spectrum (where all energy levels are distinct),

ψ(t) = Σ_k α_k e^{−iω_k t} φ_k

is the general solution of the time-dependent Schrödinger equation. (In the degenerate case, a more complicated confluent formula is available.) Thus the time-dependent Schrödinger equation is solvable in terms of the time-independent Schrödinger equation, or equivalently with the spectral analysis of the Hamiltonian. This is the reason why spectra are at the center of attention in quantum mechanics. The relation to observed spectral lines, which gave rise to the name spectrum for the list of eigenvalues, is discussed in Chapter 23.
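For a Hamiltonian given as a finite Hermitian matrix, this reduction is just an eigendecomposition; the following sketch (ours, with ℏ = 1) propagates a state by attaching the phases e^{−iω_k t} to the eigenmodes and checks the result against direct exponentiation:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
hbar, n = 1.0, 4
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (a + a.conj().T) / 2               # Hermitian Hamiltonian
E, V = np.linalg.eigh(H)               # H phi_k = E_k phi_k

psi0 = np.eye(n, dtype=complex)[:, 0]

def psi(t):
    alpha = V.conj().T @ psi0                        # expand psi0 in eigenmodes
    return V @ (np.exp(-1j * E * t / hbar) * alpha)  # attach phases, resum

print(np.allclose(psi(0.9), expm(-1j * H * 0.9 / hbar) @ psi0))
```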
The spectral decomposition (1.7) also provides the connection to quantum statistical mechanics. A thorough discussion of equilibrium statistical mechanics emphasizing the quantum-classical correspondence will be given in Part II. Here we only scratch the surface. Under sufficiently idealized conditions, a thermal quantum system is represented as a so-called canonical ensemble, characterized by a density operator of the form

ρ = e^{−βH},  β = (kT)^{−1},

with T the temperature and k the Boltzmann constant, a tiny constant with approximate value 1.38 · 10^{−23} J/K. Hence we get

ρ = Σ_k e^{−βE_k} φ_k φ_k* = φ_0 φ_0* + e^{−βE_1} φ_1 φ_1* + ...  (1.10)

At room temperature, T ≈ 300 K, hence β ≈ 2.5 · 10^{20} J^{−1}. Therefore, if the energy gap E_1 − E_0 is not too small, it is enough to keep a single term, and we find that ρ ≈ φ_0 φ_0*. Thus the system is approximately in the ground state.
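The orders of magnitude are easily reproduced; for a gap of one electron volt, a typical atomic scale (our choice of example), the Boltzmann weight of the first excited state is utterly negligible:

```python
import numpy as np

k = 1.38e-23                # Boltzmann constant, J/K
T = 300.0                   # room temperature, K
beta = 1.0 / (k * T)
gap = 1.602e-19             # energy gap E_1 - E_0 of 1 eV, in J

print(beta)                 # ~2.4e20 J^{-1}
print(np.exp(-beta * gap))  # ~1e-17: the e^{-beta E_1} term is negligible
```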
The fact that the ground state is the most relevant state is the basis for fields like quantum chemistry: for the calculation of electron configurations and the corresponding energies of molecules at fixed positions of the nuclei, it suffices to know the ground state. An exception is to be made for laser chemistry, where a few excited states become relevant. To compute the ground state, one must solve (1.8) for the electron wave function φ_0, which, because of the minimality condition, is a global optimization problem in an infinite-dimensional space. The Hartree–Fock method and its generalizations are used to solve this problem in some discretization, and the various solution techniques are routinely available in modern quantum chemistry packages.
Applying the Schrödinger equation (1.6) to the pure state ρ = ψψ* and noting that H* = H, one finds that

iℏ (d/dt) ρ = iℏ (ψ̇ ψ* + ψ ψ̇*) = (iℏ ψ̇) ψ* − ψ (iℏ ψ̇)* = (Hψ) ψ* − ψ (Hψ)* = Hρ − ρH.  (1.11)

1.8
The Heisenberg picture
In the beginning of quantum mechanics there were two independent formulations: the Heisenberg picture (discovered in 1925 by Heisenberg) and the Schrödinger picture (discovered in 1926 by Schrödinger). Although the formulations seemed very different at first, they were quickly shown to be completely equivalent.
In the Schrödinger picture, the physical configuration is described by a time-dependent state vector in a Hilbert space, and the observables are time-independent operators on this Hilbert space. In the Heisenberg picture it is the other way around: the vector is time-independent and the observables are time-dependent operators.
The connection between the two pictures comes from noting that everything in physics that is objective, in the sense that it can be verified repeatedly, is computable from expectation values (here representing averages over repeated observations), and that the time-dependent expectations

⟨f⟩_t = ∫ ρ(t) f = ∫ f ρ(t)  (1.12)
(remember that the quantum integral (1.2) is a trace!) in the Schrödinger picture can be alternatively written in the Heisenberg picture as

⟨f⟩_t = ∫ ρ f(t) = ∫ f(t) ρ.  (1.13)
The traditional view of classical mechanics corresponds to the Heisenberg picture: the observables depend on time and the density is time-independent. However, both pictures can be used in classical mechanics, too.
To transform the Heisenberg picture description to the Schrödinger picture, we note that the Heisenberg expectations (1.13) satisfy

(d/dt) ⟨f⟩_t = ∫ ρ ḟ(t) = ∫ ρ {f(t), H} = ⟨{f, H}⟩_t,

giving the dynamical law

(d/dt) ⟨f⟩_t = ⟨{f, H}⟩_t  (1.14)
for the expectations. An equivalent description in the Schrödinger picture expresses the same dynamical law using the Schrödinger expectations (1.12). To deduce the dynamics of ρ(t), we need the following formula, which can be justified for concrete Poisson brackets with integration by parts:

∫ {f, g} h = ∫ f {g, h};  (1.15)

cf. (1.19) below. Using this, we find as consistency condition that (d/dt) ⟨f⟩_t = ∫ f ρ̇(t) and ⟨{f, H}⟩_t = ∫ {f, H} ρ(t) = ∫ f {H, ρ(t)} must agree for all f. This dictates the classical Liouville equation

(d/dt) ρ(t) = {H, ρ(t)}.  (1.16)
In the quantum case, the Heisenberg and Schrödinger formulations are equivalent if the dynamics of f(t) is given by the quantum Heisenberg equation

(d/dt) f(t) = (i/ℏ) [H, f(t)].

To check this, one may proceed in the same way as we did for the classical case above. Using the Lie product notation (1.3), the dynamics for expectations takes the form

(d/dt) ⟨f⟩_t = ⟨H ∠ f⟩_t,

the Heisenberg equation becomes

(d/dt) f(t) = H ∠ f(t),  (1.17)

and the Liouville equation (1.16) becomes

(d/dt) ρ(t) = ρ(t) ∠ H.  (1.18)

(Note that here H appears in the opposite order!) These formulas hold whether we consider classical or quantum mechanics.
We find the remarkable result that these equations look identical whether we consider
classical or quantum mechanics; moreover, they are linear in f although they encode all the
intricacies of nonlinear classical or quantum dynamics. Thus, on the statistical level,
classical and quantum mechanics look formally simple and identical in structure.
The only difference lies in the definition of the Lie product and the integral.
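For a finite-dimensional quantum system, this identity of the two pictures on the statistical level can be verified directly: moving the state forward or the observable backward gives the same expectation. A numerical sketch (ours, with ℏ = 1):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
hbar, n, t = 1.0, 3, 1.3

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

H, f = random_hermitian(n), random_hermitian(n)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

U = expm(-1j * H * t / hbar)
rho_t = U @ rho @ U.conj().T        # Schroedinger picture: the state evolves
f_t = U.conj().T @ f @ U            # Heisenberg picture: the observable evolves

print(np.isclose(np.trace(rho_t @ f), np.trace(rho @ f_t)))  # same <f>_t
```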
The connection is in fact much deeper; we shall see that classical mechanics and quantum mechanics are two applications of the same mathematical formalism. At the present stage, we get additional hints for this by noting that, as we shall see later, both the classical and the quantum Lie product satisfy the Jacobi identity

f ∠ (g ∠ h) + g ∠ (h ∠ f) + h ∠ (f ∠ g) = 0,

hence define a Lie algebra structure; cf. Section 11.1. They also satisfy the Leibniz identity

g ∠ (f h) = (g ∠ f) h + f (g ∠ h)

characteristic of a Poisson algebra; cf. Section 12.1. Integrating the Leibniz identity and using (1.4) gives 0 = ∫ g ∠ (f h) = ∫ ((g ∠ f) h + f (g ∠ h)) = −∫ (f ∠ g) h + ∫ f (g ∠ h), hence the integration by parts formula

∫ f (g ∠ h) = ∫ (f ∠ g) h.  (1.19)

Readers having some background in Lie algebras will recognize (1.19) as the property that ∫ f g defines a bilinear form with the properties characteristic for the Killing form of a Lie algebra, again both in the classical and the quantum case. Finally, the Poisson algebra of quantities carries, both in the classical case and in the quantum case, an intrinsic real structure, given by an involutive mapping f ↦ f* satisfying f** = f and natural compatibility relations with the algebraic operations: f* is the complex conjugate in the classical case, and the adjoint in the quantum case.
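All three identities can be spot-checked numerically for the quantum Lie product (our sketch, with ℏ = 1 and random Hermitian matrices, the integral read as a trace):

```python
import numpy as np

rng = np.random.default_rng(5)
hbar, n = 1.0, 4

def random_hermitian(n):
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2

lie = lambda f, g: 1j / hbar * (f @ g - g @ f)   # quantum Lie product (i/hbar)[f, g]
f, g, h = (random_hermitian(n) for _ in range(3))

# Jacobi identity
print(np.allclose(lie(f, lie(g, h)) + lie(g, lie(h, f)) + lie(h, lie(f, g)), 0))
# Leibniz identity
print(np.allclose(lie(g, f @ h), lie(g, f) @ h + f @ lie(g, h)))
# integration by parts (1.19)
print(np.isclose(np.trace(f @ lie(g, h)), np.trace(lie(f, g) @ h)))
```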
Thus the common structure of classical mechanics and quantum mechanics is encoded in the algebraic structure of a Poisson *-algebra. This algebraic structure is built up in the course of the book, and then exploited to analyze one of the characteristic quantum features of nature: the spectral lines visible in light emanating from the sun, or from some chemical compound brought into the flame of a Bunsen burner.
1.9
Outline of the book
The goal of this book is to introduce the ideas relating quantum mechanics, Lie algebras and Lie groups, motivating everything as far as possible by classical mechanics. We shall mostly be concerned with systems described by a finite-dimensional phase space; the infinite-dimensional case is too difficult for a presentation at the level of this book. However, we present the concepts in such a way that they are valid even in infinite dimensions, and select the material so that it provides some insight into the infinite-dimensional case.
Chapter 2 discusses the mathematics and physics of the 2-level system, the simplest quantum system. It describes a number of important physical situations: Systems having only
two energetically accessible energy eigenstates (e.g., 2-level atoms), the spin of a single
particle such as an electron or a silver atom, the two polarization degrees of freedom of
light, the isospin symmetry between proton and neutron, and the qubit, the smallest unit
of quantum information theory. From a mathematical point of view, this is essentially the
theory of the Lie group SU(2) and its Lie algebra su(2); therefore we introduce along the
way basic concepts for matrix groups and their Lie algebras.
Chapter 3 discusses the mathematics of the most important symmetries found in our universe, and their associated Lie groups and Lie algebras: The rotation group and the group
of rigid motions in physical space R3 , the Heisenberg groups describing systems of point
particles, the Galilei group and the Poincare group describing the symmetry of Newtonian
and Minkowski space-time, the Lorentz group describing basic features of the theory of
relativity, and some more groups describing the hydrogen atom, the periodic system of
elements, a model for nuclei, and quarks. Currently, parts of this chapter are only very
sketchy.
Chapter 4 currently contains a number of sections quoted verbatim from the Theoretical Physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html; the material there must be integrated into the other chapters of the book (mostly into Chapter 3), a task still to be done.
Chapter 5 discusses systems of classical oscillators, starting with ordinary differential equations modeling nonlinearly coupled, damped oscillators, and introducing some notions from
classical mechanics the Hamiltonian (energy), the notion of conservative and dissipative
dynamics, and the notion of phase space. We then look in more detail into the single oscillator case, the classical anharmonic oscillator, and show that the phase space dynamics can
be represented both in terms of Hamiltons equations, or in terms of Poisson brackets and
the classical Heisenberg equation of motion. Since the Poisson bracket satisfies the Jacobi
identity, this gives the first link to Lie algebras. Considering the special case of harmonic
oscillators, we show that they naturally arise from eigenmodes of linear partial differential
equations describing important classical fields: The Maxwell equations for beams of light
and gamma rays, the Schrödinger equation and the Klein–Gordon equation for nonrelativistic and relativistic beams of alpha rays, respectively, and the Dirac equation for beams
of beta rays.
Chapter 6 relates the dynamics of arbitrary systems to those of oscillators by coupling the
latter to the system, and exploring the resulting frequency spectrum. The observation that
experimental spectra often have a pronounced discrete structure (analyzed in more detail
in Chapter 11) is found to be explained by the fact that the discrete spectrum of a quantum Hamiltonian is directly related to the observed spectrum via the quantum Heisenberg
equation of motion. Indeed, the observed spectral lines have frequencies corresponding to differences of eigenvalues of the Hamiltonian, divided by Planck's constant. This naturally explains the Rydberg–Ritz combination principle that had been established about 30
years before the birth of modern quantum theory. An excursion into the early history of
quantum mechanics paints a colorful picture of this exciting time when the modern world
view got its nearly definite shape. We then discuss general properties of the spectrum of
a system consisting of several particles, and how it reflects the bound state and scattering
structure of the multiparticle dynamics. Finally, we show how black body radiation, the
phenomenon whose explanation (by Planck) initiated the quantum era, is related to the
spectrum via elementary statistical mechanics.
Part II discusses statistical mechanics from an algebraic perspective, concentrating on thermal equilibrium but discussing basic things in a more general framework. A treatment
of equilibrium statistical mechanics and the kinematic part of nonequilibrium statistical
mechanics is given. From a single basic assumption (Definition 9.1.1) the full structure of
phenomenological thermodynamics and of statistical mechanics is derived, except for the
third law which requires an additional quantization assumption.
Chapter 7 gives a description of standard phenomenological equilibrium thermodynamics for
single-phase systems in the absence of chemical reactions and electromagnetic fields. Section
7.1 states the assumptions needed in an axiomatic way that allows an easy comparison with
the statistical mechanics approach discussed in later chapters, and derives the basic formulas
for the thermodynamic observables. Section 7.2 then discusses the three fundamental laws
of thermodynamics; their implications are discussed in the remainder of the chapter. In
particular, we derive the conventional formulas that express the thermodynamic observables
in terms of the Helmholtz and Gibbs potentials and the associated extremal principles.
Chapter 8 introduces the technical machinery of statistical mechanics, Gibbs states and the
partition function, in a uniform way common to classical mechanics and quantum mechanics. Section 8.1 introduces the algebra of quantities and their basic realizations in classical
and quantum mechanics. Section 8.2 defines Gibbs states, their partition functions, and
the related KMS condition, and illustrates the concepts by means of the canonical ensemble
and harmonic oscillators. The abstract properties of Gibbs states are studied in Section
8.3, using the Kubo product and the Gibbs–Bogoliubov inequality. These are the basis of approximation methods, starting with mean field theory, and we indicate the connections. However, since approximation methods are treated abundantly in common texts, elsewhere in the present book we discuss only exact results. The final Section 8.4 discusses limit resolutions for the values of quantities, and the associated uncertainty relations.
Chapter 9 rederives the laws of thermodynamics from statistical mechanics, thus putting the
phenomenological discussion of Chapter 7 on more basic foundations. Section 9.1 defines
thermal states and discusses their relevance for global, local, and microlocal equilibrium.
Section 9.2 deduces the existence of an equation of state and connects the results to the
phenomenological exposition in Section 7.1. Section 9.3 proves the first law of thermodynamics. In Section 9.4, we compare thermal states with arbitrary Gibbs states and deduce
the extremal principles of the second law. Section 9.5 shows that the third law is related
to a simple quantization condition for the entropy and relates it to the time-independent
Schrödinger equation.
In Chapter 10 we discuss in more detail the relation between mathematical models of physical systems and reality. Through a discussion of the meaning of uncertainty, statistics,
and probability, the abstract setting introduced in the previous chapters is given both a
Lie algebra so(3) of infinitesimal rotations, which is generated by the components of the
angular momentum. In particular, we obtain in Section 12.4 the Euler equations for a spinning rigid body from a Hamiltonian quadratic in the angular momentum. This example
shows how the quantities of a classical Poisson algebra are naturally interpreted as physical
observables. The angular momentum Poisson algebra is a simple instance of Lie–Poisson algebras, a class of commutative Poisson algebras canonically associated with any Lie algebra and constructed in Section 12.5. The Poisson bracket for the harmonic oscillator is another instance, arising in this way from the Heisenberg algebra. Thus Hamiltonian mechanics on Lie–Poisson algebras generalizes the classical anharmonic oscillator, and gives for so(3)
the dynamics of spinning rigid bodies. Sections on classical symplectic mechanics and its
application to the dynamics of molecules and an outlook to quantum field theory conclude
the chapter.
Chapter 13 introduces representations of Lie algebras and Lie groups in associative algebras and in Poisson algebras. A general physical system can be characterized in terms of
a Poisson representation of the kinematical Lie algebra of distinguished quantities of interest, a Hamiltonian, a distinguished Hermitian quantity in the Poisson algebra defining
the dynamics, and a state defining a particular system at a particular time. We also introduce Lie algebra and Lie group representations in associative algebras, which relate Lie
algebras and Lie groups of matrices or linear operators to abstract Lie algebras and Lie
groups. These linear representations turn out to be most important for understanding the
spectrum of quantum systems, as discussed later in Section 23.6. We then discuss unitary representations of the Poincaré group, the basis for relativistic quantum field theory. An overview of semisimple Lie algebras and their classification concludes the chapter.
Part IV discusses the dynamics of nonequilibrium phenomena, i.e., processes where the
expectation changes with time, in as far as no fields are involved. This part is still in a
preliminary, somewhat sketchy form. It also lacks references (both to historical origins and to the literature on the subject) and subject indexing, and must also be better connected with the earlier parts.
Chapter 14 discusses general Markov processes, i.e., abstract (classical or quantum) stochastic processes without memory. It will also contain the basic features of quantum dynamical
semigroups and the associated Lindblad dynamics.
Chapter 15 discusses stochastic differential equations and associated diffusion processes,
and their deterministic limits dissipative Hamiltonian systems.
Chapter 16 discusses collective processes described by a master equation, and their most
prominent application stirred chemical reactions.
Part V gives an introduction to differential geometry from an algebraic perspective.
Chapter 17 starts with an introduction to basic concepts of differential geometry. We define (smooth, infinitely often differentiable) manifolds and the associated algebra of scalar fields. Its derivations define vector fields, which form important examples of Lie
algebras. The exterior calculus on alternating forms is developed. Finally, Lie groups are interpreted as manifolds.
Chapter 18 discusses the construction of Poisson algebras related to manifolds, and associated Poisson manifolds, the arena for the most general classical dynamics. We show how
classical symplectic mechanics (in flat phase space) and constrained Hamiltonian mechanics
fit into the general abstract picture. We end the chapter with a discussion of the Lagrangian
approach to classical mechanics.
Chapter 19 is about Hamiltonian quantum mechanics. We discuss a classical symplectic framework for the Schrödinger equation. This is then generalized to a framework for quantum-classical dynamics, including important models such as the Born–Oppenheimer approximation for the quantum motion of molecules. A section on deformation quantization relates classical and quantum descriptions of a system, and the Wigner transform makes the connection quantitatively useful in the special case of a canonical system with finitely many degrees of freedom.
Part VI applies these concepts to the study of the dominant kinds of elementary motion
in a bound system, vibrations (described by oscillators, Poisson representations of the
Heisenberg group), rotations (described by a spinning top, Poisson representations of the
rotation group), and their interaction. On the quantum level, quantum oscillators are
always bosonic systems, while spinning systems may be bosonic or fermionic depending on
whether or not the spin is integral. The analysis of experimental spectra, concentrating on
the mathematical contents of the subject, concludes our discussion.
Chapter 20 is a study of harmonic oscillators (bosons, elementary vibrations), both from
the classical and the quantum point of view. We introduce raising and lowering operators
in the symplectic Poisson algebra, and show that the classical case is the limit ℏ → 0 of the quantum harmonic oscillator. The representation theory of the single-mode Heisenberg algebra is particularly simple since, by the Stone–von Neumann theorem, all unitary representations are equivalent. We find that the quantum spectrum of a harmonic oscillator is discrete and consists of the classical frequency (multiplied by ℏ) and its nonnegative integral
multiples (overtones, excited states). For discussing the representation where the harmonic
oscillator Hamiltonian is diagonal, we introduce Dirac's bra-ket notation, and deduce the
basic properties of the bosonic Fock spaces, first for a single harmonic oscillator and then
for a system of finitely many harmonic modes. We then introduce coherent states, an overcomplete basis representation in which not only the Heisenberg algebra, but the action of
the Heisenberg group is explicitly visible. The coherent state representation is particularly
relevant for the study of quantum optics, but we only indicate its connection to the modes
of the electromagnetic field.
Chapter 21 discusses spinning systems, again from the classical and the quantum perspective. Starting with the Lie–Poisson algebra for the rotation group and a Hamiltonian quadratic in the angular momentum, we obtain the Euler equations for the classical spinning top. The quantum version can be obtained by looking for canonical anticommutation relations, which naturally produce the Lie algebra of a spinning top. As for oscillators,
the canonical anticommutation relations have a unique irreducible unitary representation,
which corresponds to a spin 1/2 representation of the rotation group. The multimode
version gives rise to fermionic Fock spaces; in contrast to the bosonic case, these are finite-dimensional when the number of modes is finite. In particular, the single mode fermionic
Fock space is 2-dimensional. Many constructions for bosons and fermions only differ in
the signs of certain terms, such as commutators versus anticommutators. For example,
quadratic expressions in bosonic or fermionic Fock spaces form Lie algebras, which give natural representations of the universal covering groups of the groups SO(n) in the fermionic case and Sp(2n, R) in the bosonic case, the so-called spin groups and metaplectic groups, respectively. In fact, the analogies apart from sign lead to a common generalization
of bosonic and fermionic objects in form of super Lie algebras, which are, however, outside
the scope of the book. Apart from the Fock representation, the rotation group has a unique
irreducible unitary representation of each finite dimension. We derive these spinor representations by restriction of corresponding nonunitary representations of the general linear
group GL(2, C) on homogeneous polynomials in two variables, and find corresponding spin
coherent states.
Chapter 22 discusses highest weight representations, providing tools for classifying many
irreducible representations of interest. The basic ingredient is a triangular decomposition,
which exists for all finite-dimensional semisimple Lie algebras, but also in other cases of
interest such as the oscillator algebra, the Heisenberg algebra with the harmonic oscillator
Hamiltonian adjoined. We look at detail at 4-dimensional Lie algebras with a nontrivial
triangular decomposition (among them the oscillator algebra and so(3)), which behave
almost like the oscillator algebra. As a result, the analysis leading to Fock spaces generalizes
without problems, and we are able to classify all irreducible unitary representations of the
rotation group.
Chapter 23 applies the Lie theoretic structure to the analysis of quantum spectra. After a
short history of some aspects of spectroscopy, we look at the spectrum of bound systems of
particles. We show how to obtain from a measured spectrum the spectrum of the associated
Hamiltonian, and discuss qualitative results on vibrations (giving discrete spectra) and
chemical reactions (giving continuous spectra) that come from the consideration of simple
systems and the consideration of approximate symmetries. The latter are shown to result
in a clustering of spectral values. The structure of the clusters is determined by how the
irreducible representations of a dynamical Lie algebra split when the algebra is reduced
to a subalgebra of generating symmetries. The clustering can also occur in a hierarchical
fashion with fine splitting and hyperfine splitting, corresponding to a chain of subgroups.
As an example, we discuss the spectrum of the hydrogen atom.
The material presented should be complemented (in a later version of the book) by two
further parts, one covering quantum field theory, and the other on nonequilibrium statistical
mechanics, deriving space-time dependent thermodynamics from quantum field theory.
24
CHAPTER 1. MOTIVATION
Chapter 2
The simplest quantum system
The simplest quantum system is a 2-level system. It describes a number of different situations important in practice: Systems having only two energetically accessible energy
eigenstates (e.g., 2-level atoms), the spin of a single particle such as an electron or a silver atom, the two polarization degrees of freedom of light, the isospin symmetry between
proton and neutron, and the qubit, the smallest unit of quantum information theory.
The observable quantities of a 2-level system are 2 2 matrices. Matrices and their infinitedimensional generalizations linear operators are the bread and butter of quantum mechanics.
In mathematics and physics, symmetries are described in terms of Lie groups and Lie
algebras. An understanding of these concepts is fundamental to appreciate the unity of
modern physics.
This chapter introduces some basic concepts for matrix groups and their Lie algebras,
concentrating on the case of 2 2 matrices and their physical interpretation. In the next
chapter we introduce in an elementary way a number of other Lie groups and Lie algebras
that are important for physics, by means of concrete matrix representations, and relate them
to concrete physics. A general, more abstract treatment of Lie groups and Lie algebras is
given later in Chapter 11.
We assume that the reader already has a good command of matrix algebra (including complex numbers and eigenvalues) and knows basic properties of vector spaces, linear algebra,
limits, and power series (quickly reviewed in Section 2.1).
The beginning is just matrix calculus with some new terminology, but the subject soon
takes on a life of its own. . . . Readers who see matrix groups for the first time may want to
skip forward to the sections with more physical content to get a better idea of how matrix
group are used in physics, before reading the chapter in a linear order.
25
26
2.1
The early 20th century initiated two revolutions in physics that changed the nature of the
mathematical tools used to describe physics. Both revolutions gave matrices a prominent
place in understanding the new physics.
The transition from the old, Newtonian world view to the new, relativistic conception of
the world culminated in the realization that Nature is governed by symmetries that can
be descibed in terms of the Lorentz group, a group of 4 4-matrices that mathematicians
refer to by the symbolic name SO(1, 3) or SL(2, C). Since then, many other symmetry
groups have found uses in physics. Indeed, symmetry considerations and the associated
group theory have become a unifying theme and one of the most powerful tools in modern
physics.
Independently of the theory of relativity, an increasing number of quantum phenomena
defying an explanation in terms of classical physics were noticed, beginning in 1900, when
Max Planck [219] successfully used a quantization condition for his analysis of black
body radiation. After 25 years of groping in the dark to make classical sense of these
quantum phenomena, Werner Heisenberg [123] laid in 1925 the mathematical foundations
of modern quantum mechanics. The key was the insight that basic physical quantities such
as the components of position and momentum should be represented in terms of matrices (in
this case infinite arrays of numbers) rather than by single numbers as in classical mechanics.
Since then, matrices and linear operators, their infinite-dimensional generalizations, form
the cornerstone of quantum mechanics.
Therefore, the language of matrices is an indispensible foundation for a deeper understanding of modern physics. To fix the notation and to remind the reader, we begin by repeating
some definitions and properties of matrices and related concepts. Thorough treatments
(from complementary points of views) are given in Lax [172] and Horn & Johnson
[127].
K denotes a field, usually the field R of real numbers or the field C of complex numbers.
Then Kn denotes the space of column vectors x of length n with entries xk K (k =
1, . . . , n), and Kmn denotes the vector space of all m n matrices A with entries Ajk K
(j = 1, . . . , m; k = 1, . . . , n). We identify 1 1 matrices with the entry they contain, and
column vectors with matrices having a single column; thus K11 = K and Km1 = Km .
The zero matrix of any size (with all entries zero) is denoted by 0, or by 0n if it is
square and its size n n is emphasized. The identity matrix of any size is denoted by
1, or by 1n if its size n n is emphasized; multiples of the identity are identified with
the corresponding elements of K. The transpose of the matrix A Kmn is the matrix
AT Knm with entries (AT )jk := Akj ; the transpose of the column vector x Kn is the
row vector xT K1n . The matrix A is called symmetric if AT = A. The conjugate
transpose of the matrix A Cmn is the matrix A Cnm with entries (A )jk := Akj ,
where = denotes the complex conjugate of C. The matrix A is called Hermitian
if A = A.
The product of the m n matrix A and the n r matrix B is the m r matrix AB with
m
X
27
l=1
tive, A(B + C) = AB + AC, (A + B)C = AB + AC, but in general not commutative. For a
n
X
nn
Akk is the trace of A, and det A denotes
square matrix A K , the number tr A :=
k=1
and
|A| kAk || for Cn .
Matrix functions of a real or complex square matrix A are defined by power series with real
or complex coefficients. If a power series in x has convergence radius r then the series with
x replaced by a square matrix A converges for kAk < r. Any two matrix functions of the
same matrix A commute. If A has the eigenvalues k then the matrix function f (A) has
the eigenvalues f (k ). Identities for power series involving only a single variable x remain
valid when x is replaced by a square matrix; moreover, f (AT ) = f (A)T and f (A ) = f (A) .
In particular, the matrix exponential
X
1 k
A
e =
k!
k=0
A
is defined for all real or complex square matrices A, and satisfies for s, t C the relations
esA etA = e(s+t)A ,
(esA )t = estA .
(2.1)
On the other hand, eA+B is in general distinct from eA eB ; however, eA+B = ea eB if A and
B commute, i.e., AB = BA. We also note the formula
det eA = etr A ,
which follows from det eA =
ek = e
= etr A .
(2.2)
28
2.2
A basic fact about square real and complex matrices is that they can often be interpreted
in terms of motions in the underlying vector space on which they act. This gives them an
intuitive meaning that makes it easy to interpret even very abstract applications. Since
motions can be combined and reversed they carry a natural group structure that gives rise
to the concept of a matrix group. Different matrix groups characterize different forms of
permitted motions. Matrix groups are important examples of so-called Lie groups, defined
in Section 17.7. Indeed, by the theorem of Ado (Ado [2]), every finite-dimensional Lie
group is isomorphic to a matrix group.
A matrix group over K is a nonempty, closed set G of invertible matrices from Knn with
the property that the product of any two matrices from G and the inverse of a matrix from
G are in G. In particular, 1 G, and the limit of any convergent sequence of elements
Ul G (l = 1, 2, 3, . . .) is again in G. If K = R (or K = C) then G is called a real (or
complex) matrix group. The matrix group G is called abelian if all its elements commute,
i.e., if UV = V U for all U, V G.
Let G be a real or complex matrix group. A G-motion (by V G) is an arbitrarily often
differentiable map U : [0, 1] G such that U(0) = 1 (and U(1) = V ). If the group consists
of n n matrices, the G-motion moves a vector x0 K from x(0) = x0 to x(1) = V x0 ,
sweeping out a path x(t) = U(t)x0 for t [0, 1]. It is natural to interpret t as a time
coordinate in suitable units of time.
2.2.1 Examples. We shall meet a large number of examples in this and the next chapter,
progressing from the matrix groups easiest to define to the ones most useful in physics. We
begin by naming the smallest and largest matrix groups of a given size.
(i) The set Id(n) consisting only of the n n identity matrix is a matrix group, called a
trivial group.
(ii) The set L(n, K) = GL(n, K) of all invertible n n matrices with entries in K is a matrix
group, called a general linear group over K. In particular, L(1, K) is the multiplicative
group K := K \ {0} of the field K.
We now illustrate the geometric inplications of the definitions by means of the complex
plane and the motions corresponding to C and some of its subgroups.
The first subgroup of C of geometric interest is the group R
+ of positive real numbers.
An R+ -motion stretches or compresses all vectors from the origin to a nonzero complex
number by a time-varying factor. Such a stretching or compression is called a dilatation
or dilation; thus R
+ is the group of dilatations (with respect to the origin) of the complex
plane. It is well-known that the real multiplicative group and the real additive group R
of translations along the real axis are isomorphic: Vie the exponential function, one can
associate to every translation by f R a dilatation U = ef R
+ , and conversely, find
29
uniform stretch (or compression) is obtained by the exponential motion U(t) = etf when
f > 0 (resp. f < 0).
The subgroup L(1, R) = R of all nonzero real numbers contains dilations, the reflections
at zero given by multiplication with 1, and their products, given by arbitrary negative
real numbers.
Another important subgroup is the group consisting of all complex numbers with absolute
value one, forming the unit circle in the complex plane. This group is the smallest of the
unitary groups defined in Section 2.8, and is therefore generally denoted by U(1)1 . Using
the Euler relation ei = cos + i sin , one can again represent arbitrary group elements
U U(1) as exponential U = ei of a purely imaginary element f = i. As group
elements acting by multiplication on the complex plane, the elements of U(1) correspond
to rotations around zero. Indeed, U = ei is a rotation by the angle . In particular, a
uniform rotational motion progresses by equal angles in equal time intervals, hence is given
by the exponential motion U(t) = cos(t) + i sin(t) = eit .
Since rotations by integral multiples of 2 have no net effect, the representation U = ei
does not define uniquely; hence imaginary elements f differing by a multiple of 2i give
the same group element U = ef . Thus while the two groups behave the same locally, there
is a global topological difference. This also shows in the fact that, as a manifold, U(1) is
compact, while R
+ is noncompact.
2.3
The matrix
d
U (0) := U(t)
t=0
dt
is called the infinitesimal motion of the G-motion U : [0, 1] G. Thus f is the infinitesimal motion of U : [0, 1] G iff, for small t,
U(t) = 1 + tf + O(t2).
Here the Landau symbol O(t2 ) denotes an expression in t whose norm is bounded for small
t by a constant multiple of t2 (which may be different in each occurrence of the Landau
symbol). The Lie algebra of (or associated with) the real or complex matrix group G is
the set log G of all infinitesimal motions of G-motions.
The following fundamental theorem gives basic properties of the Lie algebra log G and
describes the effect that a coordinate transformation in form of a G-motion has on Lie
algebra elements.
2.3.1 Theorem.
(i) The Lie algebra L := log G of any real or complex matrix group G is a vector space
1
The reader should not confuse the occurrences of U (1) as group with those of U (1) as the final group
element of a motion U : [0, 1] G.
30
AdU f := Uf U 1
for f L
(2.3)
(2.4)
The above property (i) motivates to define in general a matrix Lie algebra over K to
be a subspace L of Knn closed under commutation. The matrix Lie algebra L is called
abelian if all its elements commute, i.e., if [f, g] = 0 for all f, g L. A subset of a matrix
Lie algebra L closed under commutation is again a matrix Lie algebra, and is called a Lie
subalgebra of L.
2.3.2 Examples. We shall meet a large number of examples in this and the next chapter,
progressing from the matrix Lie algebras easiest to define to the ones most useful in physics.
We begin by naming the smallest and largest matrix Lie algebras of a given size.
(i) The set id(n) consisting only of the n n zero matrix is a matrix Lie algebra, called a
trivial Lie algebra. Clearly, if K is the real or complex field, id(n) is the Lie algebra of
the trivial group Id(n).
(ii) l(n, K) = gl(n, K) = Knn is a matrix Lie algebra, called a general linear Lie algebra
over K. If K is the real or complex field, l(n, K) is the Lie algebra of the general linear
group L(n, K) since for every f l(n, K), the mapping U defined by U(t) = etf is an
L(n, K)-motion with infinitesimal motion f .
31
We are mainly interested in matrix groups whose associated Lie algebra has interesting
properties. However, there are important matrix groups with a trivial infinitesimal structure. A matrix group is called discrete if its Lie algebra is trivial. Discrete matrix groups
that play an important role for the description of symmetris of molecules and crystals. For
example, a permutation group is a group G of bijective mappings of a finite set X. If the
members of X are the atoms of a molecule with given chemical structure, its symmetry
group G consits of the permutations that preserve the chemical nature of the atoms and
the chemical bonds between them; for example, the benzene ring has a dihedral symmetry
group with 12 elements. Assocoated with each permutation group is a finite group of n n
permutation matrices U Rnn , where n is the size of X, and Ujk = 1 iff, in a fixed
ordering of the elements of X, the jth element is permuted to the kth element, Ujk = 0
otherwise. The representation theory of these discrete matrix groups gives important information about the chemical properties of symmetric molecules. In this book, we shall
meet discrete groups only in passing; for a deeper treatment we refer to Cornwell [68],
Cotton [69], Kim [150], or Weyl [286].
2.4
We now generalize the construction of uniform rotations in U(1) to arbitrary real or complex
matrix groups.
Let K = R or K = C, and f Knn . Because of (2.1), the set of exp(tf ) = etf with t R
is a matrix group, called the one-parameter group with infinitesimal generator f .
The infinitesimal generator is determined only up to a nonzero scalar multiple. Because of
the property (2.1) and the analogy to uniform rotations in the complex plane, G-motions
of the form U(t) = etf are called uniform motions. Since U(t) = 1 + tf + O(t2 ), the
infinitesimal generator of a uniform motion belongs to the Lie algebra log G. In view of
d tf
e = f etf = etf f
dt
and the unique solvability of the intitial-value problems for ordinary differential equations
in finite-dimensional spaces, uniform G-motions are characterized by the property
U(1) = 0,
d
U(t) = f U(t)
dt
32
(2.5)
for sufficiently large k. Since the Taylor expansions of U(t) and etf agree up to first order,
we have U(t) etf = O(t2 ); hence there is a constant C > 0 such that
kUk Ek k = kU(1/k) ef /k k C/k 2
for sufficiently large k. Now
kUkk
e k =
kUkk
k
X
j=1
k
X
j=1
Ekk k
k
X
j1
kj
=
Ek (Uk Ek )Ek
j=1
2 (kj)c/k
C/k e
k
X
j=1
33
which tends to zero as k . This establishes the limit (2.5) and proves that ef G.
Since every ef with f L is part of a uniform motion, it is in G0 . hence the group generated
by these exponentials is contained in G0 .
(ii)
The theorem implies that connected matrix groups are characterized completely by their
Lie algebras. Since Lie algebras are vector spaces, their structure can be studied with the
help of linear algebra, while most matrix groups are intrinsically nonlinear. This explains
the importance of Lie algebras in the study of connected groups.
2.5
The oriented volume is preserved iff the determinant is one. The unoriented volume is
preserved iff the determinant has absolute value one.
If G is a matrix group then the set SG consisting of all elements in G with determinant one
is a matrix group. Indeed, if U, V SG then det(UV ) = det U det V = 1 and det U 1 =
(det U)1 = 1, so that UV, U 1 SG. In particular, the special linear group SL(n, K)
consisting of all n n matrices with entries in K and determinant one is a matrix group.
The center of SL(n, C) is the group Zn = { C | n = 1} of nth roots of unity, and
the quotients P SL(n, C) = SL(n, C)/Zn form a family of simple Lie groups. The group
P SL(2, C) = SL(2, C)/Z2 is isomorphic to the restricted Lorentz group defined in Section
3.13.
If L is a matrix Lie algebra then the set sL consisting of all elements in L with zero trace
is a matrix Lie algebra. Indeed, if f, g sL then tr[f, g] = tr f g tr gf = 0, so that
[f, g] sL. In particular, the special linear Lie algebra sl(n, K) consisting of all n n
matrices with entries in K and zero trace is a matrix Lie algebra. Since
det(1 + tf + O(t2 )) = 1 + tr tf + O(t)2 ,
the trace of infinitesimal generators of SG vanishes; conversely, the property (2.2) implies
that the exponentials of elements of sL have determinant one, hence belong to SG. Therefore sL is the Lie algebra corresponding to the matrix group SG.
We consider the algebraic properties of the special linear group SL(2, C) and its Lie algebra
sl(2, C) in some detail, since the group SL(2, C), its subgroups, and the Lie algebra sl(2, C)
and its Lie subalgebras play a very important role in physics. SL(2, C) and/or sl(2, C) are
implicitly present even in applications not mentioning Lie groups or Lie algebras explicitly:
In special relativity, SL(2, C) appears because of its relation to the Lorentz group. The
Dirac equation for electrons and positrons (see Section 5.5) uses properties of Pauli matrices
(or their cousins, the Dirac matrices), whose relation to SL(2, R) is now established.
34
3-vectors and 4-vectors. As traditional in physics, we usually use fat letters to write
column vectors a with three components a1 , a2 , a3 . Depending on the context, these three
components may be real or complex numbers, matrices, linear operators, or elements from
an arbitrary associative algebra A. We write A3 for the set of all vectors with three components from A. The inner product of a, b A3 is the element
a b := aT b = a1 b1 + a2 b2 + a3 b3 A;
clearly a b = b a. We write
a2 := a a = a21 + a22 + a23 .
The length of a vector a C3 is
|a| :=
so a2 = |a|2 if a R3 .
a a =
We write A1,3 for the set of vectors p with four components p0 , p1 , p2 , p3 A, arranged as
p0
, p0 A, p A3 .
p=
p
Using the traditional terminology from relativity theory, we call such vectors p 4-vectors,
and call p0 the time part and p the space part of p. The Minkowski inner product
of p, q A1,3 is the element
p q := p0 q0 p q = p0 q0 p1 q1 p2 q2 p3 q3 ,
(2.6)
and
Note that p2 may be negative!
p2 := p p = p20 p2 .
(2.7)
p3
p1 ip2
p1 + ip2
p3
(2.8)
This matrix has zero trace, hence belongs to sl(2, C), and it is easily seen that every element
of sl(2, C) can be written uniquely in this form. Similarly, each complex 2 2-matrix can
35
be written as a complex linear combination of all four Pauli matrices. Defining the Pauli
4-vectors
0
l(2, C)1,3 ,
:=
p0
C1,3 .
for some p =
p
(2.9)
2.6
for k, l = 1, 2, 3 .
The vector product. The structure of the Lie algebra sl(2, C) is intimately tied up with
the vector product in R3 .
The vector product of a, b A3 is the vector
a2 b3 a3 b2
a b := a3 b1 a1 b3 A3 ,
a1 b2 a2 b1
a b = b a,
and the determinant formula for the bfitriple product
a1 b1 c1
(a b) c = a (b c) = det(a, b, c) := det a2 b2 c2 ,
a3 b3 c3
36
The most common case is that all three components are real or complex vectors. In this
case, the following rules, which will be used in the following without comment, hold.
a (b c) = b (c a) = c (a b),
(a b) (c d) = (a c)(b d) (b c)(a d),
a (b c) = b(a c) c(a b),
(a b) c = b(a c) a(b c),
a (b c) = (a b) c + b (a c),
(a c) (b c) = det(a, b, c)c.
Indeed, each property follows by a simple computation either directly or from the previous
property.
A simple calculation with (2.8) verifies the product formula
(p )(p ) = (p q)0 + i(p q) ,
(2.10)
for a, b C3 .
(2.11)
37
(2.11) shows that the vector space Q of complex 2 2 matrices of the form a0 + ia
(a0 R, a R3 ) is an algebra. Indeed, if we embed C into R22 using the imaginary unit
0 1
i=
(which satisfies i2 = 1), we can write the quaternions (3.41) as
1 0
i 0
i=
= i3 ,
0 i
j=
0 1
= i2 ,
1 0
0 i
k=
= i1 ,
i 0
which exhibits the isomorphism. Q can also be desribed as the set of complex 22 matrices
aT Qb =
2.6.1 Theorem. The set Q of quaternions is a skew field, i.e., an associative algebra in
which every nonzero element has an inverse. We have
U(r0 , r) + U(s0 , s) = U(r0 + s0 , r + s),
(2.13)
(2.14)
(2.15)
(2.16)
(2.17)
Proof. (2.13)(2.15) are trivial, and (2.16) follows by direct computation, using Specializing
(2.16) to s0 = r0 , s = r gives
U(r0 , r)U(r0 , r) = U(r02 + r2 , 0) = (r02 + r2 )1,
(2.18)
which implies (2.17). Therefore Q is a vector space closed under multiplication, and every
nonzero element in Q has an inverse. Since matrix multiplication is associative, Q is a skew
field.
In the standard treatment, quaternions are treated like complex numbers, as objects of the
form
q(r0 , r) = r0 1 + r1 i + r2 j + r3 k
with special unit quaternions 1, i, j, k. The correspondence is given by the identification
i = 1 , j = 2 , k = 3
in terms of which q(r0 , r) = U(r0 , r).
(2.19)
38
2.7
In the Hamiltonian form, one takes Hermitian matrices and uses the Lie product i/hbar[f,g],
to match things with quantum mechanical usage. Expressed in terms of commutators, as
usual, the structure constants (e.g., for su(2)-so(3)) become purely imaginary, although the
Lie algebra is real.
In the applications, distinguished generators typically are Hermitian and represent important real-valued observables. Therefore they tend to replace the matrix A by iA. This
is one of the reasons why the structure constants for real algebras appear in the physics
literature with an i when written in terms of commutators.
Every one-parameter group is isomorphic to either L(1, R) or U(1). Two connected matrix
groups are called locally isomorphic if their associated Lie algebras are isomorphic. For
example, L(1, R) and U(1) are locally isomorphic but not isomorphic.
Generators, commutation relations, and structure constants
Introduce the vector L of generators for SL(2, C) and its commutation relations.
The components of the vector product satisfy
(a b)j = aj+1bj1 aj1 bj+1
jkl
0
otherwise.
(2.20)
3
X
klm m
for k, l = 1, 2, 3 .
m=1
The summation above contains only one nonzero term. For example, [1 , 2 ] = 2i3 , and
all other Lie products can be found using a cyclic permutation.
2.8
39
40
2.9
The smallest quantum systems have two levels only and are called qubits; they play an
fundamental role in quantum information theory and quantum computing; cf. Nielsen &
Chuang [207].
We have
(p ) = p ,
so that p is Hermitian if and only if the components of p are real, and antihermitian if
and only if the components of p are purely imaginary. Therefore
u(2) = {ip | p R1,3 } ,
and letting p take complex values we get the whole of l(2, C).
Similarly, the matrices i0 , i1 , i2 , i3 form a basis of the Lie algebra u(2), considered as
a real vector space; indeed, any Hermitian 2 2 matrix can be written in a unique way as
p + for some p R1,3 .
We obtain su(2) for p real and p0 = 0.
The Lie algebras u(2) and su(2). The matrices i1 , i2 , i3 form a basis for the Lie
algebra su(2), considered as a real vector space; indeed, any traceless and Hermitian 2 2
matrix can be written in a unique way as p for some p R3 . Clearly, i0 spans the
center of the Lie algebra u(2). As a consequence, we can write u(2)
= R su(2).
The Lie group U(2). In the case n = 2 it is a nice exercise to show that each special
unitary matrix U can be written as
x y
U=
, x, y C , |x|2 + |y|2 = 1 .
y x
Writing x = a + ib and y = c + id for a, b, c, d R we see that a2 + b2 + c2 + d2 = 1. This
implies that there is a one-to-one correspondence between SU(2) and the set of points on
the unit sphere S 3 in R4 . Thus SU(2) is as a manifold homeomorphic to S 3 . (The manifold
point of view of matrix groups may be used to give a definition of abstract Lie groups; see
.)
We now show that SU(2) is a real manifold that is isomorphic to the three sphere S 3 . We
do this by finding an explicit parametrization of SU(3) in terms of two complex numbers
x and y satisfying |x|2 + |y|2 = 1, which defines the three-sphere.
We write an element g SU(2) as
g=
a b
c d
41
Writing out the equation g g = 1 and det g = 1 one finds the following equations:
|a|2 + |c|2 = 1 ,
ab + cd = 0 ,
|b|2 + |d|2 = 1 ,
ad bc = 1 .
We first assume b = 0 and find then that ad = 1 and cd = 0, implying that c = 0 and U
is diagonal with a = d. Next we suppose b 6= 0 and use a = cd/b to deduce that |b| = |c|
and |a| = |d|; we thus have b 6= 0 c 6= 0. We also see that we can use the ansatz
a = ei cos ,
c = ei sin ,
b = ei sin ,
d = ei cos .
2.10
Qubits are closely related to the polarization of light. Since polarization phenomena show
the basic principles of quantum mechanics in a clean and transparent way, we use polarization to derive the basic equations of quantum mechanics, the Liouville equation and the
Schrodinger equation, thus giving them an easily understandable meaning.
Polarized light was discovered by Christiaan Huygens [132] in 1690. The transformation behavior of beams of completely polarized light was first described by Etienne-Louis
Malus[180] in 1809 (who coined the name polarization), and that of partially polarized
light by George Stokes [262] in 1852. The transverse nature of polarization was discovered
by Augustin Fresnel [93] in 1866, and the description in terms of (what is now called)
the Bloch sphere by Henri Poincare [221] in 1892.
It is instructive to read Stokes 1852 paper [262] in the light of modern quantum mechanics.
One finds there all quantum phenomena for modern qubits, explained in classical terms!
Splitting polarized monochromatic beams into two beams with different, but orthogonal
polarization corresponds to writing a wave functions as superposition of preferred basis
vectors. Mixtures are defined (in Stokes paragraph 9) as arising from groups of independent polarized streams and give rise to partially polarized beams. What is now called
the polarization matrix is represented by Stokes with four real parameters comprising, in
todays terms, the Stokes vector, or, equivlently, the polarization matrix. Stokes asserts (in
his paragraph 16) the impossibility of recovering from a mixture of several distinct pure
42
states any information about these states beyond what is encoded in the Stokes vector (i.e.,
the polarization matrix). The latter can be linearly decomposed in many essentially distinct
ways into a sum of pure states, but all these decompositions are optically indistinguishable.
If one interprets the normalized polarization matrix as density matrix of a qubit, a polarized
monochromatic beam of classical light behaves exactly like a modern qubit, which shares
all the features mentioned. Polarized light is therefore the simplest quantum phenomenon,
and the only one that was understood quantitatively already before the birth of quantum
mechanics in 1900.
Experiments with polarization filters are easy to perform; probably they are already known
from school. Since polarization is a macroscopic phenomenon, the counterintuitive features
of quantum mechanics irritating the untrained intuition are still absent. But polarization
was recognized as a quantum phenomenon only when quantum mechanics was already
fully developed. Norbert Wiener [291] 1930 exhibited a description in terms of the Pauli
matrices and wrote: It is the conviction of the author that this analogy between classical
optics and quantum mechanics is not merely an accident, but is due to a deep-lying
connection between the two theories. This is indeed the case; see, e.g., Neumaier [206].
The mathematics of polarization. A beam of polarized light of fixed frequency is
characterized by a state, described equivalently by the Stokes vector, a real 4-dimensional
vector
S0
T
R1,3
S = (S0 , S1 , S2 , S3 ) =
S
with
S0 |S|,
(2.21)
or by a polarization matrix (also called coherence matrix) a complex positive semidefinite Hermitian 2 2 matrix C. These are related by
1 S0 + S3 S1 + iS2
1
1
C=
= S = (S0 0 + S )
2 S1 iS2 S0 S3
2
2
in terms of the Pauli matrices (2.7). (In the literature, the signs and order of the components
may differ.)
The trace tr C = S0 of the polarization matrix is the intensity of the beam. If S0 = 0,
the beam is dark and contains no light. Otherwise, one may normalize the intensity by
dividing the polarization matrix by S0 , resulting in a density matrix of trace one,
1
1
= C/ tr C = r , r = S/S0 =
;
2
r
it contains the intensity-independent information about the beam. The intensity-independent
quotient
d := |r| = |S|/S0 [0, 1]
is called the degree of polarization, and allows the determinant of the polarization matrix
to be written as det C = 41 (S02 S2 ) = 14 S02 (1 d2 ).
43
The extremal case d = 0 characterizes unpolarized light, which therefore has a polarization matrix C = 21 S0 0 . At the other extreme, a fully polarized beam (a pure polarization
state) has d = 1; it corresponds to a so-called pure polarization state. Since d = 1 characterizes singular polarization matrices, a pure polarization state can be written in the
form C = with a state vector determined up to a phase. In this case, the intensity ofthe beam is S0 = ||2 = . In particular, a normalized state vector has norm
|| = = 1.
Beam transformations. Optical instruments may transform beams by letting them pass
through a filter. A linear, non-mixing (not depolarizing) filter is characterized by a complex
2 2 Jones matrix U. (In the literature, many authors call U the Jones matrix.) The
instrument transforms an in-going beam in the state C into an out-going beam in the state
C = UCU . If the instrument is lossless, the intensities of the in-going and the out-going
beam are identical. This is the case if and only if the Jones matrix U is unitary.
A linear, mixing (depolarizing) filter transforms C instead into a sum of several terms of
the form UCU . It is therefore described by a completely positive linear map on the
space of 2 2 matrices, or a corresponding real 4 4 matrix acting linearly on the Stokes
vector, called the M
uller matrix. For definitions and details, see, e.g., Aiello et al. [3]
and Benatti & Floreanini [30].
The Liouville equation. Passage through inhomogeneous media can be modelled by
means of slices consisting of many very thin filters with Jones matrices close to the identity.
The Schr
odinger equation. If t is the time needed to pass through one slice and (t)
denotes the pure state at time t then (t + t) = U(t, t)(t), where U(t, ) is a L(2, C)motion parameterized by the transition time t. We therefore introduce its infinitesimal
generator
,
H(t) := ihU(t, t)/t)
t=0
it
H(t) + O(t2 )
h
(2.22)
In the lossless case, U(t) = U(t, t) is unitary, which implies that H(t) is Hermitian.
A linear, non-mixing (not depolarizing) instrument with Jones matrix U transforms an
in-going beam in the pure state with state vector into an out-going beam in a pure state
with state vector = U. (2.22) implies
ih
ih
ih
d
(t)
((t + t) (t)) =
(U(t) 1)(t).
dt
t
t
d
(t) = H(t)(t).
dt
44
2.11
In this section, we discuss the spinor representations of L(2, C), see also Sternberg
[260]. By restricting to the unitary matrices we get unitary representations of the group
SU(2). As we shall see later in Section 22.3, these representations comprise all irreducible
unitary representations of SU(2).
For 0 s 12 Z (the factor 21 appears here for historical reasons only) we denote with Ps
the complex vector space of all homogeneous polynomials of degree 2s in z = (z1 , z2 ) C2 .
The space Ps has dimension 2s + 1 since the monomials z1k z22sk (k = 0, 1, . . . , 2s) form a
basis of Ps . The group L(2, C) of invertible complex 22 matrices acts on C2 in the natural
way. On Ps we get a representation of L(2, C) by means of the formula
(U(g))(z) := (g 1 z) for g L(2, C) .
(2.23)
for f l(2, C) .
(2.24)
where the action of H is given by (2.24). The dynamics is described by the Schrodinger
equation
ih = H .
(2.26)
The unitary case. By restricting in (2.9) to real-valued p, we represent u(2).
The
resulting representation turns out to be unitary. To give Ps the appropriate Hilbert space
structure, we define on the unit disk
D = {z C2 | z z 1}
of C2 the measure Dz by
Z
Dzf (z , z) =
dz 2 f (z , z) .
D
(2.27)
= kl mn
45
sk tm ds dt
0s+t1
2 k!m!
=
kl mn ,
(k + m + 2)!
where in the last step we used
Z
xa (1 x)b dx =
a!b!
.
(a + b + 1)!
(2.28)
where
s = 2 /(2s + 1)(2s + 2).
(2.29)
if we use for g SU(2) the substitution z = gz , then the integral in (2.28) transforms into
Uy) in place of (x, y). Thus it is invariant under SU(2)
the same integral with (x , y ) = (Ux,
T
and depends therefore only on x y. Indeed, we can always rotate x such that x = (x1 , 0)
m 2sm
and then clearly the right-hand side is a polynomial with terms x2s
, which is only
1 y1 y2
invariant under the diagonal U(1)-subgroup if m = 2s. Hence the right-hand side of (2.28)
is fixed up to the constant s , which is found by looking at the special case x = y = 10 :
s =
Dz
dz 2 |z1 |4s =
2 (2s)!
2
=
.
(2s + 2)!
(2s + 1)(2s + 2)
(2.30)
1
= h|i := s
Dz (z)(z) .
(2.31)
(s)
We introduce the basis vectors k = z1k z22sk for Ps , in terms of which the inner product
reads
1
2s
(s) (s)
kl .
hk |l i =
k
For x C2 , we define the coherent state |x, si Ps to be the functions
|x, si(z) := (x z)2s = (
x1 z1 + x2 z2 )2s .
(2.32)
for x, y C2 .
(2.33)
46
In particular, the coherent state |x, si is normalized to norm 1 if and only if x has norm 1.
Directly from (2.32), we see that
|x, si = 2s |x, si,
|0, si = 0,
(2.34)
so that it suffices in principle to look at coherent states with x of norm 1. In particular, choosing the parametrization x = w1 gives the traditional spin coherent states of
Radcliffe [225]. For coherent states, (2.23) implies
U(g)|x, si = |g 1 x, si for g SL(2, C) ,
(2.35)
Thus coherent states define a representation of L(2, C), the spinor representation of
L(2, C). We verify that we correctly have U(g)U(h)|x, si = |g 1 h1 x, si = |(gh)1x, si =
U(gh)|x, si. One sees easily that only the subgroup SU(2) is represented unitarily and we
have
U(g)|x, si = |gx, si for g SU(2) .
Note that the Schrodinger equation (2.26) implies that (t) = U(t)(0), where U(t) =
eitH/h . Since the Hamiltonian (2.25) is an element of su(2), we have U(t) SU(2),
and equation (2.35) implies the temporal stability of coherent states. This means that
if the initial state vector is a coherent state, then under the time evolution determined
by H the state vector remains for all times a coherent state. Since the norm of the wave
function is invariant under the dynamics, too, one can work with normalized coherent states
throughout.
In general, let H be a Hilbert space of functions on some space . If we can write function
evaluation as inner product, i.e., if for every x there is an element gx H such that
f (x) = hgx |f i for some , then we say that H has the reproducing kernel property.
We show that the space Ps has the reproducing kernel property. Expanding (x z)2s using
the binomial series we obtain
2s
X
2s (s)
(s)
|x, si =
m (x) m
,
m
m=0
from which it follows
(s)
m
s1
(2s)
Dx m
(x)|x, si ,
(2.36)
for all coherent states |y, si and since these span Ps , we have for all Ps
hx, s|i = (x) ,
for all Ps ,
which is the reproducing kernel property. This implies that we can reproduce elements as
follows. For all Ps we have
Z
Z
1
1
Dz h|z, sihz, s|i ,
(2.37)
Dz (z)(z) = s
h|i = s
47
(2.38)
(2.39)
These properties characterize coherent states in general. For an extension of the coherent
state concept to semisimple Lie groups see Perelomov [217] and Zhang et al. [300].
Coherent states for Heisenberg groups are called Glauber coherent states, and are basic
for modern quantum optics. See Section and the book by Mandel & Wolf [181].
2.12
Entanglement
The SternGerlach Experiment. The SternGerlach experiment is one of the most prominent and best known experiments in the history of quantum mechanics. The experiment
provided a first experimental verification of the discrete nature of quantum mechanics. At
the time of the experiment, which took place in 1922, the phenomenon of spin was not
well-understood and, from the point of view of our present knowledge, a wrong model was
used. Fortunately, the outcome of the experiment was in concordance with this model and
the discrete nature of quantum mechanics was accepted as a fact.
When later a better model was invented, the theory and the SternGerlach experiment
showed discrepancies. It was perhaps partially because of these discrepancies that Goudsmit
and Uhlenbeck postulated that the electron had half-integer spin: with the half-integer spin
of the electron the experiment of Stern and Gerlach was again in agreement with the theory
2
.
The setup of the SternGerlach experiment is quite easy. To understand the physics behind
the experiment, one only has to know that the energy of a small object with magnetic
moment in an magnetic field B is given by the equation
U = B .
The energy is measured relative to the energy far away towards infinity where there is no
magnetic field. Note that the magnetic moment is a vector. Hence, classically it lives in a
2
As more often in the history of physics, it was a coincidence that determined the acceptance of a
theory. Another such example was the measurement of the deflection of rays of the stars that can be seen
close to the sun during a solar eclipse done by Eddington in 1919, thereby verifying the general theory
of relativity of Einstein. The actual deflections are too small to be measured and hence the deflections
found by Eddington have to be ascribed to noise; luckily the noise gave a pattern in agreement with the
theoretical results.
48
Bz
.
z
Thus, classically, the beam will be smeared out; the particles with pointing in the +zdirection will be deflected upwards, those with pointing in the z-direction will be deflected downwards. Classically all positions of are possible and distributed in a Gaussian
way, so that the screen will show a bounded strip, most intense in the center and fading out
towards the ends. However, the result of the SternGerlach experiment showed very clearly
two blobs, centered at the positions corresponding to pointing up and down. Both blobs
had the same intensity.
Assume that we have a bunch of particles (for example electrons), then they all have the
same value of l, but the z-component of the magnetic moment might be different. Since
the z-value can take 2l + 1 values, the beam will split in 2l + 1 different parts.
In their experiment, Stern and Gerlach used silver atoms, of which we now know that
there is one electron in the outmost orbit and it is this electron which gives rise to the
magnetic moment. The spin of an electron is however not in an SO(3)-representation,
but in an SU(2)-representation and this correspondsn to l = 1/2. This representation is
two-dimensional and thus the general state of an electron can be described as
= a|+i + b|i ,
|a|2 + |b|2 = 1 ,
where |+i is the state with pointing in the +z-direction and |i is the state with pointing
in the z-direction. When one measures the z-component of the magnetic moment, one
finds with probability |a|2 the value +1/2 and with probability |b|2 the value 1/2. In a
sample of heated silver atoms, there is no preferred direction for and thus in the end, the
possibility that the value of the magnetic moment of a single silver atom is +1/2 is more
or less 1/2. This explains why the two blobs in the SternGerlach experiment are equally
bright.
2.13
Photons on demand
In this section we consider a quantum model for photons on demand, and its realization
through laser-induced emission by a single calcium ion in a cavity. The exposition is based
on Keller et al. [149].
In their paper, Keller et al. discuss in detail a model based on the simplified level scheme
given in Figure 2.13 which ignores the fine structure of the Ca+ states.
49
Figure 2.1: Experimental set-up for the generation of single-photon pulses with an ioncavity system. The drawing shows a cross-section through the trap, perpendicular to the
trap axis. (Figure 16 from [149])
Figure 2.2: Scheme of the eight-level model on which we base our numerical calculations.
Pump and cavity field are assumed to be linearly polarized in the direction of the quantization axis. For clarity, the four possible spontaneous decay transitions to the ground state
are represented by a single arrow. (Figure 5 from [149])
reexcite ion into excited state with a reset laser at 866nm, until it falls back into the
ground state
ground state g, metastable state m, excited state x of Ca+
photons cavity , pump , reset
electron e bound in detector
r
la
se
r
la
se
39
7n
m
86
6n
m
50
Ca+
resonator
(0.1mm)
mirror
semipermeable
mirror
detector
Active processes
a:
b:
g + pump x
(excitation)
c:
x m + cavity
d:
cavity + e
(photodetection)
e:
m + reset x
(ion reset)
51
The Hilbert space on which the master equation is based is the tensor product of a single
mode Fock space for the cavity photon and a 3-mode space for the Ca+ ion.
An orthonormal basis of the space is given by the kets |n, ki, where n = 0, 1, . . . is the
photon occupation number and k {g, x, m} labels the ion level.
The structure of the Hamiltonian and the dissipation terms in the master equation is such
that if the system is started in the ground state |0, gi, it evolves to a mixed state in which
the photon number is never larger than 1.
Thus multiphoton states do not contribute at all, and one can truncate the cavity photon
Fock space to the two modes with occupation number n = 0, 1, without changing the
essence of the model.
52
Of interest for the photon production is the projection of the density matrix to the photon
space, obtained by tracing over the ion degrees of freedom. This results in an effective
time-dependent photon density matrix
photon (t) =
00 (t) 01 (t)
10 (t) 11 (t)
where 11 (t) = p(t) is the probability density of finding a photon, 00 (t) is the probability
density of finding no photon, and 01 (t) = 10 (t) measures the amount of entanglement
between the 1-photon state and the vacuum state.
Semidefiniteness of the state requires |01 |
p
p(1 p).
Assuming for simplicity that we have approximate equality, photon is essentially rank one,
photon (t) (t)(t) , (t) = s(t)|0i + c(t)|1i, where s(t) and c(t) are functions with
|s(t)|2 + |c(t)|2 = 1, determined only up to a time-dependent phase factor. In particular,
we may take c(t) to be real and nonnegative.
Thus, in the approximation considered, the quantum electromagnetic field is in a superposition of
pthe vacuum mode and the single-photon field mode, with a 1-photon amplitude
c(t) = p(t) that varies with time and encodes the probability density p(t) of detecting a
photon particle.
In the actual experiments, p(t) has a bell-shaped form, and the total photon detection
probability, referred to as the efficiency, is significant, but smaller than 1.
Discarding the vacuum contribution corresponding to the dark, unexcited cavity, and giving up the interaction picture by inserting the field description |1it = eit 0 (x) of the
photon
mode, the (now time-dependent) 1-photon state takes the form
A1photon (x, t) =
p
it
p(t)e
0 (x).
(At this stage one notices a minor discrepancy with the field description, since the 1-photon
state is no longer an exact solution of the Maxwell equations. To correct this deviation from
Maxwells equations, one has to work with quasi-monochromatic modes and the paraxial
approximation.)
We now add the reset mechanism to get a continuous pulsed photon stream. Thus we
consider a periodic sequence of excitation-reset cycles of the ion in the cavity. As before,
we find that the electromagnetic field corresponding to the sequence of pulses is a single,
periodically excited 1-photon mode of the electromagnetic field. Thus what appears at
the photodetector as a sequence of photon particles arriving is from the perspective of
quantum electrodynamics the manifestation of a single nonstationary, pulsed 1-photon
state of the electromagnetic field!
2.14
53
[L2 , L3 ] = 2iL1 ,
(L0 L ) = L0 L
[L3 , L1 ] = 2iL2 ,
for = 1 : 3.
(2.40)
(2.41)
(In the infinite-dimensional case, we also require that L0 and (2.41) are self-adjoint.) We
say the Pauli set has spin j if
L21 + L22 + L23 = 4j(j + 1).
(2.42)
for , Cs
(2.43)
L = L
for = 1, 2, 3.
(2.44)
for a C3 .
2.14.1 Proposition.
(i) Any Pauli set satisfies
[a L, b L] = 2i(a b) L for a, b C3 ,
(2.45)
of so(3) by
and hence defines a representation X
X(a)
:= a L/2i,
(2.46)
(2.47)
(iv) For any Pauli set and an arbitrary rotation Q = (e1 , e2 , e3 ) SO(3), the Lk = ek L
form together with L0 another Pauli set.
Proof. (i) (2.45) follows from
X
X
[a L, b L] =
a b [L , L ] =
(a b b a )[L , L ] = 2i(a b) L,
,=1:3
<
54
(ii) (2.45) implies that L is closed under formation of commutators. Hence L is a Lie
algebra. The isomorphism follows from Example (i) below.
(iii-iv)
In contrast to (iv), the spin equation (2.42) is not preserved under general rotations.
2.14.2 Examples. The following examples all have L0 = 1.
(i) On C3 , a Pauli set of spin 1 is given by a L = 2iX(a).
(x)
,
x
0 1
L1 =
,
1 0
0 i
L2 =
,
i 0
1 0
L3 =
.
0 1
(2.48)
(iii) and (iv) from Example 2.14.2 are the first two cases of an infinite family of Pauli sets
with arbitrary nonnegative half-integral spin:
2.14.3 Theorem. The matrices L0 , L1 , L2 , L3 Css defined by
(L1 )k = (k 1)k1 + (s k)k+1 ,
(L2 )k = i(k 1)k1 i(s k)k+1 ,
(L3 )k = (s + 1 2k)k,
s1
k
(L0 )k =
k1
1
k
55
(2.49)
1 = 1 = 1, 2 = i, 2 = i.
(2.50)
where
Therefore, for , {1, 2},
(L L )k = (s k)( (s k 1)k+2 + kk )
+ (k 1)( (s k + 1)k + (k 2)k1 )
= (s k)(s k 1)k+2 + (k 1)(k 2)k2
+( k(s k) + (k 1)(s k + 1))k .
(2.51)
Similarly,
(L L3 )k = (s k)(s 1 2k)k+1 + (k 1)(s + 3 2k)k1 ,
(2.52)
(2.53)
56
Chapter 3
The symmetries of the universe
An understanding of the symmetries of the universe is necessary to be able to appreciate
the modern concept of elementary particles.
The special orthogonal group SO(3) of 3-dimensional rotations and the related special Euclidean group ISO(3) of distance and orientation preserving affine mappings of 3-dimensional
space are of exceptional importance in physics and mechanics. Indeed, the corresponding
symmetries are inherent in many systems of interest and in the building blocks of most
larger systems. The associated Lie algebra so(3) of real, antisymmetric 3 3 matrices
describes angular velocity and angular momentum, both in classical and in quantum mechanics; see Section 3.10. From a mathematical point of view, 3-dimensional rotations are
also interesting due to the sporadic isomorphism between the Lie algebras so(3) and u(2)
and the resulting isomorphism between SO(3) and a quotient of SU(2), see Section3.4.
3.1
58
det Q = 1.
(3.1)
Since
|Qx|2 = (Qx)T (Qx) = xT QT Qx = xT x for Q SO(n), rotations preserve the length
|x| = xT x of a vector,
|Qx| = |x| for all x Rn .
Since det(QA) = det Q det A = det A for Q SO(n), rotations also preserve the orientation
of volumes. Conversely, these condition together imply that QT Q = 1 and det Q = 1, hence
Q SO(n). It can be shown that SO(n) is a connected matrix group; hence every rotation
is obtainable by a rotational motion. Since det is a continuous function of its entries and
det 1 = 1, SO(n) = O(n)0 is the connected part of O(n).
1
kQ1 Q2 kF .
2n + 2
(3.2)
1p
n Q1 : Q2 [0, 1].
2
(3.3)
(3.4)
by (3.24).
tr(Q1 Q2 )T (Q1 Q2 )
tr QT1 Q1 tr QT1 Q2 tr QT2 Q1 + tr QT2 Q2
2 tr 1 2 tr QT1 Q2 = 2n 2Q1 : Q2
2n + 2
59
A matrix f Knn is called antisymmetric if AT = A. The set o(n, K)o(n, K), orthogonal Lie algebra of all antisymmetric n n matrices with entries in K is a matrix
Lie algebra, called an orthogonal Lie algebra over K. Indeed, if f, g o(n, K) then
[f, g]T = (f ggf )T = (g T f T f T g T = (g)(f )(f )(g) = (f ggf ) = [f, g], hence
[f, g] o(n, K). It is customary to write o(n) := o(n, R). Note that o(n) is a Lie subalgebra
of u(n). The antisymmetric n n matrices with trace zero form a Lie algebra so(n), called
a special orthogonal Lie algebra. so(n) is the Lie algebra of the matrix group SO(n).
Note that so(n, K) = o(n, K) since f o(n, K) inplies tr f = tr f T = tr(f ) = tr f ,
hence the trace is automatically zero.
We briefly look at the smallest orthogonal groups and their Lie algebra. For n = 1, we have
O(1) = {1, 1}, SO(1) = Id(1) = {1}, and o(1) = so(1) = id(1) = k0}.
For n = 2, the Lie algebra o(2) = so(2) is 1-dimensional and consists of the antisymmetric
2 2 matrices
0
= i,
0
where
i :=
0
0
(3.5)
(3.6)
the result of a uniform rotation U(t) := eti around zero by some angle in counterclockwise direction. The product of rotations is a rotation by the sum of the angles,
Q[]Q[] = Q[ + ],
(3.7)
and the Frobenius distance of two rotations is a function of the difference of the angles,
d(Q[], Q[]) = sin(| |/2),
(3.8)
correctly taking account of the fact that angles differing by an integral multiple of 2
determine the same rotation. Note that i2 = 1, hence we may identify i with the imaginary
unit i. This identification provides the isomorphisms U(1)
= SO(2) and u(1)
= so(2),
reflecting the fact that the complex number plane is isomorphic to the 2-dimensional real
plane.
The full orthogonal group O(2) consists of the rotations and the matrices
cos
sin
R[] =
= R[0]ei
sin cos
describing 2-dimensional reflections at the axis .
(3.9)
60
3.2
0 a3 a2
X(a) := a3
0 a1 ,
a2 a1
0
a K3 ;
therefore
so(3, K) = {X(a) | a K3 }.
We note the rules
X(a)T = X(a),
X(a)X(b) =
X(a)b = a b = X(b)a,
X(a b) = baT abT ,
X(a)a = 0,
(3.10)
(3.11)
a3 b3 a2 b2
a2 b1
a3 b1
T
a1 b2
a3 b3 a1 b1
a3 b2
= ba (a b)1.
a1 b3
a2 b3
a2 b2 a1 b1
(3.12)
From (3.12) for a = b, we find by repeated multiplication with X(a), using (3.10),
X(a)2 = aaT a2 1,
X(a)3 = a2 X(a),
X(a)4 = a2 X(a)2 ;
(3.13)
(3.14)
We use these relations for K = R to prove the following explicit characterization of 3dimensional rotations.
3.2.1 Theorem.
(i) For all r R3 with |r| 1, the matrix
Q[r] := 1 + 2r0 X(r) + 2X(r)2 ,
where r0 =
1 r2 ,
(3.15)
is a rotation.
(ii) If r = 0 then Q[r] is the identity; otherwise, Q[r] describes a rotation around the axis
through the vector r by the angle
= 2 arcsin |r|,
and we have
Qrr = r,
|r| = sin
,
2
r0 = cos
(3.16)
0.
2
(3.17)
(iii) Conversely, every rotation Q has the form Q = Q[r] for some r R3 with r2 1.
61
Proof. (i) Writing X = X(r), Q = Q[r], we find from (3.13) that X 4 = r2 X 2 = (1r02 )X 2 ,
hence
QT Q = (1 2r0 X + 2X 2 )(1 + 2r0 X + 2X 2 )
= 1 + (4 4r02 )X 2 + 4X 4 = 1.
Thus Q1 = QT , QQT = QQ1 = 1. Since (det Q)2 = det Q det QT = det(QQT ) = 1, we
have det Q[r] = 1. Since the sign is positive for r = 0, continuity of (3.15) implies that
det Q[r] = 1 for all r. Thus Q[r] is a rotation. Moreover,
X(Q + 1) = r0 (Q 1),
(3.18)
(3.19)
which follows using (3.10), to a unit vector a and b = a, we see that the angle between
a vector a and its rotated image Q[r]a is cos = aT Q[r]a = 1 2|a r|2 , hence
p
|a r| = (1 cos )/2 = sin(/2).
(3.20)
In particular, a unit vector a orthogonal to r is rotated by the angle (3.16) since then
|a r| = |a| |r| = |r|. Since (3.10) implies Q[r]r = r, the vector r is fixed by the rotation,
and Q[r] describes a rotation around the vector r by the angle given by (3.16).
(iii) Let Q be an arbitrary rotation.
Case 1. If Q + 1 is nonsingular, we define, motivated by (3.18),
:= (Q 1)(Q + 1)1 = (Q QT Q)(Q + QT Q)1
X
T.
= (1 QT )(1 + QT )1 = (1 QT )1 (QT 1) = X
is antisymmetric, X
= X(a) for some a. Now X(Q
+ 1) = Q 1, hence we have
Hence X
(1 X)Q = 1 + X. Writing
r0 := 1/ 1 + a2 , r := r0 a,
we find from (3.14) and (3.13) that
+ r 2 X)(1
= 1 + (r 2 + 1 r 2 a2 )X
+ 2r 2 X
2
Q = (1 + r02 X
+ X)
0
0
0
0
= 1 + 2r02 X(a) + 2r02 X(a)2 = 1 + 2r0 X(r) + 2X(r)2 .
Since 1 r2 = 1 r02a2 = 1 a2 /(1 + a2) = 1/(1 + a2 ) = r02 0, we conclude that Q = Q[r].
Case 2. If Q + 1 is singular then 1 is an eigenvalue of Q. The other two eigenvalues must
have product 1 since the determinant is the product of all eigenvalues, counted with their
algebraic multiplicity. Since two complex conjugate eigenvalues have a positive product,
this implies that the eigenvalues are all real. Any real eigenvalue has an associated real
62
For angles ||
(corresponding to r2
1
2
q0 = cos = 2r02 1 =
q = 2r0 r,
to rewrite Q = Q[r] as
Q = 1 + X(q) +
p
1 q2 ,
(3.21)
1
X(q)2 = (1 12 X(q))1 (1 + 21 X(q)),
1 + q0
(3.22)
which has nonlinearities only in the higher order term. Since 1 and X(q) are symmetric,
we see that
Q32 Q23
1
q = Q13 Q31
(3.23)
2
Q21 Q12
is linear in the coefficients of Q. Therefore, (3.22) is referred to as the linear parameterization. From (3.31), one easily checks that Q = Q[r] satisfies
tr Q = 4r02 1 1.
(3.24)
Using also (3.23), we see that if tr Q > 1 then r, r0 (and hence the rotation axis and angle)
can be uniquely recovered from Q by
r=
q
,
1 + tr Q
r0 =
1p
1 + tr Q.
2
(3.25)
(3.26)
cos ,
2
2
cos = 1 2 sin2
,
2
r=
sin
e,
2
63
we find
Q[r] = 1 + (sin )X(e) + (1 cos )X(e)2 =: Q().
(3.27)
sin |a|
1 cos |a|
X(a) +
X(a)2
2
|a|
|a|
if a 6= 0
(3.28)
for the exponential of a real, antisymmetric matrix. It describes a rotation along an axis
parallel to a by an angle = t|a|. The Rodrigues formula can also be obtained by writing
the exponential as a power series and simplifying using (3.13). In particular, for small a we
find eX(a) = 1 + X(a) + O(|a|2 ) for small a, showing explicitly that the X(a) so(3) are
infinitesimal rotations.
As a useful application of the exponential form, we prove:
3.2.3 Proposition. For any rotation Q,
X(Qa) = QT X(a)Q,
(3.29)
Qa Qb = Q(a b),
(3.30)
3.3
r22 r32 r1 r2 r0 r3 r1 r3 + r0 r2
r0 =
p
1 |r|2 ,
(3.31)
where 1 denotes the identity matrix and r R3 satisfies |r| 1. Alternatively, we may
write (3.31) in the homogeneous quaternion parameterization
r22 r32 r1 r2 r0 r3 r1 r3 + r0 r2
2
(3.32)
Q[r0 , r] = 1 + 2
r1 r2 + r0 r3 r12 r32 r2 r3 r0 r1
2
r0 + r1 + r22 + r32
r1 r3 r0 r2 r2 r3 + r0 r1 r12 r22
64
(3.33)
and reduces to Q[r] if the arbitrary scale is chosen such that r02 + r12 + r22 + r32 = 1 and
r0 0. Because of (3.33), parallel vectors (r0 , r) in the quaternion parameterization give
the same rotation. This shows that the 3-dimensional rotation group has the topology of
a 3-dimensional projective space. (Note also that the linear parameterization (3.22) can
be obtained from the homogeneous form (3.32) by choosing the arbitrary scale such that
r02 + r2 = 2r0 .)
In computational geometry, the quaternion parameterization of rotations is preferable to
the frequently discussed (and more elementary) parameterization by Euler angles, since it
does not need expensive trigonometric functions, its parameters have a geometric meaning
independent of the coordinate system used, and it has significantly better interpolation
properties (Shoemake [252], Ramamoorthi & Barr [226]). Note that the projective
identification mentioned above has to be taken into account when constructing smooth
motions joining two close rotations Q[r] with nearly opposite r of length close to 1.
Quaternions.
A quaternion is a 4 4 matrix of the form
U(r0 , r) :=
r0
rT
r r0 1 + X(r)
r0 R, r R3 .
(3.34)
3.3.1 Theorem. The set Q of quaternions is a skew field, i.e., an associative algebra in
which every nonzero element has an inverse. We have
U(r0 , r) + U(s0 , s) = U(r0 + s0 , r + s),
(3.35)
(3.36)
(3.37)
(3.38)
U(r0 , r)1 =
r02
1
U(r0 , r) if r02 + r2 6= 0.
+ r2
(3.39)
Proof. (3.35)(3.37) are trivial, and (3.38) follows by direct computation, using (3.10),
(3.12) and (3.11). Specializing (3.38) to s0 = r0 , s = r gives
U(r0 , r)U(r0 , r) = U(r02 + r2 , 0) = (r02 + r2 )1,
(3.40)
which implies (3.39). Therefore Q is a vector space closed under multiplication, and every
nonzero element in Q has an inverse. Since matrix multiplication is associative, Q is a skew
field.
65
In the standard treatment, quaternions are treated like complex numbers, as objects of the
form
q(r0 , r) = r0 1 + r1 i + r2 j + r3 k
with special
0
1
i=
0
0
1 0 0
0
0 1 0
0 0 0 1
0
0 0 1 0
0 0 0
0 0 1
, j =
, k =
, (3.41)
1 0 0 0
0 1 0 0
0 0 1
0 1 0
0 1 0 0
1 0 0 0
r2 + ir3 r0 ir1
r0
r1
r2
r3
r1 r0 r3 r2
=
r2 r3
r0 r1
r3 r2 r1
r0
= r0 + r1 i + r2 j + r3 k.
3.4
(3.42)
This implies that the spaces of real and complex antisymmetric matrices,
so(3) = {X(a) | a R3 },
so(3, C) = {X(a) | a C3 },
are closed under forming commutators, and hence form a Lie algebra. We shall see soon
that the elements of so(3) are infinitesimal rotations. Introducing
0 0 0
0 0 1
0 1 0
L1 = 0 0 1 , L2 = 0 0 0 , L3 = 1 0 0 ,
0 1 0
1 0 0
0 0 0
2i k
7 Lk for k = 1, 2, 3 defines
66
axes. We assemble the three Js in a column vector and (ab-)use the notation X(a) = aJ.
Writing out X(a)X(a ) = X(a a ) we get
X
Jk Jl =
klm Jm ,
(3.43)
m
(gpg 1) = (g 1 ) pg = gpg 1 ,
3.5
Angular velocity
Quaternions are the most elegant way to derive a 3-dimensional analogue of the formulas
(3.6) and (3.7) for 2-dimensional rotations in terms of rotation angles. The resulting product
formula for 3-dimensional rotations, Theorem 3.5.1 in Section 3.5, allows us to derive the
properties of angular velocity.
3.5.1 Theorem. (Product formula)
Let |r|, |s| 1. Then
Q[r]Q[s] = Q[r s],
(3.44)
(3.45)
Moreover,
Q[r]1 = Q[r]T = Q[r],
r (r) = 0.
(3.46)
67
(3.48)
(3.49)
Now
U(s0 , s)U(0, x)U(s0 , s)T = U(s0 , s)U(0, x)U( s0 , s) = U(s0 , s)U(sT x, s0 x + X(s)x)
= U(0, s20 x + 2s0 X(s)x + ssT x + X(s)2 x),
hence
U(s0 , s)U(0, x)U(s0 , s)T = U(0, Q[s]x).
(3.50)
Multiplication by U(r0 , r) on the left and by U(r0 , r)T on the right gives, using (3.48),
U(q0 , q)U(0, x)U(q0 , q)T = U(0, Q[r]Q[s]x).
(3.51)
Computationally,
p (3.47) is numerically stable in finite precision arithmetic, while the direct
formula q0 = 1 q2 suffers from loss of accuracy if q0 is tiny, due to cancellation of
leading digits.
Differentiation of the product formula gives a useful formula for the derivative of a rotation.
3.5.2 Theorem. (Differentiation formula)
If r is a function of t then
d
Q[r] = X()Q[r],
dt
and we have
= 2(r r + r0 r r 0 r),
(3.52)
1
r0 = r .
2
(3.53)
1
r = (r0 r ),
2
Proof. Writing
r = r(t),
r = r(t + h) = r + hr + O(h2 ),
68
we have
Q[r]Q[r] =
=
=
=
Q[r (r)] = Q[
r0 r + r0r r r]
Q[(r0 + hr0 )r + r0 (r + hr) (r + hr) r + O(h2 )]
Q[h(r r + r0 r r0 r)] + O(h2 )
1 + hX() + O(h2 ).
(3.54)
hence
r = 2(r (r r ) + r0 r r )
= 2(rrT r r2 r + r0 r r ) = r0 2r,
giving r = 21 (r0 r ). Multiplication by rT gives rT r = 21 r0 , and the formula for r0
follows from (3.54) if r0 6= 0. For r0 = 0, the formula follows by continuity.
(3.55)
Proof. Since
tr Q[r]T Q[s] = tr Q[r]Q[s] = tr Q[(r) s] = 4((r) s)20 1 = 3 4((r) s)2
3.6
0 0
; x 1n
This motivates a more general triangular construction for Lie groups T (G1 , . . . , Gm ) and Lie
algebras t(L1 , . . . , Lm ) , which will later also produce the Galilean group and the Poincare
group. T (n) = T (R, . . . , R)
D(G1 , . . . , Gm ) diagonal, direct produt, D(n). and corresponding Lie algebras.
69
cR .
n
In the special case special case n = 3, the corresponding Lie algebra of infinitesimal generators is the Lie algebra
!
X() v
3
iso(3) =
, v R ,
0
0
= X((t))x(t) + v(t),
or short
x = x + v.
If we write
X(a)
=
X(a) 0
0
0
bp=
0 b
0 0
(3.56)
b) for a, b R3 ,
[X(a),
X(b)]
= X(a
(3.57)
[X(a),
b p] = (a b) p for a, b R3 ,
(3.58)
[a p, b p] = 0 for a, b R3 .
(3.59)
with Lk1 Lk Lk should give the triangular structure; cf. Lies theorem. Is this related
to Ados theorem?
70
3.7
3.8
3.9
3.10
71
3.11
3.12
script p.24-26,28-30,34f
Galilean spacetime. Until the beginning of the twentieth century, one thought that time
for all observers was the same in the following sense: if two events take place at two different
places in space, then the question whether the events took place at the same time has an
observer independent answer. Space was thought of as a grid on which the motions of all
objects took place and time was thought to be completely independent from space. The
distance between two events therefore consisted of two numbers: a difference in time and
a spatial distance. For example, the distance between when I woke up and when I took
the subway to work is characterized by saying that from the moment I woke up it took me
half an hour to reach the subway station, which is 500 meter from my bed. We call the
spacetime described in this manner the Galilean spacetime.
There are three important kinds of symmetries in the Galilean spacetime and the group
72
that these symmetries generate is called the Galilean symmetry group1 . If we shift the
clock an hour globally, which is possible in Galilean spacetime, the laws of nature cannot
alter. Hence one symmetry generator is the time-shift: t 7 t + a for some fixed number
a. Likewise, the laws of nature should not change if we shift the origin of our coordinate
system; hence a second symmetry is the shift symmetry (x1 , x2 , x3 ) 7 (x1 +b1 , x2 +b2 , x3 +b3 )
for some fixed vector (b1 , b2 , b3 ). The third kind of symmetries are rotations, that is, the
group SO(3), which we have seen before. There are some additional discrete symmetries,
like space reflection, where a vector (x1 , x2 , x3 ) is mapped to (x1 , x2 , x3 ). We focus,
however, on the connected part of the Galilean symmetry group. The subgroup of the
Galilean symmetry group obtained by discarding the time translations is the group ISO(3).
Below, when we discuss the Poincare group, we give more details on the group ISO(3) as
it is a subgroup of the Poincare group.
3.13
script p.30-33
When K = R, one has for symmetric bilinear forms another subdivision, since B can have
a definite signature (p, q) where p + q is the dimension of V . If B is of signature (p, q), this
means that there exists a basis of V in which B can be represented as
B(v, w) = v T Aw ,
where A = diag(1, . . . , 1, 1, . . . , 1) .
| {z } | {z }
p times
q times
The group of all linear transformations that leaves B invariant is denoted by O(p, q). The
subgroup of O(p, q) of transformations with determinant one is the so-called special orthogonal group and is denoted by SO(p, q). The associated real Lie algebra is denoted
so(p, q) and its elements are linear transformations A : V V such that for all v, w V we
have B(Av, w) + B(v, Aw) = 0. The Lie product is given by the commutator of matrices.
More general, the standard representation of so(p, q) is the one that defines so(p, q) and is
thus given by (p + q) (p + q)-matrices that leave a metric of signature (p, q) invariant; in
Lie algebra theory the standard representation is called the fundamental representation.
In the fundamental representation of so(3, 1) (which is not unitary), the Minkowski inner
product is invariant.
The group SO(3) is a subgroup of SO(3, 1) and consists of all those SO(3, 1)-rotations that
act trivially on the time-component of four-vectors. The Galilean symmetry group is the
subgroup of ISO(3, 1) consisting of the SO(3)-rotations together with the time translations.
An element of SO(3, 1) is called a Lorentz boost if the element acts nontrivially on the
zeroth component of four-vectors. By multiplying with an appropriate element of the SO(3)
1
The group is also called the Galilei group or the Galileo group. We follow the tradition that
proceeds in analogy with the use of Euclidean space or Hermitian matrix.
3.13. THE LORENTZ GROUPS O(1, 3), SO(1, 3), SO(1, 3)0
73
subgroup we may assume that a Lorentz boost only mixes the zeroth and first component
of four-vectors. Then a Lorentz boost L takes the following form (recall c = 1):
x0 vx1
L(v)0 =
,
1 v2
x1 vx0
L(v)1 =
,
1 v2
(3.60)
and L(v)2 = v 2 , L(v)3 = v 3 . Physically the Lorentz boost (3.60) describes how coordinates
transform when one goes from one coordinate system to another coordinate system that
moves with respect to the first system in the positive x1 -direction with velocity v. Since v
has to be smaller than one, as is apparent from (3.60), one concludes that special relativity
excludes superluminal velocities. The number
=
1
,
1 v2
(3.61)
is called the -factor. The -factor gives an indication whether we should treat a physical
situation with special relativity or whether a nonrelativistic treatment would suffice. The
Lorentz contraction factor is the inverse of and measures how distances shrink when
measured in another coordinate system, moving at a velocity v with respect to the original
coordinate system. For -particles, moving with a typical speed of 15,000 kilometers per
second, we have v = 0.05 and so 1.03 and 1 0.97, which implies that if we take a
rod of 100 meter and let an -particle fly along the rod, it measures only 97m (assuming
that -particles can measure). The -factor thus tells us that if we want accuracy of more
than 3%, we need to treat the -particle relativistically.
The nonrelativistic limit.
In order to discuss the nonrelativistic limit, we restore
the presence of the velocity of light c in the formulas. For a particle at rest, the space
momentum p vanishes. The formula p2 = (mc)2 therefore implies that, at rest, p0 = mc
and the rest energy is seen to be E = mc2 . This suggests to define the kinetic energy
(which vanishes at rest) by the formula
H := p0 c mc2 .
Introducing velocity v and speed v by
v = |v| = v2 ,
p
p
we find from p2 p20 = (mc)2 that p0 = (mc)2 + p2 = mc 1 + (v/c)2 , so that
v = p/m,
p
mv 2
1 + (v/c)2 1
=p
.
H = mc2 ( 1 + (v/c)2 1) = mc2 p
1 + (v/c)2 + 1
1 + (v/c)2 + 1
E = p0 c = p
mc2
1 (v/c)2
Taking the limit c we find that H becomes the kinetic energy 12 mv 2 of a nonrelativistic
particle of mass m, The nonrelativistic approximation H 21 mv 2 for the kinetic energy
74
is valid for small velocities v = |p/m| c, where we may neglect the term (v/c)2 in the
square root of the denominator.
Lorentz group as SL(2, C). We mention some further properties of spin coherent states.
Because of the identity
| x, si = (1)2s |x, si
fermionic representations (s 6 Z) are called chiral. Since fermions are chiral, they are
not invariant under the Z2 -subgroup of SL(2, C) and thus fermions do not constitute a
representation of the restricted Lorentz group.
We use the notation introduced in Section 2.11 and identify four-vectors p R1,3 with the
2 2-matrices p + . For any four-vector p R1,3 the Minkowski norm is given by
det(p + ) = p p .
The group SL(2, C) acts on R1,3 through
A(p + )A ,
for A SL(2, C) .
Clearly this defines for each A SL(2, C) an element of SO(3, 1), and hence we have a
map SL(2, C) SO(3, 1). The group SL(2, C) is a real connected manifold of dimension
6. Indeed, any complex 2 2 matrix has 4 complex entries making 8 real numbers. The
constraint det A = 1 gives two equations, for the real and imaginary part, and hence
removing two dimensions.
Let us show that SL(2, C) is connected. For A SL(2, C) we can apply the GramSchmidt
proces to the column vectors of A. Looking at how the GramSchmidt procedure works,
we see that any element of A SL(2, C) can be written as a product of an upper triangular
matrix N with positive entries on the diagonal and a unitary matrix U U(2). We can
write U = ei U with U SU(2) making clear that U(2)
= S 2 S 1 so that U(2) is
connected and the matrix U can be smoothly connected to the identity. For N we may
write
a b
N=
0 c
with ac = 1 and a > 0 and c > 0. Then t 7 tN + (1 t)122 is a smooth path in
L(2, C) for t [0, 1] that connects the unit matrix to N. Dividing by the square root of
the determinant gives the required path in SL(2, C). Hence SL(2, C) is connected.
The map SL(2, C) SO(3, 1) is a smooth group homomorphism and thus any two points
in the image can be joined by a smooth path. Hence the image is a connected subgroup of
SO(3, 1). Since the dimensions of SO(3, 1) and SL(2, C) are the same, the image contains
an open connected neighborhood O of the identity (this is nothing more than the statement
that the induced map sl(2, C) so(3, 1) is an isomorphism). But the subgroup of SO(3, 1)
generated by a small open neighborhood of the identity is the connected component containing the identity. Indeed, call G the group generated by the open neighborhood O. We
may assume O 1 := {g 1, g O} = O, since if not we just replace O by O O 1 . If x G
75
3.14
The group of all translations in V generates together with SO(p, q) the group of inhomogeneous special orthogonal transformations, which is denoted ISO(p, q). One can
obtain ISO(p, q) from SO(p, q + 1) by performing a contraction; that is, by rescaling some
generators with some parameter and then choosing a singular limit 0 or . The
group ISO(p, q) can also be seen as the group of (p + q + 1) (p + q + 1)-matrices of the
form
Q b
with Q SO(p, q) , b V .
0 1
The Lie algebra of ISO(p, q) is denoted iso(p, q) and can be described as the Lie algebra
of (p + q + 1) (p + q + 1)-matrices of the form
A b
with A so(p, q) , b V .
0 0
Again, the Lie product in iso(p, q) is the commutator of matrices.
Minkowski spacetime. With the advent of special relativity, the classical spacetime view
was altered in the sense that time and space made up one spacetime, called Minkowski
spacetime. As a topological vector space Minkowski spacetime is nothing more than R4 ,
but it is equipped with the Minkowski metric2 (also see Section 5.6 and Example 11.4.5):
(x y)2 = (x0 y 0 )2 + (x1 y 1 )2 + (x2 y 2)2 + (x3 y 3 )2 = (x y)2 (x0 y 0)2 .
The time component of the four-vectors is the zeroth component. We write a general
four-vector as
1
0
v
v
, v = v2 ,
v=
v
v3
2
We choose units such that c = 1, and work with the signature (, +, +, +).
76
where v is the space-like part of v and v 0 is the time-like component of v. With the notation
introduced we see that the Minkowski metric can be written as v 2 = (v 0 )2 + v2 , where
v2 is the usual Euclidean norm for three-vectors. The Minkowski inner product is derived
from the Minkowski metric and given by
0 0
w
v
= v 0 w 0 + vw .
w
v
Note that in a strict sense the Minkowski inner product, the Minkowski norm and the
Minkowski metric are not an inner product, norm and metric respectively as the positivity
condition is clearly not satisfied.
The Poincar
e group is a subgroup of the group of all symmetries that leave the Minkowski
metric invariant. The Poincare group is often denoted as ISO(3, 1). On a four-vector v
the Poincare group acts as v 7 Av + b, where A is an element of SO(3, 1) and b is some
four-vector. Hence the Poincare group consists of rotations and translations. An explicit
representation of ISO(3, 1) can be given in terms of the matrices
A b
,
0 1
where A is a 4 4-matrix in SO(3, 1) and b is a four-vector. Recall that A is in SO(3, 1) if
A satisfies
1 0 0 0
0 1 0 0
AT A = , =
0 0 1 0 .
0 0 0 1
The affine linear transformations contain the translations and SO(3, 1)-rotations. The
generators of the translations we call the momenta, and since they have four components,
we sometimes refer to them as four-momenta.
The (real) Lie algebra of ISO(3, 1) is described by the matrices of the form
A b
(A, b) :=
,
0 0
with A so(3, 1) and b R3 . The Lie product is given by the commutator of matrices,
and takes the form
(A, b)(A , b ) = ([A, A ], Ab A b) ,
where Ab is the usual matrix action of A on b . In particular, we have
(0, b)(0, b) = 0 ,
from which we read off that the translations form a commutative subalgebra. The translations form an ideal such that the momenta form the standard representation of so(3, 1),
that is, the defining representation.
General spacetime. The generalization of Minkowski spacetime is a manifold with a
pseudo-Riemannian metric g; the latter turns the tangent space at each point of the
77
manifold into a Minkowski space. Thus around every point there is a chart and a coordinate
system such that g takes the form of a Minkowski metric. It is clear that a proper description
of general relativity requires differential geometry and the development of tensor calculus.
In general relativity but also already in special relativity physicists use some conventions
that are worth explaining. Spacetime indices indicating components of four-vectors are
indicated by Greek letters , , . . .. To denote a four-vector x = (x ) one writes simply x .
If an index appears ones upstairs and once downstairs, it is to be summed over; this is called
the Einstein convention. Derivatives are objects with indices downstairs; = /x .
The Kronecker delta is an invariant tensor and we have x = and x = 4.
The Minkowski metric is usually denoted by the Greek letter and again one usually just
writes to denote the metric and not just the -component; as a matrix the Minkowski
metric is given by:
1 0 0 0
0 1 0 0
=
0 0 1 0 .
0 0 0 1
The metric g and its pointwise inverse g are used to lower and to raise indices; indeed,
the metric gives an isomorphism between the tangent space and the cotangent space. Hence
is defined as g , and a check of consisteny gives g = g g g . As a further
exercise in the conventions the reader might verify g g = , g g = 4. The described
conventions are used a lot in physics literature and more on its nature and why it works
can be found in many text books on relativity, e.g., in the nice introductory textbook by
dInverno [72]).
The symmetry group of a manifold M with a pseudo-Riemannian metric g is huge; it
consists of all diffeomorphisms of the manifold, as any diffeomorphism preserves a metric.
The vector fields on M describe the infinitesimal generators of the group of diffeomorphisms.
3.15
(3.62)
is due to Lorentz covariance. The integration measure is clearly rotation invariant. Hence
to study the behavior under a general Lorentz transformation L we may assume that L
78
only mixes the x-direction and the time-direction. In that case we have, using the -factor
(3.61)
kx vkt
L(kx ) = q
= (kx vkt ) ,
v2
1 c2
L(ky ) = ky , L(kz ) = kz ,
kt cv2 kx
v
= (kt 2 kx ) .
L(kt ) = q
2
c
1 vc2
2
2
One easily checks that L(k)
q = k . Since (k) is the zeroth component of the wave vector
2
k, we see that the factors 1 vc2 cancel out. Another way to see the covariance is to note
the equality
d3 k
d4 k 2 m2 c2
(3.63)
=
k + 2 (k0 ) ,
(2)3 2(k)
(2)3
h
3.16
3.17
SO(2, 4)
= SU(4)? containing SO(1, 3)
conformal transformations and Poisson representation
H(3) as subgroup and the hydrogen atom
The periodic system
3.18
3.19
Casimirs
faithful representations
sum and product of reps
universal envelope (classical and quantum),
GROUP
3.20. UNITARY REPRESENTATIONS OF THE POINCARE
79
Casimirs,
splitting representations through common eigenspaces of Casimirs
so(3), sl(2), spin
3.20
80
be denoted p . The number E = p0 c is called the energy, and depends on the basis chosen,
since the so(3, 1) rotations mix the momenta. Having fixed a basis of the translations, there
is only a SO(3) subgroup that leaves the energy invariant. Intuitively this is clear, rotating
a reference frame does not change the energies. In general, for a given basis, the subgroup
of so(3, 1) that leaves the vector (1, 0, 0, 0) invariant is SO(3) and the elements of the SO(3)
subgroup are rotations. There are three independent SO(3, 1) elements that do not leave
(1, 0, 0, 0) invariant, these transformations and their linear combinations are called Lorentz
boosts in the physics literature. The Lorentz boosts mix time and space coordinates. A
basis of Poincare Lie algebra thus consists of the generators of three rotations, three Lorentz
boosts and four translations.
3.21
3.22
Elementary particles
Elementary particle = irreducible unitary representation of the Poincare group with quantized spin, p2 0, and p0 > 0.
massless particles and gauge freedom
3.22.1 Proposition. For real a R3 and any Pauli set of spin j,
|(a )| 2j|a|
for all Cs .
(3.64)
81
s1
(s k)(a1 ia2 ) = Ak+1k .
=
k1
If a3 6= |a| then a3 < |a|, and we may define the lower triangular tridiagonal matrix L and
the diagonal matrix D with nontrivial entries
a1 + ia2
s1
Lkk = 1, Lk+1k =
0.
, Dkk = (|a| a3 )k
k
|a| a3
Now A = LDL ; therefore, A is Hermitian positive semidefinite.
If a3 = |a| then a1 = a2 = 0 and A is diagonal with nonnegative diagonal entries Akk =
s1
2(k 1)|a|, and again positive semidefinite. Now
k1
0 A = (s 1)|a| (a ),
and replacing a by a gives the desired inequality. Equality holds iff A = 0, which is the
Weyl equation.
Note that the Weyl equation is solved for a3 < |a| by iff L is zero except in the last
component (since A = LDL and the other diagonal entries of D are positive).
More precisely, a has the simple eigenvalues (s + 1 2l)|a| (l = 1 : s).
3.23
82
Chapter 4
From the theoretical physics FAQ
4.1
To be done
The present chapter will be merged into the preceding chapters. Some of the sections in
later chapters, whose content is already in Part I will be eliminated.
The section on Heisenberg groups and Poisson representations is at the start of Chapter 3
since to define the Lie algebra of angular momentum already requires the CCR and Poisson
brackets.
Perhaps the two chapters could be integrated better via a sequence like:
- Reflections, Rotations and classical angular momentum (which could contain a lot of math
on SO(3) including some Lie stuff).
- Galilei group, which builds on rotations.
- Symplectic/Hamiltonian stuff (classical non-relativistic dynamical groups)
- Classical relativistic stuff (Poincare) Maybe also classical electromagnetism in here somewhere, since it stands astride both the classical and quantum worlds. Its also the natural
place to introduce the concept of (classical) gauge invariance.
- Non-relativistic QM (Heisenberg, Oscillator, Schrodinger, etc).
- Re-visit SO(3) in the quantum context and show how the requirement of being a symmetry
of a positive-definite inner product is enough to imply stunningly unexpected facts about
the spectrum of angular momentum experiments. This then becomes the archetype for
how representations, Casimirs, etc, are at the heart of modern physics. This is also a good
place to emphasize how Schrodinger wave functions, etc, are not the last word and about
how a more general algebraic framework is cleaner and powerful (having shown that this is
sufficient to make impressive predictions).
- Continue on to Isospin and gauge symmetries.
83
84
4.2
85
86
87
88
89
90
4.3
General Lie groups and Lie algebras extend these notions to to more
general manifolds. A manifold is just a higher-dimensional version
of space, and transformations are generalized motions preserving
invariants that are important in the manifold. The transformations
preserving these invariants are also called symmetries, and the
Lie group consisting of all symmetries is called a symmetry group.
The elements of the corresponding Lie algebra are infinitesimal
symmetries.
For example, physical laws are invariant under rotations and
translations, and hence unter all rigid motions. But not only these:
If one includes time explicitly, the resulting 4-dimensional space
has more invariant motions or symmetries.
The Lie group of all these symmetry transformations is called the
Poincare group, and plays a basic role in the theory of relativity.
The transformations are now about space-time frames in uniform motion.
Apart from translations and rotations there are symmetries called
boosts that accelerate a frame in a certain direction, and
combinations obtained by taking products. All infinitesimal symmetries
together make up a Lie algebra, called the Poincare algebra.
Much more on Lie groups and Lie algebras from the perspective of
classical and quantum physics can be found in:
Arnold Neumaier and Dennis Westra,
Classical and Quantum Mechanics via Lie algebras,
Cambridge University Press, to appear (2009?).
http://www.mat.univie.ac.at/~neum/papers/physpapers.html#QML
arXiv:0810.1019
91
92
4.4
4.5
93
94
http://ptp.ipap.jp/link?PTP/51/249/
constructs covariant propagators and complete vertices for spin J
bosons with conserved currents for all J. See also
H Shi-Zhong et al.,
Eur. Phys. J. C 42 (2005), 375-389
http://www.springerlink.com/content/ww61351722118853/
4.6
95
96
time translations are now dynamical, since they affect the position
of the here-and-now.
This is the form of dynamics which is manifestly
Lorentz invariant, and in which space and time appear on equal footing.
An observer in the here and now (let us call it a point observer)
can - in principle, classically - have arbitrarily accurate
information about the particles and/or fields on the past
hyperboloid; thus causality is naturally accounted for.
Information given on the past hyperboloid of a point can be propagated
to information on any other past hyperboloid using the dynamical
equations that are defined via the momentum 4-vector P, which is a
4-dimensional analogue of the nonrelativistic Hamiltonian.
The Hamiltonian corresponding to motion in a fixed timelike
direction u is given by H=u dot P. The commutativity of the components
of P is the condition for the uniqueness of the resulting state
at a different point x independent of the path x is reached from 0.
97
98
4.7
In his QFT book, Weinberg says no, arguing that there is no way to
implement the cluster separation property. But in fact there is:
There is a big survey by Keister and Polyzou on the subject
B.D. Keister and W.N. Polyzou,
Relativistic Hamiltonian Dynamics in Nuclear and Particle Physics,
in: Advances in Nuclear Physics, Volume 20,
(J. W. Negele and E.W. Vogt, eds.)
Plenum Press 1991.
www.physics.uiowa.edu/~wpolyzou/papers/rev.pdf
that covered everything known at that time. This survey was quoted
at least 116 times, see
http://www.slac.stanford.edu/spires/find/hep?c=ANUPB,20,225
looking these up will bring you close to the state of the art
on this.
They survey the construction of effective few-particle models.
There are no singular interactions, hence there is no need for
4.8
What is a photon?
99
100
101
102
103
104
4.9
105
106
107
108
Theorem.
An irreducible representations of the full Poincare group with
mass m>=0 and finite spin has a position operator transforming
like a 3-vector and satisfying the canonical commutation relations
if and only if either m>0 or m=0 and s<=1/2 (but s=0 if only
the connected poincare group is considered).
This theorem was announced without giving details in
T.D. Newton and E.P. Wigner,
Localized states for elementary systems,
Rev. Mod. Phys. 21 (1949), 400-406.
A mathematically rigorous proof was given in
A. S. Wightman,
On the Localizability of Quantum Mechanical Systems,
Rev. Mod. Phys. 34 (1962), 845-872.
See also
T.F. Jordan
Simple proof of no position operator for quanta with zero mass
and nonzero helicity
J. Math. Phys. 19 (1980), 1382-1385.
who also considers the massless representations of continuous spin,
and
D Rosewarne and S Sarkar,
Rigorous theory of photon localizability,
Quantum Opt. 4 (1992), 405-413.
For spin 1, the case relevant for photons, we have d=3, and the
subspace of interest is the space H obtained by completion of the
space of all vector-valued C^infty functions A(p) of a nonzero
3-momentum p with compact support satisfying the transversality
condition p dot A(p)=0,
with inner product defined by
<A|A> := integral dp/|p| A(p)^* A(p).
It is not difficult to see that one can identify the wave functions
A(p) with the Fourier transform of the vector potential in the
radiation gauge where its 0-component vanishes. This relates the
present discussion to that given in the FAQ entry What is a photon?.
109
110
Related papers:
M.H.L. Pryce,
Commuting Co-ordinates in the new field theory,
Proc. Roy. Soc. London Ser. A 150 (1935), 166-172.
(first construction of position operators in the massive case)
B. Bakamjian and L.H. Thomas,
Relativistic Particle Dynamics. II,
Phys. Rev. 92 (1953), 1300-1310.
(first construction of massive representations along the above
lines)
L.L. Foldy,
Synthesis of Covariant Particle Equations,
Physical Review 102 (1956), 568-581.
(nice and readable version of the Bakamjian-Thomas construction
for massive representations of the Poincare group)
R. Acharya and E. C. G. Sudarshan,
Front Description in Relativistic Quantum Mechanics,
111
4.10
112
113
114
4.11
SO(3) = SU (2)/Z2
In this appendix we wish to show that SO(3)SU(2)/Z2. First we collect some basics on
SO(3) and SU(2).
A real 3 3 matrix R is called special orthogonal if
RT R = 1 ,
det R = 1 .
Note that 1 here denotes the 3 3 identity matrix in the first equation. It is easy to check
that the special orthogonal matrices form a group; we denote this group by SO(3) and call
it the special orthogonal group, or the rotation group. An element of SO(3) is also called
a rotation.
If is an eigenvalue of R SO(3) we see that = 1. We want to show that there is
always an eigenvector with eigenvalue 1. The characteristic polynomial of R has three roots
1 , 2 and 3 . The modulus of the roots has to be 1 and if there is a imaginary eigenvalue
, then so is its conjugate
an eigenvalue. If all the three eigenvalues are real, then the
only possibilities are that all three are 1 or that two are 1 and the third is 1. Let now 1
1 . Then 3 = 1 is real and positive and since it has to be
be imaginary and take 2 =
1 1
of unit modulus 3 = 1. We see that there is always an eigenvalue 1. If R is a rotation and
not the identity there is just one eigenvector with eigenvalue one; we denote this eigenvector
by eR . We thus have ReR = eR and if R 6= 1 then eR is unique. The vector eR determines
a one-dimensional subspace of R that is left invariant under the action of R. We call this
one-dimensional invariant subspace the axis of rotation.
Consider an arbitrary SO(3) element R with axis of rotation determined by eR over an
angle and denote the rotation by R(eR , ). Call the angle between the plane in which eR
and the z-axis lie and the plane in xz-plane . Call the angle between eR and the z-axis
. See figure 4.1. The rotation can now be broken down into three rotations. First we use
two rotations two go to a coordinate system with coordinates x , y and z in which the eR
points in the z -direction, and then we rotate around the z -axis over an angle . The two
rotation to go to the new coordinate system are: (a) a rotation around the z-axis around
an angle to align the x-axis with the projection of eR onto the xy-plane, (b) a rotation
115
Figure 4.1: Rotation around eR . Any axis is characterized by two angles and .
z e
R
y
x
over an angle around the image of the y-axis under the first rotation. Hence we can write
R(eR , ) as a product
cos sin 0
cos 0 sin
cos sin 0
R(eR , ) = sin cos 0 0
1
0 sin cos 0 .
0
0
1
sin 0 cos
0
0
1
In this way we obtain a system of coordinates on the manifold SO(3). The three angles are
then called the Euler angles.
We note in particular the following: The group SO(3) is generated
tions Rx (), Ry () and Rz () given by
1
0
0
cos
Rx () = 0 cos sin , Ry () = 0
0 sin cos
sin
cos sin 0
cos 0 .
Rz () = sin
0
0
1
det U = 1 .
It is easily checked that the special unitary matrices form a group, which is called the
special unitary group and is denoted SU(2).
We now wish to show that SU(2) is a real manifold that is isomorphic to the three sphere
S 3 . We do this by finding an explicit parametrization of SU(3) in terms of two complex
numbers x and y satisfying |x|2 + |y|2 = 1. If one splits up x and y in a real and imaginary
parts, one sees that x and y define a point on S 3 .
We write an element U SU(2) as
U=
a b
c d
Writing out the equation U U = 1 and det U = 1 one finds the following equations:
|a|2 + |c|2 = 1 ,
ab + cd = 0 ,
|b|2 + |d|2 = 1 ,
ad bc = 1 .
116
We first assume b = 0 and find then that ad = 1 and cd = 0, implying that c = 0 and U is
Next we suppose b 6= 0 and use a = cd to deduce that |b| = |c| and
diagonal with a = d.
b
|a| = |d|; we thus have b 6= 0 c 6= 0. We also see that we can use the ansatz
a = ei cos ,
c = ei sin ,
b = ei sin ,
d = ei cos .
y x
The map S 3 SU(2) given by (x, y) 7 U(x, y) is clearly injective, and from the above
analysis bijective. Furthermore the map is smooth. Hence we conclude that SU(2)
= S3
as a real manifold.
We introduce the Pauli matrices1
0 1
1
=
,
1 0
0 i
i 0
1 0
0 1
Note that the Pauli-matrices are precisely all the traceless Hermitian 22 complex matrices
and make up a three-dimensional vector space. Therefore they provide a realization of the
Lie algebra su(2). It is easy to check that the Pauli-matrices satisfy the relations
i j = ij + iijk k ,
where we also used the LeviCivita symbol ijk ; if (ijk) is not a permutation of (123) ijk
is zero and if (ijk) is a permutation of (123) then ijk is the sign of the permutation. In
particular we note the commutator relations
[ i , j ] = i j j i = 2iijk k ,
which resembles the vector product in R3 . We also note the identities
tr( i ) = 0 , tr( i j ) = 2 ij , tr([ i , j ] k ) = 4iijk .
For every vector ~x R we identify an element x of su(2) as follows
3
x1
X
~x = x2 x =
xi i .
i=1
x3
From now on we simply identify the elements with each other and thus write equality signs
instead of arrows. We see that ~x ~y corresponds to the element 2i1 [x, y];
~x ~y =
1
1
[x, y] .
2i
117
1
tr(xy) ,
2
~x ~y ~z =
1
tr([x, y]z) .
4i
(UxU 1 ) = (U 1 ) xU = UxU 1 ,
Therefore
We find
1
0
0
R(U(cos /2, i sin /2)) = 0 cos sin = Rx () ,
0 sin cos
cos 0 sin
R(U(cos /2, sin /2)) = 0
1
0 = Ry () ,
sin 0 cos
cos sin 0
R(U(ei/2 , 0)) = sin
cos 0 = Rz () ,
0
0
1
and hence the map R : SU(2) SO(3) is surjective. Suppose now that U(x, y) is mapped
to the identity element in SO(3). We see then that x
y = 0, so that either x = 0 or y = 0.
Since also |x|2 |y|2 = 1, we cannot have x = 0 and hence y = 0. Furthermore from
Rex2 = |x|2 = 1 we see x = 1: indeed we see that R(U(x, y)) = R(U(x, y)). The
kernel of R : SU(2) SO(3) is thus given by 1 times the identity matrix, which is the
Z2 -subgroup of SU(2)2 . As any kernel of group homomorphisms, the kernel is a normal
subgroup. All in all we have shown
SU(2)
= SO(3)/Z2 .
2
118
Chapter 5
Classical oscillating systems
In this chapter, we discuss in detail an important family of classical physical systems:
harmonic or anharmonic oscillators, and their multivariate generalization, which describe
systems of coupled oscillators such as macromolecules or planetary systems.
Understanding classical oscillators is of great importance in understanding many other
physical systems. The reason is that an arbitrary classical system behaves close to equilibrium like a system of coupled linear oscillators. The equations we deduce are therefore
approximately valid in many other systems. For example, a nearly rigid mechanical structure such as a high-rise building always remains close enough to equilibrium so that it can
be approximately treated as a linear oscillating system for the elements into which it is
decomposed for computational purposes via the finite element method.
We shall see that the equations of motion of coupled oscillators can be cast in a form that
suggest a Lie algebra structure behind the formalism. This will provide the connection to
Part III of the book, where Lie algebras are in the center of our attention.
Besides the (an-)harmonic oscillators we discuss some basic linear partial differential equations of physics: the Maxwell equations describing (among others) light and gamma rays,
the Schrodinger equation and the KleinGordon equation (describing alpha rays), and the
Dirac equation (describing beta rays). The solutions of these equations can be represented
in terms of infinitely many harmonic oscillators, whose quantization (not treated in this
book) leads to quantum field theory.
5.1
For any quantity x depending on time t, differentiation with respect to time is denoted by
x:
d
x = x .
dt
119
120
Analogously, n dots over a quantity represents differentiating this quantity n times with
respect to time.
The configuration space is the space of possible positions that a physical system may
attain, including external constraints. For the moment, we think of it as a subset in Rn . A
point in configuration space is generally denoted q. For example, for a system of N point
masses, q is an N-tuple of vectors qk R3 arranged below each other; each qk denotes the
spatial coordinates of the kth moving point (planet, atom, node in a triangulation of the
body of a car or building, etc.), so that n = 3N.
A system of damped oscillators is defined by the differential equation
M q + C q + V (q) = 0 .
(5.1)
The reader wishing to see simple examples should turn to Section 5.2; here we explain the
contents of equation (5.1) in general terms. As before, q is the configuration space point
q Rn . The M and C are real n n-matrices, called the mass matrix and the friction
matrix, respectively. The mass matrix is always symmetric and positive definite (and often
diagonal, the diagonal entries being the masses of the components). The friction matrix
need not be symmetric but is always positive semidefinite. The potential V is a smooth
function from Rn to R, i.e., V C (Rn , R), and V is the gradient of V ,
V =
V
V
,...,
q1
qn
T
Here the gradient operator is considered as a vector whose components are the differential operators /qk . In finite-element applications in structural mechanics, the mass
matrix is created by the discretization procedure for a corresponding partial differential
equation. In general, the mass may here be distributed by the discretization over all adjacent degrees of freedom. However, in many applications the mass matrix is diagonal;
Mij = mi ij ,
where mi is the mass corresponding to the coordinate qi , and ij is the Kronecker symbol
(or Kronecker delta), which is 1 if i = j and zero otherwise. In the example where qk is
a three-vector denoting the position of an object, then i is a multi-index i = (k, j) where
k denotes an object index and j = 1, 2, 3 is the index of the coordinate of the kth object
which sits in position i of the vector q. Then mi is the mass of the kth object.
The quantity F defined by
F (q) := V (q) ,
is the force on the system at the point q due to the potential V (q). We define the velocity
v of the oscillating system by
v := q .
The Hamiltonian energy H is then defined by
1
H := v T Mv + V (q) .
2
(5.2)
121
The first term on the right-hand side is called the kinetic energy since it depends solely on
the velocity of the system. The second term on the right-hand side is called the potential
energy and it depends on the position of the system. For more complex systems the
potential energy can also depend on the velocities. Calculating the time-derivative of the
Hamiltonian energy H we get
H = v T M v + V (q) q = qT (M q + V (q)) = qT C q 0 ,
(5.3)
where the last equality follows from the differential equation (5.1) and the final inequality
follows since C is assumed to be positive semidefinite. If C = 0 (the idealized case of no
friction) then the Hamiltonian energy is constant, H = 0, and in this case we speak of
conservative dynamics (the Hamiltonian H is conserved). If C is positive definite we
have H < 0 unless q = 0 and there is energy loss. This is called dissipative dynamics.
In the dissipative case, the sum of the kinetic and potential energy has to decrease.
If the potential V is unbounded from below, it might happen that the system starts falling
in a direction in which the potential is unbounded from below and the system becomes
unphysical; the velocity could increase without limits. Thus, in a realistic and manageable physical system, the potential is always bounded from below, and we shall make this
assumption throughout. It follows that the Hamiltonian is bounded from below.
Since in the dissipative case the Hamiltonian energy is decreasing and is bounded below,
it will approach a limit as t . Therefore, H 0, and by (5.3), qT C q 0. Since
C is positive definite for a dissipative system, this forces q 0. Thus, the velocities will
get smaller and smaller, and asymptotically the system will approach the configuration of
being in a state with q = 0, at the level of the accuracy of the model. Typically this implies
that q tends to some constant value q0 . Note that it does not follow rigorously that q tends
to a constant value; it is possible that q . Nevertheless we assume that q does not
walk away to infinity and then it follows from q = 0 that q = 0, so that (5.1) implies
V (q0 ) = 0, and we conclude that q tends to a stationary point q0 of the potential. If
this is a saddle point, small perturbations can (and will) cause the system to move towards
another stationary point. Because of such stability reasons, the system ultimately moves
towards a local minimum.
In practice, the perturbations come from imperfections in the model. Remember that the
deterministic equation (5.1) is a mathematical idealization of the real world situation. A
more appropriate model (but still an approximation) is the equation
M q + C q + V (q) = ,
where is a stochastic force, describing the imperfections of the model. Typically, these
are already sufficient to guarantee with probability 1 that the system will not end up
in a saddle point. Usually, imperfections are small, irregular jumps due to friction, see,
e.g., Bowden & Leben [49], or Brownian motion due to kicks by molecules of a solvent.
See, e.g., Brown [52], Einstein & Brown [79], Garcia & Palacios [98], Hanggi &
Marchesoni [121], for an overview on Brownian motion with lots of historical references
and citations Duplantier [75], and for a discussion in the context of protein folding
Neumaier [202, Section 4].
122
In many cases, the potential V (q) has several local minima. Our argument so far says
that the state of the system will usually move towards one of these local minima. Around
the local minimum it can oscillate for a while, and in the absence of stochastic forces it
will ultimately get into one of the local minima. If we assume that there are stochastic
imperfections, we can say even more!
Suppose that the local minimum towards which the system tends is not a global minimum.
Then occasional stochastic perturbations may suffice to push (or kick) the system over a
barrier separating the local minimum from a valley leading to a different minimum. Such
a barrier is characterized by a saddle point, a stationary point where the Hessian of
the potential has exactly one negative eigenvalue. The energy needed to pass the barrier,
called the activation energy, is simply the difference between the potential energy of the
separating saddle point and the potential energy of the minimum. In a simple, frequently
used approximation, the negative logarithm of the probability of exceeding the activation
energy in a given time span is proportional to the activation energy. This implies that
small barriers are easy to cross, while high barriers are difficult to cross. In particular, if a
system can cross a barrier between a high-lying minimum to a much lower lying minimum,
it is much more likely to cross it in the direction of the lower minimum than in the other
direction. This means that (averaged over a population of many similar systems) most
systems will spend most of their time near low minima, and if the energy barriers between
the different minima are not too high, most systems will be most of the time close to the
global minimum. Thus a global minimum characterizes an absolutely stable equilibrium,
while other local minima are only metastable equilibrium positions, which can become
unstable under sufficiently large stochastic perturbations.
There are famous relations called fluctuation-dissipation theorems that assert (in a
quantitative way) that friction is related to stochastic (i.e., not directly modeled high frequency) interactions with the environment. In particular, if a system is sufficiently well
isolated, both friction and stochastic forces become negligible, and the system can be described as a conservative system. Of course, from a fundamental point of view, the only
truly isolated system is the universe as a whole, since at least electromagnetic radiation
escapes from all systems not enclosed in an opaque container, and systems confined to a
container interact quite strongly with the walls of the container (or else the wall would not
be able to confine the system).
Thus on a fundamental level, a conservative system describes the whole universe from the
tiniest microscopic details to the largest cosmological facts. Such a system would have to
be described by a quantum field theory that combines the successful standard model of
particle physics with general relativity. At present, no such theory is available.
On the other hand, conservative systems form a good first approximation to many small and
practically relevant systems, which justifies that most of the book looks at the conservative
case only. However, in Part IV, the dissipative case is in the center of the discussion.
The phase space formulation. So far, our discussion was framed in terms of position
and velocity. As we shall see, the Hamiltonian description is most powerful in phase space
coordinates. Here everything is expressed in terms of the phase space observables q and p,
123
where
p := Mv ,
is called the momentum p of the oscillating system. The phase space for a system of
oscillators is the space of points (q, p) Rn Rn . A state (in the classical sense) is a point
(p, q) in phase space. The Hamiltonian function (or simply the Hamiltonian) is the
function defining the Hamiltonian energy in terms of the phase space observables p and q.
In our case, since a positive definite matrix is always invertible, we can express v in terms
of p as v = M 1 p, and find that
1
H(p, q) = pT M 1 p + V (q) .
2
(5.4)
Note that H does not depend explictly on time. (In this book, we only treat such cases;
but in problems with time-dependent external fields, an explicit time dependence would be
unavoidable.)
5.2
To keep things simple, we concentrate on the case of a single degree of freedom. Everything
said has a corresponding generalization to systems of coupled oscillators, but the essentials
are easier to see in the simplest case.
The simple anharmonic oscillator is obtained by taking n = 1. The differential equation
(5.1) reduces to a scalar equation
m
q + cq + V (q) = 0 ,
(5.5)
where the prime denotes differentiation with respect to q. This describes for example the
behavior of an object attached to a spring; then q is the length of the spring, m = M is
the mass of the object, c = C is the friction constant (collective of the air, some friction
in the spring itself, or of a surface if the object is lying on a surface) and V (q) describes
the potential energy (see below) the spring has when extended or contracted to length q.
Note that a constant shift in the potential does not alter the equations of motion of an
anharmonic oscillator; hence the potential is determined only up to a constant shift.
The harmonic oscillator is the special case of the anharmonic oscillator defined by a
potential of the form
k
V (q) = (q q0 )2 , k > 0 ,
2
where q0 is the equilibrium position of the spring. (Strictly speaking, only oscillators that
are not harmonic should be called anharmonic, but we follow the mathematical practice
where limiting cases are taken to be special cases of the generic concept: A linear function
is also nonlinear, and a real number is also complex.) In this case, the force becomes
F (q) = V (q) = k(q q0 )
(5.6)
124
The equation (5.6) is sometimes called Hookes law, which asserts that the force needed
to pull a spring from equilibrium is linear in the deviation q q0 from equilibrium, a valid
approximation when q q0 is small. Since the force is minus the gradient of the potential,
the potential has to be quadratic to reproduce Hookes law. It is customary to shift the
potential such that it vanishes in global equilibrium; then one gets the above form, and
stability of the equilibrium position dictates the sign of k. Note that the shift does not
change the force, hence has no physical effect.
The mathematical pendulum is described by the equation
V (q) = k(1 cos q) ,
k > 0,
(5.7)
where q is now the angle of deviation from the equilibrium, measured in radians. Looking
at small q we can approximate as follows:
V (q) =
k 2
q + O(q 4) ,
2
and after dropping the error term, we end up with a harmonic oscillator. The same argument
allows one to approximate an arbitrary anharmonic oscillator by a harmonic oscillator as
long as the oscillations around a stable equilibrium position are small enough.
For q not small the mathematical pendulum is far from being harmonic. Physically this
is clear; stretching a (good) spring further and further is harder and harder, but pushing
the one-dimensional pendulum far from its equilibrium position is really different. After
rotating it over radians the pendulum is upside down and pushing it further no longer
costs energy.
Dynamics in phase space. We now restrict to conservative systems and analyze the
conservative anharmonic oscillator (H = 0) a bit more. Since c = 0, the differential
equation (5.5) simplifies to
m
q + V (q) = 0 .
(5.8)
p2
+ V (q),
2m
p
, p = mv = m
q = V (q) .
(5.9)
m
An observable is something you can calculate from the state; simple examples are the
velocity and the kinetic, potential, or Hamiltonian energy. Thus arbitrary observables
q = v =
125
can be written as smooth functions f (p, q) of the phase space observables. In precise
terms, an observable is (for an anharmonic oscillator) a function f C (R R). The
required amount of smoothness can be reduced in practical applications; on the fundamental
theoretical level, it pays to require infinite differentiability to get rid of troubling exceptions.
Introducing the shorthand notation
fp :=
f
,
p
fq :=
f
,
q
for partial derivatives, we can write the equations (5.9) in the form
q = Hp ,
p = Hq .
(5.10)
The equations (5.10) are called the Hamilton equations in state form. Although derived
here only for the anharmonic oscillator, the Hamilton equations are of great generality;
the equations of motions of many (unconstrained) conservative physical systems can be
cast in this form, with more complicated objects in place of p and q, and more complex
Hamiltonians H(p, q). A dynamical system governed by the Hamilton equations is called
an isolated Hamiltonian system. If there are external forces, the system is not truly
isolated, but the Hamilton equations are still valid in many cases, provided one allows the
Hamiltonian to depend explicitly on time, H = H(p, q, t); in this case, there would appear
additional partial derivatives with respect to time in various of our formulas.
Calculating the time-dependence of an arbitrary observable f we get
f
f
p +
q = fp p + fq q ,
f =
p
q
hence
f = Hp fq Hq fp .
(5.11)
(5.12)
126
The Hamiltonian equations can be cast in a form that turns out to be even more general
and very useful. It brings us directly to the heart of the subject of the present book. We
define a binary operation on C (R R) as follows:
f g := fp gq gp fq .
Physicists write {g, f } for f g and call it the Poisson bracket. Our alternative notation
will turn out to be very useful, and generalizes in many unexpected ways. The equation
(5.11) can then be written in form of a classical Heisenberg equation
f = Hf.
(5.13)
It turns out that this equation, appropriately interpreted, is extremely general. It covers
virtually all conservative systems of classical and quantum mechanics.
A basic and most remarkable fact, which we shall make precise in the following chapter, is
that the vector space C (R R) equipped with the binary operation is a Lie algebra.
We shall take this up systematically in Section 12.1.
5.3
Historically, radiating substances which produce rays of -, - or -particles were fundamental for gaining an understanding of the structure of matter. Even today, many experiments
in physics are performed by rays (or beams, which is essentially the same) generated by
some source and then manipulated in the experiments.
The oldest, most familiar rays are light rays, -rays, -rays, and -rays. (Nowadays, we
also have neutron rays, etc., and cosmic rays contain all sorts of particles.)
All kinds of rays are described by certain quantum fields, obtained by quantizing corresponding classical field equations, linear partial differential equations whose time-periodic
solutions provide the possible single-particle modes of the quantum fields. In the following sections we look at these field equations in some detail; here we just make some
introductory comments.
-rays are modes (realizations) of the field of doubly ionized helium, He++ , which is modeled on the classical level by a Schrodinger wave equation or a KleinGordon wave equation.
-rays are modes of a charged field of electrons or positrons, modeled on the classical level
by a Dirac wave equation. For radiation of only positrons one uses the notation + , and
for rays with only electrons one uses . Both light rays and -rays are modes of the
electromagnetic field which are modeled on the classical level by the Maxwell wave equations. Their quantization (which we do not treat in this book) produces the corresponding
quantum fields.
In the present context, the Schrodinger, KleinGordon, Dirac, and Maxwell equations are
all regarded as classical field equations for waves in 3 + 1 dimensions, though they can also
be regarded as the equations for a single quantum particle (a nonrelativistic or relativistic
127
scalar particle, an electron, or a photon, respectively). This dual use is responsible for calling
second quantization the quantum field theory, the quantum version of the classical theory
of these equations. It also accounts for the particle-wave duality, the puzzling property
that rays sometimes (e.g., in photodetection or a Geiger counter) behave like a beam of
particles, sometimes (in diffraction experiments, of which the double slit experiments are
the most famous ones) like a wave in the case of light a century-old conflict dating back
to the times of Newton and Huygens.
In the quantum field setting, quantum particles arise as eigenstates of an operator N called
the number operator. This operator has a discrete spectrum with nonnegative integer
eigenvalues, counting the number of particles in an eigenstate. The ground state, with
zero eigenvalue, is essentially unique, and defines the vacuum; a quantum particle has
an eigenstate corresponding to the eigenvalue 1 of N, and eigenstates with eigenvalue n
correspond to systems of n particles. If a quantum system contains particles of different
types, each particle type has its own number operator.
The states that are easy to prepare correspond to beams. The fact that beams have a
fairly well-defined direction translates into the formal fact that beams are approximate
eigenstates of the momentum operator. Indeed, often beams are well approximated by
exact eigenstates of the momentum operator, which describe so-called monochromatic
beams. (Real beams are at best quasi-monochromatic, a term we shall not explain.) Since
the states of beams are not eigenstates of the number operator N, they contain an indefinite
number of particles.
All equations mentioned are linear partial differential equations, and behave just like a set
of infinitely many coupled harmonic oscillators, one at each space position. They describe
non-interacting fields in a homogeneous medium. The definition of interacting fields leaves
the linear regime and leads into the heart of nonlinear field theory, both in a classical and
a quantum version. This is outside the scope of the present book. However, when position space (or momentum space) is discretized so that only a finite number of degrees of freedom remain to describe a field, one is back at nonlinear oscillators, which can be understood completely on the basis of the treatment given here; indeed, the number operator will play a prominent role in Part V of this book.
Fortunately, for understanding beam experiments, it usually suffices to quantize a few modes
of the classical field, and these are harmonic oscillators. Indeed, by separation of variables,
the linear field equations can be decoupled in time, leading to a system of uncoupled
harmonic oscillators forming the Fourier modes. Beams correspond to solutions which have
a significant intensity only in a small neighborhood of a line in 3-space. Frequently, beams
correspond to solutions that have an (almost) constant frequency. Interactions with such
(quasi-)monochromatic beams can be modelled in many situations simply as interactions
with a harmonic oscillator.
On the other hand, when a beam containing all frequencies interacts with a system which
oscillates only with certain frequencies, the beam will resonate with these frequencies. This
allows the detection of a system's eigenfrequencies by observing its interaction with light or other radiation. This is the basis of spectroscopy, and will be discussed in more detail in Chapter 6.
5.4 Alpha rays
We first consider rays consisting of α-particles, helium atoms stripped of their two electrons, which thus consist of two protons and two neutrons. α-particles are released by heavier nuclei during certain processes in the nucleus. For example, some elements are α-radioactive, which means that a nucleus of type A can pass to a lower energy level by emitting two of its protons and two of its neutrons. The result is thus two nuclei, a helium nucleus and a nucleus of type A′ ≠ A; schematically A → A′ + α. α-particles are also released during nuclear fission. Moreover, the sun emits α-particles all the time; the sun produces heat by means of a chain of nuclear fusion reactions, during which some α-particles are produced. If the atmosphere were not there, life on earth would be impossible due to the bombardment by α-particles. That α-particles are not healthy was in the news in 2007, since the former Russian spy Litvinenko is said to have been killed by a small amount of polonium, which is an α-emitter.
An α-particle emitted from a radioactive nucleus typically has a speed of 15,000 kilometers per second. Although this might look very fast, it is only 5% of the speed of light, which means that for many calculations α-particles can be treated nonrelativistically, that is, without using special relativity. For some more accurate calculations, though, special relativity is required.
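A quick numerical check of this claim (a small Python sketch, not from the original text; the speed is the figure quoted above, the rest are standard constants):

```python
import math

c = 2.998e8   # speed of light in m/s
v = 1.5e7     # typical alpha-particle speed from the text: 15,000 km/s

beta = v / c
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor

print(f"v/c = {beta:.3f}")               # ~0.050, i.e. 5% of the speed of light
print(f"gamma - 1 = {gamma - 1:.2e}")    # ~1.3e-03: relativistic corrections ~0.1%
```

The Lorentz factor deviates from 1 by roughly a tenth of a percent, which quantifies when the nonrelativistic treatment below is adequate.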
For the nonrelativistic α-particle we have to use the Schrödinger equation. For a particle of mass m moving in a potential V(x) the Schrödinger equation is given by

    i\hbar \frac{\partial}{\partial t}\psi(x,t) = -\frac{\hbar^2}{2m}\nabla^2\psi(x,t) + V(x)\psi(x,t),

where ψ is the wave function of the particle, and ∇² = ∇·∇ is the Laplace operator.
The wave function ψ contains the information about the particle. The quantity |ψ(x,t)|² is the probability density for finding the particle at time t at position x. For beam considerations, we take V(x) = 0. Since we shoot the α-particles in just one direction, we assume ψ(x,t) = χ(t)φ(x), with x the coordinate along the beam. We obtain

    i\frac{\dot\chi(t)}{\chi(t)} = -\frac{\hbar}{2m}\frac{\varphi''(x)}{\varphi(x)},    (5.14)
where the dot denotes the derivative with respect to time t and the prime the derivative with respect to the coordinate x. The left-hand side of (5.14) only depends on time t and the right-hand side only on x, which implies that both sides equal a constant (with the dimension of time⁻¹) independent of t and x. We denote this constant by ω and obtain two linear ordinary differential equations for χ and φ, with the solutions

    \chi(t) = e^{-i\omega t}, \qquad \varphi(x) = a e^{ikx} + b e^{-ikx}, \qquad k = \sqrt{\frac{2m\omega}{\hbar}},

where a and b are some constants; we have normalized the constant in front of χ to 1 since we are only interested in the product of χ and φ. Note that we wrote the solution suggestively as if ω ≥ 0, and in fact, on physical grounds it is; the solutions with ω < 0 are not integrable and hence cannot determine a probability distribution.
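That this separated ansatz indeed solves the free Schrödinger equation can be verified symbolically; a short sketch (Python with sympy, not part of the original text):

```python
import sympy as sp

x, t, m, hbar, omega = sp.symbols('x t m hbar omega', positive=True)
a, b = sp.symbols('a b')
k = sp.sqrt(2 * m * omega / hbar)                  # k = sqrt(2 m omega / hbar)

psi = sp.exp(-sp.I * omega * t) * (a * sp.exp(sp.I * k * x) + b * sp.exp(-sp.I * k * x))

lhs = sp.I * hbar * sp.diff(psi, t)                # i hbar d/dt psi
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)      # -(hbar^2/2m) d^2/dx^2 psi, with V = 0
print(sp.simplify(lhs - rhs))                      # 0: the ansatz solves the equation
```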
We can express ω in terms of k, which plays the role of an inverse wavelength, getting

    \omega(k) = \frac{\hbar k^2}{2m}.

Reintroducing an arbitrary direction unit vector n and the wave vector k = kn, we obtain the dispersion relation of the Schrödinger equation,

    \omega_{\mathbf{k}} = \frac{\hbar \mathbf{k}^2}{2m}.
If we have an experiment with a great number of non-interacting particles, all of which have the same wave function ψ, the quantity |ψ(x,t)|² is proportional to the particle density. However, α-particles interact, and thus the Hamiltonian is different. If we assume that the particle density is not too high, we can still assume that the α-particles move as if there were no other α-particles. Under this assumption we may again take |ψ(x,t)|² as the particle density. The energy density is then proportional to |ψ(x,t)|². Putting as before the whole experiment in a box of finite volume V, one can again arrive at a Hamiltonian corresponding to a collection of independent harmonic oscillators.
Now we look at relativistic α-particles, and remind the reader of the notation introduced in Section 3.13. The dynamics of relativistic α-particles of mass m is described by a real-valued function φ(x^μ) whose evolution is governed by the Klein–Gordon equation, which is given by

    \Big(\Box - \frac{m^2c^2}{\hbar^2}\Big)\varphi = 0,    (5.15)

where

    \Box := -\frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \frac{\partial^2}{\partial x_3^2}

is called the d'Alembertian. Here c is the speed of light. We look for wave-like solutions φ ∝ e^{ik·x} for some vector k^μ. Note that ∂_μ e^{ik·x} = i k_μ e^{ik·x} and hence

    \Box\, e^{ik\cdot x} = -k^2 e^{ik\cdot x},

where k² = k·k. Hence we obtain the condition on k

    k^2 + \frac{m^2c^2}{\hbar^2} = 0.    (5.16)
Writing k⁰ = ω/c and denoting the spatial part of k by the bold letter k, we thus get the dispersion relation

    \omega = \pm\sqrt{c^2\mathbf{k}^2 + \frac{m^2c^4}{\hbar^2}}.    (5.17)
We see that ħ|ω| = E ≥ mc², combining Einstein's famous formula E = mc² and Planck's law E = ħω. The solution for a given choice of the sign of ω is expanded in Fourier terms and most often written as

    \varphi(x,t) = \int \frac{d^3k}{(2\pi)^3\, 2\omega(\mathbf{k})} \Big( a(\mathbf{k})\, e^{i\omega(\mathbf{k})t - i\mathbf{k}\cdot\mathbf{x}} + a(\mathbf{k})^*\, e^{-i\omega(\mathbf{k})t + i\mathbf{k}\cdot\mathbf{x}} \Big),

where we used the Lorentz-invariant measure (3.62) involving ω(k) = \sqrt{c^2\mathbf{k}^2 + m^2c^4/\hbar^2}.
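The Schrödinger dispersion relation found before is the small-k limit of (5.17), up to the constant rest-energy contribution mc²/ħ. A short numerical sketch (Python, not from the original text; standard SI constants, illustrative wave numbers):

```python
import numpy as np

hbar = 1.054571817e-34   # J s
c = 2.998e8              # m/s
m = 6.644e-27            # alpha-particle mass in kg

k = np.linspace(1e8, 1e10, 5)   # illustrative wave numbers in 1/m

omega_kg = np.sqrt(c**2 * k**2 + (m * c**2 / hbar)**2)   # Klein-Gordon, eq. (5.17)
omega_nr = m * c**2 / hbar + hbar * k**2 / (2 * m)       # rest term + Schroedinger branch

# for hbar*k << mc the two branches agree (here to machine precision):
print(np.max(np.abs(omega_kg - omega_nr) / omega_kg))
```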
5.5 Beta rays
We now discuss beams composed of spin-½ particles, the β-rays. β-radiation is emitted by radioactive material. Unstable nuclei can lose some of their energy and pass to a more stable nucleus under the emission of β-rays. There are two kinds of β-rays: those with positive charge and those with negative charge. The negatively charged version consists of nothing more than electrons. The positively charged counterpart consists of the antiparticles of the electrons, the so-called positrons.
Other examples of fermions are neutrinos. The sun emits a stream of neutrinos; in each second, approximately 10¹³ neutrinos fly through your body (it depends on the latitude at which you are, how big you are, and whether you are standing or lying down, which makes a difference of a factor of 100 perhaps). Neutrinos fly very fast; the solar neutrinos travel at (or very close to) the speed of light. The reason they travel that fast is that neutrinos have a zero or very tiny mass, and massless particles (such as photons) always travel at the speed of light. For a long time, neutrinos were believed to be massless; only recently did it become an established fact that at least one of the three generations of neutrinos must have a tiny positive mass. We do not feel anything of the many neutrinos coming from the sun and steadily passing through our body because, unlike protons and electrons, they hardly interact with matter; for example, to absorb half of the solar neutrinos, one would need a solid lead wall around 10¹⁶ meters thick! The reason is that they do not have charge: they are electrically neutral.
To discuss the case where the particles in the beam are fermions, we have to use the Dirac equation. It is convenient to use the same conventions for dealing with relativistic particles as before. In addition to the previously introduced symbols, we now introduce the so-called γ-matrices. In four dimensions there are four of them, called γ⁰, ..., γ³, and they satisfy

    \gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu = 2\eta^{\mu\nu}.    (5.18)

The associative algebra generated by the γ-matrices subject to the above relation is a Clifford algebra. There are several possibilities to find a representation of the γ-matrices in terms of 4×4-matrices; a frequently made choice is

    \gamma^0 = i\sigma_1\otimes 1, \quad \gamma^1 = \sigma_2\otimes 1, \quad \gamma^2 = \sigma_3\otimes\sigma_1, \quad \gamma^3 = \sigma_3\otimes\sigma_2,

where the σ_k are the Pauli matrices (2.7). However, we only need the defining relation (5.18). We assemble the γ-matrices in a vector γ = (γ⁰, γ¹, γ², γ³); inner products with vectors p are given by p·γ = γ·p = p_μγ^μ.
A fermion is described by a vector-like object ψ, which takes values in the spinor representation of the Lie algebra so(3,1). Hence we can think of ψ as a vector with four components. In this case, the γ-matrices are 4×4-matrices; that such a representation exists is shown by the explicit construction above. We need a property of the γ-matrices, namely that they are traceless (in any representation). To prove this, take any γ^μ and choose another γ-matrix γ^ν with ν ≠ μ, so that γ^ν is invertible and anticommutes with γ^μ. Then we have

    \mathrm{tr}\,\gamma^\mu = \mathrm{tr}\,\gamma^\mu\gamma^\nu(\gamma^\nu)^{-1} = -\mathrm{tr}\,\gamma^\nu\gamma^\mu(\gamma^\nu)^{-1} = -\mathrm{tr}\,\gamma^\mu,

using the cyclic invariance of the trace in the last step. Hence tr γ^μ = 0.
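Both the Clifford relation and the tracelessness are easy to verify numerically for the explicit representation above; a small numpy sketch (assuming the signature η = diag(−1, 1, 1, 1) used here):

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# gamma^0 = i s1 x 1, gamma^1 = s2 x 1, gamma^2 = s3 x s1, gamma^3 = s3 x s2
gamma = [1j * np.kron(s1, I2), np.kron(s2, I2),
         np.kron(s3, s1), np.kron(s3, s2)]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))   # relation (5.18)
    assert abs(np.trace(gamma[mu])) < 1e-12                     # tracelessness

print("Clifford relation (5.18) and tracelessness verified")
```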
The Dirac equation for the fermion field ψ is given by

    \Big(\gamma^\mu\partial_\mu - \frac{mc}{\hbar}\Big)\psi = 0.

Since (γ·p)² = p² for any four-vector p, we see that each component of the spinor ψ obeys the Klein–Gordon equation (5.15).
We look for solutions of the form ψ = u e^{ik·x}. Putting this ansatz in the Dirac equation we obtain

    \Big(i\,\gamma\cdot k - \frac{mc}{\hbar}\Big) u = 0,    (5.19)

and the additional constraint k² + m²c²/ħ² = 0 follows from the Klein–Gordon equation. Equation (5.19) can be written as

    \Big(1 - \frac{i\hbar}{mc}\,\gamma\cdot k\Big) u = 0,

and it is easy to see that

    P = \frac12\Big(1 - \frac{i\hbar}{mc}\,\gamma\cdot k\Big)

is a projection operator, P² = P, since (γ·k)² = k² = −m²c²/ħ². To determine its rank we may pass to the rest frame of the particle; in that frame the particle is not moving, hence we may choose k = (mc/ħ)(1, 0, 0, 0) in the chosen frame. It follows that 2P = 1 + iγ⁰. The eigenvalues of iγ⁰ are ±1 since (iγ⁰)² = 1. But the γ-matrices are traceless, and hence the eigenvalues add up to 0. Therefore the eigenvalues of iγ⁰ are −1, −1, +1, +1. Thus P can be cast in the form
    P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.
We conclude that there are two independent degrees of freedom for a fermion; similarly to the case of light one speaks of two polarizations. For a particular choice of the sign of k⁰ we can thus specialise the expansion of the fermion field to

    \psi = \int \frac{d^3k}{(2\pi)^3} \Big( v_+(\mathbf{k})\, e^{i\omega_k t - i\mathbf{k}\cdot\mathbf{x}} + v_-(\mathbf{k})\, e^{-i\omega_k t + i\mathbf{k}\cdot\mathbf{x}} \Big),

where ω_k = c\sqrt{\mathbf{k}\cdot\mathbf{k} + m^2c^2/\hbar^2} and where the v_±(k) are linear combinations of the two basis polarization vectors u₁ and u₂:

    v_\pm(\mathbf{k}) = \alpha_\pm(\mathbf{k})\, u_1 + \beta_\pm(\mathbf{k})\, u_2.
5.6 Light rays and lasers
Lasers produce light of a high intensity and with almost only one frequency. That is, the light of a laser is almost monochromatic. We assume that the laser is perfect and thus emits only radiation of one particular wavelength. We also consider general lasers, which can radiate electrons, α-particles, β-radiation and so on. We briefly comment on the nature of the different kinds of radiation and see how the modes come into play. To make life easy for us, we imagine that the laser is placed such that the medium through which the beam travels is the vacuum.
First we consider the common situation where light is radiated. Light waves are particular solutions of the Maxwell equations in vacuum, or in any other homogeneous medium. The Maxwell equations in vacuum are given by

    \nabla\cdot\mathbf{E} = 0, \qquad \nabla\cdot\mathbf{B} = 0,

    \nabla\times\mathbf{E} = -\frac{\partial\mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t},
where E is the electric field strength, B is the magnetic field strength and t is the time, and
c is again the speed of light. As usual in physics, boldface symbols denote 3-dimensional
vectors, while their components are not written in bold;
    \nabla\cdot\mathbf{A} = \mathrm{div}\,\mathbf{A} := \frac{\partial A_1}{\partial x_1} + \frac{\partial A_2}{\partial x_2} + \frac{\partial A_3}{\partial x_3}

and

    \nabla\times\mathbf{A} = \mathrm{curl}\,\mathbf{A} := \Big( \frac{\partial A_3}{\partial x_2} - \frac{\partial A_2}{\partial x_3},\; \frac{\partial A_1}{\partial x_3} - \frac{\partial A_3}{\partial x_1},\; \frac{\partial A_2}{\partial x_1} - \frac{\partial A_1}{\partial x_2} \Big)^T
denote the divergence and curl of a vector field A, respectively. Using the generally valid
relation

    \nabla\times(\nabla\times\mathbf{X}) = \nabla(\nabla\cdot\mathbf{X}) - \nabla^2\mathbf{X}

and the fact that the divergence of B and E vanishes, we obtain from the Maxwell equations the wave equations

    \nabla^2\mathbf{E} = \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\mathbf{E}, \qquad \nabla^2\mathbf{B} = \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\mathbf{B}.
To solve these we use the ansatz

    \mathbf{E}(x,t) = \mathbf{E}\, e^{i\omega t - i\mathbf{k}\cdot\mathbf{x}}, \qquad \mathbf{B}(x,t) = \mathbf{B}\, e^{i\omega t - i\mathbf{k}\cdot\mathbf{x}},

where E and B on the right-hand sides are now fixed vectors. The ansatz represents waves propagating in the k-direction, and at any fixed point in space the measured frequency is ω. From the wave equations we immediately find the dispersion relation for the Maxwell equations that relates ω and k = (k_x, k_y, k_z)^T:

    \omega = c|\mathbf{k}|,

where

    |\mathbf{k}| := \sqrt{\mathbf{k}\cdot\mathbf{k}} = \sqrt{k_x^2 + k_y^2 + k_z^2}.

Figure 5.1: An image of a solution of the electromagnetic wave equations. The Poynting vector gives the direction of the wave and is perpendicular to the electric and the magnetic field.
We compute

    \nabla\cdot\mathbf{E}(x,t) = 0 \;\Longrightarrow\; \mathbf{k}\cdot\mathbf{E} = 0,

and similarly k·B = 0. Thus k is perpendicular to both E and B. We find for the outer products

    \nabla\times\mathbf{E}(x,t) = -i\,\mathbf{k}\times\mathbf{E}(x,t), \qquad \nabla\times\mathbf{B}(x,t) = -i\,\mathbf{k}\times\mathbf{B}(x,t),

and thus it follows that

    \mathbf{k}\times\mathbf{E} = \omega\mathbf{B}, \qquad \mathbf{k}\times\mathbf{B} = -\frac{\omega}{c^2}\mathbf{E}.

We see that E and B are perpendicular to each other, and k is perpendicular to E and B; hence k is parallel to the so-called Poynting vector P = E×B. Figure 5.1 displays an image of a solution.
Without loss of generality we may change the coordinates so that k points into the z-direction; then k_x = k_y = 0 and only k_z is nonzero. Then, since ωB = k×E and E is orthogonal to k, the light wave is completely determined by giving the x- and y-components of E. Thus light has two degrees of freedom; put in other words, light has two polarizations. Linearly polarized light is light where E oscillates in a constant direction orthogonal to the light ray. Circularly polarized light is light where E rotates along the path of the light; this can be achieved by superimposing two linearly polarized light beams. Since the Maxwell equations are linear, any sum of solutions is again a solution. Note that to actually get the physical solution for E(x,t), one has to take the real part.
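These relations are easy to check for a concrete plane wave; a small numpy sketch (units with c = 1, an arbitrary k along z, linear polarization along x — all values invented for illustration):

```python
import numpy as np

c = 1.0                        # units with c = 1
k = np.array([0.0, 0.0, 2.0])  # wave vector along z
omega = c * np.linalg.norm(k)  # dispersion relation omega = c|k|

E = np.array([1.0, 0.0, 0.0])  # linear polarization along x
B = np.cross(k, E) / omega     # from k x E = omega B

print(np.dot(k, E), np.dot(k, B), np.dot(E, B))   # all 0: mutual orthogonality
print(np.cross(k, B) + (omega / c**2) * E)        # 0-vector: k x B = -(omega/c^2) E

S = np.cross(E, B)                                # Poynting vector E x B
print(S / np.linalg.norm(S))                      # unit vector along k
```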
So we have seen that a light beam is determined by giving two polarizations. These polarizations can be interpreted as modes of an oscillator. One can write the general solution as a superposition of such plane-wave modes,¹

    \mathbf{E}(x,t) = \int d^3k \Big( a(\mathbf{k})\, \mathbf{u}_{\mathbf{k}}(x)\, e^{-i\omega_k t} + a(\mathbf{k})^*\, \mathbf{u}_{\mathbf{k}}(x)^*\, e^{i\omega_k t} \Big),    (5.20)

with mode functions u_k(x) carrying the polarization, and similarly for B.
In the quantum theory one promotes the modes a(k) and a (k) to operators. We treat the
transition from the classical theory to the quantum theory in detail only for the harmonic
oscillator, corresponding to a single monochromatic mode; see Chapter 20.
To motivate the connection, we rewrite the Hamiltonian into a specific form that we will
later recognize as the Hamiltonian of a harmonic oscillator, thereby showing that the
Maxwell equations give rise to (an infinite set of) harmonic oscillators.
First we consider the system in a finite volume V to avoid some questions of finiteness.
In that case (since one has to impose appropriate boundary conditions), the integral over
wave vectors k for the electric field becomes a sum over a discrete (but infinite) set of wave
vectors. To get a sum over finitely many terms, one also has to remove wave vectors with
very large momentum; this corresponds to discretizing space.²
The functions u_k(x) can then be normalized as

    \int_V dx\; \mathbf{u}_{\mathbf{k}}(x)^*\, \mathbf{u}_{\mathbf{k}'}(x) = \delta_{\mathbf{k}\mathbf{k}'}.
The energy density of the electromagnetic field is proportional to E² + B². Hence classically the Hamiltonian is given by

    H = \frac12 \int_V dx\, \big(\mathbf{E}^2 + \mathbf{B}^2\big).

¹ We are not taking all details into account here, since we only want to convey the general picture of what is happening and don't use the material outside this section.
² Getting a proper limit is the subject of renormalization theory, which is beyond the scope of our presentation. The mathematical details for interacting fields are still obscure; indeed, whether quantum electrodynamics (QED) exists as a mathematically well-defined theory is one of the big open questions in mathematical physics.
Inserting the expansions (5.20) of E and B into the expression for the Hamiltonian and taking into account the normalization of the u_k, one obtains, after shifting the ground state energy to zero and performing the so-called thermodynamic limit V → ∞, a Hamiltonian of the form

    H = \frac12 \sum_{\mathbf{k}} \hbar\omega_k \big( a_{\mathbf{k}}^* a_{\mathbf{k}} + a_{\mathbf{k}} a_{\mathbf{k}}^* \big).
In Chapter 20 we will show that the quantum mechanical Hamiltonian of the harmonic oscillator is given by H = ħω a*a for some constant ω and operators a and a*. For light we thus obtain for each possible k-vector a quantum oscillator. In practice, a laser admits only a selection of possible k-vectors. In the ideal case that there is only one possible k-vector, that is, the Poynting vector can only point in one direction and only one wavelength is allowed, the Hamiltonian reduces to the Hamiltonian of a single harmonic oscillator.
Chapter 6
Spectral analysis
In this chapter we show that the spectrum of a quantum Hamiltonian (defining the admissible energy levels) contains very useful information about a conservative quantum system.
It not only allows one to solve the Heisenberg equations of motion but also has a direct link
to experiment, in that the differences of the energy levels are directly observable, since they
can be probed by coupling the system to a harmonic oscillator with adjustable frequency.
6.1
(6.1)

In infinite dimensions, things are a bit more complicated and require the spectral theorem from functional analysis. If the spectrum of H is, however, discrete, then (6.1) remains valid.
As we shall show in Section 20.3, the Hamiltonian of a quantum harmonic oscillator in normal mode form is given by H = E₀ + ħω n, where n is the so-called number operator, whose spectrum consists of the nonnegative integers. Hence the eigenvectors of H (also called eigenfunctions if, as here, the Hilbert space consists of functions) are eigenvectors of n, and the eigenvalues E_k of H are related to the eigenvalues k ∈ ℕ₀ of n by the formula

    E_k = E_0 + k\hbar\omega.

This shows that the eigenvalues of the quantum harmonic oscillator are quantized, and the eigenvalue differences are integral multiples of the energy quantum ħω. That the spectrum of the Hamiltonian is discrete is sometimes rephrased as "H is quantized".
In this and the next section we investigate the experimental meaning of the spectrum of
the Hamiltonian of an arbitrary quantum system. Since the Hamiltonian describes the
evolution of the system via the quantum Heisenberg equation (1.17), i.e.,
    \dot f = \frac{i}{\hbar}[H, f],
one expects that the spectrum will be related to the time dependence of f (t). To solve
the Heisenberg equation, we need to find a representation where the Hamiltonian acts
diagonally.
In the case where the Hilbert space ℍ is finite-dimensional, we can always diagonalize H, since H is Hermitian. There is an orthonormal basis of eigenvectors of H, and fixing such a basis we may represent all ψ ∈ ℍ by their components ψ_k in this basis, thus identifying ℍ with ℂⁿ with the standard inner product. In this representation, H acts as a diagonal matrix whose diagonal entries are the eigenvalues corresponding to the basis of eigenvectors:

    (H\psi)_k = E_k \psi_k.
In the case where the Hilbert space is infinite-dimensional and H is self-adjoint, an analogous representation is possible, using the Gelfand–Maurin theorem, also known under the name nuclear spectral theorem. The theorem asserts that if H is self-adjoint, then
H can be extended into the dual space of the domain of definition of H; there it has a
complete family of eigenfunctions, which can be used to coordinatize the Hilbert space.
The situation is slightly complicated by the fact that the spectrum may be partially or
fully continuous, in which case the concept of a basis of eigenvectors no longer makes sense
since the eigenvectors corresponding to points in the continuous spectrum are no longer
square integrable and hence lie outside the Hilbert space.
In the physics literature, the rigorous mathematical exposition is usually abandoned at this stage, and one simply proceeds by analogy, choosing a set Ω of labels of the eigenstates and treating them formally as if they formed a discrete set. Often, the discreteness of the spectrum is enforced verbally by artificially putting the particles in a finite box and
going to an infinite volume limit at the very end of the computations. The justification
for the approach is that most experiments are indeed very well localized; in letting two
protons collide in CERN we do not take interaction with particles on Jupiter into account.
Mathematically we thus put our system in a box. Since we do not want our system to
interact too much with the walls of the box we take the box large enough. Having met the
final requirement one observes that the physical quantities do not depend on the precise
form and size of the box. To simplify the equations one then takes the size of the box
to infinity. Making this mathematically precise is quite difficult, though well-understood for nonrelativistic systems. In particular, for the part of the spectrum that becomes
continuous in this limit, the limits of the eigenvectors become generalized eigenvectors
lying no longer in the Hilbert space itself but in a distributional extension of the Hilbert
space which must be discussed in the setting of a so-called Gelfand triple or rigged
Hilbert space; cf. Section 20.4.
In many cases of physical interest, these generalized eigenvectors come in two flavors, depending on the boundary conditions imposed, resulting in two families of in-eigenstates |k⟩₋ and out-eigenstates |k⟩₊, labelled by a set Ω which in the case of the harmonic oscillator is Ω = ℕ₀. (The bra-ket notation used here informally is made precise in Section 20.4.) The in- and out-states are so called because they have a natural geometric interpretation in scattering experiments (see Section 6.4). In addition to these eigenstates, there are a measure dμ(k) on Ω and a spectral density ρ(k) with real positive values such that every vector ψ in the Hilbert space has a unique representation in the form

    \psi = \int d\mu(k)\, \psi_+(k)\, |k\rangle_+ = \int d\mu(k)\, \psi_-(k)\, |k\rangle_- .

For any fixed choice of the sign in ψ(k) := ψ±(k), the inner product is given by

    \phi^*\psi = \int d\mu(k)\, \rho(k)\, \phi(k)^*\psi(k).    (6.2)
The spectral measure dμ(k) may also have a discrete part corresponding to square integrable eigenstates, in which case |k⟩₊ = |k⟩₋. If all eigenvectors are square integrable, the spectrum is completely discrete. In particular, this is the case for the harmonic oscillator, for which we construct the diagonal representation explicitly in Chapter 20.
Since the |k⟩ are eigenvectors with corresponding eigenvalue E(k) = E_k, that is, the Hamiltonian satisfies

    (H\psi)(k) = E(k)\,\psi(k),    (6.3)

we say that H acts diagonally in the representation defined by the ψ(k). Thus one can identify the Hilbert space with the space L²(Ω) of coefficient functions ψ(k) with finite ∫ dμ(k) ρ(k)|ψ(k)|²; the Hamiltonian is then determined by (6.3). The in- and out-states are related by the so-called S-matrix, a unitary operator S ∈ Lin L²(Ω) such that

    \psi_+(k) = (S\psi_-)(k).
As a consequence of the time-symmetric nature of conservative quantum dynamics and the
time-asymmetry of scattering eigenstates, the in-representation and the out-representation
are both equivalent to the original representation on which the Hamiltonian is defined.
In many cases of interest, one can then rigorously prove existence and uniqueness of the
S-matrix.
The transformation from an arbitrary Hilbert space representation to the equivalent representation in terms of which H is diagonal, is an analogue of a Fourier transformation; the
latter corresponds to the special case where H = L2 (R) and H is a differential operator
with constant coefficients.
In general, the Gelfand–Maurin theorem guarantees the existence of a topological space Ω and a Borel measurable spectral density function ρ: Ω → ℝ₊ such that the original Hilbert space is L²(Ω, ρ) with inner product (6.2) and such that (6.3) holds. Indeed, Ω can be constructed as the set of characters, that is, ∗-homomorphisms into the complex numbers, of a maximal commutative C*-algebra of bounded linear operators containing the bounded operators e^{itH} (t ∈ ℝ). (Since we don't use this construction further, the concepts involved will not be explained in detail.)
The above reasoning is completely parallel to the finite-dimensional case, where ℍ = ℂⁿ. There one would write ψ = Σ_k ψ_k |k⟩ and have (Hψ)_k = E_k ψ_k. An arbitrary quantity f ∈ Lin ℍ = ℂⁿˣⁿ would then be represented by a matrix, acting as (fψ)_k = Σ_l f_{kl} ψ_l. In the infinite-dimensional setting, k takes values in the label space Ω. The quantities of primary interest are represented by integral operators defined by a kernel function f(k, l):

    (f\psi)(k) := \int d\mu(l)\, f(k,l)\, \psi(l);
for a time-dependent quantity the kernel becomes time-dependent, f(k, l, t). Inserting the kernel representation into the Heisenberg equation and using (6.3), we find

    (\dot f\psi)(k) = \frac{i}{\hbar}\Big( E(k)\int d\mu(l)\, f(k,l,t)\,\psi(l) - \int d\mu(l)\, f(k,l,t)\, E(l)\,\psi(l) \Big)
                    = \frac{i}{\hbar}\int d\mu(l)\, f(k,l,t)\,\big(E(k) - E(l)\big)\,\psi(l),

from which it follows that

    \dot f(k,l,t) = \frac{i}{\hbar}\big(E(k) - E(l)\big)\, f(k,l,t).    (6.4)

In (6.4) we recognize a linear differential equation with constant coefficients, whose general solution is

    f(k,l,t) = e^{\frac{i}{\hbar}(E(k)-E(l))t}\, f(k,l,0).
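For a finite-dimensional toy Hamiltonian this solution is easy to confirm numerically; a Python sketch (the 3-level spectrum and the observable are invented for illustration):

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.5, 4.0])            # eigenvalues of a toy 3-level Hamiltonian

f0 = np.array([[0.2, 0.7, 0.1],          # an observable at t = 0, written in the
               [0.7, 0.0, 0.3],          # eigenbasis of H
               [0.1, 0.3, 0.5]], dtype=complex)

def f(t):
    # Heisenberg evolution f(t) = e^{iHt/hbar} f(0) e^{-iHt/hbar}
    U = np.diag(np.exp(1j * E * t / hbar))
    return U @ f0 @ U.conj().T

t = 0.73
# each matrix element oscillates with frequency (E_k - E_l)/hbar, cf. (6.4):
expected = np.exp(1j * (E[:, None] - E[None, :]) * t / hbar) * f0
print(np.allclose(f(t), expected))       # True
```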
Thus the kernel function of the operator f has oscillatory behavior with frequencies

    \omega_{kl} = \frac{E(k) - E(l)}{\hbar}.    (6.5)

This relation, the modern form of the Rydberg–Ritz combination principle found in 1908 by Walter Ritz [236], may be expressed in the form

    \Delta E = \hbar\omega.    (6.6)
The formula (6.6) appears first in Planck's famous paper [219] from 1900, where he explained the radiation spectrum of a black body. Planck wrote it in the form E = hν, where h = 2πħ and ν = ω/2π is the linear frequency. The symbol for the quotient ħ = h/2π, which translates this into our formula, was invented much later, in 1930, by Dirac in his famous book¹ on quantum mechanics [74].
6.2 Probing the spectrum of a system
All physical systems exhibit small (and sometimes large) oscillations of various frequencies,
collectively referred to as the spectrum of the system. By observing the size of these
oscillations and their dependence on the frequency, valuable information can be obtained
about intrinsic properties of the system. Indeed, the resulting science of spectroscopy
is today one of the indispensable means for obtaining experimental information on the
structure of chemical materials and the presence of traces of chemical compounds.
To probe the spectrum of a quantum system, we bring it into contact with a macroscopically
observable (hence classical) weakly damped harmonic oscillator. That we treat just a single
harmonic oscillator is for convenience only. In practice, one often observes many oscillators
simultaneously, e.g., by observing the oscillations of the electromagnetic field in the form of electromagnetic radiation: light, X-rays, or microwaves. However, in most cases the oscillators do not interact that strongly, and in the case of electromagnetic radiation not at all. In that case, probing a system with multiple oscillators results in a linear superposition of the results of probing with a single oscillator. This is a special case of the
general fact that solutions of linear differential equations depend linearly on the right hand
side.
From the point of view of the macroscopically observable classical oscillator, the probed
quantum system appears simply as a time-dependent external force F (t) that modifies the
dynamics of the free harmonic oscillator. Instead of the equation m\ddot q + c\dot q + kq = 0 we get the differential equation describing the forced harmonic oscillator, given by

    m\ddot q + c\dot q + kq = F(t).
¹ The book contains the Dirac equation, but also Dirac's famous mistake (cf. Section 6.3): he had wrongly interpreted the antiparticle of the electron predicted by his equation (later named the positron) to be the proton.
We take the force to be a superposition of harmonic terms,

    F(t) = \sum_l F_l e^{i\omega_l t},

with distinct, real and nonzero frequencies ω_l. However, the analysis holds with obvious changes also for a (partly or fully) continuous spectrum if the sums are replaced by appropriate integrals.
The solution to the differential equation consists of a particular solution and a solution to
the homogeneous equation. Due to damping, the latter is transient and decays to zero. To
get a particular solution, we note that common experience shows that forced oscillations
typically have the same frequency as the force. We therefore make the ansatz

    q(t) = \sum_l q_l e^{i\omega_l t}.
Inserting both sums into the differential equation, we obtain the relation

    \sum_l \big( -m\omega_l^2 + ic\omega_l + k \big)\, q_l e^{i\omega_l t} = \sum_l F_l e^{i\omega_l t},

hence

    q_l = \frac{F_l}{k - m\omega_l^2 + ic\omega_l}

for all l.
Since the frequencies are real and distinct, the denominator cannot vanish. The energy in the l-th mode is therefore proportional to

    |q_l|^2 = \frac{|F_l|^2}{(k - m\omega_l^2)^2 + (c\omega_l)^2}.    (6.7)
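The shape of this response as the probing frequency is varied is easy to tabulate; a small Python sketch (all parameters are toy values chosen for illustration):

```python
import numpy as np

m, c, k = 1.0, 0.1, 4.0            # mass, damping, spring constant (toy units)
omega0 = np.sqrt(k / m)            # resonance frequency sqrt(k/m) = 2

F = 1.0                            # driving amplitude
omega = np.linspace(0.5, 3.5, 7)   # driving frequencies

# energy in the driven mode, proportional to |q_l|^2 as in (6.7)
q2 = F**2 / ((k - m * omega**2)**2 + (c * omega)**2)

for w, e in zip(omega, q2):
    print(f"omega = {w:.2f}   |q|^2 = {e:10.3f}")
# the response peaks sharply near omega0 = 2: a Lorentz-shaped resonance curve
```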
Now first imagine that the system under study has only one frequency, that is, F_l ≠ 0 for only one l. For example, the system under study is itself an oscillator that is swinging with a certain frequency. In this case the oscillator with which we probe the system will also swing with that same frequency as the probed system, but with an amplitude given by (6.7). We see that for ω_l close to √(k/m) the denominator in (6.7) is small, so the response is strongest; ω₀ = √(k/m) is the resonance frequency of the probing oscillator.
Returning to the case where more of the F_l are nonzero, we see that the oscillator will swing with the same frequencies as the probed system. But the intensity with which the oscillator swings depends on the positions of the ω_l relative to the resonance frequency. Suppose that c is relatively small, so that we can ignore the term cω_l in the denominator of (6.7). Then the q_l for which ω_l is close to ω₀ show a higher intensity.
Looking for resonances with an oscillator that has an adjustable frequency therefore gives a way to experimentally find the frequencies in the force incident on the oscillator. If the frequency ω passes over one of the frequencies ω_l of the probed system, the oscillator will swing more intensively.
The resonances occur around a natural frequency, but the width of the interval in which the system shows a resonance carries information as well. If the interval is small, one speaks of a sharp resonance; this corresponds to a discrete or nearly discrete spectrum of the frequencies. If the resonance is not sharp, the response corresponds to a continuous spectrum. The graph that shows the absorbed energy (which is proportional to |q_l|²) as a function of the frequency (ω ≈ ω_l) for a system with one resonance frequency typically has a Lorentz shape, according to the formula (6.7): there is a peak around ω₀ = √(k/m) with a certain width, and on both sides of the peak the function tends to zero at plus and minus infinity. In Figure 6.1 we display a graph of a Lorentz shape for a harmonic oscillator with varying frequency ω in contact with a probed system that has one F_l nonzero, for the frequency ω₀.
For general systems with more resonance frequencies, the graph is a superposition of such
curves and the peaks around the resonance frequencies can have different widths and different heights. This graph is recorded by typical spectrometers, and the shapes and positions
of characteristic pieces of the graph contain important information about the system. We
shall assume that the peaks have already been translated into resonance frequencies (a nontrivial task in case of overlapping resonances), and concentrate on relating these frequencies
to the Hamiltonian of the system. This is done in Section 6.4.
Figure 6.1: Lorentz shape. The absorbed energy of the oscillator with varying frequency ω.
6.3 The early history of quantum mechanics
In this section we remark on some important aspects of the history of quantum mechanics.
We focus on the physics of the atom, which was one of the main reasons to develop quantum
mechanics. In Section 6.5 we discuss the physics of the black body and the history of Planck's formula, which describes black body radiation. For an interesting historical account we refer, for example, to van der Waerden [276] or Zeidler [299].
The importance of the spectrum in quantum physics is not only due to the preceding
analysis, which allows a complete solution of the dynamics, but also to the fact that the
spectrum can easily be probed experimentally. Indeed, spectral data (from black body
radiation and the spectral absorption and emission lines of hydrogen) were historically the
trigger for the development of modern quantum theory. Even the name spectrum for the
set of eigenvalues was derived from this connection to experiment.
Probing the spectrum through contact with a damped harmonic oscillator has been discussed in Section 6.2. Note that the observed frequencies give the spectrum of the force,
not the spectrum of the Hamiltonian. As derived above, the spectrum of the force consists of the spectral differences of the Hamiltonian spectrum. This is in accordance with
the fact that (in nonrelativistic mechanics) absolute energy is meaningless and only energy
differences are observable.
In the case of the harmonic oscillator, the spectrum of the Hamiltonian H is discrete (see Chapter 20 for the details and derivation), consisting of the nonnegative integral multiples kω of the base frequency ω. Thus the set Ω of labels for the eigenvectors |k⟩ is discrete, Ω = ℕ₀. The number of allowed frequencies is thus countable, and the external force may be expanded into a sum of the form

    F(t) = \sum_{k,l} e^{i\omega_{kl} t} F_{kl}.
The frequencies ω_{kl} are then integral multiples of the base frequency ω; in classical mechanics one finds overtones in a similar setting, for example in the vibration of a guitar string.
A historically more interesting system is the hydrogen atom, where the energies are given by an equation of the form

    E_k = E_0 - \frac{C}{k^2}

for some constant C. Then the frequencies are given by the Rydberg formula

    \omega_{kl} = R_H \Big( \frac{1}{k^2} - \frac{1}{l^2} \Big),    (6.8)
where R_H ≈ 1.1 · 10⁷ m⁻¹ is the Rydberg constant. The Rydberg formula correctly gives the observed spectral lines of the hydrogen atom. The formula was discovered by Rydberg in 1889 (Martinson and Curtis [187]) after preliminary work of Balmer, who found the formula for the Balmer series of spectral lines (given by k = 2). Schrödinger derived this formula using the theoretical framework of quantum mechanics.
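As a numerical illustration (a Python sketch, not from the original text; R_H is the value quoted above, to three digits), the Balmer series follows directly from (6.8):

```python
R_H = 1.097e7    # Rydberg constant in 1/m

# Balmer series: transitions l -> k = 2, wavenumber R_H (1/k^2 - 1/l^2), cf. (6.8)
k = 2
for l in range(3, 8):
    wavenumber = R_H * (1.0 / k**2 - 1.0 / l**2)   # in 1/m
    wavelength_nm = 1e9 / wavenumber
    print(f"l = {l}: lambda = {wavelength_nm:.1f} nm")
# l = 3 gives ~656 nm, the red H-alpha line of hydrogen
```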
Let us review the situation at the time when quantum mechanics was conceived. Around 1900 physicists were experimentally exploring the atom, which until then had been (since antiquity) only a philosophically disputable part of Nature. The experiments clearly indicated that atoms existed and that matter was built up from atoms. The physicist Boltzmann had argued that atoms existed, but his point of view had not been accepted; only after his death in 1906 was the existence of atoms unarguably proved, by experiments of Perrin around 1909. This led to the problem of finding the constituents of the atom and its structure. In 1897 Thomson had discovered the electron as a subatomic particle. Since the atom is electrically neutral, it has to contain positively charged particles as well. Thomson thought of a model in which the atom was a positively charged sphere with the electrons sitting in this plum pudding of positive charge. But then in 1911 Rutherford put Thomson's model to the test; Marsden and Geiger, who were working under the supervision of Rutherford, shot α-particles at a thin foil of gold and looked at the scattering pattern [115]. The experiment is therefore called the Geiger–Marsden experiment. At that time, α-particles were considered a special radiation emitted by some radioactive elements; now we know that they are the nuclei of helium atoms with the electrons stripped off.
Since the α-particles are positive, they have a particular kind of interaction with the positively charged sphere of Thomson's model. But since the electrons swim around in the positive charge, the net charge is zero and most of the interaction is screened off. Therefore it was expected that the α-particles would be only slightly deflected. However, the pattern was not at all like that! It rather looked as if almost all α-particles went straight through, and a small percentage was deflected by a concentrated positive charge. Most α-particles that were deflected were scattered backwards, implying that they had an almost head-on collision with a positive charge.
The very small percentage of scattered α-particles indicated that the chance that an α-particle meets a positively charged nucleus on its way is very small, which implies that the nucleus is very small compared to the atom. Therefore Rutherford (who wrote a paper to explain the results of the Geiger–Marsden experiment) concluded that the nucleus of an atom is positively charged and the electrons circle around the nucleus, and furthermore, that the size of the nucleus is very small compared to the radii at which the electrons circle around the nucleus [245]. If one imagines the atomic nucleus to have the size of a pea placed at the top of the Eiffel tower, the closest electrons would circle in an orbit that touches the ground; the atom is mostly empty.
In 1918 it was again Rutherford who performed an important experiment from which he
concluded that the electric charge of the atomic nucleus was carried by little particles, called
protons. The hydrogen atom was found to be the simplest atom; it consists of a proton
and one electron circling around the proton. Because of this experiment the discovery of
the proton is attributed to Rutherford.
Classically, if an electron circles in an electric field, it radiates and thus loses energy. The question thus arises why the hydrogen atom is stable. Again classically, an electron can circle around a positive charge with arbitrary energy. If the electron changes its orbit, this happens gradually; hence the energy changes continuously, and the absorption or emission patterns of the hydrogen atom should be continuous. But experiments done by Rydberg in 1888 and Balmer in 1885 showed that hydrogen absorbed or emitted light at well-defined frequencies, visible as lines in the spectrum obtained by refraction. For the atomic model this implies that the electron can only have well-defined energies separated by gaps (forbidden energies). In 1913 Bohr wrote a series of papers [40, 41, 42, 43] in which he postulated a model to account for this. Bohr postulated that angular momentum is quantized (if p is the momentum of the electron and r its position vector, then the angular momentum is L = r × p, where the cross denotes the vector product) and that the electron does not lose energy continuously. With these assumptions he could explain the spectrum observed by Rydberg.
The model of Bohr did not explain the behavior of atoms; it only gave rules the atom had to obey. In 1925 Werner Heisenberg wrote a paper [123] in which he tried to give a fundamental basis for the rules of quantum mechanics. Heisenberg described the dynamics of the transitions of an electron in an atom by using the states of the electron as labels. For example, he wrote the frequency emitted by an electron jumping from a state n to a state n′ as ν(n, n′). Just two months later Max Born and Pascual Jordan wrote a paper [47] about the paper of Heisenberg, in which they made clear that what Heisenberg actually did was promoting observables to matrices. The three of them, Born, Jordan and Heisenberg, wrote in the same year a paper [179] in which they elaborated on the formalism they had developed. Also in the same year 1925, Paul Dirac wrote a paper in response to the paper of Heisenberg, in which the remarkable relation q_r p_s − p_s q_r = δ_{rs} iħ appeared. Dirac tried to find the relation between a classical theory and the corresponding quantum theory. In fact, Dirac postulated this equation: "we make the fundamental assumption that the difference between the Heisenberg products of two quantum quantities is equal to ih/2π times their Poisson bracket expression."
So, in the beginning years of quantum mechanics, the dynamics of the observables was
described by a kind of matrix mechanics. (A modern version of this is the view presented
in the present book.) Based on work of de Broglie, Schrödinger came up with a differential equation for the nonrelativistic electron [249]. A probability interpretation for Schrödinger's wave function was found by Born. In 1927, Pauli reformulated his exclusion principle in
terms of spin and antisymmetry. In 1928, Dirac discovered the Dirac equation for the
relativistic electron. In 1932, the early years concluded with the discovery of the positron
by Anderson and the neutron by Chadwick, which were enough to explain the behavior
of ordinary matter and radioactivity. But the forces that hold the nucleus together were
still unknown, and already in 1934, Yukawa predicted the existence of new particles, the
mesons. Since then the particle zoo has increased further and further.
A number of Nobel prizes (most of them in physics, but one in chemistry; early research on atoms was interdisciplinary) for the pioneers accompanied the early development of quantum mechanics²:
1908 Ernest Rutherford, (Nobel prize in chemistry) for his investigations into the
disintegration of the elements, and the chemistry of radioactive substances
1918 Max Planck, in recognition of the services he rendered to the advancement of
physics by his discovery of energy quanta
1921 Albert Einstein, for his services to theoretical physics, and especially for his
discovery of the law of the photoelectric effect
1922 Niels Bohr, for his services in the investigation of the structure of atoms and of
the radiation emanating from them
1929 Louis de Broglie, for his discovery of the wave nature of electrons
1932 Werner Heisenberg, for the creation of quantum mechanics, the application of which has led among others to the discovery of the allotropic forms of hydrogen
1933 Erwin Schrodinger and Paul A.M. Dirac, for the discovery of new productive
forms of atomic theory
1935 James Chadwick, for the discovery of the neutron
1936 Carl D. Anderson, for his discovery of the positron
and belatedly, but still for work done before 1935,
1945 Wolfgang Pauli, for the discovery of the exclusion principle, also called the Pauli
principle
1949 Hideki Yukawa, for his prediction of the existence of mesons on the basis of
theoretical work on nuclear forces
1954 Max Born, for his fundamental research in quantum mechanics, especially for
his statistical interpretation of the wave function
² The remarks on each Nobel laureate are the official wordings in the announcements of the Nobel prizes. For press announcements, Nobel lectures of the laureates, and their biographies, see the web site http://nobelprize.org/physics/laureates.
The story of the discovery of antimatter is interesting. Though Dirac called it a prediction in his Nobel lecture ("There is one other feature of these equations which I should now like to discuss, a feature which led to the prediction of the positron"), it was only a postdiction. Yes, he had a theory in which there were antiparticles. But before the positron was discovered, Dirac thought the antiparticles had to be protons (though there was a problem with the mass), since new particles were inconceivable at that time. Official history seems to have followed Dirac's lead in his Nobel lecture, and tells the story as it should have happened from the point of view of the theorist, namely that he (i.e., theory) actually predicted the positron. The truth is a little different.
Anderson discovered and named the positron in 1932. He wrote the announcement of his discovery in Science [11], with due reserve in interpretation. The proper publication [13], where he also predicted negative protons (now called antiprotons), was still without any awareness of Dirac's theory. It is in the subsequent paper [12] that Anderson relates the positron to Dirac's theory.

Heisenberg, Dirac, and Anderson were all 31 years old when they got the Nobel prize. The fact that Anderson's paper [13] is very rarely cited³ should cast some doubt on the relevance of citation counts for actual impact in science.
6.4 The spectrum of many-particle systems
To give a better intuition for what kind of spectra quantum systems can be expected to
have, we discuss here the spectrum of many-particle systems from an informal point of
view.
There are bound states, where all particles of the system stay together, and there are
scattering states, where the system is broken up into several fragments moving independently but possibly influencing each other. The nomenclature comes from the scattering
experiments in physics; shooting particles at each other can result in the formation of a
system where the particles are bound together or where the particles scatter off from each
other. In the case of a scattering process, different arrangements (i.e., partitions of the set of individual particles into fragments which each form a subsystem moving together) describe the combination of particles before a collision and their recombination in the debris after a collision.
The discrete spectrum of a Hamiltonian H corresponds to the bound states; each discrete eigenvalue corresponds to a different mode of the bound system. The study of the discrete spectrum of compound systems is the domain of spectroscopy. We shall return to this topic in Chapter 23, when the machinery to understand a spectrum is fully developed.
The continuous part of the spectrum corresponds to the scattering states. In general, the spectrum is discrete up to a certain energy level, called the dissociation threshold, and
after the dissociation threshold the spectrum is continuous. For the hydrogen atom, the dissociation threshold is 13.6 eV. For the harmonic oscillator, the dissociation threshold is infinite. In such a case, where the dissociation threshold is infinite, there is no continuous spectrum and the system is always bound; we call this confinement. For example, three quarks always form a bound state, that is, they are confined; a single quark cannot get loose from its partners. It may also be the case that there is no bound state; for example, the atoms in inert gases don't form bound states, hence a system consisting of more than one such atom has only a continuous spectrum.

³ http://www.prola.aps.org/ lists only 37 citations, and only 5 before 1954. The paper [12] is cited 35 times.
In scattering experiments the ingoing particles and the outgoing particles can be different. Hence one needs to keep track of what precisely went where. After the scattering, the particles separate from each other in different clusters. The constituents of cluster i form a bound state, which can be in an excited state, whose internal energy we denote by E_i. If cluster i moves with momentum p_i, its kinetic energy is p_i²/2m_i, where m_i is the mass of cluster i. If there are N clusters after a collision (scattering), the resulting total energy is

    E = \sum_{i=1}^{N} \Big( \frac{\mathbf{p}_i^2}{2m_i} + E_i \Big).
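In code, this bookkeeping is a one-liner; a toy sketch (all numbers invented, natural units):

```python
import numpy as np

# two clusters emerging from a collision
p = np.array([[0.0, 0.0, 2.0],
              [0.0, 0.0, -2.0]])   # cluster momenta
m = np.array([1.0, 4.0])           # cluster masses
E_int = np.array([0.5, 1.2])       # internal (excitation) energies E_i

# total energy: kinetic terms p_i^2 / 2m_i plus internal energies
E_total = np.sum(np.sum(p**2, axis=1) / (2 * m) + E_int)
print(E_total)
```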
In scattering experiments a possible outcome of clusters and their constituents is called a channel. It is very common in particle physics that a single reaction, like shooting two protons at each other, has more than one channel. We see that in each channel there is a continuous spectrum above a certain energy level ε, which is the sum of the ground state energies of the different clusters. To theoretically disentangle the spectrum, one uses an analytic continuation of the scattering amplitudes. We thus view the spectrum as a subset of the complex plane. When multiplying the momenta by a complex phase that has a nonzero imaginary part, the continuous part of the spectrum acquires an imaginary part and is tilted away from the real axis. The bound states still appear on the real line as isolated points, that is, discrete. But now at each bound state with energy above ε there is a connected line representing the continuously varying momentum of the corresponding cluster.
The technique of disentangling the spectrum using analytic continuation is called complex
scaling. For more background and rigorous mathematical arguments, see, e.g., Simon
[253], Moiseyev [191], or Bohm [39].
with Lindblad operators L_j encoding interactions with the unmodelled environment into which the lost energy dissipates, and complex coefficients G_{jk} forming a symmetric, positive definite matrix. Remembering that the commutator with H acts as a derivation, the additional terms can be viewed as generalized diffusion terms; indeed, classically the dynamics (6.9) describes, for example, reaction-diffusion equations, and its quantum version is the quantum equivalent of stochastic differential equations, which model systems like Brownian motion and give
microscopic models of diffusion processes. For details, see, e.g., Gardiner [99], Breuer
& Petruccione [50].
Assuming that the terms in the sum of (6.9) are negligible, the dynamics satisfies the Heisenberg equation, and the above analysis applies with small changes. However, since H is no longer Hermitian, the energy levels typically acquire a possibly nonzero (and then positive) imaginary part. Isolated eigenvalues with positive imaginary part are called resonances. The oscillation frequencies are still of the form ħω = ΔE, but since the energies have a positive imaginary part, the oscillations will be damped, as can be seen by looking at the form of e^{iωt}. That this does not lead to a decay of the response of the oscillator is due to stochastic contributions modelled by the Lindblad terms and neglected in our simplified analysis.
Resonances with tiny imaginary parts behave almost like bound states, and represent unstable particles, which decay in a stochastic manner; the value τ = 1/(2 Im ω) gives their lifetime, defined as the time after which (in a large sample of unstable particles) the number of undecayed particles left is reduced by a factor of e, the basis of the exponential function.

Thus the spectrum of a Hamiltonian contains valuable experimentally observable information about a quantum system.
6.5 Black body radiation
In the remainder of this chapter, we discuss the spectrum of a black body and some of its
consequences.
In the history of quantum mechanics, the black body plays an important role. Applying some basic concepts of quantum mechanics and statistical mechanics, one arrives at the distribution formula first derived by Max Planck in December 1900 [220]. According to van der Waerden in his (partially autobiographical) book [276], the presentation of Planck in December 1900 was the birth of quantum mechanics.
What is a black body? A body that looks black does not reflect any light; it absorbs all incoming light. Hence if some radiation comes from a perfectly black body, it must be due to the interaction of the internal degrees of freedom with light. It is hard to construct a black body experimentally. The theoretical idea is to have a hollow box with a single little hole, through which the box can emit radiation outwards. Since the hole is assumed to be very small, hardly any light will fall inwards and then be reflected through the hole again; thus (almost) no light will be reflected. In practice many objects behave like black bodies above a certain temperature. The sun does not reflect a substantial amount of light (where should it come from?) compared to the amount it radiates; therefore one of the best black bodies is the sun.
Given a black body, there is a positive integrable function f(ω) of the frequency ω such that the energy radiated per unit time with frequency in [ω, ω + dω] is proportional to f(ω) dω. The function f(ω) is the radiation-energy density, and it is the main object of this section. The importance of the black body lies in the fact that the radiation emitted is only due to its internal energy and its interaction with light. In practice a system always has interaction with the environment and with light falling onto it (since we want to see where the black body is, the latter is often inevitable). What would we expect from the radiation-energy density? First, since ω = 0 means that the energy of the emitted photons is ħω = 0, and ω < 0 is not a possibility, we have f(0) = 0. Second, the function f has to be integrable, since the total radiated energy ∫₀^∞ f(ω) dω is finite. In 1896, Wien⁴ proposed the radiation law

    f(\omega) = a\,\omega^3 e^{-b\omega/T},    (6.10)

for some parameters a, b > 0. It is clear that the proposed f is integrable and satisfies f(0) = 0. For large ω the radiation-energy density matches the observed densities; however, for small ω the radiation density of Wien does not match the experiments.

⁴ His real name is rather long: Wilhelm Carl Werner Otto Fritz Franz Wien.
On the other hand, there were other radiation laws. First, there was Stefan's law (or the Stefan–Boltzmann law), derived on the basis of empirical results in 1879 [259]. The statement of Stefan's law is that the total energy radiated per second by a hot radiating body is proportional to the fourth power of the temperature:

    \frac{dE}{dt} = \sigma A T^4,

where A is the area of the body and σ is a constant. In 1884 Boltzmann gave a theoretical derivation of Stefan's law using the theoretical tools of statistical mechanics [44].
The second radiation law known in 1900 was Rayleigh's law. Lord Rayleigh used classical mechanics to derive a better description of the radiation density for low values of ω [229]. He proposed

    f(\omega) \propto \omega^2,

which is clearly wrong for large ω and is not even integrable. Later, in 1905, Lord Rayleigh improved the derivation of his proposal in a collaboration with Sir James Jeans, again based on purely classical arguments. Although their derivation was interesting, it did not match the experiments for high ω. In December 1900, Max Planck had given a seminar and a derivation of f(ω) that resulted in a radiation-energy density matching the experiments both for low and for high ω. Even more was true: the formula of Planck reproduced Wien's displacement law, Wien's approximation, Rayleigh's proposal and Stefan's law. The formula Planck derived gives the energy density of a black body in thermal equilibrium, from which one obtains the radiation-energy density
    f(\omega) = \frac{\hbar V}{\pi^2 c^3}\, \frac{\omega^3}{e^{\beta\hbar\omega} - 1},

where V is the volume of the black body (the cavity, actually), β = 1/kT, and k is Boltzmann's constant. Indeed, for low ω we get an expression that is quadratic in ω, for high ω we get Wien's law, and integrating the expression over ω one sees that the integral is proportional to T⁴. The accordance with Wien's displacement law will be shown later; we will also remark on the agreement of Planck's law with experiment later.
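These limiting statements are quickly confirmed numerically; a Python sketch (standard SI constants, V = 1, and a hand-rolled trapezoidal integral):

```python
import numpy as np

hbar, c, kB = 1.054571817e-34, 2.998e8, 1.380649e-23

def planck(omega, T):
    # spectral energy density for V = 1, as in the formula above
    return hbar / (np.pi**2 * c**3) * omega**3 / np.expm1(hbar * omega / (kB * T))

totals = []
for T in (3000.0, 6000.0):
    omega = np.linspace(1e9, 60 * kB * T / hbar, 200_000)  # grid far past the peak
    f = planck(omega, T)
    totals.append(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(omega)))
    print(f"T = {T:5.0f} K   total energy density = {totals[-1]:.4e} J/m^3")

print(f"ratio = {totals[1] / totals[0]:.2f}   (Stefan's law predicts 2^4 = 16)")
```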
So what precisely did Planck do that the others did wrong? The key ingredient in Planck's derivation is to consider the constituents of the black body as follows: the black body is just a cavity whose inner walls can interact with light. The walls of the cavity are made of molecules that behave like compounds of harmonic oscillators. Planck assumed that the energies of the molecules take values in some discrete set: the states of the molecules do not vary continuously but are discrete. Hence we can put the states in bijection with the natural numbers. Furthermore he assumed that the light inside the cavity induces transitions in the molecules by absorbing or emitting radiation. A transition between a state labelled by n with energy E_n and a state labelled by m with energy E_m is only possible if the energy differences and the frequencies are related by |E_n − E_m| = ħω. Thus, by discretizing the states of the interior of the black body, the interaction with light varies over a discrete set of frequencies. At the time, Planck saw the discretization as a purely theoretical and mathematical tool that would bear no relation to reality. It just reproduced the correct results, which was most important: it gave a formula that fitted all experiments. Very puzzling at the time was the necessary assumption that the energy is quantized, an assumption that marked the start of the quantum era. It took some time until the derivation of Planck's law was given a clear meaning.
6.6 Derivation of Planck's law
In 1917 Einstein gave a comprehensible derivation, which we shall present below. In modern textbooks one can find a one-page derivation, and we will present such a proof below as well. For both derivations we need a basic fact from statistical mechanics, called the Boltzmann distribution.
Suppose that we have a physical system consisting of many identical molecules (or atoms, or
any other smaller subsystems). Each molecule can attain different states that are labelled
with integers n = 0, 1, 2, . . .. In a modern treatment, these states are identified with the
eigenstates of the quantum Hamiltonian, and we shall use this terminology, though it was
not available when Einstein wrote his paper. We thus assume that the spectrum of the
molecules is discrete and there is a bijection between the eigenstates of the molecule and
the natural numbers. Each eigenstate n of the molecule corresponds to an eigenvalue En
of the Hamiltonian, giving the energy the molecule has in eigenstate n. The Boltzmann
distribution gives the relative frequency of eigenstates of the molecules. Writing N(n) for
the number of molecules in state n, the Boltzmann distribution dictates that
    \frac{N(n)}{N(m)} = e^{-(E_n - E_m)/kT}    (6.11)
when the system is in thermal equilibrium with itself and with the surrounding system. Thus, the temperature T has to be constant. Such a mixed state, where the volume, the temperature, and the number of particles are kept constant and the system is in thermal equilibrium with its environment, is called a canonical ensemble. A derivation of the Boltzmann distribution can be found in many elementary textbooks on statistical physics, e.g., Reichl [230], Mandl [182], Huang [129], or Kittel [153].
The probability p_n of measuring an arbitrary molecule to be in state n is

    p_n = \frac{e^{-E_n/kT}}{Z},

where Z is the partition function

    Z = \sum_n e^{-E_n/kT}.

Introducing the free energy F by Z = e^{-F/kT}, this can be written as

    p_n = e^{-(E_n - F)/kT}, \qquad \sum_n e^{-(E_n - F)/kT} = 1.
The constant β = 1/kT is called the inverse temperature and plays a fundamental role; in statistical physics, it is customary to express all quantities in terms of β. The average of the energy, denoted Ē, is found by

Ē = Σ_n E_n p_n = −(∂/∂β) ln Z(β).
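As a quick numerical illustration, the following minimal Python sketch checks the identity Ē = −∂ ln Z/∂β; the four energy levels and the value of β are arbitrary choices, and units with k = 1 are assumed, so that β = 1/T:

import numpy as np

# Hypothetical discrete spectrum, in units where k = 1 (so beta = 1/T).
E = np.array([0.0, 1.0, 2.5, 4.0])
beta = 0.7

# Boltzmann probabilities p_n = e^{-beta E_n} / Z.
weights = np.exp(-beta * E)
Z = weights.sum()
p = weights / Z

# Average energy two ways: directly, and as -d ln Z / d beta (finite difference).
E_avg = (E * p).sum()
h = 1e-6
lnZ = lambda b: np.log(np.exp(-b * E).sum())
E_avg_fd = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

print(p.sum())          # 1.0: the probabilities are normalized
print(E_avg, E_avg_fd)  # the two values agree to high accuracy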
Einstein's derivation. We now focus on two energy levels of the molecule, n and m with E_m > E_n and degeneracies g_n and g_m, and assume the molecules interact with light. There are three types of processes that might happen: (i) A molecule in state m might decay to state n while emitting light with the frequency ν, where

hν = E_m − E_n;   (6.12)
this process is called spontaneous decay. (ii) A molecule might jump from n to m by absorbing light with the right frequency (6.12). (iii) A molecule decays from m to n by being kicked by light of the right frequency (6.12); this process is called induced emission. Thus, there is one transition which happens even in the absence of light: in spontaneous emission the molecule may jump from m to n, thereby emitting light. The other two transitions take place under the influence of light; their rates therefore depend on how much light is present, and thus on the radiation-energy density f(ν).
The probabilities of transitions are given as transition rates dW/dt; dW is the infinitesimal change in the number of molecules in a certain state and dt is an infinitesimal time interval. Now spontaneous emission is independent of the presence of light and only depends on the characteristics of the molecule and the number of molecules in state m. Therefore,

dW_1 = N(m) A_{mn} dt,
where dW_1 is the number of molecules undergoing spontaneous emission from m to n during a time interval dt, and where A_{mn} is some number depending on the states n and m (and not, in particular, on the temperature). We denote by dW_2 the number of molecules absorbing light and jumping from n to m during a time interval dt, and by dW_3 the number of molecules jumping from m to n under the influence of light (getting the right kick). The rates are determined by constants B_{mn} and C_{mn}, which are characteristic for the states m and n, and by the amount of light that has the right frequency. Thus dW_2 and dW_3 are proportional to f(ν):

dW_2 = N(n) B_{mn} f(ν) dt,   dW_3 = N(m) C_{mn} f(ν) dt.
Now consider an enclosed system of molecules that are in equilibrium with the light in the system. Being in equilibrium means

dW_1 + dW_3 = dW_2.

Using the Boltzmann distribution with degeneracies, N(m)/N(n) = (g_m/g_n) e^{−(E_m − E_n)/kT}, we get

g_m e^{−E_m/kT} (A_{mn} + C_{mn} f(ν)) = g_n e^{−E_n/kT} B_{mn} f(ν).   (6.13)
Now comes a basic assumption that Einstein makes: if T becomes larger, the system gets very hot and transitions become more and more frequent. Therefore one assumes that as T → ∞ also f → ∞. In this case the exponentials in (6.13) become 1, the term with A_{mn} can be neglected, and we obtain

g_m C_{mn} = g_n B_{mn}.   (6.14)
From another point of view, the assumption Einstein makes is natural. The relation g_m C_{mn} = g_n B_{mn} expresses that the processes m → n under induced emission and n → m under absorption are symmetric; the numbers C_{mn} and B_{mn} only differ by the ratio of the number of states with energy E_n to the number of states with energy E_m. Indeed, taking g_m = g_n = 1, the process of induced emission is the time-reversed process of absorption. Since the equations of nature show (in this case) a time-reversal symmetry, we find C_{mn} = B_{mn}. If g_n and g_m are not equal, one has to correct for this and multiply the probabilities with the corresponding multiplicities to get (6.14). With the assumption (6.14) we find
f(ν) = (A_{mn}/C_{mn}) / (e^{(E_m − E_n)/kT} − 1).
Inserting now E_m − E_n = hν and requiring that Wien's law (6.10) holds in the limit where ν is large, we obtain

f(ν) = a ν³ / (e^{hν/kT} − 1).
In particular we find that A_{mn}/C_{mn} = aν³, which relates the constants A_{mn} and C_{mn} to the energy difference E_m − E_n. The constant a depends neither on the frequency nor on the temperature.
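As a quick consistency check, one can verify numerically that this f(ν) has the Wien scaling form f = ν³ g(ν/T) and reduces to an exponential law for large ν; the sketch below assumes hypothetical unit constants a = h = k = 1:

import numpy as np

a = h = k = 1.0   # hypothetical unit constants, for illustration only

def f(nu, T):
    return a * nu**3 / (np.exp(h * nu / (k * T)) - 1)

# Scaling check: f(s*nu, s*T) = s^3 f(nu, T) for any s > 0.
nu, T, s = 2.0, 1.5, 3.0
print(f(s * nu, s * T), s**3 * f(nu, T))   # equal

# Wien limit: for h nu >> k T the -1 in the denominator is negligible.
nu_big = 30.0
print(f(nu_big, T), a * nu_big**3 * np.exp(-h * nu_big / (k * T)))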
Modern derivation. We now discuss a relatively fast derivation that in addition gives a value for the constant a in Wien's law (6.10). We consider a box with the shape of a cube with sides of length L; the precise shape of the box turns out to be irrelevant in the limit where the typical sizes are much larger than the wavelength, so that the only relevant parameter is the volume V = L³. We assume the walls of the box can absorb and emit light; we furthermore assume that the walls are made of a conducting material. Away from the walls, light satisfies the Maxwell equations, but at the walls the components of the electric field parallel to the wall have to vanish; if the electric field did not vanish there, the electrons in the material of the wall would be accelerated, and the system would not be in equilibrium. A plane wave solution of the Maxwell equations is of the form

e^{iωt − ik_x x − ik_y y − ik_z z}.
We can always choose a coordinate system that is aligned with the box. Then the boundary conditions imply sin(k_x L) = 0 and thus k_x = πn_x/L for some integer n_x. The wave functions with negative n_x are identical to the corresponding wave functions with positive n_x; they just differ by a phase. Therefore we may assume n_x ≥ 0. For the other coordinate directions the discussion is similar.
Thus we find that for each triple of nonnegative integers n = (n_x, n_y, n_z) we have a harmonic oscillator with angular frequency

ω_n = (πc/L)|n|,   |n| = √(n · n).
We now use the fact (proved below in Section 20.3) that for each harmonic oscillator the energies are E_n(r) = ħω_n(r + 1/2), r = 0, 1, 2, .... Since energy is defined only up to a constant shift, we subtract the zero-point energy E_0 = ħω_n/2 and take E_n(r) = ħω_n r. The partition function is then

Z(n, β) = Σ_{r=0}^∞ e^{−rβħω_n} = 1/(1 − e^{−βħω_n}).
Therefore the average energy in the mode corresponding to n is

Ē_n = (∂/∂β) ln(1 − e^{−βħω_n}) = ħω_n/(e^{βħω_n} − 1).
We now have to sum up the energies of all modes. Since we are interested in the behavior of f(ω) in the regime where L is much larger than the wavelength, we replace the sum over n by an integral. We have to integrate over the positive octant where n_x ≥ 0, n_y ≥ 0 and n_z ≥ 0. Since all expressions are rotationally symmetric in n, we can also integrate over all of R³ and divide by 8. We have not yet taken into account that light has two polarizations; therefore, for each n there are two harmonic oscillators. The total energy enclosed in the box is thus

E = (2/8) ∫_{R³} d³n ħω_n/(e^{βħω_n} − 1) = (L³ħ/π²c³) ∫_0^∞ ω³/(e^{βħω} − 1) dω.
With L³ = V being the volume we thus find

f(ω) = (Vħ/π²c³) ω³/(e^{βħω} − 1).   (6.15)

Of course, this f only represents the radiation-energy density inside the black body. However, up to some overall constants the above f is the radiation-energy density of a black body, since the emitted radiation is proportional to the energy density.
6.7 Stefan's law and Wien's displacement law
From the calculated density (6.15) we can draw some conclusions, which we now briefly discuss.
To calculate the total radiation that is emitted, we first calculate the total energy by integrating (6.15) over all ω. Substituting x = βħω, we get for the total energy of the light inside the black body

E = (V k⁴T⁴/π²c³ħ³) ∫_0^∞ x³ dx/(e^x − 1).
But we have

∫_0^∞ x³ dx/(e^x − 1) = π⁴/15,

and thus the energy density u(T) is given by

u(T) = E/V = π²k⁴T⁴/(15ħ³c³).
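The closed form for u(T) is easy to confirm numerically. The following sketch integrates the spectral density (6.15) per unit volume over all ω using SciPy and compares with π²k⁴T⁴/(15ħ³c³); the temperature 300 K is an arbitrary choice:

import numpy as np
from scipy.integrate import quad
from scipy.constants import hbar, c, k

T = 300.0  # temperature in kelvin (arbitrary choice)

# Spectral density (6.15) per unit volume; expm1 avoids loss of precision.
integrand = lambda w: (hbar / (np.pi**2 * c**3)) * w**3 / np.expm1(hbar * w / (k * T))
u_numeric, _ = quad(integrand, 0, np.inf)
u_closed = np.pi**2 * k**4 * T**4 / (15 * hbar**3 * c**3)

print(u_numeric, u_closed)  # both ~6.1e-6 J/m^3 at 300 K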
We see that the energy density is expressible by fundamental constants and the fourth power of the temperature. Since the energy density determines the total radiation emitted per time interval, we see that the total energy a black body radiates per time interval is proportional to T⁴. This already explains Stefan's law, but in order to derive Stefan's law we have to be a bit more careful.
In order to see how much a black body will radiate, we punch a small hole in the black body. Let us say that the area of the hole is dA. Now the question is: how many photons will hit the hole from the inside? We fix a time t and a small time interval dt. Only the photons that are at a distance between ct and ct + c dt from the hole are eligible to pass through the hole in a time interval dt after time t. We thus consider a thin shell of a half sphere inside the black body, a distance ct away from the hole and of thickness c dt. Light, however, spreads in all directions, and so not all the photons inside the shell are going in the direction of the hole. Our task is to find the fraction of the total that does go through the hole. This is a purely geometric question.
We introduce spherical coordinates around the hole: an angle φ ranging from 0 to 2π that goes around the hole, and a polar angle θ ranging from 0 to π/2 (values below zero correspond to points outside the black body). We cut the half sphere of radius ct into little stripes by cutting, for fixed θ, along the angle φ; each stripe is a thin band of thickness ct dθ and of length 2πct sin θ. Consider a little cube of size dV = (ct)² sin θ dθ dφ c dt in the shell. The fraction of radiation going in the right direction is given by the solid angle dΩ that dA describes seen from the little cube. But dΩ is given by the projection of the surface dA onto the surface of the sphere of radius ct around the little cube:

dΩ = dA cos θ / (4π c²t²).
The little cube of volume dV emits all the radiation present in it (since the light waves just pass through), and that amounts to an energy dE = u(T) dV. From the little cube under consideration, the amount of radiation going in the right direction is thus

u(T) dΩ dV = u(T) (dA cos θ/4π) sin θ dθ dφ c dt.
Note that the amount of radiation is independent of the radius of the half sphere. Since the question is of a purely geometric nature, that is to be expected. We now get the total amount of radiation by summing up all the dΩ contributions: denoting by dU the energy that leaves the hole during the time interval dt, we have

dU/dt = u(T) dA (c/4π) ∫_0^{2π} dφ ∫_0^{π/2} dθ cos θ sin θ = (c/4) u(T) dA.
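The factor c/4 can be checked by doing the angular integral numerically; the sketch below integrates cos θ sin θ/4π over φ ∈ [0, 2π], θ ∈ [0, π/2] and recovers 1/4:

import numpy as np
from scipy.integrate import dblquad

# Fraction of isotropic radiation passing through the hole:
# integrate cos(theta) sin(theta) / (4 pi) over the hemisphere.
frac, _ = dblquad(
    lambda theta, phi: np.cos(theta) * np.sin(theta) / (4 * np.pi),
    0, 2 * np.pi,    # phi
    0, np.pi / 2,    # theta
)
print(frac)  # 0.25, i.e. the factor 1/4 multiplying c u(T) dA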
For a black body that radiates over all its surface, and not only through one little hole, we sum up the contributions over all little surfaces dA. In order that the above analysis still holds, the shape of the black body needs to be such that radiation that exits the black body does not enter again. If the black body is convex, this requirement is met; e.g., we could take a sphere. We then find Stefan's law in the form

dU/dt = (π²k⁴T⁴/60ħ³c²) A = σAT⁴,
with Stefan's constant

σ = π²k⁴/(60ħ³c²) ≈ 5.67 · 10⁻⁸ J s⁻¹ m⁻² K⁻⁴.
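Evaluating σ from tabulated constants is a one-liner; the following sketch compares the formula with the CODATA value shipped with SciPy:

from math import pi
from scipy.constants import hbar, c, k, Stefan_Boltzmann

# Stefan's constant from sigma = pi^2 k^4 / (60 hbar^3 c^2).
sigma = pi**2 * k**4 / (60 * hbar**3 * c**2)
print(sigma)             # ~5.670e-8 J s^-1 m^-2 K^-4
print(Stefan_Boltzmann)  # tabulated value, for comparison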
Wien's displacement law follows by locating the maximum of the spectral density (6.15), which is proportional to

ω³/(e^{βħω} − 1).

Differentiation with respect to ω and setting the result to zero to obtain the position of the maximum gives the equation

3 − x = 3e^{−x},   x = βħω_max.
We discard the trivial solution x = 0 since this corresponds to the behavior at ω = 0. One finds the other solution by solving the equation 3 − x = 3e^{−x} with numerical methods, obtaining x ≈ 2.82. Hence we have

ω_max ≈ 2.82 kT/ħ.
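The root x ≈ 2.82 is easily found numerically. The sketch below brackets the nontrivial solution of 3 − x = 3e^{−x} and, as an illustration, evaluates ω_max = x kT/ħ at a made-up temperature of 5800 K:

import numpy as np
from scipy.optimize import brentq
from scipy.constants import hbar, k

# Solve 3 - x = 3 e^{-x} for the nontrivial root x = beta hbar omega_max.
x = brentq(lambda x: 3 - x - 3 * np.exp(-x), 1, 5)
print(x)  # ~2.8214

T = 5800.0                 # arbitrary illustrative temperature
print(x * k * T / hbar)    # peak angular frequency, ~2.1e15 rad/s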
Part II
Statistical mechanics
Chapter 7
Phenomenological thermodynamics
Part II discusses statistical mechanics from an algebraic perspective, concentrating on thermal equilibrium but discussing basic things in a more general framework. A treatment
of equilibrium statistical mechanics and the kinematic part of nonequilibrium statistical
mechanics is given which derives from a single basic assumption (Definition 9.1.1) the full
structure of phenomenological thermodynamics and of statistical mechanics, except for the
third law which requires an additional quantization assumption.
This chapter gives a concise description of standard phenomenological equilibrium thermodynamics for single-phase systems in the absence of chemical reactions and electromagnetic
fields. From the formulas provided, it is an easy step to go to various examples and applications discussed in standard textbooks such as Callen [55] or Reichl [230]. A full
discussion of global equilibrium would also involve the equilibrium treatment of multiple
phases and chemical reactions. Since their discussion offers no new aspects compared with
traditional textbook treatments, they are not treated here.
Our phenomenological approach is similar to that of Callen [55], who introduces the basic
concepts by means of a few postulates from which everything else follows. The present
setting is a modified version designed to match the more fundamental approach based on
statistical mechanics. By specifying the kinematical properties of states outside equilibrium,
his informal thermodynamic stability arguments (which depend on a dynamical assumption
close to equilibrium) can be replaced by rigorous mathematical arguments.
7.1 Standard thermodynamic systems
We discuss here the special but very important case of thermodynamic systems describing
the single-phase global equilibrium of matter composed of one or several kinds of substances
in the absence of chemical reactions and electromagnetic fields. We call such systems
standard thermodynamic systems; they are ubiquitous in applications. In particular,
In the terminology, we mainly follow the IUPAC convention (Alberty [5, Section 7]), except that
we use the letter H to denote the Hamilton energy, as customary in quantum mechanics. In equilibrium,
H equals the internal energy U . The Hamilton energy should not be confused with the enthalpy which
is usually denoted by H but here is given in equilibrium by H + P V . For a history of thermodynamics
notation, see Battino et al. [29].
Proof. It suffices to show that φ(t) := Δ(s + tk, x + th) is convex (concave) for all s, x, h, k such that s + tk > 0 (resp. < 0). Let z(t) := (x + th)/(s + tk) and c := sh − kx. Then, writing Δ(s, x) = s δ(x/s),

z′(t) = c/(s + tk)²,   φ(t) = (s + tk) δ(z(t)),

hence

φ′(t) = k δ(z(t)) + δ′(z(t))ᵀ c/(s + tk),

φ″(t) = k δ′(z(t))ᵀ c/(s + tk)² + cᵀ δ″(z(t)) c/(s + tk)³ − k δ′(z(t))ᵀ c/(s + tk)² = cᵀ δ″(z(t)) c/(s + tk)³,

which has the required sign. ∎

Δ(T, P, μ) = 0   (7.1)
The set of (T, P, μ) satisfying T > 0, P > 0 and the equation of state (7.1) is called the state space.
(iii) The Hamilton energy H satisfies the Euler inequality

H ≥ T S − P V + μ · N.   (7.2)
All other properties follow from the system function. Thus, all equilibrium properties of a material are characterized by the system function Δ.
Surfaces where the system function is not differentiable correspond to so-called phase transitions. The equation of state shows that, apart from possible phase transitions, the state space has the structure of an (s−1)-dimensional manifold in R^s, where s is the number of intensive variables; in the case of a standard system, the manifold dimension is therefore one higher than the number of kinds of substances.
Standard systems describe only a single phase of a substance (typically the solid, liquid, or
gas phase), and changes between these as some thermodynamic variable(s) change. Thermodynamic systems with multiple phases (e.g., boiling water, or water containing ice cubes)
are only piecewise homogeneous. Each phase may be described separately as a standard
thermodynamic system. But discussing the equilibrium at the interfaces between different phases needs some additional effort. (This is described in all common textbooks on
thermodynamics.) Therefore, we consider only regions of the state space where the system
function is twice continuously differentiable.
Each equilibrium instance of the material is characterized by a particular state (T, P, μ), from which all equilibrium properties can be computed:
7.1.3 Theorem.
(i) In any equilibrium state, the extensive variables are given by

S = Ω ∂Δ/∂T(T, P, μ),   V = −Ω ∂Δ/∂P(T, P, μ),   N = Ω ∂Δ/∂μ(T, P, μ),   (7.3)

where Ω is a positive number called the system size, and the Euler equation holds:

H = T S − P V + μ · N.   (7.4)

(ii) In equilibrium, the extensive variables satisfy the Maxwell relations

∂S/∂P = −∂V/∂T,   ∂S/∂μ_j = ∂N_j/∂T,   ∂V/∂μ_j = −∂N_j/∂P,   ∂N_j/∂μ_k = ∂N_k/∂μ_j,   (7.5)

and the stability conditions

∂V/∂P ≤ 0,   ∂N_j/∂μ_j ≥ 0.   (7.6)
for some Lagrange multiplier Ω. Setting the partial derivatives to zero gives (7.3), and since the maximum is attained in equilibrium, the Euler equation (7.4) follows. The system size Ω is positive since V > 0 and Δ is decreasing in P. Since the Hessian matrix of Δ,

Δ″ = ( ∂²Δ/∂T²    ∂²Δ/∂T∂P    ∂²Δ/∂T∂μ
       ∂²Δ/∂P∂T   ∂²Δ/∂P²     ∂²Δ/∂P∂μ
       ∂²Δ/∂μ∂T   ∂²Δ/∂μ∂P    ∂²Δ/∂μ²  )
   = (1/Ω) (  ∂S/∂T    ∂S/∂P    ∂S/∂μ
             −∂V/∂T   −∂V/∂P   −∂V/∂μ
              ∂N/∂T    ∂N/∂P    ∂N/∂μ  ),

is symmetric and positive semidefinite, the Maxwell relations (7.5) and the stability conditions (7.6) follow. ∎

Note that there are further stability conditions since the determinants of all principal submatrices of Δ″ must be nonnegative. In addition, since N_j ≥ 0, (7.3) implies that Δ is monotone increasing in each μ_j.
7.1.4 Example. The equilibrium behavior of electrically neutral gases at sufficiently low pressure can be modelled as ideal gases. An ideal gas is defined by a system function of the form

Δ(T, P, μ) = Σ_{j∈J} π_j(T) e^{μ_j/RT} − P,   (7.7)

where J labels the kinds of substances, the π_j(T) are positive functions of the temperature,

R ≈ 8.314 J K⁻¹ mol⁻¹   (7.8)

is the universal gas constant2, and we use the bracketing convention μ_j/RT = μ_j/(RT).

2 For the internationally recommended values of this and other constants, their accuracy, determination, and history, see CODATA [67].
Differentiation with respect to P shows that Ω = V is the system size, and from (7.1), (7.3), and (7.4), we find that, in equilibrium,

P = Σ_j π_j(T) e^{μ_j/RT},
N_j = (V π_j(T)/RT) e^{μ_j/RT},
S = V Σ_j (∂π_j(T)/∂T − μ_j π_j(T)/RT²) e^{μ_j/RT},
H = V Σ_j (T ∂π_j(T)/∂T − π_j(T)) e^{μ_j/RT}.

Expressed in terms of T, V, and the N_j, this gives

P V = RT Σ_j N_j,   μ_j = RT log(RT N_j/(V π_j(T))),

and

H = Σ_j h_j(T) N_j,   h_j(T) = RT (T ∂ log π_j(T)/∂T − 1),
from which S can be computed by means of the Euler equation (7.4). In particular, for one
mole of a single substance, defined by N = 1, we get the ideal gas law
P V = RT
(7.9)
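The chain of formulas above can be exercised numerically. The sketch below uses the hypothetical single-substance choice π(T) = π₀T^{5/2} with an arbitrary gauge constant π₀ (a monatomic-like form); it computes μ from N, then recovers P from the equation of state, confirming the ideal gas law:

import numpy as np

R = 8.314     # universal gas constant, J/(K mol)
pi0 = 1.0     # arbitrary gauge constant (hypothetical)

def pi_(T):
    return pi0 * T**2.5

T, V, N = 300.0, 0.024, 1.0   # one mole in 24 liters at 300 K

# Chemical potential from N = V pi(T) e^{mu/RT} / (RT):
mu = R * T * np.log(R * T * N / (V * pi_(T)))

# Equation of state Delta(T,P,mu) = pi(T) e^{mu/RT} - P = 0 fixes P:
P = pi_(T) * np.exp(mu / (R * T))
print(P, R * T * N / V)   # both equal: the ideal gas law P V = R T N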
Writing C_j(T) := dh_j(T)/dT for the molar heat capacities, h_j and π_j are determined up to integration constants by

h_j(T) = h_j(T₀) + ∫_{T₀}^T dT′ C_j(T′),

π_j(T) = π_j(T₀) exp ∫_{T₀}^T (dT′/T′) (1 + h_j(T′)/RT′).
Thus there are two undetermined integration constants for each kind of substance. These
cannot be determined experimentally as long as we are in the range of validity of the ideal gas approximation. Indeed, if we pick arbitrary constants α_j and γ_j and replace π_j(T), μ_j, H, and S by

π̃_j(T) := e^{γ_j − α_j/RT} π_j(T),
μ̃_j := μ_j + α_j − γ_j RT,
H̃ := H + Σ_j α_j N_j,
S̃ := S + R Σ_j γ_j N_j,
all relations remain unchanged. Thus, the Hamilton energy and the entropy of an ideal
gas are only determined up to an arbitrary linear combination of the mole numbers. This
is an instance of the deeper problem to determine under which conditions thermodynamic
variables are controllable; cf. the discussion in the context of Example 10.1.1 below.
This gauge freedom (present only in the ideal gas) can be fixed by choosing a particular standard temperature T₀ and setting arbitrarily h_j(T₀) = 0, μ_j(T₀) = 0. Alternatively, at sufficiently large temperature T, heat capacities are usually nearly constant, and making use of the gauge freedom, we may simply assume that

h_j(T) = h_{j0} T,   π_j(T) = π_{j0} T^{1 + h_{j0}/R}   for large T.
7.2 The laws of thermodynamics
In global equilibrium, all thermal variables are constant throughout the system, except at
phase boundaries, where the extensive variables may exhibit jumps and only the intensive
variables remain constant. This is sometimes referred to as the zeroth law of thermodynamics (Fowler & Guggenheim [89]) and characterizes global equilibrium; it allows
one to measure intensive variables (like temperature) by bringing a calibrated instrument
that is sensitive to this variable (for temperature a thermometer) into equilibrium with the
system to be measured.
For example, the ideal gas law (7.9) can be used as a basis for the construction of a gas
thermometer: The amount of expansion of volume in a long, thin tube can easily be
read off from a scale along the tube. We have V = aL, where a is the cross section area
and L is the length of the filled part of the tube, hence T = (aP/R)L. Thus, at constant
pressure, the temperature of the gas is proportional to L. For the history of temperature,
see Roller [240] and Truesdell [270].
We say that two thermodynamic systems are brought in good thermal contact if the joint
system tends after a short time to an equilibrium state. To measure the temperature of a
system, one brings it in thermal contact with a thermometer and waits until equilibrium is
established. The system and the thermometer will then have the same temperature, which
can be read off from the thermometer. If the system is much larger than the thermometer,
this temperature will be essentially the same as the temperature of the system before the
measurement. For a survey of the problems involved in defining and measuring temperature outside equilibrium, see Casas-Vázquez & Jou [58].
To be able to formulate the first law of thermodynamics, we need the concept of a reversible change of states, i.e., changes preserving the equilibrium condition. For use in later sections, we define the concept in a slightly more general form, writing μ for P and μ jointly. We need to assume that the system under study is embedded into its environment in such a way that, at the boundary, certain thermodynamic variables are kept constant (and independent of position). This determines the boundary conditions of the thermodynamic system; see the discussion in Section 7.3.
7.2.1 Definition. A state variable is an almost everywhere continuously differentiable function g(T, μ) defined on the state space (or on a subset of it). Temporal changes in a state variable that occur when the boundary conditions are kept fixed are called spontaneous changes. A reversible transformation is a continuously differentiable mapping

λ ↦ (T(λ), μ(λ))

from a real interval into the state space; thus Δ(T(λ), μ(λ)) = 0. The differential

dg = (∂g/∂T) dT + (∂g/∂μ) · dμ,   (7.10)

obtained by multiplying the chain rule by dλ, describes the change of a state variable g under arbitrary (infinitesimal) reversible transformations. In formal mathematical terms, differentials are exact linear forms on the state space manifold; cf. Chapter 17.
Reversible changes per se have nothing to do with changes in time. However, by sufficiently
slow, quasistatic changes of the boundary conditions, reversible changes can often be realized approximately as temporal changes. The degree to which this is possible determines
the efficiency of thermodynamic machines. The analysis of the efficiency by means of the
so-called Carnot cycle was the historical origin of thermodynamics.
The state space is often parameterized by different sets of state variables, as required by the application. If T = T(κ, λ), μ = μ(κ, λ) is such a parameterization then the state variable g(T, μ) can be written as a function of (κ, λ),

g(κ, λ) = g(T(κ, λ), μ(κ, λ)).   (7.11)

This notation, while mathematically ambiguous, is common in the literature; the names of the arguments decide which function is intended. When writing partial derivatives without arguments, this leads to serious ambiguities. These can be resolved by writing (∂g/∂λ) for the partial derivative of (7.11) with respect to λ; it can be evaluated using (7.10), giving the chain rule

(∂g/∂λ) = (∂g/∂T)(∂T/∂λ) + (∂g/∂μ) · (∂μ/∂λ).   (7.12)

Here the partial derivatives in the original parameterization by the intensive variables are written without parentheses.
Differentiating the equation of state (7.1), using the chain rule (7.10), and simplifying using (7.3) gives the Gibbs-Duhem equation

0 = S dT − V dP + N · dμ   (7.13)

for reversible changes. Together with the Euler equation (7.4), it implies the first law of thermodynamics

dH = T dS − P dV + μ · dN.   (7.14)
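As a numerical illustration, the Gibbs-Duhem equation (7.13) can be checked by finite differences along a curve on the state manifold; the sketch below assumes the hypothetical ideal-gas system function of Example 7.1.4 with a single substance and π(T) = T^{5/2}:

import numpy as np

R, V, N = 8.314, 0.024, 1.0
pi_ = lambda T: T**2.5   # hypothetical single-substance choice

def state(T):
    # equilibrium values at fixed V, N as functions of T
    mu = R * T * np.log(R * T * N / (V * pi_(T)))
    P = R * T * N / V
    h = R * T * (2.5 - 1.0)          # h(T) = RT (T d/dT log pi - 1), here 1.5 RT
    S = (h * N - mu * N + P * V) / T  # from the Euler equation (7.4)
    return P, mu, S

T, dT = 300.0, 1e-4
P1, mu1, S1 = state(T - dT)
P2, mu2, S2 = state(T + dT)
S = state(T)[2]
# S dT - V dP + N dmu should vanish along the curve:
print(S * (2 * dT) - V * (P2 - P1) + N * (mu2 - mu1))  # ~0 up to O(dT^2)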
Historically, the first law of thermodynamics took on this form only gradually, through
work by Mayer [188], Joule [143], Helmholtz [125], and Clausius [65].
Considering global equilibrium from a fundamental point of view, the extensive variables
are the variables that are conserved or at least change so slowly that they may be regarded
as time independent on the time scale of interest. In the absence of chemical reactions, the
mole numbers, the entropy, and the Hamilton energy are conserved; the volume is a system
size variable which, in the fundamental view, must be taken as infinite (thermodynamic
limit) to exclude the unavoidable interaction with the environment. However, real systems
are always in contact with their environment, and the conservation laws are approximate
only. In thermodynamics, the description of the system boundary is generally reduced to
the degrees of freedom observable at a given resolution.
The result of this reduced description (for derivations, see, e.g., Balian [20], Grabert [109], Rau & Müller [228]) is a dynamical effect called dissipation (Thomson [268]).
7.3 Consequences of the first law
The first law of thermodynamics describes the observable energy balance in a reversible process. The total energy flux dH into the system is composed of the thermal energy flux or heat flux T dS, the mechanical energy flux −P dV, and the chemical energy flux μ · dN.
The Gibbs-Duhem equation (7.13) describes the energy balance necessary to compensate the changes d(T S) = T dS + S dT of thermal energy, d(P V) = P dV + V dP of mechanical energy, and d(μ · N) = μ · dN + N · dμ of chemical energy in the energy contributions to the Euler equation, ensuring that the Euler equation remains valid during a reversible transformation. Indeed, both equations together imply that d(T S − P V + μ · N − H) vanishes, which expresses the preservation of the Euler equation.
Related to the various energy fluxes are the thermal work

Q = ∫ T(λ) dS(λ),

the mechanical work

W_mech = −∫ P(λ) dV(λ),

and the chemical work

W_chem = ∫ μ(λ) · dN(λ).
3 Note that the term closed system also has a much more general interpretation, which we do not use in this chapter, namely as a conservative dynamical system.
4 Thus, entropy is the modern replacement for the historical concepts of phlogiston and caloric, which
failed to give a correct account of heat phenomena. Phlogiston turned out to be missing oxygen, an
early analogue of the picture of positrons as holes, missing electrons, in the Dirac sea. Caloric was a
massless substance of heat which had almost the right properties, explained many effects correctly, and fell
out of favor only after it became known that caloric could be generated in arbitrarily large amounts from
mechanical energy, thus discrediting the idea of heat being a substance. (For the precise relation of entropy
and caloric, see Kuhn [163, 164], Walter [283], and the references quoted there.) In the modern picture,
the extensivity of entropy models the substance-like properties of the colloquial term heat. But as there
are no particles of space whose mole number is proportional to the volume, so there are no particles of heat
whose mole number is proportional to the entropy. Nevertheless, the introduction of heat particles on a
formal level has some uses; see, e.g., Streater [263].
defined physical quantities. For historical reasons, the words heat, power, and force are used
in physics with a meaning different from the colloquial terms heat, power, and force.
7.4 Consequences of the second law
The second law is centered around the impossibility of perpetual motion machines, due to the inevitable loss of energy by dissipation such as friction (see, e.g., Bowden & Leben [49]), uncontrolled radiation, etc. This means that, unless continually provided from the outside, energy is lost with time until a metastable state is attained, which usually is an equilibrium state. Therefore, the energy at equilibrium is minimal under the circumstances dictated by the boundary conditions. In a purely kinematic setting as in our treatment, the approach to equilibrium cannot be studied, and only the minimal energy principles, one for each set of boundary conditions, remain.
Traditionally, the second law is often expressed in the form of an extremal principle for some thermodynamic potential. We derive here the extremal principles for the Hamilton energy, the Helmholtz energy, and the Gibbs energy5, which give rise to the Hamilton potential

U(S, V, N) := max_{T,P,μ} {T S − P V + μ · N | Δ(T, P, μ) = 0; T > 0; P > 0},

the Helmholtz potential

F(T, V, N) := max_{P,μ} {−P V + μ · N | Δ(T, P, μ) = 0; P > 0},

and the Gibbs potential

G(T, P, N) := max_{μ} {μ · N | Δ(T, P, μ) = 0}.
The Gibbs potential is of particular importance for everyday processes since the latter
frequently happen at approximately constant temperature, pressure, and mole number.
(For other thermodynamic potentials used in practice, see Alberty [5]; for the maximum
entropy principle, see Section 10.7.)
7.4.1 Theorem. (Extremal principles)
(i) In an arbitrary state,

H ≥ U(S, V, N),   (7.16)

with equality iff the state is an equilibrium state. The remaining thermodynamic variables are then given by

T = ∂U/∂S(S, V, N),   P = −∂U/∂V(S, V, N),   μ = ∂U/∂N(S, V, N),   H = U(S, V, N).
The different potentials are related by so-called Legendre transforms; cf. Rockafellar [239] for the
mathematical properties of Legendre transforms, Arnold [16] for their application in mechanics, and
Alberty [5] for their application in chemistry.
(ii) In an arbitrary state,

H − T S ≥ F(T, V, N),   (7.17)

with equality iff the state is an equilibrium state. The remaining thermodynamic variables are then given by

S = −∂F/∂T(T, V, N),   P = −∂F/∂V(T, V, N),   μ = ∂F/∂N(T, V, N),   H = F(T, V, N) + T S.
In particular, an equilibrium state is uniquely determined by the values of T , V , and N.
(iii) In an arbitrary state,

H − T S + P V ≥ G(T, P, N),   (7.18)

with equality iff the state is an equilibrium state. The remaining thermodynamic variables are then given by

S = −∂G/∂T(T, P, N),   V = ∂G/∂P(T, P, N),   μ = ∂G/∂N(T, P, N),   H = G(T, P, N) + T S − P V.
In particular, an equilibrium state is uniquely determined by the values of T , P , and N.
Proof. We prove (ii); the other two cases are entirely similar. (7.17) and the statement about equality are a direct consequence of Axiom 7.1.2(iii)-(iv). Thus, the difference H − T S − F(T, V, N) takes its minimum value zero at the equilibrium value of T. Therefore, the derivative with respect to T vanishes, which gives the formula for S. To get the formulas for P and μ, we note that for constant T, the first law (7.14) implies

dF = d(H − T S) = dH − T dS = −P dV + μ · dN.

For a reversible transformation which only changes V or N_j, we conclude that dF = −P dV and dF = μ_j dN_j, respectively. Solving for P and μ_j, respectively, implies the formulas for P and μ_j. ∎
The above results imply that one can regard each thermodynamic potential as a complete
alternative way to describe the manifold of thermal states and hence all equilibrium properties. This is very important in practice, where one usually describes thermodynamic
material properties in terms of the Helmholtz or Gibbs potential, using models like NRTL
(Renon & Prausnitz [231], Prausnitz et al. [223]) or SAFT (Chapman et al. [59, 60]).
The additivity of extensive quantities is reflected in the corresponding properties of the
thermodynamic potentials:
7.4.2 Theorem. The potentials U(S, V, N), F(T, V, N), and G(T, P, N) satisfy, for real λ, λ₁, λ₂ ≥ 0,

U(λS, λV, λN) = λ U(S, V, N),   (7.19)
F(T, λV, λN) = λ F(T, V, N),   (7.20)
G(T, P, λN) = λ G(T, P, N),   (7.21)
U(λ₁S¹ + λ₂S², λ₁V¹ + λ₂V², λ₁N¹ + λ₂N²) ≤ λ₁ U(S¹, V¹, N¹) + λ₂ U(S², V², N²),   (7.22)
F(T, λ₁V¹ + λ₂V², λ₁N¹ + λ₂N²) ≤ λ₁ F(T, V¹, N¹) + λ₂ F(T, V², N²),   (7.23)
G(T, P, λ₁N¹ + λ₂N²) ≤ λ₁ G(T, P, N¹) + λ₂ G(T, P, N²).   (7.24)
Proof. The first three equations express homogeneity and are a direct consequence of the definitions. Inequality (7.23) holds since, for suitable P and μ,

F(T, λ₁V¹ + λ₂V², λ₁N¹ + λ₂N²) = −P(λ₁V¹ + λ₂V²) + μ · (λ₁N¹ + λ₂N²)
= λ₁(−P V¹ + μ · N¹) + λ₂(−P V² + μ · N²)
≤ λ₁ F(T, V¹, N¹) + λ₂ F(T, V², N²);
and the others follow in the same way. Specialized to λ₁ + λ₂ = 1, the inequalities express the claimed convexity. ∎
Equilibrium requires that Σ_k G(T, P, N^k) is minimal among all choices with Σ_k N^k = N, and by introducing a Lagrange multiplier vector μ* for the constraints, we see that in equilibrium, the derivative of Σ_k (G(T, P, N^k) − μ* · N^k) with respect to each N^k must vanish. This implies that

μ^k = ∂G/∂N^k(T, P, N^k) = μ*.

Thus, in equilibrium, all μ^k must be the same.
Thus, in equilibrium, all k must be the same. At constant T , V , and N, one can apply the
same argument to the Helmholtz potential, and at constant S, V , and N to the Hamilton
potential. In each case, the equilibrium is characterized by the constancy of the intensive
parameters.
The degree to which macroscopic space and time correlations are absent characterizes the
amount of macroscopic disorder of a system. Global equilibrium states are therefore
macroscopically highly uniform; they are the most ordered macroscopic states in the universe rather than the most disordered ones. A system not in global equilibrium is characterized by macroscopic local inhomogeneities, indicating that the space-independent global
equilibrium variables alone are not sufficient to describe the system. Its intrinsic complexity is apparent only in a microscopic treatment; cf. Section 10.6 below. The only
macroscopic shadow of this complexity is the critical opalescence of fluids near a critical
point (Andrews [14], Forster [88]). The contents of the second law of thermodynamics
for global equilibrium states may therefore be phrased informally as follows: In global equilibrium, macroscopic order (homogeneity) is perfect and microscopic complexity is maximal.
In particular, the traditional interpretation of entropy as a measure of disorder is often misleading. Much more carefully argued support for this statement, with numerous examples
from teaching practice, is in Lambert [167].
7.4.3 Theorem. (Entropy form of the second law)
In an arbitrary state of a standard thermodynamic system,

S ≤ S(H, V, N) := min_{T,P,μ} {T⁻¹(H + P V − μ · N) | Δ(T, P, μ) = 0},   (7.25)

with equality iff the state is an equilibrium state. The remaining thermal variables are then given by

T⁻¹ = ∂S/∂H(H, V, N),   T⁻¹P = ∂S/∂V(H, V, N),   T⁻¹μ = −∂S/∂N(H, V, N),   (7.26)

U = H = T S(H, V, N) − P V + μ · N.

Proof. This is proved in the same way as Theorem 7.4.1. ∎
This result implies that when a system in which H, V and N are kept constant reaches
equilibrium, the entropy must have increased. Unfortunately, the assumption of constant
H, V and N is unrealistic; such constraints are not easily realized in nature. Under different
constraints6 , the entropy is no longer maximal.
In systems with several phases, a naive interpretation of the second law as moving systems
towards increasing disorder is even more inappropriate: A mixture of water and oil spontaneously separates, thus ordering the water molecules and the oil molecules into separate
phases!
Thus, while the second law in the form of a maximum principle for the entropy has some
theoretical and historical relevance, it is not the extremal principle ruling nature. The
irreversible nature of physical processes is instead manifest as energy dissipation which,
in a microscopic interpretation, indicates the loss of energy to the unmodelled microscopic
6 For example, if one pours milk into a cup of coffee, stirring mixes coffee and milk, thus increasing complexity. Macroscopic order is restored after some time when this increased complexity has become macroscopically inaccessible. Since T, P and N are constant, the cup of coffee ends up in a state of minimal Gibbs energy, and not in a state of maximal entropy! More formally, the first law shows that, for standard systems at fixed value of the mole number, the value of the entropy decreases when H or V (or both) decrease reversibly; this shows that the value of the entropy may well decrease if accompanied by a corresponding decrease of H or V. The same holds out of equilibrium (though our equilibrium argument no longer applies); for example, the reaction 2 H₂ + O₂ → 2 H₂O (if catalyzed) may happen spontaneously at constant T = 25 °C and P = 1 atm, though it decreases the entropy.
degrees of freedom. Macroscopically, the global equilibrium states are therefore states of
least free energy, the correct choice of which depends on the boundary condition, with
the least possible freedom for change. This macroscopic immutability is another intuitive
explanation for the maximal macroscopic order in global equilibrium states.
7.5 The approach to equilibrium
Using only the present axioms, one can say a little bit about the behavior of a system close to equilibrium in the following, idealized situation. Suppose that a system at constant S, V, and N which is close to equilibrium at some time t reaches equilibrium at some later time t*. Then the second law implies

0 ≤ H(t) − H(t*) ≈ (t − t*) dH/dt,

so that dH/dt ≤ 0. We assume that the system is composed of two parts, which are both in equilibrium at times t and t*. Then the time shift induces on both parts a reversible transformation, and the first law can be applied to them. Thus

dH = Σ_{k=1,2} dH^k = Σ_{k=1,2} (T^k dS^k − P^k dV^k + μ^k · dN^k).
as differences in electrical potentials create electrical currents, a flow of electricity (electrons)7 . While these dynamical issues are outside the scope of the present work, they
motivate the fact that one can control some intensive parameters of the system by controlling the corresponding intensive parameters of the environment and making the walls
permeable to the corresponding extensive quantities. This corresponds to standard procedures familiar to everyone from ordinary life, such as: heating to change the temperature;
applying pressure to change the volume; immersion into a substance to change the chemical
composition; or, in the more general thermal models discussed in Section 10.1, applying
forces to displace an object.
The stronger nonequilibrium version of the second law says that (for suitable boundary conditions) equilibrium is actually attained after some time (strictly speaking, only in the limit of infinite time). This implies that the energy difference

ΔE := H − U(S, V, N) = H − T S − F(T, V, N) = H − T S + P V − G(T, P, N)

is the amount of energy that is dissipated in order to reach equilibrium.
setting, we can only compare what happens to a system prepared in a nonequilibrium state
assuming that, subsequently, the full energy difference E is dissipated so that the system
ends up in an equilibrium state. Since few variables describe everything of interest, this
constitutes the power of equilibrium thermodynamics. But this power is limited, since equilibrium thermodynamics is silent about when or whether at all equilibrium is reached.
Indeed, in many cases, only metastable states are reached, which change too slowly to ever reach equilibrium on a human time scale. Typical examples of this are crystal defects, which constitute nonglobal minima of the free energy; the global minimum would be a perfect crystal.
7.6 Description levels
As we have seen, extensive and intensive variables play completely different roles in equilibrium thermodynamics. Extensive variables such as mass, charge, or volume depend
additively on the size of the system. The conjugate intensive variables act as parameters
defining the state.
A system composed of many small subsystems, each in equilibrium, needs for its complete
characterization the values of the extensive and intensive variables in each subsystem. Such
a system is in global equilibrium only if its intensive variables are independent of the
subsystem. On the other hand, the values of the extensive variables may jump at phase
space boundaries, if (as is the case for multi-phase systems) the equations of state allow
multiple values for the extensive variables to correspond to the same values of the intensive
variables. If the intensive variables are not independent of the subsystem then, by the
second law, the differences in the intensive variables of adjacent subsystems give rise to
thermodynamic forces trying to move the system towards equilibrium.
7 See Table 10.1 for more parallels in other thermodynamic systems, and Fuchs [94] (or, in German, Job [142]) for a thermodynamics course thoroughly exploiting these parallels.
A real nonequilibrium system does not actually consist of subsystems in equilibrium; however, typically, smaller and smaller pieces behave more and more like equilibrium systems.
Thus we may view a real system as the continuum limit of a larger and larger number of
smaller and smaller subsystems, each in approximate equilibrium. As a result, the extensive
and intensive variables become fields depending on the continuum variables used to label
the subsystems. For extensive variables, the integral of their fields over the label space gives
the bulk value of the extensive quantity; thus the fields themselves have a natural interpretation as a density. For intensive variables, an interpretation as a density is physically
meaningless; instead, they have a natural interpretation as field strengths. The gradients
of their fields have physical significance as the sources for thermodynamic forces.
From this field theory perspective, the extensive variables in the single-phase global equilibrium case have constant densities, and their bulk values are the densities multiplied by the
system size (which might be mass, or volume, or another additive parameter), hence scale
linearly with the size of the system, while intensive variables are invariant under a change
of system size. We do not use the alternative convention to call extensive any variable that
scales linearly with the system size, and intensive any variable that is invariant under a
change of system size.
We distinguish four nested levels of thermal descriptions, depending on whether the system
is considered to be in global, local, microlocal, or quantum equilibrium. The highest and
computationally simplest level, global equilibrium, is concerned with macroscopic situations characterized by finitely many space- and time-independent variables. The next level,
local equilibrium, treats macroscopic situations in a continuum mechanical description,
where the equilibrium subsystems are labeled by the space coordinates. Therefore the relevant variables are finitely many space- and time-dependent fields. The next deeper level,
microlocal8 equilibrium, treats mesoscopic situations in a kinetic description, where the
equilibrium subsystems are labeled by phase space coordinates. The relevant variables are
now finitely many fields depending on time, position, and momentum; cf. Balian [18]. The
bottom level is the microscopic regime, where we must consider quantum equilibrium.
This no longer fits a thermodynamic framework but must be described in terms of quantum
dynamical semigroups; see Section 10.2.
The relations between the different description levels are discussed in Section 10.2. Apart
from descriptions on these clear-cut levels, there are also various hybrid descriptions, where
some part of a system is described on a more detailed level than the remaining parts, or
where, as for stirred chemical reactions, the fields are considered to be spatially homogeneous and only the time-dependence matters.
What was said at the beginning of Section 7.2 about measuring intensive variables like temperature applies in principle also in local or microlocal equilibrium, but with fields in place
of variables. The extensive variables are now densities represented by distributions that
can be meaningfully integrated over bounded regions (the domains of contact with a measuring instrument), whereas intensive variables are nonsingular fields (e.g., pressure) whose
integrals denote after divistion by the size of the domain of contact with an instrument
8
The term microlocal for a phase space dependent analysis is taken from the literature on partial
differential equations; see, e.g., Martinez [1].
Chapter 8
Quantities, states, and statistics
When considered in sufficient detail, no physical system is truly in global equilibrium; one
can always find smaller or larger deviations. To describe these deviations, extra variables
are needed, resulting in a more complete but also more complex model. At even higher
resolution, this model is again imperfect and an approximation to an even more complex,
better model. This refinement process may be repeated in several stages. At the most
detailed stages, we transcend the frontier of current knowledge in physics, but even as this
frontier recedes, deeper and deeper stages with unknown details are imaginable.
Therefore, it is desirable to have a meta-description of thermodynamics that, starting with
a detailed model, allows to deduce the properties of each coarser model, in a way that all
description levels are consistent with the current state of the art in physics. Moreover, the
results should be as independent as possible of unknown details at the lower levels. This
meta-description is the subject of statistical mechanics.
This chapter introduces the technical machinery of statistical mechanics, Gibbs states and
the partition function, in a uniform way common to classical mechanics and quantum
mechanics. As in the phenomenological case, the intensive variables determine the state
(which now is a more abstract object), whereas the extensive variables now appear as
values of other abstract objects called quantities. This change of setting allows the natural
incorporation of quantum mechanics, where quantities need not commute, while values are
numbers observable in principle, hence must satisfy the commutative law.
The operational meaning of the abstract concepts of quantities, states, and values introduced in the following becomes apparent once we have recovered the phenomenological results of Chapter 7 from the abstract theory developed in this and the next chapter. Chapter 10 discusses in more detail how the theory relates to experiment.
8.1 Quantities
Any fundamental description of physical systems must give account of the numerical values
of quantities observable in experiments when the system under consideration is in a specified
state. Moreover, the form and meaning of states, and of what is observable in principle,
must be clearly defined. We consider an axiomatic conceptual foundation on the basis of quantities1 and their values, consistent with the conventions adopted by the International System of Units (SI) [265], which declares: A quantity in the general sense is a property
ascribed to phenomena, bodies, or substances that can be quantified for, or assigned to, a
particular phenomenon, body, or substance. [...] The value of a physical quantity is the
quantitative expression of a particular physical quantity as the product of a number and a
unit, the number being its numerical value.
In different states, the quantities of a given system may have different values; the state
(equivalently, the values determined by it) characterizes an individual system at a particular
time. Theory must therefore define what to consider as quantities, what as states, and how
a state assigns values to a quantity. Since quantities can be added, multiplied, compared,
and integrated, the set of all quantities has an elaborate structure whose properties we
formulate after the discussion of the following motivating example.
8.1.1 Example. As a simple example satisfying the axioms to be introduced, the reader may think of an N-level quantum system. The quantities are the elements of the algebra E = C^{N×N} of square complex N×N matrices, the constants are the multiples of the identity matrix, the conjugate f* of f is given by conjugate transposition, and the integral ∫g = tr g is the trace, the sum of the diagonal entries or, equivalently, the sum of the eigenvalues. The standard basis consisting of the N unit vectors e_k with a one in component k and zeros in all other components corresponds to the N levels of the quantum system. The Hamiltonian H is represented by a diagonal matrix H = Diag(E₁, ..., E_N) whose diagonal entries E_k are the energy levels of the system. In the nondegenerate case, all E_k are distinct, and the diagonal matrices comprise all functions of H. Quantities given by arbitrary nondiagonal matrices are less easy to interpret. However, an important class of quantities are the matrices of the form P = ψψ*, where ψ is a vector of norm 1; they satisfy P² = P = P* and are the quantities observed in binary measurements such as detector clicks; see Section 10.5. The states of the N-level system are mappings defined by a density matrix ρ ∈ E, a positive semidefinite Hermitian matrix with trace one, assigning to each quantity f ∈ E the value ⟨f⟩ = tr ρf of f in this state. The diagonal entries p_k := ρ_kk represent the probability for obtaining a response in a binary test for the kth quantum level; the off-diagonal entries ρ_jk represent deviations from a classical mixture of quantum levels.
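The following sketch realizes this example numerically for N = 3; the particular density matrix and the vector ψ are arbitrary choices:

import numpy as np

N = 3
H = np.diag([0.0, 1.0, 2.0])                 # Hamiltonian with levels E_k

# A density matrix: positive semidefinite, Hermitian, trace one.
A = np.array([[1, 0.2j, 0], [-0.2j, 1, 0.1], [0, 0.1, 0.5]])
rho = A @ A.conj().T
rho /= np.trace(rho).real

value = lambda f: np.trace(rho @ f)
print(value(H).real)                         # the value <H>
print(np.diag(rho).real)                     # probabilities p_k = rho_kk

# A binary-test quantity P = psi psi^* with |psi| = 1 satisfies P^2 = P = P^*.
psi = np.array([1, 1j, 0]) / np.sqrt(2)
P = np.outer(psi, psi.conj())
print(np.allclose(P @ P, P), np.allclose(P.conj().T, P))
print(value(P).real)                         # detection probability in state rho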
8.1.2 Definition.
1 Quantities are formal, numerical properties associated to a given system in a given state. We deliberately avoid the notion of observables, since it is not clear on a fundamental level what it means to observe something, and since many things (such as the fine structure constant, neutrino masses, decay rates, scattering cross sections) which can be observed in nature are only indirectly related to what is traditionally called an observable in quantum mechanics. The related problem of how to interpret measurements is discussed in Section 10.4.
(i) A ∗-algebra is a set E whose elements are called quantities, together with operations on E defining for any two quantities f, g ∈ E the sum f + g ∈ E, the product fg ∈ E, and the conjugate f* ∈ E, such that the following axioms (Q1)-(Q4) hold for all α ∈ C and all f, g, h ∈ E:

(Q1) C ⊆ E, i.e., complex numbers are special elements called constants, for which addition, multiplication and conjugation have their traditional meaning.

(Q2) (fg)h = f(gh), αf = fα, 0f = 0, 1f = f.

(Q3) (f + g) + h = f + (g + h), f(g + h) = fg + fh, f + 0 = f.

(Q4) f** = f, (fg)* = g*f*, (f + g)* = f* + g*.

(ii) A ∗-algebra E is called commutative if fg = gf for all f, g ∈ E, and noncommutative otherwise. The ∗-algebra E is called nondegenerate if

(Q5) f*f = 0 ⟹ f = 0.
(iii) We introduce the notation

[f, g] := fg − gf,
f⁰ := 1,  f^l := f^{l−1} f  (l = 1, 2, ...),
Re f := (1/2)(f + f*),  Im f := (1/2i)(f − f*)

for f, g ∈ E. [f, g] is called the commutator of f and g, and Re f, Im f are referred to as the real part (or Hermitian part) and imaginary part of f, respectively. f ∈ E is called Hermitian if f* = f.
(iv) A ∗-homomorphism is a mapping φ from a ∗-algebra E with unity to another (or the same) ∗-algebra E′ with unity such that

φ(f + g) = φ(f) + φ(g),  φ(αf) = αφ(f),   (8.1)

φ(fg) = φ(f)φ(g),  φ(f*) = φ(f)*,  φ(1) = 1.   (8.2)
8.1.4 Definition.

(i) The ∗-algebra E is called partially ordered if there is a partial order ≤ satisfying the following axioms (Q6)-(Q9) for all f, g, h ∈ E:

(Q6) ≤ is reflexive (f ≤ f), antisymmetric (f ≤ g and g ≤ f imply f = g), and transitive (f ≤ g ≤ h ⟹ f ≤ h).

(Q7) f ≤ g ⟹ f + h ≤ g + h.

(Q8) f ≥ 0 ⟹ f* = f and g*fg ≥ 0.

(Q9) 1 ≥ 0.

We introduce the notation

f ≥ g :⟺ g ≤ f,   ‖f‖ := inf{α ∈ R | f*f ≤ α², α ≥ 0},

where the infimum of the empty set is taken to be ∞. The number ‖f‖ is referred to as the (spectral) norm of f. An element f ∈ E is called bounded if ‖f‖ < ∞. The uniform topology is the topology induced on E by declaring as open sets arbitrary unions of finite intersections of the open balls {f ∈ E | ‖f − f₀‖ < ε} for some ε > 0 and some f₀ ∈ E.
8.1.5 Proposition.

(i) For all quantities f, g, h ∈ E and α ∈ C,

f*f ≥ 0,  ff* ≥ 0,   (8.3)

‖f‖ = 0 ⟺ f = 0,   (8.4)

f ≤ g ⟹ h*fh ≤ h*gh,   (8.5)

f*g + g*f ≤ 2‖f‖ ‖g‖,   (8.6)

‖αf‖ = |α| ‖f‖,  ‖f ± g‖ ≤ ‖f‖ + ‖g‖,   (8.7)

‖fg‖ ≤ ‖f‖ ‖g‖.   (8.8)

(ii) Among the complex numbers, precisely the nonnegative real numbers α satisfy α ≥ 0.

Proof. (i) (8.3) follows from the case f = 1 of (Q8) by substituting then f or f* for g. (8.4) follows from (8.3), the definition of the norm, and (Q5). To prove (8.5), we deduce from f ≤ g and (Q7) that g − f ≥ 0, then use (Q8) to find h*gh − h*fh = h*(g − f)h ≥ 0, hence h*fh ≤ h*gh. Specializing to h = √|α| then gives |α|f ≤ |α|g.

To prove (8.6), let α = ‖f‖, β = ‖g‖. Then f*f ≤ α² and g*g ≤ β². Since

0 ≤ (βf − αg)*(βf − αg) = β² f*f − αβ(f*g + g*f) + α² g*g ≤ 2α²β² − αβ(f*g + g*f),

f*g + g*f ≤ 2αβ if αβ ≠ 0, and for αβ = 0, the same follows from (8.4). Therefore (8.6) holds. The first half of (8.7) is trivial, and the second half follows for the plus sign from

(f + g)*(f + g) = f*f + f*g + g*f + g*g ≤ α² + 2αβ + β² = (α + β)²,

and then for the minus sign from the first half. Finally, by (8.5),

(fg)*(fg) = g*f*fg ≤ g* α² g = α² g*g ≤ α²β².

This implies (8.8). ∎
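For E = C^{N×N}, where the norm just defined is the usual spectral (matrix 2-) norm, the inequalities (8.6)-(8.8) can be spot-checked with random matrices; the sketch below does this for a pair of random 4×4 matrices:

import numpy as np

rng = np.random.default_rng(0)
N = 4
rand = lambda: rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
f, g = rand(), rand()
norm = lambda a: np.linalg.norm(a, 2)   # spectral norm

# (8.6): f^* g + g^* f <= 2 ||f|| ||g||, as an inequality of Hermitian matrices.
lhs = f.conj().T @ g + g.conj().T @ f
gap = 2 * norm(f) * norm(g) * np.eye(N) - lhs
print(np.linalg.eigvalsh(gap).min() >= -1e-12)   # True: the gap is psd

# (8.7) and (8.8):
print(norm(f + g) <= norm(f) + norm(g) + 1e-12)  # True
print(norm(f @ g) <= norm(f) * norm(g) + 1e-12)  # True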
∫(h + h′) = ∫h + ∫h′,   ∫(αh) = α∫h,   ∫gh = ∫hg,
(EA3) ∫h*h > 0 if h ≠ 0 (nondegeneracy),

(EA4) ∫h*gh = 0 for all strongly integrable h ⟹ g = 0,

(EA5) h_l*h_l ↓ 0 ⟹ ∫gh_l → 0,

(EA6) h_l ↓ 0, inf h_l = 0 ⟹ ∫h_l*gh_l ↓ 0 (Dini property).

Here, integrals extend over the longest following product or quotient (in contrast to differential operators, which act on the shortest syntactically meaningful term), and the monotonic limit is defined by g_l ↓ 0 iff, for every strongly integrable h, the sequence (or net) ∫h*g_l h consists of real numbers converging monotonically decreasing to zero.
Note that the integral can often be naturally extended from strongly integrable quantities
to a significantly larger space of integrable quantities.
8.1.7 Proposition. For strongly integrable f, g,

∫gf = 0 for all f ∈ E ⟹ g = 0,   (8.9)

(∫g)* = ∫g*.   (8.10)
We now describe the basic Euclidean ∗-algebras relevant in nonrelativistic physics. However, the remainder is completely independent of the details of how the axioms are realized; a specific realization is needed only when doing specific quantitative calculations.
8.1.8 Examples.
(i) (N-level quantum systems) The simplest family of Euclidean ∗-algebras is the algebra E = C^{N×N} of square complex N×N matrices; cf. Example 8.1.1. Here the quantities are square matrices, the constants are the multiples of the identity matrix, the conjugate is conjugate transposition, and the integral is the trace, the sum of the diagonal entries or, equivalently, the sum of the eigenvalues. In particular, all quantities are strongly integrable.
(ii) (Nonrelativistic classical mechanics) An atomic N-particle system is described in classical mechanics by the phase space R^{6N} with six coordinates, position x_a ∈ R³ and momentum p_a ∈ R³, for each particle. The algebra

E_N := C^∞(R^{6N})

of smooth complex-valued functions g(x_{1:N}, p_{1:N}) is a Euclidean ∗-algebra with complex conjugation as conjugate and the Liouville integral

∫g = C ∫ dp_{1:N} dx_{1:N} g(x_{1:N}, p_{1:N}),
where C is a positive constant. Strongly integrable quantities are the Schwartz functions
in E.2 The axioms are easily verified.
(iii) (Classical fluids) A fluid is classically described by an atomic system with an indefinite number of particles. The appropriate Euclidean ∗-algebra for a single species of monatomic particles is the direct sum E = ⊕_{N≥0} E_N^sym, whose quantities are infinite sequences g = (g₀, g₁, ...) of g_N ∈ E_N^sym, with E_N^sym consisting of all permutation-invariant functions from E_N as in (ii), and weighted Liouville integral

∫g = Σ_{N≥0} (1/C_N) ∫ dp_{1:N} dx_{1:N} g_N(x_{1:N}, p_{1:N}).

Here C_N is a symmetry factor for the symmetry group of the N-particle system, which equals h^{3N} N! for indistinguishable particles; h = 2πħ is Planck's constant. This accounts for the Maxwell statistics and gives the correct entropy of mixing. Classical fluids with monatomic particles of several different kinds require a tensor product of several such algebras, and classical fluids composed of molecules require additional degrees of freedom to account for the rotation and vibration of the molecules.
(iv) (Nonrelativistic quantum mechanics) Let H be a Euclidean space, a dense subspace of a Hilbert space. Then the algebra E := Lin H of continuous linear operators on H is a Euclidean ∗-algebra with the adjoint as conjugate and the quantum integral

∫g = tr g,

given by the trace of the quantity in the integrand. Strongly integrable quantities are the operators g ∈ E which are trace class; this includes all linear operators of finite rank. Again, the axioms are easily verified. In the quantum context, Hermitian quantities f are often referred to as observables; but we do not use this term here.
We end this section by stating some results needed later. The exposition in this and the
next chapter is fully rigorous if the statements of Proposition 8.1.9 and Proposition 8.1.10
are assumed in addition to (EA1)(EA6). We prove these propositions only in case that E
is finite-dimensional3 . But they can also be proved if the quantities involved are smooth
functions, or if they have a spectral resolution; cf., e.g., Thirring [267] (who works in the framework of C*-algebras and von Neumann algebras).
2 A Schwartz function is a smooth function f for which all products z₁^{k₁} ··· z_n^{k_n} ∂^{l₁+···+l_n} f(z)/∂z₁^{l₁} ··· ∂z_n^{l_n} remain bounded.

3 We'd appreciate being informed about possible proofs in general that only use the properties of Euclidean ∗-algebras (and perhaps further, elementary assumptions).
(e^f)* = e^{f*},
e^f g = g e^f if f and g commute,
f* = f ⟹ e^f ≥ 0,
log e^f = f,
f ≥ 0 ⟹ √f ≥ 0, (√f)² = f,

Proof. In finite dimensions, the first five assertions are standard matrix calculus, and the remaining two statements hold since ∫f must be a finite linear combination of the components of f. ∎
d/dλ ∫ f₁ ⋯ f_{m+n} = Σ_{j=1}^{m+n} ∫ f₁ ⋯ f_{j−1} (df_j/dλ) f_{j+1} ⋯ f_{m+n} = Σ_{j=1}^{m+n} ∫ (df_j/dλ) f_{j+1} ⋯ f_{m+n} f₁ ⋯ f_{j−1}.   (8.11)
Of course, the proposition generalizes to families of more than two commuting quantities;
but more important is the special case g = f :
8.1.11 Corollary. For any quantity f depending continuously differentiably on a parameter vector λ, and any continuously differentiable function F of a single variable,

d/dλ ∫ F(f) = ∫ F′(f) df/dλ.   (8.14)

8.2 Gibbs states
Our next task is to specify the formal properties of the value of a quantity.
8.2.1 Definition. A state is a mapping that assigns to all quantities f from a subspace of E containing all bounded quantities its value ⟨f⟩ ∈ C such that for all f, g ∈ E, α ∈ C,

(E1) ⟨1⟩ = 1, ⟨f*⟩ = ⟨f⟩*, ⟨f + g⟩ = ⟨f⟩ + ⟨g⟩,
(E2) ⟨αf⟩ = α⟨f⟩,
(E3) if f ≥ 0 then ⟨f⟩ ≥ 0,
(E4) if f_l ∈ E, f_l ↓ 0 then ⟨f_l⟩ ↓ 0.
Note that this formal definition of a state (always used in the remainder of the book) differs from the phenomenological thermodynamic states defined in Section 7.1. The connection between the two notions will be made in Section 9.2.
Statistical mechanics essentially originated with Josiah Willard Gibbs, whose 1902 book
Gibbs [102] on (at that time of course only classical) statistical mechanics is still readable.
See Uffink [273] for a history of the subject.
All states arising in thermodynamics have the following particular form.
8.2.2 Definition. A Gibbs state is defined by assigning to any g ∈ E the value

⟨g⟩ := ∫ e^{−S/k̄} g,   (8.15)

where S, called the entropy of the state, is a Hermitian quantity with strongly integrable e^{−S/k̄}, satisfying the normalization condition

∫ e^{−S/k̄} = 1.   (8.16)
The Boltzmann constant

k̄ ≈ 1.38065 · 10⁻²³ J/K   (8.17)

defines the units in which the entropy is measured. In analogy4 with Planck's constant ħ, we write k̄ in place of the customary k or k_B, in order to be free to use the letter k for other purposes. By a change of units one can enforce any value of k̄.
Chemists use, instead of the particle number N, the corresponding mole number, which differs by a fixed numerical factor, the Avogadro constant

N_A = R/k̄ ≈ 6.02214 · 10²³ mol⁻¹,

where R is the universal gas constant (7.8). As a result, all results from statistical mechanics may be translated to phenomenological thermodynamics by setting k̄ = R, corresponding to setting 1 mol = 6.02214 · 10²³, the number of particles in one mole of a pure substance.
What is here called entropy has a variety of alternative names in the literature on statistical mechanics. For example, Gibbs [102], who first noticed the rich thermodynamic
implications of states defined by (8.15), called S the index of probability; Alhassid &
Levine [7] and Balian [18] use the name surprisal for S. Our terminology is close to that
of Mrugala et al. [196], who call S the microscopic entropy, and Hassan et al. [122],
who call S the information(al) entropy operator. What is traditionally (and in Section
7.1)
R
called entropy and denoted by S is in the present setting the value S = hSi = eS/k S.
8.2.3 Theorem.
(i) A Gibbs state determines its entropy uniquely.
(ii) For any Hermitian quantity f with strongly integrable ef , the mapping hif defined by
R
hgif := Zf1 ef g,
where Zf := ef ,
(8.18)
Sf := k (f + log Zf ).
(8.19)
(iii) The KMS condition (cf. Kubo [161], Martin & Schwinger [186])
hghif = hhQf gi for bounded g, h
(8.20)
for all g, hence (8.9) gives eS/k eS /k = 0. This implies that eS/k = eS /k , hence
S = S by Proposition 8.1.9.
4
As we shall see in (20.20) and (9.43), h and k play indeed analogous roles in quantum mechanical and
thermodynamic uncertainty relations.
189
hhif = hg gif = g g = gg 0.
g = h and
Zf hgif = ef hh = h ef h = k k 0.
This implies (E3). the other axioms (E1)(E4) follow easily from the corresponding properties of the integral. Thus hif is a state. Finally, with the definition (8.19), we have
Zf1 ef = ef log Zf = eSf /k ,
whence hif is a Gibbs state.
R
Note that the state (8.18) is unaltered when f is shifted by a constant. Qf is called the
modular automorphism of the state hif since Qf (gh) = Qf (g)Qf (h); for a classical
system, Qf is the identity. In the following, we shall not make use of the KMS condition;
however, it plays an important role in the mathematics of the thermodynamic limit (cf.
Thirring [267]).
Zf is called the partition function of f ; it is a function of whatever parameters appear
in a particular form given to f in the applications. A large part of traditional statistical
mechanics is concerned with the calculation, for given f , of the partition function Zf and
of the values hgif for selected quantities g. As we shall see, the basic results of statistical
mechanics are completely independent of the details involved, and it is this basic part that
we concentrate upon in this book.
8.2.4 Example. A canonical ensemble5 , is defined as a Gibbs state whose entropy is
an affine function of a Hermitian quantity H, called the Hamiltonian:
S = H + const,
with a constant depending on , computable from (8.19) and the partition function
R
Z = eH
5
190
where the En (n N ) are the energy levels, the eigenvalues of H. If the spectrum of H
is known, this leads to explicit formulas for Z. For example, a two level system is defined
by the energy levels 0, E (or E0 and E0 + E, which gives the same results), and has
Z = 1 + eE .
(8.22)
It describes a single Fermion mode, but also many other systems at low temperature; cf.
(9.57). In particular, it is the basis of laser-induced chemical reactions in photochemistry
(see, e.g., Karlov [144], Murov et al. [198]), where only two electronic energy levels (the
ground state and the first excited state) are relevant; cf. the discussion of (9.57) below.
For a harmonic oscillator, defined by the energy levels nE, n = 0, 1, 2, . . . and describing
a single Boson mode, we have
Z=
X
n=0
enE = (1 eE )1 .
Independent modes are modelled by taking tensor products of single mode algebras and
adding their Hamiltonians, leading to spectra which are obtained by summing the eigenvalues of the modes in all possible ways. The resulting partition function is the product
of the single-mode partition functions. ] From here, a thermodynamic limit leads to the
properties of ideal gases. Then nonideal gases due to interactions can be handled using the
cumulant expansion, as indicated at the end of Section 8.3. The details are outside the
scope of this book.
Since the Hamiltonian can be any Hermitian quantity, the quantum partition function
formula (8.21) can in principle be used to compute the partition function of arbitrary
quantized Hermitian quantities.
8.3
The negative logarithm of the partition function, the so-called generating functional, plays
a fundamental role in statistical mechanics.
We first discuss a number of general properties, discovered by Gibbs [102], Peierls [213],
Bogoliubov [37], Kubo [162], Mori [194], and Griffiths [112]. The somewhat technical
setting involving the Kubo inner product is necessary to handle noncommuting quantities
correctly; everything would be much easier in the classical case. On a first reading, the
proofs in this section may be skipped.
191
8.3.1 Proposition. Let f be Hermitian such that esf is strongly integrable for all s
[1, 1]. Then
hg; hif := hgEf hif ,
(8.23)
where Ef is the linear mapping defined for Hermitian f by
Z 1
Ef h :=
ds esf hesf ,
0
defines a bilinear, positive definite inner product h ; if on the algebra of quantities, called
the Kubo (or Mori or Bogoliubov) inner product. For all f, g, the following relations
hold:
hg; hif = hh ; g if ,
(8.24)
hg ; gif > 0 if g 6= 0,
if g C,
(8.26)
if g or h commutes with f ,
(8.27)
(8.25)
if g commutes with f .
(8.28)
(8.29)
= h(gEf h) if = h(Ef h) g if =
DZ
sf sf
ds e h e
dshesf h esf g if .
by (EA2), hence
Z 1
D Z
sf sf
dshh e g e if = h
hg; hif =
ds esf g esf
= hh Ef g if = hh ; g if .
so that
hg ; gif = hg Ef gif =
This proves (8.25), and shows that the Kubo inner product is positive definite.
192
dsg = g,
0
giving (8.28). The definition of the Kubo inner product then implies (8.27), and taking
g C gives (8.26).
(iv) The function q on [0, 1] defined by
Z t
d
df
q(t) :=
ds esf esf +
etf etf
d
d
0
satisfies q(0) = 0 and
d
d
d
tf df tf
tf
f etf + (etf f )etf = 0.
q(t) = e
e +
e
dt
d
d
d
(8.30)
If the f () commute for all values of then the quantum chain rule reduces to the classical
df
df
df
; hence Ef d
= d
, and Ef df = df .
chain rule. Indeed, then f commutes also with d
The following theorem is central to the mathematics of statistical mechanics. As will be
apparent from the discussion in the next chapter, part (i) is the abstract mathematical
form of the second law of thermodynamics, part (ii) allows the actual computation of
thermal properties from microscopic assumptions, and part (iii) is the abstract form of the
first law.
8.3.2 Theorem. Let f be Hermitian such that esf is strongly integrable for all s [1, 1].
(i) The generating functional
R
W (f ) := log ef
(8.31)
(8.32)
(8.33)
193
2
(hgi2f hg; gif ) + O( 3 )
2
(8.34)
(8.35)
dW (f ) = hdf if .
(8.36)
S = k (f W (f )).
(8.37)
On the other hand, d gef = d(Zf hgif ) = dZf hgif + Zf dhgif , so that
(8.38)
In particular, for g = 1 we find by (8.26) that dZf = Zf h1; df if = Zf hdf if . Now (8.36)
follows from dW (f ) = d log Zf = dZf /Zf = hdf if , and solving (8.38) for dhgif gives
(8.35).
(ii) Equation (8.33) follows from
R
+ hgi2f + g .
( ) =
hgif + g = g
d
d
f + g
In particular,
(0) = hgif ,
(8.39)
194
d2
0
W
(f
+
g)
2
=0
d
for all f, g. This implies that W (f ) is concave. Moreover, replacing f by f + sg, we find
that (s) 0 for all s. The remainder form of Taylors theorem therefore gives
Z
( ) = (0) + (0) +
ds( s) (s) (0) + (0),
0
(8.40)
The difference
hSc Si = hSc i hSi 0
(8.42)
195
Using Wm (g) in place of W (g) defines a so-called mean field theory; cf. Callen [55]. For
computations from first principles (quantum field theory), see, e.g., the survey by Berges
et al. [134].
8.4
Definition 8.2.1 generalizes the expectation axioms of Whittle [288, Section 2.2] for classical probability theory. Indeed, the values of our quantities are traditionally called expectation values, and refer to the mean over an ensemble of (real or imagined) identically
prepared systems.
In our treatment, we keep the notation with pointed brackets familiar from statistical
mechanics, but use the more neutral term value for hf i to avoid any reference to probability
or statistics. This keeps the formal machinery completely independent of controversial
issues about the interpretation of probabilities. Statistics and measurements, where the
probabilistic aspect enters directly, are discussed separately in Chapter 10.2.
Our analysis of the uncertainty inherent in the description of a system by a state is based
on the following result.
8.4.1 Proposition. For Hermitian g,
hgi2 hg 2i.
(8.44)
196
(8.45)
is called the limit resolution of a Hermitian quantity g with nonzero value hgi.
Note that (E3) and (8.44) ensure that (f ) and res(g) are nonnegative real numbers that
vanish if f, g are constant, i.e., complex numbers, and g 6= 0. This definition is analogous to
the definitions of elementary classical statistics, where E is a commutative algebra of random
variables, to the present, more general situation; in a statistical context, the uncertainty
(f ) is referred to as standard deviation.
There is no need to associate an intrinsic statistical meaning to the above concepts. We
treat the uncertainty (f ) and the limit resolution res(g) simply as an absolute and relative
uncertainty measure, respectively, specifying how accurately one can treat g as a sharp
number, given by this value.
In experimental practice, the limit resolution is a lower bound on the relative accuracy
with which one can expect hgi to be determinable reliably6 from measurements of a single
system at a single time. In particular, a quantity g is considered to be significant if
res(g) 1, while it is noise if res(g) 1. If g is a quantity and e
g is a good approximation
of its value then g := g e
g is noise. Sufficiently significant quantities can be treated as
deterministic; the analysis of noise is the subject of statistics.
8.4.3 Proposition. For any state,
(i) f g
6
hf i hgi.
The situation is analogous to the limit resolution with which one can determine the longitude and
latitude of a city such as Vienna. Clearly these are well-defined only up to some limit resolution related to
the diameter of the city. No amount of measurements can reduce the uncertainty below about 10km. For
an extended object, the uncertainty in its position is conceptual, not just a lack of knowledge or precision.
Indeed, a point may be defined in these terms: It is an object in a state where the position has zero limit
resolution.
197
Formally, the essential difference between classical mechanics and quantum mechanics in
the latters lack of commutativity. While in classical mechanics there is in principle no lower
limit to the uncertainties with which we can prepare the quantities in a system of interest,
the quantum mechanical uncertainty relation for noncommuting quantities puts strict limits
on the uncertainties in the preparation of microscopic states. Here, preparation is defined
informally as bringing the system into a state such that measuring certain quantities f gives
numbers that agree with the values hf i to an accuracy specified by given uncertainties.
We now discuss the limits of the accuracy to which this can be done.
8.4.4 Proposition.
(i) The CauchySchwarz inequality
|hf gi|2 hf f ihg gi
holds for all f, g E.
(ii) The uncertainty relation
2
(f )2 (g)2 | cov(f, g)|2 + 12 hf g g f i
198
(8.46)
(8.47)
(f + g) (f ) + (g).
(8.48)
(8.49)
In particular,
(8.50)
(8.51)
and (8.46) follows. (8.47) is an immediate consequence of (ii), and (8.48) follows easily from
(8.51) and (8.47). Finally, (8.49) is a consequence of (8.47) and Proposition 8.4.3(iii).
199
(8.52)
(q)(p) 21 h
,
(8.53)
we obtain
the uncertainty relation of Heisenberg [123, 237]. It implies that no state exists where
both position q and momentum p have arbitrarily small uncertainty.
200
Chapter 9
The laws of thermodynamics
This chapter rederives the laws of thermodynamics from statistical mechanics, thus putting
the phenomenological discussion of Chapter 7 on more basic foundations.
We confine our attention to a restricted but very important class of Gibbs states, those
describing thermal states. We introduce thermal states by selecting the quantities whose
values shall act as extensive variables in a thermal model. On this level, we shall be able to
reproduce the phenomenological setting of the present section from first principles; see the
discussion after Theorem 9.2.3. If the underlying detailed model is assumed to be known
then the system function, and with it all thermal properties, are computable in principle,
although we only hint at the ways to do this numerically. We also look at a hierarchy
of thermal models based on the same bottom level description and discuss how to decide
which description levels are appropriate.
Although dynamics is important for systems not in global equilibrium, we completely ignore
dynamical issues in this chapter. We take a strictly kinematic point of view, and look as
before only at a single phase without chemical reactions. In principle, it is possible to
extend the present setting to cover the dynamics of the nonequilibrium case and deduce
quantitatively the dynamical laws of nonequilibrium thermodynamics (Beris & Edwards
[33], Oettinger [209]) from microscopic properties, including phase formation, chemical
reactions, and the approach to equilibrium; see, e.g., Balian [18], Grabert [109], Rau
ller [228], Spohn [257].
& Mu
9.1
Thermal states are special Gibbs states, used in statistical mechanics to model macroscopic
physical systems that are homogeneous on the (global, local, microlocal, or quantum) level
used for modeling. They have all the properties traditionally postulated in thermodynamics.
While we discuss the lower levels on an informal basis, we consider in the formulas for
notational simplicity mainly the case of global equilibrium, where there are only finitely
many extensive variables. Everything extends, however, with (formally trivial but from a
201
202
rigorous mathematical view nontrivial) changes to local and microlocal equilibrium, where
extensive variables are fields, provided the sums are replaced by appropriate integrals; cf.
Oettinger [209].
In the setting of statistical mechanics, the intensive variables are, as in Section 7.1, numbers
parameterizing the entropy and characterizing a particular system at a particular time. To
each admissible combination of intensive variables there is a unique thermal state providing
values for all quantities. The extensive variables then appear as the values of corresponding
extensive quantities.
A basic extensive quantity present in each thermal system is the Hamilton energy H;
it is identical to the Hamiltonian function (or operator) in the underlying dynamical description of the classical (or quantum) system. In addition, there are further basic extensive
quantities which we call Xj (j J) and collect in a vector X, indexed by J. All other extensive quantities are expressible as linear combinations of these basic extensive quantities.
The number and meaning of the extensive variables depends on the type of the system;
typical examples are given in Table 10.1 in Section 10.2.
In the context of statistical mechanics (cf. Examples 8.1.8), the Euclidean -algebra E
is typically an algebra of functions (for classical physics) or linear operators (for quantum
physics), and H is a particular function or linear operator characterizing the class of systems
considered. The form of the operators Xj depends on the level of thermal modeling; for
further discussion, see Section 10.2.
For qualitative theory and for deriving semi-empirical recipes, there is no need to know
details about H or Xj ; it suffices to treat them as primitive objects. The advantage we
gain from such a less detailed setting is that to reconstruct all of phenomenological thermodynamics, a much simpler machinery suffices than what would be needed for a detailed
model
It is intuitively clear from the informal definition of extensive variables in Section 7.5 that
the only functions of independent extensive variables that are again extensive can be linear
combinations, and it is a little surprising that the whole machinery of equilibrium thermodynamics follows from a formal version of the simple assumption that in thermal states the
entropy is extensive. We take this to be the mathematical expression of the zeroth law and
formalize this assumption in a precise mathematical definition.
9.1.1 Definition. A thermal system is defined by a family of Hermitian extensive
variables H and Xj (j J) from a Euclidean -algebra. A thermal state of a thermal
system is a Gibbs state whose entropy S is a linear combination of the basic extensive
quantities of the form
X
j Xj = T 1 (H X) (zeroth law of thermodynamics) (9.1)
S = T 1 H
jJ
with suitable real numbers T 6= 0 and j (j J). Here and X are the vectors with
components j (j J) and Xj (j J), respectively.
203
g := hgi = e(HX) g,
where
(9.2)
1
.
(9.3)
k T
The numbers j are called the intensive variables conjugate to Xj , the number T is
called the temperature, and the coldness. S, H, X, T , and are called the thermal
variables of the system. Note that the extensive variables of traditional thermodynamics
are in the present setting not represented by the extensive quantities S, H, Xj themselves
but by their values S, H, X.
=
(9.4)
called the Euler equation, the temperature T is considered to be the intensive variable
conjugate to the entropy S.
9.1.2 Remarks. (i) As already discussed in Section 7.2 for the case of temperature, measuring intensive variables is based upon the empirical fact that two systems in contact where
the free exchange of some extensive quantity is allowed tend to relax to a joint equilibrium
state, in which the corresponding intensive variable is the same in both systems. If a small
measuring device is brought into close contact with a large system, the joint equilibrium
state will be only minimally different from the original state of the large system; hence the
intensive variables of the measuring device will move to the values of the intensive variables
of the large system in the location of the measuring device. This allows to read off their
value from a calibrated scale.
(ii) Many treatises of equilibrium thermodynamics take the possibility of measuring temperature to be the contents of the zeroth law of thermodynamics. The present, different choice
for the zeroth law has far reaching consequences. Indeed, as we shall see, the definition
implies the first and second law, and (together with a quantization condition) the third law
of thermodynamics. Thus these become theorems rather than separately postulated laws.
(iii) We emphasize that the extensive quantities H and Xj are independent of the intensive
quantities T and , while S, defined by (9.1), is an extensive quantity defined only when
values for the intensive quantities are prescribed. From (9.1) it is clear that values also
depend on the particular state a system is in. It is crucial to distinguish between the
quantities H or Xj , which are part of the definition of the system but independent of the
state (since they are independent of T and ), and their values H = hHi or X j = hXj i,
which change with the state.
(iv) In thermodynamics, the interest is restricted to the values of the thermal variables. In
statistical mechanics, the values of the thermal variables determine a state of the microscopic
system. In particular, the knowledge of the intensive variables allows one to compute the
values (9.2) of arbitrary microscopic quantities, not only the extensive ones. Of course, these
values dont give information about the position and momentum of individual particles but
204
only about their means. For example, the mean velocity of an ideal monatomic gas at
temperature T turns out to be hvi = 0, and the mean velocity-squared is hv2 i = 3k T . (We
dont derive these relations here; usually they are obtained from a starting point involving
the Boltzmann equation.)
(v) A general Gibbs state has an incredibly high complexity. Indeed, in the classical case,
the specification of an arbitrary Gibbs state for 1 mole of a pure, monatomic substance such
as Argon requires specifying the entropy S, a function of 6NA 361023 degrees of freedom.
In comparison, a global equilibrium state of Argon is specified by three numbers T, p and ,
a local equilibrium state by three fields depending on four parameters (time and position)
only, and a microlocal equilibrium state by three fields depending on seven parameters
(time, position, and momentum). Thus global, local, and microlocal equilibrium states
form a small minority in the class of all Gibbs states. It is remarkable that this small class
of states suffices for the engineering accuracy description of all macroscopic phenomena.
(vi) Of course, the number of thermal variables or fields needed to describe a system depends
on the true physical situation. For example, a system that is in local equilibrium only
cannot be adequately described by the few variables characterizing global equilibrium. The
problem of selecting the right set of extensive quantities for an adequate description is
discussed in Section 10.2.
(vii) The formulation (9.1) is almost universally used in practice. However, an arbitrary
linear combination
S = H + h0 X0 + . . . + hs Xs
(9.5)
can be written in the form (9.1) with T = 1/ and j = hj /, provided that 6= 0;
indeed, (9.5) is mathematically the more natural form, which also allows states of infinite
temperature that are excluded in (9.1). This shows that the coldness is a more natural
variable than the temperature T ; it figures prominently in statistical mechanics. Indeed,
the formulas of statistical mechanics are continuous in even for systems such as those
considered in Example 9.2.5, where may become zero or negative. The temperature T
reaches in this case infinity, then jumps to minus infinity, and then continues to increase.
According to Landau & Lifschitz [168, Section 73], states of negative temperature, i.e.,
negative coldness, must therefore be considered to be hotter, i.e., less cold, than states
of any positive temperature. On the other hand, in the limit T 0, a system becomes
infinitely cold, giving intuition for the unattainability of zero absolute temperature.
(viii) In mathematical statistics, there is a large body of work on exponential families,
which is essentially the mathematical equivalent of the concept of a thermal state over a
commutative algebra; see, e.g., Barndorff-Nielsen [25]. In this context, the values of
the extensive quantities define a sufficient statistic, from which the whole distribution can be
reconstructed (cf. Theorem 9.2.4 below and the remarks on objective probability in Section
8.4). This is one of the reasons why exponential families provide a powerful machinery for
statistical inference; see, e.g., Bernardo & Smith [34]. For recent extensions to quantum
statistical inference, see, e.g., Barndorff-Nielsen et al. [26] and the references there.
(ix) For other axiomatic settings for deriving thermodynamics, which provide different
perspectives, see Carath
eodory [57], Haken [119], Jaynes [139], Katz [147], Emch &
205
9.2
Not every combination (T, ) of intensive variables defines a thermal state; the requirement
that h1i = 1 enforces a restriction of (T, ) to a manifold of admissible thermal states.
9.2.1 Theorem. Suppose that T > 0.
(i) For any > 0, the system function defined by
R
(9.6)
is a convex function of T and . It vanishes only if T and are the intensive variables of
a thermal state.
(ii) In a thermal state, the intensive variables are related by the equation of state
(T, ) = 0.
(9.7)
(T, ),
T
X=
(9.8)
S S
T
:=
X X
T
(9.9)
(9.10)
(9.12)
206
1
,
k T k T
(9.13)
the condition for a thermal state. This proves (i) and (ii).
(iii) The formulas for S and X follow by differentiation of (9.13) with respect to T and ,
using (8.36). Equation (9.9) follows by taking values in (9.4), noting that T and are real
numbers.
(iv) By (iii), the matrix
2
2
T 2 T
2
2
T
2
is the Hessian matrix of the convex function . Hence is symmetric and positive semidefinite. (9.11) expresses the symmetry of , and (9.12) holds since the diagonal entries of a
positive semidefinite matrix are nonnegative.
9.2.2 Remarks. (i) For T < 0, the same results hold, with the change that is concave
instead of convex, is negative semidefinite, and the inequality signs in (9.12) are reversed.
This is a rare situation; it can occur only in (nearly) massless systems embedded out of
equilibrium within (much heavier) matter, such as spin systems (cf. Purcell & Pound
[224]), radiation fields in a cavity (cf. Hsu & Barakat [128]), or vortices in 2-dimensional
fluids (cf. Montgomery & Joyce [192], Eyinck & Spohn [83]). A massive thermal
system couples significantly to kinetic energy. In this case, the total momentum p is an
extensive quantity, related to the velocity v, the corresponding intensive variable, by p =
Mv, where M is the extensive total mass of the system. From (9.8), we find that p =
M 2
/v, which implies that = |v=0 + 2
v . Since the mass is positive, this expression is
convex in v, not concave; hence T > 0. Thus, in a massive thermal system, the temperature
must be positive.
(ii) In applications, the free scaling constant is usually chosen as
= k /,
(9.14)
where is a measure of system size, e.g., the total volume or total mass of the system.
In actual calculations from statistical mechanics, the integral is usually a function of the
207
shape and size of the system. To make the result independent of it, one performs the socalled thermodynamic limit ; thus must be chosen in such a way that this limit
is nontrivial. Extensivity in single phase global equilibrium then justifies treating as an
arbitrary positive factor.
In phenomenological thermodynamics (cf. Section 7.1), one makes suitable, more or less
heuristic assumptions on the form of the system function, while in statistical mechanics,
one derives its form from (9.7) and specific choices for the quantities H and X within one
of the settings described in Example 8.1.8. Given these choices, the main task is then
the evaluation of the system function (9.6), from which the values of all quantities can be
computed. (9.6) can often be approximately evaluated from the cumulant expansion (8.34)
and/or a mean field approximation (8.43).
An arbitrary Gibbs state is generally not a thermal state. However, we can try to approximate it by an equilibrium state in which the extensive variables have the same values. The
next result shows that the slack (the difference between the left hand side and the right
hand side) in (9.15), which will turn out to be the microscopic form of the Euler inequality
(7.2), is always nonnegative and vanishes precisely in equilibrium. Thus it can be used as
a measure of how close the Gibbs state is to an equilibrium state.
9.2.3 Theorem. Let hi be a Gibbs state with entropy S. Then, for arbitrary (T, )
satisfying T > 0 and the equation of state (9.7), the values H = hHi, S = hSi, and
X = hXi satisfy
H T S X.
(9.15)
Equality only holds if S is the entropy of a thermal state with intensive variables (T, ).
Xj = Nj (j 6= 0),
(9.16)
where, as before, V denotes the (positive) volume of the system, and each Nj denotes
the (nonnegative) number of molecules of a fixed chemical composition (we shall call these
particles of kind j). However, H and the Nj are now quantities from E, rather than
thermal variables. We call
P := 0
(9.17)
the pressure and
j := j
(j 6= 0)
(9.18)
208
(9.19)
Note that V = V since we took V as system size. For reversible changes, we have the first
law of thermodynamics
dH = T dS P dV + dN
(9.20)
and the Gibbs-Duhem equation
0 = SdT V dP + N d.
(9.21)
A comparison with Section 7.1 shows that dropping the bars from the values reproduces
for T > 0, P > 0 and S 0 the axioms of phenomenological thermodynamics, except for
the extensivity outside equilibrium (which has local equilibrium as its justification). The
assumption T > 0 was justified in Remark 9.2.2(i), and S 0 will be justified in Section 9.5.
But there seem to be no theoretical arguments which shows that the pressure of a standard
system in the above sense must always be positive. (At T < 0, negative pressure is possible;
see Example 9.2.5.) Wed appreciate getting information about this from readers of this
book.
Apart from boundary effects, whose role diminishes as the system gets larger, the extensive quantities scale linearly with the volume. In the thermodynamic limit, corresponding
to an idealized system infinitely extended in all directions, the boundary effects disappear
and the linear scaling becomes exact, although this can be proved rigorously only in simple
situations, e.g., for hard sphere model systems (Yang & Lee [297]) or spin systems (Griffiths [112]). A thorough treatment of the thermodynamic limit (e.g., Ruelle [243, 244],
Thirring [267], or, in the framework of large deviation theory, Ellis [80]) in general
needs considerably more algebraic and analytic machinery, e.g., the need to work in place
of thermal states with more abstract KMS-states (which are limits of sequences of thermal
states still satisfying a KMS condition (8.20)). Moreover, proving the existence of the limit
requires detailed properties of the concrete microscopic description of the system.
For very small systems, typically atomic clusters or molecules, N is fixed and a canonical
ensemble without the N term is more appropriate. For the thermodynamics of small
systems (see, e.g., (Bustamente et al. [54], Gross [114], Kratky [158]) such as a
single cluster of atoms, V is still taken as a fixed reference volume, but now changes in
the physical volume (adsorption or dissociation at the surface) are not represented in the
system, hence need not respect the thermodynamic laws. For large surfaces (e.g., adsorption
studies in chromatography; see Karger et al. [146], Masel [189]), a thermal description
is achievable by including additional variables (surface area and surface tension) to account
for the boundary effects; but clearly, surface terms scale differently with system size than
bulk terms.
Thus, whenever the thermal description is valid, computations can be done in a fixed
reference volume which we take as system size . (Formulas for an arbitrary volume V
209
are then derived by extensivity, scaling every extensive quantity with V /.) The reference
volume may be represented in the Euclidean -algebra as a real number, so that in particular
V = V . Then (9.6) together with e.Deltascale implies that
R
Z(T, V, ) := e(HN )
(9.22)
(9.23)
(9.24)
while P without argument is the parameter in the left hand side of (9.22). With our convention of considering a fixed reference volume and treating the true volume by scaling extensive
variables, this expression is independent of V , since it relates intensive variables unaffected
by scaling. (A more detailed argument would have to show that the thermodynamic
limit P (T, ) := limV V 1 k T log Z(T, V, ) exists, and argue that thermodynamics is
applied in practice only to systems where V is so large that the difference to the limit is
negligible.
The equation of state (9.7) therefore takes the form
P = P (T, ).
(9.25)
Quantitative expressions for the equation of state can often be computed from (9.23)(9.24)
using the cumulant expansion (8.34) and/or a mean field approximation (8.43). Note that
these relations imply that
R
eP (T,)V = e(HN ) .
Traditionally (see, e.g., Gibbs [102], Huang [130], Reichl [230]), the thermal state corresponding to (9.22)(9.24) is called a grand canonical ensemble, and the following results
are taken as the basis for microscopic calculations from statistical mechanics.
9.2.4 Theorem. For a standard system in global equilibrium, values of an arbitrary quantity g can be calculated from (9.23) and
R
(9.26)
The values of the extensive quantities are given in terms of the equation of state (9.24) by
S=V
P
(T, ),
T
Nj = V
P
(T, )
j
(9.27)
210
giving (9.26). The formulas in (9.27) follow from (9.8) and (9.22).
No thermodynamic limit was needed to derive the above results. Thus, everything holds
though with large limit resolutions in measurements even for single small systems
(Bustamente et al. [54], Gross [114], Kratky [158]).
9.2.5 Example. We consider the two level system from Example 8.2.4, using = 1 as
system size. From (9.23) and (9.24), we find Z(T, ) = 1 + eE/kT , hence
P (T, ) = k T log(1 + eE/kT ) = k T log(eE/kT + 1) E.
From (9.26), we find
H=
E
EeE/kT
=
,
1 + eE/kT
eE/kT + 1
k T =
E
.
log(E/H 1)
(This implies that a two-level system has negative temperature and negative pressure if
H > E/2.) The heat capacity C := dH/dT takes the form
C=
E2
eE/kT
.
k T 2 (eE/kT + 1)2
It exhibits a pronounced maximum, the so-called Schottky bump (cf. Callen [55]),
from which E can be determined. In view of (9.57) below, this allows the experimental
estimation of the spectral gap of a quantum system. The phenomenon persists to some
extent for multilevel systems; see Civitarese et al. [63].
9.3
We now discuss relations between changes of the values of extensive or intensive variables,
as expressed by the first law of thermodynamics. To derive the first law in full generality,
we use the concept of reversible transformations introduced in Section 7.1. Corresponding
to such a transformation, there is a family of thermal states hi defined by
R
hf i = e()(H()X) f,
() =
1
.
k T ()
Important: In case of local or microlocal equilibrium, where the thermal system carries
a dynamics, it is important to note that reversible transformations are ficticious transformations which have nothing to do with how the system changes with time, or whether a
211
process is reversible in the dynamical sense that both the process and the reverse process
can be realized dynamically. The time shift is generally not a reversible transformation.
We use differentials corresponding to reversible transformations; writing f = S/k , we can
delete the index f from the formulas in Section 8.2. In particular, we write the Kubo inner
product (8.23) as
hg; hi := hg; hiS/k .
(9.28)
9.3.1 Proposition. The value g(T, ) := hg(T, )i of every (possibly T - and -dependent)
quantity g(T, ) is a state variable satisfying the differentiation formula
dhgi = hdgi hg g; dSi/k.
(9.29)
Proof. That g is a state variable is an immediate consequence of the zeroth law (9.1) since
the entropy depends on T and only. The differentiation formula follows from (8.35) and
(9.28).
9.3.2 Theorem. For reversible changes, we have the first law of thermodynamics
dH = T dS + dX
(9.30)
(9.31)
Proof. Differentiating the equation of state (9.7), using the chain rule (7.10), and simplifying
using (9.8) gives the Gibbs-Duhem equation (9.31). If we differentiate the phenomenological
Euler equation (9.9), we obtain
dH = T dS + SdT + dX + X d,
and using (9.31), this simplifies to the first law of thermodynamics.
Because of the form of the energy terms in the first law (9.30), one often uses the analogy
to mechanics and calls the intensive variables generalized forces, and differentials of
extensive variables generalized displacements.
For the Gibbs-Duhem equation, we give a second proof which provides additional insight.
Since H and X are fixed quantities for a given system, they do not change under reversible
transformations; therefore
dH = 0, dX = 0.
Differentiating the Euler equation (9.4), therefore gives the relation
0 = T dS + SdT + X d.
(9.32)
212
(9.33)
taking values in (9.32) implies again the Gibbs-Duhem equation. By combining equation
(9.32) with the Kubo product we get information about limit resolutions:
9.3.3 Theorem.
(i) Let g be a quantity depending continuously differentiable on the intensive variables T
and . Then
g D g E
hg g; S Si = k T
,
(9.34)
T
T
g
D g E
hg g; Xj X j i = k T
,
(9.35)
j
j
(ii) If the extensive variables H and Xj (j J) are pairwise commuting then
h(S S)2 i = k T
S
,
T
(9.36)
X j
(j J),
T
X j
(j, k J),
h(Xj X j )(Xk X k )i = k T
k
v
s
u
u k T X j
k T S
t
,
res(X
)
=
,
res(S) =
j
2
2
S T
X j
h(Xj X j )(S S)i = k T
(9.37)
(9.38)
(9.39)
res(H) =
k T H
H
T
.
+
2
T
(9.40)
Proof. Multiplying the differentiation formula (9.29) by k T and using (9.32), we find, for
arbitrary reversible transformations,
k T (dhgi hdgi) = hg g; SidT + hg g; Xi d.
Dividing by d and choosing = T and = j , respectively, gives
hg g; Si = k T
g
D g E
T
hg g; Xj i = k T
D g E
g
.
j
j
213
(9.36)(9.38). The limit resolutions (9.39) now follow from (8.45) and the observation that
h(g g)2 i = h(g g)gi hg gig = h(g g)gi = hg 2 i g 2 . The limit resolution (9.40)
follows similarly from
2
H res(H)2 = hH H; H Hi = T hH H; S Si + hH H; X Xi
H
H
= k T T
.
+
T
Note that higher order central moments can be obtained in the same way, substituting
more complicated expressions for f and using the formulas for the lower order moments to
evaluate the right hand side of (9.34) and (9.35).
The extensive variables scale linearly with the p
system size of the system. Hence, the
limit resolution of the extensive quantities is O( k /) in regions of the state space where
the extensive variables depend smoothly on the intensive variables. Since k is very small,
they are negligible unless the system considered is very tiny. Thus, macroscopic thermal
variables can generally be obtained with fairly high precision. The only exceptions are
states close to critical points where the extensive variables need not be differentiable, and
their derivatives may therefore become huge. In particular, in the thermodynamic limit
, uncertainties are absent except close to a critical point, where they lead to critical
opacity.
9.3.4 Corollary. For a standard thermal system,
v
s
u
u k T N j
k T S
t
,
,
res(N
)
=
res(S) =
j
2
2
T
S
N j
(9.41)
res(H) =
H
H
k T H
T
.
+
P
+
2
T
P
(9.42)
Note that res(V ) = 0 since we regarded V as the system size, so that it is just a number.
The above results imply an approximate thermodynamic uncertainty relation
ST k T
(9.43)
Nj j k T.
(9.44)
for entropy S and the logarithm log T of temperature, analogous to the Heisenberg uncertainty relation (8.52) for position and momentum, in which the Boltzmann constant k plays
a role analogous to Plancks constant h
. Indeed (Gilmore [103]), (9.43) can be derived by
S
observing that (9.41) may be interpreted approximately as (S)2 k T T
; together with
1
S
S
T , we find that ST = (S)2 T
214
9.4
The extremal principles of the second law of thermodynamics assert that in a nonthermal
state, some energy expression depending on one of a number of standard boundary conditions is strictly larger than that of related thermal states. The associated thermodynamic
potentials can be used in place of the system function to calculate all thermal variables
given half of them. Thus, like the system function, thermodynamic potentials give a complete summary of the equilibrium properties of homogeneous materials. We only discuss
the Hamilton potential
U(S, X) := max {T S + X | (T, ) = 0, T > 0}
T,
with equality iff the state is a thermal state of positive temperature. The remaining thermal
variables are then given by
T =
U
(S, X),
S
U
(S, X),
X
U = H = U(S, X).
(9.45)
(9.46)
(9.47)
H = T S + X = A(T, X) + T S.
(9.48)
S=
A
(T, X),
T
215
9.4.2 Theorem.
(i) The function U(S, X) is a convex function of its arguments which is positive homogeneous of degree 1, i.e., for real , 1 , 2 0,
U(S, X) = U(S, X),
1
(9.49)
(9.50)
(ii) The function A(T, X) is a convex function of X which is positive homogeneous of degree
1, i.e., for real , 1 , 2 0,
(9.51)
A(T, X) = A(T, X),
1
(9.52)
The extremal principles imply energy dissipation properties for time-dependent states.
Since the present kinematical setting does not have a proper dynamical framework, it is
only possible to outline the implications without going much into details.
9.4.3 Theorem.
(i) For any time-dependent system for which S and X remain constant and which converges
to a thermal state with positive temperature, the Hamilton energy hHi attains its global
minimum in the limit t .
(ii) For any time-dependent system maintained at fixed temperature T > 0, for which X
remains constant and which converges to a thermal state, the Helmholtz energy hH T Si
attains its global minimum in the limit t .
Proof. This follows directly from Theorem 9.4.1.
This result is the shadow of a more general, dynamical observation (that, of course, cannot
be proved from kinematic assumptions alone but would require a dynamical theory). Indeed,
it is a universally valid empirical fact that in all natural time-dependent processes, energy is
lost or dissipated, i.e., becomes macroscopically unavailable, unless compensated by energy
provided by the environment. Details go beyond the present framework, which adopts a
strictly kinematic setting.
9.5
The third law of thermodynamics asserts that the value of the entropy is always nonnegative.
But it cannot be deduced from our axioms without making a further assumption, as a simple
example demonstrates.
216
N
1 X
f=
wn fn
N n=1
the axioms are trivial to verify. For this integral the state defined by
N
1 X
hf i =
fn ,
N n=1
wn < 1.
Thus, we need an additional condition which guarantees the validity of the third law. Since
the third law is also violated in classical statistical mechanics, which is a particular case
of the present setting, we need a condition which forbids the classical interpretation of our
axioms.
We take our inspiration from a simple information theoretic model of states discussed
in Section 10.6 below, which has this property. (Indeed, the third law is a necessary
requirement for the interpretation of the value of the entropy as a measure of internal
complexity, as discussed there.) There, the integral is a sum over the components, and,
since functions were defined componentwise,
X
R
F (f ) =
F (fn ).
(9.53)
nN
We say that a quantity f is quantized iff (9.53) holds with a suitable spectrum {fn | n
N } for all functions F for which F (f ) is strongly integrable; in this case, the fn are called
the levels of f . For example, in the quantum setting all trace class linear operators are
quantized quantities, since these always has a discrete spectrum.
Quantization is the additional ingredient needed to derive the third law:
9.5.2 Theorem. (Third law of thermodynamics)
If the entropy S is quantized then S 0. Equality holds iff the entropy has a single level
only (|N | = 1).
Proof. We have
1 = eS/k =
nN
eSn /k ,
(9.54)
217
X
S = SeS/k =
Sn eSn /k .
(9.55)
nN
If N = {n} then (9.54) implies eSn /k = 0, hence Sn = 0, and (9.55) gives S = 0. And if
|N | > 1 then (9.54) gives eSn /k < 1, hence Sn > 0 for all n N , and (9.55) implies S > 0.
In quantum chemistry, energy H, volume V , and particle numbers N1 , . . . , Ns form a quantized family of pairwise commuting Hermitian variables. Indeed, the Hamiltonian H has
discrete energy levels if the system is confined to a finite volume, V is a number, hence has
a single level only, and Nj counts particles hence has as levels the nonnegative integers. As
a consequence, the entropy S = T 1 (H + P V N) is quantized, too, so that the third
law of thermodynamics is valid. The number of levels is infinite, so that the value of the
entropy is positive.
A zero value of the entropy (absolute zero) is therefore an idealization which cannot be
realized in practice. But Theorem 9.5.2 implies in this idealized situation that entropy and
hence the joint spectrum of (H, V, N1 , . . . , NS ) can have a single level only.
This is the situation discussed in ordinary quantum mechanics (pure energy states at fixed
particle numbers). It is usually associated with the limit T 0, though at absolute
temperature T = 0, i.e., infinite coldness , the thermal formalism fails (but for low T
asymptotic expansions are possible).
To see the behavior close to this limit, we consider for simplicity a canonical ensemble with
Hamiltonian H (Example 8.2.4); thus the particle number is fixed. Since S is quantized, the
spectrum of H is discrete, so that there is a finite or infinite sequence E0 < E1 < E2 < . . .
of distinct energy levels. Denoting by Pn the (rank dn ) orthogonal projector to the dn dimensional eigenspace with energy En , we have the spectral decomposition
X
(H) =
(En )Pn
n0
Z = tr eH =
As a consequence,
S/
k
=Z
1 H
= X
S/
k
hf i = e
eEn tr Pn . =
eEn Pn
f=
eEn dn
R
= X
eEn dn .
e(En E0 ) Pn
e(En E0 ) dn
P (En E0 )
e
P
P (E E ) n .
n
0 d
e
n
(9.56)
218
From this representation, we see that only the energy levels En with
En E0 + O(kT )
In the nondegenerate case, where the lowest energy eigenvalue is simple, there is a corresponding normalized eigenvector , unique up to a phase, satisfying the Schr
odinger
equation
H = E0 , || = 1 (E0 minimal).
(9.58)
In this case, the projector is P0 = and has rank d0 = 1. Thus
eS/k = + O(e(E1 E0 ) ).
has almost rank one, and the value takes the form
hgi = tr eS/k g tr g = g.
(9.59)
hgi = g
(9.60)
In the terminology of quantum mechanics, E0 is the ground state energy, the solution
of (9.58) is called the ground state, and
is the expectation of the observable g in the ground state.
For a general state vector normalized to satify = 1, the formula (9.60) defines the
values in the pure state . It is easily checked that (9.60) indeed defines a state in the
sense of Definition 8.2.1. These are not Gibbs states, but their idealized limiting cases.
Our derivation therefore shows that unless the ground state is degenerate a canonical
ensemble at sufficiently low temperature is in an almost pure state described by the quantum
mechanical ground state.
Thus, the third law directly leads to the conventional form of quantum mechanics, which can
therefore be understood as the low temperature limit of thermodynamics. It also indicates
when a quantum mechanical description by a pure state is appropriate, namely always
when the gap between the ground state energy and the next energy level is significantly
larger than the temperature (measured in units where the Boltzmann constant is set to
1). This is the typical situation in most of quantum chemistry and justifies the use of
the Born-Oppenheimer approximation in the absence of level crossing; cf. Smith [256],
Yarkony [298]. Moreover, it gives the correct (mixed) form of the state in case of ground
state degeneracy, and the form of the correction terms when the energy gap is not large
enough for the ground state approximation to be valid.
Chapter 10
Models, statistics, and measurements
In this chapter, we discuss the relation between models and reality. This topic is difficult
and to some extent controversial since it touches on unresolved foundational issues about
the meaning of probability and the interpretation of quantum mechanics. By necessity, the
ratio between the number of words and the number of formulas is higher than in other
chapters.
We discuss in more detail the relation between different thermal models constructed on the
basis of the same Euclidean -algebra by selecting different lists of extensive quantities.
Moreover, a discussion of the meaning of uncertainty and probability gives the abstract
setting introduced in the previous chapters both a deterministic and a statistical interpretation.
The interpretation of probability, statistical mechanics, and today intrinsically interwoven
of quantum mechanics has a long history, resulting in a huge number of publications.
Informative sources for the foundations of probability in general include Fine [87] and
Hacking [116]. For statistical mechanics, see Ehrenfest [78], ter Haar [266], Penrose
[214], Sklar [254], Grandy [295], and Wallace [282]. For the foundations of quantum
mechanics, see Stapp [258], Ballentine [21], Home & Whitaker [126], Peres &
Terno [218], Schlosshauer [247] and the reprint collection by Wheeler & Zurek
[287].
10.1
Description levels
There is no fully objective way of defining how quantities and states are related to reality,
since the observer modeling a particular situation may describe the same object from different perspectives and at different levels of faithfulness. Different observers may choose to
study different materials or different experiments, or they may study the same material or
the same experiment in different levels of detail, or draw the system boundary differently.
For example, one observer may regard a measuring instrument as part of the system of
219
220
221
degenerate cases a single Gibbs state, with entropy S(t), say, which best describes the
system under consideration at the chosen level of modeling. Taking the description by the
Gibbs state as fundamental, its value is the objective, true value of the entropy, relative
only to the algebra of quantities chosen to model the system. A description of the state in
terms of a thermal system is therefore adequate if (and, under an observability qualification
to be discussed below, only if), for all relevant times t, the entropy S(t) can be adequately
approximated by a linear combination of the extensive quantities available at the chosen
level of description.
In the preceding chapter, we assumed a fixed selection of extensive quantities defining the
thermal model.
As indicated at the end of Section 7.1, observable differences from the conclusions derived
from a thermal model known to be valid on some level imply that one or more conjugate
pairs of thermal variables are missing in the model. So, how should the extensive quantities
be selected?
The set of extensive variables depends on the application and on the desired accuracy of
the model; it must be chosen in such a way that knowing the measured values of the
extensive variables determines (to the accuracy specified) the complete behavior of the
thermal system. The choice of extensive variables is (to the accuracy specified) completely
determined by the level of accuracy with which the thermal description should fit the
systems behavior. This forces everything else: The theory must describe the freedom
available to characterize a particular thermal system with this set of extensive variables,
and it must describe how the numerical values of interest can be computed for each state
of each thermal system.
Clearly, physics cannot be done without approximation, and the choice of a resolution is
unavoidable. (To remove even this trace of subjectivity, inherent in any approximation
of anything, the entropy would have to be represented without any approximation, which
would require to use the algebra of quantities of the still unknown theory of everything, and
to demand that the extensive quantities exhaust this algebra.) Once the (subjective) choice
of the resolution of modeling is fixed, this fixes the amount of approximation tolerable in
the ansatz, and hence the necessary list of extensive quantities. This is the only subjective
aspect of our setting. In contrast to the information theoretic approach where the choice
of extensive quantities is considered to be the subjective matter of which observables an
observer happens to have knowledge of.
In general, which quantities need to be considered depends on the resolution with which
the system is to be modeled the higher the resolution, the larger the family of extensive
quantities. Thus whether we describe bulk matter, surface effects, impurities, fatigue,
decay, chemical reactions, or transition states, the thermal setting remains the same since
it is a universal approximation scheme, while the number of degrees of freedom increases
with increasingly detailed models.
In phenomenological thermodynamics, the relevant extensive quantities are precisely those
variables that are observed to make a difference in modeling the phenomenon of interest.
Table 10.1 gives typical extensive variables (S and Xj ), their intensive conjugate variables
222
Table 10.1: Typical conjugate pairs of thermal variables and their contribution to the Euler
equation. The signs are fixed by tradition. (In the gravitational term, m is the vector with
components mj , the mass of a particle of kind j, g the acceleration of gravity, and h the
height.)
extensive Xj
intensive j
contribution j Xj
entropy S
temperature T
thermal, T S
particle number Nj
conformation tensor C
chemical potential j
relaxation force R
strain jk
volume V
surface AS
length L
displacement q
momentum p
angular momentum J
stress jk
pressure P
surface tension
tension J
force F
velocity v
angular velocity
charge Q
polarization P
magnetization M
electromagnetic field F
electric potential
electric field strength E
magnetic field strength B
electromagnetic field strength F s
chemical, j Nj
P
conformational
Rjk C jk
P
elastic,
jk jk
mechanical, P V
mechanical, AS
mechanical, JL
mechanical, F q
kinetic, v p
rotational, J
mass M = m N
energy-momentum U
gravitational potential gh
metric g
electrical, Q
electrical, E P
magnetical, B M
P s
electromagnetic, F
F
gravitational, ghM
P
gravitational,
g U
(T and j ), and their contribution (T S and j Xj ) to the Euler equation (9.4)1 . Some of
the extensive variables and their intensive conjugates are vectors or (in elasticity theory,
the theory of complex fluids, and in the relativistic case) tensors; cf. Balian [20] for the
The Euler equation looks like an energy balance. But since S is undefined, this formal balance has
no contents apart from defining the entropy S in terms of the energy and other contributions. The energy
balance is rather given by the first law discussed later, and is about changes in energy. Conservative work
contributions are exact differentials. For example, the mechanical force F = dV (q)/dq translates into the
term F dq = dV (q) of the first law, corresponding to the term F q in the Euler equation. The change
of the kinetic energy Ekin = mv 2 /2 contribution of linear motion with velocity v and momentum p = mv is
dEkin = d(mv 2 /2) = mv dv = v dp, which is exactly what one gets from the v p contribution in the Euler
equation. Since v p = mv 2 is larger than the kinetic energy, this shows that motion implies a contribution
to the entropy of (Ekin v p)/T = mv 2 /2T . A similar argument applies to the angular motion of a rigid
body in its rest frame, providing the term involving angular velocity and angular momentum.
223
The variables and quantities of the fine system are written as before, but the variables
and quantities associated with the coarser system get an additional index c. That the
fine system is a refinement of the coarse system means that the extensive quantities of the
coarse system are Xc = CX, with a fixed matrix C with linearly independent rows, whose
components tell how the components of Xc are built from those of X. The entropy of the
coarse system is then given by
Sc = T 1 (H c Xc ) = T 1 (H c CX) = T 1 (H X),
where
= C T c .
(10.1)
We see that the thermal states of the coarse model are just the states of the detailed model
for which the intensive parameter vector is of the form = C T c for some c . Thus the
coarse state space can simply be viewed as a lower-dimensional subspace of the detailed
state space. Therefore, one expects the coarse description to be adequate precisely when
the detailed state is close to the coarse state space, with an accuracy determined by the
desired fidelity of the coarse model. Since the relative entropy (8.42),
hSc Si = hT 1 (H c CX) T 1 (H X)i = hT 1( C T c ) Xi,
(10.2)
measures the amount of information in the detailed state which cannot be explained by the
coarse state, we associate to an arbitrary detailed state the coarse state c determined
as a function of by minimizing (10.2). If = C T c then
Sc = T 1 (H X) T 1 (H X) = S,
and the coarse description is adequate. If 6 , there is no a priori reason to trust the
coarse model, and we have to investigate to which extent its predictions will significantly
differ from those of the detailed model. One expects the differences to be significant; however, in practice, there are difficulties if there are limits on our ability to prepare particular
detailed states. The reason is that the entropy and chemical potentials can be prepared
and measured only by comparison with sufficiently known states. For ideal gases, they are
inherently ambiguous because of the gauge freedom discussed in Example 7.1.4, which implies that different models of the same situation may have nontrivial differences in Hamilton
energy, entropy, and chemical potential. A similar ambiguity persists in more perplexing
situations:
10.1.1 Example. (The Gibbs paradox)
Suppose that we have an ideal gas of two kinds j = 1, 2 of particles which are experimentally
indistinguishable. Suppose that in the samples available for experiments, the two kinds are
mixed in significantly varying proportions N1 : N2 = q1 : q2 which, by assumption, have
no effect on the observable properties; in particular, their values are unknown but varying.
The detailed model treats them as distinct, the coarse model as identical. Reverting to the
barless notation of Section 7.1, we have
X = (V, N_1, N_2)^T,   μ = (−P, μ_1, μ_2)^T,

and, assuming

C = ( 1   0    0
      0   c_1  c_2 )

for suitable c_1, c_2 > 0,

X_c = (V, N_c)^T = (V, c_1 N_1 + c_2 N_2)^T,   μ_c = (−P, μ_c)^T,

where μ_c on the right denotes the common chemical potential of the coarse description.
From the known proportions, we find

N_j = x_j N_c,   x_j = q_j / (c_1 q_1 + c_2 q_2).
In the coarse model, the Hamilton energy and the chemical potential of the ideal gas take the form

H = h_c(T) N_c,   μ_c = k T log( k T N_c / (V π_c(T)) ).
Now N_c = (k T)^{-1} P V = Σ_j N_j = Σ_j x_j N_c implies that x_1 + x_2 = 1. Because of indistinguishability, this must hold for any choice of q_1, q_2 ≥ 0; for the two choices q_1 = 0 and
q_2 = 0, we get c_1 = c_2 = 1, hence N_c = Σ_j N_j, and the x_j are mole fractions. Similarly, if
we use for all kinds j of substances the same normalization for fixing the gauge freedom discussed in Example 7.1.4, the relation h_c(T) N_c = H = Σ_j h_j(T) N_j = Σ_j h_j(T) x_j N_c implies
for varying mole fractions that h_j(T) = h_c(T) for j = 1, 2. From this, we get π_j(T) = π_c(T)
for j = 1, 2. Thus
H − H_c = Σ_j h_j(T) N_j − h_c(T) N_c = 0,

μ_j − μ_c = k T log( k T N_j / (V π_j) ) − k T log( k T N_c / (V π_c) ) = k T log x_j,

S − S_c = T^{-1}(H + P V − G) − T^{-1}(H_c + P V − G_c)
        = −T^{-1}(G − G_c) = −k N_c Σ_j x_j log x_j.
The latter term is called the entropy of mixing. Its occurrence is referred to as the Gibbs
paradox (cf. Jaynes [140], Tseng & Caticha [271], Allahverdyan & Nieuwenhuizen [10], Uffink [273, Section 5.2]). It seems to say that there are two different
entropies, depending on how we choose to model the situation. For fixed mole fractions,
there is no real paradox since the fine and the coarse description differ only by a choice of
the unobservable gauge parameters, and only gauge invariant quantities (such as entropy
differences) have a physical meaning.
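The size of the mixing term is easy to get a feeling for numerically. The following Python sketch (not part of the formal development; the choice of one mole and the sample mole fractions are arbitrary) evaluates S − S_c = −k N_c Σ_j x_j log x_j:

    import numpy as np

    k = 1.380649e-23  # Boltzmann constant in J/K

    def mixing_entropy(Nc, x):
        # S - S_c = -k N_c sum_j x_j log x_j for mole fractions x summing to 1
        x = np.asarray(x, dtype=float)
        assert abs(x.sum() - 1.0) < 1e-12
        return -k * Nc * np.sum(x * np.log(x))

    Nc = 6.02214076e23  # one mole of particles (illustrative choice)
    for x1 in (0.5, 0.9, 0.99):
        print(f"x1 = {x1:5.2f}:  S - S_c = {mixing_entropy(Nc, [x1, 1 - x1]):.4f} J/K")

For equal proportions this gives k N_c log 2 ≈ 5.76 J/K per mole; for strongly unequal proportions the mixing entropy becomes negligible.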
If the mole fractions may vary, the fine and the coarse description differ significantly. But
the difference in the descriptions is observable only if we know processes which affect the
different kinds differently.
Fixed chemical potentials can be prepared only through chemical contact with substances
with known chemical potentials, and the latter must be computed from observed mole
fractions. Therefore, the chemical potentials can be calibrated only if we can prepare
equilibrium states at fixed mole fraction. This requires that we are able to separate to some
extent particles of different kinds.
Examples are a difference in mass, which allows a mechanical separation, a difference in
molecular size or shape, which allows their separation by a semipermeable membrane, a
difference in spin, which allows a magnetic separation, or a difference in scattering properties
of the particles, which allows a chemical or radiation-based differentiation. In each of these
cases, the particles become distinguishable; the coarse description is therefore inadequate
and gives a wrong description for the entropy and the chemical potentials.
Generalizing from the example, we conclude that even when both a coarse model and a more
detailed model are faithful to all experimental information possible at a given description
level, there is no guarantee that they agree in the values of all thermal variables of the coarse
model. In the language of control theory (see, e.g., Ljung [178]), agreement is guaranteed
only when all parameters of the more detailed models are observable.
On the other hand, all observable state functions of the detailed system that depend only
on the coarse state have the same value within the experimental accuracy, if both models
are adequate descriptions of the situation. Thus, while the values of some variables need
not be experimentally determinable, the validity of a model is an objective property.
Therefore, preferences for one or the other of two valid models can only be based on other
criteria. The criterion usually employed in this case is Ockham's razor, although there may
be differences of opinion on what counts as the most economical model. In particular, a fundamental description of macroscopic matter by means of quantum mechanics is hopelessly
overspecified in terms of the number of degrees of freedom needed for comparison with
experiment, most of which are in principle unobservable by equipment made of ordinary
matter. But it is often the most economical model in terms of description length (though
extracting the relevant information from it may be difficult). Thus, different people may
well make different rational choices, or employ several models simultaneously.
As soon as a discrepancy of model predictions with experiment is reliably found, the model
is inadequate and must be replaced by a more detailed or altogether different model. This
is indeed what happened with the textbook example of the Gibbs paradox situation, ortho
and para hydrogen, cf. Bonhoeffer & Harteck [45], Farkas [85]. Hydrogen seemed
at first to be a single substance, but then thermodynamic data forced a refined description.
Similarly, in spin echo experiments (see, e.g., Hahn [117, 118], Rothstein [241], Ridderbos & Redhead [234]), the specially prepared system appears to be in equilibrium but,
according to Callen's empirical definition quoted on page 220, it is not: the surprising future
behavior (for someone not knowing the special preparation) shows that some correlation
variables were neglected that are needed for a correct description.
Grad [110] speaks of the adoption of a new entropy that is "forced by the discovery of new
information". More precisely, the adoption of a new model (in which the entropy has
different values) is forced, since the old model is simply wrong under the new conditions
and remains valid only under some restrictions.
Observability issues aside, the coarser description usually has a more limited range of applicability; with the qualification discussed in the example, it is generally restricted to those
systems whose detailed intensive variable vector μ is close to the subspace of vectors of the
form C^T μ_c reproducible in the coarse model.
Finding the right family of thermal variables is therefore a matter of discovery, not of
subjective choice. This is further discussed in Section 10.2.
10.2
Local, microlocal, and quantum equilibrium
As we have seen in Section 10.1, when descriptions on several levels are justified empirically,
they differ significantly only in quantities that are negligible in the more detailed models
and vanish in the coarser models, or by terms that are not observable in principle. We
now apply the above considerations to various levels of equilibrium descriptions.
A global equilibrium description is adequate at some resolution if and only if the
nonequilibrium forces present in the finer description are small, and a more detailed local
equilibrium description will (apart from variations of the Gibbs paradox, which should
be cured on the more detailed level) agree with the global equilibrium description to the
accuracy within which the differences in the corresponding approximations to the entropy,
as measured by the relative entropy (8.42), are negligible. Of course, if the relative entropy
of a thermal state relative to the true Gibbs state is large then the thermal state cannot be
regarded as a faithful description of the true state of the system, and the thermal model is
inadequate.
In statistical mechanics, where the microscopic dynamics is given, the relevant extensive
quantities are those whose values vary slowly enough to be macroscopically observable at
a given spatial or temporal resolution (cf. Balian [18]). Which ones must be included
is a difficult mathematical problem that has been solved only in simple situations (such
as monatomic gases) where a weak coupling limit applies. In more general situations,
the selection is currently based on phenomenological consideration, without any formal
mathematical support.
In equilibrium statistical mechanics, which describes time-independent, global equilibrium
situations, the relevant extensive quantities are the additive conserved quantities of a microscopic system and additional parameters describing order parameters that emerge from
broken symmetries or various defects not present in the ideal model. Phase equilibrium
needs, in addition, copies of the extensive variables (e.g., partial volumes) for each phase,
since the phases are spatially distributed, while the intensive variables are shared by all
phases. Chemical equilibrium also accounts for exchange of atoms through a list of
permitted chemical reactions whose length is again determined by the desired resolution.
In states not corresponding to global equilibrium, usually called non-equilibrium states,
a thermal description is still possible assuming so-called local equilibrium. There, the
natural extensive quantities are those whose values are locally additive and slowly varying
in space and time and hence, reliably observable at the scales of interest. In the statistical
mechanics of local equilibrium, the thermal variables therefore become space- and time-dependent fields (Robertson [237]). On even shorter time scales, phase space behavior
becomes relevant, and the appropriate description is in terms of microlocal equilibrium
and position- and momentum-dependent phase space densities. Finally, on the microscopic
level, a linear operator description in terms of quantum equilibrium is needed.
The present formalism is still applicable to local, microlocal, and quantum equilibrium
(though most products now become inner products in suitable function spaces), but the
relevant quantities are now time-dependent and additional dynamical issues (relating states
at different times) arise; these are outside the scope of the present book.
In local equilibrium, one needs a hydrodynamic description by Navier-Stokes equations and
their generalizations; see, e.g., Beris & Edwards [33], Oettinger [209], Edwards et al.
[77]. In the local view, one gets the interpretation of extensive variables as locally conserved
(or at least slowly varying) quantities (whence additivity) and of intensive variables as
parameter fields, which cause non-equilibrium currents when they are not constant, driving
the system towards global equilibrium. In microlocal equilibrium, one needs a kinetic
description by the Boltzmann equation and its generalizations; see, e.g., Bornath et al.
[48], Calzetta & Hu [56], Müller & Ruggeri [197].
Quantum equilibrium. Fully realistic microscopic dynamics must be based on quantum
mechanics. In quantum equilibrium, the dynamics is given by quantum dynamical semigroups. We outline the ideas involved, in order to emphasize some issues that are usually
swept under the carpet.
Even when described at the microscopic level, thermal systems of sizes handled in a laboratory are in contact with their environment, via containing walls, emitted or absorbed
radiation, etc. We therefore embed the system of interest into a bigger, completely isolated
system and assume that the quantum state of the big system is described at a fixed time by
a value map that assigns to a linear operator g in the big system the value ⟨g⟩ and satisfies
the rules (R1)–(R4) for a state. The small system is defined by a Euclidean ∗-algebra E
of linear operators densely defined on Ĥ, composed of all meaningful expressions in field
operators at arguments in the region of interest; the integral is given by the trace in the
big system. Since the value map restricted to g ∈ E also satisfies the rules (R1)–(R4) for
a state, the big system induces on the system of interest a state. By standard theorems
(see, e.g., Thirring [267]), there is a unique density operator ρ ∈ E such that ⟨g⟩ = ∫ρg
for all g ∈ E with finite value. Moreover, ρ is Hermitian and positive semidefinite. If 0
is not an eigenvalue of ρ then ⟨·⟩ is a Gibbs state with entropy S = −k log ρ. Note that
the entropy defined in this way depends on the choice of E, hence on the set of quantities
found to be relevant. (In contrast, if the big system that includes the environment is in an
approximately pure state, as is often assumed, the value of the entropy of the big system
is approximately zero.)
To put quantum equilibrium into the thermal setting, we simply choose a set of extensive
variables spanning the algebra E; then S can be written in the form (9.1). (A thermal
description is not possible if 0 is an eigenvalue of ρ, an exceptional situation that can be
realized experimentally only for systems with extremely few quantum levels. This happens,
e.g., when the state is pure, ρ = ψψ^∗.)
Of course, ρ and hence the state ⟨·⟩ depend on time. The time evolution is now quite
different from the conservative dynamics usually assumed for the big system that includes
the environment. The system of interest does not inherit a Hamiltonian dynamics from the
isolated big system; instead, the dynamics of ρ is given by an integro-differential equation
with a complicated memory term, defined by the so-called projector operator formalism
described in detail in Grabert [109]; for summaries, see Rau & Müller [228] and
Balian [18]. In particular, one can say nothing specific about the dynamics of S. (In
contrast, were the reduced system governed by a Hamiltonian dynamics, ρ would evolve
by means of a unitary evolution; in particular, S̄ = ⟨S⟩ = −k tr ρ log ρ would be time-independent.) A suitable starting point for a fundamental derivation, based on quantum
field theory, is provided by the so-called exact renormalization group equations (see, e.g.,
Polonyi & Sailer [222], Berges [32]).
In typical treatments of reduced descriptions, one assumes that the memory decays sufficiently fast; this so-called Markov assumption can be justified in a weak coupling limit
(Davies [71], Spohn [257]), corresponding to a system of interest that is only weakly interacting with the environment. But a typical thermal system, such as a glass of water on
a desk, is held in place by the container. Considered as a nearly independent system, the
water would behave very differently, probably diffusing into space. Thus, it is questionable
whether the Markov assumption is satisfied; a detailed investigation of the situation would
be highly desirable. Apparently there are only a few discussions of the problem of how containers modify the dynamics of a large quantum system; see, e.g., Lebowitz & Frisch
[173], Blatt [36] and Ridderbos [233]. One should expect a decoherence effect (Brune
et al. [53]) of the environment on the system that, for large quantum systems, is extremely
strong (Zurek [301]).
However, simply assuming the Markov assumption as the condition for regarding the system of interest to be effectively isolated allows one to deduce for the resulting Markov
approximation a deterministic differential equation for the density operator. The dynamics then describes a linear quantum dynamical semigroup. For all known linear quantum
dynamical semigroups (cf. Davies [71]) on a Hilbert space, the dynamics takes the form
of a Lindblad equation

dρ/dt = (i/ℏ)(ρH^∗ − Hρ) + P^∗ρ    (10.3)

(Lindblad [176], Gorini et al. [107]), where the effective Hamiltonian H is a not
necessarily Hermitian operator and P^∗ is the dual of a completely positive map P of the
form

P(f) = Q^∗ J(f) Q for all f ∈ E,

with some linear operator Q from E to a second ∗-algebra E′ and some ∗-algebra homomorphism J from E to E′ (Stinespring [261], Davies [71, Theorem 2.1]). The resulting
dynamics is inherently dissipative; as t → ∞, P^∗ρ can be shown to tend to zero,
which implies under a natural nondegeneracy assumption that the limiting state is a global
equilibrium state.
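For orientation, here is a minimal numerical sketch of such dissipative dynamics for a two-level system, written in the standard Gorini-Kossakowski-Lindblad-Sudarshan form with a single damping operator L (a special case of (10.3) after separating the Hermitian and anti-Hermitian parts of H); the Hamiltonian, damping rate, and initial state are arbitrary illustrative choices. The trace is preserved, and the state relaxes to a stationary state:

    import numpy as np

    hbar = 1.0                                               # natural units
    H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # Hermitian Hamiltonian
    L = np.sqrt(0.2) * np.array([[0, 1], [0, 0]], dtype=complex)  # decay |1> -> |0>

    def rhs(rho):
        # GKLS generator: -(i/hbar)[H, rho] + L rho L* - (1/2){L*L, rho}
        comm = H @ rho - rho @ H
        LdL = L.conj().T @ L
        diss = L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
        return -1j / hbar * comm + diss

    rho = np.array([[0.2, 0.4], [0.4, 0.8]], dtype=complex)  # initial (pure) state
    dt, steps = 0.01, 5000
    for _ in range(steps):                                   # explicit Euler steps
        rho = rho + dt * rhs(rho)

    print("trace  =", np.trace(rho).real)                    # stays 1
    print("rho(t) =", np.round(rho, 4).tolist())             # near the ground state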
No matter how large we make the system, it is necessary to take account of an unobserved
environment, since all our observations are done in a limited region of space, which, however,
interacts with the remainder of the universe. As a consequence, the time evolution of any
system of significant size is irreversible. In particular, the prevalence here on earth of
matter in approximate equilibrium could possibly be explained by the fact that the earth
is extremely old.
We now consider relations within the hierarchy of the four levels. The quantum equilibrium
entropy Squ , the microlocal equilibrium entropy Sml , the local equilibrium entropy Slc , and
the global equilibrium entropy Sgl denote the values of the entropy in a thermal description
of the corresponding equilibrium levels. The four levels have an increasingly restricted set
of extensive quantities, and the relative entropy argument of Theorem 8.3.3 can be applied
at each level. Therefore
S_qu ≤ S_ml ≤ S_lc ≤ S_gl.    (10.4)

In general, the four entropies might have completely different values. We discuss four essentially different possibilities,

(i) S_qu ≈ S_ml ≈ S_lc ≈ S_gl,

(ii) S_qu ≈ S_ml ≈ S_lc ≪ S_gl,

(iii) S_qu ≈ S_ml ≪ S_lc ≤ S_gl,

(iv) S_qu ≪ S_ml ≤ S_lc ≤ S_gl,
with different physical interpretations. As we have seen in Section 10.1, a thermal description is valid only if the entropy in this description approximates the true entropy sufficiently
well. All other entropies, when significantly different, do not correspond to a correct description of the system; their disagreement simply means failure of the coarser description
to match reality. Thus which of the cases (i)–(iv) occurs decides upon which descriptions
are valid. (i) says that the state is in global equilibrium, and all four descriptions are valid.
(ii) that the state is in local, but not in global equilibrium, and only the three remaining
descriptions are valid. (iii) says that the state is in microlocal, but not in local equilibrium,
and in particular not in global equilibrium. Only the quantum and the microlocal descriptions are valid. Finally, (iv) says that the state is not even in microlocal equilibrium, and
only the quantum description is valid.
Assuming that the fundamental limitations in observability are correctly treated on the
quantum level, the entropy is an objective quantity, independent of the level of accuracy
with which we are able to describe the system. The precise value it gets in a model depends,
however, on the model used and its accuracy. The observation (by Grad [110], Balian
[18], and others) that entropy may depend significantly on the description level is explained
by two facts that hold for variables in models of any kind, not just for the entropy, namely
(i) that if two models disagree in their observable predictions, at most one of them can be
correct, and
(ii) that if a coarse model and a refined model agree in their observable predictions, the
more detailed model has unobservable details.
Since unobservable details cannot be put to an experimental test, the more detailed model
in case (ii) is questionable unless dictated by fundamental considerations, such as symmetry
or formal simplicity.
10.3
Statistics and probability
Recall from Section 8.4 that a quantity g is considered to be significant if its resolution
res(g) is much smaller than one, while it is considered as noise if it is much larger than
one. If g is a quantity and e
g is a good approximation of its value then g := g e
g is noise.
Sufficiently significant quantities can be treated as deterministic; the analysis of noise is
the subject of statistics.
Statistics is based on the idea of obtaining information about noisy quantities of a system by
repeated sampling from a population² of independent systems with identical preparation,
but differing in noisy details not controllable by the preparation. In the present context,
such systems are described by the same Euclidean ∗-algebra E_0, the same set of quantities
to be sampled, and the same state ⟨·⟩_0.
More precisely, the systems may be regarded as subsystems of a bigger system (e.g., the
laboratory) whose set of quantities is given by a big Euclidean ∗-algebra E. To model identically prepared subsystems we consider injective homomorphisms from E_0 into E mapping
each reference quantity f ∈ E_0 to the quantity f_l ∈ E of the l-th subsystem considered to
be identical with f. Of course, in terms of the big system, the f_l are not really identical;
they refer to quantities distinguished by position and/or time. That the subsystems are
identically prepared is instead modelled by the assumption
⟨f_l⟩ = ⟨f⟩_0 for all f ∈ E_0,    (10.5)

and that quantities belonging to different subsystems are uncorrelated,

⟨f_j^∗ g_k⟩ = ⟨f⟩_0^∗ ⟨g⟩_0 for j ≠ k and f, g ∈ E_0.    (10.6)

The mean quantity

f̂ := (1/N) Σ_{l=1}^{N} f_l

then satisfies

⟨f̂⟩ = ⟨f⟩_0,   σ(f̂) = σ(f_0)/√N,    (10.7)

the weak law of large numbers. Indeed, (10.5) implies

⟨f̂⟩ = (1/N) Σ_l ⟨f_l⟩ = ⟨f⟩_0.

Now

⟨f̂^∗ f̂⟩ = (1/N²) ⟨( Σ_j f_j )^∗ ( Σ_k f_k )⟩ = (1/N²) Σ_{j,k} ⟨f_j^∗ f_k⟩,    (10.8)

so that, with μ := ⟨f⟩_0 and σ := σ(f_0),

⟨f̂^∗ f̂⟩ = N^{-2}( N(|μ|² + σ²) + (N² − N)|μ|² ) = N^{-1}σ² + |μ|²,

giving σ(f̂)² = ⟨f̂^∗ f̂⟩ − |⟨f̂⟩|² = σ²/N.

² Physicists usually speak of an ensemble in place of a population; but since in connection with the
microcanonical, canonical, or grand canonical ensemble we use the term ensemble synonymously with state,
we prefer the statistical term population to keep the discussion unambiguous.
As a significant body of work in probability theory shows, the conditions under which
σ(f̂) → 0 as N → ∞ can be significantly relaxed; thus in practice, it is sufficient if (10.5)
and (10.6) are approximately valid.
The significance of the weak law of large numbers lies in the fact that σ(f̂) in (10.7) becomes
arbitrarily small as N becomes sufficiently large. Thus the uncertainty of quantities when
averaged over a large population of identically prepared systems becomes arbitrarily small
while the mean value reproduces the value of each quantity. Thus quantities averaged over
a large population of identically prepared systems become highly significant when their
value is nonzero, even when no single quantity is significant.
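The 1/√N scaling in (10.7) is easily checked by simulation; the following sketch (with an arbitrary Gaussian single-system distribution as a classical stand-in for identically prepared systems) estimates σ(f̂) over many independent populations:

    import numpy as np

    rng = np.random.default_rng(0)
    sigma0 = 1.0                        # standard deviation of a single quantity

    for N in (10, 100, 1000, 10000):
        # 2000 independent populations, each of N identically prepared systems
        samples = rng.normal(loc=0.5, scale=sigma0, size=(2000, N))
        fhat = samples.mean(axis=1)     # mean quantity of each population
        print(f"N = {N:6d}:  sigma(fhat) = {fhat.std():.4f}"
              f"  (predicted {sigma0 / np.sqrt(N):.4f})")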
(and only in this case), res(g) becomes the standard deviation of g, divided by the absolute value of the expectation; therefore, it measures the relative accuracy of the individual
realizations.
On the other hand, in equilibrium thermodynamics, where a tiny number of macroscopic
observations on a single system completely determine its state to engineering accuracy,
such a frequentist interpretation is inappropriate. Indeed, as discussed by Sklar [254],
a frequentist interpretation of statistical mechanics has significant foundational problems,
already in the framework of classical physics.
Thus, the present framework captures correctly the experimental practice, and determines
the conditions under which deterministic and statistical reasoning are justified:
Deterministic reasoning is sufficient for all quantities whose limit resolution is below the
relative accuracy desired for a given description level.
Statistical reasoning is necessary precisely when the limit resolution of certain quantities is
larger than the desired relative accuracy, and these quantities are sufficiently identical and
independent to ensure that the limit resolution of their mean is below this accuracy.
In this way, we delegate statistics to its role as the art of interpreting measurements, as
in classical physics. Indeed, to have a consistent interpretation, real experiments must
be designed such that they allow one to determine approximately the properties of the
state under study, hence the values of all quantities of interest. The uncertainties in the
experiments imply approximations, which, if treated probabilistically, need an additional
probabilistic layer accounting for measurement errors. Expectations from this secondary
layer, which involve probabilistic statements about situations that are uncertain due to
neglected but in principle observable details (cf. Peres [217]), happen to have the same
formal properties as the values on the primary layer, though their physical origin and
meaning is completely different.
Classical probability. Apart from the traditional axiomatic foundation of probability
theory by Kolmogorov [156] in terms of measure theory there is a less well-known axiomatic treatment by Whittle [288] in terms of expectations, which is essentially the
commutative case of the present setting. The exposition in Whittle [288] (or, in more
abstract terms, already in Gelfand & Naimark [100]) shows that, if the X_j are pairwise
commuting, it is possible to define, for any Gibbs state in the present sense, random variables X_j in Kolmogorov's sense such that the expectation of all sufficiently regular functions
f(X) defined on the joint spectrum of X agrees with the value of f. It follows that in
the pairwise commuting case, it is always possible to construct a probability interpretation
for the quantities, completely independent of any assumed microscopic reality.
The details (which the reader unfamiliar with measure theory may simply skip) are as
follows. We may associate with every vector X of quantities with commuting components
a time-dependent, monotone linear functional ⟨·⟩_t defining the expectation ⟨f(X)⟩_t
at time t of arbitrary bounded continuous functions f of X. These functions define a commutative ∗-algebra E(X). The spectrum Spec X of X is the set of all ∗-homomorphisms
(often called characters) from E(X) to C, and has the structure of a Hausdorff space, with
the weak-∗ topology obtained by calling a subset S of Spec X closed if, for any pointwise
convergent sequence (or net) contained in S, its limit is also in S. Now a monotone linear
functional turns out to be equivalent to a multivariate probability measure dμ_t(X) (on the
sigma algebra of Borel subsets of the spectrum of X) defined by

∫ dμ_t(X) f(X) := ∫ ρ(t) f(X) = ⟨f(X)⟩_t.
Conversely, classical probability theory may be discussed in terms of the Euclidean ∗-algebra
of random variables, i.e., Borel measurable complex-valued functions on a Hausdorff
space where bounded continuous functions are strongly integrable and the integral is
given by ∫f := ∫ dμ(X) f(X) for some distinguished measure μ.
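In finite dimensions this correspondence is elementary and can be made completely explicit. The following sketch (with an arbitrary diagonal example) verifies that, for commuting Hermitian quantities, the value of f(X) in a state equals the expectation of f with respect to the induced probability measure on the joint spectrum:

    import numpy as np

    # Two commuting Hermitian quantities, simultaneously diagonal here.
    X1 = np.diag([0.0, 0.0, 1.0, 1.0])
    X2 = np.diag([0.0, 1.0, 0.0, 1.0])

    # A state given by a density matrix diagonal in the same basis; the joint
    # spectrum consists of the pairs (X1_ii, X2_ii) with probabilities rho_ii.
    rho = np.diag([0.4, 0.3, 0.2, 0.1])
    x1, x2, p = np.diag(X1), np.diag(X2), np.diag(rho)

    def f(a, b):                  # any regular function on the joint spectrum
        return np.exp(a) + a * b

    value_in_state = np.trace(rho @ np.diag(f(x1, x2))).real   # <f(X)>
    expectation = np.sum(p * f(x1, x2))                        # E_mu[f]
    print(value_in_state, expectation)                         # agree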
10.4
Classical measurements
When measuring classical or quantum systems that are macroscopic, i.e., large enough to
be described sufficiently well by the methods of statistical mechanics, one measures more
or less accurately extensive or intensive variables of the system and one obtains essentially
deterministic results. A classical instrument is a measuring instrument that measures
such deterministic values within some known margin of accuracy. Note that this gives
an operational meaning to the term classical, although every classical instrument is, of
course, a quantum mechanical many-particle system when modelled in full detail. Whether
a particular instrument behaves classically can in principle be found out by an analysis of
the measurement process considered as a many-particle system, although the calculations
can be done in practice only under simplifying assumptions. For some concrete models,
see, e.g., Allahverdyan et al. [9]. Thus there is no split between the classical and the
quantum world but a gradual change from quantum to classical as the system gets larger
and the limit resolution improves.
It is interesting to discover the nature of thermodynamic observables³. We encountered
intensive variables, which are parameters characterizing the state of the system, extensive
variables, values that are functions of the intensive variables and of the parameters (if
there are any) in the Hamiltonian, and limit resolutions, which, as functions of values, are
also functions of the intensive variables. Thus all thermodynamic observables of practical
interest are functions of the parameters defining the thermal state or the Hamiltonian of
the system. Which parameters these are depends of course on the assumed model.
For an arbitrary model of an arbitrary system we perform a natural step of extrapolation,
substantiated later (in Section 19.1) by the Dirac-Frenkel variational principle, and take
the parameters characterizing a family of Hamiltonians and a family of states that describe
the possible states of a system as the basic variables. We call these parameters the model
parameters; the values of the model parameters completely characterize a particular system described by the model. An observable of the model is then a function of these basic
variables.
Thus we may say that a classical instrument is characterized by the fact that upon measurement the measurement result approximates with a certain accuracy the value of a function
F of the model parameters. As customary, one writes the result of a measurement as an
uncertain number F_0 ± ΔF consisting of a main value F_0 and a deviation ΔF, with the
meaning that the error |F_0 − F| is at most a small multiple of ΔF. Because of possible
systematic errors, it is generally not possible to interpret F_0 as mean value and ΔF as
standard deviation. Such an interpretation is valid only if the instrument is calibrated to
satisfy the implied statistical relation.
In particular, since ⟨f⟩ is a function of the model parameters, a measurement may yield
the value ⟨f⟩ of a quantity f, and is then said to be a classical instrument for measuring
f . As an important special case, all readings from a photographic image or from the scale
of a measuring instrument, done by an observer, are of this nature when considered as
measurements of the instrument by the observer. Indeed, what is measured by the eye
is the particle density of blackened silver on a photographic plate or of iron of the tip of
³ We use the term observable with its common-sense meaning. In quantum mechanics, the term also has
a technical meaning that we do not use, denoting there a self-adjoint linear operator on a Hilbert space.
the pointer on the scale, and these are extensive variables in a continuum mechanical local
equilibrium description of the instrument.
The measurement of a tiny, microscopic system, often consisting of only a single particle,
is of a completely different nature. Now the limit resolutions do not benefit from the law
of large numbers, and the relevant quantities often are no longer significant. Then the
necessary quantitative relations between properties of the measured system and the values
read off from the measuring instrument are only visible as stochastic correlations. In a
single measurement of a microscopic system, one can only glean very little information
about the state of a system; conversely, from the state of the system one can predict only
probabilities for the results of a single measurement. The results of single measurements
are no longer reproducibly observable numbers; reproducibly observable, and hence carriers
of scientific information, are only probabilities and statistical mean values.
To obtain comprehensive information about the state of a single microscopic system is
therefore impossible. To collect enough information about the prepared state and hence
the state of each system measured, one needs either time-resolved measurements on a
single system (available, e.g., for atoms in ion traps or for electrons in quantum dots), or a
population of identically prepared systems.
Extrapolating from the macroscopic case, it is natural to consider again the parameters
characterizing a family of states that describe the possible states of a system as the basic
numbers whose functions define observables in the present, nontechnical sense. This is now
a less well-founded assumption based only on the lack of a definite boundary between the
macroscopic and the microscopic regime, and an application of Ockham's razor to minimize
the needed assumptions.
Measurements in the form of clicks, flashes or events (particle tracks) in scattering experiments may be described in terms of a statistical instrument characterized by a discrete
family of possible measurement results a_1, a_2, . . . that may be real or complex numbers,
vectors, or fields, and nonnegative Hermitian quantities P_1, P_2, . . . satisfying

P_1 + P_2 + . . . = 1,    (10.9)

such that the measurement result a_k is obtained with probability

p_k = ⟨P_k⟩    (10.10)

if the measured system is in the state ⟨·⟩. The nonnegativity of the P_k implies that all
probabilities are nonnegative, and (10.9) guarantees that the probabilities always add up
to 1.
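A statistical instrument is easy to simulate once the P_k and the state are specified. The sketch below uses an arbitrarily chosen two-level example: it checks (10.9), computes the probabilities p_k, and draws simulated readings:

    import numpy as np

    rng = np.random.default_rng(1)

    # Three nonnegative Hermitian quantities summing to 1 (a qubit example).
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
    minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
    P = [0.5 * np.outer(plus, plus.conj()),
         0.5 * np.outer(minus, minus.conj()),
         0.5 * np.eye(2, dtype=complex)]
    assert np.allclose(sum(P), np.eye(2))            # condition (10.9)

    rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # the measured state
    probs = np.array([np.trace(rho @ Pk).real for Pk in P])  # p_k = <P_k>
    print("p_k =", np.round(probs, 4), " sum =", probs.sum())

    print("simulated readings:", rng.choice(len(P), size=10, p=probs))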
An instructive example is the photoelectric effect, the measurement of a classical free
electromagnetic field by means of a photomultiplier. A detailed discussion is given in
Chapter 9 of Mandel & Wolf [181]; here we only give an informal summary of their
account.
10.5
Quantum probability
An important special case is that in which the P_k are orthogonal projectors,

P_j P_k = 0 for j ≠ k,

onto the eigenspaces of a self-adjoint operator A (or the components of a vector A of commuting, self-adjoint operators) with discrete spectrum given by a_1, a_2, . . .. In this case, the
statistical instrument is said to perform an ideal measurement of A, and the rule (10.10)
defining the probabilities is called Born's rule. The rule is named after Max Born [46],
who derived it in 1926 in the special case of pure states (defined in (9.60)), and was rewarded
in 1954 with the Nobel prize for this, at that time, crucial insight into the nature of quantum
mechanics.
Ideal measurements of A have quite strong properties since, under the stated assumptions,
the instrument-based statistical average

f̄(A) = p_1 f(a_1) + p_2 f(a_2) + . . .

agrees for all functions f defined on the spectrum of A with the model-based value ⟨f(A)⟩.
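Numerically, this agreement is just the spectral theorem; a small sketch with an arbitrary Hermitian A and state ρ:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 0.0]])       # Hermitian quantity
    rho = np.array([[0.6, 0.1], [0.1, 0.4]])     # some state

    a, U = np.linalg.eigh(A)                     # eigenvalues and eigenvectors
    p = np.array([U[:, k] @ rho @ U[:, k] for k in range(len(a))])  # p_k

    f = np.cos                                   # any function on Spec A
    instrument_average = np.sum(p * f(a))        # p_1 f(a_1) + p_2 f(a_2)
    model_value = np.trace(rho @ (U @ np.diag(f(a)) @ U.T))  # <f(A)>
    print(instrument_average, model_value)       # agree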
On the other hand, these strong properties are bought at the price of idealization, since they
result in effects incompatible with real measurements. For example, according to Born's
rule, the ideal measurement of the energy of a system whose Hamiltonian H is discrete always yields an exact eigenvalue of H; the only statistical element is the question which
of the eigenvalues is obtained. This is impossible in a real measurement; the precise measurement of the Lamb shift, a difference of eigenvalues of the Hamiltonian of the hydrogen
atom, was even worth a Nobel prize (1955 for Willis Lamb).
In general, the correspondence between values and eigenvalues is only approximate, and
the quality of the approximation improves with improved resolution. The correspondence
is perfect only at resolution zero, i.e., for completely sharp measurements. To discuss this
in detail, we need some results from functional analysis. The spectrum Spec f of a linear
operator f on a Euclidean space H is the set of all λ ∈ C for which no linear operator R(λ)
from the completion H̄ of H to H exists such that (λ − f)R(λ) is the identity. Spec f
is always a closed set. A linear operator f ∈ Lin H is called essentially self-adjoint
if it is Hermitian and its spectrum is real (i.e., a subset of R). For N-level systems,
where H is finite-dimensional, the spectrum coincides with the set of eigenvalues, and every
Hermitian operator is essentially self-adjoint. In infinite dimensions, the spectrum contains
the eigenvalues, but not every number in the spectrum must be an eigenvalue; and whether
a Hermitian operator is essentially self-adjoint is a question of correct boundary conditions.
This implies that a set of N² − 1 tests for specific states, repeated often enough, suffices
for the state determination. Indeed, it is easy to see that repeated tests for the states e_j,
the unit vectors with just one entry one and other entries zero, test the diagonal elements
of the density matrix, and since the trace is one, one of these diagonal elements can be
computed from the knowledge of all others. Tests for e_j + e_k and e_j + i e_k for all j < k
then allow the determination of the (j, k) and (k, j) entries. Thus frequent repetition of
a total of N − 1 + N(N − 1) = N² − 1 particular tests determines the full state. The optimal
reconstruction to a given accuracy, using a minimal number of individual measurements, is
again a nontrivial problem of quantum estimation theory.
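The counting argument translates directly into a reconstruction procedure. The following sketch (for an arbitrary random density matrix with N = 3) computes the test probabilities from exact values and rebuilds the state; in practice each probability would of course carry the statistical uncertainty (10.7):

    import numpy as np

    N = 3
    rng = np.random.default_rng(2)
    B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = B @ B.conj().T
    rho /= np.trace(rho)                       # a random density matrix

    def test(v):
        # probability that the test for the state v (normalized) succeeds
        v = v / np.linalg.norm(v)
        return (v.conj() @ rho @ v).real

    e = np.eye(N)
    d = np.array([test(e[j]) for j in range(N)])          # diagonal entries

    rec = np.diag(d).astype(complex)
    for j in range(N):
        for k in range(j + 1, N):
            re = test(e[j] + e[k]) - 0.5 * (d[j] + d[k])       # Re rho_jk
            im = 0.5 * (d[j] + d[k]) - test(e[j] + 1j * e[k])  # Im rho_jk
            rec[j, k] = re + 1j * im
            rec[k, j] = re - 1j * im

    print("max error:", np.abs(rec - rho).max())           # ~ 1e-16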
10.6
Entropy and information theory
The concept of entropy also plays an important role in information theory. To connect
the information theoretical notion of entropy with the present setting, we present in this
section an informal example of a simple stochastic model in which the entropy has a natural
information theoretical interpretation. We then discuss what this may teach us about a
non-stochastic macroscopic view of the situation.
We assume that we have a simple stationary device that, in regular intervals, delivers
a reading n from a countable set N of possible readings. For example, the device might
count the number of events of a certain kind in fixed periods of time; then N = {0, 1, 2, . . .}.
We suppose that, by observing the device in action for some time, we are led to some
conjecture about the (expected) relative frequencies p_n of readings n ∈ N; since the device
is stationary, these relative frequencies are independent of time. If N is finite and not
too large, we might take averages and wait until these stabilize to a satisfactory degree; if
N is large or infinite, most n ∈ N will not have been observed, and our conjecture must
depend on educated guesses. (The appropriateness of the conjecture, the relation to the
knowledge of the guesser, and how to improve a conjecture when new information arrives
are the subject of Bayesian statistics; cf. Section 10.7.)
Clearly, in order to have a consistent interpretation of the pn as relative frequencies, we
need to assume that each reading is possible:
p_n > 0 for all n ∈ N,    (10.13)

and some reading occurs with certainty:

Σ_{n∈N} p_n = 1.    (10.14)
For reasons of economy, we shall not allow p_n = 0 in (10.13), which would correspond to
readings that are either impossible, or occur too rarely to have a scientific meaning. Clearly,
this is no loss of generality.
Knowing relative frequencies only means that (when |N| > 1) we only have incomplete
information about future readings of the device. We want to calculate the information
deficit by counting the expected number of questions needed to identify a particular reading
unknown to us, but known to someone else who may answer our questions with yes or no.
Consider arbitrary strategies s for asking questions, and denote by s_n the number of questions needed to determine the reading n with strategy s. Since there are two possible
answers for each question, we can distinguish with q questions at most 2^q different cases.
However, since reading n is assumed to be determined after s_n questions, the answers to
the later questions do not matter, and reading n is obtained in 2^{q−s_n} of the 2^q cases when
s_n ≤ q. Thus, no matter which strategy is used,

Σ_{n∈N, s_n ≤ q} 2^{q−s_n} ≤ 2^q.    (10.15)
Since we do not know in advance the reading, we cannot determine the precise number
of questions needed in a particular unknown case. However, knowledge of the relative
frequencies allows us to compute the average number of questions needed, namely

s̄ = Σ_{n∈N} p_n s_n.    (10.16)
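A nearly optimal question strategy can be produced by Huffman's construction; the sketch below generates the question counts s_n for a conjectured p, verifies (10.15) in the equivalent form Σ_n 2^{−s_n} ≤ 1 (Kraft's inequality), and compares the average s̄ with the entropy bound derived below:

    import heapq
    import numpy as np

    p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])    # conjectured frequencies

    def huffman_lengths(p):
        # question counts s_n of a Huffman strategy (binary yes/no tree)
        heap = [(w, [n]) for n, w in enumerate(p)]
        heapq.heapify(heap)
        s = np.zeros(len(p), dtype=int)
        while len(heap) > 1:
            w1, n1 = heapq.heappop(heap)
            w2, n2 = heapq.heappop(heap)
            for n in n1 + n2:
                s[n] += 1                        # one more question to split
            heapq.heappush(heap, (w1 + w2, n1 + n2))
        return s

    s = huffman_lengths(p)
    print("s_n       =", s)
    print("Kraft sum =", np.sum(2.0 ** -s))          # <= 1, cf. (10.15)
    print("s_bar     =", np.sum(p * s))              # average number of questions
    print("S_bar     =", -np.sum(p * np.log2(p)))    # entropy lower bound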
It is convenient to define the integral

∫f := Σ_{n∈N} f_n    (10.17)

for every quantity f indexed by the elements from N, and we use the convention that
inequalities, operations and functions of such quantities are understood componentwise.
Then we can rewrite (10.13)–(10.16) as

p > 0,   ∫p = 1,    (10.18)

∫2^{−s} ≤ 1,    (10.19)

s̄ = ⟨s⟩, where f̄ = ⟨f⟩ := ∫pf.    (10.20)
The entropy of the device, relative to the conjecture p, is the quantity S defined by

S := −k log p, where k = 1/log 2;    (10.21)

it satisfies S̄ ≤ s̄ for every strategy s, with equality if and only if s = S. (One also needs
∫2^{−s} ≤ 1, but this holds for s = S.)

Proof. (10.21) implies log p = −S log 2, hence p = 2^{−S}. Therefore

2^{−s} = p 2^{S−s} = p e^{(S−s) log 2} ≥ p(1 + (S − s) log 2),

with equality iff S = s. Thus

p(S − s) ≤ (1/log 2)(2^{−s} − p) = k(2^{−s} − p),

and

S̄ − s̄ = ∫p(S − s) ≤ k∫(2^{−s} − p) = k∫2^{−s} − k∫p ≤ k − k = 0.

Since ∫2^{−S} = ∫p = 1, we have 2^{−S_n} ≤ 1, hence S_n ≥ 0 for all n ∈ N. Thus, the entropy S is the unique optimal
decision strategy. The expected entropy, i.e., the mean number of questions needed,

S̄ = ⟨S⟩ = ∫pS = −k∫p log p,    (10.22)

measures the information deficit of the device with respect to our conjecture about
relative frequencies. Traditionally, the expected entropy is simply called the entropy, while
we reserve this word for the random variable (10.21). Also commonly used is the name
information for S, which invites linguistic paradoxes since ordinary language associates with
information a connotation of relevance or quality that is absent here. The classical book
on information theory by Brillouin [51] emphasizes this very carefully, by distinguishing
absolute information from its human value or meaning. Katz [147] uses the phrase missing
information.
The information deficit says nothing at all about the quality of the information contained
in the summary p of our past observations. An inappropriate p can have arbitrarily small
information deficit and still give a false account of reality. For example, if for some small
ε > 0,
p_n = ε^{n−1}(1 − ε) for n = 1, 2, . . . ,    (10.24)

expressing that the reading is expected to be nearly always 1 (p_1 = 1 − ε) and hardly ever
large, then

S̄ = −k log(1 − ε) + k ε/(1 − ε) · log(1/ε) → 0 as ε → 0.

Thus the information deficit can be made very small by the choice (10.24) with small ε,
independent of whether this choice corresponds to the known facts. The real information
value of p depends instead on the care with which the past observations were interpreted,
which is a matter of data analysis and not of our model of the device. If the data analysis is
done poorly, the resulting expectations will simply not be matched by reality. This shows
that the entropy reflects objective properties of the stochastic process, and, contrary to
claims in the literature, has nothing to do with our knowledge of the system, a subjective,
ill-defined notion.
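The claim is easy to check numerically; the following sketch evaluates the expected entropy of the distribution (10.24), truncated at large n, for decreasing ε:

    import numpy as np

    def expected_entropy(eps, nmax=100000):
        n = np.arange(1, nmax + 1)
        p = eps ** (n - 1) * (1.0 - eps)      # the distribution (10.24)
        p = p[p > 0]                          # discard underflowed tail terms
        return -np.sum(p * np.log2(p))        # in bits, i.e., with k = 1/log 2

    for eps in (0.5, 0.1, 0.01, 0.001):
        print(f"eps = {eps:6.3f}:  S_bar = {expected_entropy(eps):.5f}")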
Relations to thermodynamics. Now suppose that the above setting happens at a very
fast, unobservable time scale, so that we can actually observe only short time averages
(10.20) of quantities of interest. Then f̄ = ⟨f⟩ simply has the interpretation of the time-independent observed value of the quantity f. The information deficit simply becomes
the observed value of the entropy S. Since the information deficit counts the number
of optimal decisions needed to completely specify a (microscopic) situation of which we
know only (macroscopic) observed values, the observed value of the entropy quantifies the
intrinsic (microscopic) complexity present in the system.
However, the unobservable high frequency fluctuations of the device do not completely
disappear from the picture. They show up in the fact that generally ⟨g⟩² ≠ ⟨g²⟩, leading to
a nonzero limit resolution (8.45) of Hermitian quantities. This is precisely the situation
characteristic of the traditional treatment of thermodynamics within classical equilibrium
statistical mechanics, if we assume that the system is ergodic, i.e., that population averages equal time averages. Then, all observed values are time-independent, described
by equilibrium thermal variables. But the underlying high-frequency motions of the atoms
making up a macroscopic substance are revealed by nonzero limit resolutions. However, the
assumption that all systems for which thermodynamics works are ergodic is problematic;
see, e.g., the discussion in Sklar [254].
Note that even a deterministic but chaotic high frequency dynamics, viewed at longer time
scales, looks stochastic, and exactly the same remarks about the unobservable complexity
and the observable consequences of fluctuations apply. Even if fluctuations are observable
directly, these observations are intrinsically limited by the necessary crudity of any actual
measurement protocol. For the best possible measurements (and only for these), the resolution of f in the experiment is given by the limit resolution res(f ), the size of the unavoidable
fluctuations.
10.7
Subjective probability
The formalism of statistical mechanics is closely related to that used in statistics for random
phenomena expressible in terms of exponential families; cf. Remark 9.1.2(viii). Exponential
families play an important role in Bayesian statistics. Therefore a Bayesian, subjective
probability interpretation of statistical mechanics is possible in terms of the knowledge of
an observer, using an information theoretic approach. See, e.g., Balian [19] for a recent
exposition in terms of physics, and Barndorff-Nielsen [25, 26] for a formal mathematical
treatment. In such a treatment, the present integral plays the role of a noninformative
prior, i.e., of the state considered to be least informative.
This noninformative prior is
often improper, i.e., not a probability distribution, since ∫1 need not be defined.
Motivated by the subjective, information theoretic approach to probability, Jaynes [137,
138] used the maximum entropy principle to derive the thermodynamic formalism. The
maximum entropy principle asserts that one should model a system with the statistical
distribution that maximizes the expected entropy subject to the known information about
certain expectation values. This principle is sometimes considered as a rational, unprejudiced way of accounting for available information in incompletely known statistical models.
Based on Theorem 8.3.3, it is not difficult to show that when the known information is
given by the expectations of the quantities X1 , . . . , Xn , the optimal state in the sense of
the maximum entropy principle is a Gibbs state whose entropy is a linear combination of 1
and the Xk .
However, the maximum entropy principle is an unreliable general purpose tool, and gives
an appropriate distribution only under quite specific circumstances.
10.7.1 Example. If we have information in the form of a large but finite sample of realizations x(ω_k) of a random variable x in N independent experiments ω_k (k = 1, . . . , N),
we can obtain approximate information about all moments ⟨x^n⟩ (n = 0, 1, 2, . . .) by taking
the appropriate sample means,

⟨x^n⟩ ≈ Σ_k w_k x(ω_k)^n / Σ_k w_k   (n = 0, 1, 2, . . .),

where the w_k are appropriate positive weights (typically chosen such that the experimental
errors in w_k x(ω_k) are approximately constant). It is not difficult to see that the maximum
entropy principle would infer that the distribution of x is discrete, namely that of the sample
distribution.
If we take as uninformative prior for a real-valued random variable x the Lebesgue measure,
∫f := ∫f(x)dx, and only know that the mean of x is 1, say, the maximum entropy
principle does not produce a sensible probability distribution. If we add the knowledge of
the second moment ⟨x²⟩ = 3/2, say, we get a Gaussian distribution with mean 1 and standard
deviation 1/√2. Adding the further knowledge of ⟨x³⟩, the maximum entropy principle
fails again to produce a sensible distribution. If, on the other hand, after knowing that
⟨x⟩ = 1 we learn that the random variable is in fact nonnegative and integer-valued, this
cannot be accounted for by the principle, and the probability of obtaining a negative value
remains large. But if we take as prior the discrete measure on nonnegative integers defined
by ∫f := Σ_{x=0}^∞ f(x)/x!, the supposedly noninformative prior has become much more
informative; the knowledge of the mean now produces via the maximum entropy principle a
Poisson distribution.
If we know that a random variable x is nonnegative and has ⟨x²⟩ = 1, the Lebesgue measure
on R_+ as noninformative prior gives for x a distribution with density √(2/π) e^{−x²/2}. But we
can consider instead our knowledge about y = x², which is nonnegative and has ⟨y⟩ = 1; the
same noninformative prior now gives for y a distribution with density e^{−y}. The distribution
of x = √y resulting from this has density 2x e^{−x²}. Thus the result depends on whether
we regard x or y as the relevant variable.
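The prior dependence is easy to exhibit numerically. The sketch below computes the maximum entropy distribution p ∝ m e^{−βx} for the constraint ⟨x⟩ = 1 on the nonnegative integers, once with the counting measure m_x = 1 and once with the measure m_x = 1/x! discussed above (β is fixed by bisection on the mean); the two results, a geometric and a Poisson distribution, differ markedly:

    import numpy as np
    from math import factorial

    x = np.arange(0, 60)                       # truncated support

    def maxent_mean1(m):
        # maximize -sum p log(p/m) s.t. sum p = 1, sum x p = 1;
        # the solution is p propto m * exp(-beta*x), beta fixed by bisection
        def mean(beta):
            w = m * np.exp(-beta * x)
            return np.sum(x * w) / np.sum(w)
        lo, hi = -5.0, 50.0                    # mean(lo) > 1 > mean(hi)
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if mean(mid) > 1.0:
                lo = mid
            else:
                hi = mid
        w = m * np.exp(-0.5 * (lo + hi) * x)
        return w / w.sum()

    m_count = np.ones_like(x, dtype=float)                # counting measure
    m_fact = np.array([1.0 / factorial(k) for k in x])    # prior 1/x!
    print("counting prior:", np.round(maxent_mean1(m_count)[:5], 4))  # geometric
    print("1/x! prior:    ", np.round(maxent_mean1(m_fact)[:5], 4))   # Poisson(1)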
We see that the choice of expectations to be used as constraints reflects prior assumptions
about which expectations are likely to be relevant. Moreover, the prior, far from being
uninformative, reflects the prejudice assumed in the complete absence of knowledge. The
prior that must be assumed to describe the state of complete ignorance significantly affects
the results of the maximum entropy principle, and hence makes the application of the
principle ambiguous.
The application of the maximum entropy principle becomes reliable only if the information
is available in form of the expectation values of a sufficient statistics of the true model;
see, e.g., Barndorff-Nielsen [25]. Which statistical model may be considered sufficient
depends on the true situation and is difficult to assess in advance.
In particular, a Bayesian interpretation of statistical mechanics in the manner of Jaynes is
appropriate if and only if

• correct, complete and sufficiently accurate information about the expectation of all
relevant quantities is assumed to be known, and

• the noninformative prior is fixed by the constructions of Example 8.1.8, namely as the
correctly weighted Liouville measure in classical physics and as the microcanonical
ensemble (the trace) in quantum physics.
Only this guarantees that the knowledge assumed, and hence the results obtained, are completely impersonal and objective, as required for scientific results, and agree with standard
thermodynamics, as required for agreement with nature. However, this kind of knowledge
is clearly completely hypothetical and has nothing to do with the real, partial and imprecise
knowledge of real observers.
Part III
Lie algebras and Poisson algebras
Chapter 11
Lie algebras
Part III introduces the basics about Lie algebras and Lie groups, with an emphasis on the
concepts most relevant to the conceptual side of physics.
This chapter introduces Lie algebras together with the slightly richer structure of a Poisson algebra usually encountered in the mechanical applications. We introduce tools for verifying
the Jacobi identity, and establish the latter both for the Poisson bracket of a classical
harmonic oscillator and, for quantum systems, for the commutator of linear operators.
Further Lie algebras arise as algebras of matrices closed under commutation, as algebras
of derivations in associative algebras, as centralizers or quotient algebras, and by complexification. An overview of semisimple Lie algebras and their classification concludes the
chapter.
In finite dimensions, the relation is almost one-to-one, the almost being due to the fact
that the so-called universal covering group of a finite-dimensional Lie algebra (defined in
Section 13.4) may have a nontrivial discrete normal subgroup.
Many finite-dimensional Lie groups arise as groups of square invertible matrices, and we
discuss the most important families, in particular the unitary and the orthogonal groups.
We introduce group representations, which relate groups of matrices (or linear operators) to
abstract Lie groups, and will turn out to be most important for understanding the spectrum
of quantum systems.
Of particular importance for systems of oscillators are the Heisenberg groups, the universal
covering groups of the Heisenberg algebras. Their product law is given by the famous
Weyl relations, which are an exactly representable case of the BakerCampbellHausdorff
formula valid for many other Lie groups, in particular for arbitrary finite-dimensional ones.
We also discuss the Poincare group. This is the symmetry group of space-time, and forms
the basis for relativity theory.
11.1
Basic definitions
We start with the definition of a Lie algebra over a field K, usually implicitly given by the
context. In our course, K is either the field C of complex numbers or, occasionally, the field
R of real numbers. Lie algebras over other fields, such as the rationals Q or finite fields Z_p
for p prime, also have interesting applications in mathematics, physics and engineering, but
these are outside the scope of this book. To denote the Lie product, we use the symbol ∠
introduced at the end of Section 1.3. (This replaces other, bracket-based notations common
in the literature.)
11.1.1 Definition.
(i) A Lie product on a vector space L over K is a bilinear operation on L satisfying

(L1) f∠f = 0,

(L2) f∠(g∠h) + g∠(h∠f) + h∠(f∠g) = 0 for all f, g, h ∈ L.

Equation (L2) is called the Jacobi identity.

(ii) For subsets A, B of L, we write

A∠B := {f∠g | f ∈ A, g ∈ B},

and for f, g ∈ L,

A∠g := A∠{g},   f∠B := {f}∠B.
(iii) A Lie algebra over K is a vector space L over K with a distinguished Lie product.
Elements f ∈ L with f∠L = 0 are called (Lie) central; the set Z(L) of all these elements
is called the center of L. A real (complex) Lie algebra is a Lie algebra over K = R (resp.
K = C). Unless confusion is possible, we use the same symbol for the Lie product in
different Lie algebras.
Clearly, if f∠g defines a Lie product of f and g, so does f∠_λ g := λ(f∠g) for all λ ∈ K. Thus
the same vector space may be a Lie algebra in different ways.
In physics, finite-dimensional Lie algebras are often defined in terms of basis elements X_k
called generators and structure constants c_{jkl}, such that
X_j ∠ X_k = Σ_l c_{jkl} X_l.    (11.1)
By taking linear combinations and using the bilinearity of the Lie product, the structure
constants determine the Lie product completely. Conversely, since the generators form a
basis, the structure constants are determined uniquely by the basis. They depend, however,
on the basis chosen. Frequently, there are distinguished bases with a physical interpretation
in which the structure constants are particularly simple, and most of them vanish. If a
basis and the structure constants are given, many Lie algebra computations can be done
automatically; important software packages include LIE (van Leeuwen et al. [278]) and
LTP (Torres-Torriti [269]). In this book, we usually prefer a basis-free approach.
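Such computations are also easy to set up directly. As a small sketch (not one of the packages just cited), the following checks antisymmetry and the Jacobi identity for structure constants stored as an array c[j,k,l], here those of so(3) with c_{jkl} = ε_{jkl}:

    import numpy as np
    from itertools import permutations

    # Structure constants of so(3): c[j,k,l] = epsilon_{jkl} (Levi-Civita).
    c = np.zeros((3, 3, 3))
    for j, k, l in permutations(range(3)):
        c[j, k, l] = np.linalg.det(np.eye(3)[[j, k, l]])  # permutation sign

    # (L1) in basis form: antisymmetry in the first two indices.
    assert np.allclose(c, -c.transpose(1, 0, 2))

    # (L2) in basis form: sum_m ( c[k,l,m] c[j,m,n] + c[l,j,m] c[k,m,n]
    #                            + c[j,k,m] c[l,m,n] ) = 0 for all j, k, l, n.
    jacobi = (np.einsum('klm,jmn->jkln', c, c)
              + np.einsum('ljm,kmn->jkln', c, c)
              + np.einsum('jkm,lmn->jkln', c, c))
    assert np.allclose(jacobi, 0)
    print("so(3) structure constants satisfy (L1) and (L2)")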
From (L1) and bilinearity, the Lie product is antisymmetric:

0 = (f + g)∠(f + g) = f∠f + f∠g + g∠f + g∠g = f∠g + g∠f.
Using the antisymmetry property of the Lie product one can write the Jacobi identity in
two other important forms, each equivalent with the Jacobi identity:

f∠(g∠h) = (f∠g)∠h + g∠(f∠h),    (11.2)

(f∠g)∠h = f∠(g∠h) − g∠(f∠h).    (11.3)

These formulas say that one can apply the Lie product to a compound expression in a
manner familiar from the product rule for differentiation.
An important but somewhat trivial class of Lie algebras are the abelian Lie algebras, where
f∠g = 0 for all f, g ∈ L. It is trivial to check that (L1) and (L2) are satisfied. Clearly, every
vector space can be turned into an abelian Lie algebra by defining f∠g = 0 for all vectors
f and g. In particular, the field K itself and the center of any Lie algebra are abelian Lie
algebras.

A subspace L′ of a Lie algebra L is a Lie subalgebra if it is closed under the Lie product,
i.e., if f∠g ∈ L′ for all f, g ∈ L′. In this case, the restriction of the Lie product of L to L′
turns L′ into a Lie algebra. That is, a Lie subalgebra is a subspace that is a Lie algebra with
the same Lie product. (For example, the subspace Kf spanned by an arbitrary element f
of a Lie algebra is an abelian Lie subalgebra.) A Lie subalgebra is nontrivial if it is not
the whole Lie algebra and contains a nonzero element.
The property (L1) is usually easy to check. It is harder to check the Jacobi identity (L2)
for a proposed Lie product; direct calculations can be quite messy when many terms have
to be calculated before one finds that they all cancel. Since we will encounter many Lie
products that must be verified to satisfy the Jacobi identity, we first develop some technical
machinery to make life easier, or at least more structured. For a given binary bilinear
operation ∗ on L, we define the associator of f, g, h ∈ L as

[f, g, h] := (f∗g)∗h − f∗(g∗h),    (11.4)

and the candidate Lie product

f∠g := f∗g − g∗f.    (11.5)
Proof. Define

J(f, g, h) := f∠(g∠h) + g∠(h∠f) + h∠(f∠g),

and define

S(f, g, h) := [f, g, h] + [g, h, f] + [h, f, g] − [f, h, g] − [h, g, f] − [g, f, h].

Writing out S(f, g, h) and J(f, g, h) with f∠g := f∗g − g∗f, one sees J(f, g, h) = S(f, g, h),
and hence if S(f, g, h) = 0 for all f, g and h, then the Jacobi identity is satisfied for all f, g
and h. The antisymmetry property f∠f = 0 is trivial.

For the bilinear operation f∗g := f_p g_q on smooth functions of p and q, the associator is

[f, g, h] = (f∗g)_p h_q − f_p (g∗h)_q = (f_p g_q)_p h_q − f_p (g_p h_q)_q
          = f_{pp} g_q h_q + f_p g_{qp} h_q − f_p g_{pq} h_q − f_p g_p h_{qq}
          = f_{pp} g_q h_q − f_p g_p h_{qq}.

Summing the six signed terms shows that S(f, g, h) = 0; hence the Poisson bracket
f∠g = f∗g − g∗f = f_p g_q − f_q g_p satisfies the Jacobi identity.
We end this section by introducing some concepts needed at various later points but collected here for convenience. If L and L′ are Lie algebras we call a linear map φ : L → L′ a
homomorphism (of Lie algebras) if

φ(f∠g) = φ(f)∠φ(g)

for all f, g ∈ L. Note that the left-hand side involves the Lie product in L, whereas the
right-hand side involves the Lie product in L′. An injective homomorphism is called an
embedding of L into L′. We call two Lie algebras L and L′ isomorphic if there is a
homomorphism φ : L → L′ and a homomorphism ψ : L′ → L such that ψ ∘ φ is the identity
on L and φ ∘ ψ is the identity on L′. Then φ is called an isomorphism, and ψ is the inverse
isomorphism.
In words, C_L(S) consists of all the elements in L that Lie commute with all elements in
S. One may use the Jacobi identity to see that C_L(S) is a Lie subalgebra of L.
An ideal of L is a subspace I ⊆ L such that f∠g ∈ I for all f ∈ L and for all g ∈ I; in
other notation, L∠I = I∠L ⊆ I. Note that 0 and L itself are always ideals; they are called
the trivial ideals. Also, the center of a Lie algebra is always an ideal. A less trivial ideal is
the derived Lie algebra L^{(1)} of L, consisting of all elements that can be written as a finite
sum of elements of L∠L. If I ⊆ L is an ideal in L one may form the quotient Lie algebra
L/I, whose elements are the equivalence classes [f] of all g ∈ L such that f − g ∈ I, with
addition, scalar multiplication, and Lie product given by

λ[g] := [λg],   [f] + [g] := [f + g],   [f]∠[g] := [f∠g].

It is well-known that the vector space operations are well-defined. The Lie product is well-defined since f′ ∈ [f] implies f − f′ ∈ I, hence (f − f′)∠g ∈ I and [f′]∠[g] = [f′∠g] =
[f′∠g + (f − f′)∠g] = [f∠g].

If L and L′ are Lie algebras, their direct sum L ⊕ L′ is the direct sum of the vector spaces
equipped with the Lie product defined by

(x + x′)∠(y + y′) := x∠y + x′∠y′

for all x, y ∈ L and all x′, y′ ∈ L′. It is easily verified that the axioms are satisfied.
11.2 Lie algebras from derivations
Equation (11.2),

f∠(gh) = (f∠g)h + g(f∠h),

resembles the product rule for (partial) differentiation:

∂/∂x (gh) = (∂g/∂x) h + g (∂h/∂x).
To make the similarity more apparent, we introduce for every element f ∈ L a linear operator
ad_f ∈ Lin L, the derivative in direction f, given by

ad_f g := f∠g.

The notation reflects the fact that the operator ad : L → Lin L defined by

ad(f) := ad_f

is the adjoint representation of L; see Sections 13.3 and 13.5.

Note that an element f ∈ L is in the center Z(L) = C_L(L) of L if and only if the linear
operator ad_f is zero.
For the Lie algebra C∞(R × R) with the bracket f∠g = fp gq − fq gp, the operator ad_f acts
as a first-order differential operator,

ad_f g = f∠g = fp gq − fq gp = fp ∂g/∂q − fq ∂g/∂p.   (11.6)
The vector field X_f on R × R defined by the coefficients of ad_f is called the Hamiltonian
vector field defined by f; cf. Chapter 12. In particular, the Hamiltonian derivative
operators with respect to p and q take the explicit form

X_p = ∂/∂q,   X_q = −∂/∂p,

and we have

ad_f = fp X_p + fq X_q.

With the convention that operators bind stronger than the Lie product, the Jacobi identity
can be written in the form

ad_f (g∠h) = ad_f g∠h + g∠ad_f h.

The Jacobi identity is thus equivalent to saying that the operator ad_f defines for every f a
derivation of the Lie algebra.
11.2.2 Definition.
(i) A derivation of a vector space A with a bilinear product ∘ is a linear map δ : A → A
satisfying the product rule (or Leibniz identity)

δ(f ∘ g) = δf ∘ g + f ∘ δg

for all f, g ∈ A. We denote by Der A the set of all derivations of A. (In the cases of interest,
A is an associative algebra with the associative product as ∘, or a Lie algebra with the Lie
product as ∘.)
(ii) If E is an associative algebra, a (left) E-module is an additive abelian group V
together with a multiplication mapping which assigns to f ∈ E and x ∈ V a product
f x ∈ V such that

f(x + y) = f x + f y,   (f + g)x = f x + gx,   f(gx) = (f g)x

for all f, g ∈ E and x, y ∈ V.

11.2.3 Proposition. Der E is a Lie subalgebra of Lin E, with the commutator δ∠δ′ := δδ′ − δ′δ as Lie product.
Proof. Since Der E is a linear vector space, and since the antisymmetry property and the
Jacobi identity are already satisfied in Lin E, we only need to check that the Lie product of
two derivations is again a derivation. We have:

(δ∠δ′)(f ∘ g) = δ(δ′(f ∘ g)) − δ′(δ(f ∘ g))
= δ(δ′f ∘ g + f ∘ δ′g) − δ′(δf ∘ g + f ∘ δg)
= δδ′f ∘ g + δ′f ∘ δg + δf ∘ δ′g + f ∘ δδ′g − δ′δf ∘ g − δf ∘ δ′g − δ′f ∘ δg − f ∘ δ′δg
= (δ∠δ′)f ∘ g + f ∘ (δ∠δ′)g.
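A small computational illustration (a sketch assuming sympy; the particular derivations chosen are ours) of the statement just proved: the commutator of the derivations d/dx and x·d/dx of C∞(R) again satisfies the Leibniz rule.

# The commutator of two derivations is again a derivation.
import sympy as sp

x = sp.symbols('x')
delta = lambda f: sp.diff(f, x)        # the derivation d/dx
eps   = lambda f: x*sp.diff(f, x)      # the derivation x d/dx

def bracket(f):
    # (delta∠eps)(f) = delta(eps(f)) - eps(delta(f))
    return delta(eps(f)) - eps(delta(f))

f = x**3 + 2*x
g = sp.sin(x)
lhs = bracket(f*g)                     # (delta∠eps)(fg)
rhs = bracket(f)*g + f*bracket(g)      # Leibniz rule for the commutator
print(sp.simplify(lhs - rhs))          # prints 0

(In this example delta∠eps equals d/dx itself, as a short computation shows.)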
11.3 Linear groups and their Lie algebras
In quantum mechanics, linear operators play a central role; they appear in two essentially
different ways: Operators describing time evolution and canonical transformations are linear
operators U on a Hilbert space that are unitary in the sense that U*U = UU* = 1, and
hence bounded¹. The unitary operators form a group, which in many cases of interest is a
so-called Lie group.
On the other hand, many important quantities in quantum mechanics are described in
terms of unbounded linear operators that are defined not on the whole Hilbert space but
only on a dense subspace. Usually, the linear operators of interest have a common dense
domain H on which they are defined and which they map into itself. H inherits from the
Hilbert space the Hermitian inner product, hence is a complex Euclidean space, and
the Hilbert space can be reconstructed from H as the completion H̄ of H by equivalence
classes of Cauchy sequences, in the way familiar from the construction of the real numbers
from rationals. We therefore consider the algebra Lin H of continuous linear operators on
a Euclidean space H, with composition as associative product.
¹ The bounded operators on a Hilbert space form a so-called C*-algebra; see for example Rickart [232],
Baggett [17], or Werner [285]. But we do not use this fact.
In this section, we define the basic concepts relevant for a study of groups and Lie algebras
inside algebras of operators. Since for these concepts neither the operator structure nor the
coefficient field matters in most cases (as long as the characteristic is not two), we provide
a slightly more general framework. In the next section, we apply the general framework to
the algebra C^{n×n} = Lin C^n of complex n × n-matrices, considered in the standard way as
linear operators on the space Cn of column vectors with n complex entries. Many of the
Lie groups and Lie algebras arising in the applications are naturally defined as subgroups
or subspaces of this algebra.
An (associative) algebra over a field K is a vector space E over K with a bilinear,
associative multiplication. For example, every ∗-algebra is an associative algebra over C.
As traditional, the product of an associative algebra (and in particular that of Lin H and
K^{n×n}) is written by juxtaposition. An associative algebra E is called commutative if
fg = gf for all f, g ∈ E, and noncommutative otherwise. In many cases we assume that
such an algebra has a unit element 1 with respect to multiplication; after the identification
of the multiples of 1 with the elements of K, this is equivalent to assuming that K ⊆ E.
If E and E′ are associative algebras over K with 1, then a K-linear map φ : E → E′ is an
algebra homomorphism if φ(fg) = φ(f)φ(g) and φ(1) = 1. Often we omit the reference to
the ground field K and assume a ground field has been chosen.
We now show that every associative algebra has many Lie products, and thus can be made in
many ways into a Lie algebra. For commutative algebras, the construction is uninteresting
since it only leads to abelian Lie algebras.
11.3.1 Theorem. Let E be an associative algebra. Then, for every J ∈ E, the binary
operation ∠_J defined on E by

f∠_J g := fJg − gJf

is a Lie product. In particular (J = 1), the binary operation ∠ defined on E by

f∠g := [f, g],

where

[f, g] := fg − gf

denotes the commutator of f and g, is a Lie product.
Proof. We compute the associator (11.4) for the bilinear operation f ∘ g := fJg:

[f, g, h] = (f ∘ g)Jh − fJ(g ∘ h) = fJgJh − fJgJh = 0,

by associativity. Hence the associator of ∘ satisfies the condition of Proposition 11.1.2, and
we conclude that ∠_J is a Lie product.

Note that Jf∠Jg = J(f∠_J g). Hence the corresponding Lie algebras are isomorphic when
J is invertible.
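Theorem 11.3.1 lends itself to a quick numerical sanity check; the sketch below (assuming numpy; random matrices stand in for general algebra elements) verifies the Jacobi identity of ∠_J up to rounding errors:

# Numerical check that f∠_J g := fJg - gJf satisfies the Jacobi identity.
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = rng.standard_normal((n, n))        # an arbitrary fixed element J

def lie(f, g):
    return f @ J @ g - g @ J @ f       # f∠_J g

f, g, h = (rng.standard_normal((n, n)) for _ in range(3))
jacobi = lie(f, lie(g, h)) + lie(g, lie(h, f)) + lie(h, lie(f, g))
print(np.allclose(jacobi, 0))          # True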
If E and E′ are two associative algebras with unity, we may turn them into Lie algebras by
putting f∠g := [f, g] in both E and E′. We denote by L and L′ the Lie algebras associated to
E and E′, respectively. If φ is an algebra homomorphism from E to E′, then φ induces a Lie
algebra homomorphism between the Lie algebras L and L′. Indeed, φ(f∠g) = φ(fg − gf) =
φ(f)φ(g) − φ(g)φ(f) = φ(f)∠φ(g).

Theorem 11.3.1 applies in particular to E = K^{n×n}. The Lie algebra K^{n×n} with Lie product
f∠g := [f, g] is called the general linear algebra gl(n, K) over K. If K = C, we simply
write gl(n) = gl(n, C); similar abbreviations apply without notice for the names of other
Lie algebras introduced later.
11.3.2 Definition.
(i) A Hausdorff ∗-algebra is a ∗-algebra E with a Hausdorff topology in which addition,
multiplication, and conjugation are continuous. An element f ∈ E is called complete if
the initial-value problem

d/dt U(t) = f U(t),   U(0) = 1   (11.7)

has a unique solution U : R → E. Then the mapping U is called a one-parameter group
with infinitesimal generator f, and we write e^{tf} := U(t); this notation is unambiguous
since it is easily checked that e^{t(sf)} = e^{(ts)f} for s, t ∈ R. An element f ∈ E is called
self-adjoint if f* = f and the product if with the imaginary unit is complete. We call an
element g ∈ E exponential if it is of the form g = e^f for some complete f ∈ E. We call a
Hausdorff ∗-algebra E an exponential algebra if the set of exponential elements in E is
a neighborhood of 1.
(ii) A linear group is a set G of invertible elements of some associative algebra E such
that 1 ∈ G and

g, g′ ∈ G  ⟹  g^{−1}, gg′ ∈ G.

If E is given with a topology in which its operations are continuous, we consider G as a
topological group with the topology induced by calling a subset of G open or closed if it is
the intersection of an open or closed set of E with G.

(iii) A linear Lie group is a closed subgroup of the group E^× of all invertible elements
of an exponential algebra E. A Lie group is a group G̃ with a Hausdorff topology that
is isomorphic to some linear Lie group G, i.e., for which there is a continuous, invertible
mapping φ : G̃ → G such that φ and φ^{−1} are continuous and φ(1) = 1, φ(gg′) = φ(g)φ(g′)
for all g, g′ ∈ G̃.
For all exponential algebras E, the group E^× is a linear Lie group. Note that the exponential
and the logarithm are given by the familiar series

e^f = Σ_{k=0}^∞ f^k / k!,   log g = − Σ_{k=1}^∞ (1 − g)^k / k   for ‖1 − g‖ < 1.
11.4 Classical Lie groups and their Lie algebras
For general fields, there are no exponentials, and one needs to replace the differential geometric structure
inherent in Lie groups by an algebraic geometry structure, and may then interpret general matrix groups
as so-called groups of Lie type. In particular, for finite fields, one gets the Chevalley groups, which
figure prominently in the classification of finite simple groups.
Every subspace of a Lie algebra closed under the Lie product is again a Lie algebra. This
simple recipe provides a large number of useful Lie algebras defined as Lie subalgebras of
some gl(n, K). Conversely, the (nontrivial) theorem of Ado, not proven here but see,
e.g., Jacobson [136], states that every finite-dimensional Lie algebra is isomorphic to a Lie
subalgebra of some gl(n, R).

The group GL(n, K) is one of the most important finite-dimensional linear groups, and all
finite-dimensional linear groups are isomorphic to subgroups of GL(n, K) for some n. If
K = R or K = C, then every closed subgroup G of GL(n, K) is a Lie group. These Lie
groups have associated Lie algebras L = log G of infinitesimal generators. For any Lie
subgroup G of GL(n, K), one gets the Lie algebra by looking at the vector space of those
elements X of gl(n, K) such that e^{εX} is in G for ε small enough. This criterion is very useful,
since we can take ε so small that we only have to look at the terms linear in ε, so that we
don't have to expand the exponential series completely. If the subgroup G ⊆ GL(n, K) is
connected and either compact or nilpotent, then the exponential map can be shown to be
surjective; see, e.g., Knapp [154].
The Lie algebra sl(n, K) is the Lie subalgebra of gl(n, K) given by the traceless matrices.
Its dimension is n² − 1, and we have

sl(n, K) ≅ gl(n, K)/K.

The quotient is well defined and is a Lie algebra because K is the center and thus in
particular an ideal.
If L is a Lie algebra over R, then by taking the tensor product with C and extending the
Lie bracket in a C-bilinear way, one obtains the complexification of L, denoted L_C. The
process of complexification is also called extension of scalars. In particular, if we write
L_C = C ⊗_R L, then in L_C the Lie bracket is given by (λ ⊗ x)∠(μ ⊗ y) = λμ ⊗ (x∠y). The
reverse process is called realification or restriction of scalars; we clarify the process of
restriction of scalars by an example.
11.4.2 Example. Consider L = sl(2, C). We wish to calculate sl(2, C)_R. A basis of
sl(2, C) is given by the elements

\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix},   \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},   \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

This basis is as well a basis for sl(2, R); therefore we see that

sl(2, C)_R ≅ sl(2, R) ⊕ i sl(2, R)

as real vector spaces. The Lie product of f + ig and f′ + ig′ for f, f′ ∈ sl(2, R) and
ig, ig′ ∈ i sl(2, R) is given by

(f + ig)∠(f′ + ig′) = f∠f′ − g∠g′ + i(f∠g′ + g∠f′).

The reader who has already some experience with Lie algebras is encouraged to verify the
isomorphism sl(2, C)_R ≅ so(3, 1).
11.4.3 Example. Suppose we have a symmetric bilinear form B on K^n. The Lie algebra
so(n, B; K) is the subspace of all f ∈ sl(n, K) satisfying

B(f v, w) = −B(v, f w).   (11.8)

We leave it to the reader to show that if f and g satisfy (11.8), then so does fg − gf; thus
we have indeed a Lie algebra. In the special case where B(v, w) = v^T w, the Lie algebra
so(n, B; K) is called the orthogonal Lie algebra so(n, K). In matrix language,
so(n, K) is the Lie algebra of antisymmetric matrices with entries in K and has dimension
n(n − 1)/2.
An orthogonal matrix is a matrix Q satisfying

Q^T Q = 1.   (11.9)

The orthogonal n × n-matrices with coefficients in a field K form a subgroup of the group
GL(n, K), the orthogonal group O(n, K). Since (11.9) implies that (det Q)² = 1,
orthogonal matrices have determinant ±1. The orthogonal matrices of determinant one
form a subgroup of O(n, K), the special orthogonal group SO(n, K). The corresponding
Lie algebra is so(n, K) = log O(n, K) = log SO(n, K), the Lie algebra of antisymmetric
n × n-matrices. In particular, the group SO(3) = SO(3, R) consists of the rotations in
3-space and was discussed in some detail in Section 3.2.
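Numerically, the correspondence between antisymmetric matrices and rotations is easy to observe; the following sketch (assuming numpy and scipy) exponentiates a random antisymmetric matrix and confirms that the result lies in SO(3):

# e^X is orthogonal with determinant 1 when X is antisymmetric.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
X = A - A.T                            # antisymmetric: X^T = -X

Q = expm(X)
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: Q^T Q = 1
print(np.isclose(np.linalg.det(Q), 1.0))   # True: det Q = 1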
For a nondegenerate B (i.e., one where B(v, w) = 0 for all v implies w = 0) and K = C
(or any algebraically closed field), we can always choose a basis in which the bilinear form
is represented as the identity matrix. Therefore all so(n, B; K) with nondegenerate B are
isomorphic to so(n, K).

Over K = R, symmetric bilinear forms are classified by their signature, i.e., the triple
(p, q, r) consisting of the number p of positive, q of negative, and r of zero eigenvalues of
the symmetric matrix A representing the bilinear form B: B(v, w) = v^T A w. The form
B is nondegenerate if and only if r = 0. Bilinear forms with the same signature lead to
isomorphic Lie algebras. In particular, so(p, q) denotes a Lie algebra so(p + q, B; R) where
B is a nondegenerate symmetric bilinear form on R^n of signature (p, q, 0). The basis can
always be chosen such that the representing matrix A is

I_{p,q} = \begin{pmatrix} 1_p & 0 \\ 0 & -1_q \end{pmatrix},

where 1_p and 1_q are the p × p and q × q identity matrices, respectively. In this basis, the Lie
algebra so(p, q) is the subalgebra of gl(n, R) consisting of elements f satisfying

f^T I_{p,q} + I_{p,q} f = 0.

Note that if f ∈ so(p, q) then

0 = tr((f^T I_{p,q} + I_{p,q} f) I_{p,q}) = 2 tr(f I_{p,q}²) = 2 tr f,

so that the elements of so(p, q) are automatically traceless.
11.4.4 Example. Let V be a vector space over a field K. Suppose V is equipped with a
symmetric or antisymmetric nondegenerate bilinear form B. There is a symmetry group
associated to the bilinear form, consisting of the linear transformations Q : V → V such
that

B(Qv, Qw) = B(v, w)

for all v, w in V. If B is symmetric, one calls the group of these linear transformations
an orthogonal group and denotes it by O(B, K). The associated Lie algebra is o(B, K).
Indeed, e^{tf} transforms x, y into

B(e^{tf} x, e^{tf} y) = B((1 + tf)x, (1 + tf)y) + O(t²)
= B(x, y) + t(B(f x, y) + B(x, f y)) + O(t²),

so that invariance to first order in t amounts precisely to B(f x, y) + B(x, f y) = 0.
11.4.5 Example. When K = R, one has for symmetric bilinear forms another subdivision,
since B can have a definite signature (p, q), where p + q is the dimension of V. If B is of
signature (p, q), this means that there exists a basis of V in which B can be represented as

B(v, w) = v^T A w,   where A = diag(1, ..., 1, −1, ..., −1)

with p entries +1 and q entries −1. The group of all linear transformations that leave B
invariant is denoted by O(p, q). The subgroup of O(p, q) of transformations with determinant
one is the so-called special orthogonal group and is denoted by SO(p, q). The associated
real Lie algebra is denoted so(p, q), and its elements are the linear transformations A : V → V
such that for all v, w ∈ V we have B(Av, w) + B(v, Aw) = 0. The Lie product is given by
the commutator of matrices.
The group of all translations in V generates together with SO(p, q) the group of inhomogeneous special orthogonal transformations, which is denoted by ISO(p, q). One can
obtain ISO(p, q) from SO(p, q + 1) by performing a contraction; that is, by rescaling some
generators with a parameter λ and then choosing a singular limit λ → 0 or λ → ∞.
The group ISO(p, q) can also be seen as the group of (p + q + 1) × (p + q + 1)-matrices of
the form

\begin{pmatrix} Q & b \\ 0 & 1 \end{pmatrix}   with Q ∈ SO(p, q), b ∈ V.

The Lie algebra of ISO(p, q) is denoted by iso(p, q) and can be described as the Lie algebra
of (p + q + 1) × (p + q + 1)-matrices of the form

\begin{pmatrix} A & b \\ 0 & 0 \end{pmatrix}   with A ∈ so(p, q), b ∈ V.

Again, the Lie product in iso(p, q) is the commutator of matrices.
We define the symplectic Lie algebra sp(2n, K) as the Lie subalgebra of gl(2n, K) given
by the elements f satisfying

f^T J + J f = 0,   (11.10)

where J is the 2n × 2n-matrix given by

J = \begin{pmatrix} 0 & 1_n \\ -1_n & 0 \end{pmatrix}.

We leave it to the reader to verify that if f and g satisfy (11.10), then so does fg − gf.
Another useful exercise is to prove sl(2, K) ≅ sp(2, K). (Caution: In the literature there
are different notational conventions concerning the symplectic Lie algebras. For example,
some people write sp(n, K) for what we and many others call sp(2n, K).)
If B is antisymmetric in Example 11.4.4, the group is called a symplectic group, and
one writes Sp(B, K). The associated Lie algebra is sp(B, K). If V is of finite dimension m,
one writes Sp(B, K) = Sp(m, K). Note that m is necessarily even.

Other real Lie algebras that play a major role in many areas of physics are the unitary Lie
algebras and the special unitary Lie algebras, so called because they are the generating
algebras of the groups of (special) unitary matrices, a term that will be explained in Section
17.7. The unitary Lie algebra u(n) consists of all antihermitian complex n × n matrices.
The special unitary Lie algebra consists of the antihermitian n × n complex traceless
matrices and is denoted by su(n). It is clear that su(n) ⊆ u(n). It might seem weird to call
a Lie algebra real if it consists of complex-valued matrices. However, as a vector space
the antihermitian complex n × n matrices form a real vector space. If f is an antihermitian
matrix, then if is Hermitian. The dimension (as a real vector space) of su(n) is n² − 1, and
the dimension of u(n) is n². It is a good exercise to check that so(3) ≅ su(2), since these
two Lie algebras will return very often. A hint: so(3) consists of antisymmetric real 3 × 3
matrices, so there are only three linearly independent ones. Choosing an obvious basis for
both su(2) and so(3) will do the job.
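For the exercise so(3) ≅ su(2), one can let the computer compare structure constants; the following sketch (assuming numpy) uses the basis of antisymmetric matrices for so(3) and the basis e_k = −iσ_k/2 built from the Pauli matrices for su(2):

# so(3) and su(2) have the same structure constants in suitable bases.
import numpy as np

L1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)
L2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
L3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
E1, E2, E3 = (-0.5j*s for s in sigma)     # e_k = -i sigma_k / 2

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(L1, L2), L3))      # [L1, L2] = L3
print(np.allclose(comm(E1, E2), E3))      # [e1, e2] = e3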
11.4.6 Example. A complex matrix U is unitary if it satisfies

U U* = 1,

where (U*)_{ij} = \overline{U_{ji}}. Since the inverse of a matrix is unique, it follows that also U*U = 1.
By splitting all the matrix entries into a real and imaginary part, U_{ij} = A_{ij} + i B_{ij}, we see
that the set of n × n unitary matrices makes up a submanifold of R^{2n²} of dimension n².
The linear group of unitary n × n matrices is denoted by U(n).

To identify the Lie algebra of U(n), write a unitary matrix close to 1 in exponential form,

U = e^A = Σ_{k=0}^∞ (1/k!) A^k.

Then multiply A with a parameter t, take t → 0 and keep only the linear terms: U =
1 + tA + O(t²). Then since U has to be unitary, we obtain

1 = (1 + tA + O(t²))(1 + tA* + O(t²)) = 1 + t(A + A*) + O(t²),

implying that A has to be antihermitian. Thus the Lie algebra of infinitesimal generators
of U(n) is u(n).
The subgroup of U(n) of all elements with determinant 1 is denoted by SU(n) and is called
the special unitary group. The dimension of SU(n) is n² − 1. For the determinant we
get

det(1 + tA + O(t²)) = 1 + t tr A + O(t²),

and thus the trace of the infinitesimal generators of SU(n) has to vanish, and we see that
the corresponding Lie algebra is su(n). Note that the Lie algebra u(n) contains all real
multiples of i·1, which commute with all other elements. Hence u(n) has a nontrivial center,
whereas su(n) does not.
In the case n = 2, it is a nice exercise to show that each special unitary matrix U can be
written as

U = \begin{pmatrix} x & y \\ -\bar y & \bar x \end{pmatrix},   x, y ∈ C,  |x|² + |y|² = 1.
Physicists prefer to work with Lie algebras defined by Hermitian matrices, corresponding
to Lie ∗-algebras. In the applications, distinguished real generators typically represent
important real-valued observables. Therefore physicists tend to replace the matrix A by iA for a
Hermitian matrix A. This is one of the reasons why the structure constants for real Lie algebras
appear in the physics literature with an i, as alluded to at the end of Section 11.2.
11.5 Heisenberg algebras and Heisenberg groups
In more abstract terms, central extensions are conveniently described by short exact sequences. Let A_i
be a set of Lie algebras and suppose that there are maps d_i : A_i → A_{i+1},

... → A_{i−1} →^{d_{i−1}} A_i →^{d_i} A_{i+1} → ...   (11.11)

We call the sequence exact if Ker d_i = Im d_{i−1} for all i for which d_{i−1} and d_i exist. As an exercise, the
reader is invited to verify the following assertion: The sequence 0 → A →^φ B → 0 is exact if and only if
A ≅ B and the isomorphism is the map φ from A to B. A short exact sequence is a sequence of maps of the
form

0 → A → B → C → 0.

A central extension of L is then a Lie algebra L̃ such that there is an exact sequence 0 → Z → L̃ →
L → 0 with Z abelian.
Conversely, given such a form ω on an arbitrary vector space V not containing 1, this formula
turns L := K ⊕ V into a Heisenberg algebra. If ω is nondegenerate on V, it defines a
symplectic form on V.
The Heisenberg algebra h(n) is the special case where K = C, V = C^{2n}, and ω is nondegenerate. Thus h(n) is a central extension of the abelian Lie algebra C^{2n} and has dimension
2n + 1. We can find a basis of V consisting of vectors p_k and q_l for 1 ≤ k, l ≤ n such
that ω(p_k, p_l) = ω(q_k, q_l) = 0 for all k, l and ω(p_k, q_l) = δ_{kl}; that is, ω is then the standard
symplectic form on K^{2n}, represented by the matrix

\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.

Thus Heisenberg algebras encode symplectic vector spaces in a Lie algebra setting. Everything done here extends
with appropriate definitions to general symplectic manifolds, and, indeed, much of classical
mechanics can be phrased in terms of symplectic geometry, the geometry of such manifolds;
we refer the reader to the exposition by Arnold [15] on classical mechanics and symplectic
geometry.
11.5.1 Example. Let us write t(n, K) for the Lie subalgebra of gl(n, K) consisting of the
strictly upper-triangular matrices, which have zeros on the diagonal. The Lie algebra t(3, K)
of strictly upper triangular 3 × 3-matrices is a Heisenberg algebra with

1 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},

since

\begin{pmatrix} 0 & α & γ \\ 0 & 0 & β \\ 0 & 0 & 0 \end{pmatrix} ∠ \begin{pmatrix} 0 & α′ & γ′ \\ 0 & 0 & β′ \\ 0 & 0 & 0 \end{pmatrix} = (αβ′ − α′β) \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
The Lie algebra t(3, C) is called the Heisenberg algebra; thus if one talks about the
(rather than a) Heisenberg algebra, this Lie algebra is meant, and it is denoted by h(1).
Introducing names for the special matrices

p := \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},   q := \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},

we find that p, q and 1 form a basis of t(3, C), and we can express the Lie product in the
more compact form

(αp + βq + γ)∠(α′p + β′q + γ′) = αβ′ − α′β.   (11.12)

Defining

(αp + βq + γ)* := ᾱp + β̄q + γ̄

turns the Heisenberg algebra into a Lie ∗-algebra in which p and q are Hermitian. Note
that ∗ here is not the conjugate transposition of matrices!
Equation (11.12) implies that p and q satisfy the so-called canonical commutation relations

p∠q = 1,   p∠p = q∠q = 0.   (11.13)

Since f∠1 = 0 when 1 is Lie central, (11.13) completely specifies the Lie product. The
canonical commutation relations are frequently found in textbooks on quantum mechanics,
but we see that they just characterize the Heisenberg algebra.
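In the defining matrix realization, the canonical commutation relations are a one-line computation; here is a sketch (assuming numpy) with the matrices p, q and the central element from above:

# The 3x3 matrices realize p∠q = 1, with 1 the central matrix E_13.
import numpy as np

p   = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]], float)
q   = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]], float)
one = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]], float)

comm = lambda a, b: a @ b - b @ a
print(np.allclose(comm(p, q), one))                 # True: p∠q = 1
print(np.allclose(comm(p, one), 0),
      np.allclose(comm(q, one), 0))                 # True True: 1 is central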
The notation q and p is chosen to remind of position and momentum. Indeed, the canonical
commutation relations arise naturally in classical mechanics. In the Lie algebra C∞(R × R)
constructed in Theorem 11.1.3, we consider the set of affine functions, that is, those that
are of the form f(p, q) = f₁ p + f₂ q + f₃, with f₁, f₂, f₃ ∈ C. In particular, the constant
functions are included with f₁ = f₂ = 0, and we identify them with the constants f₃ ∈ C.
Given another affine function g(p, q) = g₁ p + g₂ q + g₃, we find

f∠g = f₁ g₂ − f₂ g₁ ∈ C.

Since f∠g is just a complex number times the function that is 1 everywhere, it is a central
element, that is, it Lie commutes with all other algebra elements. Thus the affine functions
form a Heisenberg subalgebra of C∞(R × R), and p and q satisfy the canonical commutation
relations.
Suppose that a commutative Poisson algebra E contains two elements p and q satisfying
the canonical commutation relations (11.13). Then E contains a copy of the Heisenberg
algebra. The algebra of polynomials in p and q is then a Poisson subalgebra of E in which
(11.6) is valid; this follows from Proposition 12.1.5. Thus the canonical commutation
relations capture the essence of the commutative Poisson algebra C∞(R × R). But getting
the bigger algebra requires taking limits which need not exist in E, since with polynomials
alone, one does not get all functions.
11.5.2 Example. An upper triangular n × n-matrix is called unit upper triangular if
its elements on the diagonal are 1, and strictly upper triangular if its elements on the
diagonal are zero. It is straightforward to check that the unit upper triangular n × n-matrices
form a subgroup T(n, K) of the group GL(n, K), and the strictly upper triangular
n × n-matrices form a Lie subalgebra of gl(n, K), which we denote by t(n, K). We have
t(n, K) = log T(n, K). In the following we shall look more closely at the case n = 3, which
is especially important.

The Heisenberg group is the group

T(3, C) = \left\{ \begin{pmatrix} 1 & a & c \\ 0 & 1 & b \\ 0 & 0 & 1 \end{pmatrix} \;\middle|\; a, b, c ∈ C \right\}   (11.14)

of unit upper triangular matrices in C^{3×3}; its corresponding Lie algebra is the Heisenberg
algebra t(3, C). Since the Heisenberg group is defined in terms of matrices, it comes
immediately with a representation, the defining representation. Note that the defining
representation is not unitary.
The relation between the Heisenberg algebra and the Heisenberg group is particularly simple,
since the exponential map has a simple form. Indeed, if A ∈ C^{n×n} then

e^A = Σ_{k=0}^∞ A^k / k!,   (11.15)

where A⁰ = 1 is the identity matrix, and the series (11.15) is absolutely convergent. A
note on the infinite-dimensional case: For linear operators A on a Hilbert space H, the
series converges absolutely only when A is bounded (and hence everywhere defined); for
unbounded but self-adjoint A (which are only densely defined), convergence holds in a
weaker sense, giving

e^A ψ = Σ_{k=0}^∞ A^k ψ / k!   (11.16)

for a dense set of vectors ψ ∈ H that are analytic for A.
For the Heisenberg algebra, the exponential series terminates: a strictly upper triangular
3 × 3-matrix

A = \begin{pmatrix} 0 & α & γ \\ 0 & 0 & β \\ 0 & 0 & 0 \end{pmatrix}

satisfies A³ = 0, so that

e^A = 1 + A + ½A² = \begin{pmatrix} 1 & α & γ + ½αβ \\ 0 & 1 & β \\ 0 & 0 & 1 \end{pmatrix}.

The map A ↦ e^A is clearly bijective. The inverse map is given by the logarithm, which is
for matrices defined by

log(1 + X) = Σ_{k=1}^∞ ((−1)^{k−1} / k) X^k.   (11.17)

For unit upper triangular X, the matrix X − 1 is strictly upper triangular, so the series
terminates:

log X = (X − 1) − ½(X − 1)² = −(3/2)·1 + 2X − ½X².
We are thus in the situation that both T(3, C) = exp t(3, C) and t(3, C) = log T(3, C).
This is not special to the Heisenberg group, but neither does it hold in general. There is a
class of groups for which this holds: for example, the exponential map is surjective for all
connected Lie groups that are compact or nilpotent (see below); see, e.g., Helgason [124]
or Knapp [154]. The Heisenberg group is a noncompact but nilpotent Lie group.
Let us briefly recall what it means for a group to be nilpotent. Given any group G, we
can form the commutator subgroup G^{(1)}, which is generated by all elements of the form
aba^{−1}b^{−1} for a, b ∈ G. We can also consider the commutator subgroup of G^{(1)} and denote
it by G^{(2)}. Repeating this procedure, we get a sequence of groups

G ⊇ G^{(1)} ⊇ G^{(2)} ⊇ ...

A group is nilpotent if the procedure ends in a finite number of steps with the trivial group:
G^{(n)} = 1. It is easy to see that the Heisenberg group is two-step nilpotent, since G^{(2)} = 1.
Since the exponential map is bijective for the Heisenberg group, there exists a binary
operation ⊙ on t(3, C), where A ⊙ B is the element with

e^A e^B = e^{A ⊙ B}.   (11.18)

It is not difficult to give an explicit formula for A ⊙ B. Since A and B are strictly upper
triangular, we have A^p B^q = 0 for p + q ≥ 3. We thus have

e^A e^B = (1 + A + ½A²)(1 + B + ½B²) = 1 + A + B + ½(A² + B² + 2AB).

Applying (11.17), we find

A ⊙ B = log(1 + A + B + ½(A² + B² + 2AB)) = A + B + ½(AB − BA),

hence

A ⊙ B = A + B + ½ A∠B.   (11.19)

Thus we get from (11.18) the formula e^A e^B = e^{A + B + ½A∠B}. Since A∠B is central, it behaves
just like a complex number, and we find the Weyl relations

e^{A+B} = e^{−½ A∠B} e^A e^B.   (11.20)
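The Weyl relations can be confirmed numerically for the Heisenberg algebra; the sketch below (assuming numpy and scipy) draws random strictly upper triangular matrices and compares both sides of (11.20):

# Checking e^{A+B} = e^{-A∠B/2} e^A e^B for strictly upper triangular A, B.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
strict_upper = lambda: np.triu(rng.standard_normal((3, 3)), k=1)

A, B = strict_upper(), strict_upper()
C = A @ B - B @ A                       # A∠B, a multiple of E_13, hence central

lhs = expm(A + B)
rhs = expm(-0.5*C) @ expm(A) @ expm(B)
print(np.allclose(lhs, rhs))            # True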
In fact, this result is also a direct consequence of the famous (but much less elementary)
Baker–Campbell–Hausdorff (BCH) formula, which gives for general matrix Lie groups
a series expansion of A ⊙ B when A and B are not too large. Even more generally, the
Baker–Campbell–Hausdorff formula applies to abstract finite-dimensional Lie groups⁴ that
are not necessarily matrix groups and says that for two fixed Lie algebra elements A and
B and for small enough real numbers s and t, there is a function C from R × R to the Lie
algebra such that we have

e^{sA} e^{tB} = e^{C(s,t)}.

The function C(s, t) is given by a (for small s, t absolutely convergent) infinite power series,
the first terms of which are given by

C(s, t) = sA + tB + (st/2) A∠B + (st/12) (sA∠(A∠B) − tB∠(A∠B)) + ... .
In fact, this series expansion may be derived from a closed form integral expression.
The Baker–Campbell–Hausdorff formula is of great importance in both pure and applied
mathematics. It gives (where it applies, in particular in finite dimensions) the relation of a
Lie group with the associated Lie algebra. For example, it says that the product of e^A and
e^B for some A and B in the Lie algebra is again an element of the form e^C with C in the Lie
algebra. Hence the exponentials of the Lie algebra elements generate a subgroup of the
corresponding Lie group.

⁴ In infinite dimensions, additional assumptions are needed for the BCH formula to hold.
For infinite-dimensional Lie algebras and groups, one has to use a refined argument centering
around the Hille–Yosida theorem. Let U(t) denote a one-parameter group of linear
operators on a Hilbert space H such that t ↦ U(t) is strongly continuous, which means
that t ↦ U(t)ψ is continuous for all ψ ∈ H. Then we can differentiate U(t) to obtain the
strong limit

Aψ = lim_{t→0} (U(t)ψ − U(0)ψ) / t.

The object A is called the infinitesimal generator of the one-parameter group U(t). It turns
out that A is a closed linear operator that is defined on a dense subspace of H. The Hille–Yosida theorem gives a necessary and sufficient condition for a closed linear operator A to
be the infinitesimal generator of some strongly continuous one-parameter semigroup

U(t) = e^{tA},

since in general one might not get a group. The Hille–Yosida theorem is very useful for
analyzing the solvability of linear differential equations

d/dt ψ(t) = Aψ(t),   ψ(0) = ψ₀,

examples of which are the Schrödinger equation and the heat equation. If the conditions of
the Hille–Yosida theorem hold for A, the solution to this initial value problem takes the
form

ψ(t) = e^{tA} ψ(0).

For the (hyperbolic, conservative) Schrödinger equation, A = −(i/ħ)H with a self-adjoint
Hamiltonian H, the solution exists for all t, and the U(t) form a one-parameter group. For
the (parabolic, dissipative) heat equation, A = kΔ is a positive multiple of the Laplacian
Δ = ∂x² + ∂y² + ∂z², the solution exists only for t ≥ 0, and we only get a semigroup.
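The group/semigroup dichotomy is visible even in a crude finite-difference model; in the sketch below (assuming numpy and scipy; the discretization is ours), the matrix exponential of a discretized Laplacian is contractive for t ≥ 0 but grows explosively for t < 0, mirroring the fact that the continuum heat equation is solvable only forward in time:

# A discretized Laplacian generates (numerically) only a well-behaved semigroup.
import numpy as np
from scipy.linalg import expm

N = 50
A = (np.diag(-2.0*np.ones(N)) + np.diag(np.ones(N-1), 1)
     + np.diag(np.ones(N-1), -1))       # 1D finite-difference Laplacian; A <= 0

for t in (0.5, 5.0, -5.0):
    print(t, np.linalg.norm(expm(t*A), 2))  # norm <= 1 for t >= 0, huge for t < 0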
11.6 Lie ∗-algebras
Many Lie algebras of interest in physics have an additional structure: an adjoint mapping
compatible with the Lie product.
11.6.1 Definition. A Lie ∗-algebra is a Lie algebra L over C with a distinguished element
1 ≠ 0 called one and a mapping ∗ that assigns to every f ∈ L an adjoint f* ∈ L such that

f∠1 = 0,   (f + g)* = f* + g*,   (f∠g)* = f*∠g*,
f** = f,   1* = 1,   (λf)* = λ̄ f*

for all f, g ∈ L and λ ∈ C. We identify the multiples of 1 with the corresponding complex
numbers.
The reason why we include the 1 into the definition of a Lie ∗-algebra is that many physically
relevant Lie algebras are equipped with a distinguished central element⁵. But the presence
of 1 is not a restriction, since one can always adjoin a central element 1 to a Lie algebra L
without nonzero central element and form the direct sum L′ = L ⊕ K.
An important Lie ∗-algebra for nonrelativistic quantum mechanics is the algebra E = Lin H
of linear operators of a Euclidean space H (usually a dense subspace of a Hilbert space H̄).
The relevant Lie product is defined by Theorem 11.3.1 with the choice

J := (i/ħ) 1_H ∈ Lin H,

where 1_H is the identity operator on H, and the conjugate of f ∈ E is given by the adjoint
of f, defined as the linear mapping f* satisfying ⟨f*φ, ψ⟩ = ⟨φ, fψ⟩ for all φ, ψ ∈ H. Dropping
the index J in the Lie product of Theorem 11.3.1, we get the quantum Lie product

f∠g = (i/ħ)(fg − gf) = (i/ħ)[f, g]   (11.21)

of f, g ∈ Lin H, already familiar from (1.3). Note that the axioms require the purely
imaginary factor in this formula, whereas the value of Planck's constant ħ is arbitrary from
a purely mathematical point of view. In quantum field theory, a different choice of J is
sometimes more appropriate.
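The role of the imaginary factor is easily illustrated; with it, the quantum Lie product of two Hermitian matrices is again Hermitian, so that the observables form a real Lie algebra. A sketch (assuming numpy, and setting ħ = 1):

# With hbar = 1, f∠g = i(fg - gf) maps Hermitian pairs to Hermitian matrices.
import numpy as np

rng = np.random.default_rng(4)
def hermitian(n=3):
    A = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return A + A.conj().T

f, g = hermitian(), hermitian()
fg = 1j*(f @ g - g @ f)                 # the quantum Lie product (11.21)
print(np.allclose(fg, fg.conj().T))     # True: f∠g is Hermitian again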
For any Lie ∗-algebra, the set

Re L := {f ∈ L | f* = f}

is a Lie algebra over R. When describing symmetries, physicists often work with Lie algebras
over the reals; the present Lie ∗-algebras are then the complexifications of these real
algebras, with a central element 1 adjoined if necessary.
The complexification of a real Lie algebra L is the Lie ∗-algebra CL defined as follows.
In case that a complex scalar multiplication is already defined on L, one first replaces L by
an isomorphic Lie algebra in which if ∉ L whenever f ∈ L is nonzero. Then one defines

CL := L ⊕ iL,

extending the scalar multiplication in a natural way to the complex field. That is, any element
f ∈ CL is of the form

f = f₁ + if₂

with f₁, f₂ ∈ L; one defines the conjugation by

(f₁ + if₂)* := f₁ − if₂,

and extends the Lie product complex-bilinearly. The axioms for a Lie ∗-algebra are easily
established if 1 ∈ L. Note that the real dimension of L equals the complex dimension of CL.
It is easy to check that

Re CL ≅ L.

Conversely, for a Lie ∗-algebra L,

C Re L ≅ L.

⁵ Many such Lie algebras are realized most naturally as central extensions of semisimple Lie algebras,
corresponding to projective representations of semisimple Lie algebras. By including the 1 automatically,
we work directly in the central extension, and avoid the cohomological technicalities associated with the
formal discussion of central extensions and projective representations.
If a complex Lie algebra L′ is isomorphic to CL as a Lie algebra, one says that L is a real
form of the complex Lie algebra L′.

We leave it as an exercise to verify C su(n) ≅ sl(n, C) and C so(p, q) ≅ so(p + q, C). In
general, a complex Lie algebra has more than one real form, as we can see since for p ≠ q, n − q
the Lie algebras so(p, n − p) and so(q, n − q) are not isomorphic.
An involutive Lie algebra (Neeb [199]) is a Lie algebra L̃ with Lie product [·,·] and with
an involutive, antilinear anti-automorphism ∗, i.e., a mapping ∗ : L̃ → L̃ satisfying

(f*)* = f,   [f, g]* = [g*, f*].

The map x ↦ iħx, with ħ a positive real constant (in physical applications Planck's constant),
is an isomorphism of Lie algebras relating this setting to that of Lie ∗-algebras; it induces
the corresponding conjugation and real part Re L.
11.6.2 Remarks.
(i) The nomenclature of Lie ∗-algebras is a bit tricky. If L is a Lie ∗-algebra, we therefore
denote it (usually) with the name of the real Lie algebra Re L. To avoid confusion, it is
important to keep track of whether we are discussing real Lie algebras, complex Lie algebras,
or Lie ∗-algebras.

(ii) In the physics literature, one often sees the defining relations (11.1) for real Lie algebras
written in terms of complex structure constants,

X_j ∠ X_k = Σ_l i c_{jkl} X_l,

where i = √−1 and the c_{jkl} are real. That is, the Lie product takes values outside of the
real Lie algebra! What is done by the physicists is that, as in the above definition of a Lie
∗-algebra from an involutive Lie algebra, they multiply all elements in the Lie algebra by i.
This seemingly strange convention has mainly historical reasons.
One is that in some real algebras the elements are antihermitian matrices. By multiplying
with i one obtains Hermitian matrices, and in quantum mechanics, observable quantities
are represented as Hermitian operators.
Chapter 12
Mechanics in Poisson algebras
This chapter brings more physics into play by introducing Poisson algebras, i.e., associative
algebras with a compatible Lie algebra structure. These are the algebras in which it is
possible to define Hamiltonian mechanics. Poisson algebras abstract the algebraic features
of both Poisson brackets and commutators, and hence serve as a unifying tool relating
classical and quantum mechanics. In particular, we discuss classical Poisson algebras for
oscillating and rotating systems.
12.1
Poisson algebras
Many algebras that we will encounter have both an associative product and a Lie product,
which are compatible in a certain sense. Such algebras are Poisson algebras, our definition
of which is the noncommutative version discussed, e.g., in Farkas & G. Letzter [86]. (In
contrast, in classical mechanics on Poisson manifolds, one usually assumes Poisson algebras
to be always commutative.)
12.1.1 Definition. A Poisson algebra E is a Lie algebra with an associative and distributive
multiplication which associates with f, g ∈ E its product fg, and an identity
1 with respect to multiplication, such that the compatibility condition

f∠(gh) = (f∠g)h + g(f∠h)   (12.1)

(the Leibniz condition) holds for all f, g, h ∈ E.

In a Poisson algebra, we write [f, g] := fg − gf for the commutator with respect to the
associative product. If [f, g] = 0, we say that f and g commute. If
f∠g = 0, we say that f and g Lie commute. An element which commutes (Lie commutes)
with every element in E is called central (Lie central).
12.1.3 Example. We take C∞(R × R), where the associative product is given by ordinary
multiplication of functions, and where the Lie product is given by f∠g = fp gq − fq gp. To
see that the Leibniz condition is satisfied, we write

f∠(gh) = fp (gh)q − fq (gh)p
= fp gq h + fp g hq − fq gp h − fq g hp
= (f∠g)h + g(f∠h).

Thus C∞(R × R) is a commutative Poisson algebra.
12.1.4 Example. For a Euclidean space H, we consider the space Lin H of continuous linear
operators on H. The Lie product is given by

f∠g = (i/ħ)[f, g] = (i/ħ)(fg − gf).

We have

f∠(gh) = (i/ħ)(f gh − ghf)
= (i/ħ)(f gh − gf h + gf h − ghf)
= (i/ħ)([f, g]h + g[f, h])
= (f∠g)h + g(f∠h),

so Lin H is a noncommutative Poisson algebra.
from which it follows that f∠1 = 0. Let us therefore suppose that the proposition is true
for all k with 0 ≤ k ≤ n; then for k = n + 1 we have

f∠(g^{n+1}) = (f∠g^n)g + g^n(f∠g) = n g^{n−1}(f∠g)g + g^n(f∠g) = (n + 1) g^n(f∠g).

The quantum Lie product of Example 12.1.4 is compatible with conjugation: for A, B ∈ Lin H,

(A∠B)* = (−i/ħ)((AB)* − (BA)*) = (−i/ħ)(B*A* − A*B*)
= (i/ħ)(A*B* − B*A*) = (i/ħ)[A*, B*] = A*∠B*.
12.2 The spinning top
The spinning top is the classical model of a spinning particle. Like a football, the top can
be slightly deformed but when the external force is released it jumps quickly back to its
equilibrium state. Molecular versions of a football are the fullerenes, the most football-like
fullerene being a molecule with 60 carbon atoms arranged in precisely the same manner as
the vertices that can be seen in the corners between the patches on the surface of an official
football. In a reasonable approximation, the deformability can be neglected; the spinning
top, and also the fullerene soccer ball, is most often treated as a rigid body.
The spinning top is treated in most undergraduate courses in mechanics; hence there is a
rich literature on the topic.1 Due to the abundance of classical treatments of the spinning
top we pursue here a nonstandard approach based on Poisson algebras, which shows how
it is a special prototypical case of a uniform algebraic approach to mechanical systems.
A rigid body can be moving as a whole, that is, its center of mass can have a nonzero velocity,
but changing to comoving coordinates via a time-dependent translation, one may assume
that the center of mass is not moving. The coordinate system in which the center of mass
of the rigid body is fixed is in physics literature called the center of mass coordinate
system. Without loss of generality we then assume the center of mass is at the origin
(0, 0, 0).
Having fixed the center of mass, the rigid body can still rotate, but after rotating the
coordinate system to the body-fixed one, no freedom is left. This means that the pose of
a rigid body with fixed center of mass is completely described by a rotation Q(t) ∈ SO(3).
Thus Q(t) satisfies Q(t)Q(t)^T = Q(t)^T Q(t) = 1 and det Q(t) = 1. Differentiating, we get

Q̇(t)Q(t)^T + Q(t)Q̇(t)^T = 0,

so that the matrix Ω(t) := Q̇(t)Q(t)^{−1} = Q̇(t)Q(t)^T satisfies

Ω(t)^T = −Ω(t),

that is, Ω is antisymmetric. We can therefore parameterize Ω as

Ω = \begin{pmatrix} 0 & -ω₃ & ω₂ \\ ω₃ & 0 & -ω₁ \\ -ω₂ & ω₁ & 0 \end{pmatrix}.
¹ Good accounts of the standard approach can be found, e.g., in Arnold [15], Marion & Thornton
[183], or Goldstein [106].
For a freely rotating body, the Hamiltonian only depends on the kinetic energy and is
quadratic in the angular velocity,

H = H(ω) = ½ ω^T I ω,

and we can always take I symmetric, I = I^T. The 3 × 3-matrix I, called the tensor
of moments of inertia, or just inertia tensor, has the meaning of an angular mass
matrix, analogous to the mass matrix M given in Chapter 5 for the case of an oscillating
particle, where the kinetic energy was given by H = ½ v^T M v. The reason why it is called
a tensor and not a matrix is because I is in fact a bilinear form.² Under a coordinate
change, I does transform as a bilinear form and not as a matrix. Indeed, under the change
of coordinates ω ↦ Q̃ω for some Q̃ ∈ SO(3), the Hamiltonian is invariant and thus I
transforms as I ↦ Q̃^{−T} I Q̃^{−1}, that is, by a congruence transformation. In contrast, a matrix
A transforms as A ↦ Q̃ A Q̃^{−1}, which is a similarity transformation. By a coordinate change,
I can be made diagonal, so that we may assume that

H = ½ Σ_{k=1}^3 I_k ω_k².

The coefficients I_k are called the principal moments of inertia. To have a Hamiltonian
that is bounded from below, we require I_k ≥ 0. In practice one has I_k > 0 for all k = 1, 2, 3;
then I is invertible.

² The same holds for the mass matrix, but there the terminology has become traditional.
In analogy to the linear momentum p = Mv = ∂H/∂v for an oscillating particle with kinetic
energy H = ½ v^T M v, we define the angular momentum J by

J := ∂H/∂ω = Iω.   (12.2)
Conversely, in analogy to the velocity v = ∂H/∂p = M^{−1} p, the angular velocity is recovered
from the angular momentum as

ω = ∂H/∂J = I^{−1} J.   (12.3)

12.3 Rotations and angular momentum
In Section 3.4, we used the Jk as generators of the rotations; they are basis elements of the
Lie algebra L = so(3). The Jk correspond to the angular momenta of a spinning particle (see
Section 12.2). Thus there is a more physical interpretation; the Jk correspond to measurable
quantities, the components of the angular momentum. We denote the observable that
corresponds to J_k with the same symbol J_k. Purely classically, the state of a rigid rotating
body in its rest frame is defined by specifying a numerical value for J = (J₁, J₂, J₃)^T, called
the angular momentum of the rigid body.
The dynamics of a rigid body is determined by the equation J̇ = J × ω, where ω = I^{−1} J is
the angular velocity of the rigid body and I is the constant inertia tensor.
Thus the state at a given time determines uniquely its value at any time, and therefore the
value of every classical observable f(J), i.e., every function of the angular momentum, such
as the angular velocity ω or the total angular momentum J². In analogy with the case of
a single particle, we therefore consider the manifold R³ of possible states J to be the phase
space of the rotating rigid body.
To study the observables, i.e., functions of J, we begin with polynomials. We write Pol L
for the polynomial algebra generated by 1 and the J_i, and give this algebra the structure
of a Poisson algebra. The recipe obtained will then be further generalized to cover arbitrary
C∞ functions of J.
Motivated by the so(3) structure, we define a product ∠ recursively, starting with the
commutation relations of so(3) with 1 adjoined,

1∠J_k = 0,   J_k∠J_l = ε_{klm} J_m,

or, in vector form, (a·J)∠(b·J) = (a × b)·J for constant vectors a, b.
Having given the ∠ product on the generators of Pol L, the product is completely determined
by the Leibniz rule

(a·J)∠(f(J) b·J) = ((a·J)∠f(J)) b·J + f(J) ((a·J)∠(b·J))

for f ∈ Pol L.
12.3.1 Lemma. We have the identity

f(J)∠(b·J) = (b × J) · ∂f(J)/∂J

for all polynomials f and constant vectors b.
Proof. We proceed by induction on the degree, using vector notation. For f constant, both
sides vanish; for f(J) = a·J, both sides equal (a × b)·J. Now suppose the statement is
true for all polynomials of degree ≤ n. A homogeneous polynomial of degree n + 1 can be
written as a linear combination of terms a·J f(J) with f of degree n. On the one hand, by
the Leibniz rule,

(a·J f(J))∠(b·J) = (a × b)·J f(J) + a·J ((b × J)·∂f(J)/∂J);

on the other hand,

(b × J)·∂/∂J (a·J f(J)) = ((b × J)·a) f(J) + a·J ((b × J)·∂f(J)/∂J)
= (a × b)·J f(J) + a·J ((b × J)·∂f(J)/∂J),

since (b × J)·a = (a × b)·J. Thus the two sides agree.
12.3.2 Lemma. For all polynomials f, g ∈ Pol L,

f(J)∠g(J) = J · (∂f(J)/∂J × ∂g(J)/∂J).   (12.4)
Proof. We again proceed by induction, this time on the degree of g. For degree 1 of g,
the previous lemma gives the result. Now suppose the result holds for polynomials up to
degree n ≥ 1, and consider a polynomial of degree n + 1; write it as a sum of terms g(J)h(J)
where g and h both have degree at most n. Then for each such term we have

f(J)∠(g(J)h(J)) = (f(J)∠g(J)) h(J) + g(J) (f(J)∠h(J))
= J·(∂f/∂J × ∂g/∂J) h(J) + g(J) J·(∂f/∂J × ∂h/∂J)
= J·(∂f/∂J × ∂(gh)/∂J),

using the product rule for partial derivatives in the last step.
Note that although (12.4) was derived only for polynomials, its right hand side makes sense
for arbitrary C∞ functions of J. Thus we take it as the definition of a Lie product on C∞(R³):

12.3.3 Proposition. The algebra E = C∞(R³) and its subalgebra Pol L are Poisson
algebras. That is, the product (12.4) is a Lie product satisfying the Leibniz identity.
Proof. The antisymmetry of the product is obvious on the generators; for the other cases
we use Lemma 12.3.1 and Lemma 12.3.2 together with the observation that u·(J × w) =
−w·(J × u). The Leibniz identity is a direct consequence of the product rule for partial
derivatives. The Jacobi identity is a bit tedious to check. Using the notation f_k = ∂f/∂J_k
and the Levi-Cività symbol, one writes the outer product for vectors as (u × v)_k =
Σ_{lm} ε_{klm} u_l v_m. Then we find for the Lie product

f∠g = Σ_{klm} ε_{klm} J_l f_m g_k,

where the summations are over all present indices. When summing f∠(g∠h) over the cyclic
permutations of f, g and h, the terms containing only first derivatives are easily seen to give
zero. The remaining terms, carrying one second derivative each, are of the form

Σ ε_{klm} ε_{abc} J_m J_c (f_{ak} g_b h_l + f_a g_{bk} h_l + g_a h_{bk} f_l + g_{ak} h_b f_l + h_a f_{bk} g_l + h_{ak} f_b g_l).

Focusing on the pair of terms with two derivatives on f, a relabeling of the summation
indices gives

Σ ε_{klm} ε_{abc} J_m J_c (f_{ak} g_b h_l + h_a f_{bk} g_l) = Σ ε_{klm} ε_{abc} J_m J_c (f_{ak} g_b h_l − f_{ka} g_b h_l)
= 0,

by the symmetry of second partial derivatives; the pairs involving g and h vanish in the
same way.
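Since the proof is tedious by hand, a symbolic spot-check is reassuring; the sketch below (assuming sympy) evaluates the Jacobi sum for (12.4) on sample polynomials:

# Symbolic check of the Jacobi identity for f∠g = J . (grad f x grad g).
import sympy as sp

J1, J2, J3 = sp.symbols('J1 J2 J3')
Jvec = sp.Matrix([J1, J2, J3])

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in (J1, J2, J3)])
lie  = lambda f, g: Jvec.dot(grad(f).cross(grad(g)))   # the product (12.4)

f = J1**2*J3 + J2
g = J2*J3
h = J1 + J2**2

jacobi = lie(f, lie(g, h)) + lie(g, lie(h, f)) + lie(h, lie(f, g))
print(sp.simplify(jacobi))                             # prints 0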
12.4 The Euler equations
Many books on classical mechanics, see for example Marion and Thornton [183],
Arnold [15] or Goldstein [106], present the standard approach to the dynamics of a
spinning rigid body, resulting in the Euler equations. We take an alternative route, exploiting
the Lie algebra structure corresponding to the rotation group. We write down
the Lie product that determines the mechanics, then derive the Euler equations and
reproduce the same equations of motion. Thus we are giving an equivalent description.

The motivation for the form of the Lie product is determined by symmetry considerations.
We have seen that the algebra of infinitesimal rotations, which must be involved in the
differential equations describing the state of the spinning object, is so(3), the Lie algebra
of real, antisymmetric 3 × 3-matrices. In Section 12.5, we shall see that we can obtain a
Lie–Poisson algebra out of any Lie algebra; in particular, we construct the Lie–Poisson
algebra of so(3) in Example 12.5.3. Since the dynamical observables of a physical system
form a Poisson algebra, we work in the algebra C∞(R³) with the Lie product (12.4),

f∠g = J · (∂f/∂J × ∂g/∂J)

for f, g ∈ C∞(R³).
Now that we have the Poisson algebra and the Hamiltonian (12.2) for the classical mechanics
of the spinning top, we can apply the usual recipe. For an observable f, the time evolution
is given by

ḟ = H∠f.

In particular, for the angular momentum we have from (12.3)

J̇_k = H∠J_k = J · (ω × e_k) = (J × ω)_k,

where e_k is the unit vector in the direction k, and where we use ∂H/∂J = I^{−1}J = ω. We
thus have

J̇ = J × ω.
Written out in components, using J = Iω with diagonal I, this reads

I₁ ω̇₁ = ω₂ ω₃ (I₂ − I₃),
I₂ ω̇₂ = ω₃ ω₁ (I₃ − I₁),   (12.5)
I₃ ω̇₃ = ω₁ ω₂ (I₁ − I₂).

The equations (12.5) are the Euler equations for the spinning rigid body. The spinning
direction is given by the vector n := ω/|ω| and the spinning speed is given by |ω|. Thus
knowing the trajectory of ω(t) in the phase space R³ at all times implies knowing everything
about the direction and speed of the spinning motion.
We claim that J² = J · J is a Casimir of the Lie algebra so(3). Indeed, from (3.43) we have
J₁∠J₂ = J₃, and the other commutation relations can be obtained by cyclic permutation.
But then

J₁∠J² = J₁∠J₂² + J₁∠J₃² = J₃J₂ + J₂J₃ − J₂J₃ − J₃J₂ = 0,

and for the other generators the results are similar. Since J² is a Casimir of the Lie algebra,
it is conserved by the dynamics. Indeed, calculating the time derivative of J² we find

(J²)˙ = 2 J · J̇ = 2 J · (J × ω) = 0.

Hence the motion preserves surfaces of constant J², which are spheres. The radius of the
sphere is determined by the initial conditions.
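Both the Euler equations and the conservation of J² can be observed in a short numerical integration; the sketch below (assuming numpy and scipy; the moments of inertia and initial data are arbitrary choices) integrates J̇ = J × ω and monitors J²:

# Integrating Jdot = J x omega, omega = I^{-1} J, and checking that J^2 is constant.
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])            # principal moments of inertia (arbitrary)

def rhs(t, J):
    omega = J / I                         # omega = I^{-1} J for diagonal I
    return np.cross(J, omega)

J0 = np.array([1.0, 0.5, -0.3])
sol = solve_ivp(rhs, (0.0, 20.0), J0, rtol=1e-10, atol=1e-12)

J2 = np.sum(sol.y**2, axis=0)             # J^2 along the trajectory
print(J2.max() - J2.min())                # ~ 0: the motion stays on a sphere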
Note that the angular momentum phase space R³ cannot be symplectic, since it is not
even-dimensional. However, since we have a Poisson algebra, it is a Poisson manifold, as
described in Section 18.1.
In the present case, the symplectic leaves (co-adjoint orbits) are the surfaces where the
Casimir J² has a constant value; hence they are the spheres on which the motion takes
place.
12.5 Lie–Poisson algebras
In the above section we started from the Lie algebra structure of so(3) to construct an
associated Poisson algebra. This program can be repeated for arbitrary real Lie algebras.
The formulation closest to the physical applications is in terms of a Lie ∗-algebra L. It
applies to arbitrary real Lie algebras such as so(3) by taking their complexification and
adding, if necessary, a central element 1, thus extending the dimension of the Lie algebra
by one. As usual, we write C for the complex linear subspace spanned by the element 1. In
case that L is infinite-dimensional, we assume L to be equipped with a topology in which
all operations are continuous and that L is reflexive (see below); in finite dimensions this
is automatic.
We consider the dual space L* of continuous linear maps from L to C, and the bidual space
L** of continuous linear maps from the dual space L* to C. For finite-dimensional vector
spaces we have canonically L** = L; for infinite-dimensional vector spaces in general only
L ⊆ L**. In both cases we have an injective map L → L**, f ↦ f̂, given by

f̂(ξ) := ξ(f)   for f ∈ L, ξ ∈ L*;

L is called reflexive if this map is a bijection L ≅ L**.
Here M₁ denotes the set of all ξ ∈ L* compatible with conjugation (i.e., with ξ(f*) = \overline{ξ(f)}
for all f ∈ L) that satisfy ξ(1) = 1, and M₀ the corresponding set with ξ(1) = 0 instead.
One should note that M₀ is a real linear subspace in L*. The affine hyperplane M₁ carries
the structure of a real submanifold, with the tangent space at each point being isomorphic
to M₀.
If L is the complexification of a real Lie algebra L′, so that we have L = L′ ⊗_R C, then
the elements of M₀ are the linear functionals on L′ that are zero on the element 1, and
are extended to linear forms on L by linearity: ξ(a + ib) = ξ(a) + iξ(b) for a, b ∈ L′. So
we can identify M₀ in this case with the dual of the quotient Lie algebra L′/R, where R
denotes the real subspace spanned by the distinguished central element 1. Therefore the
dual of M₀ is again L′/R. In the general case, M₀ is a real subspace in (L/C)*, so that
M₀^C := M₀ + iM₀ satisfies (M₀^C)* ≅ L/C.
We consider, for a non-empty open subset M of M₁, the commutative algebra E = C∞(M).
We define for every f ∈ E and ξ ∈ M a linear map df(ξ) : M₀ → C by

df(ξ)v := lim_{t→0} (f(ξ + tv) − f(ξ)) / t   for all v ∈ M₀.

So we have df(ξ) ∈ Lin(M₀, C). Extending by C-linearity, we can view df(ξ) as an element
of Lin(M₀^C, C). Hence df(ξ) defines an element in (M₀^C)* ≅ L/C. We can find an element
Df(ξ) in L such that under the projection L → L/C the element Df(ξ) goes to df(ξ). The
choice of Df(ξ) is not unique, but another choice D′f(ξ) differs from Df(ξ) by an element
in C, which is contained in the center.
We now show how the object Df(ξ) can be chosen. We choose an arbitrary element
η ∈ L* with η(1) = 1. Then we can write L* as a direct sum L* = M₀^C ⊕ W (as a
complex vector space), where W = Cη := {λη | λ ∈ C} is the 1-dimensional span of η.
Indeed, for an arbitrary element ξ of L*, the element ξ′ := ξ − ξ(1)η satisfies ξ′(1) = 0.
Now ξ′ can be written as a linear combination u + iv of two elements u, v ∈ M₀. Thus
ξ = u + iv + ξ(1)η ∈ M₀^C ⊕ W. For any fixed choice of η, we define Df(ξ) by

Df(ξ)(u) := df(ξ)(u − u(1)η).   (12.6)

Note that u − u(1)η ∈ M₀^C. The extended Df(ξ) lies thus in L**. But L was assumed to
be reflexive, hence we have Df(ξ) ∈ L.
We are now in a position to define a Lie product ∠ on E by

(f∠g)(ξ) := (Df(ξ)∠Dg(ξ))(ξ)   for all ξ ∈ M ⊆ M₁,

where the Lie product on the right-hand side is that of L. The left-hand side above is the
complex number obtained by evaluating the function h := f∠g at the argument ξ ∈ M ⊆
M₁ ⊆ L*. The right-hand side is the complex number obtained from the bilinear pairing
between Df(ξ)∠Dg(ξ) ∈ L and the same ξ. Since the derivative of a smooth function is
again smooth, f∠g is again an element of E.

We see that the Lie product f∠g is independent of the choice of Df(ξ) and Dg(ξ), or
equivalently, of the choice of η in (12.6). Indeed, any other choice would differ only by
an element in the center. But taking the Lie product in L, the dependence on the central
element drops out.

We have the following theorem:
12.5.1 Theorem. The algebra E with the Lie product ∠ defined above is a Poisson algebra,
called the Lie–Poisson algebra over L. The restriction of the Lie product of E to affine
functions coincides with the Lie product of L.

Proof. (Sketch) The definition of ∠ is independent of η. The antisymmetry of ∠ is
clear, and the Jacobi identity follows from that of L, using the fact that partial derivatives
commute. The Leibniz identity follows from the Leibniz property of differentiation. The
injection L → L** gives a map from the Lie algebra to the affine functions. We therefore
regard the Lie algebra as a subalgebra of the affine functions. Since we assumed the Lie
algebra L to be reflexive, the affine functions represent elements of the Lie algebra. Indeed,
for an affine function f, subtracting the constant f(0) yields a linear function, which thus
defines an element of L. But f(0) is a multiple of 1 and thus also an element of the Lie
algebra; therefore f ∈ L.
12.5.2 Example. We work out the construction for the Heisenberg algebra L = h(1) with
basis p, q, 1. A functional ξ ∈ M₁ is determined by the two real numbers x := ξ(p) and
y := ξ(q), so that we may identify

M₁ = { (x, y, 1)^T | x, y ∈ R }.

For f ∈ E = C∞(M₁) and a tangent vector v = (a, b, 0)^T ∈ M₀, we get

df(ξ)v = (∂f(x, y)/∂x) a + (∂f(x, y)/∂y) b,

so that we may choose

Df(ξ) = (∂f(x, y)/∂x) p + (∂f(x, y)/∂y) q ∈ h(1).

Since p∠q = 1 and ξ(1) = 1, the Lie product becomes

(f∠g)(ξ) = (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x),

which precisely corresponds to the Lie product associated to the dynamics of a single particle
in one dimension.
More generally, an arbitrary Heisenberg algebra leads to general symplectic Poisson algebras
on convenient vector spaces.
12.5.3 Example. We now show that for the choice so(3) we recover the Lie product (12.4).
We identify the real Lie algebra so(3) with R³ equipped with the vector product. We adjoin
a central element to obtain so(3) ⊕ R1 and call L the complexification of so(3) ⊕ R1. We write
an element of L as (v, a), where v ∈ C³ and a ∈ C, so that the Lie product is given by

(v, a)∠(w, b) = (v × w, 0).

Of course, v × w is defined by extending the vector product on R³ by C-bilinearity. We
identify L* with C⁴ as follows:

ξ = (x, y, z, t)^T :   ξ(v, a) = x v₁ + y v₂ + z v₃ + t a,   v = (v₁, v₂, v₃)^T.

Thus we find that M₁ consists of the vectors ξ = (x, y, z, 1)^T with x, y and z real numbers. A
smooth function on M₁ is just a smooth function R³ → R. For any smooth f : R³ → R
and ξ = (x, y, z, 1)^T we define

∇f(ξ) = (∂f(ξ)/∂x, ∂f(ξ)/∂y, ∂f(ξ)/∂z)^T,

where we identify the vector ξ = (x, y, z, 1)^T in M₁ with the vector (x, y, z)^T in R³. We see
that we can choose Df(ξ) = (∇f(ξ), 0), and the Lie product on E is then given by

(f∠g)(ξ) = ξ · (∇f(ξ) × ∇g(ξ)),

which is precisely (12.4); (x, y, z)^T corresponds to (J₁, J₂, J₃)^T.
The attentive reader might have noticed that in Example 12.5.3, the central element 1
played no role at all. As mentioned before, when a Lie algebra has no distinguished
central element, one can always add one. However, in this case one can also proceed directly
as follows. For a real Lie algebra L, we consider the dual L* and the algebra E of real-valued
smooth functions on L*. Let f ∈ E and ξ ∈ L*. The 1-form df(ξ) is an element of the dual
of the tangent space at ξ. Since L* is a vector space and L is assumed to be reflexive,
the dual of the tangent space at ξ is again L. Hence df(ξ) defines an element of the Lie
algebra, which we also denote by df(ξ). Then we define the Lie product on E for f, g ∈ E
by

(f∠g)(ξ) := ξ(df(ξ)∠dg(ξ));

that is, to get (f∠g)(ξ), the functional ξ is evaluated at the Lie algebra element df(ξ)∠dg(ξ).
We leave it as an exercise to show that this gives the same result for real Lie algebras that
do not have a distinguished central element.
It turns out that the majority of commutative Poisson algebras relevant in physics are
Lie–Poisson algebras constructible from a suitable Lie algebra, or natural quotients of such
algebras. In particular, this holds for the Poisson algebras of classical symplectic geometry
in R^N, which come from general Heisenberg algebras, and for all but one of the Poisson
algebras for nonequilibrium thermodynamics constructed in Beris and Edwards [33].
12.6 Classical states

In the commutative Poisson algebra E = C∞(M) of a classical system with phase space M,
the expectation mapping f ↦ ⟨f⟩ := f(z) given by evaluation at a point z ∈ M is
called a classical pure state. Note that evaluation at a point z ∈ M is more than a
linear functional; an evaluation gives an algebra homomorphism C∞(M; R) → R, since
(fg)(z) = f(z)g(z); hence we have a character of the commutative algebra E. If the phase
space M is an open subset of R^n, the evaluations are the only characters of E. This can
coordinates on M and denote by ai = (xi ) the images of the coordinate functions. The
homomorphism thus determines a point z = (a1 , . . . , an ) in Rn . We have to show z M.
Suppose z
/ M, then
X
(xj aj )2
f (x1 , . . . , xn ) =
j
is a function that does not vanish on M, and thus is an invertible element of C (M). If an
element x is invertible, then so is its image under any homomorphism. Indeed, if xy = 1,
then (xy) = (x)(y) = (1) = 1. But the function f is mapped to zero under and
hence cannot be invertible. Hence we arrive at a contradiction and the assumption z
/M
is false.
A mixed classical state is a weighted mixture of pure classical states. That is, there is a
real-valued function ρ on the phase space M, called the density, taking nonnegative values
and integrating to one,

∫_M ρ(z) dμ(z) = 1,

such that

⟨f⟩ = ∫_M ρ(z) f(z) dμ(z).   (12.7)
P
The more general form H = 21 N
i,j=1 Gij (q1 , . . . , qN )pi pj +V (q1 , . . . , qN ), where G is a configurationdependent inverse mass matrix, appears at various places in physics. When the potential V is constant
(so that we can put it to zero), the physical system is sometimes called a -model. Such models play an
important role in modern high-energy physics and cosmology. Some authors prefer to include a potential
into the definition of a -model.
3
288
where the potential V (q1 , . . . , qN ) describes the potential energy of the configuration with
positions (q1 , . . . , qN ).
The states in symplectic mechanics are precisely the states of the form (12.7).
If the
system is such that we can measure at one instant of time all positions and momenta
exactly (obviously an idealization), the configuration is precisely given by the point z =
(q1 , . . . , qN , p1 , . . . , pN ) in phase space, and hf i = f (z) for all f E. Thus the density
degenerates to a product of delta functions of each phase space coordinate. Thus classical
pure states are equivalent to points z in phase space, marking position and momentum of
each point of interest, such as the centers of mass of the stars, planets, and moons making
up a celestial system.
12.7
Molecular mechanics
Consider a molecule consisting of N atoms. The molecule is chemically described by assigning bonds between certain pairs of atoms, reflecting the presence of chemical forces that
in the absence of chemical reactions which may break bonds hold these atoms close
together. Thus a molecule may be thought of as a graph embedded in 3-dimensional space,
in which some but usually not all atoms are connected by a bond. The chemical structure
of the molecule is thus described by a connected graph, the formula of the molecule. (In
the following, we ignore multiple bonds, which are just a way to indicate stronger binding
than for single bonds, reflected in the interaction potential.) We write i j if there is
a bond between atom i and atom j and similarly we write i j k if there is a bond
connecting i and j and there is a bond connecting j and k. The notation is extended to
longer chains: i j k l . . ..
The interactions between the atoms in a molecule are primarily through the bonds, and to
a much smaller extent through forces described by a pair potential and through multibody
forces for joint influences of several adjacent bonds.
The geometry is captured mathematically by assigning to the jth atom a 3-dimensional
coordinate vector
qj1
qj = qj2
qj3
specifying the position of the atom in space. If two atoms with labels j and k are joined by
a chemical bond, we consider the corresponding bond vector qj qk , with bond length
kqj qk k. At room temperature, the bonds between adjacent atoms i and j are quite rigid,
meaning that the deviation from the average distance rij is generally small and the force
that tries to maintain the atoms at distance rij is strong. In chemistry this is modeled by
a term
X aij
Vbond (q1 , . . . , qN ) =
(kqi qj k rij )2
2
ij
in the Hamiltonian, where the aij are stiffness constants, parameters determined by the
particular chemical structure.
289
xi
p
xj
xk
q
x
Figure 12.1: Bond vectors, bond angles, and the dihedral angle
Consider two adjacent bonds i j and k l. The bond angle is the angle between the
bond vectors qj qi and ql qk . The bond angle can then be computed from the formulas
cos =
(qi qj ) (qk qj )
,
kqi qj kkqk qj k
sin =
k(qi qj ) (qk qj )k
,
kqi qj kkqk qj k
and is thus invariant under the simultaneous action of the group ISO(3) on all 3 vectors.
In most molecules the bond angles are determined from the interaction between the atoms
in the molecule. There is thus an ISO(3)-invariant term
X
aijk (qi , qj , qk )
Vangle (q1 , . . . , qN ) =
ijk
in the potential with : (R3 )3 R an ISO(3)-invariant function, and aijk are some
parameters.
Finally, the dihedral angle = <) (i j k l) (or the complementary torsion angle
2 ) measures the relative orientation of two adjacent angles in a chain i j k l
of atoms. It is defined as the angle between the normals through the planes determined by
the atoms i, j, k and j, k, l, respectively, and can be calculated from
cos =
sin =
and
Again, the angle between the planes is ISO(3)-invariant and therefore described by an
ISO(3)-invariant function : (R3 )4 R of the positions of the four atoms. Hence to
model the molecule there is a term
X
Vdihedral (q1 , . . . , qN ) =
aijkl (qi , qj , qk , ql )
ijkl
in the Hamiltonian, with again aijkl parameters. The total Hamiltonian is then taken to be
X p2
i
H=
+ Vbond (q1 , . . . , qN ) + Vangle (q1 , . . . , qN ) + Vdihedral (q1 , . . . , qN ) .
2mi
i
290
12.8
Quantum field theory is the area in physics where fields are treated by quantum mechanics.
The way physicists think of this is more or less as follows. As we have seen in Chapter 5,
classical linear field equations, such as the Maxwell equations, can be seen as describing
a family of harmonic oscillators labeled by a continuum of pairs (p, s) of momenta p and
spin or helicity s. Therefore, what has been treated above is nice, but for quantum field
theory it is not enough. One needs an infinite number of oscillators. Treating such a system
becomes mathematically sophisticated, because topological details start playing a dominant
role. A way to deal with this heuristically, often employed by physicists, is by discretizing
space-time in a box. On each point of the lattice one places a harmonic oscillator; then
there are just a finite number of oscillators. To get the quantum field theory, one considers
the limit in which the size of the box goes to infinity and the spacing of the lattice goes
to zero. Then the oscillators are not described by operators ak and a+
k that are labeled by
291
vectors k, but by operators a(x) and a+ (x) that are labeled by the continuous four-vector
index x. The limit might not exist. . . .
In two space-time dimensions the limit is well-defined for interacting field theories, that
is, for field theories where the different fields can interact. In case of four dimensions,
the correct limit is only known for non-interacting field theories. From experience we know
that there is interaction, of course, so our description shows serious shortcomings. After the
preceding description of representations, it is interesting to note that,- in the field theory
limits, the metaplectic representations still exist.
For 2-dimensional field theories with one space and one time dimension, this leads to satisfactory quantum field theories (such as conformal field theory) . But for 4-dimensional
field theories, the metaplectic representation is restricted to a class of operators not flexible
enough for capturing the physics. This is the main mathematical obstacle for formulating
a consistent framework for 4-dimensional quantum field theories.
292
Chapter 13
Representation and classification
13.1
Poisson representations
Consider the Heisenberg algebra h(n) with the usual generators 1, pi , and qi , and the
corresponding LiePoisson algebra E(h(n)). The subalgebra of all polynomials in qi , pj is
closed under the Lie product, and hence a Poisson subalgebra. More interestingly, there are
several Lie subalgebras of low degree polynomials, which we shall now explore. We write z
for the 2n-tuple
p
z=
q
of all the generators except 1. All linear polynomials without constant term can be written as a z for some a C2n . On C2n we introduce the antisymmetric bilinear form ,
represented in the given basis by the matrix J:
0 1
T
(a, b) = a Jb , J =
,
1 0
where the entries in J are n n-matrices, i.e., 1 = 1n , etc.. The bilinear form is
nondegenerate and antisymmetric, and we have
a zb z = (a, b) .
(13.1)
294
which is a quadratic expression. Hence the homogeneous quadratic polynomials form a Lie
subalgebra of E(h(n)). We show below that this Lie algebra is related to sp(2n, C). We
proceed in physicists fashion by looking at a conveniently chosen basis. In Section 21.4
we give a second derivation in a coordinate independent fashion, which generalizes to the
fermionic case and gives Lie algebras related to the real orthogonal groups.
The generators of h(n) are 1, pi and qi . Consider the elements
Qij = qi qj ,
Pij = pi pj ,
1
Eij = (qi pj + pj qi ) ,
2
of the universal enveloping algebra. We have Qij = Qji and Pij = Pji . We find the
commutation relations:
Qij Qkl
Eij Ekl
Eij Qkl
Eij Pkl
Qij Pkl
=
=
=
=
=
0 , Pij Pkl = 0 ,
il Ekj + jk Eil ,
jl Qik + jk Qil ,
il Pjk ik Pjl ,
ik Ejl jk Eil jl Eik il Ejk .
The Lie algebra sp(2n, C) is given by the complex 2n 2n-matrices that preserve the above
given J:
X sp(2n, C) X T J + JX = 0 .
Taking X in block form as
X=
A
C
B
D
=
=
=
=
=
0 , Cij Ckl = 0 ,
il Akj + jk Ail ,
jl Bik + jk Bil ,
il Cjk ik Cjl ,
ik Ejl + jk Eil + jl Eik + il Ejk .
Sending Qij to Bij , Pij to Cij and Eij to Aij we have an isomorphism between the algebras.
We now allow for inhomogeneous quadratic polynomials by adjoining the linear forms of
the algebra E(h(n)) to this Lie algebra. Everything commutes with the central element 1,
295
so we will not write down the commutation relations with 1. The commutation relations
of the other basis elements are found to be
Qij qk
Qij pk
Pij qk
Eij qk
Eij pk
=
=
=
=
=
Pij pk = 0 ,
ik qj jk qi ,
ik pj jk pi ,
ik qj ,
jk pi .
We define the Lie subalgebra L of E(h(n)) as the Lie subalgebra of quadratic expressions in
the generators and we define L = L /C, so that in L we have qi pj = 0. Using the previously
established isomorphism with sp(2n, C) it is not too hard to see that L is isomorphic to the
Lie algebra isp(2n, C), which is defined as the Lie algebra of all (2n + 1) (2n + 1)-matrices
of the form
A r
,
0 0
with A a 2n 2n-matrix in sp(2n, C) and r a 2n-vector. We have thus shown that L is a
central extension of isp(2n, C).
13.2
Linear representations
Of great interest in quantum mechanics are certain realizations of Lie algebras and of Lie
groups by means of operators on vector spaces. We therefore address the concept of a
representation of a Lie algebra. In the previous chapter we have already given a short
discussion of finite-dimensional representations of finite-dimensional Lie algebras.
13.2.1 Definition.
(i) A (linear) representation of a Lie algebra L in an associative algebra E is a linear
map J : L E such that
J(f g) = J(f )J(g) J(g)J(f ) for all f, g L.
The representation is called faithful if J is injective. A linear representation on a (finiteor infinite-dimensional) vector space H is a representation in the algebra E = Lin H. In the
case that E is the algebra of n n matrices with entries in K one obtains the definition of
Section 13.3. A linear representation is called irreducible when the only subspaces closed
under multiplication by linear mappings of the form J(f ) are 0 and H.
(ii) A unitary representation of a Lie -algebra L is a linear map J : L E in the
-algebra E = Lin H of continuous linear operators of a Euclidean space H (with being
the adjoint), satisfying
J(1) = 1 ,
J(f ) = J(f ) ,
i
J(f g) =
J(f )J(g) J(g)J(f ) .
h
296
Note that by Proposition 11.2.3, an associative algebra E becomes in a natural way a Lie
algebra by defining f g = [f, g] = f g f g. Hence a representation of a Lie algebra L in an
algebra E is a Lie algebra homomorphism from L to E, with E regarded as a Lie algebra.
If the representation is faithful, the image of L is a Lie subalgebra of E isomorphic to L.
In this case, one often identifies the elements of L with their images, and then speaks of
an embedding of L into E. By the Theorem of Ado mentioned in Section 11.4, every
finite-dimensional real Lie algebra has a faithful representation.
The enveloping algebra. In a representation, the elements of L are represented by
matrices or linear operators. From a given set of matrices we can form the algebra that
these matrices generate, containing the unit matrix, all finite products and their linear
combinations. This motivates us to consider an object that already encompasses this algebra
for all representations: the universal enveloping algebra of a Lie algebra L. In general it is
constructed by considering the tensor algebra T (L), which is given by
T (L) = K L (L L) (L L L) . . . =
Li .
i=0
One makes T (L) into an associative noncommutative algebra over the complex numbers by
defining the product ab to be the tensor product a b.
Within T (L) we consider the ideal J generated by all elements of the form
x y y x [x, y]
for all x, y in L. Thus an element in J is a sum of elements of the form
a (x y y x [x, y]) b ,
for some a, b T (L). The universal enveloping algebra of L is then defined as the
associative noncommutative algebra U(L) over the complex numbers given by
U(L) = T (L)/J .
Another view on the universal enveloping algebra U(L) would be as follows. One chooses
a basis {ti } for L and considers the associative noncommutative polynomial algebra in the
generators while imposing the relation
ti tj tj ti = [ti , tj ] .
Thus we consider the associative algebra generated by 1 and by the generators of L and
impose the Lie product, which in this case is the commutator, by hand. The algebra we
obtain in this way is canonically isomorphic to the universal enveloping algebra U(L).
The universal enveloping algebra thus contains the Lie algebra, i.e. envelopes the Lie
algebra. This approach is very practical and therefore often used by physicists. There
exists a more sophisticated definition, using a so-called universal property. One then proves
that such an object is unique and that the given definition above has this universal property.
297
We do not expand on the definition using the universal property but refer to the literature,
see, e.g., Jacobsen [136], Knapp [154], or Fuchs & Schweigert [95]. It is because of
this universal property that U(L) is usually called the universal enveloping algebra, and
not just the enveloping algebra.
The main reason to define the universal enveloping algebra is to study the representations of
the Lie algebra. Every representation of the Lie algebra induces a unique representation of
the associative universal enveloping algebra, and conversely, every representation of the universal enveloping algebra induces a representation of the Lie algebra itself. In a sense, all
finite-dimensional representations are maps of the associative universal enveloping algebra
to the associative algebra of n n-matrices for some n.
Casimir elements. An element C U(L) in the center of the universal enveloping algebra,
i.e. that commutes with all other elements of U(L), is called a Casimir element, or just
Casimir and sometimes also Casimir operator. If L has a representation in a vector space
V , then for any c K the subspace Vc = {v V |Cv = cv} is invariant under the action of
L, precisely because C is in the center of U(L). Hence if the representation V is irreducible,
Vc must be the whole of V for some c and the other Vc are zero. That means, C acts
diagonally in irreducible representations.
The classical analogue of the universal enveloping algebra is the LiePoisson algebra discussed in Chapter 12.5.
13.3
Finite-dimensional representations
We have already seen in Section 11.4 that the Lie algebra gl(n, K) has many interesting Lie
subalgebras. Given an arbitrary Lie algebra L it is interesting to see how we can represent
L as a Lie algebra of matrices. In this section we consider finite-dimensional Lie algebras
and finite-dimensional representations in more detail.
For any vector space V over K we denote gl(V ) the Lie algebra of linear maps from V to V
with the Lie product given by the commutator f g = f g gf . If V is identified with Kn
we write gl(V ) = gl(n, K) (see Section 11.4). A Lie algebra homomorphism : L gl(V )
is called a finite-dimensional representation of L; the vector space V is then called
an L-module. We call the representation complex if K = C and real if K = R. We
have already seen that su(n) has a complex representation, since it is defined as a (real)
subalgebra of gl(n, C).
Given a representation : L gl(V ) we call W an invariant subspace of V if (f )w W
for all f L and all w W . The representation is called irreducible if the only invariant
subspaces are 0 and V . We call the representation decomposable or fully reducible, if
for any invariant subspace W there is a complementary invariant subspace W such that
V = W W .
If 1 : L gl(V1 ) and 2 : L gl(V2) are representations of L we can form the direct sum
298
(13.2)
13.4
U(1) = 1.
299
It is easy to see that U(f 1 ) = U(f )1 , and in the unitary case, U(f 1 ) = U(f )1 = U(f ) .
Note that the invertible elements of E form a group and a Lie group representation of G in E
is a group homomorphism of G into this group. Again, if the representation is faithful, one
may identify group elements with their images under the representation, and then has an
embedding of G into the algebra E. Thus if E is the algebra of n n matrices with entries
in K we get a group homomorphism of G into GL(n, K). For K = C the representation is
unitary if the image of G lies inside U(n).
If a Lie algebra representation J : L E is an embedding, we can get something that is
close to a representation of the Lie group by exponentiation, i.e., by defining
f
J(f )
U(e ) := e
X
J(f )k
k=0
k!
provided this converges for all f L in the topology of E. In Subsection 13.4 we go deeper
into the question of how to get a Lie group representation from a Lie algebra representation and the problems one encounters. On the other hand, given a representation U of
a Lie group G with Lie algebra L we can get a representation J of the Lie algebra by
differentiation, i.e., by defining
d
tX
J(X) := U(e ) ,
t=0
dt
provided the derivative always exists. In finite dimensions, both constructions work generally; in infinite dimensions, suitable assumptions are needed to make the constructions
work.
The group G acts on the Lie algebra L. We will discuss this shortly for groups of matrices.
For every element g G GL(n, K) we define Ad(g) which is a linear transformation of
L given by
Ad(g) : X 7 gXg 1 .
It holds that Ad(g)X L, which we will not prove. The interested reader is referred to
Knapp [154], Helgason [124], Frankel [91], or Kirillov [151]. For all the examples
discussed so far, the reader can check it by hand. The map Ad : g Ad(g) clearly
satisfies Ad(gh) = Ad(g)Ad(h) and is thus a representation, which is called the adjoint
representation of the group G.
Universal covering group. For Lie algebra representations an important construct is the
universal enveloping algebra. For Lie groups there is an analogue. Above we mentioned
that by differentiating a representation of a Lie group, one obtains a representation for the
corresponding Lie algebra. By exponentiating a representation of the Lie algebra one gets
a representation for those group elements that can be written as exponents. If a group is
not connected, one does not obtain a representation of the group in this way.
Other problems arise when the group is not simply connected. For example SO(3) is not
simply connected and therefore certain representations of the Lie algebra cannot be lifted
300
to representations of the Lie group; the spin representations become multivalued. Even
other problems arise when two Lie groups that are fundamentally different have isomorphic
Lie algebras. Consider for example the group U(1) of complex numbers of absolute value
1. As a manifold U(1) is just the circle S 1 . The Lie group of U(1) is the one-dimensional
abelian Lie algebra (there is only one). Now consider the Lie group R where the group
operation is addition a b = a + b. Then R is a one-dimensional abelian Lie group with a
one-dimensional Lie algebra. The Lie algebras of U(1) and R are isomorphic, but the Lie
groups are totally different. When we want to lift a Lie algebra representation of either of
them to a Lie group representation, which group do we choose then?
These topological considerations lead one to the question whether there is a unique simply
connected Lie group for a given Lie algebra. The answer is positive: for every real finitedimensional Lie algebra L there is a unique simply connected Lie group G with Lie algebra
L. So given a Lie group H with Lie algebra L one can construct a unique simply connected
Lie group G with Lie algebra L. The group G is called the universal covering group of
H. Then H and G are locally isomorphic; there are small neighborhoods of the origin in
both groups on which H and G are diffeomorphic to each other. The Lie group H is then
a quotient of G; H
= G/D for some discrete normal subgroup D of G.
The exponential map L G is in general not surjective, however, the image of the exponential map generates an interesting subgroup of G, the connected component of G,
denoted G0 . If g G lies in the connected component, we can write g = ef1 efr for some
Lie algebra elements f1 , . . . , fr . Given a Lie algebra representation we can uniquely lift it
to a representation of the connected component of the Lie group G0 if G0 is simply connected. Therefore, in this case, the representations of the Lie algebra L are in a one-to-one
correspondence with the representations of the universal covering group corresponding to
L.
Now let H be a Lie group with Lie algebra L and with universal covering group G such
that H
= G/K for some normal subgroup K of G. Given a Lie algebra representation
of L, we get a Lie group representation of G. If the normal subgroup K is in the kernel
of the representation, we get a well-defined representation of H as well. Conversely, given
a representation of H, we get a representation of G by first projecting to G/K, so that
K is in the kernel. Hence, representations of H are in one-to-one correspondence with
representations of G that map K to the unit matrix.
13.5
For finite-dimensional Lie algebras a lot is known about the general structure; here we give
an overview over the results most useful in physics. Since no details are given, this section
may be skipped on first reading.
Classifying all finite-dimensional Lie algebras is in a certain sense possible; all finitedimensional Lie algebras are a semidirect product of a semisimple and a solvable Lie algebra
(to be defined below). The classification of all semisimple real and complex Lie algebras
301
is completely understood. It turns out that the semisimple complex Lie algebras can be
classified by studying certain root systems. The semisimple real Lie algebras are obtained
by applying the classification of complex Lie algebras to the complexified Lie algebras and
then finding all ways of turning the resulting complex Lie algebras into a Lie -algebra;
their real parts then give all semisimple real Lie algebras.
In the semisimple case, every representation is faithful; hence a representation is nothing
more than an embedding into a matrix Lie algebra gl(n, C), realizing the Lie algebra elements by matrices. Every Lie algebra L comes with a canonical representation, the adjoint
representation, denoted ad, which maps an element f to the Hamiltonian derivative adf
in direction f , introduced in Section 11.2. Thus to each Lie algebra element f we assign
a linear operator on a vector space. The vector space is the Lie algebra itself and an element f of the Lie algebra is represented by the linear transformation adf that maps an
element g L to f g. In the mathematical literature, one often writes the Lie product as
a commutator. Then the definition takes the form
adx (y) = [x, y] .
Due to the Jacobi identity this indeed defines a representation. For finite-dimensional Lie
algebras, there is a canonical symmetric bilinear form called the (Cartan)Killing form,
which we write as BCK and defined by
BCK (x, y) = tr(adx ady ) .
Due to the Jacobi identity, the Killing form is invariant,
BCK ([x, z], y) = BCK (x, [z, y]) .
Recall that an ideal of a Lie algebra L is a subspace I in L such that LI I. Thus
an ideal is an invariant subspace under the adjoint action of the Lie algebra on itself. A
Lie algebra L is called simple if it is not one-dimensional and has no nontrivial ideals
(distinct from 0 and L). Thus the adjoint action of L on itself has no nontrivial invariant
subspace. A Lie algebra is semisimple if it is a direct sum of simple Lie algebras. There
is a convenient criterion for a Lie algebra to be semisimple:
13.5.1 Theorem. (Lemma of Cartan)
A Lie algebra is semisimple if and only if its Killing form is nondegenerate.
Proof. The proof can be found in many Lie algebra textbooks such as Jacobsen [136],
Humphreys [131], Knapp [154], or Fulton & Harris [96].
A finite-dimensional real Lie algebra L is called compact if its Killing form is negative
definite. In this case, the Lemma of Cartan implies that L is semisimple. For example,
the Lie algebra so(3) is compact, whereas so(2, 1) is noncompact. However, note that
Lie algebras are vector spaces and therefore not compact as topological spaces in the usual
topology.
302
For a given Lie algebra one may form the so-called lower central series (or derived
series) of ideals:
L0 = L , Ln+1 = Ln Ln , n 0 .
The Lie algebra L is called solvable if there is an n such that Ln = 0. A theorem of Levi
says that every Lie algebra is a semidirect sum of a semisimple part P and a solvable ideal
S (that is, S is a solvable Lie subalgebra that is an ideal in L), such that P
= L/S. It
follows that an important part of the classification of all Lie algebras is the classification of
the simple Lie algebras.
The classification of the finite-dimensional complex simple Lie algebras can be done by
classifying certain objects called finite root systems, associated to a choice of maximal
commutative subalgebras called Cartan subalgebras. Associated to each root system is
a finite reflection group, i.e., a group generated by elements whose square is 1. The finite
reflection groups (also called Coxeter groups) which are not direct products of nontrivial
smaller reflection groups arise as symmetry groups of regular polytopes. They have all been
classified by Coxeter, and fall into five infinite families denoted by An (simplices), Bn , Cn ,
Dn (all three related to cubes and crosspolytopes), and In (polygons), and a few sporadic
cases denoted by E6 , E7 , E8 , F4 , H3 , and H4 (H3 is the symmetry group of the dodecahedron
and the icosahedron).
Most of the finite reflection groups are also realized as symmetry groups of a root system.
All root systems give rise to semisimple Lie algebras, and irreducible root systems lead to
simple Lie algebras. The classification says there are four infinite series of Lie algebras
denoted An , Bn , Cn for n 1 and Dn for n 4 and five exceptional Lie algebras called
E6 , E7 , E8 , G2 and F4 . The corresponding reflection groups have the same labels, except
for G2 which corresponds to the hexagon I6 . It is a highly nontrivial result and one of
the most beautiful pieces of mathematics that this gives a complete classification of the
finite-dimensional semisimple complex Lie algebras.
The four infinite series of Lie algebras, called the classical Lie algebras, are realized
geometrically as infinitesimal symmetry groups of certain bilinear forms, i.e., Lie algebras of
linear transformations with zero trace whose exponentials leave the form invariant. The Lie
algebras An are isomorphic to the special linear Lie algebras sl(n+1, C) of (n+1)(n+1)matrices with complex entries and trace zero. The Lie algebras Bn and Dn are the odd and
even special orthogonal Lie algebras so(m, C) (m = 2n + 1 and m = 2n, respectively),
consisting of complex antisymmetric m m-matrices. For the C-series we have Cn =
sp(2n, C), where the symplectic Lie algebras sp(2n, C) are given by the complex 2n 2nmatrices X satisfying X T J + JX = 0 where J is the antisymmetric 2n 2n-matrix given
in block form by
0 1
J=
.
1 0
For each complex Lie algebra L of the A-, B-, C- or D-series, there is an associated (simply
connected) Lie group denoted by the same, but capitalized letters, whose complexified
tangent space at the identity coincides with the Lie algebra L.
B1 = C1 , B2 = C2 . It is easy to check that sp(2, C) = sl(2, C) = so(3, C). The Lie algebra
303
so(2) is one-dimensional (and hence abelian) and therefore not simple. The Lie algebra
so(4, C) is in fact semisimple, so(4, C)
= so(3, C) so(3, C), and not simple since each
so(3, C)-factor is a nontrivial ideal. The Lie algebra so(6, C) is isomorphic to sl(4, C). For
the just mentioned reasons, one starts the D-series for n 4; so(8, C) is the first in the
series that is not isomorphic to any other. In fact, so(8, C) is very special in that it has a
large automorphism group (related to triality). For the exceptional simple Lie algebras
E6 , E7 , E8 , F4 and G2 , there is no simple geometric description as for those in the A-, B-, Cand D-series. However, the exceptional simple Lie algebras can be realized as infinitesimal
symmetry groups of some algebraic structure. And to each exceptional Lie algebra L one
can associate a Lie group, such that L is the complexification of the tangent space at the
identity.
It is important to keep in mind over which field the Lie algebra is considered. For example,
over the real numbers the Lie algebras so(p, q) so(p, q; R) are non-isomorphic, apart from
the trivial isomorphism so(p, q)
= so(q, p). Over the complex numbers we have so(p, q; C)
=
so(p+q, C), since over the complex numbers the sign of a nondegenerate symmetric bilinear
form is not invariant. Even more severe things are dependent of the field; the real Lie algebra
so(1, 3), which is extremely important in physics, is simple, but extending the field to the
complex numbers we have so(1, 3; C)
= so(3, C) so(3, C), which is not simple.
= so(4, C)
(For applications to physics, this is actually an advantage.) However, this is as bad as it
can get from the structural point of view; if a Lie algebra is semisimple over some field K,
then it is semisimple over all fields containing K. This follows from the Lemma of Cartan
13.5.1: If the Killing form is nondegenerate over some field, then extending the field does
not change this property.
The (semi-)simple real Lie algebras can also be classified, albeit the classification is a
bit more complicated. See for example the books of Gilmore [104] (or, for the more
mathematically minded, Helgason [124] or Knapp [154]). If a real Lie algebra L is
simple, the complex extension letting the scalars be complex is either simple or of the
form S S for a simple complex Lie algebra S. Hence the classification of the real simple
Lie algebras is still close to the classification of the simple complex Lie algebras in the
sense that no completely new structures appear. It is an amusing historical fact that Elie
Cartan provided the classification of the complex simple Lie algebras and his son, Henri
Cartan, finished the project so to say by classifying the real simple Lie algebras.
As we shall see in Chapter 20, the unitary representations of different real forms of the
same complex Lie algebra can be quite different. The Lie algebra so(2, 1) does not admit
a finite-dimensional unitary representation, whereas so(3) does. All compact Lie algebras
admit a unitary representation, and in fact, the adjoint representation is already unitary.
The main difficulty in the proof of this lies in establishing that all compact Lie algebras
admit a Lie -algebra structure; this requires more theory and will not be discussed here.
Since finite-dimensional unitary groups are compact, noncompact semisimple Lie algebras
cannot have finite-dimensional unitary representations, apart from the trivial one which
maps everything to zero.
304
13.6
The adjoint and coadjoint representations of a Lie algebra L extend to elements g Aut L
by defining
g := gg 1 = Adg for L
g = Adg1
for L
()g = g g ,
g ( g ) = ()
and for continuous motions g C 1 ([0, 1], Aut L),
d g( ) dg
=
( ) g( ) ,
d
d
d g( ) dg
=
( ) g( ) ;
d
d
in short,
( g ) = g
g,
( g ) = g
g.
(0) = 0 .
(13.3)
The set of points (1) reachable from a fixed 0 in this way is called the orbit Orb(0 ) of
0 . The orbits partition V , and is invariant iff it is a union of orbits. The coadjoint
orbits are the orbits in the coadjoint representation on L . Apparently, = Orb() is a
manifold homeomorphic to Aut L/ Stab(), and the tangent space at is
T = {Q() | L}.
The coadjoint orbits correspond to maximal subgroups and are symplectic manifolds with
closed 2-form (f, g) := tr (f g) The set of all L for which a fixed set of casimirs
takes fixed values is always invariant.
Part IV
Nonequilibrium thermodynamics
305
Chapter 14
Markov Processes
Part IV discusses the dynamics of nonequilibrium phenomena, i.e., processes where the
expectation changes with time, in as far as no fields are involved.
It should be complemented (in a later stage of the book) by a treatment of space-time
dependent thermodynamics, and its derivation from quantum field theory.
We first develop a formal mathematical language for representing the physical concepts related to experiments with quantum systems in an unambiguous way, such that the relations
between the mathematical concepts precisely mirror the relations between the corresponding physical concepts. In particular, we define sources, activities, processes, observers,
protocols and observables.
In this way, phenomenological quantum physics gets a formal representation in the Platonic world of precise ideas, in the same way as it has been custumary for centuries for
mathematics.
A general formal framework for phenomenological quantum mechanics is given that allows
a concise formulation of the problems of observation, in a way close to real life.
It makes the ideas developed in quantum measurement theory, and in particular the theory
of positive operator valued measures (POVMs ) intuitive and useful for actual modeling.
We then take the continuum limit of the present framework; it results in the traditional
Lindblad theory of dissipative quantum processes. However, we shall put the latter in the
broader framework of Markov processes.
The most important class of nonequilibrium processes are the memory-less Markov processes. But by disregarding some variables in a Markov process, one also finds a natural
dynamics for processes with memory. Since it can be argued that the memory in any physical process is due to hidden variables, Markov processes can be regarded as the fundamental
processes, and we shall concentrate on the latter.
307
308
A Markov process is characterized in our set-up by a linear operator with properties resembling those of a derivation, and hence called a forward derivation. We shall discuss
forward derivations in Section 14.4, general Markov processes in Section 14.5.
The building blocks of phenomenological (classical or quantum) objects are sources, i.e.,
physical objects producing a definite state. For example, in optical experiments, a source
is typically an object or arrangement that produces one or several light beams of a certain
kind. On the formal level, sources are represented by certain monotone linear functionals.
Part of experimental physics consists in the art of devising real arrangements that prepare
a source, i.e., that produce output whose ensemble properties agree with that of a formal
source.
Sources are further modified by conditioning, i.e., subjecting them to one or several activities
that change the output of a source. For example, in optical experiments, an activity may
be passing a light beam through a beam splitter or an optical filter. On the formal level,
activities are represented by certain monotone linear operators. Activities and how they
condition sources are discussed in Section 14.1.
Informally, a process is a description of everything that may happen to the output of a
source while passing through an arrangement of physical equipment. Since experiments
are not completely reproducible, they need a stochastic description; thus, we describe processes by a (classical) probability distribution on the possible activities that characterize
the corresponding possible changes.
The relation between activities and real-life observations is established by an observer who
classifies the activities according to more or less objective principles, resulting in classical
records. Further processing of the records according to established scientific standards yields
protocols that can be communicated by classical means. Associated to each protocol is a
set of observables defined by the protocol. Processes, observers, and protocols are discussed
in Section 14.2.
14.1
Activities
The building blocks of phenomenological (classical or quantum) objects are sources, i.e.,
physical objects producing a state. For example, in optical experiments, a source is typically
an object or arrangement that produces one or several light beams of a certain kind. On
the formal level, sources are represented by certain monotone linear functionals.
A source is a monotone -linear functional E on the space E of quantities, i.e., a mapping
E : E C satisfying
E(f + g) = E(f ) + E(g),
E(f f ) 0,
E(f ) = E(f ),
E(f ) = E(f ) .
14.1. ACTIVITIES
309
E(f ) = eH f
for an
Hermitian Hamiltonian H E and an inverse temperature such that
R arbitrary
H
Z= e
is finite.
The ensemble associated with a proper source is defined by the expectation functional
hEi that associates with a quantity f the expectation
hEf i := E(f )/E(1).
Sources that are multiples of each other are equivalent and define the same ensemble.
Sources form a closed convex set:
14.1.1 Proposition. Let E be a family of sources.
(i) For any directed set order on the , the limit E = lim E defined by
E(f ) = lim E (f ),
is again a source.
R
(ii) For
any
probability
measure
d()
(with
d() = 1), the convex combination
R
E = d ()E of the E , defined by
Z
E(f ) = d()E (f ),
is again a P
source. In particular, if p are nonnegative numbers with sum 1 then the weighted
sum E =
p E of sources is a source.
Proof. Straightforward.
Part of experimental physics consists in the art of devising real arrangements that prepare
a source, i.e., that produce output whose ensemble properties agree with that of a formal
310
0 gA EA (1)1
(14.1)
(14.2)
L L 1,
called the Lindblad operator of the activity. (Nonprimitive activities have no associated
Lindblad operator.) A von-Neumann activity is an activity with A2 = A, e.g., a primitive
activity whose Lindblad operator is a projector L (satisfying L2 = L).
A primitive activity with Lindblad operator L is conservative iff L is unitary, and a vonNeumann activity iff L is idempotent, L2 = L.
Primitive activities with Lindblad operators
L = etH
with a Hermitian Hamiltonian H describe conservative unitary evolution (simply passing
time).
A screen is an orthogonal projector to an invariant subspace of the position operator, i.e.,
to the closed subspace spanned by some Borel set of position eigenstates. For classical
algebras, the corresponding Lindblad operator is a characteristic function, for quantum
algebras an orthogonal projector. Thus (displaying on) a screen is a von-Neumann activity.
In real applications to quantum systems (for quantum optical devices, see, e.g., Leonhardt
& Neumaier [174]), we typically have dissipative systems described by dissipative Lindblad operators of the form
L = ea
where a E, Re a 0.
14.1. ACTIVITIES
311
L,
LL = E ().
Primitive activities (which preserve or reduce the rank) can also be considered in the Schrodinger picture; nonprimitive activities have no associated Schrodinger picture.
In physics, one frequently passes from a fundamental description in terms of microscopic
quantities to coarse-grained description in terms of certain effective quantities of interest. The effective quantities form a subalgebra, and we may look at the consequences of
restricting attention to such a subalgebra.
312
(f ) = Diag (f )
(projecting quantum observables to corresponding classical observables in the maximal
commuting subalgebra of diagonal operators). The latter may be combined with a basis
change, giving
(f ) = P Diag (P f P )P
for P : H H,
P P = 1.
14.2
Processes
14.2. PROCESSES
313
Figure 14.1: Conditioning a source by a process
E
EA
314
E
A1
EA1
r1 = r(1 )
EA1 A2 . . .
A2
r2 = r(2 )
An
EA1 . . . A
n
rn = r(n )
Further processing of the records according to established scientific standards yields protocols that can be communicated by classical means. Associated to each protocol is a set of
observables defined by the protocol.
A protocol is a mapping v : R Cn that assigns to each record r a vector v(r), and hence
(given an observer), to each label the vector v = v(r()). For any protocol v with values
in D Rn and any function : D Rm , we define the protocol (v) with
(v) := (v ).
Each protocol v defines a vector of observables
Z
v := d()A (v ).
b
14.2.2 Theorem. For an arbitrary source E, the (observable) record expectation hv(r)i
is related to the (computable) ensemble expectation hEb
vi by
hv(r)i = hEb
v i/hEb
1i.
Proof. We have
Eb
v=E
d()A(v ) =
by (14.4), hence
hEb
vi = Eb
v /E1 = hv(r)iEA(1)/E1 = hv(r)ihEA(1)i.
Since A(1) = b
1, the result follows.
Thus hEb
1i = hEA(1)i [0, 1] is the efficiency of a source E for a given process. If the
process is complete, all sources are 100% efficient.
Note that the A (1) need not commute (randomize the decision of what to measure); thus
we can jointly measure noncommuting quantities.
Scientific or industrial standards carefully define protocols for objectively observing key
observables. The art of experimental design consists in finding protocols whose associated
315
Conjecture. (in the 2-norm w.r. to E or in the -norm? Is there a related result in terms
of protocols?)
[f, g] = 0
kf fk kg gk unc(f, g)
One way to measure the quality of the approximation of a quantity f by an observable b
v is
in terms of the maximal deviation
kb
v fk
which is defined independent of sources. Numerically, this leads to a semidefinite programming problem that can be approximately solved with high efficiency (see, e.g., [8, 125, 279,
293]), namely
min
X
s.t.
v(r)fr f ,
r
where
fr =
d ()r()=r A (1).
If the source is known, we may instead minimize the empirical expectation of the surprise
(Neumaier [204])
1
s = (v(r) f ) Cov(f )(v(r) f )
n
where
f = hEf i,
This defines a least squares problem. Note that hsi 1, with equality iff b
v = f.
14.3
316
(14.6)
14.4
Forward derivations
317
14.4.3 Examples.
(i) If E = C (R) then Df (x) := f (x) defines a derivation. This example is responsible for
the name.
(ii) If E = C(Rn ), h Rn and > 0 then the coarse-grained directional derivative
Dh f (x) :=
f (x + h) f (x)
defines a forward derivation. Indeed, (D1) and (D2) are trivial, and since Dh (f f ) maps x
to
f (x + h)f (x + h) f (x)f (x)
Dh (f f )(x) =
we find
Dh (f f ) (Dh f )f f (Dh f ) = (Dh f )(Dh f ) 0
since > 0. Therefore, (D3) holds.
(iii) If B is an arbitrary quantity then
Df := 2Bf B BB f f BB
defines a forward derivation; (D3) follows from [details?]
D(f f ) (Df )f f (Df ) = 2[B, f ][B, f ] 0.
(iv) If B is a quantity such that B + B 0 then
Df := Bf f B
defines a forward derivation. Again, only (D3) is nontrivial and follows from
D(f f ) (Df )f f (Df )
= Bf f f f B + (Bf + f B )f + f (Bf + f B )
= f (B + B )f 0.
(v) If H is a Hermitian quantity then
Df := i[H, f ] = i(Hf f H)
318
(14.7)
319
and
D(cf ) = c(Df ),
D(f c) = (Df )c
for all f E.
(14.8)
(14.9)
14.5
= F (x(t), t),
(14.10)
and
1
E = C (R ),
If we define
tr f =
dxn f (x).
hf it := f (x(t), t)
then
d
hf it
dt
d
f (x(t), t)
dt
(14.11)
= x f (x(t), t) x(t)
+ t f (x(t), t)
= x f (x(t), t) F (x(t), t) + t f (x(t), t)
= hx f F + t f it = hDF f + f it ,
where DF , defined by
DF f = F x f
(14.12)
(14.14)
320
where x(z, t) is the solution of (14.10) with x(z, t0 ) = z. The particular solution (14.11)
corresponds to the density 0 (z) = (z x(t0 )), and has vanishing variance
h(f hf it )2 it = (f (x(t), t) hf it )2 = 0,
as one would expect from a deterministic process. However, unsharp initial conditions
(14.13) lead to solutions (14.14) with, in general, nonzero, variance.
14.5.2 Example. (Reversible classical mechanics) If we specialize the preceding example to an autonomous Hamiltonian system, defined by
q =
H(p, q),
p
p =
H(p, q),
q
(14.15)
(14.17)
Df = {f, H} :=
q p
p q
as associated derivation.
14.5.3 Example. (Reversible quantum mechanics)
Let be a solution of the time-dependent Schrodinger equation,
ih
d
= H,
dt
(14.18)
defined in a Hilbert space H for a Hermitian Hamilton operator H, and let E be the algebra
of linear operators on H, with standard trace. If we define
hf it := f
then
d
hf it
dt
(14.19)
= f ) = f + f + f
H
f + f +f i
= H
i
h
h
H
H
= ih f + f + f ih = h hi [H, f ] + fit
= hDf + fit ,
where
i
[H, f ]
(14.20)
h
14.6. TO BE DONE
321
We now discuss an axiomatic irreversible dynamics, of which our reversible examples are
particular cases.
14.5.4 Definition. A (single-time, autonomous) Markov process is a flow on the set
of states of a Euclidean *-algebra E such that (14.16) holds with a forward derivation D
on E. We use a dot to denote differentiation with respect to time. The process is called
reversible if D is a derivation, and dissipative if D is dissipative.
A stationary state of the process (14.16) is a state with hDf i = 0.
14.5.5 Remarks. (i) In (14.16), it is assumed that f is independent of t; otherwise, the
correct dynamics is given instead by
d
d
hf (t)it = h(Df )(t) + f (t)it
dt
dt
(14.21)
(ii) For a reversible Markov process, the backward dynamics is also a Markov process.
(iii) A stationary state is invariant under the dynamics (14.16). The expectation of conserved quantities satisfies dtd hf it = 0, and hence is time-invariant also for nonstationary
states.
In reversible quantum mechanics (Example 14.5.3), the conserved quantities are precisely
the quantities commuting with H.
14.5.6 Proposition. (Schr
odinger picture and Liouville equation)
= D .
14.6
to be done
322
Chapter 15
Diffusion processes
In this chapter we describe an important class of classical primitive Markov processes,
characterized by their drift vector and their diffusion matrix. As particular cases we obtain damped Hamiltonian systems, which are shown to converge towards the canonical
ensemble, and another important case, the exactly solvable Ornstein-Uhlenbeck processes, which describe coupled damped harmonic oscillators.
Our Euclidean *-algebra is E = E0 = S(Rn ), the Schwartz space of rapidly decaying C
functions, and the trace is integration over Rn Partial integration then gives = .
15.1
15.1.1 Theorem. Let v be a vector field and let G be a symmetric tensor field on Rn ,
G(x) positive semidefinite (v and G may be time dependent). Then
1
Df := v f + G : 2 f
2
defines a forward derivation on E0 = C (Rn ), with drift vector v and diffusion matrix G.
Moreover, if G is definite then D is primitive.
Proof. *-Linearity is clear, and
D(f, g) : = D(f g ) (Df )g f (Dg )
= u ((f g ) (f )g f g )
1
1
1
+ G : 2 (f g ) (G : 2 )g f (G : 2 g)
2
2
2
1
G : (2 (f g ) (2 f )g f 2 g )
=
2
= G : (f )(g) = (f )T G(g).
323
324
(15.1)
is called the diffusion process with drift vector v and diffusion matrix G.
We note that the low noise approximation of any Markov process is determined by drift
vector and diffusion matrix, hence any Markov process can be approximated by a diffusion process when the noise is sufficiently small. (This is the first approximation of the
Kramers-Moyal, or system size expansion and gives a more accurate approximation
than the low noise approximation. The diffusion approximation treats slow time scales
as infinitely slow; hence metastable states appear to be stable.) This accounts for the
importance of diffusion processes as approximations of more complex Markov processes.
15.1.3 Proposition. The Liouville equation for a diffusion process is the Fokker-Planck
equation
1
= D = (v) + 2 : (G).
2
Proof. By integration by parts of (15.1).
15.1.4 Remark. Associated with the diffusion process (15.1) is the stochastic Ito differential equation
dx = v(t, x)dt + B(t, x)dW (t),
(15.2)
where BB T is an arbitrary factorization of G. Here W (t) denotes the Wiener process,
i.e., the special process (15.1) with v = 0, G = 1, and x is defined by (15.2) via stochastic
integration. We shall not use this, and refer to Gardiner (4.3.18) and (4.3.3) for the
equivalence of (15.1) and (15.2). To avoid the arbitrariness in the choice of B we shall write
(15.2) as
1
dx = vdt + (Gdt) 2 = vdt + d, d N(0, Gdt),
(15.3)
which is suggestive in view of
1
hf (x(t + dt))i = hf (x + dx)i = hf + f dx + 2 f : (dx)2 i
2
1 2
= hf i + hf v + f : Gidt = hf i + dhf i
2
if we note that hudi = 0, hA : (d)2 i = A : G, and d3 = ddt = dt2 = 0. We shall see
that the definition (15.1) is very satisfactory, and we need no integrals; these are needed
for questions of existence and other, more mathematical developments.
325
v(t)v(t)dt
Rt
1
so one has to give a meaning to 0 (Gdt) 2 which is an Ito-stochastic integral. With this as
definition, it is easy to show that the Ito transformations formula holds:
1
1
df (x) = (v f + G : 2 f )dt + (f )T G(f )dt 2
2
where f = (f )T .
If functions of x satisfy the Markov process (15.1) then functions of x = (x) satisfy the
Markov process (1) with
1
v = v + : G,
2
= G T .
G
(15.4)
(15.5)
Proof. Differentiation of f(
x) = f((x)) = f (x) gives
(f )T = (f)T f = T f
and hence
(15.6)
v f = v T f = v f.
2
2 f = ( T f) = f + T (f) = f + T T f.
1
1
1
2
v f + G : 2 f = ( v + : G)f + G T : f.
2
2
2
326
(15.7)
1
uT = v T G;
2
then u transforms under x = (x) according to
u = u.
(15.8)
(15.9)
= v T T + G : (G T ) = (v T G) T = uT T .
2
2
2
One can also derive the Stratonovic transformation formula. The Stratonovic drift
1
w T := uT + ( B)B T , where G = BB T ,
2
transforms by
1
w T = ( u)T + ( B)( B)T
2
1
= uT T + ( T B)(B T T )
2
1
= (uT + ( B)B T ) T = w T ,
2
hence according to standard rules. However, it depends on the factorization of G and hence
is less useful than the covariant form (15.7).
Note: The equation w T = v T 21 B : (B T ) gives the traditional translation between the
Ito version and the Stratonovic version of a stochastic process.
15.1.8 Proposition. The Liouville equation for a diffusion process is a continuity equation
+ j = 0,
(15.10)
j = u G(),
(15.11)
327
) + f (f )) = tr([(f
) f (f )] + 0 (f ))
dt
= tr(D[(f ) f (f )] + 0 D[ (f )])
= tr({D[(f ) f (f )] + f D[ (f )]}).
Now the part in curly brackets simplifies to
(f )(f )T G(f )
dw(, t, x)[(f (t, x + )) (f (t, x)) (f (t, x + ) f (t, x)) (f (t, x + ))].
15.2
ueq = G(
eq ), tr eq = 1.
(15.12)
328
= (k T )1
u = G.
L := G
(15.14)
(15.16)
329
with a thermodynamical potential , then the process (15.15) has an equilibrium state
eq = Z 1 e ,
= (k T )1 ,
(15.17)
G = k T (L + LT ) = 2k T LT ,
we can write the Markov equation as
d
hf i = h(k T F ) LT f i.
dt
(15.18)
since 1
g ()e g) = ( F )g.
eq (eq g) = e (e g) = e (e
(iii) In the zero temperature limit, T 0, noise can be neglected (G 0), and the
equilibrium state approches (saddle point approximation!) eq (x xeq ), where xeq is
the global minimizer of (assumed unique). The motion becomes deterministic.
We now consider the deterministic approximation. According to Chapter 5 , this is given
by the differential equation.
x = v(x)
with the drift
1
k T
v = u + G = LF +
(L + LT ).
2
2
However, because of covariance and the next result, it is more appropriate to use in place
of v the covariant drift u = LF . The two are the same when L + LT is constant, a very
common case; in general they differ at low noise (k T small) only in higher order terms.
15.2.3 Theorem. For the covariant deterministic approximation of the closed diffusion process (15.15), given by
x = L(x)F (x), where F (x) = (x),
(15.20)
330
(15.21)
If L is definite and is coercive and below then any limit point of x(t) for t is a
stationary point of the thermodynamic potential, and the only stable equilibria are the
local minima of .
If is coercive and bounded below then
Proof. (15.21) holds since dtd (x) = (x) x.
d
T
lim (x) exists, dt (x) 0, whence F LF = 0 at any limit point. If L is definite, this
t
implies F = 0.
0 I 0
L = MJM T , J = I 0 0 ,
0 0 0
H(z) := (Mz), H = M T M
x = Mz, z = JH(z).
15.2.4 Remark. Other instances of fluctuation - dissipation theorems:
(i) Huang: susceptibilites or response functions are expressible as covariances (fluctuations); cf therm.tex.
(7.14) k T 2 CV = hH 2iph hHi2ph
(7.43) Nk T KT /V = hN 2 iph hNi2ph
ch.16(?) :
k T M
= h(r)i = k T
V Hmagn
where
M = Magnetization = hm(r)i
331
RT 2
= O(NA1 )
NA
15.3
Ornstein-Uhlenbeck processes
(15.22)
(15.23)
where
G = k T (L + LT )
(15.24)
and L, are positive semidefinite matrices, symmetric. Usually they are nonsingular
9and hence definite).
The most important feature of these processes is that they preserve Gaussian distributions.
In particular, this implies that their statistical behavior is completely determined by the
mean
x(t) := hxit
332
(15.25)
d
C(t) = hG + v(x x) + (x x)v i = G LC(t) C(t)LT ,
(15.26)
dt
and these equations are exact consequences of (15.22) - (15.24). [i.e., all approximations
made are already in the model formulation.] (15.25) describes a deterministic system of coupled and damped driven harmonic oscillators, while (15.22) is the corresponding stochastic
version.
The mean equation (15.25) and the covariance equations (15.26) are decoupled, and can be
solved explicitly:
Z
t
(15.27)
(15.28)
B = C(0) k T 1 .
(15.29)
where
We now dicuss the solution (15.27), (15.28), assuming that and L are definite. The
dissipation matrix
A := L
Re =
(x) L(x)
< 0.
x x
Thus the initial state x(0) and old forces get exponentially damped, and after long
times, the system behaves ike the special solution
Z t
Z
(t )L
x(t) =
e
LFext ( )d =
esL LFext (t s)ds.
(15.30)
15.4
333
= (cos(t) Re R(w)
+ sin(t) Im R())F
0
(15.33)
R()
=
is
R(s)ds =
eis R(s)ds
(15.34)
if one extends the response function to s < 0 by setting R(s) = 0 for s < 0. The energy
dissipated in a period is
Z
Z 2/
E = Fext (t) d
x(t) =
Fext (t) x (t)dt = . . . = F0T (Im R())F
(15.35)
0.
0
R(t) =
Im R()
sin(t)d
0
(from Fourier inversion formula since R is real ), this implies that (the symmetric part of)
the response function and the transfer matrix can be obtained very accuratly by measuring
the power absorption dE/dt (averaged over N periods of a harmonic driving force with
computable from (15.35). Then one can calculate R(t). Finally, Re R() is reconstructed
either by Fourier transform of R(t), or by the Kramers-Kronig relations.
The system responds by forced oscillations of the same frequences, but the force is weighted
334
R() =
e
Lds = (L i) e
L ;
0
hence
R()
= (i L)1 L
(15.37)
Im R()
= Im(i L)1 L = [ 2 + (L)2 ]1 Im(i L)L
= [ 2 + (L)2 ]1 L.
In particular, R()
will be large when is close to the imaginary part of an isolated
eigenvalue of L with small real part. This is the reason why one defines resonances
mathematically by the poles of the transfer matrix in the half plane C+ where (15.34)
makes sense.
For numerical calculation, we use a Cholesky factorization
= RT R
and a spectral factorization
RLRT = QQ1 ,
diagonal,
Re 0.
335
R
the
B(t) = 2 0 Im R()
(B is the fluctuation = time correlation, Im R
cos (t) d
dissipation.)
Quantum version
The Brownian particle correponds to linearized damped Hamiltonian systems. Force
only applies to the second order term.
driven continous-time linear state space model = driven Ornstein-Uhlenbeck process
with derived measurable quantities y.
x(t)
Fext (t) =
F0
0
e L d LF0
15.5
H
,
p
p =
H
.
q
(15.38)
336
H
H
H
H
, p =
C(q)q =
C(q)
p
q
q
p
(15.39)
dt
p
q
and we see that the damping matrix C(q) must be assumed to be positive definite (but
not necessarily symmetric) in order to have dissipation except at rest.
In terms of the potential (total energy)
(x) = H(p, q),
the thermodynamic force becomes
F =
H
p
H
q
and we infer from (15.39) that the covariant drift has the form
u=
H
p
H
C(q) H
q
p
I C(q)
where
L=
0 I
I C(q)
H
p
H
q
= LF,
(15.40)
337
The fluctuation-dissipation theorem now shows that the correct diffusion matrix is
!
0
0
(15.41)
G = k T (L + LT ) = k T
0 C(q) + C(q)T
Thus we end up with the canonical diffusion process
d
f
H
f
H f
H
.
hf (p, q)i =
+C
+ k T
C
dt
p q
q
p
p
p
p
(15.42)
Z = tr eH(p,q) .
(15.43)
338
Chapter 16
Collective Processes
16.1
In this section we discuss the general set-up of a system in which a large number of individuals interact through private communication in an environment where collective forces
govern the frequency of communication events.
Our communication model has the following ingredients.
(C1) there are q species Xj (j = 1, . . . , q) describing individuals. The number of individuals of species Xj in a system is written as Nj ; these numbers define the population
vector N Zq ; The density t (N) describes the likelihood to have at time t a population
number N, and the integral is
X
R
f (N).
f=
N Zq
lj+ Xj
j=1
q
X
lj Xj
(l = 1, . . . , r)
(16.1)
j=1
(l = 1, . . . , r; j = 1, . . . , q).
Events model the elementary units of communtication; an event consists in the meeting of
a collection of lj+ individuals of kind Xj (j = 1, . . . , q) changing during the meeting into a
collection of kind lj individuals of kind Xj (j = 1, . . . , q); or reversed. If the reverse process
339
340
is impossible we write in place of . Events are considered as black boxes about which
no details are available. Again we assume r to be finite.
P +
P
Usually
vlj and vlj are very small; typically 3. We may illustrate an event A + B
C + D as follows.
n
(C3) There are transition rates u
l : Z R+ (l = 1, . . . , r); ul (N) and ul (N) specify the
likelihood that event l happens in forward or backward direction in a population described
by N; u
l = 0 specifies an event which only oceurs in the forward direction.
The transiton rates are global, collective, properties of the system; they account for nonlocal, long range interaction between individuals, and for limitations of freedom due to
overpopulation, mutual attraction, and mutual repulsion. A very common Ansatz for the
transmition rates is that of combinatorial kinetics, where
l (N) = Kl j=1
otherwise,
k
lj
l
(Nj vlj )!
j=1
Q
+
k+
with constants kl = kl j (lj )!; one writes kll (or kl if kl = 0). This models the
assumption that individuals are completely independent and meet only by chance, so that
the transition rates are proportional to the number of ways to assemble the required collection of individuals in a population described by N. Combinatorial kinetics describes
correctly the chemistry of ideal gases and ideal solutions; it is also used for most systems
in biology, medicine, economy, and social sciences, mainly because of simplicity and lack of
more detailed knowledge.
(C4) The probability current produced by the event l is defined as
+
+
+
jl (N) := u
l (N + l )(N + l ) ul (N + l )(n + l ).
(16.3)
341
Here N is interpreted as the part of the population not involved in the event, and the
current consists of a positive contribution due to the outcome of the event and a negative
contribution due to the input of the event. If we interpret the probability currents as the
rate of change of the density due to single events, and count forward events positively,
backward events negatively, we end up with the master equation
X
X N +l
d
N l
+
+
[ul t ]N + [ul t ]N
jl (N l ) jl (N l ) =
t (N) =
dt
l=1
l=1
r
(16.4)
l := l l+ .
(16.5)
for all j +
l (N) > 0
(16.6)
for some j u
l (N) = 0;
(16.7)
and
nj < lj
the latter implies that the dynamics preserves the natural condition
t (N) = 0 if some Nj < 0
(16.8)
For x = N, the drift vector v and the diffusion matrix G are given by
X
(u+
v(N) =
l (N) ul (N))l ,
(16.10)
G(N) =
(u+
l (N) + ul (N))l l .
(16.11)
Proof. We first note that the sum over N Zq is translation invariant. Using the notation
2
[f ]N
N1 = f (N2 ) f (N1 ),
we find
X
X
X
N +
[g]N
f (N) =
g(N + )f (N)
g(N)f (N)
N
X
N
g(N)f (N )
X
N
g(N)f (N) =
X
N
N
g(N)[f ]N
.
342
Hence
X d
XX
d
N +l
N l
hf i =
t (N)f (N) =
([u
+ [u+
)f (N)
l ]N
l ]N
dt
dt
N
N
l
XX
N l
N +l
(u
+ u+
)
=
l (N)(N)N
l (N)(N)[f ]N
N
X
N
(16.12)
u
l (N) {(f (N l )g (N l ) f (N)g (N))
hence
Q(f, g)(N) =
XX
u
l (N)(f (N l ) f (N))(g(N l ) g(N)) .
(16.13)
16.1.2 Remarks.
of u
l .
t (N) = (t (N 1) t (N)).
(16.14)
With the initial condition 0 (N) = N 0 (no individual at t = 0), the solution of (16.14) is
t (N) =
(t)N et
,
N!
(16.15)
343
(16.16)
0 X.
with combinational kinetics. We get from u+ (N) = , u (N) = N the master equation
t (N) = (t (N 1) t (N)) + ((N + 1)t (N + 1) Nt (N)).
(16.17)
This can be solved exactly, with a complicated solution (see e.g., Gardiner [99]). Rather
we use the moment equations for mean and variance,
d
hNi = hvi = h(N )i = (hNi ),
dt
d 2
= hGi = hN + i = (hNi + ).
dt
With initial condition N = N0 (a number) at t = 0, we get the solution
)
hNit = (1 et ) + N0 et,
t2 = ( + N0 et )(1 et ).
(16.18)
344
for l = 1, . . . , r.
(16.19)
The typical reason for such a relation is that there is a family Yi (i = 1, . . . , p) of invariants
(conserved quantities, e.g., charge, lepton number, atoms, functional groups, dollars) which,
in every event, are exchanged in full units and dont get lost. If each individual of species
Xj contains Aij invariants Yj then
Al+ = Al
for l = 1, . . . , r,
(16.20)
(16.21)
The matrix A is referred to as the composition matrix of the process with respect to
Y1 , . . . , Yp .
16.1.4 Proposition. If t (N) is a solution of a master equation (16.4) satisfying (16.20)
then, for all functions g,
t (N) := t (N)g(AN)
is also a solution of (16.4).
Proof. (16.20) implies Al = 0, hence g(A(N l )) = g(AN), where we can cancel in (16.4)
a common factor g(AN).
The proposition reflects the fact that any initial distribution of AN is fixed by the dynamics.
Usually one fixes the distribution by assuming deterministic values for the components of
AN (i.e. numbers instead of quantities); this reduces the Euclidean *-algebra and turns
g(AN) into a number, which cancels under normalization of .
A component of collective process is a set N consisty of all population vectors N Zq
which are reachable from some fixed N0 by a sequence of events. Clearly, (16.20) implies
that all N N have the same value of AN, and typically components are characterized by
the common value of AN for N N . However, there are processes like
2X1 0,
where the parity of N1 is conserved, too, and there are processes like
2X1 2X2 X1
which have no invariant but several components, here 00 and {N | N1 , N2 0} \ 00 .
Clearly, the dynamics in different components is completely independent; so we may restrict
E0 (and hence the density) to functions of N which vanish outside some fixed component.
345
16.1.5 Theorem. If E0 consists of the functions of N which vanish outside some fixed
component of a collective process, the forward derivation of the process is primitive. In
particular, if a positive equilibrium state exists, it is unique, and is reached from any initial
state as t .
Proof. By (16.12) and (16.13), D(f f ) = (Df )f + f (Df ) f (N l ) = f (N) whenever u
l (N) > 0. Thus f is constant on each component. Since by assumption only one
component is nontrivial, D is primitive.
16.2
(N l )
l
,
(N)
(16.22)
where l 0 and is a density which is positive when all Nj 0, and vanishes otherwise.
We call processes satisfying (16.22) canonical. Combinatorial kinetics is special case of
(16.22) where is a multivariate Poisson distribution, defined by
(N) =
N
q
Y
j j ej
j=1
Nj !
(16.23)
substitution into (16.22) and comparison with (16.1.16.2), shows that the rate constants
are related to the l by the equations
kl
q
Y
lj
(l = 1, . . . , r).
(16.24)
j=1
[Apparently all processes considered in applicatons are in canonical form. The reason is
unclear to me.]
16.2.1 Theorem. For a canonical collective process with
l+ = l
(l = 1, . . . , r),
we have:
(i) At equilibrum, all probability currents jl (N) vanish.
(ii) On each component of the process, eq (N) is a constant multiple of (N).
(16.25)
346
If (16.25) holds we say the system satisfies detailed balance. Each event then satisfies
a separate balance equation jl (N) = 0, whereas in general at equilibrium only the total
(signed) sum of currents vanishes. Thus (i) and (ii) characterize closed systems.
For a noncanonical system, detailed balance does not say too much, only
u
l (N) =
l (N l )
eq (N)
for suitable l . Note that microreversibility only gives detailed balance , but not the
canonical form, which is an independent axiom.
To discover the equilibrium of a combinatorial process one can try to solve (16.24) with
l+ = l for the l and j (2r equations for q + r unknowns); if these equatons are
consistent, (16.23) provides the equilibrium solution (upto a constant factor which depends
on the initial distribution of the conserved quantities). If the equations are inconsistent,
the system cannot be closed.
The analogy to the canonical form for diffusion processes is seen by introducing the discrete
forward derivations l with
l f : N f (N + l ) f (N + l+ );
one easily sees that the adjoint l is given by
l f : N f (N l ) f (N l+ ).
Thus we can write
Df : N
=
X
l
X
c
+
+
u
l (N)(l f )(N l ) ul (N)(l f )(N l )
1
eq l (l (eq l f )).
Df = 1
eq (eq f )
347
l l Tl .
klkr
In practice, collective processes are often studied when N is very large; then a very useful
approximation is the consideration of the so-called thermodynamic limit N . The
interesting quantities are the relative sizes of the Nj . Thus we shall write
N = x,
l =
l ,
(16.26)
(16.27)
with x, , aj of order 1 and a number which becomes very large. could be the total
number of individuals, or any other extensive quantity (total volume, total mass, etc.). Our
next text theorem justifies deterministic physics for macroscopic objects.
16.2.2 Theorem. Suppose the thermodynamic potential
log (x)
(x) := kT lim
(16.28)
(16.29)
(16.30)
holds.
(ii) The dynamics becomes deterministic in the thermodynamic limit , and is given
by the differential equation
X
l+ F (x)
l F (x)
+
e
e
(l l+ ),
(16.31)
x = u(x) :=
l
l
l
(16.32)
348
=
(16.33)
=
(16.33)
(x /) (x /) + O(1 )
1
Taylor (x) + (x) (x) + O( ),
hence
log( (N )/ (N)) = log (N ) log (N)
= (x) + O(1 ) = F (x) + O(1 ).
The transition rates (16.22) become
l
u
l (N) = l e
F (x)
(1 + O(1 )).
For any function f (x) and f(N) := f (N/) we obtain the forward derivation
XX
(N l ) f(N)
u
(N)
f
D f(N) =
l
X
l
X
=
u (N) (f (x l /) f (x))
XX l
=
l el F (x) 1 + O(1 ) f (x)(l /) + O(2 )
P P F (x)
=
l e l
(l f (x) + O(1 ))
= u(x) f (x) + O(1 ).
Thus, with u(x) defined by (16.31), we find
d
d
hf (xi = hf(N)i = hu(x) f (x)i + O(1 ),
dt
dt
which is (16.32). In the thermodynamic limit, we obtain the deterministic dynamics
d
hf (x)i = hu(x) f (x)i
dt
belonging to the ordinary differential equation (16.31).
349
X
j
(16.34)
as the negative entropy contribution to an ideal mixture. Noting that in the multivariate
Poisson distribution
Y
hu
hNj i lj
,
l (N)iPoisson = kl
Poisson
j
it is more appropriate to replace expectation by Poisson expectation (= equilibrium expectation!) and get from
d
hNi = hv(N)i
dt
=
(16.10)
X
l
hu+
l (N)i hl (N)i l
d
kl+
Nj (t)lj kl
N(t) =
Nj (t)lj
dt
j
j
l
(l l+ )
X
l
(kl+
xj lj kl
(16.35)
xj lj )(l l+ ).
350
Proof. We assume +
l = l =: l and write fl := l F (x). Then
d
(x(t))
dt
=
=
(16.35)
=
(x(t)) x(t)
(16.34)
F (x) U(x)
P
fl+
fl
e )(fl fl+ ).
l l (e
By the mean value theorem, there are fl fl+ fl such that this equals
X
l efl (fl+ fl )(fl fl+ ).
l
Therefore,
X
d
(x(t)) =
l efl (fl+ fl )2 0,
dt
(16.36)
Since Al+ = Al+ , (16.31) implies Ax = 0, so that Ax(t) = Ax(0) for all t. If is coercive
and bounded below, then lim (x(t)) exists, so dtd (x(t)) 0. Therefore (16.36) gives
t
fl+ fl = 0 at any limit point x of x(t) for t . By definition of fl , this implies that
F (x ) N is conserved, and hence that
F (x )T = T A for some Rp .
(16.37)
Now we note that the stationary points x of (x) on the affine subspace Ax = Ax(0) are
the stationary points of the Lagrangian
L(x) = (x) T (Ax Ax(0)),
and since L (x) = ()T T A = F T T A, this is just the condition (16.37).
16.3
l
kl :=
l e
(with +
l = l for closed systems),
(16.38)
l e
F (x)
= fl e
351
lj
(Fj (x)0j )
= kl
zj (x)lj ,
where
(16.39)
is the activity of the jth species. The macroscopic reaction process therefore takes the
form of the system of differential equations
!
q
q
r
Y
Y
X
+
zj (x)lj kl
kl +
zj (x)lj (l l+ ).
(16.40)
x = u(x) :=
l=1
j=1
j=1
This looks like combinatorial kinetics, which is the special case zj (x) = xj = [Xi ] corresponding to an ideal mixture.
Note: For conservation of nonnegativity we need zj (x) = 0 if some xj = 0; thus Fj must
contain a log xj term, i.e., 0 is an analytic multiple of the Poisson 0 . Thus the N! is
perhaps best moved into the trace?
Chemical reactions are most commonly described at constant temperature T and constant
pressure P ; then the appropriate thermodynamic potential is = G, the Gibbs potential.
A useful phenomenological form is the so-called NRTL model which describes the potential
by a correction to the ideal mixture potential,
X
X X (Ax) x
X
X
j j
G =
Gj (xj ) +
xj log xj
xj log
xj +
.
(16.41)
xj + (Bx)j
j
j
P
P
where
A,
B
are
matrices
with
A
=
B
=
0,
B
n
0.
[the
log
term vanishes when
jj
jj
j
P
P
xj = 1 ( =
Nj ) but preserves the homogeneity of G in the general case.]
This model has the advantage that the Gj (xj ) can be determind form pure substances, and
the coefficients Ajk , Bjk can be determined from experiments with binary mixtures.
For a closed system, the equilibium is characterized by detailed balance, which says that in
(16.40) the contribution of each reaction vanishes separately. This gives the law of mass
action,
q
q
Y
Y
+
lj
+
kl
zj (x) = kl
zj (x)lj ,
(16.42)
j=1
j=1
which, in the case of ideal mixing, reduces to the more familiar form
kl+
q
Y
+
lj
[Xj ]
j=1
kl
q
Y
[Xj ]lj .
j=1
A+B C +D
+ = input
352
we get
[A][B] = [C][D].
(Traditionally, this is derived by probabilistic hard sphere arguments.)
However, for practical calculation of nonideal cases if is preferable to solve the constrained
optimization problem
min G(T, P, N)
N
(16.43)
s.t. AN = AN 0 , N 0,
where N0 is the initial composition.
If the Helmholtz potential A(T, V, N) is given as the thermodynamic function, then we
must also consider variation of volume by considering volume elements as separte species
Xvol , and specifying the change of volume in each reaction.
The Gibbs potential is now
G = PV + A
(with V corresponding to NVol ), and the optimization problem becomes
min(P V + A(T, V, N)
N,V
s.t. AN = AN 0 , N 0.
(16.44)
N,U,V
s.t. AN = AN 0 , N 0.
16.4
(16.45)
We assume a macroscopic situtation ( large but not infinite) so that the concept of a
time-dependent external thermodynamic force Fext (t) makes sense. As can be seen from
the thermodynamic limit
X
i+ F (x)
l F (x)
+
e
e
(l l+ ),
x =
l
l
l
353
l = l e
(16.46)
(16.48)
valid for open macroscopic systems with small thermodynamic forces, where
v = L((N) Fext (t))
(16.49)
G = 2kT L
(16.50)
since, by (16.47), the transport matrix L is constant and symmetric positive semidefinite.
16.4.1 Remarks. 1. In the absense of external forces, (16.48 - 16.50) describes a diffusion
process in canonical form; since L is constant, the covariant drift u agrees with v.
2. The entries of L are called the transport coefficients; the symmetry relations Lik = Lki
are called the Onsager relations.
3. The rate constants l often grow nearly linear with T so that l and hence L only
depends weakly on temperature.
4. Written as stochastic differential equation we have
dN = L((N) + Fext (t))dt + d, d N(0, 2k T Ldt).
(16.51)
5. In the space-dependent case, the Onsager relations must be modified for variables like
velocities which are not time-reversal invariant; then L is no longer symmetric (i.e., selfadjoint) and (16.50) reads
G = k T (L + LT ).
(16.52)
Linear response theory is used for an impressively large collection of applications.
354
A particular case where thermodynamical forces are small is when a system operates close
to equilibrium. In this case the potential can be expanded in powers of deviations x :=
N N from a minimizer N of (N), and sufficiently close to equilibrium, a quadratic
expansion is sufficient. The Hessian := (N ) at the minimizer is symmetric and
positive semidefinite, and we get
1
(N) = (N ) + xT x + O(kxk3 ).
2
Ignoring the error term, the substitution into linear response theory yields the driven
Ornstein-Uhlenbeck process discussed in Section 15.3.
16.5
Open system
All interesting phenomena in our world are alive in a more or less complex way, and this
is due to the fact that the systems involved are not closed but open, interacting with the
environment. Life is dependent on communication; a closed system is doomed
to death, by the second law of thermodynamics which moves the system to equilibrium
where nothing happens anymore. Such a system can be brought back to life only by exerting
external influence.
Now it is a very remarkable fact that the same thermodynamic laws which force closed
systems towards death operate on open systems in such a way that an enormously rich
variety of living structures appear, evolve and change. Indeed, we shall see that the universe
is teleological and comprehensible precisely because of dissipation: Life forces and death
forces are identical.
Modern science has just started to understand some details of this fascinating vision of the
world, and like concepts self-organization, evolution, synergetics, chaos created new
paradigms whose further unfolding will enrich and change our scientific understanding of
the world.
Mathematically, open systems are characterized by the occurence of (in general time dependent) external flows or forces = (t). Corresponding to each value there is a
forward derivation D which specifies the dynamics at constant external conditions. = 0
describes a closed system with detailed balance, but for 6= 0, detailed balance is usually
violated. A general open system is described by the forward derivation D defined by
Df (, x) :=
f
(, x) + D f (, x)
(16.53)
355
356
16.6
Related to the dynamics of open systems is the teleological (i.e., goal-directed) nature
of our world. In contrast to a widely held view, physical laws have a natural teleological
interpretation as democracy of forces in collaboration and conflict:
Forces are teleological, their goal is trying to move particles along the field lines, in a
way similar to the way we try to earn our livings, make a career, win a game, etc.. The
laws of physics are constraints which resolve conflicts between competing forces in a
democratic way (forces are additive). As in society, if many individual forces are present
the collective behavior is often different from what the individuals hope for. The
analogy to human affairs is close, and indeed one can model sociological systems by the
same mathematics as chemical systems, say, though much less accurately.
The mind-matter problem is located on this level, and perhaps one is not too far away
from modeling mind-matter interaction by open collective processes involving mind fields
expressing feeling, awareness and will and on the society level, mass media).
In a local perspective, our mind is able to set some external stimuli to the working of
our physical body; further external stimuli come through our senses (and perhaps further
through inspiration, telepathy, etc.). We all know the lack of self-organization in learning
due to wrong circumstances distracting thoughts (mind stimuli), talking neighbours
(physical stimuli), missing information (lack of stimuli), and the phase transitions induced
by the presentation of strange new information after a period of intermittend chaos the
formation of understanding: it dawned upon him, she catched on. Reaching a stable
equilibrium corresponds to the death of doubts and questions.
In a global perspective, Gods mind sets the conditions for a world created by Him to
serve this purpose. Some people think of God as the mind of the universe, and in this view
one might consider the universe as the body of God; but, like with all images of God, this
view is only partially appropriate.
Part V
Mechanics and differential geometry
357
Chapter 17
Fields, forms, and derivatives
Part V introduces the relevant background from differential geometry and applies it to
classical Hamiltonian and Lagrangian mechanics, to a symplectic formulation of quantum
mechanics, and to Lie groups.
In this chapter we introduce basic material on manifolds, the associated commutative algebra of scalar fields, and the Lie algebra of vector fields. All manifolds used in this book are
arbitrarily often differentiable, real manifolds whose dimension need not be finite. However,
we are very brief and sometimes incomplete in the technical details that need attention in
the infinite-dimensional case; on first reading, the reader may restrict everything to the
finite-dimensional case, where these details are not required.
We first recall some basics from differential geometry. Our approach differs from standard
introductions to differential geometry since, consistent with the theme of the book, all
definitions are given in an algebraic way. As a side benefit, this prepares the reader to
noncommutative geometry, only briefly touched in this book, where a manifold structure is
no longer available and all geometry enters in an algebraic way. Among other applications,
noncommutative geometry gives an interesting geometric perspective to the quantum field
theory of the standard model.
Vector fields on a manifold M are essentially equivalent to derivations on the commutative
algebra C (M) of scalar fields. However, to be able to use the traditional terminology,
where vector fields and the corresponding derivations (Lie derivatives) are distinguished,
we introduce an abstract set W = vect M of vector fields, whose elements are put into
correspondence with derivations by means of a mapping d : vect M Der M which is
applied at the right. In this way, the calculus on manifolds can be formulated in a purely
algebraic way, without any reference to the manifold.
We therefore formulate everything in terms of an arbitrary topological commutative algebra
E in place of C (M), and an arbitrary set W in place of vect M. However, the main situation
that the reader should have in mind is where E is an algebra of complex-valued, arbitrarily
often differentiable functions on a finite-dimensional manifold, for example C (Rn ). But
E could also be the Schwartz space of arbitrarily often differentiable functions all of whose
derivatives decay faster than polynomially at infinity.
359
360
17.1
We introduce the objects, operators, and operations needed for presenting the traditional
differential calculus in a purely algebraic framework: Lie derivatives applied to multilinear forms, and exterior products and the exterior derivative of alternating forms. As the
most important special case, we consider manifolds and associated geometric notions, in
particular diffeomorphisms.
Before giving the definitions, we discuss the letter conventions and priority rules used in
the formulas.
We typically (i.e., when not forced by conflicts or tradition to do otherwise) use lower case
letters from the middle of the alphabet, such as f, g, h, to denote scalar fields, capital letters
from the end of the alphabet, such as X, Y, Z to denote vector fields, capital letters from
the beginning of the alphabet, such as A, for general multilinear forms, but z, for linear
forms, for alternating bilinear forms, and for symmetric bilinear forms.
We use the convention that a Lie derivative acts on the shortest following expression which is
syntactically a vector field or a multilinear form. Similarly, the exterior derivative operator
d acts from the right on a vector field X, giving Xd, or from the left on the shortest following
expression that is syntactically an alternating form , giving d.
The wedge product has lower priority than the operations written as juxtaposition, but
higher priority than + and .
17.1.1 Definition.
(i) A differential geometry consists of a commutative algebra E containing C, a left
E-module W with an additional Lie product , both equipped with a topology such that
all operations are continuous, and a continuous mapping d (written on the right), which
maps X W to Xd Der E, such that
(X + Y )d = Xd + Y d,
(f X)d = f (Xd),
(17.1)
As will become apparent in Section 17.3 (cf. Theorem 17.3.2), we may read the term Xd f also as
361
(17.3)
The scalar field LX f (resp. the vector field LX Y ) is called the directional derivative of
the scalar field f (resp. the vector field Y ) in the direction of the vector field X.
The following example is responsible for the naming. Interpreting the set M in the example
as the domain of a chart of a finite-dimensional manifold, one can translate everything said
here to general finite-dimensional manifolds by a process described in all books on differential geometry. Thus the example gives essentially the full intuition for our constructions,
except for the complications that may arise in infinite dimensions.
17.1.2 Example. (Differential geometry of open subsets in Rn )
Let Rn denote the vector space of row vectors2 x = (x1 , . . . , xn ) with n real components
xj , let M be a nonempty, open subset of Rn , and let E = C (M) and W = C (M, Cn ),
equipped with the weak topology.
Thus scalar fields are real-valued functions, while
vector fields are row vector valued functions. In terms of the partial differential operators
j defined by
j f (x) := f (x)/xj ,
we define the gradient f of a scalar field as the column vector with n entries
(f )j = j f.
It is not difficult to show that an arbitrary derivation on t he algebra of scalar fields can
be uniquely expressed as a linear partial differential operator of the form
= X =
n
X
X j j ,
j=1
n
X
X j j f.
j=1
Thus the mapping d which maps the vector field X to the differential operator X is a
bijection of the type required in the previous example. Thus we have a canonical differential
geometry; it is clearly commutative. The reader is invited to check that the Lie derivative
takes the form
LX f = Xf, LX Y = XY Y X.
(17.4)
product of the vector field X with the exact linear form df . Until then, we shall write an explicit space
after d to remind the reader of the correct way to group the letters.
2
The index notation corresponds to standard differential geometric practice when working in a chart of a
manifold (which is essentially the situation we are discussing here). The interpretation in terms of rows (row
vectors = rovectors, indexed by upper indices = roindices) and columns (column vectors = covectors,
indexed by lower indices = coindices) makes the transition to standard linear algebra transparent.
362
LX (f Y ) = (LX f )Y + f (LX Y ),
(17.5)
(17.6)
Lf X g = f LX g,
(17.7)
Lf X Y = f LX Y XY d f
(17.8)
Also,
((LX f )Y )d g = ((Xd f )Y )d g = (Xd f )(Y d g)
and
f (LX Y ) d g =
Putting these three pieces together proves the second part of (17.5).
363
Formula (17.7) is immediate from the definition, and (17.8) follows from the product rule
Xf Y = (Xd f )Y + f (XY ) by swapping X and Y , using the anticommutativity of .
In the following, we develop the differential calculus for commutative differential geometries
only; thus, with exception of the remarks on noncommutative geometry in Section 17.5,
the algebra E of scalar fields is always assumed to be commutative. In this case,
we extend the left module structure on vector fields to a bimodule structure by putting
Xf := f X
for f E and X W. Note that some authors treat vector fields as synonymous with
derivations and therefore write X(f ) for Xd f . This should not be confused with the
present notation Xf for multiplying the vector field X with the scalar field f .
17.2
Multilinear forms
Apart from scalar and vector fields, differential geometry makes heavy use of multilinear
forms and tensors, which we define next.
17.2.1 Definitions.
(i) A linear form is a continuous, E-linear mapping : W E (written on the right3 )
which maps the vector field X to the scalar field X. We write W for the E-module consisting of all linear forms, (sometimes called the E-dual of W), with scalar multiplication
of W by f E defined via
X(f ) := f (X).
(ii) A c-linear form is a mapping : W . . . W E (with c factors of W in the
Cartesian product) such that the image4 X1 . . . Xc of (Xc , . . . , X1 ) W . . . W depends
E-linearly on each argument Xk , i.e., if, for all Xj , Y, Z W and f, g E,
X1 . . . (f Y + gZ) . . . Xc = f X1 . . . Y . . . Xc + gX1 . . . Z . . . Xc .
3
Strictly speaking, they should be called E-linear forms, and a similar remark applies later to multilinear
forms. Talking about a form rather than a mapping implies the assumption of continuity.
The standard notation for X is iX = (X); the present notation simply replaces iX by X. This way of
writing the mapping generalizes standard matrix calculus if we use the intuition gained from Example 17.1.2
and think of vector fields as row vectors and of linear forms as column vectors, an intuition that extends to
matrix fields. Since in the general situation, linear forms are often called covectors, we shall occasionally
use the analogous word rovector to denote a vector field, although, strictly speaking, one should talk
about covector fields and rovector fields. The same ambiguity is traditionally maintained for multilinear
forms on manifolds, which refer both to the corresponding fields and to their values at a particular point.
4
The traditional notation for X1 . . . Xc is iX1 . . . iXc = (Xc , . . . , X1 ); as for linear forms, the present
notation simply replaces the iX by X. Note the reverse order resulting in the arguments written in the
traditional way, needed in order that (17.9) together with our definition (17.10) of insertion is consistent
with the traditional definition (iX )(X1 , . . . , Xc1 ) = (X, X1 , . . . , Xc1 ). In our notation this translates
into Xc1 . . . X1 (iX ) = Xc1 . . . X1 X.
364
Here the unindexed argument between the dots replaces the kth argument Xk , for some k
in 1, . . . , c. In the degenerate case c = 0, we consider the 0-linear mappings to be the scalar
fields.
(iii) We write5 Wc for the E-module of continuous c-linear mappings on W. Scalar multiplication of Wc by f E is defined via
X1 . . . Xc (f ) := f (X1 . . . Xc ) .
The elements of Wc are called multilinear forms or c-linear forms; for c = 2 also
bilinear forms. Note that W0 = E consists of scalar fields (or 0-forms), and W1 = W
consists of linear forms (or 1-forms).
(iv) The product of a vector field X W and a c-linear form Wc is for c = 0 the vector
field X defined by scalar multiplication with the scalar , and for c > 0 the (c 1)-linear
form X defined by
for all X1 , . . . , Xc1 W.
(17.9)
(17.10)
E1 = W1 ,
E2 S2 = W2 .
(17.11)
for all vector fields X, Y . In particular, a bilinear form is symmetric iff T = and
alternating iff T = . A bilinear form is called nondegenerate if every linear form
W can be written as = X for a unique vector field X; otherwise degenerate. A
bilinear form may be considered as a linear mapping from W to W that maps the vector
field X to the linear form X. If is nondegenerate, this mapping is invertible, and the
inverse 1 is a linear mapping from W to W, which maps a linear form to the vector
field 1 in such a way that
1 = .
(17.12)
A nondegenerate bilinear form is called a symplectic form if it is alternating, and a
metric if it is symmetric.
5
365
Thus, for c 2,
i2X = 0, iX iY = iY iX
(17.13)
whereas
iX iY = iY iX
(17.14)
(ii) Note that there is a canonical identification of W[c, 0] with Wc , and a canonical embedding of W into W[0, 1]. In the case of finite-dimensional manifolds, we may also identify
W[0, 1] with W.
(iii) The ordinary operator product of a [c , c]-tensor and an [c, r]-tensor is well-defined, and
is a [c , r]-tensor: W[c , c]W[c, s] W[c , s]. In particular, W[1, 1] = Lin W1 is an algebra of
matrix fields.
17.2.3 Theorem. For every vector field X, the Lie derivative can be extended uniquely
to a linear operator LX mapping8 vector fields to vector fields and c-linear forms to c-linear
forms, and satisfying the product rule
LX (f ) = (LX f ) + f (LX ),
LX (Y ) = (LX Y ) + Y (LX ).
(17.15)
(17.17)
for X, Y W.
An [c, r]-tensor is also called a tensor of [ rc ]-valence (Penrose & Rindler [215]). With traditional
index notation, a c-linear form is written with c lower (co)indices, and a [c, r]-tensor is written with r upper
(ro)indices and c lower (co)indices. E.g., a [3, 2]-tensor T is written Tijk mn , and the image T of a bilinear
form is written (T )ijk = Tijk mn mn , using the traditional Einstein summation convention (which
deletes the explicit indication of the sum over m and n so as not to unnecessarily inflate the formulas
without conveying any more information). In the more modern abstract index notation of Penrose
& Rindler [215], such repeated indices denote instead an insertion (dual-pairing) without any implied
connotation of summation over basis-dependent components, and such indices may be used to keep explicit
track of the types of complicated objects.
7
For the differential geometry of open subsets M of Rn (Example 17.1.2), W[c, r] = Wrc is, in the
traditional terminology, the space of sections of the tensor bundle Tcr M.
8
The Lie derivative can also be extended to tensors T W[c, r] by defining
6
(LX T )B := LX (T B) T LX B
for B Wr .
We do not need such an extension for our limited applications; it would be needed, however, in a treatment
of general relativity. The reader is invited to verify that LX T W[c, r] and to formulate and prove the
analogues to (17.15) and (17.16).
366
Proof. We first assume that the product rule holds, and show that this fixes the operation
of LX on all multinear forms. By the product rule (17.15),
Y LX = LX (Y ) (LX Y ),
(17.18)
This formula shows that LX is determined on c-linear forms by its action on (c 1)-linear
forms, and since it is given on scalar fields, it is unique if it exists at all.
Conversely, to show existence of the extension, we define LX recursively by (17.18), starting
with the known action of LX on scalar fields.
Since (f Y )LX = LX (f Y ) LX (f Y ) = (LX f )Y + f LX (Y ) (LX f )Y f (LX Y ) =
f (LX (Y ) f (LX Y ) = f (Y LX ), we see inductively that Y LX is E-linear in Y , so that
LX is indeed a tensor.
The first part of the product rule holds since, by (17.18), the equation Y LX (f ) =
LX (Y f ) (LX Y )(f ) = LX (f Y ) (LX Y )(f ) = (LX f )Y + f LX (Y ) (LX Y )f )
f (LX Y ) = Y (LX f ) + Y f (LX ) holds for all vector fields Y . The second part of the
product rule follows directly from (17.18).
To prove (17.16), we first note that by (17.18), we have ZLX LY = LX (ZLY )(LX Z)LY =
LX (LY (Z)(LY Z))(LX Z)LY = LX LY (Z)(LX LY Z)(LY Z)LX (LX Z)LY .
The last two terms are symmetric in X, Y , hence cancel when taking the difference with
ZLY LX in Z[LX , LY ] = ZLX LY ZLY LX = (LX LY LY LX )(Z) (LX LY Z
LY LX Z) = [LX , LY ](Z) ([LX , LY ]Z). Since (17.16) is already known by (17.6) to
hold on vector fields and on scalar fields (0-linear forms), we assume that we know its validity for the action on c-linear forms. Taking for a (c + 1)-linear form, we may conclude
that Z[LX , LY ] = LXY (Z) (LXY Z) = ZLXY by (17.18). Since Z was arbitrary,
we conclude that [LX , LY ] = LXY for (c + 1)-linear forms . By induction, (17.16) holds
in general.
(17.17) follows from the product rule (17.15) since
[LX , iY ] = LX (Y ) Y (LX ) = (LX Y ) = (XY ) = iXY .
The reader may wish to prove inductively that, for c -linear forms with c c,
c
X
X1 . . . Xc LX = LX (X1 . . . Xc )
X1 . . . XXk . . . Xc .
k=1
17.3
367
Exterior calculus
(17.20)
LX ( ) = LX + LX
(17.21)
(17.22)
in particular,
=
for a 0-form ,
(17.23)
This completely specifies the exterior product of a linear form and an alternating (c + 1)form , given the exterior product with an alternating c-form. Therefore, if the exterior
product exists, it is unique.
To prove the existence of the exterior product, we have to define the exterior product of
a linear form and an alternating c-form to be the expression defined for c = 0
9
One can define an exterior product for arbitrary alternating forms , , but we do not need it.
368
by (17.23) and for c > 0 recursively by (17.22). To show that we really get an alternating
(c + 1)-form, we need to show that X( ) is alternating for c > 0 and any vector field
X, and verify
(f X)( ) = f (X( ))
(17.24)
and
XX( ) = 0.
(17.25)
17.3.2 Theorem. There is a unique linear mapping d mapping vector fields to zero and
alternating c-forms to alternating (c + 1)-forms (for c = 0, 1, 2, . . .) such that10
LX = Xd + d(X)
(17.26)
for all alternating c-forms and vector fields X. The alternating form d is called the
exterior derivative of , and satisfies the exactness relation
dd = 0
(17.27)
d(f ) = f d + df ,
(17.28)
d( ) = d d,
(17.29)
Lf X = f LX + df X,
(17.30)
for all alternating froms , scalar fields f , linear forms and vector fields X. Note the
minus sign in (17.29)!
Proof. A necessary and sufficient condition for (17.26) to hold is that
X(d) = LX d(X);
(17.31)
in particular,
X(d) = Xd = LX
for a 0-form .
(17.32)
This completely specifies the exterior derivative of an alternating c-form. Therefore, if the
exterior derivative exists, it is unique. To prove the existence of the exterior derivative, we
have to define the exterior derivative of an alternating c-form to be the expression d
determined for c = 0 by (17.32) and for c > 0 recursively by (17.31).
To show that we really get an alternating (c + 1)-form, we need to show that X(d) is
alternating for c > 0 and any vector field X, and verify
(f X)d = f (Xd)
10
(17.33)
In particular, the relation LX = Xd valid on scalar fields fails to hold for the extension of LX and d to
alternating forms.
369
(17.34)
which proves antisymmetry. The proof of (17.33) is based on (17.30). To prove (17.30), we
get inductively Y (Lf X f LX df X) = Y Lf X f Y LX Y df X = Lf X (Y )
(Lf x Y ) f (LX (Y )(LX Y ))((Y df )X df Y X). Using the induction hypothesis
on the first term and (17.28) on the second term, one finds that all terms cancel. From
(17.30) one obtains (f X)d = Lf X d(f X) = f LX + df X (f d(X) + df X) =
f (LX d(X) = f (Xd), showing that (17.33) holds. (17.34) follows inductively from
XXd = XLX Xd(X) = XLX LX (X)d(XX) = (LX X)d(XX) = 00 = 0.
To prove the product rule (17.28), we , and then inductively Xd(f ) = LX (f )d(f X) =
(LX f ) + f LX (f d(X) + df X) = f (LX d(X)) + (Xdf ) df X =
f Xd + X(df ) = X(f d + df ), completing the induction.
To prove the exactness relation (17.27), we need the formula
d(LX ) = LX (d).
(17.35)
In particular, the exterior derivative of the linear form is the alternating bilinear form d
with
Y Xd = Y LX XLY + (XY ),
since Y Xd = Y (LX d(X)) = Y LX LY (X) = Y LX ((LY X) + XLY ) =
Y LX XLY (Y X).
An alternating c-form is called closed if d = 0, and exact if it can be written in the
form = d for some (c 1)-form . In particular, a linear form is exact if it has the
form = df for some scalar field f .
By (17.27), every exact c-form is closed. The converse is not generally valid but holds in
simple cases, e.g., by the Poincar
e Lemma, when E = C (M), where M is a nonempty,
n
open and convex subset of R ).
370
17.4
x = 0.
n
X
hk
k=0
k!
for all t, h R and all natural numbers n. Clearly, (0) = , and we write := () .
The reader should verify that, for any seminorm s and R, x F,
s(0) = 0,
s(x) = ||s(x) 0,
that in any locally convex vector space, addition and scalar multiplication are continuous,
and that differentiation satisfies the traditional rules.
17.4.2 Definition. A convenient vector space is a locally convex vector space F over
R such that every smooth path in F is the derivative of another path in F. A complexvalued function f on a nonempty and open subset M of F is called smooth or arbitrarily
often differentiable if the complex-valued function f is smooth for every smooth path
: R F. The space of smooth functions f from M to a topological vector space V is
denoted by C (M, V), and the space of smooth functions f : M C is denoted by C (M).
The reader should verify that in a convenient vector space there is a mapping Lin(C (M), C
called the gradient such that
d
f ((t)) = (t)f
((t))
dt
for all f : C (M), all arbitrarily often differentiable paths : R M and all t R.
371
Informally, property (M1) says that there are sufficiently many points to separate scalar
fields. It implies not only that E is commutative, since (f ggf )(x) = f (x)g(x)g(x)f (x) =
0 for all points x, but also excludes many other commutative algebras, such as nontrivial
quotients of the algebra of polynomials in a single variable.
(M2) expresses that charts are sufficiently large to represent scalar fields locally, and (M3)
says that there are sufficiently many derivations to reduce differentiation locally to charts.
17.5
17.5.1 Definition.
(i) Let F be a convenient vector space. A manifold modeled on F (short F-manifold or
simply manifold11 if F is apparent from the context) is a set M whose elements are called
points together with a family C of maps : U M from a (-dependent) nonempty open
subset U of F to M called charts, with the properties
(SM1) Every point of M is in the range [U] of some chart : U M;
11
More precisely, this defines arbitrarily often differentiable, real manifolds whose dimension need not be
finite. There are a number of other notions of a manifold which make somewhat different assumptions.
372
(SM2) A map : U M is in C if and only if is injective and, for every nonempty open
subset V of U and every chart : U M in C with [V ] [U ],
1 |V C (V, F).
The manifold is canonically a topological space by declaring as open sets arbitrary unions
of finite intersections of ranges of charts.
(ii) The inverse of a chart is called a local coordinate system. An atlas is a family of
charts whose ranges cover M; the family C of all charts is the universal atlas.
(iii) The dimension of F is called the dimension of M. In particular, M is called finitedimensional (d-dimensional) if dim F < (resp. dim F = d).
(iv) A mapping F from M to convenient some vector space U is called smooth (or infinitely
differentiable) if F () C (U, U) for every chart : U M. A scalar field on M is a
smooth complex-valued function on M; the algebra of all scalar fields on M with pointwise
operations is denoted by C (M). A derivation on M is a mapping Lin C (M)
satisfying
(f g) = (f )g + f (g) for all f, g C (M).
(v) A canonical differential geometry whose scalar fields form the algebra E = C (M) of a
manifold M is called a differential geometry of M.
Note that a chart is injective, hence its inverse 1 , the corresponding local coordinate
system is well-defined on the range of the chart, and maps a nonempty, open subset of M
to an open subset of F. In many treatments of differential geometry, the local coordinate
system 1 rather than is called the chart.
17.5.2 Proposition.
(i) The set C (M) of all scalar fields on M is a commutative -algebra under pointwise
multiplication.
(ii) The set Lie M := Der C (M) of all derivations on M with the commutator of derivations
as Lie product is a Lie algebra.
Proof. This is left to the reader as a straightforward exercise.
17.5.3 Theorem. F-Manifolds in the sense of Definition 17.4.4(iii) and F-manifolds in the
sense of Definition 17.5.1(i) are equivalent concepts.
Proof.
The motivating example defining the terminology is the surface of the earth, the globe,
which may be regarded as a 2-dimensional manifold M with F = R12 , the vector space
373
This generalizes the traditional terminology for the case where m = 1. The implicit function
theorem implies that if the gradient has constant rank m, i.e., rk dF (x) = m for all x M0 ,
then the set M given by
M = {x M0 | F (x) = 0} ,
is a d-dimensional manifold with d = n m. If M defines a d-dimensional manifold given
by an equation F (x) = 0, then the tangent space at a point x M is given by
Tx M = X R1n | X dF (x) = 0 .
Thus the tangent space consists of those vectors perpendicular to the gradient, that is, the
tangent vectors at x are tangent to M at x. Hence the name tangent space. The vector
fields of M are similarly given by
vect M = X C (M, R1n ) | X(x) dF (x) = 0, for all x M .
Given an F-manifold M and an F -manifold N, we define C (M, N) as the set of maps
A : M N such that if : U M is a chart on M and : V N is a chart on N such
that A((U)) (V ) implies ( )1 A C (U, V ). A diffeomorphism of M is an
invertible mapping in C (M, M) with an inverse in C (M, M); we write Ax for the image
of a point x M under a diffeomorphism A.
We assume that the identity map on M is in C (M, M) and that the composition of
f C (M, N) and g C (N, N ) is in C (M, N ), a condition13 automatically satisfied in
12
It is convenient to think of points as row vectors; then tangent vectors are row vectors, too, and
gradients of scalar fields are naturally column vectors. Thus later expressions like the directional derivative
Xdf of a scalar field f in the direction of a vector field X have a natural interpretation as scalar = row
times colums in terms of ordinary matrix algebra.
13
In technical terms this says that the modeling vector spaces should admit a category of smooth manifolds.
374
finite dimensions. Then the set Diff M of all diffeomorphisms is a group under composition
of maps. Additional conditions are needed to ensure that Diff M is a vect M-manifold and
hence (in the terminology of Section 17.7 below) a Lie group; see, e.g., Neeb [200], where
one can find a detailed discussion of pathologies that can arise in infinite-dimensional Lie
groups.
We define a motion on M as a mapping A C ([0, 1], Diff(M)) such that A(0) = 1 is
the identity. The intuition is that the points x = A(0)x of an object (subset of M) at time
t = 0, the start of the motion, is moved by the motion to the point A(t)x at time t [0, 1],
ending up in A(1)x at the end of the motion. For every motion A and for all t [0, 1] we
A(t)df
: x f (A(s)A(t)1 x)
s=t
ds
for all scalar fields f and all x M. Since the product rule holds for smooth functions
f (A(t)x) = A(t)df
(A(t)x) .
dt
If we are only interested in what happens in an infinitesimal neighborhood of a point x M,
the vector fields in
N(x) := {X0 vect M | X0 df (x) = 0 for all f C (M)}
have no effect at x. Since N(x) is a vector space, we can form the quotient space
Tx M = vect M/N(x) ,
called the tangent space or tangent (hyper-)plane at x. We denote with
X(x) := X + N(x),
the equivalence class of X with respect to the equivalence relation X Y X Y N(x).
We call the equivalence class that contains the vector field X the tangent vector of X at
the point x.
The union T M of all Tx M is naturally a manifold called the tangent bundle of M.
The Lie derivative in the traditional approach. In the special case where E = C (M)
for some finite-dimensional manifold M, there is an alternative, traditional route to the
calculus on manifolds, using the following traditional definition of the Lie derivative.
For any vector field X, the initial value problem
(0) = x,
d
( ) = X(( ))
d
(17.36)
375
is solvable for every x M, for in some x-dependent neighborhood of zero. This follows
from the standard theory of ordinary differential equations, since differentiable vector fields
are locally Lipschitz.
We denote by e X the local diffeomorphism which maps x into the value ( ) of the solution
of (17.36). Clearly, e0 = 1 is the identity, but for fixed , the map e X need not be defined
everywhere. The latter is the case only when (17.36) is solvable for all R; in this case, the
vector field is called complete, and the e X form a 1-parameter group of diffeomorphisms.
In general, we have, on the domain of definition,
e X e
= e( + )X ,
d X
e x = X(e X x).
d
We define the directional derivative LX of a tensor field with respect to the complete
vector field X by
d
X
(LX )(x) :=
(e x) .
=0
d
This defines a linear differential operator LX mapping tensor fields to tensor fields of the
same type [c, r], called the Lie derivative of X. Clearly,
(e LX )(x) = (e X x).
It is not difficult to show the chain rule
d
( ) = X( )( )
d
d
(( )) = LX( ) (( ))
d
17.6
Noncommutative geometry
In this short section, we indicate how things generalize to noncommutative geometry, without giving details; the reader not familiar with the notions used may simply skip the section.
In noncommutative geometry, position measurements are limited by uncertainty relations.
The notion of a point therefore loses its meaning, and the evaluation of functions and vectors
at a point is no longer well-defined. Thus, in noncommutative geometry, a manifold of points
376
no longer exists, but in place of C (M) one has a noncommutative algebra E whose elements
behave in a way analogous to scalar fields. All constructions based only on this algebra
rather than a manifold generalize in an appropriate way to the noncommutative situation.
Thus most geometric notions extend formally, but they can be matched with true geometric
concepts only in certain commutative subalgebras. The basic observation is that a point
evaluation is a -homomorphism of C (M) to C, and conversely, all such homomorphisms
of C (M) are obtained as point evaluations. Now, if E0 is a commutative normed subalgebra of E whose completion is a B -algebra (a term we shall not further use, and
hence not introduce formally ) then one can reconstruct on E0 a topological space M0 by
calling the characters of E0 points; the B -algebra is then canonically isomorphic to the
algebra of bounded continuous functions on M0 . If E0 admits sufficiently many derivations
then M0 is a (smooth) manifold. When E1 and E2 are two such commutative subalgebras
that do not commute, then, in contrast to the commutative situation, the corresponding
manifolds M1 and M2 are not naturally embedded into a bigger manifold. Thus there may
be many maximal manifolds embedded in a single noncommutative geometry.
17.7
This section defines Lie groups in full generality. Differential equations defining the flow
along a vector field naturally produce Lie groups and the exponential map, which relates
Lie groups and Lie algebras.
17.7.1 Definition.
(i) A Lie group is a group G which is at the same time a manifold, such that multiplication
and inversion are arbitrarily often differentiable. A Lie group is both a manifold and a group
and the two structures are compatible. The identity element in a Lie group will always be
written as 1.
(ii) We canonically embed G into Diff(G) by associating to A G the map B AB, which
is a diffeomorphism. For the definition of the Lie algebra associated with a Lie group, it
is important to know that the group G acts on C (G) by right multiplication, that is, to
every A G we associate the map RA : C (G) C (G) given by
(RA )(B) := (BA)
for all B G, C (G). Of course, the group also acts by left-multiplication on C (G)
but this action is not directly related to the Lie algebra.
(iii) The Lie algebra vect G contains the set
log G = {X vect G | RB Xd = 0 for all B G}
of invariant vector fields.
It is not difficult to show that every Lie group in the above sense is a Lie group in the sense
of Definition 11.3.2, since G is canonically embedded into Lin C (G). The converse is also
valid but a bit more difficult to establish.
377
17.7.2 Proposition. The invariant vector fields log G form a Lie algebra.
Proof. To check the statement, we only need to show that the Lie product XY of two
invariant vector fields X and Y is invariant. But this follows from Proposition 11.2.4 since
the invariant vector fields form the centralizer of the set {RA | A G}.
17.7.3 Proposition. For any smooth motion A C ([0, 1], G), where G is identified with
(RB A(t)d)(A(t))
= RB (A(t)d)(A(t))
A(t)d(R
B )(A(t))
d
= A(t)d(A(t)B)
(RB )(A(t))
dt
d
d
(A(t)B) (A(t)B) = 0 .
=
dt
dt
17.7.4 Remarks. Note that an essential ingredient in the above proof is that the action
of G on C (G) is defined from the right and the action of the vector field A(t)
from the
left.
17.7.5 Definition. A motion A(t) is called a uniform motion if there exists a unique
f log G such that
= f A(t) for all t [0, 1].
A(t)
(17.37)
In this case we write ef for the group element A(1) and call it the exponential of f .
Conversely, f is called the infinitesimal generator of the motion.
Formula (17.37) is a linear differential equation with constant coefficients; the initial condition A(0) = 1 is already part of the definition of a motion. In finite dimensions, such initial
value problems are uniquely solvable; in infinite dimensions, unique solvability depends on
additional conditions. It is easy to check that a uniform motion with infinitesimal generator
f log G is given by A(t) = etf .
17.7.6 Example. In any associative algebra, the set of invertible elements is a group. In
many cases, the group of invertible elements is a Lie group.
In particular, the group
GL(n, K) of all invertible n n-matrices over K = R or K = C is a Lie group, since it is
the open set of points in Knn where the determinant does not vanish, so that any point
has an open neighborhood on which the identity is a chart. We can choose coordinates xij
for 1 i, j n and then GL(n, K) is the open set where det(xij ) 6= 0. Any derivation is of
the form
X
(Xf )(xij ) =
X ij
f (xij ) ,
xij
1i,jn
378
for all f C (GL(n, K)) and for some X ij K. One finds that log GL(n, K) = gl(n, K)
is the Lie algebra of all n n-matrices over K. It is easy to verify these properties by
describing everything with matrices. The subgroup of GL(n, L) consisting of the matrices
with unit determinant is denoted by SL(n, K). In other words, SL(n, K) is the kernel of
the map det : GL(n, K) K , where K is the group of invertible elements in K. The Lie
algebra of SL(n, K) is denoted by sl(n, K) and consists of the traceless n n matrices with
entries in K.
Chapter 18
Conservative mechanics on manifolds
We consider closed 2-forms in manifolds and their associated Poisson algebras. This naturally leads to symplectic geometry and a symplectic formulation of the dynamics of quantum mechanics. It also leads to classical Hamiltonian and Lagrangian mechanics, including
constraints.
18.1
380
the resulting Poisson algebra, and by selecting an initial state describing the preparation
of the system. In this section, we discuss the general construction principle.
Let be a closed 2-form on a differential geometry E. We call a scalar field f E
compatible with if there is a vector field Xf such that
df = Xf ;
(18.1)
any such Xf is called a Hamiltonian vector field associated with f . We write E() for
the set of all scalar fields f E which are compatible with . In general, Xf need not exist
for all f , and if it exists, it need not be unique. Thus E() may be a proper subspace of E;
this situation is typical for examples arising from constrained Hamiltonian mechanics.
18.1.1 Proposition. Let be a symplectic form. Then every scalar field f is compatible
with ,
Xf = df 1,
(18.2)
and E() = E.
Proof. Since is a symplectic form, is nondegenerate and has an inverse satisfying
(17.12). The defining condition for Xf can therefore be solved uniquely for Xf , for all
f E, resulting in (18.2).
A vector field X is called locally Hamiltonian (with respect to ) if the linear form X
is closed, and Hamiltonian (with respect to ) if X is exact (and hence closed). Thus,
for any f E(), the vector field Xf is Hamiltonian with respect to ,
18.1.2 Proposition. If X, Y are locally Hamiltonian vector fields with respect to the
closed 2-form then XY is Hamiltonian, and
(XY ) = d(XY ).
(18.3)
In particular, the locally Hamiltonian vector fields and the Hamiltonian vector fields form
Lie subalgebras of W.
Proof. Since and X are closed, (17.26) implies that LX = Xd + d(X) = 0. Again
by (17.26), d(XY ) = LX (Y ) Xd(Y ) = LX (Y ) = (LX Y ) + Y LX = (LX Y ) =
(XY ), using the closedness of Y and the product rule (17.15). This proves (18.3). The
concluding statement is an immediate consequence.
18.1.3 Theorem. For every closed 2-form over the manifold M, the set E() is a Poisson
algebra, with Lie product given by
f g := Xf dg = Xf Xg = Xg Xf = Xg df.
(18.4)
(18.5)
381
(18.6)
Proof. We first show that E() is a subalgebra of the algebra E. If f, g E() and C
then f, f g, f g E() since we may take
Xf = Xf ,
Xf g = Xf Xg ,
Xf g = f Xg + gXf .
We next show that f g is well-defined. Indeed, if Xf , Xf are two Hamiltonian vector fields
associated with f then (Xf Xf , Y ) = 0, hence f g does not depend on the choice of
the Hamiltonian vector fields associated with f and g.
Proposition 18.1.2 implies that (18.5) is a Hamiltonian vector field for f g; therefore f g
E().
The operation defined by (18.4) is bilinear, antisymmetric, and satisfies the Leibniz
identity. To conclude that E() is a Poisson algebra it therefore suffices to show that the
Jacobi identity holds. This follows since, with X := Xf , Y := Xg ,
(f g)h = Xf g dh = (Xf Xg )dh = (XY )dh = LXY h
= [LX , LY ]h = LX LY h LY LX h = Xf d(Xg dh) Xg d(Xf dh)
= f (gh) g(f h) = (f h)g + f (gh).
Finally, if is a symplectic form, (18.2) implies that the Lie product (18.4) can be rewritten
in the form (18.6).
Note that the Lie product can be extended to the case where one argument is in E() and
the other may be an arbitrary quantity from E:
f g = Xf dg
f g = Xg df
for f E(), g E,
for f E, g E().
Thus if f is compatible with , the Lie product is defined even when g is not compatible
with .
In the manifold case, the above theorem defines, for each closed 2-form on an F-manifold
M, a Poisson algebra E() which is the set of functions f E = C (M) which are
compatible with . In the special case where is symplectic, we have seen that E() = E;
thus we may define the Poisson bracket
{f, g} := gf = dg 1df
(18.7)
of f, g E. This is the traditional Poisson bracket associated with the symplectic space
(M, ).
382
The affine functions, which map M to u + for some u F and some C, satisfy
(u + )(v + ) = u1v C,
hence form a Lie subalgebra, which is a Heisenberg algebra. This provides a faithful classical
Poisson representation of general Heisenberg algebras.
18.1.4 Example. We continue the discussion of Example 17.1.2, where scalar fields
(resp. vector fields) are the smooth complex-valued (resp. row vector valued) functions on
a nonempty, open subset M of the space Rn of rovectors of length n. In this case, it is
natural to identify W with the vector space C (M, Cn ) of covector-valued fields via
(X)(x) = X(x)(x)
for X W = C (M, Cn ) and W = C (M, Cn ). In particular, the gradient df = f
appears naturally as an element of W , consistent with our abstract development.
Now let be a distinguished linear form. Then we can define its Jacobian, the x-dependent
square array whose entries are the partial derivatives
j k (x) =
k (x)
.
xj
We now consider the exact 2-form = d (the minus sign is traditional). We have
Y X = Y d(X) Xd(Y ) + (XY )
for = d
(18.8)
(18.10)
18.2
We now apply the results of Section 18.1 to classical Hamiltonian mechanics of conservative
systems. The phase space of a classical system is the set of all states that may be attained
in some realization of the system. We begin with the unconstrained case, where the phase
space is a cotangent bundle over a manifold M, and then extend the discussion to the
constrained case, where the phase space has a more complicated structure.
383
To avoid technicalities, we only treat the case where the manifold can be described by a
single chart, so that it can be treated as an open subset of some topological vector space.
However, using standard techniques from differential geometry, it is not difficult to lift the
discussion to arbitrary manifolds. Thus, in the following, the configuration space Mc
is a nonempty, open subset of a convenient vector space F over R. Thinking of Mc as a
chart of a general manifold, everything we say here extends in a standard way to arbitrary
F-manifolds in place of Mc .
We write the bilinear pairing between elements q from F and elements p from the dual space
F as product p q = q p. We extend this product linearly to the compexifications CF of
F and CF of F , and extend it further pointwise to CF-valued or CF -valued functions.
In this section, we consider the case of unconstrained dynamics Here E = C (M) and
W = C (M, CF CF ) are the spaces of scalar fields and vector fields, respectively, on the
cotangent bundle M = T Mc := Mc F of Mc .
The reader may think of the Euclidean
space F = F = Rn of vectors with n real components
P
and bilinear pairing p q =
k pk qk . As discussed in Section 5.2, this accounts for the
mechanics of point particles. For field theories, F is an infinite-dimensional function space.
A classical, conservative, unconstrained mechanical system is defined by a Hamiltonian H E and considering the full cotangent bundle M as the phase space of the
system. The point x = (q, p) M is called the state with position q Mc and momentum p F . The energy of the system in the state (q, p) is the value H(q, p) of the
Hamiltonian at (q, p).
The state of the system varies with time t, which we consider to be a number in the
interval [t, t], where t is the initial time and t > t is the final time for which the system
is considered. The time dependence is modeled by a trajectory, a state-valued, arbitrarily
often differentiable function of time, mapping t [t, t] to (p(t), q(t)) M. The position q(t)
and the momentum p(t) at time t are constrained by the Hamilton equations in state
form,
dq
dp
q =
= p H, p =
= q H.
(18.11)
dt
dt
Here p = /p and q = /q denote the gradient with respect to momentum p and
position q, respectively. Note that if f is a scalar field then p f (q, p) CF and q f (q, p)
CF .
The Hamiltonian equations automatically imply the conservation of energy:
q H q + p H p = 0.
d
H(q, p)
dt
The Hamiltonian equations may be derived from a variational principle. We define the
action as the functional on smooth paths in M defined by
Z t
dt p(t) q(t)
H(q(t), p(t)) ,
(18.12)
I(q, p) :=
t
and consider small variations q and p of the arguments q and p, respectively. Since we do
not make further use of the principle, we assume without the discussion that the integral
384
For
I(q, p + p) I(q, p)
t
t
dt p(t) q(t)
p H(q(t), p(t)) p ,
so that the path (q, p) is a stationary point of the action if and only if the extended
Hamiltonian equations (18.11) hold.
A vector field X W is a pair of functions X = (X q , X p ) C (M, CF) C (M, CF ); its
value at the state (q, p) is X(q, p) = (X q (q, p), X p (q, p)). Associated with each vector field
X is the derivation Xd defined by
Xdf := X q q f + X p p f.
Using the mapping d defined in this way, it is easily checked that we have a commutative
differential geometry. In particular, a general linear form is described by a pair of functions
(q , p ) C (M, CF ) C (M, CF) such that
X = X q q + X p p .
(18.13)
Then = d =
0 1
1 0
(X)(q, p) := X q (q, p) p .
(18.14)
(18.15)
X q = p , X p = q .
(18.16)
(18.17)
In the notation using components and the Einstein summation convention, we have = pj dq j and
= dq j dpj . Here the linear forms dq j and dpj , given by Xdq j := (X q )j and Xdpj := (X p )j , are the
gradients of the functions q j and pj mapping a general state (q, p) to the indicated components.
385
Proof. is an exact 2-form since = d(). To prove (18.15), we use (18.8) to work
out (Y X)(q, p) = Y d(X) Xd(Y ) + (XY ) = Y d(X q p) Xd(Y q p) + (XY )q p =
Y q q (X q p) + Y p p (X q p) X q q (Y q p) X p p (Y q p) + (XY )q p. Using (XY )q = XY q
Y X q = Xq q Y q + Xp p Y q Yq q X q Yp p X q , which follows from (17.4), the product
rule, and p p = 1, everything cancels except for Y p X q X p Y q . This proves (18.15). By
comparing (18.13) with (18.15), we see that
= X
q = X p , p = X q .
(18.18)
Using the chain rule, the dynamics (18.11) is easily seen to be equivalent to the Hamilton
equations in general form,
df
f =
= Hf ;
(18.19)
dt
cf. Chapter 5.2.
18.3
In the constrained case, additional parameters (e.g., Lagrange multipliers) are needed to
describe the possible states of the system. Therefore we take E = C (M U) and W =
C (M U, CF CF CU), the spaces of scalar fields and vector fields, respectively, on
an augmented cotangent bundle M U of Mc , where, as before, the phase space is
M = Mc F, and U is a convenient vector space.
A classical, conservative, constrained mechanical system is again defined by a Hamiltonian H E. The point x = (q, p, u) M U is called the state with position q Mc ,
momentum p F, and parameter u U; however, due to the constraints derived below
from H, not all points in M U are physical. As we shall see, the accessible phase space
may also be smaller than M. The energy of the system in the state (q, p, u) is the value
H(q, p, u) of the Hamiltonian at (q, p, u).
The state of the system again varies with time t [t, t]. The time dependence is modeled by
a trajectory, a state-valued, arbitrarily often differentiable function of time, now defining
position q(t), momentum p(t), and parameter u(t) at time t. These are constrained by the
extended Hamiltonian equations,
q =
dq
= p H,
dt
p =
dp
= q H,
dt
0 = u H.
(18.20)
386
Here u = /u denotes the gradient operator with respect to the parameter u. Thus,
in place of a system of ordinary differential equations in the unconstrained case we now
have a system of differential-algebraic equations (DAE) involving the holonomic
constraints
0 = u H(q, p, u).
(18.21)
Again the extended Hamiltonian equations automatically imply the conservation of energy: dtd H(q, p, u) = q H q + p H p + u H u = 0.
The case where the symmetric Hessian matrix
G := u2 H(q, p, u)
is invertible is referred to as the regular case. Then, by the implicit function theorem,
(18.21) can be solved locally uniquely for u = u(q, p), which implies that (18.20) may be
viewed as an ordinary differential equation in q and p alone. In the singular case where
the Hessian G is not invertible, the constraints imply restrictions on p. Thus, not the whole
phase space is dynamically accessible, and the analysis of solvability of the DAE is more
involved. The details depend on the so-called index of a DAE, index 1 corresponding to
the regular case, index > 1 to the singular case, and are beyond our treatment.
18.3.1 Example. We consider the constrained Hamiltonian system with F = R3 and U =
R, defined by the Hamiltonian
1
H(q, p, u) := p2 + V (k q) (k p)u,
2
where V (E) is a potential energy function. The special case V (E) := 12 E2 , describes the
dynamics of a single Fourier mode with wave vector k of the free electromagnetic field. A
straightforward calculation gives the dynamics
q = p ku,
p = k V (k q),
0 = k p.
E := p,
= k V (B),
E
0 = k E.
(18.22)
387
The extended Hamiltonian equations may also be derived from a variational principle. Now
the action is defined on smooth paths in M U,
I(q, p, u) :=
dt p(t) q(t)
H(q(t), p(t), u(t)) .
(18.23)
Variations of the arguments show as before that the path (q, p, u) is a stationary point of
the action if and only if the extended Hamiltonian equations (18.20) hold; the constraint
equations derive from
I(q, p, u + u) I(q, p, u)
dt
t
u H(q(t), p(t), u(t)) u .
(18.24)
0 1
:= d = 1 0
0 0
and
0
0,
0
388
dq
= p H(q, p),
dt
p =
dp
= q H(q, p) + A(q)u,
dt
0 = p H(q, p) A(q),
where A(q) maps a multiplier vector u U to an element from F, and the Hamiltonian
H again defines the energy. The energy is conserved since dtd H(q, p) = q H(q, p) q +
p H(q, p) p = p H(q, p) A(q)u = 0. However, now the dynamics can usually no longer
be written in terms of a variational principle. Only the special integrable case where
A(q) = q C(q) corresponds to holonomic constraints of the form C(q) = 0 and a modified
e p, u) := H(q, p) C(q) u, which agrees on the space of trajectories
Hamiltonian H(q,
with H. The most general conservative Hamiltonian system may have both holonomic and
nonholonomic constraints; the reader may wish to write down the defining equations and
generalize the above discussion accordingly.
18.4
Lagrangian mechanics
Frequently, and especially in relativistic field theory, a classical system is defined in terms
of the Lagrangian approach to mechanics. We consider here the autonomous case only,
where the Lagrangian is time-independent.
The basic object is now a Lagrangian L C (T Mc ), a function of points in the tangent
space T Mc of a configuration manifold Mc . As in the Hamiltonian case, we restrict our
attention to the case where Mc is a nonempty, open subset of a convenient vector space F
over R. Then the tangent space is T Mc = Mc F, points in T Mc are pairs (q, v) consisting
of a configuration point q Mc and a tangent vector v F at q, referred to as velocity,
and the Lagrangian is a function with function values L(q, v).
The Lagrangian approach to mechanics can be represented in the framework of constrained
Hamiltonian dynamics by taking U = F, and u = v. Then the choice
H(q, p, v) := up L(q, v)
(18.25)
389
and u = (v, u0 ) and H(q, p, v, u0) = pv L(q, v) + uT0 C(q, v), where u0 is a Lagrange
multiplier. However, in the following, we only discuss the unconstrained Lagrangian case.
Applying the general machinery of Section 18.3 to (18.25), we find as dynamical equations
the Euler-Lagrange equations
q = v,
p = q L(q, v),
p = v L(q, v);
(18.26)
dt L(q(t), q(t)).
(18.27)
(p q)
= p q + p q = Lq q + Lq q = L(q, q)
= L.
It is easily verified directly that the condition for I(q) to be stationary at the path q gives
again the Euler-Lagrange equations (18.26); this is usually taken as the starting point of
the Lagrangian approach.
18.4.1 Example. The Lagrangian L(q, v) = 12 mv 2 21 kq 2 defines the harmonic oscillator,
as can be seen by writing down the Euler-Lagrange equations. Note that the action need
not be bounded below, as can be seen from the path q(t) = s(1 t2 ) in [t, t] = [1, 1], where
8
I(q) = (4m 15
k)s2 diverges to when k > 7.5m and s . Thus, it is inappropriate
to refer to the stationary action principle as principle of least action, as often done for
historical reasons.
If we change a Lagrangian L(q, v) to
e v) := L(q, v) + v q (q)
L(q,
for some smooth function , the action I(q) remains unchanged apart from a boundary term
arising through integration by parts. As a result, the new equations of motions and the old
ones are equivalent. On the other hand, the momentum changes from p to pe = p + q (q).
This does not affect the equation of motion in the form (18.19) since the transformation
from p to p is a canonical transformation leaving the Lie product invariant. Indeed, it is
not difficult to see that the more general substitution of p = p + (q) for p preserves the Lie
product iff the Jacobian q (q) is a symmetric matrix. Necessity follows since for constants
a, b F,
a p b p = a (q)b b (q)a
must vanish, and sufficiency can be established by a more involved computation.
We may work directly in the tangent manifold and define the linear form
L = p = q L,
L (X) := Xp,
(18.28)
390
L = dq dp = dq (pq dq + pq dq).
The condition for f E(L ) to be compatible with requires the existence of a Hamiltonian
vector field Xf with
f
f
= Gdq(Xf ),
= Gdq(X
f ),
(18.29)
q
q
with the symmetric Hessian matrix
G := q2 L = q v L =
p
q
(18.30)
Case 1. In the regular case, i.e., if the Hessian matrix is invertible, we can solve the
constraint equation p = v L at least locally for v, getting an equation q = v(q, p). In this
case, we find from (18.29) that
f g =
f
g g
f
G1
G1 ,
q
q q
q
Hq = pvq (q, p) Lq (q, v(q, p)) Lv (q, v(q, p))vq (q, p) = Lq (q, q)
= p,
so that
d
f (q, p) = fp p + fq q = fp Hq + fq Hp = Hf,
dt
(18.31)
391
H
q
Hence
and
p
H
p
=
q,
q + p Lq =
q
q
q
p L p
=
=
q
q p
q
q
q
p
p p p
=
q
q +
q =
q .
q
q
q
q
d d
XH =
q q +
qq
dt
dt
Hg = dg(XH ) =
so that H generates the dynamics.
g g
= g,
q +
q = (g(q, q))
q
q
Case 2. In the singular case, i.e., when the Hessian matrix (18.30) is not invertible,
condition (18.1) is nontrivial, not all f (q, q)
are compatible with and hence in the Poisson
algebra. Then (18.31) only holds for the generalized inverse and (18.29) requires that the
partial derivatives are in the range of G. The Poisson manifold (or orbifold?) is the set
of orbits of the gauge group; cf. M/R p. 325. Restrict E accordingly, as in the symplectic
case:]
The resulting Lie product (cf. (18.4)) is
f g = dg(Xf ) =
g
g
dq(Xf ) + dq(X
f ).
q
q
(18.32)
Note that the standard treatment in terms of symplectic manifolds requires regularity. In
the singular case, complicated additional assumptions and arguments are needed to bring
theories with gauge symmetries (which are always singular) into the framework of symplectic
geometry.
392
Chapter 19
Hamiltonian quantum mechanics
In this chapter, Hamiltonian quantum mechanics is described in differential geometric,
classical terms. In particular, this enables one to formulate dynamics for mixed quantumclassical systems in which as in the Born-Oppenheimer approximation in quantum chemistry slow degrees of freedom are modelled classically, while the fast motion (typically of
electrons) is modelled by quantum mechanics.
Also discussed is the relation between classical mechanics and quantum mechanics in terms
of quantization procedures.
19.1
i
.
h
(19.1)
(19.2)
Then
= q p,
(19.3)
= .
(19.4)
(19.5)
394
(19.6)
( ) = p
(19.7)
for the partial derivatives. Using these, it is an easy matter to rewrite the Lie product
(18.17) in the form
f g = (f g g f ).
(19.8)
Now we consider the classical Hamiltonian
Hc (, ) := H,
where H Lin H is a quantum Hamiltonian. Then we find
= Hc = H,
giving the Schr
odinger equation
ih = H
(19.9)
(19.10)
as classical Hamiltonian equation of motion for the state vector H. Thus, quantum
mechanics may be discussed in a classical framework. The variational principle for classical
Hamiltonian systems discussed in the context of (18.12), rewritten for the present situation,
is called the Dirac-Frenkel variational principle. It was first used by Dirac [73]
and Frenkel [92], and found numerous applications; a geometric treatment is given in
Kramer & Saraceno [157]. The action takes the form
I(, ) :=
d
dt (t) ih H) (t);
dt
(19.11)
setting its variation to zero indeed recovers (19.10). The Dirac-Frenkel variational principle
plays an important role in approximation schemes for the dynamics of quantum systems.
In many cases, a viable approximation is obtained by restricting the state vectors (t) to a
linear or nonlinear manifold of easily manageable states |zi (for example coherent states)
parameterized by classical parameters z which can often be given a physical meaning.
Inserting the ansatz (t) = |z(t)i into the action (19.11) gives an action for the path z(t),
and the variational principle for this action defines an approximate classical Lagrangian
(and hence conservative) dynamics for the parameter vector z(t). Thus, the Dirac-Frenkel
variational principle fits in naturally with the interpretation in Section 10.4 of the parameter
vectors characterizing a state as the natural observables. An important application of this
situation are the time-dependent Hartree-Fock equations which are at the heart of
dynamical simulations in quantum chemistry.
We note that is a constant of the motion, hence we may restrict the dynamics (19.10)
to normalized state vectors satisfying = 1. In this case, we may interpret the
function Ac E defined for A Lin H by
Ac (, ) := A = hAi
395
as the classical value of the quantity A in the pure state defined by the normalized state
vector , or, equivalently, by the rank one density matrix
= .
(19.12)
The Lie product of two values is again a value, since one easily calculates
hAihBi = h[A, B]i = hABi,
(19.13)
where the Lie product on the right hand side is the quantum bracket. In particular, the
dynamics of the values is given by the Ehrenfest equation
d
i
hAi = hHihAi = hHAi = h[H, A]i.
dt
h
(19.14)
In the special case, where H = T (p) + V (q) is expressible as a sum of a kinetic energy
operator T (p) depending on a momentum vector p and of a potential energy operator V (q)
depending on a position vector q, whose components are operators satisfying the traditional
canonical commutation rules
qj qk = pj pk = 0,
pj qk = jk ,
d
hpi = hq H(q, p)i = hq V (q)i,
dt
often called the Ehrenfest theorem, are due to Ehrenfest [78]. The Ehrenfest equation,
here derived in the Schrodinger picture, is valid also in the Heisenberg picture (or even
more general interaction pictures); the dynamical objects of physical interest are neither
the states nor the quantities, but the values. We may also compute the dynamics of the
density matrix (19.12), and find the Liouville equation
ih = [H(p, q), ].
(19.15)
More generally, it is not difficult to check that taking (19.13) as a definition of the Lie
product of values in arbitrary states (not necessarily pure states as in the above derivation)
indeed turns the family of hAi with hi ranging over states defined by
hAi = tr A
(19.16)
for some strongly integrable density matrix and A ranging over the elements of Lin H into
a Lie algebra. Therefore, the Ehrenfest equation is valid for arbitrary states, not only for
pure states. By inserting (19.16) into the Ehrenfest equation and comparing coefficients,
one also sees that the Liouville equation (19.15) remains valid.
19.2
Quantum-classical dynamics
There are many systems of practical interest which are treated in a hybrid quantum-classical
fashion. The most important example is the Born-Oppenheimer approximation, where
396
nuclei are treated classically, while electrons remain quantized. Another truly quantumclassical system is a quantum Boltzmann equation with spin; here the spin is still an
operator, represented by 4 4 matrices parameterized by classical phase space variables.
On the other hand, the quantum-Boltzmann equation for spin zero is already a purely
classical equation, since its dynamical variables are all commuting.
In the Liouville picture, where the density matrices are the dynamical variables, the basic
equations for a large class of quantum-classical models are the generalized Liouville
equation
ih = [H(p, q), ],
and the generalized Hamilton equations
q = tr p H(p, q),
p = tr q H(p, q).
Here H C (M, Lin H) is an operator valued function on a classical phase space M. Thus
H(p, q) is, for any fixed vectors p, q, a linear operator on some Euclidean space H, the
density matrix = (t) is a time-dependent trace-class operator on H, and q = q(t), p =
p(t) are classical, time-dependent vectors, not quantum objects. The classical quantities
are the functions of the values
hf (p, q)i = tr f (p, q)
where f is a (p, q)-dependent operator on H. Expressed in terms of values, we have
D
E
D
E
q = p H(p, q) , p = q H(p, q) ,
which looks like the Ehrenfest theorem, except that on the left hand side we have classical
variables and no expectations. The equations are conservative equations for the evolution
of values (the value hHi of the energy is conserved); dissipative systems and stochastic
systems can be also modelled, but this is beyond the scope of the present exposition.
The quantum-classical dynamics preserves the rank of the density . In particular, if has
the rank 1 form
=
(19.17)
at some time, it has at any time the form (19.17) with time-dependent . The fact that
has trace 1 translates into the statement that the state vector is normalized to = 1.
One easily checks that the Liouville equation holds iff the state vector psi, determined by
(19.17) up to a phase, satisfies the Schrodinger equation
ih = H(p, q).
In terms of the state vector, values take the familiar form
hf (p, q)i = f (p, q).
397
Poisson algebra. Now the Lie product is the tensor product of that of the classical subsystem and that of the quantum subsystem treated as a classical Hamiltonian system. The
Ehrenfest equation still has the form
d
hAi = hHihAi,
dt
but the right hand side no longer simplifies to the value of a commutator; instead, one gets
a nonlinear dependence on values. Such nonlinearities are common for reduced descriptions
coming from a pure quantum theory by coarse graining. Usually, quantum-classical systems
are regarded as reduced descriptions, and the same phenomenon occurs. There are plenty of
other examples of practical importance, the primary one being the Schr
odinger-Poisson
equations in semiconductor modeling.
19.2.1 Examples. We mention two important examples, molecular quantum chemistry
and a spinning electron.
(i) The Born-Oppenheimer approximation of the dynamics of molecules, widely used
in quantum chemistry, is a typical quantum-classical system of the above kind. The nuclei
are described by classical phase space variables, while the electrons are described quantum
mechanically by means of a state vector in a Hilbert space of antisymmetrized electron
wave functions.
(ii) A spinning electron, while having no purely classical description, can be modelled
quantum-classically by classical phase space variables p, q and a quantum 4-component spin.
Then, with , as in the Dirac equation,
H(p, q) = p + m + eV (q)
(19.18)
where now h i is the fixed Heisenberg state. From this, one can immediately see that
everything depends only on values by applying h i to this equation:
i
d
hf i = hq f ihp Hi hp f ihq Hi + h [H, f ]i.
dt
h
398
This is now a fully classical equation for classical values of the quantum-classical hybrid
model considered.
In the interpretation given in Chapter 10, densities are irreducible objects describing a single
quantum system, not stochastic entities that make sense only under repetition. (This is
analogous to the way phase space densities appear in the Boltzmann equation, though the
analogy is not very deep.)
In general, values in the quantum-classical dynamics are to be interpreted as objects characterizing a single quantum system, in the sense of the consistent experiment interpretation,
and not as the result of averaging over many realizations.
By design, in the Heisenberg picture, the state does not take part in the dynamics. What
is new, however, compared to pure quantum dynamics is that the Heisenberg state occurs
explicitly in the differential equation. In practical applications, the Heisenberg state is
fixed by the experimental setting; hence this state dependence of the dynamics is harmless.
However, because the dynamics depends on the Heisenberg state, calculating results by
splitting a density at time t = 0 into a mixture of pure states no longer makes sense. One
gets different evolutions of the operators in different pure states, and there is no reason why
their combination should at the end give the correct dynamics of the original density. (And
indeed, this will usually fail.) This splitting is already artificial in pure quantum mechanics
since there is no natural way to tell of which pure states a mixed state is composed of. But
there the splitting happens to be valid and useful as a calculational tool since the dynamics
in the Heisenberg picture is state independent.
In contrast to the pure quantum case, there is now a difference between averaging results of
two experiments 1 , 2 and the results of a single experiment given by (1 + 2 )/2. That,
in ordinary quantum theory, the two are indistinguishable in their statistical properties is
a coincidental consequence of the linearity of the Schrodinger equation, and the resulting
state independence of the Heisenberg equation; it does no longer hold in effective quantum
theories where nonlinearities appear due to a reduced description.
19.3
Deformation quantization
There are many ways to quantize a classical system, i.e., to relate to a dynamical description of a classical system a corresponding quantum version. This process is far from
unique, but there are a number of well-explored (and only sometimes equivalent) routes for
doing this. Whether a particular quantization is useful depends on how well the resulting
quantum system describes the intended application something outside the scope of our
discussion.
An important algebraic approach to quantization is Berezin quantization, also called
the method of orbits. Here classical Poisson representations of Lie algebras are lifted to
unitary representations. We only hint at the constructions, and refer for details to Berezin
399
[31], Bar-Moshe & Marinov [22], Landsman [169], and Kirillov [152]. The construction of the LiePoisson algebra in Section 12.5 from a Lie -algebra L implies that the dual
of L becomes in a natural way a Poisson manifold; the corresponding symplectic leaves are
the so-called co-adjoint orbits, the orbits of the universal covering group corresponding
to L in its co-adjoint action on L . The canonical Poisson algebras on the co-adjoint orbits
carry an irreducible Poisson representation of L, and any irreducible Poisson representation of L arises in this way (up to equivalence). Thus, classifying the co-adjoint orbits
is the classical analogue of classifying irreducible unitary representations. The quantization
constructions mentioned above rely on close relations between co-adjoint orbits, coherent
states over Lie groups, and irreducible unitary representations. These relations can even
be generalized further, replacing the Lie algebra structure by a purely geometric setting,
which then leads to the framework of geometric quantization, cf. Woodhouse [294].
Another possibility is deformation quantization which deforms a commutative product
into a so-called Moyal product; for definitions and details, see, e.g., Rieffel [235].
Alternatively, deformation quantization may be viewed as a deformation of the quantities
in a Poisson algebra E. This is the procedure we shall discuss in more detail.
The deformation can be obtained by embedding E into the algebra Lin E, identifying f E
with the multiplication mapping Mf which maps g to
Mf {g} := f g,
writing for emphasis the arguments (in E) of operators from Lin E (often referred to as
superoperators, to distinguish them from operators acting on E itself) in curly braces.
Recall the linear operator adf Lin E defined by
adf {g} := f g,
For f E, we define the quantization fb of f by
ih
fb := f adf Lin E.
2
(19.19)
(19.20)
Note that the quantization preserves nonlinear operations (product and Lie product) only
up to terms of formal order O(h). This reflects the ordering ambiguity in traditional
quantization procedures.
For an arbitrary Gibbs state on Lin E, the expectation
hfbi = hf i
ih
hadf i
2
(19.21)
400
[fb, b
g ] = [f, g] ihf g
(19.22)
ih
f g.
2
(19.23)
h
2
adf g ,
4
(19.24)
and
Finally, since
i
h
ih
ih
ih
[fb, g] = f adf , g = [f, g] [adf , g] = [f, g] f g,
2
2
2
[f, b
g ] = [b
g , f ] = [g, f ] +
ih
ih
gf = [f, g] gf.
2
2
h
i
ih
ih
f adf , g adg
2
2
ih 2
ih
ih
= [f, g] [adf , g] [f, adg ] +
[adf , adg ],
2
2
2
[fb, b
g] =
To actually quantize a classical theory, one may choose a Lie algebra of relevant quantities
generating the Poisson algebra, quantize its elements by the above rule, express the classical
action as a suitably ordered polynomial expression in the generators, and use as quantum
action this expression with all generators replaced by their quantizations.
In general, the above recipe for phase space quantization gives an approximate Poisson
isomorphism, up to O(h) terms.
We now show that, however, Lie subalgebras are mapped into (perhaps slightly bigger)
Lie algebras defining an abelian extension , and that one gets a true isomorphism for all
embedded Heisenberg Lie algebras and all embedded abelian Lie algebras.
401
Qf {g} := fb{g} = f g
ih
f g,
2
ih
1
adf g = (Mf g + Qf g ),
4
2
1
Qf Mg = Mf Qg = Mf g .
2
Any Lie subalgebra L of E defines a Lie algebra
b = {Mf g + Qh | f, g, h L}
L
(19.25)
(19.26)
(19.27)
under the quantum Lie product. If L is an abelian Lie algebra or a Heisenberg Lie algebra
b
then Q is a Lie isomorphism between L and L.
Proof. Since E is commutative, the first term in (19.24) and (19.23) vanishes, and multiplication by = i/h gives (19.25) and (19.26). The final statement is immediate from (19.25)
and (19.26).
Note that by the so-called Groenewold-van Hove Theorem (Groenewold [113], van
Hove [277], Gotay et al. [108]), no quantization procedure can exist which possesses all
features desirable from a naive point of view. The present quantization procedure sacrifices
the exact preservation of commutation rules.
19.4
We now specialize the preceding to the standard symplectic Poisson algebra E = C (Rn
Rn ). Thus, E is a commutative Poisson algebra of phase space functions as discussed in
Section 18.2. in this very important case, which covers N-particle quantum mechanics, the
embedding discussed in Section 19.3 turns out to be equivalent to standard quantization.
The equivalence is given in terms of the so-called Wigner transform. The Wigner transform
relates kernels of linear integral operators over Rn to corresponding phase space functions. It
therefore mediates between a quantum (operator) ansd a classical (phase space) description
of the same situation, and is heavily used in semiclassical approximations of quantum
mechanics. For example (though this is outside the scope of the present book), they turn
402
the Heisenberg equation of motion for quantum field expectations (combined with certain
approximations) into quantum kinetic equations on phase space.
The basic idea is to rewrite a kernel K(x, y) as a function of the mean coordinate q =
(x + y)/2 and the difference q = x y, and then Fourier transform with respect to q to
get a function of q and a momentum vector p.
In the present special case, phase space quantization amounts to using the reducible representation
ih
ih
pb = p q , qb = q + p
2
2
of the canonical commutation rules on the Hilbert space of square integrable functions on
phase space instead of the traditional irreducible position representation by
pe = ihx ,
qe = x
on the Hilbert space of square integrable functions of configuration space, or of the irreducible momentum representation by
p = p,
q = ihp
on the Hilbert space of square integrable functions of momentum space. Since the momentum representation is obtained from the position representation by the simple canonical
transformation which interchanges x and p and then writing q = e
p, p = qe, it is enough
to discuss in the following the transformation to the position representation.
By quantizing in phase space, one gives up irreducibility (and hence the description of a
state by a unique density) but gains in simplicity. This may be compared to the situation
in gauge theory, where the description by gauge potentials introduces some arbitrariness
with which one pays for the more elegant formulation of the field equations but which does
not affect the observable consequences.
We now show that these representations are related by a Wigner transform (cf. Wigner
[292]).
We consider the quantization of the commutative Poisson algebra E = C (Rn Rn ) with
standard Poisson bracket (18.17). Since for f = f (p, q) we have
pf = q f,
qf = p f,
(19.28)
By (19.25),
pb = p
ih
q ,
2
qb = q +
ih
p .
2
pb b
q = p q = .
(19.29)
(19.30)
403
Thus we have a unitary representation of the Heisenberg algebra H(n) by linear operators
on the phase space = Rn Rn , equipped with the standard inner product. To relate this
representation to the traditional position representation given by
pe = ihx ,
qe = x,
(19.31)
(19.32)
(19.33)
h = 2h,
(19.34)
of a function f C (Rn Rn ).
where
n = dim p = dim q,
pf
bf = pefe,
qf
bf = qefe.
Proof. We have
Z
Z
Z
T
2pT e
2pT
dke2k f (k, q)
de
f (q + , q ) =
de
Z
Z
T
=
dp
de2(kp) f (k, q)
Z
=
dp hn (k p)f (k, q) = hn f (p, q),
proving (19.33). (19.35) follows from
and
Z
x + y
T
e
pef (x, y) = ihx dpep (xy) f p,
2
Z
1
x + y
T
p (xy)
= ih dpe
p + q f p,
2
2
Z
x + y
ih
pT (xy)
=
dpe
(p q )f p,
2
Z
2x + y
pT (xy)
f
=
dpe
pbf p,
= pbf (x, y)
2
Z
x + y
ih x + y
dpe
+ p f p,
2
2
Z
2 x + y
ih
pT (xy) z + y
(x y) f p,
=
dpe
2
2
Z
x + 2y
T
=
dpep (xy) xf p,
Z
x 2+ y
pT (xy)
= x dpe
f p,
= xfe(x, y) = qefe(x, y).
2
qf
bf (x, y) =
pT (xy)
(19.35)
404
Thus the Wigner transform provides an isomorphism between the two representations. Note
that the phase space representation is highly redundant since the position representation
does not act at all on the y-coordinate. The redundancy is apparent from the fact that the
algebra generated by pb and qb is much smaller than Lin E, and in fact isomorphic (modulo
convergence issues) to Lin C (Rn ) via the Wigner transform. However, this redundancy
is very helpful since it makes the classical limit and the approximation by semiclassical
techniques much simpler.
The Wigner transform can be applied to all nonlinear PDEs of Schrodinger or Dirac type.
These have the form
I( ) = 0, C (Rn ),
(19.36)
where I is an operator-valued function of the density matrix
e :=
(19.37)
I()
=0
(19.38)
in phase space. However, (19.37) loses the rank 1 condition implicit in (19.36), hence
corresponds to a mixing of pure states.
19.4.2 Proposition. The bilinear inner product
Z
(f |g) = dpdq f (p, q)g(p, q)
satisfies
(f |g) =
2 n Z
h
(19.39)
(19.40)
405
(19.41)
dkdxA(k, x) =
hA(q)ix = A(x)h1ix
(19.43)
(19.44)
dkhAix =
dxhAik .
(19.45)
(19.46)
406
q f = qbf.
(19.47)
The symbol formulation. To extend the quantization rule to arbitrary smooth functions
we need the symbol formulation of pb and qb.
Let = Rm Rm the classical phase space, and let
j = ih/2.
19.4.5 Proposition. For every distribution on ,
Z
(k, x) = b(k + p, x + q)epq/j dpdq,
where
b(p, q) = (h)
2m
(k + p, x + q)ekx/j dkdx.
(19.48)
(19.49)
(19.50)
(19.51)
2m
(k + p , x + q )ep q /j epq /j ep q/j dpdq dqdp
= (h)
Z
m
(k, x + q )epq /j dpdq
= (h)
= (k, x).
407
19.4.6 Proposition. For every Schwartz function A on phase space , the symbol A(p, q)
defined by
Z
A(p, q)(k, x) :=
A(k + p, x q)b
(k + p, x + q)epq/j dpdq
(19.52)
q = x jk
(19.53)
when A is a normally ordered polynomial acting on (k, x) where all p are to the right of
all q . Moreover, we have the canonical commutation rules (CCR)
[p , q ] = .
(19.54)
q p (k, x) =
(k ) (2x x ) b(k , x )e(k k)(x x)/j dk dx
Z
=
(k + p) (x q) b(k + p, x + q)epq/j dpdq.
Now take linear combinations and limits to find (19.52).
The CCR follow since (with fu = f /u)
p q f = p (x f jfk ) = k (x f jfk ) + j(x f jfk )x
= k x f + j( f + x fx k fk ) j 2 fk x ,
q p f = q (k f + jfx ) = x (k f + jfx ) j(k f + jfx )k
= k x f + j( f + x fx k fk ) j 2 fk x ,
hence
[p , q ]f = p q f q p f = 2j f = ih f.
(19.55)
408
where
{H, } =
1
(H (H) ) = Jm(H)
2j
(19.56)
(19.57)
leaves the total density h1i invariant. Moreover, if for all A we have hA Ai 0 at time
t = 0 then hA Ai 0 at all times t 0.
19.4.8 Proposition. (Classical limit)
In the limit j 0,
and
{H(p, q), }(p, q) = p Hq q Hp + O(h)
{p, } = q , {q, } = p
= (q
q + p
p ) = Hp q Hq p
from Hamiltonian dynamics.
To get the density in a position representation, write
Z
where
b(p, x ) = (2h)
(x, y)epx/ihdx
H(p, x)b
(p, y)epx/ihdp.
(k, k ) = (k , k)
409
410
Part VI
Representations and spectroscopy
411
Chapter 20
Harmonic oscillators and coherent
states
Part VI applies the concepts introduced so far to the study of the dominant kinds of
elementary motion in a bound system, vibrations of oscillators (described by Poisson representations of the Heisenberg group), rotations of rigid bodies (described by Poisson representations of the rotation group), and their interaction. On the quantum level, quantum
oscillators are always bosonic systems, while spinning systems may be bosonic or fermionic
depending on whether or not the spin is integral. The analysis of experimental spectra,
concentrating on the mathematical contents of the subject, concludes our discussion.
This chapter is a detailed study of harmonic oscillators (bosons, elementary vibrations),
both from the classical and the quantum point of view. We introduce raising and lowering
operators in the symplectic Poisson algebra, and show that the classical case is the limit
h
0 of the quantum harmonic oscillator.
The representation theory of the single-mode Heisenberg algebra is particularly simple since
by the Stonevon Neumann theorem, all unitary representations are equivalent. We find
that the quantum spectrum of a harmonic oscillator is discrete and consists of the classical frequency (multiplied by h
) and its nonnegative integral multiples (overtones, excited
states).
We shall work in the representation where the harmonic oscillator Hamiltonian is diagonal,
which gives rise to the ladder operators mediating between neighboring eigenstates. We
introduce Diracs bra-ket notation, and deduce the basic properties of the bosonic Fock
spaces, first for a single harmonic oscillator and then for a system of finitely many harmonic
modes.
We then introduce coherent states, an overcomplete basis representation in which not only
the Heisenberg algebra, but the action of the Heisenberg group is explicitly visible. Coherent
states are quantum states that behave as classically as possible, thereby making a bridge
between the quantum system and classical systems. The coherent state representation is
413
414
particularly relevant for the study of quantum optics, but we only indicate its connection
to the modes of the electromagnetic field.
20.1
p2
+ V (q) ,
2m
(20.1)
where V (q) is quadratic and bounded from below, so that there are constants q0 , V0 and
k > 0 with
k
V (q) = V0 + (q q0 )2 .
2
The number k is called the stiffness; the greater the constant k, the more difficult is it to
move away from equilibrium. The Hamilton equations are:
q =
p
,
m
p = V (q) = k(q q0 ) .
A complex exponential ansatz shows that the solution of the Hamilton equations is:
q(t) = q0 + 2 Re(eit x) ,
p(t) = Re(imeit x) ,
p(t) =
1
2m(a(t) a (t)) ,
2i
(20.2)
hence the description by a normal mode is equivalent to the original description. Differentiating a(t) and using q = p/m, we obtain
r
k p(t)
i
k(q(t) q0 ) = ia(t) .
a(t)
=
2 m
2m
We conclude that a(t) has to obey
a(t) = a(0)eit .
415
p q
p q
r
k
1
= 2i
2 2m
= i,
aa =
(20.3)
The relation (20.3) is called the canonical commutation relation (CCR) for the harmonic oscillator. More generally, one finds for the Lie product of general functions f, g of
a and a the formula
f g
f g
f g = i
i .
(20.4)
a a
a a
This will be seen later as a special case of a general principle for constructing so-called
LiePoisson algebras from a Lie algebra.
20.2
For a classical harmonic oscillator, the Lie product in the CCR (20.3) is defined via the
Poisson bracket. To quantize the harmonic oscillator, all we do is replace the Lie product
in the CCR by its quantum analogue. Thus we postulate the existence of an operator a
and its conjugate a with the relation
i
[a, a ] = i ,
h
equivalently
[a, a ] = h
.
(20.5)
to zero, we have to end up in the classical regime, where the operators a and a become
functions on phase space and hence commute.
Equation (20.5) defines a -algebra, i.e., an associative algebra with unity and an involution
, generated by a with the relation aa a a = h
. Later we look for representations in
a Hilbert space, where the involution then corresponds to Hermitian conjugation. But
already at this level, we call expressions in a and a operators.
The quantum mechanical Hamiltonian for the harmonic oscillator is the operator given by
direct substitution of the p and q from (20.2):
a a 2
a + a 2
H =
+
+ V0
2i
2
= 21 (aa + a a) + V0
= a a + 12 h + V0 .
(20.6)
416
simple formula H = a a.
In the classical theory we have commuting variables a and a with a Lie product aa = i.
That is, we have a commutative -Poisson algebra. In the quantum theory we have an
associative algebra generated by a and a with the relation aa a a = h
. Since in the
quantum theory two seemingly different polynomial expressions (such as aa and a a + h
)
can be the same, there is a need for a preferred ordering of a and a in monomials. The
normal ordering is that ordering of a and a in monomials where all a s are moved to
the left of the as. It is easy to see that every noncommutative polynomial in a and a
can be normally ordered by repeated use of the relation aa = a a + h
; in the process of
normal ordering, lower degree monomials are generated with higher powers of h
. We give
the following proposition that guarantees that taking h
0 we recover the classical theory:
20.2.1 Proposition. Let f and g be noncommutative polynomials in a, a and h
. Viewing
f and g as polynomials in commuting variables a and a , one can calculate f g using (20.4).
As noncommutative polynomials one can calculate the commutator [f, g] = f g gf . The
two results are related by:
i
[f, g] = f g + O(h) .
(20.7)
h
One expresses this relation by saying that the quantum Lie product is a deformation of
the classical Lie product.
Proof. The order of the a and a does not matter since changing the order we generate
powers of h
. We use induction on the degree of the polynomials. For degree zero and one,
(20.7) holds. Suppose it holds for degree of f smaller than n and degree of g one. If we
write f = a S + T a for some normally ordered polynomials S and T with degrees smaller
than n, we see that for g = a :
i[a S + T a, a ] = ia [S, a ] + ihT + i[T, a ]a
= h
a Sa + ihT + h
T a a + O(h2 )
= h
(a S)a + h
(T a)a + O(h2 ) ,
and the result holds for f arbitrary and g = a . For g = a it goes similar. Suppose the
claim holds for all g with degree k, with 0 k n. Then for degree n + 1 let us write
g = aP + a Q + R, where P , Q and R are polynomials of degree strictly less than n + 1.
Then we have:
i
i
[f, g] =
[f, aP + a Q + R]
h
i
i
i
i
i
[f, a]P + a[f, P ] + [f, a ]Q + a [f, Q] + [f, R]
=
h
= f aP + af P + f a Q + a f Q + f R + O(h)
= f (aP ) + f (a Q) + f R + O(h)
= f (aP + a Q + R) + O(h) .
And the proof is complete.
(20.8)
417
= Ha(t) = i
H
.
a
(20.10)
We remark that if (20.3) holds for t = 0 then it holds for all t. Indeed, the derivative of the
left-hand side of (20.3) vanishes identically.
Using a different choice of the parameters defining a we get a different variable a , which is
affinely related to the original a,
a = + a + a .
(20.11)
The requirement that a satisfies the same commutation relations as a leads to the restriction
||2 ||2 = 1.
(20.12)
A transformation of the form (20.11) satisfying (20.12) is called a Bogoliubov transformation. Bogoliubov transformations have important applications; for example, they were
at the heart of Hawkings proof that black holes radiate. The generalization of Bogoliubov
transformations to systems of oscillating electron pairs in metals is an important ingredient
for the theory of Cooper pairs, which explains superconductivity effects in metals at low
temperature.
Different choices of the coefficients in the definition of a lead of course to different forms of
H(a, a ); this means that different Hamiltonians H(a, a ) can describe the same oscillator.
The particular choice above for the harmonic oscillator is the one leading to H(a, a ) =
E0 + a a, for which the dynamics (20.10) takes the simple form a = ia. In theoretical
physics there are different operators ak and ak labeled by some parameter k. One tries to
find by means of Bogoliubov transformations the simplest form of the Hamiltonian. The
418
The quantization of an anharmonic oscillator is done as in the harmonic case. For each
classical Hamiltonian polynomial in a and a , there is a unique normally ordered quantum
version. However, when modeling the same system both in a classical and in a quantum
setting, the coefficients of the quantum system in a normal ordering of the operators must be
taken to depend on h
, and the form of this dependence is not determined by the quantumclassical correspondence. Therefore, the best fit of coefficients of H to experimental data
will generally produce different optimal values in the classical and the quantum case. In a
quantum field theory, the coefficients will also be dependent on the scale at which frequencies
remain unresolved, giving so-called running coupling constants which play an important
role in renormalization techniques.
20.3
We saw at the end of Section 20.1 that the Heisenberg algebra t(3, C) can be considered as
being generated by 1, a, and a where a and a satisfy the CCR aa = i.
In the classical case, we know a realization of these commutation relations in terms of a
Poisson bracket. In the quantum case, we must find a representation in terms of operators
in a Hilbert space. The representations of physical interest are the unitary representations,
which represent the one as identity and behave properly under the -operation. In this
section we construct a unitary representation of the Heisenberg algebra.
In the quantized version of a classical theory the functions on phase space become elements
of some associative algebra E. For a representation we want to realize the algebra E as a
subalgebra of an algebra of linear operators.
The approach of Schrodinger (1926) to this problem was to take as Hilbert space the space
of square integrable complex-valued functions on R3 ; then the Schrodinger equation for
the dynamics of a pure state takes the form of a wave equation, which was familiar to physicists at that time and hence came to dominate quantum mechanics. The approach taken
by Schrodinger proved to be very successful and is also presented in many quantum physics
textbooks, since (for a single particle) the real-valued function |(x)|2 has an intuitive
semiclassical probability interpretation (discussed in Section s.motQM). For multiparticle
systems, the intuitive advantages of Schrodingers representation is no longer given, as the
wave functions are no longer in physical space R3 but, for n particles, in an abstract 3ndimensional configuration space. For systems involving an unconserved number of particles,
in particular for interactions with light, and for systems in the thermodynamic limit, things
are even more complicated since the configuration space becomes infinite-dimensional, and
the wave function representation becomes unwieldy instead one usually resorts to the
techniques of quantum field theory. Nevertheless, there are interesting papers using the resulting functional Schrodinger equation to illuminate the relations between classical solitons
and quantum bound states (see, e.g., Jackiw [135]).
419
One year earlier than Schrodinger, Heisenberg invented his infinite-dimensional matrix algebra. We present Heisenbergs approach since it generalizes easily to the most complex
quantum systems, including the universe as a whole.
We now look at an arbitrary unitary representation J : L Lin H in a Euclidean space H
satisfying
J(a ) = J(a) , J(1) = 1.
We shall write the operators corresponding to a and a in the representation again by a and
a (rather than using J(a), etc.), in order to avoid clumsy notation. This will not cause
problems since the representation turns out to be faithful. Then the operator
1
a a,
h
for reasons that will soon be apparent, is called the number operator, satisfies the commutation relations
[a, n] = a , [a , n] = a ,
n :=
as is easily checked. This implies that the vector space generated by 1, a, a and n is closed
under the commutator, and hence forms a Lie -algebra L with the quantum Lie product,
called the oscillator algebra os(1). In this section (as always when classifying unitary
representations), it will be more convenient to work directly with commutators.
We now illustrate an important technique in representation theory, which in many cases of
interest provides all irreducible representations of a certain kind. See Section 22.4 for some
other applications.
We define the Verma module corresponding to a complex number by
V = { H | n = } ,
where the Hilbert space H is the closure of H. If V is nontrivial, it contains a nonzero
vector, is an eigenvalue of n, and any nonzero V is a corresponding eigenvector.
Thus the nonzero Verma modules are just the eigenspaces of the eigenvalues of n. Since we
consider here only unitary representations where * is the adjoint, this implies that
kak2
n
0
= =
h
kk2
is real and nonnegative. Noting that in general n is Hermitian, we now make the slightly
stronger assumption that n is self-adjoint as a densely defined operator of the Hilbert space
H. Then the spectral theorem implies that the infimum
n
b
= inf
6=0
1
a a = 0,
h
420
ka k2 = aa = (h + a a) = h
kk2 + kak2 h
kk2 > 0,
20.4
In his groundbreaking work on quantum mechanics, Dirac introduced a notation for vectors
and operators that is widely used by physicists but is quite different from what mathematicians are used to. Diracs bra-ket calculus is not very well defined in the way actually used
by physicists, since the basis vectors considered in the calculus do not necessarily lie in the
Hilbert space in which everything should happen from a strictly axiomatic point of view.
We define here a precise version of Diracs bra-ket calculus, which can also satisfy mathematicians. Instead of working in a Hilbert space we consider a fixed dense subspace which
we denote by H. Thus H is a vector space with a Hermitian inner product h|i, antilinear in
the first argument and linear in the second, such that h|i is always real and nonnegative,
and the relation
h|i = h|i
(20.13)
421
(20.15)
which just asks us to replace two adjacent vertical bars by a single one.
If H is itself a Hilbert space (and in particular, if the dimension of H is finite) then it is not
difficult to see that all continuous linear functionals arise in this way. However, in many
interesting infinite-dimensional vector spaces H, the situation is different. For example, if
H = C (R) and z R then the mapping z which maps H to
z () := (z)
is a continuous linear functional which cannot be obtained as for some smooth vector .
We can accommodate this in the bra-ket calculus by allowing as bras all continuous linear
functionals rather than only those which have the form with H. We simply need to
label the continuous linear functional as bras h| with symbols from a set H such that the
functionals of the form with H get the label . The set H can be made canonically
into a vector space containing H as a subspace by requiring the mapping : := h|
to be antilinear. Then H = H in case H is a Hilbert space, but in general H may be a
proper subspace of H . Since the inner product extends continuously from H to the Hilbert
space completion H, every element of H defines a continuous linear function. Thus, in
general, the Hilbert space H sits somewhere in between H and H ,
H H H .
(20.16)
Frequently, some extra nuclear structure on H is assumed which turns (20.16) into a
so-called Gelfand triple or rigged Hilbert space (see, e.g., Maurin [190], Bohm &
Gadella [38], and for applications to resonances Kukulin et al. [165]); however, on the
level of our discussion, we dont need this extra structure.
If H = C (R), physicists call the vectors H wave functions well being aware
that they are not always functions in the standard sense , and write them with a dummy
argument x as (x). For example, they consider z to be a shifted delta function, and
write it as (x z).
A wave function which is in the Hilbert space H H is called normalizable, the remaining
wave functions are called non-normalizable. In mathematical terms, the normalizable
422
wave functions are equivalence classes of square integrable functions, with two functions
being regarded as equivalent when they differ only on a set of measure zero. The shifted
delta functions are examples of non-normalizable wave functions.
For a general Euclidean space H, we refer to the elements of H as rough vectors since
they correspond in the special case H = C (R) to functions that are less smooth, possibly
not even continuous, and possibly (as in case of the z ) not functions at all.
Having extended the bra-ket notation to allow rough vectors as labels in bras, the symmetry
property (20.13) is lost. To restore that, we simply extend the inner product to enforce
the validity of (20.13) by defining h|i := h|i if H and H. This can be
done consistently, and implies that now kets can be labeled by rough vectors, too. But now
the formula (20.15) makes trouble. What is h|i when both and are rough vectors?
In general, there is no solution; this product cannot be always defined. However, one can
of H and
consistently define it in certain cases, namely when is in some subspace H
by some limiting
the linear functional defined at first only on H can be extended to H
procedure. We wont list here the various possibilities; our usage of bras and kets will be
restricted to cases where at least one of the two labels in an inner product is smooth.
The main use of Diracs notation is for the specification of vectors and matrices in a particular representation of the algebra of quantities. We first review the notation in the case
where a countable orthonormal basis of smooth states is available. In this case there is a
countable set K of labels such that the basis consists of the kets |ki with k K, and
orthogonality implies that
hj|ki = jk ,
and the resolution of unity
X
k
|kihk| = 1 .
and
hk|x = xk
gives the components of x.
X
A=
|jiAjk hk|
jk
hj|A =
A|ki =
X
k
X
j
Ajk hk|
|jiAjk
423
and
hj|A|ki = Ajk
gives the matrix entries of A. Compared to the standard linear algebra notation there is
no gain.
The situation is different when K is a structured set, for example a set of pairs (k, s)
where k is a momentum label and s a spin label, or other such sets arising naturally in the
dynamical symmetry approach of Section 23.6. Then the index notation becomes somewhat
cumbersome to comprehend, and the more lengthy bra-ket notation is superior.
20.5
As we have seen in Section 20.3, every nice unitary representation of the oscillator algebra
contains a ground state b of norm 1, and hence the representation contains the vectors
(a )k b (k = 0, 1, 2, . . .). Their span defines a Euclidean vector space F+ whose closure F+
is a Hilbert space, called the single mode bosonic Fock space, or simply Fock space.
Clearly, F+ is closed under the action of L, hence we have a unitary representation of L
on F+ . It is not difficult to see that different choices of the ground state either define the
same Fock space (if the ground states differ only by a phase) or orthogonal Fock spaces.
Indeed, if F F+ is an invariant submodule, it needs to have a vector 0 , which necessarily
coincides with the ground state of F+ up to a complex number. Thus an arbitrary unitary
representation is a direct sum of Fock spaces. Thus the representations on a Fock space
are irreducible representations. We shall show in a moment that the unitary representation
on a Fock space is essentially unique. This is the content of the celebrated StoneVon
Neumann theorem, which actually is about the representation of the Heisenberg group.
Bosonic Fock spaces with more degrees of freedom are obtained by taking tensor products
of the Fock space with one degree of freedom, and describe systems of quantum oscillators.
As we shall see in Chapter 21, there is also a fermionic counterpart of Fock spaces, which are
related to so-called Clifford algebras. The single mode case describes a so-called qubit and
is simply the vector space C2 ; the general case is a tensor product of these, and describes
systems of qubits.
We now study the structure of F+ for a given ground state b of norm 1 in more detail.
The properties found will lead to a construction of a Hilbert space which actually contains
a representation of the Heisenberg algebra (which, so far, we simply had assumed).
20.5.1 Proposition. The vectors
|ki :=
satisfy the relations
1 kb
(a ) (k = 0, 1, 2, . . .)
k!
a |k 1i = k|ki,
a|ki = h
|k 1i,
h
k
hk|k i = kk .
k!
n|ki = k|ki,
(20.17)
424
Proof. The first relation is just definition. For the second observe that ab = 0 and
[a, (a )k ] = k(a )k1 . For the third, just combine n = h1 a a and the first and second
b ak and hk|k i = 0 if k 6= k since
relation. For the fourth relation we have hk| = k!1 ()
eigenvectors of a Hermitian operator corresponding to different eigenvalues are orthogonal.
So only the normalization needs to be checked:
hk|ki =
1
1
h
hk 1|aa |k 1i = 2 hk 1|h + h
n|k 1i = hk 1|k 1i .
2
k
k
k
X
k=0
k |ki ,
k C .
(a)k = h
k+1 ,
and = (
k |ki)
l |li =
P hk
k!
(n)k = kk ,
(20.18)
k k , hence
Xh
k
k!
k k .
(20.19)
Equations (20.18) and (20.19) are an equivalent description of the equations of Proposition
20.5.1.
We now define H as the closure of F+ . This makes F+ a dense subspace of H, and we
say that the operators a, a and n are densely defined in H (meaning that they are
defined on a dense subspace). Previously we have seen that if the canonical commutation
relations admit an irreducible representation, then it has to be of the form as described by
Proposition 20.5.1. But now we can say more:
The set H of vectors C with finite norm
v
u k
uX h
kk := t
|k |2
k!
k=0
is a Hilbert space with inner product (20.19), on which the definitions 20.18 give densely
defined operators a, a , n. The components of a grow significantly faster than those of
, so that a H only for in a proper subspace of H. This subspace is dense, since
it contains the dense subset of with only finitely many nonzero entries. Note that the
operators 1, a, a and n, and hence all elements of L are represented by infinite tridiagonal
matrices, where only matrix elements in which the indices differ by at most one are nonzero.
This is the representation of the quantum harmonic oscillator discovered by Heisenberg in
his groundbreaking paper [123].
425
It is now easy to check that the operators 1, a, a and n satisfy the canonical commutation
relations, that a is the Hermitian conjugate of a, and that n = a a. Thus we have a
representation of L. The representation is irreducible since acting repeatedly with a L
on the vector b with entries bk = k0 (the ground state) gives a basis of H. Combining
this with the uniqueness statement obtained before, we arrive at the following theorem of
Stone and Von Neumann (but essentially already obtained in [123]):
20.5.2 Theorem. The canonical commutation relations admit an (up to equivalence)
unique irreducible unitary representation on a Hilbert space such that the action of a,
a and n = a a is defined on a dense subspace and n is self-adjoint.
The theorem holds with a similar proof for arbitrary finite-dimensional Heisenberg algebras
coming from a nondegenerate alternating form. It fails spectacularly in infinite dimensions.
In this case there are uncountably many inequivalent representations; see, e.g., Barton
[27] for an (in spite of the title of the book) elementary discussion of these. Their existence
is one of the main stumbling blocks for extending quantum mechanics to quantum field
theory.
20.6
BargmannFock representation
We present an important but easy representation of the Heisenberg algebra h(n), which
will be useful to us when we study coherent states in Section 20.7. Consider the vector
space of complex polynomials in n variables C[z1 , . . . , zn ]. We then identify ak and ak with
the operators defined by1
(ak p)(z1 , . . . , zn ) := zk p(z1 , . . . , zn ) ,
and
p(z1 , . . . , zn ) .
zk
It is easy to check that this indeed defines a representation. We can even make a unitary
representation out of this. For that purpose we consider the vector space H of all entire
functions on Cn with finite norm with respect to the inner product
Z
f (z)g(z)ezz .
hf |gi =
(ak p)(z1 , . . . , zn ) :=
Cn
The space H with the above inner product is a Euclidean space; its closure is a Hilbert space,
the multi-dimensional version of the BargmannFock space described in Section 20.5.
The operators ak and ak are adjoints of each other. An orthogonal basis is given by the
monomials:
n
Y
k1
ln
kn l1
ki !ki ,li .
hz1 . . . zn |z1 . . . zn i =
i=1
pk
iqk
2
and ak =
pk
+iqk
.
2
426
From the discussion in Section 13.1 it follows that the quadratic expressions modulo the
linear expressions in the elements zi and zk form the Lie algebra sp(2n, C). Taking all
quadratic expressions (so not modding out by the linear polynomials) in the elements zi
and zk one obtains a central extension of isp(2n).
The above representation is irreducible (one sees rather quickly that starting with 1, acting
with pk gives all entire functions) and is called the BargmannFock representation. By the
StoneVon Neumann theorem, which says that there is only one irreducible representation
of the Heisenberg algebra, the BargmannFock representation is up to isomorphism the
only irreducible representation of the Heisenberg algebra.
We have seen in Section 13.1 that the quadratic expressions (modulo linear terms) in the
qi and the pk rotate the generators qi and pk into each other under the action of the Lie
product. In other words, the action of the quadratic expressions builds a representation of
sp(2n, C) inside the BargmannFock representation. That this happens is not so strange.
Let us consider the automorphism group of the Heisenberg algebra, consisting of all the
invertible maps h(n) h(n) preserving the Lie product. But from equation (13.1) we see
that the automorphism group contains the group Sp(2n, C). Now let us denote the above
given BargmannFock representation by U : h(n) Lin(H), then using Sp(2n, C) we get a
new representation of the Heisenberg algebra as follows. For each g Sp(2n, C) we consider
the representation
Ug : h(n) Lin(H) , Ug (x) = U(gx) .
Since Sp(2n, C) Aut(h(n)) the Ug are indeed representations. But the unitary irreducible
representation of h(n) is unique, up to isomorphism, and hence there must be a unitary
operator R(g) such that
Ug = R(g)UR(g)1 .
It is clear that the R(g) are determined up to a sign. Thus R(g)R(h) = R(gh) and
we say that the R(g) form a projective representation of the group Sp(2n, C). This
representation is called the metaplectic representation. The operators R(g) themselves
form a group, closely related to the metaplectic group Mp(2n, R), the universal covering
group of the Lie algebra sp(2n, R). The metaplectic group is a two-fold cover of Sp(2n, R),
hence has a center of order 2, while our group has the multiplicative group of the reals as
center. Factoring out the positive reals leaves the metaplectic group.
20.7
Coherent states were introduced in 1963 by Glauber [105], who recognized their importance in quantum optics; he received in 2005 the Nobel prize for his work in this direction.
But the notion of a coherent state (without the name) was already introduced by Erwin
Schrodinger [248] in 1926 when he was looking for solutions to the Schrodinger equation
that satisfy the Heisenberg uncertainty relation
pq
,
2
(20.20)
427
where x denotes the variance of a quantity x. Schrodinger was looking for states that were
as classical as possible, having equality pq = h2 . The coherent states, and only these
satisfy equality; they therefore build a connection between classical physics and quantum
physics that grew stronger as the notion of coherent states was extended to more general
situations.
To introduce Glaubers coherent states, we remind the reader that for the harmonic oscillator we constructed the Fock space H of = (k )k0 satisfying
X
h
k
k=0
k!
k k < .
(20.21)
One may regard either as a vector with infinitely many components, or as an infinite
sequence. Equivalently, in Diracs bra-ket notation, the k s are the complex coefficients
in
P
the expansion of with respect to an eigenbasis |ki of the number operator, = k |ki.
The inner product is given by
X
h
k
k .
=
k! k
k=0
(, z C)
X
zk
(a )k ,
|, zi =
k!
k=0
(20.22)
k0
h
k ||2
|z|2k
2
= ||2 eh|z| < .
k!
X
h
k z k z k
k=0
k!
= ehz z .
hn z n
.
n!
428
X
(hz)k
k=0
k!
k (z) ,
(20.23)
which defines the function (z) corresponding to . Conversely, given an analytic function
g
X (hz)k
gk ,
g(z) =
k!
k0
P
h2 |gk |2
with
< we assign to g the element g = (gk )k0 in H. We claim that
k0
k!
7 (z) is a map from H to the set of analytic functions. In order to prove the claim we
have to prove that the power series (20.23) converges everywhere. We calculate the radius
of convergence R
s
1
k!
R = lim sup q
= lim sup k k
k
k
h k
h
k
k
k
k!
since k satisfies (20.21). Hence the function (z) is analytic everywhere. The state is
uniquely described by the function (z) in the sense that (z) = 0 = 0, since
1 dk
= k .
(z)
k dz k
z=0
h
X
k=0
h dk
i
1
dk
(z)
.
(z)
dz k
z=0
k!||2h
k dz k
So we can use the powerful theorems of complex analysis to deal with the states in the
Hilbert space H. For the relations between complex analysis and coherent states, including
important generalizations to coherent states associated with other Lie groups, see Perelomov [216], Upmeier [274], Faraut & Koranyi [84].
Every element in H is a linear combination of coherent states, but the combination is in
general not unique. For the harmonic oscillator a set of finitely many coherent states |, zi
with different z is linearly independent, since suppose
|vi :=
n
X
i=1
|i , zi i = 0 ,
i 6= 0 ,
i ehwzi = 0 ,
i=1
for all w. But a finite set of exponential functions is linearly independent. Hence it follows
that a finite set of coherent states |, zi with different z is linearly independent. The set
of linear combinations of finitely many coherent states is dense in H. The coherent states
429
form a kind of a basis, but an overcomplete set. Such a set is called a frame. Frames
are widely used in wavelet analysis.
We now show that coherent states of unit norm have a basis-like property, expressed through
a so-called resolution of the identity. To simplify the notation we put h
= 1. Next we define
1
|zi = |e 2 |z| , zi .
The vectors |zi have a unit norm; hz|zi = 1. We calculate for an element |f i =
the following
X
fk
2
|zihz|f i =
e|z| zn z k |ni .
k!
fk |ki
k,n
n = e|z| f (z)
zn ,
where f (z) is defined as
f (z) =
X zn
n
n!
fn .
From
the above discussion we know that f (z) is analytic everywhere. Thus the vector
P
n |ni is of finite norm;
X |n |2
n
n!
2|z|2
|f (z)|
2 |z|
2n
n!
Hence |zihz|f i represents an element in H and we can integrate each component to get
Z
Z
1 2
1
1
2
|zihz|f id z =
|zif (z)e 2 |z| dz = |f i ,
(20.24)
C
C
where the integration measure is dz = d(Rez)d(Imz) and where we used
Z
2
zn z m e|z| dz = n!n,m .
(20.25)
In mathematics, such an expression is called a resolution of the identity. The fact that
the coherent states admit a resolution of the identity makes them useful. We now wish
to show that the expansion of |f i in coherent states is unique, thereby proving that the
coherent states make up a tight frame. We use (20.24) to compute the inner product of |f i
with a coherent state hw|
Z
1 1 |w|2
2
ewz |z| f (z)dz .
hw|f i = e 2
430
But f is an analytic function, so we first try f = z n . Using (20.25) we obtain the identity
Z
1
2
e|z| ewz z m dz = w m .
C
Hence we derive the more general identity for analytic functions
Z
1
2
e|z| ewz f (z)dz = f (w) ,
C
from which we obtain
Hence the expansion of f in coherent states is unique, since if f = 0, then all the hz|f i = 0
and the expansion vanishes identically. Note that the above discussion only works for
analytic functions f . If we admit a non-analytic f we get for example
Z
1 2
|zi
z n e 2 |z| dz = 0 ,
C
for all n > 0. There is a relation between coherent states and the Hilbert space HC of
analytic functions f : C C such that
Z
2
|f (z)|2 e|z| dz < ,
C
for which the (z n )n0 form a basis. We refer the interested reader to Glauber [105], Segal
[251], Bargmann [23, 24]. We just remark that if f HC and expand f as
f (z) =
X fn z n
n0
n!
then
Z
2 |z|2
|f (z)| e
dz =
X zn z m fn fm
2
e|z| dz
n!m!
C n,m0
X |fn |2
n0
n!
Hence f defines an element in the Fock space H. (We have assumed one can change the
order of integration and summation, but that can be made rigorous, see Bargmann [23]
for a readable explanation.) The above discussion on the uniqueness of the expansion of
an analytic function in terms of coherent states was taken from Glauber [105], which is a
very readable account on coherent states and the physics and mathematics behind them.
We now reinsert the constant h
to see some of the behavior of the coherent states. Remember
the formula for q
1
q = 2m 2 (a + a ) .
We see easily that
a|, zi = h
z|, zi.
(20.26)
431
X
k,l
||2
X |z|2k
z i z j k l+1
a (a ) = z||2
,
k!l!
k!
k=0
ei h t |, zi = |, zeit i .
The coherent states thus swing from left to right in the potential with a frequency and
with amplitude |z|/2m. In order to see the action of the Heisenberg group on the coherent
state, we calculate
(en )k = ek k en |, zi = |, e zi .
From a|, zi = h
z|, zi it follows that
ea |, zi = ehz |, zi = |ehz , zi .
Further, we have
a
X k (a )k z l (a )l
X + zk
|, zi =
=
(a )k = |, z + i ,
k!
l!
k!
k,l
k
432
ea |, zi = |, z + i , e |, zi = |e , zi .
(20.27)
(20.28)
20.8
As indicated in Section 5.6 for the case of a beam of monochromatic light, the modes
of the electromagnetic field play the role of the annihilation and creation operator of the
quantum field. Classically the observables are functions on phase space, hence specified to
a certain observable we have an operator on the configuration space. Namely a physical
configuration is specified by giving the values of the observables, and to any observable we
assign the operator that reads off the value of that observable. Thus if a configuration of a
laser beam, which we suggestively denote |Ei, is specified by an electric field E(x, y, z, t),
then the operator E(x, y, z, t) reads off the values of the components of the electric field at
the space-time point (x, y, z, t):
E(x, y, z, t)|Ei = E(x, y, z, t)|Ei .
In the transition from classical mechanics to quantum mechanics the role of the operator
E(x, y, z, t) is played by the operators positive frequency part of the electromagnetic field.
The above equation then tells us that |Ei is an eigenvalue of the annihilation operator.
In a classical system there are many photons and the number of photons need not be
constant, due to absorption and due to the constant photon production of the laser. Hence,
from a micromechanical point of view, the quantum number n is no longer a good quantum
number to assign to a system resembling a laser. However we know that the electric field is
nearly perfectly constant, and if the beam goes in one direction we can take the expression
E(x, y, z, t) = a eik t u(x) ,
for the electric field, where a is an annihilator operator. Since the classical state of the
laser has a well-defined value of the electric field, the quantum state |Ei that mimics the
433
434
Chapter 21
Spin and fermions
This chapter discusses the quantum mechnaics of spinning systems, where the only relevant
degrees of freedom correspond to rotation.
The quantum version of the classical rotator discussed in Section 12.4 can be obtained by
looking for canonical anticommutation relations, which naturally produce the Lie algebra
of a spinning top. As for oscillators, the canonical anticommutation relations have a unique
irreducible unitary representation, which corresponds to a spin 1/2 representation of the
rotation group. The multimode version gives rise to fermionic Fock spaces; in contrast
to the bosonic case, these are finite-dimensional when the number of modes is finite. In
particular, the single mode fermionic Fock space is 2-dimensional.
Many constructions for bosons and fermions only differ in the signs of certain terms, such
as commutators versus anticommutators. For example, quadratic expressions in bosonic or
fermionic Fock spaces form Lie algebras, which give natural representations of the universal
covering groups of the Lie algebras so(n) in the fermionic case and sp(2n, R) in the bosonic
case, the so-called spin groups and metaplectic groups, respectively. In fact, the analogies
apart from sign lead to a common generalization of bosonic and fermionic objects in form
of super Lie algebras, which, however are outside the scope of the book.
Apart from the Fock representation, the rotation group has a unique irreducible unitary
representation of each finite dimension. We derive these spinor representations by restriction of corresponding nonunitary representations of the general linear group GL(2, C) on
homogeneous polynomials in two variables, and find corresponding spin coherent states.
21.1
As we have seen in Section 12.4 the affine functions in the Poisson algebra of the spinning
top make up the Lie algebra u(2). One can thus expect that the quantization of the
spinning top boils down to representation theory of su(2) and u(2) and indeed it does. In
the following sections the representations of su(2) and u(2) play an important role. See for
435
436
example Humphreys [131] or Jacobsen [136] for a comparison of the methods used.
In this section, however, we look at a particular representation, the Fock representation of
u(2). It behaves in many respects like the Fock representation of the Heisenberg algebra,
and gives the right generalization to the case of many fermionic modes, and in particular
to quantum field theory.
In fact, there are many analogies between bosonic and fermionic systems many formulas
look alike, apart for the occurrence of additional minus signs in certain places.1 Although
very similar in many respects, there is a fundamental difference with basic representation
theory of bosons and fermions. While bosons are characterized by canonical commutation
relations, fermions are quantized using canonical anticommutation relations. We shall see
in a moment that this naturally reproduces the Lie algebra u(2) of a spinning top, and
just like for canonical commutation relations uniquely fixes the representation.
We define a signed commutator
[f, g] := f g gf ;
the upper sign applies to bosonic quantities f, g, and reproduces the ordinary commutator,
[f, g]+ = [f, g], while the lower sign applies to fermionic quantities f, g, and reproduces
the anticommutator
[f, g] = f g + gf .
Often, the anticommutator is written instead as {f, g}, which looks like a Poisson bracket,
so that we dont recommend this notation. In the theory of Lie superalgebras, the sign at
the commutator is not written at all, since the context already determines the nature of
the arguments, and hence implies the commutator sign.
To understand how anticommutators give rise to the u(2) Lie algebra governing a spinning
top, we impose the canonical anticommutation relations on operators a and a = (a)
in some Hilbert space
[a, a ] = h
,
[a, a] = 0 ,
[a , a ] = 0 .
In particular we have a2 = (a )2 = 0. The algebra E spanned by 1, a and a is fourdimensional since these generators together with aa a a already span E. Hence E is
isomorphic to the algebra of complex 2 2-matrices; an explicit isomorphism is obtained
by identifying a and a with the matrices
0 1
0 0
, and
,
0 0
1 0
respectively and aa a a with 3 . The Lie -algebra L described by 1, a, a and [a, a ] is
thus u(2). Thus the anticommutation relation automatically produce the right Lie algebra
for a spinning rigid body, and u(2) is the fermionic analogue of the oscillator algebra os(1).
1
To see how this leads to the vast mathematical area of superalgebras and supergeometry we refer the
interested reader to for example Varadarajan [280], Scheunert [246], Tuynman [272], Deligne et al.
[82] and references.
437
We can get the same result more formally on the quantum level in a way which is completely
analogous to the bosonic case, by considering an arbitrary unitary representations of the
canonical anticommutation relations, i.e., for linear operators a and a satisfying these
relations. We introduce the operator
n := h
1 a a ,
and we let V for some Verma module V ; as in Section 20.3, this means n = . We
obtain
na = h
1 a aa = 0 ,
and hence a V0 . Further,
na = h
1 a aa = a ,
and therefore a V1 . To compute we proceed
h
2 n2 = a aa a = h
a a = h
2 n
and hence 2 = 0, from which we deduce = 0, 1. Thus we have arrived at the remarkable conclusion that the canonical anticommutation relations lead to two-dimensional
Hilbert spaces.
We take a basis vector |0i in V0 and we define |1i = a |0i. The Lie -algebra L acts on the
space spanned by |0i and |1i, which is isomorphic to C2 . This representation is called the
Pauli representation. In the above we have thus shown that the Pauli representation is
the unique irreducible unitary representation of the canonical anticommutation relations.
This is the fermionic analogue of the Stonevon Neumann theorem.
In analogy with the boson case, the representation space C2 is called the single mode
fermion Fock space.
We shall see in Section 2.11 that the irreducible representations of su(2) are in one-toone correspondence with the (finite) dimension of the representation space. For historical
reasons, this dimension is usually denoted by 2j + 1, and j is called the spin of the representation. Clearly, the spin j is half a nonnegative integer. In particular, the single mode
fermionic Fock space has dimension 2 and hence spin j = 1/2.
In general, cf. Section 3.14, elementary particles are associated with an irreducible representation of the Poincare algebra (or in the nonrelativistic limit the Galileo algebra), which
is characterized by mass and spin. The spin assigment in these representations is such
that, in the massive case, the restriction to a center of mass frame at a fixed time gives an
irreducible representation of the Lie algebra so(3) = su(2) of the same spin. (The massless
case is not related to u(2).)
Elementary particles of integral spin (bosons) are represented by a bosonic Fock space,
those of nonintegral spin (fermions) by a fermionic Fock space. This fact is a consequence
of the so-called spin-statistics theorem which holds under certain causality assumptions
related to Poincare invariance of a field theory. Fermionic particles obey the Pauli exclusion
principle (Pauli [212], Schwinger [250], Streater [264]).
438
21.2
Suppose that the algebra of linear operators on some vector space H contains, for some
linearly ordered set2 M of labels, quantities aj and aj (j M) satisfying the relations
[aj , ak ] = [aj , ak ] = 0,
[aj , ak ] = jk ,
(j, k M).
(21.1)
For the upper sign (the bosonic case), these are just the canonical commutation relations
defining a Heisenberg algebra corresponding to harmonic oscillators with finitely many
degrees of freedom. For the lower sign (the fermionic case), the relations (21.1) generalize
the canonical anticommutation relations which we have met for the spinning top; we thus
expect to get an analogue of the spinning top with n degrees of freedom.
In this section, we consider the fermionic case. We first assume that we have a unitary
faithful representation and deduce enough properties that determine the representation
uniquely. Then we use the properties deduced to construct the representation.
The canonical anticommutation relations imply that
aj ak = ak aj ,
aj ak = ak aj ,
aj ak = jk ak aj ,
(21.2)
and again we have in particular a2j = (aj )2 = 0. To find the unitary representations of
physical interest, we assume in analogy to the bosonic case of the canonical commutation
relations the existence of a nonzero vector 0 , the ground state, such that
aj 0 = 0 for all j M .
(21.3)
We next define for any finite set J = {j1 , . . . , jl } of distinct labels j1 < . . . < jl from M the
vectors
|Ji := |j1 . . . jl i := aj1 ajl 0 |i = 0 .
(21.4)
aj |Ji =
aj |Ji =
(21.5)
j (J)|J {j}i if j
/ J,
0
if j 6 J,
where the sign j (J) is defined to be +1 if there is an even number of indices in J that are
smaller than j and 1 otherwise.
21.2.1 Proposition. We have the following identities:
j (J \ {j}) = j (J) for j J ,
(21.6)
(21.7)
(21.8)
(21.9)
(21.10)
A set M is a linearly ordered if there is a binary relation such that for all m, n, p M : (1) m m,
(2) m n and n m then m = n, (3) m n and n p then m p, (4) either m n or n m.
439
Proof. This is a straightforward consequence of the definition, taking into account when
j (J) and k (J) change sign if an index is removed or added to J.
We define F (M) to be the vector space spanned by the |Ji. By definition, F (M) consists
of the finite linear sums of the elements |Ji.
21.2.2 Proposition.
(i) The vectors |Ji are linearly independent.
(ii) The vector space F (M) is an irreducible representation space for the canonical anticommutation relations.
P
Proof. (i) Suppose that we have J cJ |Ji = 0 with finitely many nonzero coefficients. and
let J = {j1 , . . . , jl } (j1 < . . . < jl ) be a set of maximal size among the sets with cJ 6= 0. In
view of (21.5), multiplication by ajl aj1 leaves as only nonzero term cJ |i = 0. Since
the ground state 0 is nonzero, we conclude that cJ = 0, contradiction. Therefore, the
vectors (21.4) are linearly independent and form a basis of F (M).
(ii) Equations (21.5) imply that aj and aj map F (M) into itself. Irreducibility of the
representations follows since the same argument used in (i) implies that any invariant
subspace of F (M) containing a nonzero element contains the ground state, hence all |Ji,
and hence all elements of F (M).
Since the |Ji form a basis of F (M), we may identify a vector F (M) with the fermion
wave function defined on the finite subsets of M whose value at J M is the coefficient
(J) in the basis expansion
X
=
(J)|Ji ,
JM
where the summation is over all finite subsets J of M. Note that only finitely many
coefficients (J) are nonzero.
21.2.3 Proposition. Under the assumption (21.3), the anticommutation relations (21.2)
imply that the linear operators
X
X
a(u) :=
uj aj , a (u) :=
uj aj ,
jM
jM
defined for all vectors u indexed by M which have only finitely many nonzero entries, act
on fermion wave functions according to
X
X
(a(u))(J) =
j (J)uj (J {j}) , (a (u))(J) =
j (J)uj (J \ {j}) . (21.11)
jJ
j J
/
Proof. We have
X
X
X
(J {j})j (J {j})|Ji,
aj =
(J)aj |Ji =
(J)j (J)|J \ {j}i =
J
Jj
J6j
440
0
if j J,
j (J)(J {j}) if j 6 J.
Jj
j (J)(J \ {j}) if j J,
0
if j
/J.
To show that unitary representations with the desired conjugation and anticommutation
relations actually exist, we start with the space F (M) of complex valued functions
defined on finite subsets of an arbitrary set M such that only finitely many values (J) are
nonzero. Then (21.12) defines an inner product on F (M), and the completion F (M) of
F (M) in the associated norm is a Hilbert space, called the fermion Fock space over M.
For a concise formulation of the result, we use a slightly more abstract notation. We
introduce the Euclidean space H of vectors indexed by M with finite support, equipped
with the bilinear form
X
uk vk ,
uT v :=
kM
and write F H := F (M). In applications to quantum field theory, H becomes the infinitedimensional single-particle Hilbert space, and the sums become integrals over momentum
vectors, but the formulas below remain valid with an appropriate interpretation.
21.2.4 Theorem. The relations (21.11) define two linear mappings a, a from H to the
algebra Lin(F , H), and we have
(a(u)) = a (u) ,
a(u)a(v) + a(v)a(u) = 0 ,
(21.13)
(21.14)
(21.15)
In particular, taking for u, v vectors with a single nonzero entry, we find the canonical
anticommutation relations (21.2).
441
jJ
J j J
/
X
X
J
j J
/
X X
k6J j6J{k}
= (a(v)a(u))(J) .
This proves the first formula in (21.14), and the second formula follows with (21.13). Finally,
to prove (21.15), we note that
X X
(a(u)a (v))(J) =
j (J)k (J {j})uj vk (J {j} \ {k}) ,
(21.16)
j6J kJ{j}
and
(a (v)a(u))(J) =
X X
kJ j J\{k}
/
(21.17)
The sets of pairs (j, k) over which the summation in the two equations is taken, contain the
pairs with j 6 J, k J, for which the signs are opposite by (21.10). Thus the corresponding
terms in the sums cancel when the two equations are added. The remaining terms consist
in (21.16) of the terms with k = j
/ J and in (21.17) of the terms with j = k J; for
the corresponding terms in the sums, all signs are +1. Therefore, adding the two equations
results in
X
X
(a(u)a (v))(J) + (a (v)a(u))(J) =
uj vj (J) +
uk vk (J) = uT v(J) .
j6J
kJ
21.3
442
L
For any vector space H we consider the tensor algebra
H = C H (H H) . . ., which
is an associative algebra with unity where multiplication is the tensor product. We define
the ideal J to be the ideal generated by the elements v w w v for v, w H. The
quotients
_
^
H/J =
H , H/J+ =
H,
that we obtain by dividing out by the ideals JWare equipped with a natural algebra
V structure, since we divided out by ideals. We call H the symmetric algebra and H the
exterior algebra. The product in the exterior algebra is written as v w in place of vw,
and is then called the exterior product or wedge product. The product satisfies the
anticommutative law
v w = w v
V
for any two vectors v, w H, but not for general elements of H; u v w = w u v.
The symmetric algebra leads to a representation of the canonical commutation relations
(see Section 20.6); the exterior algebra to one of the canonical anticommutation relations.
We concentrate on the latter, and restrict to finite-dimensional vector spaces. If H is a vector
space of finite dimension n we may choose a basis e1 , . V
. . , en . Using the anticommutation
relations one easily verifies that the exterior algebra H has a basis consisting of the
elements
ei ej ek (i < j < k) ; . . . ; e1 e2 en ,
V
making a total of 2n basis vectors. Thus the dimension of H is 2n . We now introduce
operators ak given by
^
ak () = ek for
H;
1;
ei ;
ei ej (i < j) ;
(21.18)
First we assume that we cannot write in the form ek and neither in the form el .
In this case we have
ak al () + al ak () = al (ek ) = kl .
If we can write as ek but not as el then we have k 6= l and
ak al () + al ak () = al ak (ek ) = 0 .
If k = l and = ek we have
ak ak () + ak ak (ek ) = ak ak (ek ) = .
443
21.4
In analogy to the bosonic case treated in Section 13.1, we now show that quadratic expressions in anticommuting operators a and a make up well-known finite-dimensional Lie
algebras, in this case the orthogonal algebras so(2n) and so(2n + 1).
The method of derivation is different however. It works for bosons and fermions simultaneously, with differences only in certain signs, and gives in the bosonic case a construction
of the metaplectic representation of sp(2n, R) and the central extension of isp(2n, R). In
the sequel, the upper signs apply for the bosonic case, and the lower signs apply for the
fermionic case. We use coordinate-independent notation, so that the method can be taken
over almost literally to the infinite-dimensional case.
We assume that we have a linear mapping a : H E that assigns to each from some
vector space H an element a() in an associative algebra E with identity 1 such that
[a(), a()] = a()a() a()a() C .
(21.19)
For example, with the standard generators ak , ak in a bosonic or fermionic Fock space, we
can take
X
a() =
(k ak + k ak ).
k
For the bosonic case, (21.19) means that the Lie algebra that is obtained by equipping E
with the commutator as Lie product, contains a central extension of a commutative algebra.
The ground state on E is a positive linear functional hi that satisfies h1i = 1. Linearity
implies that there is a linear operator G satisfying
ha()a()i = T G ;
(21.19) then implies that
a()a() a()a() = T J .
where
J := G GT ,
J T = J .
444
1
a()a() T G .
2
and extend them by linearity (using a basis it is easy to see that the extension is unique
and well-defined). We thus have hN(f )i = 0 for all quadratic expressions f . We also have
a()a() = 2N( T ) + tr(GT T ) .
Remember that in Section 13.1 we considered the symmetric
combination 2Eij = pi qj +qj pi .
P
T
Motivated by this we restrict our attention to f = i i i such that f T = f . For a single
term f = T = f T , we find
2[N(f ), a()] =
=
=
=
a()a()a() a()a()a()
a()a()a() + a() T J a()a()a() a() T J
a( T J) a(T J T )
a(f J f T J T ) ,
so that by linearity,
[N(f ), a()] = N(f )a() a()N(f ) = a(f J)
for f = f T . Similarly for g = T = g T we find
2[N(f ), N(g)] =
=
=
=
Writing
(s, , f ) = N(f ) + a() + s1,
3
445
1
2
tr((G + GT )f Jf ) + T J , f J f J, f Jf f Jf
[(s, , f ), (s, , f )] = S(f, f , , ), f J f J, f Jf f Jf + 2(T T ) ,
where
S(f, f , , ) = 21 tr (G GT )f Jf + T J .
Fermionic case. To exploit these formulas, we first focus on the fermionic case, and
assume that J is nondegenerate; without loss of generality, we may choose J to be the
2n 2n identity matrix, J = 1.
We consider the Lie algebra L defined by the quadratic elements modulo the constant term;
that is, we factor out the center. We write (, f ) for the equivalence class of (s, , f ). The
quadratic expressions f are antisymmetric, f T = f , and thus correspond to so(2n, C).
Let us consider the map u : L sl(2n + 1, C) defined by
f
21/2
u : (, f ) 7
.
21/2 T
0
The map u is injective and preserves the Lie product and thus is an isomorphism onto its
image. The image under u of L is the Lie algebra so(2n + 1, C). It can easily be seen that
matrices in the image satisfy
T
f
f
J
0
J + J
= 0 , J =
,
T 0
T 0
0 1
since, for fermions, J is the 2n 2n identity matrix. Restricting this basis to the real
numbers we obtain the real form so(2n, 1). Summarizing, we have thus established that
the quadratic elements (with center) form a central extension of so(2n + 1, C). The purely
quadratic expressions (no linear and constant terms) form the Lie algebra so(2n, C). Note
that the group O(2n, C) is the automorphism group of the algebra defined by the relation
bk bl + bl bk = kl .
(21.20)
Going to the real basis ck = bk + bk , ck+n = i(bk bk ), we see that the real Lie group O(2n)
preserves the relations (21.20).
For a finite number of generators, the canonical anticommutation relations have a unique
faithful unitary representation. Therefore as in Section 20.6 we can say something interesting about the
P automorphism group of the algebra defined by (21.20). Performing a rotation
bk 7 bk := gkl bl with g = (gkl ) an element of SO(2n) on the generators bk we get another
representation of the canonical anticommutation relation, but since this representation is
unique, there exists a unitary transformation U(g) that relates the obtained representation
with the original representation: bk = U(g)bk U(g)1 , where we simply wrote bk for the
representation of bk . Again, U(g) is not unique for a given g, since U(g) also does the job.
446
In this way we get a double cover of the group SO(2n), called the spin group Spin(2n),
just as in Section 20.6 we obtained the metaplectic cover.
Bosonic case. For the bosonic case we may proceed in an analogous way. Again, we
assume that J is nondegenerate; this time, the normal form can be taken without loss of
generality as an antisymmetric 2n 2n-matrix J that squares to 1.
We again form the Lie algebra L of inhomogeneous quadratic expressions and factor out
the center. We then apply the map to the equivalence classes (, f )
Jf
u : (, f ) 7
.
0 0
It is clear that
(Jf )T J + J(Jf ) = f T + f = 0 ,
so that the map u is an isomorphism from L to isp(2n). We thus see that the inhomogeneous
quadratic quantities form a central extension of the Lie algebra isp(2n).
Chapter 22
Highest weight representations
This chapter discusses highest weight representations, providing tools for classifying many
irreducible representations of interest. We extend the ladder technique used in Section 20.3
for determining the unitary representations of the oscillator algebra to some other small
Lie algebras of interest, and indicate how the ideas generalize further.
The basic ingredient is a triangular decomposition, which exists for all finite-dimensional
semisimple Lie algebras, but also in other cases of interest such as the oscillator algebra,
the Heisenberg algebra with the harmonic oscillator Hamiltonian adjoined.
We look in detail at 4-dimensional Lie algebras with a nontrivial triangular decomposition
(among them the oscillator algebra and so(3)), which behave almost like the oscillator
algebra. As a result, the analysis leading to Fock spaces generalizes without problems, and
we are able to classify all irreducible unitary representations of the rotation group. Various
related material concerning SO(3) and its universal covering group SU(2) is also included.
22.1
Triangular decompositions
Note that the present concept of a triangular decomposition is less demanding and hence more general
than in the treatment by Moody & Pianzola [193]. Their additional restrictions allow them to extend
much of the finite-dimensional semisimple theory outlined below to the infinite-dimensional case.
447
448
L0 = C 1 + C n ,
L+ = C a .
A triangulated Lie algebra is a Lie -algebra with a distinguished triangular decomposition. We call the number rk L := dim L0 /Z(L) the rank, and deg L := dim L the degree
of the triangulated Lie algebra L. The elements of the dual space2 L0 are called weights.
A highest weight representation is a representation J of L on a vector space V with a
distinguished element 1, called the ground state3 , such that
(HW1) J()1 = 0 for all L , and
(HW2) J()1 C for all L0 .
The elements of L thus behave like annihilation operators. The defining properties imply
that
w() := J()1 ,
for L0 ,
defines a weight w L0 , called the highest weight of the representation. A highest weight
representation is irreducible if and only if the elements a1 . . . ak 1 with a1 , . . . , ak L span
a dense subspace of V. In an irreducible highest weight representation with highest weight
w, all Casimir elements C of L have a fixed value C(w) C.
The spectrum of L is the set (L) of weights w for which a unitary group representation
exists, whose associated infinitesimal representation is a highest weight representation of L
with highest weight w. The spectrum of L determines the possible spectra of each Casimir
element C in arbitrary unitary representations of the universal covering group of L, since
the possible eigenvalues are precisely the possible C(w) where w ranges over the spectrum
of L.
Note that a weight w belongs to the spectrum of L iff there is a unitary (cf. Definition
13.2.1) highest weight representation of L with highest weight w. In this case, there is a
Euclidean inner product on V, and without loss of generality, the ground state 1 may be
assumed to be normalized.
2
Since in the context of Lie -algebras, the notation V for the dual of V is ambiguous, we use in this
section a prime to indicate the dual.
3
In a quantum field theory context, the ground state is referred to as the vacuum.
449
The semisimple case. There are many examples of triangulated Lie algebras, related to
finite-dimensional semisimple Lie algebras (see the outline below) and to important classes
of infinite-dimensional Lie algebras.
We mention without proof (which can be found in many places, e.g., Fuchs & Schweigert
[95], Fulton & Harris [96], Humphreys [131], Jacobsen [136], Knapp [154], Kirillov
[151]) a number of facts about finite-dimensional semisimple Lie algebras.
All finite-dimensional semisimple real Lie algebras have a triangular decomposition, which
is unique up to automorphisms. In this case, L0 is a Cartan subalgebra (a maximal abelian
subalgebra generated by diagonal matrices in the adjoint representation, for some choice of
basis), which is unique up to conjugation, and the Lie algebra L decomposes as
M
L = L0
L ,
where L0 is the set of roots. The roots are nonzero elements of the dual of the Cartan
subalgebra such that the Cartan subalgebra acts diagonally on L :
hx = (h)x ,
h L0 , x L .
and finds (using further properties of the roots) that the semisimple Lie algebra is a triangulated Lie algebra.
22.1.2 Example. Take L = sl(n, C), the Lie algebra of n n matrices with trace zero.
Let us we write Eij for the matrix that is 1 on the (i, j)-entry and zero everywhere else.
Then the diagonal matrices that have trace zero make up the Cartan subalgebra, which is
thus spanned by the matrices Eii Ei+1,i+1 for 1 i n 1 so that the rank is n 1. We
have for h = diag(h1 , . . . , hn ) L0 in the Cartan subalgebra and for Eij with i 6= j
hEij = (hi hj )Eij .
Hence the roots are of the form i j where i reads off the ith diagonal entry of an
element of the Cartan subalgebra. We can choose a root i j to be positive if i < j.
Then L+ are the upper triangular matrices, and L the lower triangular matrices. The
positive root generators are Eij with i < j.
Associated with each semisimple Lie algebra is a weight lattice, which is a discrete additive
subgroup of L0 and whose elements are called integral weights. Additionally, there is a
450
distinguished subset of the weight lattice, which is closed under addition and whose elements
are called dominant integral weights. In terms of these:
(i) For each weight w, there is a Lie representation with w as highest weight.
(ii) A highest weight representation is finite-dimensional if and only if the highest weight
is dominant and integral.
(iii) For compact finite-dimensional Lie algebras, that is finite-dimensional Lie algebras with
a negative definite CartanKilling form (these are automatically semisimple, see Lemma
13.5.1), a highest weight representation is unitary if and only if it is finite-dimensional. The
inner product is then uniquely determined by the requirement that the ground state 1 is
normalized.
(iv) The Lie algebra induces a unitary representation of the universal covering group G if
and only if w is a dominant integral weight. Thus the spectrum of L consists of all dominant
integral weights of L.
In the context of an integrable classical theory associated with L, (iv) is equivalent to the
BohrSommerfeld quantization condition. (This folklore result is never stated in a
precise form, but see, e.g., Voros [281], Kochetov [155], and Gadiyar [97].)
22.2
We have seen that the oscillator algebra os(1) has a triangular decomposition of rank and
degree 1. A general triangulated Lie -algebra of rank and degree 1 with center C must be
the direct sum of the algebras
L = Ca ,
L+ = Ca ,
L0 = C + Ch ,
a h = ia .
For the Lie product of a and a we introduce complex numbers u and v and write
aa = i(uh + v) ,
but noting that (aa ) = aa we see that u, v R. It is easy to check that for all
u, v R the Jacobi identities are fulfilled and hence for all real numbers u, v we have a Lie
-algebra.
For the two-parameter family of Lie -algebras just defined there are essentially four different
cases;
451
hs = is ,
rs = a ,
which defines case 4 of the list above: so(3). However, the map from so(2, 1) to so(3) does
not preserve the -operation, since r = (ia) = ia 6= s. That means that so(2, 1) and
so(3) are not isomorphic as Lie -algebras.
Among the triangulated Lie algebras of rank and degree 1 listed above, the most interesting
cases for both classical and quantum mechanics are os(1) and so(3) C. As we have seen,
the oscillator algebra os(1) is related to the harmonic oscillator. The algebra so(3) C
involves infinitesimal ordinary rotations and arises when dealing with the spinning top, as
explained in Chapter 20. The algebra so(2, 1) C is less prominent in classical mechanics
although it arises in the analysis of the celestial 2-body problem. The algebra so(2, 1) C
has important applications to exactly solvable problems in quantum mechanics, and even
appears in so-called gauged supergravity theories.
22.3
We now discuss the unitary representations of the Lie groups SU(2) and SO(3). The
method presented below is often encountered in quantum physics textbooks. In Section
22.4 we discuss the highest weight representations of triangulated Lie algebras of rank and
degree 1, which shows a great similarity with the discussion here.
Since the group SU(2) is compact it has an invariant Haar measure d(g). Therefore
we can integrate over the group in an invariant way; invariance of the Haar measure means
452
R
N
X
1
1
+ n = (N + 1) + N(N + 1) = (N + 1)(2 + N) .
2
2
n=0
453
22.4
For the quantum theory one considers the unitary highest weight representations. We
investigate the unitary highest weight representations for the triangulated Lie algebras of
rank and degree 1 listed in Section 22.2. We thus look for a realization of operators a, a , h
and 1 such that 1 acts as the identity, h acts diagonally and is Hermitian, a is the adjoint
of a and the following relations hold (see (11.21) and Definition 13.2.1):
[a, n] = h
a ,
[a , n] = ha ,
[a, a ] = h
(un + v) .
n|0i = |0i .
By acting with a on |0i we obtain the other vectors in the representation. We define
|ki =
(ha )k
|0i ,
k!
so that
a |k 1i = h
k|ki .
It follows that
n|ki = h
(k + )|ki .
We have a|ki = ck |k 1i and we want to determine ck . Since aa = [a, a ] + a a we find
h
kck |k 1i = aa |k 1i = h
(uh + v) + (k 1)ck1 |k 1i ,
from which it follows
kck (k 1)ck1 = h
(uk + u + v) ,
454
which is solved by
so that
The vectors |ki are orthogonal, as in the case of the harmonic oscillator. So we suppose
hj|ki = Nk jk , and calculate hj|a |ki in two ways:
hj|a |ki = (k + 1)hj|k + 1i,
hj|a |ki = (v +
hu + 21 h
u(k + 1))hj 1|ki.
Choosing k = j 1 we find
jhNj = v + uh
+ 12 (j + 1)uh Nj1 .
Chapter 23
Spectroscopy and spectra
This final chapter applies the Lie theoretic structure to the analysis of quantum spectra.
After a short history of some aspects of spectroscopy, we look at the spectrum of bound
systems of particles. We show how to obtain from a measured spectrum the spectrum of
the associated Hamiltonian, and discuss qualitative results on vibrations (giving discrete
spectra) and chemical reactions (giving continuous spectra) that come from the consideration of simple systems and the consideration of approximate symmetries. The latter are
shown to result in a clustering of spectral values.
The structure of the clusters is determined by how the irreducible representations of a
dynamical Lie algebra split when the algebra is reduced to a subalgebra of generating
symmetries. The clustering can also occur in a hierarchical fashion with fine splitting and
hyperfine splitting, corresponding to a chain of subgroups. As an example, we discuss the
spectrum of the hydrogen atom.
23.1
In this chapter we show some features of spectra and spectroscopy. In the preceding chapters
we discussed properties of systems. The Hamiltonian of a system has a spectrum consisting
of the eigenvalues, but in practice we dont see this spectrum, but the energy differences.
One perturbs the system by shining light on it for example and then observes some response.
The responses give rise to the observed spectrum, the study of which is spectroscopy.
To study the structure of molecules and atoms, we often rely on destructive methods.
The destructive nature of the experiments in chemistry was taken as a primitive distinction
between chemistry and physics. Nowadays the situation is different. In high-energy physics,
one also shoots particles at each other such that the original particles are destroyed and
energy is converted into the creation of other particles. On the other side, in chemistry
new laser-techniques are used where molecules are kept intact, and information about the
structure of the molecular bonds is obtained.
455
456
With spectroscopy one can study properties of materials and mixtures without destructing
the sample. There are crudely speaking two kinds of spectra, relying on different experimental methods. An emission spectrum is obtained by putting a system in a state of high
energy. The system then falls back to a state with lower energy, and the energy difference is
emitted in the form of light. Of course, in order to emit light, the system needs to interact
with light. The kind of interaction then dictates which transitions are possible and hence
which frequencies are emitted. For the absorption spectrum one more or less does the converse. One puts a system into a beam of (nearly) white light. The system then absorbs
light and re-emits it again, but then in all directions.
In the 19th century Kirchhoff used an invention of the German chemist Robert Bunsen to
heat up elements in a flame to study the emitted light. He passed light through a prism
to study the intensity of light at different wavelengths. It turned out that the emitted
spectrum of an element had quite clearly defined lines at certain wavelengths. In 1859
Kirchhoff pointed out that all the elements that he had been studying had a different
emission spectrum. Hence disentangling the lines of an emission spectrum can help in
finding the components an unknown mixture is made of. Figure 23.1 gives as example an
emission and an absorption spectrum of Helium.
Figure 23.1: The emission (upper) and absorption (lower) spectrum of Helium.
Already much earlier, Isaac Newton had used in 1670-1672 a prism to study the decomposition of white light into a spectrum of different colors. In 1814 Joseph von Fraunhofer
invented the spectroscope and identified 574 dark lines in the light of the sun. In fact,
the Fraunhofer experiment can already be done with primitive equipment. On a sunny,
cloudless day one sits in a dark room with one little hole through which the sun shines. In
the beam of sunlight one places a prism and lets the light after the prism fall onto a white
piece of paper. The observed spectrum can be seen to display dark lines; in Figure 23.1
the lines corresponding to helium are displayed. The Fraunhofer lines are a manifestation
of the absorption spectrum. It was Kirchhoff who later explained the origin; light from
the sun has to pass the atmosphere of the sun. In the atmosphere the elements that are
present absorb certain parts of sunlight, at well-defined frequencies and re-emit it later, but
then in all directions. Therefore the sunlight going in the forward direction that is, away
from the core of the sun has lost intensity at certain well-defined frequencies. In this
way, Kirchhoff showed that the atmosphere of the sun contained among others hydrogen
457
and sodium. The reason why the sunlight is almost white before entering the atmosphere
of the sun we will not explain. When the light of the sun reaches the earth it is already so
diluted that the elements in the earths atmosphere give almost unobservable absorption
lines. Therefore, the dark lines in the spectrum of the sun are due to the suns atmosphere
and not the earths atmosphere.
In 1868, the French astronomer Pierre-Jules-Cesar Janssen observed a line in the spectrum
of the sun that did not match any element known by then. The reason he observed it
and not Fraunhofer was because Janssen used the better observing circumstances that a
solar eclipse offers. Normally, the sun is too bright, but when the moon blocks the solar
disc, one sees solely the atmosphere of the sun. The astronomer Joseph Norman Lockyear
concluded that the new line must represent a new element. They tossed the name helium,
from the Greek word helios, which means sun. It was not until 1895 that the physicist
John William Strutt, Lord Rayleigh or in short John Rayleigh proved that helium is
also present on earth; he found it in samples of the mineral clevite. He exposed the mineral
to some acids that reacted with the material thereby producing gasses. Then he studied
the contents of the gas mixtures, and he found that helium was present. The reason why he
found helium was explained later. Clevite is a mineral that contains uranium. The element
uranium is radio-active; it can emit -particles, which are the nuclei of helium atoms.
23.2
458
a photon and attain a state with more energy. The energy difference between the ground
state and the state with the second lowest energy is called the energy gap. If a photon has
the frequency with the energy corresponding to the energy gap, it can be absorbed by the
atom and the atom can be excited to the state above the ground state.
An excited atom cannot move down to a state with lower energy due to energy conservation, unless there is interaction with light. Incorporating interaction with light into the
Hamiltonian makes the energies acquire a small imaginary part, representing the possibility
to decay. If an atom jumps down in energy, it emits a photon with the same energy. This
process is called spontaneous emission. The nice feature of spontaneous emission is that we
can observe it.
The interaction with light is not just any arbitrary interaction. The interaction term Vint
in the Hamiltonian
Htot = Hatom + Henv + Vint
needs to respect some symmetries like Galilean invariance.
The result is that not all
transitions but only a selected set of transitions is allowed. The rules that dictate which
transitions are allowed are therefore called selection rules.
The interaction is often treated as a perturbation. The justification is that the interaction
term in the Hamiltonian is small compared to the other terms. One introduces a dimensionless variable and re-writes Vint as Vint () = Vint . One recalculates the spectrum and
expands it in to find
Ek () = Ek (0) + Ek1 + 2 Ek2 + . . . .
Since the interaction is small, the first order correction often gives the interaction with light
accurately enough. Using the techniques of perturbation theory one then finds the possible
transitions, i.e. the selection rules, and the probabilities of the transitions. The probabilities
gives the dominance in the observed spectrum; if a transition A is more probable than a
transition B this will result in more spontaneous emission along transition A. Therefore the
peak in the spectrum corresponding to A is bigger than the peak corresponding to B.
Observed spectra are often displayed by plotting, as in Figure 23.2, on the horizontal
axis the frequency and on the vertical axis the observed intensity. Due to imperfections in
measuring methods one never observes a real peak, but always a smeared out peak, that
is, peaks have a width. However, there can be many reasons why a peak has a certain
width. Imagine for example that one measures the spontaneous emission of a gas contained
in cylinder. The gas atoms are moving around in the cylinder, with different velocities with
respect to the measuring device. For each atom the spectrum is shifted due to the Doppler
effect, known from a similar effect with sound, which can be observed when an ambulance
passes by. Since one measures the emission of a whole population of atoms, the measured
peak is a superposition of peaks that are distributed around a certain frequency. That is,
the Doppler effect broadens a peak.
Technical imperfections of the measuring device also broaden peaks. Making the measuring
equipment more and more accurate one can try to get a better and better resolved spectrum.
459
Doing this one might see that broad peaks resolve into a group of smaller peaks. One sees
therefore more structure.
The result of a measurement is a list of data, the frequencies l . Using the data one wants
to obtain information of the system under study. If one knows the system already quite
well, for example if one knows the parametric form of the Hamiltonian but not the precise
values of the parameters, one may fit the measured energies to obtain a set of parameters
that describes the measurements best. One therefore has to solve a data analysis problem.
For each label l one has to find energies Ek(l) and Ej(l) with
l Ej(l) Ek(l) ,
within the experimental accuracy. Therefore, one solves the least-squares problem of minimizing the sum
2
X Ej(l) Ek(l)
ql
S(E, j, k) :=
1 ,
h
l
l
for some weight factors ql related to the inverse of the accuracy of the measurement of l .
In general, both the list E of energy levels Ei and the functions j, k which determine the
assignment of spectroscopic lines to transitions are unknown, and must be determined by
minimizing S(E, j, k). Usually, one starts with a preliminary list E of energy levels, and
assigns each line l to a transition which minimizes the lth term, breaking ties arbitrarily.
This defines preliminary assignment functions j, k. Fixing these turns the problem of minimizing S(E, j, k) into a least squares problem for finding the energy levels, resulting in an
improved E. Clearly, each cycle decreases the value of S(E, j, k). The process is stopped
when the assignments no longer change. Then S(E, j, k) has reached a local minimum.
Multiple lists of trial energy levels may be used to increase the likelihood that the assignment found corresponds to a global minimum. Frequently, one first assigns a subset of lines
to a subset of levels to find good starting values.
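The alternating scheme just described is straightforward to prototype. The sketch below is our illustration, not the authors' code: it uses units with ħ = 1, unit weights q_l = 1, a dense search over all pairs of distinct trial levels, and helper names of our own choosing.

  import numpy as np

  def assign_lines(omega, E):
      """For each measured frequency, pick the level pair (j, k) with E[j] > E[k]
      whose gap best matches hbar*omega (hbar = 1 here)."""
      pairs = [(j, k) for j in range(len(E)) for k in range(len(E)) if E[j] > E[k]]
      assign = []
      for w in omega:
          residuals = [((E[j] - E[k]) / w - 1.0) ** 2 for j, k in pairs]
          assign.append(pairs[int(np.argmin(residuals))])
      return assign

  def fit_levels(omega, assign, n_levels):
      """Fixing the assignment, minimize S(E, j, k) over the levels E.
      Each term ((E_j - E_k)/omega_l - 1)^2 is linear in E, so this is
      an ordinary linear least squares problem."""
      A = np.zeros((len(omega), n_levels))
      for row, (w, (j, k)) in enumerate(zip(omega, assign)):
          A[row, j] += 1.0 / w
          A[row, k] -= 1.0 / w
      b = np.ones(len(omega))
      E, *_ = np.linalg.lstsq(A, b, rcond=None)
      return E

  def fit_spectrum(omega, E0, max_iter=100):
      """Alternate between assigning lines and refitting levels until
      the assignment stops changing (a local minimum of S)."""
      E, assign = np.asarray(E0, float), None
      for _ in range(max_iter):
          new_assign = assign_lines(omega, E)
          if new_assign == assign:
              break
          assign = new_assign
          E = fit_levels(omega, assign, len(E))
      return E, assign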
Figure 23.4: On the left a double well potential with the first energy levels indicated. On the
right the Morse potential; the bound states have discrete energy but above the dissociation
energy the spectrum is continuous.
23.3 Examples of spectra
The geometry of the molecule or atom under consideration strongly influences the spectrum, since the geometry determines the potential.
Consider a molecule consisting of two atoms. We assume that the excitations inside each atom are of a different order of magnitude than the excitations of the bond between the atoms. In that case we may model the molecule as two balls connected by a spring. The spectrum is then as in Figure 23.3, and the observed spectrum consists of one peak.
Consider now a system whose potential has two local minima. An example would be a molecule C2H4 of which two versions exist, the cis and the trans molecule. Around each local minimum the molecular bond between the two C-atoms behaves, in some approximation, as a harmonic oscillator. For higher energies, however, the two states start to interact and the molecule can change from cis to trans and vice versa. A typical spectrum then looks like Figure 23.4.
Figure 23.5: Sketch of the potential a proton experiences in the force field of a nucleus.
When there are asymptotically free states, one says that the system admits dissociation. Free states have continuous kinetic energy, and hence the spectrum contains continuous parts. A potential showing dissociation is the Morse potential, given by

  V(r) = γ (e^{−αr} − β)² − γβ² ,   r ≥ 0,

where r is the atomic distance and α, β, and γ are positive parameters; see Figure 23.4. The potential of the H2 molecule discussed above is another example. Above the dissociation energy the spectrum is continuous; the bound states have discrete energy.
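The Morse potential is one of the few anharmonic potentials whose bound-state energies are known in closed form. The sketch below (our illustration; it uses the textbook Morse eigenvalue formula in the equivalent parametrization with dissociation energy D = γβ² and range parameter a = α) lists the finitely many discrete levels below the continuum threshold:

  import math

  def morse_levels(D, a, m, hbar=1.0):
      """Bound-state energies of the Morse potential, written so that
      V -> 0 at infinity and the well depth is D. Closed form:
      E_n = -D + hbar*w0*(n + 1/2) - (hbar*w0*(n + 1/2))**2 / (4*D),
      with w0 = a*sqrt(2*D/m); only finitely many n give bound states."""
      w0 = a * math.sqrt(2.0 * D / m)   # harmonic frequency at the well bottom
      levels = []
      n = 0
      while hbar * w0 * (n + 0.5) < 2.0 * D:   # bound states exist only up to here
          x = hbar * w0 * (n + 0.5)
          levels.append(-D + x - x * x / (4.0 * D))
          n += 1
      return levels

  print(morse_levels(D=10.0, a=1.0, m=1.0))  # four discrete levels below 0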
Quantum physics has a remarkable feature compared to classical mechanics, called tunneling. If a particle is in a local minimum at energy E1 and another minimum is available with energy level E0 < E1, then (in a semiclassical particle view) there is a nonzero probability that the particle travels through the barrier and ends up in the local minimum with lower energy. For example, the potential of the C2H4-molecule discussed above admits tunneling, since the potential has two local minima. The probability of tunneling decreases with the height of the barrier between the two energy levels. Another example of tunneling occurs in nuclear physics; the potential of Figure 23.5 represents the energy a proton feels in the force field of a nucleus. The diameter of the nucleus is roughly the distance between the two peaks in Figure 23.5. The difference to the C2H4-molecule is that here the tunneling takes place between two states one of which is not square integrable. Tunneling can go in two different directions: in one, the proton is shot at the nucleus with too little energy to penetrate it classically; in the other, the proton is inside the nucleus and classically cannot get out. In the latter case, there is a certain probability that the proton escapes the nucleus. This explains qualitatively the stochastic behavior of radioactive decay.
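The decrease of the tunneling probability with barrier height can be made quantitative by the semiclassical (WKB) transmission factor T ≈ exp(−(2/ħ) ∫ √(2m(V(x) − E)) dx); the snippet below (our illustration, for a simple square barrier in units ħ = m = 1) shows the rapid falloff:

  import math

  def wkb_transmission(V0, E, width, hbar=1.0, m=1.0):
      """Semiclassical tunneling probability through a square barrier of
      height V0 and given width, for a particle of energy E < V0:
      T ~ exp(-2 * width * sqrt(2*m*(V0 - E)) / hbar)."""
      kappa = math.sqrt(2.0 * m * (V0 - E)) / hbar
      return math.exp(-2.0 * kappa * width)

  for V0 in (2.0, 4.0, 8.0):   # higher barrier => much smaller probability
      print(V0, wkb_transmission(V0, E=1.0, width=1.0))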
As another example, consider a chemical reaction of the form AB + C → A + BC; that is, the molecule AB splits off a part B that then attaches to C to form BC. Here there are two important parameters: the distance |AB| between A and B, and the distance |BC| between B and C. A possible potential is plotted in Figure 23.6. The plot shows two valleys separated by a saddle point, marked by a red cross. The horizontal valley corresponds to |BC| constant, hence to the state A + BC. The other valley corresponds to AB + C, and at the saddle point the part B is exchanged.
23.4 Dynamical symmetries
As discussed before, when one looks at a poorly resolved spectrum, one sees some rough
features of the system under study. Improving the resolution allows one to study more
structure of the system.
A similar process happens when one studies a hydrogen atom in an external magnetic field.
Upon increasing the magnetic field one sees that many lines of the original spectrum split
into several close lines. Thus what first seems to be one state in fact turns out to be an agglomeration of different states. These states could not previously be recognized as distinct because, without the external field, they have exactly the same energy. As we shall see, that these states (seemingly) agglomerate into one single state is due to symmetry.
Rotational symmetry implies that states related by a rotation have the same energy; more pictorially, whether an electron circles around the proton with the rotation axis in the z-direction or in the y-direction makes no difference to the energy. Turning on the magnetic field breaks this symmetry; the different states that formerly agglomerated into a single level are then disentangled and can be observed separately in the spectrum.
As with increasing experimental resolution, taking a closer look at the theory of the hydrogen atom reveals more and more structure. In a first approximation, the electron in the hydrogen atom can be treated nonrelativistically. Treating the electron relativistically, one gets corrections to the spectrum. The first order corrections of special relativity go under the name of first radiative corrections.
We shall look in some detail at the hydrogen atom once we have clarified the general
principles.
Symmetry and broken symmetry. The most symmetric physical systems, in particular the standard 2-body problems (the classical Kepler problem and the quantum hydrogen atom), are exactly solvable. The helium atom is already a three-body problem and is not exactly solvable.
A physical system is called exactly solvable (or integrable, or completely integrable) if it has enough constants of motion, or equivalently, if the centralizer C_E(H) of the Hamiltonian H in the algebra E of observables is large enough. The effect of having enough central elements is that the system has enough conserved quantities to solve the differential equations of the system explicitly.
A dynamical algebra of a classical physical system is a Lie algebra L that one can associate to the system such that the Hamiltonian H is contained in the Lie–Poisson algebra C^∞(L^*). For an extensive treatment of the role of Lie algebras in infinite-dimensional classical integrable systems (field theories in one and two space dimensions), see Roy Chowdhury [62].
In this section, however, we are only interested in the application to spectroscopy, and hence concentrate on the quantum case. For a quantum mechanical system, the requirement defining a dynamical algebra is that H is contained in the closure of the universal enveloping algebra U(L) of L, equipped with a locally convex topology such that potentials of the form e^{−x²} are allowed.
For example, the Heisenberg algebra h(n) is the dynamical algebra of symplectic classical systems with n position degrees of freedom, and of traditional Schrödinger quantum mechanics. The hydrogen atom has additional rotational symmetry, and the special properties of the Coulomb potential imply that one can in fact find a fairly big dynamical algebra, namely so(2,4); see e.g. Wybourne [296].
Now consider any Lie algebra L as a dynamical algebra. Call E the Lie–Poisson algebra associated to L in the classical case, or the universal enveloping algebra of L in the quantum case. The symmetry algebra is the centralizer of the Hamiltonian in E, written C_E(H). In the nicest case one has E = C_E(H), which means that H is a Casimir of L. Normally, the Lie algebra L describes the symmetries of the (unperturbed) system, and thus one would expect that the nicest case is the general case.
However, a very symmetric system is rarely studied in isolation, and realistic systems are
at best perturbations of nice systems. In this case one gets broken symmetries, meaning
that the Hamiltonian is only almost a Casimir. Note that it might happen that the classical
theory has a symmetry, but that in the quantum version of the theory the symmetry gets
broken. In the case of a broken symmetry, one usually first tries to solve the symmetric problem and then perturbs the solutions to get approximate solutions to the problem with broken symmetry. We will not go into the details of the mathematics of perturbation theory, since this topic is amply treated in every book on quantum mechanics. But we will consider some of its qualitative implications.
Suppose we have solved a symmetric problem. Then the solutions are described as elements of some Hilbert space H on which L acts unitarily. We can decompose the Hilbert space into a direct sum of eigenspaces of the Hamiltonian, H = ⊕_λ H_λ. Let ψ be some eigenstate in H_λ of the Hamiltonian and let f ∈ L; then we see

  H f ψ = [H, f] ψ + f H ψ = [H, f] ψ + λ f ψ .

Since [f, H] = 0, we get H f ψ = λ f ψ; hence L maps each eigenspace into itself. Thus all H_λ are L-modules. We call the eigenvalue λ nondegenerate if the dimension of H_λ is 1, and degenerate if it is bigger than 1. (Dimension zero means that λ is not an eigenvalue.)
If λ is degenerate, H_λ has many essentially distinct bases of eigenvectors of H. One of these is usually distinguished by the concrete representation used to describe the module; H and usually a distinguished part of L act diagonally. In general the perturbed Hamiltonian no longer acts diagonally on H_λ, and as a result the level λ usually splits into several distinct levels of the form λ + ε_j for several different but small values ε_j, giving rise to a fine structure. In general, a fine structure implies that either a symmetry is broken (the system reached a nonsymmetric state) or an external force that breaks the symmetry explicitly has been applied.
The induced representation of L on H_λ is unitary. Therefore, knowing the irreducible unitary representations of L gives information about the system under study.
23.5 The hydrogen atom
A hydrogen atom is a bound state of a proton (the nucleus) and an electron. It is most easily
described by treating the much heavier nucleus as fixed (which amounts to neglecting recoil
effects) and considering the electron as moving in the spherically symmetric electrostatic
Coulomb field generated by the nucleus.
The electron is a spin 1/2 particle, a fermion, meaning that it is described by the spin 1/2 representation of so(3) on the Hilbert space L²(R³, P₁) ≅ L²(R³) ⊗ P₁ defined in Section 2.11. Below we first discuss the orbital part of the wave functions, i.e., the L²(R³)-part.
Then we discuss the dynamical symmetries and how they get broken.
The orbital quantum states are labeled by integers n, l and m. The integer n takes the values 1, 2, 3, ..., and the number l takes for each fixed value of n the values 0, 1, 2, ..., n−1. Finally, the number m takes for each l the values −l, −l+1, ..., l−1, l. Hence the (orbital) state of an electron is described by a state

  |n, m, l⟩  where  n ≥ 1 ,  0 ≤ l < n ,  −l ≤ m ≤ l .    (23.1)
The quantum number n determines (to a first approximation) the energy of the state:

  E_n = −13.6 eV / n² .    (23.2)
The abbreviation eV stands for electron volt, a unit of energy. The quantum number l specifies a representation of so(3). Thus we can make use of the representation theory of so(3) developed in Section 22.3.
The electrostatic potential of the hydrogen atom is SO(3)-invariant, hence it is not too surprising that SO(3)-representations play a role; the orbital part of the electron wave function can be decomposed into representations of SO(3). The quantum number l corresponds precisely to the irreducible representation of so(3) of integral spin l, that is, precisely to the representations of so(3) that lift to SO(3)-representations. The quantum number m labels the J_3-eigenvectors of the representation and corresponds to the eigenvalue m. The quantum number n thus determines which SO(3)-representations are allowed, and l and m then specify the representation and an eigenvector in this representation.
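As a quick numerical illustration of (23.1) and (23.2) (ours, not from the text), one can tabulate the level energies, the orbital degeneracy Σ_{l<n} (2l+1) = n², and the photon energy of a sample transition:

  def energy_ev(n):
      """Hydrogen level energy (23.2), in electron volts."""
      return -13.6 / n**2

  def orbital_degeneracy(n):
      """Number of states |n, m, l> allowed by (23.1): sum_{l<n} (2l+1) = n^2."""
      return sum(2 * l + 1 for l in range(n))

  for n in (1, 2, 3):
      print(n, energy_ev(n), orbital_degeneracy(n))

  # Photon energy of the n = 2 -> n = 1 (Lyman-alpha) transition:
  print(energy_ev(2) - energy_ev(1))  # about 10.2 eV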
Now we briefly describe the relation between the quantum numbers and the orbital wave function of the electron in the hydrogen atom. We choose coordinates for the hydrogen atom as follows. We put the proton in the center and describe the position of the electron by a radial coordinate r, measuring the distance between the proton and the electron, and by two angles θ ∈ [0, π] and φ ∈ [0, 2π). The solutions of the Schrödinger equation for the hydrogen atom are then given by
  ψ(r, θ, φ) = R_{n,l}(r) Y_{l,m}(θ, φ) .
The radial part R_{n,l} of the wave function is completely determined by the quantum numbers n and l, and is given by

  R_{n,l} = C_{n,l} e^{−ρ/2} ρ^l L^{2l+1}_{n−l−1}(ρ) .

Here C_{n,l} is a constant such that R_{n,l} is normalized to integrate to one, the L^p_q(ρ) are generalized Laguerre polynomials (one of the well-known families of special functions), ρ = 2r/(n a₀) is the normalized radius, and a₀ is a constant called the Bohr radius. The angular part Y_{l,m} of the wave function is given by

  Y_{l,m}(θ, φ) = K_{l,m} P_l^m(cos θ) e^{imφ} ,

where the K_{l,m} are normalization constants, and the P_l^m are the associated Legendre polynomials, given by

  P_l^m(x) = ((−1)^m / (2^l l!)) (1 − x²)^{m/2} (d^{l+m}/dx^{l+m}) (x² − 1)^l .
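These formulas are easy to evaluate numerically. The sketch below is our illustration (it relies on SciPy's generalized Laguerre polynomials and associated Legendre functions, sets a₀ = 1, and ignores the normalization constants C_{n,l} and K_{l,m}):

  import numpy as np
  from scipy.special import genlaguerre, lpmv

  def psi_unnormalized(n, l, m, r, theta, phi, a0=1.0):
      """Hydrogen orbital wave function R_{n,l}(r) * Y_{l,m}(theta, phi),
      up to the normalization constants C_{n,l} and K_{l,m}."""
      rho = 2.0 * r / (n * a0)   # normalized radius
      radial = np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)
      angular = lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)
      return radial * angular

  # Example: the 2p (n=2, l=1, m=0) orbital evaluated at one point:
  print(psi_unnormalized(2, 1, 0, r=1.0, theta=0.0, phi=0.0))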
Symmetries and symmetry breaking. Nonrelativistically, the electron in an electromagnetic field is treated with the Pauli equation. The Pauli equation looks like the Schrödinger equation, but has some extra terms describing the coupling of a spin 1/2 particle to the electromagnetic field. We now indicate why, in the case where the external electromagnetic field is switched off, the symmetry group of the Hamiltonian is SO(4) × SO(3).
The second factor in the symmetry group, the SO(3), is the symmetry group that acts on the spin of the electron. That is, it acts on the D_{1/2} part of L²(R³) ⊗ D_{1/2}.
The first factor in the symmetry group, the SO(4), acts on the space part L²(R³) of the wave function. The Hamiltonian of the hydrogen atom is rotationally invariant. Infinitesimal rotations are generated by the angular momentum J = r × p, where r is the position and p is the linear momentum; hence the angular momentum components span the Lie algebra so(3). However, there exists an additional conserved quantity: the length of the Lenz–Runge vector. (Some people call it the Laplace–Runge–Lenz vector, or even the Laplace vector.) This leads to the bigger group SO(4); see e.g. Goldstein [106].
To treat the electron relativistically one uses the Dirac equation for a spin 1/2 particle coupled to an electromagnetic field. The coupling to the electromagnetic field can be done in a quite easy way. Starting with the Dirac equation

  (ħ γ^μ ∂_μ + mc) ψ = 0 ,

one simply replaces the derivatives ∂_μ with ∂_μ − iqA_μ, where the zeroth component of A gives the Coulomb potential and the spatial components of A contain the magnetic field via B = ∇ × A; the parameter q is interpreted as the charge. We obtain

  (ħ γ^μ ∂_μ − iqħ γ^μ A_μ + mc) ψ = 0 .
The effect of having the fully relativistic coupling terms is that there is a coupling between
the spin of the electron and the orbital angular momentum of the electron. The additional
coupling terms in the Hamiltonian are called spin-orbit coupling terms.
Due to the coupling, the separate SO(3) acting on the spin is destroyed: without coupling there is a rotational symmetry group acting separately on the orbit and on the spin, but with the coupling the two rotational symmetries are no longer independent. The angular momentum L and the spin S are no longer separately conserved in magnitude, but (L + S)² is conserved. The symmetry group of the relativistic hydrogen atom is therefore SO(4). The observed spectrum is called the fine structure spectrum.
Going even further and treating the hydrogen atom with quantum field theory results in a further breakdown of the symmetry to the group SO(3). The group SO(4) is locally isomorphic to SO(3) × SO(3) (see Section 3.11), and corrections from quantum field theory break it down to the diagonal subgroup SO(3). The observed spectrum is called the hyperfine structure spectrum.
23.6 Chains of subalgebras
In more realistic situations, the Hamiltonian is not invariant under the total dynamical algebra E, the universal enveloping algebra of L. In this case the Hamiltonian is not in the center of E, but we can consider the centralizer of H in L. The centralizer of H in L is a subalgebra of L, hence a Lie subalgebra of E, and we denote it by L_1. We thus have H ∈ C_E(L_1). The Lie subalgebra L_1 generates an associative subalgebra of E, which we denote by E_1.
In simple applications, it often happens that the Hamiltonian H is a function H(C0, C1 )
where C0 is a Casimir of L (that is, it is a central element of E) and where C1 is a Casimir of
L1 (in the center of E1 ). In more complicated applications we have a series of approximations
to the problem, as explained for the hydrogen atom before, where relativistic and quantum
field theory effects modify the Hamiltonian. In each step one modifies the Hamiltonian by
adding terms with fewer and fewer symmetries, and the symmetry algebra is reduced to
correspondingly smaller subalgebras. We thus have a sequence of subalgebras
  L = L_0 ⊇ L_1 ⊇ ... ⊇ L_n = L̄ .

The final subalgebra L̄ commutes with H. The subalgebra of E it generates, denoted Ē, centralizes H in E. If the Hamiltonian is a function H = H(C_0, ..., C_n), where C_k is a Casimir of L_k, the scheme gives explicitly solvable problems. For example, for the nonrelativistic hydrogen atom without spin, one finds the chain

  so(4) ⊃ so(3) ⊃ so(2) ⊃ 1 .
Of course, there are many Hamiltonians that cannot be represented as functions of a chain
of Casimirs, but the above scheme covers many applications, and is a starting point for a
perturbative treatment of many others.
In classical symplectic mechanics, one relates the Lie algebra L̄ to so-called action variables, and the steps up to L_0 are constructed using conjugate angle variables. We will not go into the details of defining these variables and the related techniques.
Consider the situation where H = H(C_0, C_1), that is, the simple application. We write H = H_0 + H_1, where H_0 is a function of C_0 only and H_1 depends on C_0 and C_1. As before, we suppose we have realized the elements of E (and thus of L) as operators on some Hilbert space H. We assume that the subspaces H_λ on which the Hamiltonian H_0 = H_0(C_0) acts diagonally are finite-dimensional. This is for example the case for the hydrogen atom. We furthermore split up H into irreducible representations of L_0, so that we may assume that H_λ is irreducible. Modifying H_0 to H_0 + H_1 means that the symmetry algebra becomes smaller; it becomes L_1. We can restrict the representation of L_0 on H_λ to the subalgebra L_1 to obtain a representation of L_1. In most cases this representation is reducible, and we write the decomposition of H_λ into L_1-irreducibles as

  H_λ = ⊕_μ H_μ^{(1)} .
For example, su(2) sits inside su(3) as the subalgebra of matrices

  ( a  b  0 )
  ( c  d  0 ) ,   where  ( a  b ; c  d ) ∈ su(2) .
  ( 0  0  0 )
We see that the three-dimensional representation of su(3) splits into two irreducible representations of su(2): the trivial one, spanned by e_3, and the two-dimensional (fundamental) representation, spanned by e_1 and e_2. One writes this in shorthand as 3 → 2 + 1 under su(2) ⊂ su(3). In the reference Slansky [255], one can find tables of branching rules.
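The branching rule can be read off from the weights. In the small illustration below (ours), the su(2) weight generator diag(1/2, −1/2) is embedded into the upper-left block, and its eigenvalues on C³, namely ±1/2 and 0, exhibit the doublet and the singlet of 3 → 2 + 1:

  import numpy as np

  # su(2) weight generator embedded in the upper-left 2x2 block of su(3)
  t3 = np.diag([0.5, -0.5, 0.0])

  # Its eigenvalues on C^3 are the su(2) weights of the branching:
  # {+1/2, -1/2} spans the two-dimensional representation, {0} the trivial one.
  print(np.linalg.eigvalsh(t3))   # [-0.5  0.   0.5]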
Clebsch–Gordan coefficients. In an important special case one can relate the branching rules to the so-called Clebsch–Gordan coefficients, which are widely used in physics.
Let us explain the Clebsch–Gordan coefficients for su(2). Given two representations D_l and D_k (see Section 22.3), we can form the tensor product D_k ⊗ D_l. An su(2)-element x acts on v ⊗ w by mapping v ⊗ w to x(v) ⊗ w + v ⊗ x(w), where we write x(v) for the action of x on v. In general, the representation D_k ⊗ D_l is not irreducible, and we have

  D_k ⊗ D_l = D_{k+l} ⊕ D_{k+l−2} ⊕ ... ⊕ D_{|k−l|} .
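A dimension count gives a quick sanity check of this decomposition. In the sketch below (ours), D_k denotes the irreducible su(2) representation with highest weight k, of dimension k + 1, matching the step-2 labels in the formula above:

  def clebsch_gordan_summands(k, l):
      """Labels of the irreducible summands in D_k (x) D_l:
      k+l, k+l-2, ..., |k-l|."""
      return list(range(k + l, abs(k - l) - 1, -2))

  def dim(k):
      """Dimension of D_k (highest weight k): k + 1."""
      return k + 1

  k, l = 3, 2   # e.g. spin 3/2 times spin 1
  summands = clebsch_gordan_summands(k, l)
  print(summands)                                  # [5, 3, 1]
  assert dim(k) * dim(l) == sum(dim(s) for s in summands)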
The precise decomposition of a vector v ⊗ w, where v and w are eigenvectors of J_3, into the irreducible components is given by the Clebsch–Gordan coefficients. For the vector in the D_k representation with J_3-eigenvalue m and norm one we write |k, m⟩. If the representation D_K is contained in the tensor product of D_{k_1} and D_{k_2}, we can decompose any vector |K, M⟩ as a sum of vectors of the form |k_1, m_1⟩ ⊗ |k_2, m_2⟩, and the Clebsch–Gordan coefficients are then the coefficients C^{KM}_{k_1 k_2 m_1 m_2} in the decomposition

  |K, M⟩ = Σ_{k_1,k_2,m_1,m_2} C^{KM}_{k_1 k_2 m_1 m_2} |k_1, m_1⟩ ⊗ |k_2, m_2⟩ .
More generally, the Clebsch–Gordan decompositions say how the tensor product of two irreducible representations V_1 and V_2 of a compact Lie group (or its Lie algebra) decomposes into irreducible representations W_j, as V_1 ⊗ V_2 = ⊕_j W_j. The Clebsch–Gordan coefficients are the numerical coefficients in the embedding of one of the summands W_j into V_1 ⊗ V_2.
Now suppose that L_0 = L′ ⊕ L′ decomposes into two copies of the same Lie algebra L′. An important choice of L_1 is the diagonal Lie subalgebra given by the elements of the form (a, a) with a ∈ L′. Then, as a Lie algebra, L_1 ≅ L′. The irreducible representations of L_0 are given by tensor products of representations of L_1. Therefore, decomposing the irreducible representations of L_0 with respect to L_1 amounts to giving the Clebsch–Gordan decompositions.
Much more could be said on the topic of symmetries and broken symmetries in physics.
A nice overview is given in a paper by Bijker [35]. It shows how the symmetry concept
organizes not only the world of atoms and molecules that we considered here, but also
that of elementary particles. The isospin symmetry between protons and neutrons has a
symmetry group SU(2), which extends to the flavor symmetry group SU(3) for the three
light quarks. Applications to molecular spectra and the interacting boson model for
modelling atomic nuclei are discussed extensively in the book by Frank & van Isacker
[90].
Quantum field theory, culminating in the standard model, is also based on symmetries, namely the space-time symmetries of the Poincare group, and a gauge group U(2) × SU(3), which combines the broken symmetry group U(2) = U(1) × SU(2) of the weak interaction (of which only a diagonal subgroup U(1), encoding the electromagnetic charge, is unbroken) with the unbroken color symmetry group SU(3) of the strong interaction.
While these topics lie far beyond the scope of this book, we hope the interested reader will take the next step and consult the deeper work of others who study these matters in depth. Our journey is finished.
References
[1] A. Martinez. An introduction to semiclassical and microlocal analysis. Springer, New York, 2002. [177]
[2] I.D. Ado. Lie groups. Number 9 in AMS Translations, Series I. Amer. Math. Soc.,
1962. [28]
[3] A. Aiello, G. Puentes, and J. P. Woerdman. Linear optics and quantum maps. Physical
Review A, 76:032323, 2007. quant-ph/0611179. [43]
[4] G. Alberti and L. Ambrosio. A geometrical approach to monotone functions in Rn .
Math. Z., 230:259316, 1999. [162]
[5] R.A. Alberty. Use of Legendre transforms in chemical thermodynamics (IUPAC technical report). Pure Appl. Chem., 73:13491380, 2001. [162, 171]
[6] A.D. Aleksandrov. Almost everywhere existence of the second differential of a convex function and some properties of convex surfaces connected with it (in russian).
Leningrad State Univ.Ann. Math. Ser, 6:335, 1939. [162]
[7] Y. Alhassid and R. D. Levine. Connection between the maximal entropy and the
scattering theoretic analyses of collision processes. Phys. Rev. A, 18:89116, 1978.
[188]
[8] F. Alizadeh. Interior point methods in semidefinite programming with applications
to combinatorial optimization. SIAM J. Optim., 5:1351, 1995. [315]
[9] A.E. Allahverdyan, R. Balian, and T.M. Nieuwenhuizen. The quantum measurement
process: Lessons from an exactly solvable model. [234]
[10] A.E. Allahverdyan and T.M. Nieuwenhuizen. Explanation of the Gibbs paradox
within the framework of quantum thermodynamics. Phys. Rev. E, 73:066119, 2006.
[224]
[11] Carl D. Anderson. . Science, 76:238, 1933. [148]
[12] Carl D. Anderson. Cosmic-ray positive and negative electrons. Phys. Rev., 44:406–416, 1933. [148]
[13] Carl D. Anderson. The positive electron. Phys. Rev., 43:491–494, 1933. [148]
[14] T. Andrews. The Bakerian Lecture: On the continuity of the gaseous and liquid states of matter. Phil. Trans. Royal Soc. London, 159:575–590, 1869. [174]
[15] V.I. Arnold. Mathematical methods of classical mechanics. Springer Verlag, 1978, 1989. [264, 276, 280]
[16] V.I. Arnold. Mathematical Methods of Classical Mechanics. Springer, New York,
1989. [171]
[17] L.W. Baggett. Functional Analysis, A primer. Marcel Dekker, Inc, 1992. Pure and Applied Mathematics, A Program of Monographs, Textbooks and Lecture Notes. [255]
[18] R. Balian. Incomplete descriptions and relevant entropies. Amer. J. Phys., 67:1078–1090, 1999. [177, 188, 201, 226, 228, 229]
[19] R. Balian. Information in statistical physics. Studies in History and Philosophy
of Modern Physics, 36:323353, 2005. Available from World Wide Web: http:
//arxiv.org/abs/cond-mat/0501322. [244]
[20] R. Balian. From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, 2 Vols. Springer, Berlin, 2007. [168, 222]
[21] L. E. Ballentine. The statistical interpretation of quantum mechanics. Rev. Mod.
Phys., 42:358381, 1970. [219]
[22] D. Bar-Moshe and M. S. Marinov. Berezin quantization and unitary representations
of Lie groups, 1994. Available from World Wide Web: http://www.citebase.
org/abstract?id=oai:arXiv.org:hep-th/9407093. [399]
[23] V. Bargmann. . Commun. Pure and Appl. Math., 14:187, 1961. [430]
[24] V. Bargmann. . Proc. Natl. Acad. Sci. U.S., 48:199, 1962. [430]
[25] O. Barndorff-Nielsen. Information and exponential families in statistical theory. Wiley, Chichester, 1978. [204, 244, 245]
[26] O.E Barndorff-Nielsen, R.D. Gill, and P.E. Jupp. On quantum statistical inference.
J. Roy. Statist. Soc. B, 65:131, 2003. Available from World Wide Web: http:
//arxiv.org/abs/quant-ph/0307191. [204, 244]
[27] G. Barton. Introduction to advanced field theory. Wiley Interscience, New York, 1963.
[425]
[28] A.O. Barut and R. Raczka. Theory of group representations and applications. World
Scientific, 2nd ed. edition, 2000. [x]
[29] R. Battino, L.E. Strong, and S.E. Wood. A brief history of thermodynamics notation.
J. Chem. Education, 74:304305, 1997. [162]
[30] F. Benatti and H. Floreanini. Effective dissipative dynamics for polarized photons.
Physical Review D, 62:125009, 2000. [43]
[31] F.A. Berezin. General concept of quantization. Commun. Math. Phys., 40:153174,
1975. [399]
[32] J. Berges. Introduction to nonequilibrium quantum field theory. Volume 739, pages 3–62, 2004. Available from World Wide Web: http://arxiv.org/abs/hep-ph/0409233. [228]
[33] A.N. Beris and B.J. Edwards. Thermodynamics of flowing systems with internal
microstructure. Oxford Univ. Press, New York, 1994. [201, 222, 227, 286]
[34] J.M. Bernardo and A.F.M. Smith. Bayesian theory. Wiley, Chichester, 1994. [204]
[35] R. Bijker. Symmetries in physics. arXiv, pages nuclth/0509007, 1995. [469]
[36] J.M. Blatt. An alternative approach to the ergodic problem. Prog. Theor. Phys.,
22:745756, 1959. [228]
[37] N.N. Bogoliubov. On a variational principle in the problem of many bodies (in russian). Dokl. Akad. Nauk SSSR, 119:244246, 1958. [190]
[38] A. Bohm and M. Gadella. Dirac kets, Gamow vectors, and Gel fand triplets. Springer,
berlin, 1989. [421]
[39] A.R. Bohm. Time asymmetry and quantum theory of resonances and decay. Int. J.
Theor. Phys., 42:23172338, 2003. [149]
[40] N. Bohr. On the constitution of atoms and molecules (part 1 of 3). Philosophical
Magazine, 26:125, 1913. [146]
[41] N. Bohr. On the constitution of atoms and molecules, part iii. Philosophical Magazine,
26:857875, 1913. [146]
[42] N. Bohr. On the constitution of atoms and molecules, partii, systems containing only
a single nucleus. Philosophical Magazine, 26:476502, 1913. [146]
[43] N. Bohr. The spectra of helium and hydrogen. Nature, 92:231232, 1914. [146]
[44] L. Boltzmann. Ableitung des Stefan'schen Gesetzes, betreffend die Abhängigkeit der Wärmestrahlung von der Temperatur aus der electromagnetischen Lichttheorie. Annalen der Physik und Chemie, Bd. 22, 1884. [151]
[45] K.F. Bonhoeffer and P. Harteck. Para- and ortho hydrogen. Z. Physikalische Chemie B, 4:113–141, 1929. [225]
[46] M. Born. Zur Quantenmechanik der Stoßvorgänge. Zeitschrift für Physik, 37:863–867, 1926. [238]
[47] M. Born and P. Jordan. Zur Quantenmechanik. Zeitschrift für Physik, Band XXXIV:858–888, 1925. [146]
[48] T. Bornath, D. Kremp, W.D. Kraeft, and M. Schlanges. Kinetic equations for a
nonideal quantum system. Phys. Rev. E, 54:32743284, 1996. [227]
[49] F.P. Bowden and L. Leben. The nature of sliding and the analysis of friction. Proc.
Royal Society London. Series A, 169:371391, 1939. [121, 171]
[50] H.-P. Breuer and F. Petruccione. The theory of open quantum systems. Clarendon Press, Oxford, 2002. [150]
[51] L. Brillouin. Science and Information Theory. 2nd ed. Acad. Press, New York, 1962.
[242]
[52] R. Brown. Philosophical Magazine, 4:161–173 (1828) and 6:161–166 (1829). [121]
[53] M. Brune, E. Hagley, J. Dreyer, X. Maître, A. Maali, C. Wunderlich, J.M. Raimond, and S. Haroche. Observing the progressive decoherence of the meter in a quantum measurement. Phys. Rev. Lett., 77:4887–4890, 1996. [228]
[54] C. Bustamente, J. Liphardt, and F. Ritort. The nonequilibrium thermodynamics of
small systems. Physics Today, 58:4348, 2005. Available from World Wide Web:
http://arxiv.org/abs/cond-mat/0511629. [208, 210]
[55] H.B. Callen. Thermodynamics and an introduction to thermostatistics, 2nd. ed. Wiley,
New York, 1985. [161, 195, 210, 220]
[56] E. Calzetta and B.L. Hu. Nonequilibrium quantum fields: Closed-time-path effective
action, wigner function, and boltzmann equation. Phys. Rev. D, 37:28782900, 1988.
[227]
[57] C. Carathéodory. Untersuchungen über die Grundlagen der Thermodynamik. Mathematische Annalen, 67:355–386, 1909. [204]
[58] J. Casas-Vasquez and D. Jou. Temperature in non-equilibrium states: a review of
open problems and current proposals. Rep. Prog. Phys., 66:19372023, 2003. [167]
[59] W.G. Chapman, K.E. Gubbins, G. Jackson, and M. Radosz. SAFT:equation-of-state
solution model for associating fluids. Fluid Phase Equilib., 52:3138, 1989. [172]
[60] W.G. Chapman, K.E. Gubbins, G. Jackson, and M. Radosz. New reference equation
of state for associating liquids. Ind. Eng. Chem. Res., 29:17091721, 1990. [172]
[61] C. Chevalley. Theory of Lie groups. Princeton Univ. Press, Princeton, 1946. [36]
[62] A. Roy Chowdhury. Lie algebraic methods in integrable systems. Chapman and Hall,
Boca Raton, 2000. [463]
[63] O. Civitarese, P.O. Hess, and J.G. Hirsch. Low temperature S-shaped heat capacities
in finite nuclei. Rev. Mex., 50:406411, 2004. [210]
[64] E. Clapeyron. Mémoire sur la puissance motrice de la chaleur. J. l'École Polytechnique, 14:153–190, 1834. [166]
[83] G.L. Eyink and H. Spohn. Negative-temperature states and large-scale, long-lived
vortices in two-dimensional turbulence. J. Stat. Phys., 70:833886, 1993. [206]
[84] J. Faraut and A. Koranyi. Analysis on symmetric cones. Clarendon, Oxford, 1994.
[428]
[85] A. Farkas. Orthohydrogen and Parahydrogen and Heavy Hydrogen. Cambridge Univ.
Press, London, 1935. [225]
[86] D. R. Farkas and G. Letzter. Ring theory from symplectic geometry. J. Pure Appl.
Algebra 125, pages 155190, 1998. [273]
[87] T.L. Fine. Theory of probability; an examination of foundations. Acad. Press, New
York, 1973. [219]
[88] D. Forster. Hydrodynamic fluctuations, broken symmetry, and correlation functions.
Benjamin, Reading, Mass., 1975. [174]
[89] R.H. Fowler and E.A. Guggenheim. Statistical thermodynamics. Cambridge Univ.
Press, Cambridge, 1939. [167]
[90] A. Frank and P. van Isacker. Algebraic methods in molecular and nuclear structure physics. Wiley, New York, 1994. [469]
[91] Theodore Frankel. The Geometry of Physics. Cambridge University Press, 2nd edition, 2003. ISBN 0521539277. [299]
[92] I.I. Frenkel. Wave Mechanics, Advanced General Theory. Clarendon Press, Oxford,
1934. pp. 253, 436. [394]
[93] A. Fresnel. Considérations mécaniques sur la polarisation de la lumière, pages 629–653. Imprimerie Impériale, 1866. [41]
[94] H.U. Fuchs. The dynamics of heat. Springer, New York, 1996. [176]
[95] J. Fuchs and C. Schweigert. Symmetries, Lie algebras and representations, A graduate
course for physicists. Cambridge University Press, 1997. Cambridge Monographs on
Mathematical Physics. [x, 297, 449]
[96] W. Fulton and J. Harris. Representation Theory: A First Course. Springer Verlag,
2004. Graduate Texts in Mathematics / Readings in Mathematics. [301, 449]
[97] H. Gopalkrishna Gadiyar. A fresh look at the Bohr–Rosenfeld analysis and a proof of a conjecture of Heisenberg. arXiv, hep-th:0104256, 2001. [450]
[98] J. L. Garcia-Palacios. Introduction to the theory of stochastic processes and Brownian motion problems. 2007. Available from World Wide Web: http://www.
citebase.org/abstract?id=oai:arXiv.org:cond-mat/0701242. [121]
[99] C.W. Gardiner. Handbook of stochastic methods. Springer, 2004. [150, 343]
[100] I.M. Gelfand and M.A. Naimark. On the imbedding of normed rings into the ring of
operators in hilbert spaces (in Russian). Mat Sbornik, 12:197213, 1943. [232]
[101] J.W. Gibbs. Elements of Vector Analysis. Yale Univ. Press, New Haven, 1881.
Reprinted, Dover 1960. [36]
[102] J.W. Gibbs. Elementary Principles in Statistical Mechanics. Yale Univ. Press, New
Haven, 1902. Reprinted, Dover 1960. [8, 187, 188, 190, 209]
[103] R. Gilmore. Uncertainty relations of statistical mechanics. Phys. Rev. A, 31:3237–3239, 1985. [213]
[104] R. Gilmore. Lie Groups, Lie Algebras, and Some of Their Applications. Dover Publications, 2006. [x, 36, 303]
[105] R.J. Glauber. Coherent and incoherent states of the radiation field. Physical Review,
131:27662788, 1963. [426, 430]
[106] H. Goldstein. Classical Mechanics. Addison–Wesley, 2nd edition, 1950. [276, 280, 466]
[107] V. Gorini, A. Kossakowski, and E.C.G. Sudarshan. Completely positive dynamical
semigroups of N-level systems. J. Math. Phys., 17:821825, 1976. [228]
[108] M.J. Gotay, H.B. Grundling, and G.M. Tuynman. Obstruction results in quantization
theory. J. Nonlinear Science, 6:469498, 1996. [401]
[109] H. Grabert. Projection Operator Techniques in Nonequilibrium Statistical Mechanics.
Springer Tracts in Modern Physics, Berlin, 1982. [168, 201, 228]
[110] H. Grad. The many faces of entropy. Comm. Pure Appl. Math., 14:323354, 1961.
[225, 229]
[111] D.J. Griffiths. Introduction to quantum mechanics. Prentice-Hall Inc., 1995. [431]
[112] R.B. Griffiths. A proof that the free energy of a spin system is extensive. J. Math. Phys., 5:1215–1222, 1964. [190, 208]
[113] H.J. Groenewold. On the principles of elementary quantum mechanics. Physica,
12:405460, 1946. [401]
[114] D.H.E. Gross. Phase transitions in small systems a challenge for thermodynamics.
Nucl. Phys. A, 681:366373, 2001. [208, 210]
[115] H. Geiger and E. Marsden. On a diffuse reflection of the α-particles. Proc. Royal Soc. London, Series A, 82:495–500, 1909. [145]
[116] I. Hacking. The emergence of probability. Cambridge Univ. Press, Cambridge, 1975.
[219]
[117] E. L. Hahn. Spin echoes. Phys. Rev., 80:580594, 1950. [225]
[118] E. L. Hahn. Free nuclear induction. Physics Today, 6:49, 1953. [225]
[119] H. Haken. Synergetics: An introduction. Springer, Berlin, 1978. [204]
[120] W.R. Hamilton. Lectures on Quaternions. Royal Irish Academy, 1853. Reprinted,
Dover 1960. [36]
[121] P. Hanggi and F. Marchesoni. 100 years of Brownian motion. 2005. Available from
World Wide Web: http://www.citebase.org/abstract?id=oai:arXiv.
org:cond-mat/0502053. [121]
[122] S.A. Hassan, A.R. Vasconcellos, and R. Luzzi. The informational-statistical-entropy
operator in a nonequilibrium ensemble formalism. Physica A, 262:359375, 1999. [188]
[137] E.T. Jaynes. Information theory and statistical mechanics. Phys. Rev., 106:620630,
1957. [244]
[138] E.T. Jaynes. Information theory and statistical mechanics II. Phys. Rev., 108:171–190, 1957. [244]
[139] E.T. Jaynes. Probability Theory in Science and Engineering. Socony Mobil Oil Co.,
Dallas, 1958. [204]
[140] E.T. Jaynes. Maximum Entropy and Bayesian Methods (C.R. Smith et al., eds.).
Kluwer, Dordrecht, 1992. [224]
[141] W.B. Jensen. The universal gas constant R. Chem. Education Today, 80:731732,
2003. [166]
[142] G. Job. Neudarstellung der Warmelehre. Akad. Verlagsges, Frankfurt, 1972. [176]
[143] J.P. Joule. On the calorific effects of magneto-electricity, and on the mechanical value
of heat. Philos. Mag. London, 23:435443, 1843. [168]
[144] J.P. Joule. Laser-induced chemical reactions. Applied Optics, 13:301309, 1974. [190]
[145] V.G. Kac. Infinite-dimensional Lie algebras. Cambridge University Press, 1994. [x]
[146] B.L. Karger, L.R. Snyder, and C. Horvath. An Introduction to Separation Science.
Wiley, 1973. [208]
[147] A. Katz. Principles of statistical mechanics. The information theory approach. Freeman, San Francisco, 1967. [204, 242]
[148] B.D. Keister and W.N. Polyzou. Relativistic hamiltonian dynamics in nuclear and
particle physics. Adv. Nuclear Physics, 20:226479, 1991. [13]
[149] M. Keller, B. Lange, K. Hayasaka, W. Lange, and H. Walther. A calcium ion in a
cavity as a controlled single-photon source. New Journal of Physics, 6:95, 2004. [48,
49]
[150] Shoon Kyung Kim. Group Theoretical Methods and Applications to Molecules and
Crystals. Cambridge University Press, 1999. [31]
[151] A. Kirillov. Introduction to Lie groups and Lie algebras. http://www.math.
sunysb.edu/~kirillov/mat552/liegroups.pdf. New York 2004. [299, 449]
[152] A.A. Kirillov. Lectures on the orbit method. American Mathematical Society, Providence, RI, 2004. [399]
[153] C. Kittel. Elementary Statistical Physics. John Wiley and Sons Ltd., 1966. [153]
[154] A.W. Knapp. Lie Groups Beyond an Introduction. Birkhauser, 2002. [259, 266, 297,
299, 301, 303, 449]
[171] J. Laskar. Large scale chaos in the solar system. Astron. Astrophys., 287, 1994. [4]
[172] P.D. Lax. Linear algebra and its applications. Wiley-Interscience, 2007. [26]
[173] J.L. Lebowitz and H.L. Frisch. Model of nonequilibrium ensemble: Knudsen gas.
Phys. Rev., 107:917923, 1957. [228]
[174] U. Leonhardt and A. Neumaier. Explicit effective hamiltonians for general linear
quantum-optical networks. J. Optics B: Quantum Semiclass. Opt., 6:L1L4, 2004.
quant-ph/0306123. [310]
[175] E.H. Lieb and J. Yngvason. The physics and mathematics of the second law of thermodynamics. Physics Reports, 310:1–96, 1999. [205]
[176] G. Lindblad. On the generators of quantum dynamical semigroups. Commun. Math.
Phys., 48:119130, 1976. [228]
[177] J.J. Lissauer. Chaotic motion in the solar system. Rev. Mod. Phys., 71:835845, 1999.
[4]
[178] L. Ljung. System identification: theory for the user. Prentice-Hall, Upper Saddle
River, 1986. [225]
[179] M. Born, W. Heisenberg, and P. Jordan. Zur Quantenmechanik II. Zeitschrift für Physik, Band XXXV:557–615, 1925. [146]
[180] E.-L. Malus. Sur une propriété de la lumière réfléchie. Mém. Phys. Chim. Soc. d'Arcueil, 2:143–158, 1809. [41]
[181] L. Mandel and E. Wolf. Optical coherence and quantum optics. Cambridge Univ.
Press, Cambridge, 1995. [47, 235]
[182] F. Mandl. Statistical Physics. Wiley, 2nd edition edition, 1988. [153]
[183] J.B. Marion and S.T. Thornton. Classical dynamics of particles and systems. Saunders College Publishing, 4th edition, 1995. [276, 280]
[184] J.E. Marsden and T.S. Ratiu. Introduction to Mechanics and Symmetry. Springer,
New York, 1994. [379]
[185] J.E. Marsden and T.S. Ratiu. Introduction to Mechanics and Symmetry. Springer,
New York, 1994. [382]
[186] P.C. Martin and J. Schwinger. Theory of many-particle systems. I. Phys. Rev., 115:1342–1373, 1959. [188]
[187] I. Martinson and L.J. Curtis. Janne Rydberg: his life and work. Nuclear Instruments and Methods in Physics Research B, 235:17–22, 2005. [145]
[188] J.R. Mayer. Bemerkungen über die Kräfte der unbelebten Natur. Ann. Chem. Pharmacie, 42, 1842. [168]
[189] R.I. Masel. Principles of Adsorption and Reaction on Solid Surfaces. Wiley, New
York, 1996. [208]
[190] K. Maurin. General Eigenfunction Expansions and Unitary Representations of Topological Groups. Polish Scientific Publishers, Warsaw, 1968. [421]
[191] N. Moiseyev. Quantum theory of resonances: calculating energies, widths and cross-sections by complex scaling. Physics Reports, 302:211–293, 1998. [149]
[192] D. Montgomery and G. Joyce. Statistical mechanics of negative temperature states.
Phys. Fluids, 17:11391145, 1974. [206]
[193] R.V. Moody and A. Pianzola. Lie algebras with triangular decompositions. WileyInterscience, New York, 1995. [447]
[194] H. Mori. Transport, collective motion, and Brownian motion. Prog Theor Phys.,
33:423455, 1965. [190]
[195] P.J. Morrison. Hamiltonian description of the ideal fluid. Rev. Mod. Phys. 70, pages
467521, 1998. [379]
[196] R. Mrugala, J. D. Nulton, J. C. Schon, and P. Salamon. Statistical approach to the
geometric structure of thermodynamics. Phys. Rev. A, 41:31563160, 1990. [188]
[197] I. Müller and T. Ruggeri. Rational Extended Thermodynamics, volume 37. Springer, New York, 1999. [227]
[198] S.L. Murov, I. Carmichael, and G..L Hug. Handbook of Photochemistry. Marcel
Dekker, New York, 1993. [190]
[199] K.-H. Neeb. Holomorphy and Convexity in Lie Theory. Walter de Gruyter - Berlin New York, 2000. [270]
[200] K.-H. Neeb. Towards a Lie theory of locally convex groups. Japan. J. Math., 1:291
468, 2006. [x, 258, 374]
[206] A. Neumaier. Optical models for quantum mechanics. Slides, 2008. [42]
[207] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information.
Cambridge University Press, Cambridge, 2002. [40]
[208] L.J. Norrby. Why is mercury liquid. J. Chem. Educ. 68, pages 110113, 1991. [6]
[209] H.C. Oettinger. Beyond Equilibrium Thermodynamics. Wiley, Hoboken, 2005. [201,
202, 222, 227]
[210] W. of Ockham. Philosophical Writings (P. Boehner, ed.). Nelson, Edinburgh, 1957.
[220]
[211] D. Papousek and M.R. Aliev. Molecular vibrational-rotational spectra. Elsevier, Amsterdam, 1982. [x]
[212] W. Pauli. The connection between spin and statistics. Phys. Rev., 58:716722, 1940.
[437]
[213] R.E. Peierls. On a minimum property of the free energy,. Phys. Rev., 54:918919,
1938. [190]
[214] O. Penrose. Foundations of statistical mechanics. Rep. Prog. Phys., 42:19372006,
1979. [219]
[215] R. Penrose and W. Rindler. Spinors and space-time, Vol. 1. Cambridge University Press, 1987. [365]
[216] A.M. Perelomov. Generalized Coherent States and Their Applications. Springer-Verlag, Berlin, 1986. [428, 432]
[217] A. Peres. Pure states, mixtures, and compounds. Acad. Press, New York, 1978. [47,
232, 237]
[218] A. Peres and D.R. Terno. Quantum information and relativity theory. Rev. Mod.
Phys., 76:93123, 2004. [219]
[219] M. Planck. Entropy and temperature of radiant heat. Annalen der Physik, 1:719737,
April 1900. [26, 141]
[220] M. Planck. On the law of distribution of energy in the normal spectrum. Annalen
der Physik, vol. 4:p. 553, 1901. [150]
[221] H. Poincaré. Théorie mathématique de la lumière (concerning theoretical descriptions of polarized light). Saint-André-des-Arts, Paris, 1892. [41]
[222] J. Polonyi and K. Sailer. Renormalization group in internal space. Phys. Rev. D, 71,
2005. [228]
[223] J.M. Prausnitz, R.N. Lichtenthaler, and E. Gomes de Azevedo. Molecular thermodynamics of fluid-phase equilibria. Prentice Hall, Upper Saddle River, 1999. [172]
[224] E.M. Purcell and R.V. Pound. A nuclear spin system at negative temperature. Phys.
Rev., 81:279280, 1951. [206]
[225] J.M. Radcliffe. Some properties of coherent spin states. J. Phys. A: Gen. Phys. 4,
pages 313323, 1971. [46]
[226] R. Ramamoorthi and A.H. Barr. Fast construction of accurate quaternion splines. In
Proc. 24th ann. conf. Computer graphics interactive techniques, pages 287292, 1997.
Available from World Wide Web: http://www.cs.columbia.edu/cg/pdfs/
64_sig97.pdf. [64]
[227] T.S. Ratiu. A crash course in geometric mechanics. http://www.math.
univ-metz.fr/ecoles/monastir05-dir/notes/CoursTudorRatiu.
pdf, 2001. [379]
[228] J. Rau and B. Müller. From reversible quantum microdynamics to irreversible quantum transport. Physics Rep., 272:1–59, 1996. [168, 201, 228]
[229] J.W.S. Rayleigh. Remarks upon the law of complete radiation. Philosophical Magazine
XLIX, 1900. [152]
[230] L.E. Reichl. A Modern Course in Statistical Physics, 2nd. ed. Wiley, New York, 1998.
[153, 161, 195, 209]
[231] H. Renon and J.M. Prausnitz. Local compositions in thermodynamic excess functions
for liquid mixtures. AIChE J., 14:135144, 1968. [172]
[232] C.E. Rickart. General theory of Banach algebras. Van Nostrand, Princeton, 1960.
[255]
[233] K. Ridderbos. The coarse-graining approach to statistical mechanics: how blissful
is our ignorance? Studies in History and Philosophy of Modern Physics, 33:6577,
2002. [228]
[234] T.M. Ridderbos and M.L.G. Redhead. The spin-echo experiments and the second law
of thermodynamics. Foundations of Physics, 28:12371270, 1998. [225]
[235] M.A. Rieffel. Deformation quantization for actions of Rd . Mem. Amer. Math. Soc.,
506, 1993. [399]
[236] W. Ritz. Recherches critiques sur l'électrodynamique générale. Annales de Chimie et de Physique, 13:145–275, 1908. [141]
[237] B. Robertson. Equations of motion in nonequilibrium statistical mechanics. Phys.
Rev., 144:151161, 1966. [199, 227]
[238] R.T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1996. [162]
[239] R.T. Rockafellar. Second-order convex analysis. J. Nonlinear Convex Anal, 1:116,
1999. [171]
[240] D. Roller. The early development of the concepts of temperature and heat. Harvard Univ. Press, Cambridge, Mass., 1950. [167]
[241] J. Rothstein. Nuclear spin echo experiments and the foundations of statistical mechanics. Amer. J. Physics, 25:510518, 1957. [225]
[242] W. Rudin. Functional Analysis. McGrawHill, 1991. [370]
[243] D. Ruelle. Statistical mechanics. Rigorous results. Imperial College Press, London,
1999. [208]
[244] D. Ruelle. Thermodynamic formalism. The mathematical structures of equilibrium
statistical mechanics, 2nd. ed. Cambridge Univ. Press, Cambridge, 2004. [208]
[245] E. Rutherford. The scattering of α and β particles by matter and the structure of the atom. Philosophical Magazine, Series 6, 21:669–688, 1911. [146]
[246] M. Scheunert. The theory of Lie superalgebras. Springer Verlag, Lecture notes in
mathematics (716), 1979. [436]
[247] M. Schlosshauer. Decoherence, the measurement problem, and interpretations of
quantum mechanics. Rev. Mod. Phys., 76:12671305, 2005. [219]
[248] E. Schrödinger. Der stetige Übergang von der Mikro- zur Makromechanik. Naturwissenschaften, 14(28):664–666, 1926. [426]
[249] E. Schrodinger. An undulatory theory of the mechanics of atoms and molecules. Phys.
Rev., 28:1049, 1926. [146]
[250] J.S. Schwinger. Spin, statistics, and the TCP theorem. Proc. Nat. Acad. Sci., 44:223
228, 1958. [437]
[251] I. E. Segal. . Illinois J. Math., 6:520, 1962. [430]
[252] K. Shoemake. Animating rotation with quaternion curves. In SIGGRAPH 85 proceedings, pages 245254, 1985. [64]
[253] B. Simon. Resonances and complex scaling: A rigorous overview. International
Journal of Quantum Chemistry, 1978. [149]
[254] L. Sklar. Physics and Chance. Cambridge Univ. Press, Cambridge, 1993. [189, 219,
232, 243]
[255] R. Slansky. Group theory for unified model building. Phys. Reports, 79:1128, 1981.
[468]
[256] F.T. Smith. Diabatic and adiabatic representations for atomic collision problems.
Phys. Rev., 179:111123, 1969. [218]
[257] H. Spohn. Kinetic equations from Hamiltonian dynamics: Markovian limits. Rev.
Mod. Phys., 52:569615, 1980. [201, 228]
[258] H.P. Stapp. The Copenhagen interpretation. Amer. J. Phys., 40:10981116, 1972.
[219]
[275] I. Vaisman. Lectures on the Geometry of Poisson Manifolds. Birkhauser, Basel, 1994.
[379]
[276] B.L. van der Waerden. Sources of quantum mechanics. North-Holland Publishing
Company, 1967. [x, 144, 150]
[277] L. van Hove. Sur certaines représentations unitaires d'un groupe infini de transformations. Proc. Roy. Acad. Sci. Belgium, 26:1102, 1951. [401]
[278] M.A.A. van Leeuwen, A.M. Cohen, and B. Lisser. Lie, a computer algebra package for lie group computations, 2008. Available from World Wide Web: http:
//www-math.univ-poitiers.fr/~maavl/LiE/. Web site. [250]
[279] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38:4995,
1996. [315]
[280] V.S. Varadarajan. Supersymmetry for mathematicians: An introduction. AMS, Courant Lecture Notes, 2004. [436]
[281] A. Voros. Semiclassical approximations. Ann. Inst. Henri Poincare, 24:3190, 1976.
[450]
[282] D. Wallace. Implications of quantum theory in the foundations of statistical mechanics. Technical report, 2001. Available from World Wide Web: http://
philsci-archive.pitt.edu/archive/00000410. [219]
[Figure: schematic of a single-photon source: a Ca+ ion in a 0.1 mm resonator driven by 397 nm and 866 nm lasers, with a mirror, a semipermeable mirror, and a detector.]