$$\operatorname{Pl}(f) = \prod_i M\bigl(f(s_i), f(s_{i+1})\bigr),$$
where $s_i$ runs over consecutive symbols in the coded message. Functions $f$ which
have high values of $\operatorname{Pl}(f)$ are good candidates for decryption. Maximizing $f$'s were
searched for by running the following Markov chain Monte Carlo algorithm:
• Start with a preliminary guess, say $f$.
• Compute $\operatorname{Pl}(f)$.
• Change to $f^*$ by making a random transposition of the values $f$ assigns to two symbols.
• Compute $\operatorname{Pl}(f^*)$; if this is larger than $\operatorname{Pl}(f)$, accept $f^*$.
• If not, flip a $\operatorname{Pl}(f^*)/\operatorname{Pl}(f)$ coin; if it comes up heads, accept $f^*$.
• If the coin toss comes up tails, stay at $f$.
The algorithm continues, trying to improve the current $f$ by making random transpositions.
The coin tosses allow it to go to less plausible $f$'s, and keep it from
getting stuck in local maxima.
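In code, the procedure is a few lines. The following is a minimal sketch in Python, not Marc's actual program: the transition matrix M (in practice estimated by counting consecutive letter pairs in a long reference text), the coded message, and the alphabet are all assumed inputs here, and log-plausibilities are used to avoid numerical underflow.

    import math
    import random

    # Assumed (hypothetical) inputs: `coded` is the message as a list of code
    # symbols, `code_symbols` its distinct symbols, `alphabet` the plaintext
    # alphabet, and M[a][b] the first-order transition frequency of b after a.
    def log_pl(f, coded, M):
        # log Pl(f) = sum_i log M(f(s_i), f(s_{i+1}))
        return sum(math.log(M[f[s]][f[t]]) for s, t in zip(coded, coded[1:]))

    def metropolis_decrypt(coded, code_symbols, alphabet, M, steps=10_000):
        # Start with a preliminary guess: an arbitrary one-to-one assignment.
        f = dict(zip(code_symbols, random.sample(alphabet, len(code_symbols))))
        lp = log_pl(f, coded, M)
        for _ in range(steps):
            # Change to f* by transposing the values f assigns to two symbols.
            a, b = random.sample(code_symbols, 2)
            g = dict(f)
            g[a], g[b] = f[b], f[a]
            lp_new = log_pl(g, coded, M)
            # Accept if more plausible; else flip a Pl(f*)/Pl(f) coin.
            if lp_new >= lp or random.random() < math.exp(lp_new - lp):
                f, lp = g, lp_new
        return f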
Of course, the space of $f$'s is huge ($40!$ or so). Why should this Metropolis
random walk succeed? To investigate this, Marc tried the algorithm out on a
problem to which he knew the answer. Figure 2 shows a well-known section of
Shakespeare's Hamlet.
Figure 2: A well-known section of Shakespeare's Hamlet.
The text was scrambled at random and the Monte Carlo algorithm was run.
Figure 3 shows sample output.
Figure 3: Sample output of the Monte Carlo decryption run after various numbers of steps.
After 100 steps, the message is a mess. After two thousand steps, the decrypted
message makes sense. It stays essentially the same as further steps are tried. I find
it remarkable that a few thousand steps of this simple optimization procedure work
so well. Over the past few years, friends in math and computer science courses
have designed homework problems around this example [17]. Students are usually
able to successfully decrypt messages from fairly short texts; in the prison example,
about a page of code was available.
The algorithm was run on the prison text. A portion of the final result is shown
in Figure 4. It gives a useful decoding that seemed to work on additional texts.
Figure 4: A portion of the final decoding of the prison text.
I like this example because a) it is real, b) there is no question the algorithm found
the correct answer, and c) the procedure works despite the implausible underlying
assumptions. In fact, the message is in a mix of English, Spanish and prison jargon.
The plausibility measure is based on first-order transitions only. A preliminary
attempt with single-letter frequencies failed. To be honest, several practical details
have been omitted: we allowed an unspecified "?" symbol in the deviation (with
transitions to and from "?" being initially uniform). The display in Figure 4 was
cleaned up by a bit of human tinkering. I must also add that the algorithm
described has a perfectly natural derivation as Bayesian statistics. The decoding
function f is a parameter in a model specifying the message as the output of a
Markov chain with known transition matrix M(x, y). With a uniform prior on f,
the plausibility function is proportional to the posterior distribution. The algorithm
is finding the mode of the posterior.
In Section 2, I explain Markov chains and the Metropolis algorithm more care-
fully. A closely related Markov chain on permutations is analyzed in Section 3.
The arguments use symmetric function theory, a bridge between combinatorics and
representation theory.
A very different example, hard discs in a box, is introduced in Section 4. The
tools needed for study are drawn from analysis, micro-local techniques (Section 5)
along with functional inequalities (Nash and Sobolev inequalities).
Throughout, emphasis is on analysis of iterates of self-adjoint operators using
the spectrum. There are many other techniques used in modern probability. A brief
overview, together with pointers on how a beginner can learn more, is in Section 6.
2. A brief treatise on Markov chains
2.1. A finite case. Let $\mathcal{X}$ be a finite set. A Markov chain is defined by a matrix
$K(x,y)$ with $K(x,y) \ge 0$ and $\sum_y K(x,y) = 1$ for each $x$. Thus each row is a probability
measure, so $K$ can direct a kind of random walk: from $x$, choose $y$ with probability
$K(x,y)$; from $y$ choose $z$ with probability $K(y,z)$, and so on. We refer to the
outcomes $X_0 = x, X_1 = y, X_2 = z, \dots$ as a run of the chain starting at $x$. From
the definitions

$$P(X_1 = y \mid X_0 = x) = K(x,y), \qquad P(X_1 = y, X_2 = z \mid X_0 = x) = K(x,y)K(y,z).$$

From this,

$$P(X_2 = z \mid X_0 = x) = \sum_y K(x,y)K(y,z),$$

and so on. The $n$th power of the matrix has $x,y$ entry $P(X_n = y \mid X_0 = x)$.
All of the Markov chains considered in this article have stationary distributions $\pi$:
$\pi(x) > 0$, $\sum_x \pi(x) = 1$, with $\pi$ satisfying

$$(2.1) \qquad \sum_x \pi(x)K(x,y) = \pi(y).$$

Thus $\pi$ is a left eigenvector of $K$ with eigenvalue 1. The probabilistic interpretation
of (2.1) is: pick $x$ from $\pi$ and take a step from $K(x,y)$; the chance of being at $y$ is
$\pi(y)$. Thus $\pi$ is stationary for the evolution. The fundamental theorem of Markov
chains (a simple corollary of the Perron–Frobenius theorem) says, under a simple
connectedness condition, $\pi$ is unique and high powers of $K$ converge to the rank
one matrix with all rows equal to $\pi$.
Theorem 1 (Fundamental theorem of Markov chains). Let $\mathcal{X}$ be a finite set and
$K(x,y)$ a Markov chain indexed by $\mathcal{X}$. If there is $n_0$ so that $K^n(x,y) > 0$ for all
$n > n_0$, then $K$ has a unique stationary distribution $\pi$ and, as $n \to \infty$,

$$K^n(x,y) \to \pi(y) \quad \text{for each } x, y \in \mathcal{X}.$$

The probabilistic content of the theorem is that from any starting state $x$, the
$n$th step of a run of the Markov chain has a chance close to $\pi(y)$ of being at $y$ if
$n$ is large. In computational settings, $|\mathcal{X}|$ is large, it is easy to move from $x$ to $y$
according to $K(x,y)$, and it is hard to sample from $\pi$ directly.
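As a quick numerical illustration of the theorem (this toy 3-state matrix is mine, not from the text), the rows of $K^n$ flatten onto $\pi$:

    import numpy as np

    # An arbitrary connected 3-state Markov matrix (rows sum to 1).
    K = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.3, 0.6]])

    # pi is the left eigenvector of K with eigenvalue 1, normalized to sum to 1.
    vals, vecs = np.linalg.eig(K.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    print(np.linalg.matrix_power(K, 50))   # every row is approximately pi
    print(pi)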
Consider the cryptography example in the Introduction. There, $\mathcal{X}$ is the set of all
one-to-one functions $f$ from code space to the usual alphabet A, B, $\dots$, Z, 1, 2, $\dots$,
9, 0, ., ?, $\dots$. Assume there are $m$ distinct code symbols and $n$ symbols in the
alphabet space. The stationary distribution is

$$(2.2) \qquad \pi(f) = z^{-1}\prod_i M\bigl(f(s_i), f(s_{i+1})\bigr),$$

where $M$ is the (assumed given) first-order transition matrix of English and the
product ranges over consecutive coded symbols in the fixed message. The normalizing
constant $z$ is defined by

$$z = \sum_f \prod_i M\bigl(f(s_i), f(s_{i+1})\bigr).$$

Note that $z$ is unknowable practically.
2.2. The Metropolis algorithm. The problem considered here is to sample $f$'s repeatedly
from $\pi(f)$. This seems daunting because of the huge size of $\mathcal{X}$ and the problem
of the unknown $z$. The Metropolis Markov chain $K(f, f')$ gets around both difficulties.
Described in general terms, it modifies an auxiliary "proposal" Markov chain $J(x,y)$ on $\mathcal{X}$:

$$(2.3) \qquad K(x,y) = \begin{cases} J(x,y) & \text{if } x \neq y,\ A(x,y) \ge 1,\\ J(x,y)A(x,y) & \text{if } x \neq y,\ A(x,y) < 1,\\ J(x,y) + \sum_{z:\,A(x,z)<1} J(x,z)\bigl(1 - A(x,z)\bigr) & \text{if } x = y. \end{cases}$$

In (2.3), the acceptance ratio is $A(x,y) = \pi(y)J(y,x)/\pi(x)J(x,y)$. The formula
(2.3) has a simple interpretation: from $x$, choose $y$ with probability $J(x,y)$; if
$A(x,y) \ge 1$, move to $y$; if $A(x,y) < 1$, flip a coin with this success probability and
move to $y$ if success occurs; in other cases, stay at $x$. Note that the normalizing
constant for $\pi$ cancels out in all calculations. The new chain satisfies

$$\pi(x)K(x,y) = \pi(y)K(y,x),$$

and thus

$$\sum_x \pi(x)K(x,y) = \sum_x \pi(y)K(y,x) = \pi(y)\sum_x K(y,x) = \pi(y),$$

so that $\pi$ is a left eigenvector with eigenvalue 1. If the chain (2.3) is connected,
Theorem 1 is in force. After many steps of the chain, the chance of being at $y$ is
approximately $\pi(y)$, no matter what the starting state. Textbook treatments of
the Metropolis algorithm are in [44] or [62]. A literature review can be found in
[31].
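As a sketch (assuming a finite state space with the proposal J stored as a matrix and $\pi$ given only up to its normalizing constant), one step of (2.3) reads:

    import numpy as np

    def metropolis_step(x, J, pi_weights, rng):
        # One step of the Metropolis chain (2.3). J is the proposal matrix and
        # pi_weights are unnormalized stationary weights: only ratios enter,
        # so the normalizing constant z is never needed.
        y = rng.choice(len(pi_weights), p=J[x])        # propose y from J(x, .)
        A = (pi_weights[y] * J[y, x]) / (pi_weights[x] * J[x, y])
        if A >= 1 or rng.random() < A:                 # accept with prob min(A, 1)
            return y
        return x

    rng = np.random.default_rng(0)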
In the cryptography example, $\mathcal{X}$ is all one-to-one functions from symbol space (say
of size $m$) to alphabet space (say of size $n \ge m$). Thus $|\mathcal{X}| = n(n-1)\cdots(n-m+1)$.
This is large if, e.g., $m = n = 50$. The stationary distribution is given in (2.2). The
proposal chain makes random transpositions: $J(f, f') = 1/\binom{n}{2}$ if $f, f'$ differ by
switching the values attached to two symbols, and $J(f, f') = 0$ otherwise. Clearly
$J(f, f') = J(f', f)$, so $A(f, f') = \pi(f')/\pi(f)$.
2.3. Convergence. A basic problem of Markov chain theory concerns the rate of
convergence in $K^n(x,y) \to \pi(y)$. How long must the chain be run to be suitably
close to $\pi$? It is customary to measure distances between two probabilities by total
variation distance:

$$\|K^n_x - \pi\|_{TV} = \frac{1}{2}\sum_y |K^n(x,y) - \pi(y)| = \max_{A \subseteq \mathcal{X}} |K^n(x,A) - \pi(A)|.$$

This yields the math problem: Given $K$, $\pi$, $x$ and $\epsilon > 0$, how large must $n$ be so that

$$\|K^n_x - \pi\|_{TV} < \epsilon\,?$$

Sadly, there are very few practical problems where this question can be answered.
In particular, no useful answer is known for the cryptography problem. In
Section 3, a surrogate problem is set up and solved. It suggests that when $n \doteq m$,
order $n\log n$ steps suffice for mixing.
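For chains small enough to store $K$ explicitly, the total variation question can at least be answered by direct computation; a sketch:

    import numpy as np

    def tv_mixing_time(K, pi, x, eps=0.25, n_max=100_000):
        # Smallest n with (1/2) sum_y |K^n(x, y) - pi(y)| < eps.
        row = K[x].copy()
        for n in range(1, n_max + 1):
            if 0.5 * np.abs(row - pi).sum() < eps:
                return n
            row = row @ K            # advance one step: row is now K^{n+1}(x, .)
        return None

Of course, this brute force is exactly what is unavailable for state spaces of size $40!$; that is what makes bounds like those of Section 3 necessary.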
Suppose, as is the case for the examples in this paper, that the Markov chain
is reversible: $\pi(x)K(x,y) = \pi(y)K(y,x)$. Let $L^2(\pi)$ be $\{g : \mathcal{X} \to \mathbb{R}\}$ with inner
product

$$\langle g, h\rangle = \sum_x g(x)h(x)\pi(x).$$

Then $K$ operates on $L^2(\pi)$ by

$$Kg(x) = \sum_y g(y)K(x,y).$$

Reversibility implies $\langle Kg, h\rangle = \langle g, Kh\rangle$, so $K$ is self-adjoint. Now, the spectral
theorem says there is an orthonormal basis of eigenvectors $f_i$ and eigenvalues $\beta_i$
(so $Kf_i = \beta_i f_i$) for $0 \le i \le |\mathcal{X}| - 1$, and $1 = \beta_0 \ge \beta_1 \ge \cdots \ge \beta_{|\mathcal{X}|-1} \ge -1$. By
elementary manipulations,

$$K(x,y) = \pi(y)\sum_{i=0}^{|\mathcal{X}|-1}\beta_i f_i(x)f_i(y), \qquad K^n(x,y) = \pi(y)\sum_{i=0}^{|\mathcal{X}|-1}\beta_i^n f_i(x)f_i(y).$$

Using the Cauchy–Schwarz inequality, we have

$$(2.4) \qquad 4\|K^n_x - \pi\|_{TV}^2 \le \sum_y \frac{\bigl(K^n(x,y) - \pi(y)\bigr)^2}{\pi(y)} = \sum_{i=1}^{|\mathcal{X}|-1}\beta_i^{2n} f_i^2(x).$$
The bound (2.4) is the basic eigenvalue bound used to get rates of convergence
for the examples presented here. To get sharp bounds on the right hand side
requires good control of both eigenvalues and eigenvectors. For more detail and
many examples, see [79]. A detailed example on the permutation group is given
in Section 3 below. Examples on countable and continuous spaces are given in
Section 5.
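The right side of (2.4) is computable whenever $K$ and $\pi$ are explicitly available: conjugating by $D = \operatorname{diag}(\sqrt{\pi})$ turns a reversible $K$ into a symmetric matrix with the same eigenvalues, and dividing by $\sqrt{\pi}$ recovers eigenvectors orthonormal in $L^2(\pi)$. A sketch:

    import numpy as np

    def eigen_bound(K, pi, x, n):
        # Right side of (2.4): sum_{i >= 1} beta_i^{2n} f_i(x)^2, an upper
        # bound on 4 ||K^n_x - pi||_TV^2. Requires reversibility of (K, pi).
        d = np.sqrt(pi)
        S = K * d[:, None] / d[None, :]      # S = D K D^{-1} is symmetric
        beta, phi = np.linalg.eigh(S)        # eigenvalues in increasing order
        f_x = phi[x, :] / d[x]               # f_i(x) = phi_i(x) / sqrt(pi(x))
        keep = np.arange(len(beta)) != np.argmax(beta)   # drop beta_0 = 1
        return np.sum(beta[keep] ** (2 * n) * f_x[keep] ** 2)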
2.4. General state spaces. Markov chains are used to do similar calculations
on Euclidean and infinite-dimensional spaces. My favorite introduction to Markov
chains is the book by Brémaud [10], but there are many sources: For finite state
spaces see [83]. For a more general discussion, see [7] and the references in Section
6.1.
Briefly, if $(\mathcal{X}, \mathcal{B})$ is a measurable space, a Markov kernel $K(x, dy)$ is a probability
measure $K(x, \cdot)$ for each $x$. Iterates of the kernel are given by, e.g.,

$$K^2(x, A) = \int K(z, A)\,K(x, dz).$$

A stationary distribution is a probability $\pi(dx)$ satisfying

$$\pi(A) = \int K(x, A)\,\pi(dx).$$

Under simple conditions, $K^n(x, A) \to \pi(A)$ and exactly the same problems arise.
Reversible Markov chains yield bounded self-adjoint operators and spectral tech-
niques can again be tried. Examples are in Section 4, Section 5, and Section 6.
3. From cryptography to symmetric function theory
This section answers the question, "What does a theorem in this subject look
like?" It also illustrates how even seemingly simple problems can call on tools from
disparate fields, in this case, symmetric function theory, a blend of combinatorics
and representation theory. This section is drawn from joint work with Phil Hanlon
[21].
3.1. The problem. Let $\mathcal{X} = S_n$, the symmetric group on $n$ letters. Define a
probability measure on $S_n$ by

$$(3.1) \qquad \pi(\sigma) = z^{-1}\theta^{d(\sigma,\sigma_0)} \qquad \text{for } \sigma, \sigma_0 \in S_n,\ 0 < \theta \le 1.$$

In (3.1), $d(\sigma,\sigma_0)$ is a metric on the symmetric group, here taken to be

$$d(\sigma,\sigma_0) = \text{minimum number of transpositions required to bring } \sigma \text{ to } \sigma_0.$$

This is called Cayley's distance in [20] because a result of A. Cayley implies that
$d(\sigma,\sigma_0) = n - c(\sigma\sigma_0^{-1})$ with $c(\sigma)$ the number of cycles in $\sigma$. The metric is bi-invariant:

$$d(\sigma,\sigma_0) = d(\tau\sigma,\tau\sigma_0) = d(\sigma\tau,\sigma_0\tau) \qquad \text{for all } \tau \in S_n.$$
The normalizing constant $z$ is known in this example:

$$z = \sum_{\sigma}\theta^{d(\sigma,\sigma_0)} = \prod_{i=1}^{n}\bigl(1 + \theta(i-1)\bigr).$$

If $\theta = 1$, $\pi(\sigma)$ is the uniform distribution on $S_n$. For $\theta < 1$, $\pi(\sigma)$ is largest
at $\sigma_0$ and falls off from its maximum as $\sigma$ moves away from $\sigma_0$. It serves as a
natural non-uniform distribution on $S_n$, peaked at a point. Further discussion of
this construction (called Mallows model through Cayley's metric) with examples
from psychology and computer science is in [18, 19, 28]. The problem studied here
is:
How can samples be drawn from $\pi$?
One route is to use the Metropolis algorithm, based on random transpositions.
Thus, from $\sigma$, choose a transposition $(i,j)$ uniformly at random and consider
$\sigma(i,j) = \sigma'$. If $d(\sigma',\sigma_0) \le d(\sigma,\sigma_0)$, the chain moves to $\sigma'$. If $d(\sigma',\sigma_0) > d(\sigma,\sigma_0)$,
flip a $\theta$-coin; if this comes up heads, move to $\sigma'$, and otherwise stay at $\sigma$. In summary,
the chain is

$$(3.2) \qquad K(\sigma,\sigma') = \begin{cases} 1/\binom{n}{2} & \text{if } \sigma' = \sigma(i,j),\ d(\sigma',\sigma_0) < d(\sigma,\sigma_0),\\ \theta/\binom{n}{2} & \text{if } \sigma' = \sigma(i,j),\ d(\sigma',\sigma_0) > d(\sigma,\sigma_0),\\ c_\sigma(1-\theta)/\binom{n}{2} & \text{if } \sigma' = \sigma,\\ 0 & \text{otherwise,} \end{cases}$$

where $c_\sigma$ is the number of transpositions $(i,j)$ with $d(\sigma(i,j),\sigma_0) > d(\sigma,\sigma_0)$.
It is easy to check that

$$\pi(\sigma)K(\sigma,\sigma') = \pi(\sigma')K(\sigma',\sigma),$$
so that the chain has stationary distribution $\pi$. When $n = 3$ and $\sigma_0 = \mathrm{id}$, the
transition matrix is

$$\begin{array}{c|cccccc} & \mathrm{id} & (12) & (13) & (23) & (123) & (132)\\ \hline \mathrm{id} & 1-\theta & \theta/3 & \theta/3 & \theta/3 & 0 & 0\\ (12) & 1/3 & \frac{2}{3}(1-\theta) & 0 & 0 & \theta/3 & \theta/3\\ (13) & 1/3 & 0 & \frac{2}{3}(1-\theta) & 0 & \theta/3 & \theta/3\\ (23) & 1/3 & 0 & 0 & \frac{2}{3}(1-\theta) & \theta/3 & \theta/3\\ (123) & 0 & 1/3 & 1/3 & 1/3 & 0 & 0\\ (132) & 0 & 1/3 & 1/3 & 1/3 & 0 & 0 \end{array}$$

The stationary distribution is the left eigenvector proportional to $(1, \theta, \theta, \theta, \theta^2, \theta^2)$.
This example bears a passing resemblance to the cryptography example: the set
of one-to-one functions of an m-set to an n-set is replaced by the symmetric group.
Presumably, the stationary distribution in the cryptography example is peaked at
a point (the best decoding) and the algorithms are essentially the same.
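A sketch of the chain (3.2) with $\sigma_0 = \mathrm{id}$, using Cayley's formula $d(\sigma,\mathrm{id}) = n - c(\sigma)$ to track the distance by counting cycles:

    import random

    def num_cycles(sigma):
        # Number of cycles of a permutation of {0, ..., n-1} given as a list.
        seen, count = [False] * len(sigma), 0
        for i in range(len(sigma)):
            if not seen[i]:
                count += 1
                j = i
                while not seen[j]:
                    seen[j] = True
                    j = sigma[j]
        return count

    def mallows_metropolis(n, theta, steps):
        # Metropolis chain (3.2) for pi(sigma) proportional to theta^{d(sigma, id)}.
        sigma = list(range(n))                 # start at the identity
        d = 0                                  # Cayley distance to the identity
        for _ in range(steps):
            i, j = random.sample(range(n), 2)  # random transposition (i, j)
            sigma[i], sigma[j] = sigma[j], sigma[i]
            d_new = n - num_cycles(sigma)      # distance changes by exactly 1
            if d_new > d and random.random() >= theta:
                sigma[i], sigma[j] = sigma[j], sigma[i]  # theta-coin tails: undo
            else:
                d = d_new
        return sigma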
To analyze the chain (3.2) using spectral theory requires knowledge of the eigenvalues
and vectors. By what still seems like a miracle, these are available in closed
form. When $\theta = 1$, the chain (3.2) reduces to the "transpose at random" chain, perhaps
the first Markov chain given a sharp analysis [32]. Here is a typical result
drawn from work with Phil Hanlon [21].
Theorem 2. For $0 < \theta \le 1$, the Markov chain $K(\sigma,\sigma')$ of (3.2) commutes with
the action of the symmetric group by conjugation, so only transitions between
conjugacy classes are needed. When $n = 3$, the transition matrix becomes

$$\begin{array}{c|ccc} & 1^3 & (2,1) & 3\\ \hline 1^3 & 1-\theta & \theta & 0\\ (2,1) & 1/3 & \frac{2}{3}(1-\theta) & \frac{2}{3}\theta\\ 3 & 0 & 1 & 0 \end{array}$$

with stationary distribution proportional to $(1, 3\theta, 2\theta^2)$. Let

$$(3.3) \qquad M(\lambda,\mu), \quad m(\lambda)$$

be the transition matrix and the stationary distribution, indexed by partitions $\lambda, \mu$ of $n$.
Theorem 3. For $0 < \theta \le 1$, the Markov chain (3.3) has an eigenvalue $\beta_\lambda$ for each
partition $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_r)$ of $n$, with

$$\beta_\lambda = (1-\theta) + \frac{\theta\,n(\lambda') - n(\lambda)}{\binom{n}{2}}, \qquad n(\lambda) = \sum_{i=1}^{r}(i-1)\lambda_i,$$

where $\lambda'$ is the conjugate partition.
The corresponding right eigenfunction, normed to be orthonormal in $L^2(m)$, is

$$(3.4) \qquad \frac{c_\lambda(\mu)}{m(\mu)}\left(\frac{j_\lambda\,\theta^{n}}{n!}\right)^{1/2}.$$

In (3.4), $p_\mu = p_{\mu_1}p_{\mu_2}\cdots$ denotes the power sum symmetric functions, $j_\lambda$ is the
norming constant of the Jack symmetric function $J_\lambda$, and the $c_\lambda(\mu)$ are the change
of basis coefficients in

$$J_\lambda(x;\theta) = \sum_{\mu}c_\lambda(\mu)\,p_\mu(x).$$
The $c_\lambda(\mu)$ are rational functions of $\theta$. For example, when $n = 3$,

$$J_{1^3} = p_{1^3} - 3p_{2,1} + 2p_3,$$
$$J_{2,1} = p_{1^3} + (\theta - 1)p_{2,1} - \theta p_3,$$
$$J_{3} = p_{1^3} + 3\theta p_{2,1} + 2\theta^2 p_3.$$
The algebraic combinatorics community had developed properties of Jack symmetric
functions "because they were there." Using this knowledge allowed us to
properly normalize the eigenfunctions and work with them to prove Theorems 2 and
3. Many more examples of this type of interplay are in [14]. A textbook account
of our work is in [48].
There is a fascinating research problem opened up by this analysis. When $\theta = 1$,
the Jack symmetric functions are Schur functions and the change of basis coefficients
are the characters of the symmetric group. The Markov chain becomes
random transpositions. This was analyzed in joint work with Shahshahani [32].
Adding in the deformation by the Metropolis algorithm deforms the eigenvalues
and eigenvectors in a mathematically natural way. Is there a similar deformation
that gets the coefficients of the Macdonald polynomials? This is ongoing joint work
with Arun Ram. Changing the metric on $S_n$, using pairwise adjacent transpositions
instead of all transpositions, gives a deformation to Hecke algebras. The Metropolis
algorithm gives a probabilistic interpretation of the multiplication in these algebras.
This again is joint work with Ram [28]. This affinity between the physically natural
Metropolis algorithm and algebra is a mystery which cries out for explanation.
Turning back toward the cryptography example, how do things change if we go
from the permutation group to the set of one-to-one functions from an $m$-set to
an $n$-set? When $\theta = 1$, this was worked out by Andrew Greenhalgh. The analysis
involves the algebra of functions on $S_n$ which are invariant under conjugation by
the subgroup $S_m \times S_{n-m}$ and bi-invariant under the subgroup $S_{n-m}$. These doubly
invariant functions form a commutative algebra discussed further in [14, Sect. 9.8].
Do things deform well when $\theta \neq 1$? It is natural to guess the answer is "Yes."
It is important to end these fascinating success stories with the observation that
any similarly useful analysis of the original cryptography example seems remote.
Further, getting rates of convergence for the Metropolis algorithm for other metrics
in (3.1) is a challenging open problem.
4. Hard discs in a box
Consider possible placements of $n$ discs of radius $\epsilon$ in the unit square. The discs
must be non-overlapping and completely contained in the unit square. Examples
at low and high density (kindly provided by Werner Krauth from [57]) are shown
in Figure 5.
Figure 5: Hard disc configurations at low density (0.48) and high density (0.72).
In applications, $n$ is fairly large (e.g., $100$ to $10^6$) and of course $\epsilon$ should be suitably
small. The centers of the discs give a point in $\mathbb{R}^{2n}$. We know very, very little about
the topology of the set $\mathcal{X}(n,\epsilon)$ of configurations: for fixed $n$, what are useful bounds
on $\epsilon$ for the space to be connected? What are the Betti numbers? Of course, for
small $\epsilon$ this set is connected but very little else is known. By its embedding in
$\mathbb{R}^{2n}$, $\mathcal{X}(n,\epsilon)$ inherits a natural uniform distribution, Lebesgue measure restricted
to $\mathcal{X}(n,\epsilon)$. The problem is to pick points in $\mathcal{X}(n,\epsilon)$ uniformly. If $X_1, X_2, \dots, X_k$
are chosen from the uniform distribution and $f : \mathcal{X}(n,\epsilon) \to \mathbb{R}$ is a function, we may
approximate

$$(4.1) \qquad \int_{\mathcal{X}(n,\epsilon)} f(x)\,dx \quad \text{by} \quad \frac{1}{k}\sum_{i=1}^{k} f(X_i).$$
Motivation for this task and some functions f of interest will be given at the end
of this section.
This hard disc problem is the original motivation for the Metropolis algorithm.
Here is a version of the Metropolis algorithm for hard discs.
• Start with some $x \in \mathcal{X}(n,\epsilon)$.
• Pick a disc center at random (probability $1/n$).
• Pick a point in a disc of radius $h$, centered at the chosen disc center, at
random (from Lebesgue measure).
• Try to move the chosen disc center to the chosen point; if the resulting
configuration is in $\mathcal{X}(n,\epsilon)$, accept the move; else, stay at $x$.
The algorithm continues, randomly moving coordinates. If $X_1, X_2, \dots, X_k$ denotes
the successive configurations, theory shows that $X_k$ becomes uniformly distributed
provided $h$ is suitably small and $k$ is large. For large $k$, the $X_i$
can be used as in (4.1).
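A sketch of one move of this chain, with the configuration stored as an $n \times 2$ array of centers (boundary and overlap checks are all that is needed):

    import numpy as np

    def hard_disc_step(x, eps, h, rng):
        # One Metropolis move: x is an (n, 2) array of disc centers, eps the
        # disc radius, h the proposal radius.
        n = x.shape[0]
        i = rng.integers(n)                    # pick a disc center at random
        while True:                            # uniform point in a disc of radius h
            dz = rng.uniform(-h, h, size=2)
            if dz @ dz <= h * h:
                break
        y = x[i] + dz
        inside = np.all((y >= eps) & (y <= 1 - eps))
        others = np.delete(x, i, axis=0)
        no_overlap = np.all(np.sum((others - y) ** 2, axis=1) >= (2 * eps) ** 2)
        if inside and no_overlap:              # configuration still in X(n, eps)
            x[i] = y                           # accept the move
        return x                               # else stay at x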
Motivation. The original motivation for this problem comes from the study of
phase transition in statistical mechanics. For many substances (e.g., water), exper-
iments produce phase diagrams such as that shown in Figure 6.
Figure 6: A typical phase diagram (solid, liquid, and vapor phases).
Every aspect of such phase diagrams is intensely studied. The general picture, a
finite length liquid-vapor phase transition line ending in a critical point, a triple
point where all three forms co-exist, and a solid-liquid phase line seemingly extending
to infinity, seems universal. The physicist G. Uhlenbeck [87, p. 11] writes "Note
that since these are general phenomena, they must have general explanation; the
precise details of the molecular structure and of the intermolecular forces should
not matter." In discussing the solid-liquid transition, Uhlenbeck [87, p. 18] notes
that the solid-liquid transition seemingly occurs at any temperature provided the
pressure is high enough. He suggests that at high pressure "the attractive intermolecular
force does not play a role . . . and that it is the sharp repulsive forces that
are responsible for the solid-fluid transition. It is this train of thought that explains
the great interest of the so-called Kirkwood transition. In 1941, Kirkwood posed
the problem of whether a gas of hard spheres would show a phase transition. . . ."
From then to now, chemists and physicists have studied this problem using a
variety of tools. Current findings indicate a phase transition when the density of
discs is large (about .71, still well below the close packing density). Below this
transition density, the discs look random; above this density the discs look close
to a lattice packing. These notions are quantified by a variety of functions $f$. For
example,

$$f(x) = \frac{1}{N}\sum_{j=1}^{N}\frac{1}{N_j}\sum_{k} e^{6i\theta_{jk}},$$

where the sum is over the $N$ particles encoded by $x \in \mathbb{R}^{2N}$, the sum in $k$ is over the
$N_j$ neighbors of the $j$th particle, and $\theta_{jk}$ is the angle between the particles $j$ and $k$
in an arbitrary but fixed reference frame. If the configuration $x$ has a local hexatic
structure, this sum should be small. Typical values of $f$ are studied by simulation.
Different functions are used to study long-range order.
The above rough description may be supplemented by the useful survey of [64].
A host of simulation methods are detailed in [2]. An up-to-date tutorial on hard
discs appears in [57, Chap. 2].
For the purposes of this paper, the main points are i) the hard disc model is a
basic object of study and ii) many key findings have been based on variants of the
Metropolis algorithm. In the next section, we flush out the Metropolis algorithm
to more standard mathematics.
5. Some mathematics
Here is a generalization of the hard discs Metropolis algorithm. Let $\Omega \subseteq \mathbb{R}^d$ be
a bounded connected open set. Let $p(x) > 0$, $z = \int_\Omega p(x)\,dx < \infty$; normalizing by
$z$, we may assume $p$ is a probability density on $\Omega$. Let $B_1$ be the unit ball of $\mathbb{R}^d$
and $h > 0$ a small scale parameter. Define a Markov kernel by

$$(5.1) \qquad P(x, dy) = \frac{1}{h^d\operatorname{Vol}(B_1)}\,\mathbf{1}_{B_1}\!\left(\frac{x-y}{h}\right)\min\left(\frac{p(y)}{p(x)}, 1\right)dy + m(x)\,\delta_x(dy)$$

with

$$m(x) = 1 - \frac{1}{h^d\operatorname{Vol}(B_1)}\int_\Omega \mathbf{1}_{B_1}\!\left(\frac{x-y}{h}\right)\min\left(\frac{p(y)}{p(x)}, 1\right)dy.$$

This kernel operates on $L^2(p)$ via

$$Pf(x) = \int_\Omega f(y)\,P(x, dy).$$
It is easy to see that $P(x, dy)$ is a bounded self-adjoint operator on $L^2(p)$. The
associated Markov chain may be described in English by:
• Start at $X_0 = x \in \Omega$.
• Pick $X_1$ from $P(x, dy)$.
• Pick $X_2$ from $P(X_1, dy)$.
• And so on . . . .
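Concretely, one step of (5.1) on, say, the unit square (an arbitrary example domain) looks as follows; proposals falling outside $\Omega$ are treated as rejections, which is what the holding probability $m(x)$ accounts for:

    import numpy as np

    def ball_walk_step(x, p, h, rng):
        # One step of the Metropolis kernel (5.1) on Omega = (0, 1)^2.
        while True:                                # y uniform in the ball B_h(x)
            dz = rng.uniform(-h, h, size=2)
            if dz @ dz <= h * h:
                break
        y = x + dz
        if not np.all((y > 0) & (y < 1)):          # proposal left Omega: hold
            return x
        if rng.random() < min(p(y) / p(x), 1.0):   # accept w.p. min(p(y)/p(x), 1)
            return y
        return x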
Thus

$$P\{X_2 \in A\} = P_x^2(A) = \int_\Omega P(z, A)\,P(x, dz), \qquad P\{X_k \in A\} = P_x^k(A) = \int_\Omega P(z, A)\,P^{k-1}(x, dz).$$

Under our assumptions ($\Omega$ connected, $h$ small), for all $x \in \Omega$ and $A \subseteq \Omega$, the
algorithm works:

$$P_x^k(A) \xrightarrow[k\to\infty]{} \int_A p(y)\,dy.$$
It is natural to ask how fast this convergence takes place: how many steps should
the algorithm be run to do its job? In joint work with Gilles Lebeau and Laurent
Michel, we prove the following.
Theorem 4. Let $\Omega$ be a connected Lipschitz domain in $\mathbb{R}^d$. For $p$ measurable (with
$0 < m \le p(x) \le M < \infty$ on $\Omega$) and $h$ fixed and small, the Metropolis algorithm
(5.1) satisfies

$$(5.2) \qquad \left|P_x^k(A) - \int_A p(y)\,dy\right| \le c_1 e^{-c_2 k h^2}$$

uniformly in $x \in \Omega$, $A \subseteq \Omega$.
In (5.2), $c_1, c_2$ are positive constants that depend on $p$ and $\Omega$ but not on $x$, $k$
or $h$. The result is sharp in the sense that there is a matching lower bound. Good
estimates of $c_2$ are available (see the following section).
Note that the Metropolis algorithm (5.1) is based on steps in the full-dimensional
ball $B_h(x)$ while the Metropolis algorithm for discs in Section 4 is based on just
changing two coordinates at a time. With extra effort, a result like (5.2) can be
shown for the hard disc problem as well. Details are in [25]. As a caveat, note that
we do not have good control on $c_1$ in terms of the dimension $d$ or smoothness of $\Omega$.
The results are explicit but certainly not sharp.
The Metropolis algorithm of this section is on a Euclidean space with basic steps
driven by a ball walk. None of this is particularly important. The underlying state
space can be quite general, from finite (all one-to-one functions from one finite set
to another as in our cryptography example) to infinite-dimensional (Markov chains
on spaces of measures). The proposal distribution needn't be symmetric. All of
the introductory books on simulation discussed in Section 6 develop the Metropolis
algorithm. In [8] it is shown to be the $L^1$ projection of the proposal distribution to
the $p$ self-adjoint kernels on $\Omega$. A survey of rates of convergence results on finite-state
spaces with extensive references to the work of computer science theorists
on approximate counting and mathematical physicists on Ising models is in [31].
Finally, there are many other classes of algorithms and proof techniques in active
development. This is brought out in Section 6 below.
5.1. Ideas and tools. To analyze rates of convergence it is natural to try spec-
tral theory, especially if the operators are self-adjoint. This sometimes works. It
is sometimes necessary to supplement with tools, such as comparison and extension
theory, Weyl-type bounds on eigenvalues, bounds on eigenvectors, and Nash
and Sobolev inequalities. These are basic tools of modern analysis. Their use in a
concrete problem may help some readers come into contact with this part of the
mathematical world.
Spectral bounds for Markov chains. Let $\mathcal{X}$ be a set, $\mu(dx)$ a reference measure
and $m(x)$ a probability density with respect to $\mu$ (so $m(x) \ge 0$, $\int m(x)\mu(dx) = 1$).
Let $P(x, dy)$ be a Markov kernel on $\mathcal{X}$. This means that for each $x$, $P(x, \cdot)$
is a probability measure on $\mathcal{X}$. This $P$ may be used to run a Markov chain
$X_0, X_1, X_2, \dots$, with starting state $X_0 = x$ say, by choosing $X_1$ from $P(x, \cdot)$ and
then $X_2$ from $P(X_1, \cdot)$, and so on. The pair $(m, P)$ is called reversible (physicists say
satisfies detailed balance) if $P$ operating on $L^2(m)$ by $Pf(x) = \int f(y)P(x, dy)$
is self-adjoint: $\langle Pf, g\rangle = \langle f, Pg\rangle$. Often, $P(x, dy) = p(x,y)\mu(dy)$ has a kernel and
reversibility becomes $m(x)p(x,y) = m(y)p(y,x)$ for all $x, y$. This says the chain run
forward is the same as the chain run backward, in analogy with the time reversibility
of the laws of mechanics. Here $P$ operates on all of $L^2(m)$ so we are dealing with
bounded self-adjoint operators.
Suppose for a moment that $P$ has a square integrable kernel $p(x,y)$, so $Pf(x) =
\int_{\mathcal{X}} p(x,y)f(y)\mu(dy)$. Then $P$ is compact and there are eigenvectors $f_i$ and eigenvalues
$\beta_i$ so

$$Pf_i = \beta_i f_i;$$

under a mild connectedness condition, $f_0 \equiv 1$, $\beta_0 = 1$ and $1 = \beta_0 > \beta_1 \ge \beta_2 \ge \cdots > -1$. Then
$$p(x,y) = m(y)\sum_{i=0}^{\infty}\beta_i f_i(x)f_i(y),$$

and the iterated kernel satisfies

$$p^n(x,y) = m(y)\sum_{i=0}^{\infty}\beta_i^n f_i(x)f_i(y).$$

If $f_i(x), f_i(y)$ are bounded (or at least controllable), since $f_0 \equiv 1$ and $|\beta_i| < 1$ for $i \ge 1$,

$$p^n(x,y) \to m(y) \quad \text{as } n \to \infty.$$
This is the spectral approach to convergence. Note that to turn this into a quantitative
bound (from starting state $x$, how large must $n$ be to have $\|P^n_x - m\| < \epsilon$?),
the $\beta_i$ and $f_i$ must be well understood.
The Metropolis algorithm on the permutation group discussed in Section 3 gives
an example on finite spaces. Here is an example with an infinite state space drawn
from my work with Khare and Saloff-Coste [23] where this program can be usefully
pushed through. Note that this example does not arise from the Metropolis
construction. It arises from a second basic construction, Glauber dynamics.
Example 2 (Birth and immigration). The state space is $\mathcal{X} = \{0, 1, 2, \dots\}$. Let $\mu(dx)$
be counting measure and

$$(5.3) \qquad m(x) = \frac{1}{2^{x+1}}, \qquad p(x,y) = \binom{x+y}{x}\left(\frac{1}{3}\right)^{x+y+1}\Big/\left(\frac{1}{2}\right)^{x+1}.$$
This Markov chain is used to model population dynamics with immigration. If the
population size at generation $n$ is denoted $X_n$, then, given $X_n = x$,

$$X_{n+1} = \sum_{i=1}^{x} N_{i,n} + M_{n+1},$$

where $N_{i,n}$, the number of offspring of the $i$th member of the population at time
$n$, are assumed to be independent and identically distributed with

$$P(N_{i,n} = j) = \frac{2}{3}\left(\frac{1}{3}\right)^{j}, \qquad 0 \le j < \infty.$$

Here $M_{n+1}$ is migration, assumed to have the same distribution as $N_{i,n}$. Note that
$m(x)p(x,y) = m(y)p(y,x)$ so reversibility is in force.
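The chain is easy to simulate from the population description, and (5.3) can be checked against detailed balance numerically (a sketch; numpy's geometric variables start at 1, so one is subtracted):

    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)

    def step(x):
        # X_{n+1} = offspring of the x current members plus one migration term,
        # each geometric on {0, 1, 2, ...} with success probability 2/3.
        return int((rng.geometric(2 / 3, size=x + 1) - 1).sum())

    def m(x):
        return 0.5 ** (x + 1)

    def p(x, y):
        return comb(x + y, x) * (1 / 3) ** (x + y + 1) * 2 ** (x + 1)

    # detailed balance: m(x) p(x, y) == m(y) p(y, x)
    assert abs(m(2) * p(2, 5) - m(5) * p(5, 2)) < 1e-12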
In (5.3) the eigenvalues are shown to be $\beta_j = 1/2^j$, $0 \le j < \infty$. The eigenfunctions
are the orthogonal polynomials for the measure $1/2^{j+1}$. These are Meixner
polynomials

$$M_j(x) = {}_2F_1\left(\begin{matrix} -j,\ -x\\ 1 \end{matrix}\;\Big|\; -1\right).$$

Now, the spectral representation gives the following.
Proposition 1. For any starting state $x$ and all $n \ge 0$,

$$\chi_x^2(n) = \sum_{y=0}^{\infty}\frac{\bigl(p^n(x,y) - m(y)\bigr)^2}{m(y)} = \sum_{i=1}^{\infty}\beta_i^{2n}\,M_i^2(x)\,\frac{1}{2^i}.$$
Next, there is an analysis problem: Given the starting population x, how large
should n be so that this chi-square distance to m is small? For this simple case,
the details are easy enough to present in public.
Proposition 2. With notation as in Proposition 1,

$$\chi_x^2(n) \le 2^{-2c} \quad \text{for } n = \log_2(1+x) + c,\ c > 0,$$
$$\chi_x^2(n) \ge 2^{2c} \quad \text{for } n = \log_2(x-1) - c,\ c > 0.$$
Proof. Meixner polynomials satisfy, for all $j$ and $x > 0$,

$$|M_j(x)| = \left|\sum_{i=0}^{j\wedge x}(-1)^i\binom{j}{i}\frac{x(x-1)\cdots(x-i+1)}{i!}\right| \le \sum_{i=0}^{j}\binom{j}{i}x^i = (1+x)^j.$$

Thus, for $n \ge \log_2(1+x) + c$,

$$\chi_x^2(n) = \sum_{j=1}^{\infty}M_j^2(x)\,2^{-j(2n+1)} \le \sum_{j=1}^{\infty}(1+x)^{2j}2^{-j(2n+1)} \le \frac{(1+x)^2\,2^{-(2n+1)}}{1 - (1+x)^2\,2^{-(2n+1)}} \le 2^{-2c}.$$

The lower bound follows from using only the lead term. Namely,

$$\chi_x^2(n) \ge (1-x)^2\,2^{-2n} \ge 2^{2c} \quad \text{for } n = \log_2(x-1) - c.$$
The results show that convergence is rapid: order $\log_2(x)$ steps are necessary
and sufficient for convergence to stationarity.
We were surprised and delighted to see classical orthogonal polynomials appearing
in a natural probability problem. The account [23] develops this and gives dozens
of other natural Markov chains explicitly diagonalized by orthogonal polynomials.
Alas, one is not always so lucky. The Metropolis chain of (5.1) has the form

$$Pf(x) = m(x)f(x) + \int h(x,y)f(y)\,dy.$$

The multiplier $m(x)$ leads to a continuous spectrum. One of our discoveries [24, 25, 59]
is that for many chains, this can be side-stepped and the basic outline above can be
pushed through to give sharp useful bounds.
5.2. Some theorems. Return to the Metropolis algorithm of Theorem 4. We are
able to prove the following.

Theorem 5. For a bounded Lipschitz domain $\Omega$ in $\mathbb{R}^d$, let $p(x)$ satisfy $0 < m \le p(x)
\le M < \infty$ for all $x \in \Omega$. Let $P_h$ be defined by (5.1). There are $h_0 > 0$, $\delta_0 \in (0,1)$,
and $c_i > 0$ so that
• $\operatorname{Spec}(P_h) \subseteq [-1 + \delta_0, 1]$ for all $h \le h_0$.
• $1$ is a simple eigenvalue of $P_h$.
• $\operatorname{Spec}(P_h) \cap [1 - \delta_0, 1]$ is discrete.
• The number of eigenvalues of $P_h$ in $[1 - \delta h^2, 1]$, $0 \le \delta \le \delta_0 h^{-2}$ (with
multiplicity), is bounded above by $c_1(1 + \delta)^{d/2}$.
• The spectral gap $G(h)$ satisfies $c_2 h^2 \le G(h) \le c_3 h^2$.
• For all $n \ge 1$ and any $x \in \Omega$, $\|P^n_{x,h} - p\|_{TV} \le c_4 e^{-nG(h)}$.
More precise evaluation of the gap is available if the boundary of the domain is
quasi-regular. Then consider the operator

$$Lf(x) = \frac{1}{2(d+1)}\left(\Delta f + \frac{\nabla p}{p}\cdot\nabla f\right)$$

with domain $\{f \in H^2(p) : \partial_n f|_{\partial\Omega} = 0\}$. Writing $\lambda_j(h)$ and $f_{j,h}$ for the eigenvalues
and eigenfunctions of $P_h$, decompose

$$P^1_h(x,y) = \sum_{\lambda_j(h)\ \text{close to}\ 1}\lambda_j(h)f_{j,h}(x)f_{j,h}(y),$$

$$P^2_h(x,y) = \sum_{\frac{1}{10} < \frac{1-\lambda_j(h)}{h^2} < \frac{9}{10}}\lambda_j(h)f_{j,h}(x)f_{j,h}(y),$$

and

$$P^3_h = P_h - P^1_h - P^2_h.$$
The pieces are orthogonal, so powers of $P_h$ equal the sum of powers of the pieces.
Now $P^1_h$ and $P^2_h$ can be analyzed as above. Of course, bounds on eigenvalues and
eigenfunctions are still needed. The continuous spectrum is hidden in $P^3_h$ and one
easily gets the crude bounds

$$\|P^3_h\|_{L^\infty\to L^\infty} \le c\,h^{-3d/2}, \qquad \|P^3_h\|_{L^2\to L^2} \le 1 - \delta_0.$$

These can be iterated to give $\|(P^3_h)^n\|_{L^\infty\to L^\infty} \le c\,e^{-\delta n}$ for a universal $\delta > 0$ and all
$n > 1/h$. Thus $P^3_h$ is negligible.
The work is fairly technical but the big picture is fairly stable. It holds for
natural walks on compact Riemannian manifolds [59] and in the detailed analysis
of the one-dimensional hard disc problem [24, 27].
The main purpose of this section is to show how careful analysis of an applied
algorithm can lead to interesting mathematics. In the next section, several further
applications of Markov chain Monte Carlo are sketched. None of these have been
analyzed.
6. Going further, looking back;
contacts with math, contacts outside math
This section covers four topics: how someone outside probability can learn more
about the present subject; a literature review on rates of convergence; a list of
examples showing how a wide spectrum of mathematical tools have been used in
analyzing Markov chains; and pointers to applications in various scientific areas.
6.1. Going further. Suppose you are a grown-up mathematician who wants
to learn some probability. The problem is, probability has its own language and
images. It's a little like learning quantum mechanics: the mathematical tools are
not a problem but the basic examples and images are foreign. There are two steps.
The first is elementary probability: the language of random variables, expectation,
independence, conditional probability, and the basic examples (binomial, Poisson,
geometric, normal, gamma, beta) with their distribution theory. The second is
mathematical probability: $\sigma$-algebras, laws of large numbers, central limit theory,
martingales, and Brownian motion. Not to mention Markov chains.
The best procedure is to first sit in on an elementary probability course and then
sit in on a first-year graduate course. There are hundreds of books at all levels.
Two good elementary books are [39] and [78]. This last is a very readable classic
(don't miss Chapter 3!). I use Billingsley's book [9] to teach graduate probability.
To learn about Monte Carlo, the classic book [44] is short and contains most of
the important ideas. The useful books ([15] or [62]) bring this up to date. Two
very good accounts of applied probability which develop Markov chain theory along
present lines are [7] and [10]. The advanced theory of Markov chains is well covered
by [3] (analytic theory), [38] (semi-group theory), and [42] (Dirichlet forms). Two
very useful survey articles on rigorous rates of convergence are [67] and [79]. The
on-line treatise [1] has a wealth of information about reversible Markov chains. All
of the cited works contain pointers to a huge literature.
6.2. Looking back. In this article, I have focused on using spectral theory to
give rates of convergence for Markov chains. There are several other tools and
schools. Two important ones are coupling and Harris recurrence. Coupling is a
pure probability approach in which two copies of a Markov chain are considered.
One begins in stationarity, the second at a fixed starting point. Each chain evolves
marginally according to the given transition operator. However, the chains are also
set up to move towards each other. When they hit a common point, they "couple"
and then move on together. The chain started in stationarity is stationary at every
step, in particular at the coupling time T. Thus, at time T, the chain starting from
a fixed point is stationary. This approach transforms the task of bounding rates
of convergence to bounding the coupling time T. This can sometimes be done by
quite elementary arguments. Coupling is such a powerful and original tool that
it must have applications far from its origins. A recent example is Robert Neel's
proof [71] of Liouville theorems for minimal surfaces.
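A toy example (mine, not from the text) shows the mechanics for the lazy random walk on $\mathbb{Z}/n\mathbb{Z}$: run two copies, let a fair coin decide which copy moves, and wait for them to meet. Since each copy is marginally the lazy walk and their difference is a simple random walk, the meeting time $T$ is finite, and the coupling inequality gives $\|K^k_x - \pi\|_{TV} \le P(T > k)$:

    import random

    def coupling_time(n, x0, y0):
        # Couple two lazy random walks on Z/nZ: at each step a fair coin picks
        # which walk takes a +/-1 step while the other holds. Marginally each
        # walk is lazy (hold w.p. 1/2, move +/-1 w.p. 1/4 each); the difference
        # is a simple random walk, so the two copies eventually meet.
        x, y, t = x0, y0, 0
        while x != y:
            step = random.choice([-1, 1])
            if random.random() < 0.5:
                x = (x + step) % n
            else:
                y = (y + step) % n
            t += 1
        return t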
Book-length treatments of coupling are [61] and [86]. The very useful path coupling
variant in [11] and [35] is developed into a marvelous theory of Ricci curvature
for Markov chains by [73]. The connections between coupling and eigenvalues are
discussed in [12]. The "coupling from the past" algorithm of Propp–Wilson [77] has
made a real impact on simulation. It sometimes allows exact samples to be drawn
from intractable probability distributions. It works for the Ising model. I clearly
remember my first look at David Wilson's sample of a 2,000 x 2,000 Ising model
at the critical temperature. I felt like someone seeing Mars through a telescope for
the first time.
Harris recurrence is a sophisticated variant of coupling which has a well-developed
user interface. This avoids the need for clever, ad hoc constructions. The two chains
can be exactly coupled for general state spaces when they hit a small set. They
can be driven to the small set by a Lyapounov function. A splendid introduction
to Harris recurrence is in [53]. A book-length development is [68]. The topic is
often developed under the name of geometric ergodicity. This refers to bounds of
the form $\|K^l_x - \pi\|_{TV} \le A(x)\rho^l$ for $A(x) > 0$ and $0 < \rho < 1$. Observe that usually,
proofs of geometric ergodicity give no hold on $A(x)$ or on $\rho$. In this form, the
results are practically useless, saying little more than the chain converges for large
$l$. Bounds with explicit $A(x)$, $\rho$ are called "honest" in the literature [53]. The
work of Jim Hobert and his collaborators is particularly rich in useful bounds for
real examples. For further discussion and references, see [23] and [50].
In the presence of geometric ergodicity, a wealth of useful auxiliary results becomes
available. These include central limit theorems and large deviations bounds
for averages $\frac{1}{N}\sum_{i=1}^{N} f(X_i)$ [56]. The variance of such averages can be usefully
estimated [46]. One can even hope to do perfect sampling from the exact stationary
distribution [55]. There has been a spirited effort to understand what the set-up
required for Harris recurrence says about the spectrum [5, 6]. (Note that coupling
required for Harris recurrence says about the spectrum [5, 6]. (Note that coupling
and Harris recurrence do not depend on reversibility.)
6.3. Contacts with math. The quest for sharp analysis of Markov chains has led
to the use and development of tools from various areas of mathematics. Here is a
personal catalog.
Group representations. Natural mixing schemes can sometimes be represented as
random walks on groups or homogeneous spaces. Then, representation theory al-
lows a useful Fourier analysis. If the walks are invariant under conjugation, only
the characters are needed. If the walks are bi-invariant under a subgroup giving a
Gelfand pair, the spherical functions are needed. A book-length discussion of this
approach can be found in [14]. Sometimes, the probability theory calls for new
group theory. An example is the random walk on the group of upper-triangular
matrices with elements in a finite field: Starting at the identity, pick a row at
random and add it to the row above. The character theory of this group is wild.
Carlos André has created a cruder "super-character" theory which is sufficiently rich
to handle random walk problems. The detailed use of this required a new formula
[4] and leads to an extension of the theory to algebra groups in joint work with
Isaacs and Thiem [22, 34]. This has blossomed into thesis projects [45, 84, 85].
This thread is a nice example of the way that applications and theory interact.
Algebraic geometry. The creation of Markov chains to efficiently perform a sampling
task can lead to interesting mathematics. As an example, consider the emerging
field of algebraic statistics. I was faced with the problem of generating (uniformly)
random arrays with given row and column sums. These arrays (called contingency
tables in statistics) have non-negative integer entries. For two-dimensional arrays,
a classical Markov chain Monte Carlo algorithm proceeds as follows. Pick a pair of
rows and a pair of columns at random; this delineates four entries. Change these
entries by adding and subtracting in one of the following patterns:

$$\begin{pmatrix} + & - \\ - & + \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} - & + \\ + & - \end{pmatrix}.$$

This doesn't change the row/column sums. If the resulting array still has non-negative
entries, the chain moves there. Otherwise, the chain stays at the original
array.
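A sketch of one move of this chain, with the table stored as a numpy integer array:

    import numpy as np

    def swap_step(table, rng):
        # Pick a pair of rows and a pair of columns, add one of the two
        # +/- checkerboard patterns, and stay put if an entry would go
        # negative. Row and column sums are automatically preserved.
        r, c = table.shape
        i, j = rng.choice(r, size=2, replace=False)
        k, l = rng.choice(c, size=2, replace=False)
        delta = rng.choice([1, -1]) * np.array([[1, -1], [-1, 1]])
        sub = table[np.ix_([i, j], [k, l])] + delta
        if (sub >= 0).all():
            table[np.ix_([i, j], [k, l])] = sub
        return table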
I needed to extend this to higher-dimensional arrays and similar problems on the
permutation group and other structures where linear statistics are to be preserved.
The problem is that the analog of the $\left(\begin{smallmatrix} + & - \\ - & + \end{smallmatrix}\right)$ moves that statisticians have thought
of does not connect the space. Bernd Sturmfels recognized the original $\left(\begin{smallmatrix} + & - \\ - & + \end{smallmatrix}\right)$
moves as generators of a determinantal ideal and suggested coding the problem
up as finding generators of a toric ideal. All of the problems fit into this scheme
and the emerging fields of computational algebra and Gröbner bases allow practical
solutions. The story is too long to tell here in much detail. The original ideas are
worked out in [33]. There have been many extensions, bolstered by more than a
dozen Ph.D. theses. A flavor of this activity and references can be gathered from
[49]. The suite of computational resources in the computer package LattE also
contains extensive documentation. The subject of algebraic statistics has expanded
in many directions. See [74] for its projection into biology, and [76] for its projection
into the design of experiments. As usual, the applications call for a sharpening of
algebraic geometric tools and raise totally new problems.
For completeness I must mention that despite much effort, the running time
analysis of the original Markov chain on contingency tables has not been settled.
There are many partial results suggesting that $(\mathrm{diam})^2$ steps are necessary and
sufficient, where diameter refers to the graph with an edge between two arrays if
they differ by a $\left(\begin{smallmatrix} + & - \\ - & + \end{smallmatrix}\right)$ move. There are also other ways of sampling that show
great promise [16]. Carrying either the analysis or the alternative procedures to
the other problems in [33] is a healthy research area.
PDE. The analysis of Markov chains has a very close connection with the study
of long time behavior of the solutions of differential equations. In the Markov
chain context we are given a kernel $K(x, dy)$ with reversible stationary measure
$\pi(dx)$ on a space $\mathcal{X}$. Then $K$ operates as a self-adjoint contraction on $L^2(\pi)$ via
$Kf(x) = \int f(y)K(x, dy)$. The associated quadratic form $\mathcal{E}(f|g) = \langle(I - K)f, g\rangle$ is
called the Dirichlet form in probability. A Poincaré inequality for $K$ has the form

$$\|f\|_2^2 \le A\,\mathcal{E}(f|f) \quad \text{for all } f \in L^2(\pi) \text{ with } \int f\,d\pi = 0.$$

Using the minimax characterization, a Poincaré inequality shows that there is no
spectrum for the operator in $[1 - 1/A, 1)$ (Markov operators always have 1 as an
eigenvalue). There is a parallel parity form which allows bounding negative spectrum.
If the spectrum is supported on $[-1 + 1/A, 1 - 1/A]$ and the Markov chain
is started at a distribution with $L^2$ density $g$, then

$$\|K^l - \pi\|_{TV}^2 \le \|g - 1\|_2^2\left(1 - \frac{1}{A}\right)^{2l}.$$
This is a useful, explicit bound but it is often off, giving the wrong rate of
convergence by factors of $n$ or more in problems on the symmetric group $S_n$. A
host of more complex techniques can give better results. For example, $K$ satisfies
a Nash inequality if for all suitable $f$,

$$\|f\|_2^{2+1/D} \le A\left(\mathcal{E}(f|f) + \frac{1}{N}\|f\|_2^2\right)\|f\|_1^{1/D},$$

and a log-Sobolev inequality if

$$L(f) \le A\,\mathcal{E}(f|f), \qquad L(f) = \int f^2(x)\log\left(\frac{f^2(x)}{\|f\|_2^2}\right)\pi(dx).$$
Here $A$, $N$ and $D$ are constants which enter into any conclusions. These inequalities
are harder to establish and have stronger consequences. Related inequalities of
Cheeger and Sobolev are also in widespread use. For surveys of this technology,
see [69] or [79]. The point here is that most of these techniques were developed
to study PDE. Their adaptation to the analysis of Markov chains requires some
new ideas. This interplay between the proof techniques of PDE and Markov chains
has evolved into the small but healthy field of functional inequalities [5, 6] which
contributes to both subjects.
Modern PDE is an enormous subject with many more tools and ideas. Some
of these, for example, the calculus of pseudo-differential operators and micro-local
techniques, are just starting to make inroads into Markov chain convergence [24,
25, 59].
A major omission in the discussion above is the contributions of the theoretical
computer science community. In addition to a set of problems discussed in
the final part of this section, a host of broadly useful technical developments have
emerged. One way of saying things is this: How does one establish any of the inequalities
above (from Poincaré through log-Sobolev) in an explicit problem? Mark
Jerrum and Alistair Sinclair introduced the use of paths to prove Cheeger inequalities
(called conductance in computer science). Dyer, Frieze, Lovász, Kannan and
many students and coauthors have developed and applied these ideas to a host of
problems, most notably the problems of approximating the permanent of a matrix
and approximating the volume of a convex set. Alas, this work suffers from the
"polynomial time bug." The developers are often satisfied with results showing
that $n^{17}$ steps suffice (after all, it's a polynomial). This leads to theory of little use
in practical problems. I believe that the ideas can be pushed to give useful results,
but at the present writing much remains to be done. A good survey of this set of
ideas can be found in [69].
6.4. Contacts outside math. To someone working in my part of the world, asking
about applications of Markov chain Monte Carlo (MCMC) is a little like asking
about applications of the quadratic formula. The results are really used in every
aspect of scientific inquiry. The following indications are wildly incomplete. I
believe you can take any area of science, from hard to social, and find a burgeoning
MCMC literature specifically tailored to that area. I note that essentially none of
these applications is accompanied by any kind of practically useful running time
analysis. Thus the following is really a list of open research problems.
Chemistry and physics. From the original application to hard discs through lattice
gauge theory [66], MCMC calculations are a mainstay of chemistry and physics.
I will content myself by mentioning four very readable books, particularly good
at describing the applications to an outsider; I have found them useful ways to
learn the science. For physics, [57] and [72]. For chemistry, [41] and [58]. A good
feeling for the ubiquity of MCMC can be gleaned from the following quote from
the introductory text of the chemist Ben Widom [88, p. 101]:
Now, a generation later, the situation has been wholly trans-
formed, and we are able to calculate the properties of ordinary
liquids with nearly as much assurance as we do those of dilute
gases and harmonic solids . . . . What is new is our ability to realize
van der Waals' vision through the intervention of high speed digital
computing.
Biology. One way to access applications of MCMC in various areas of biology is to
look at the work of the statistical leaders of groups driving this research: Jun Liu
(Harvard), Michael Newton (Wisconsin), Mike West (Duke) and Wing Wong (Stan-
ford). The homepages of each of these authors contain dozens of papers, essentially
all driven by MCMC. Many of these contain innovative, new algorithms (waiting to
be studied). In addition, I mention the online resources Mr. Bayes and Bugs.
These give hundreds of tailored programs for MCMC biological applications.
Statistics. Statisticians work with scientists, engineers, and businesses in a huge
swathe of applications. Perhaps 10–15% of this is driven by MCMC. An overview
of applications may be found in the books [43] or [62]. For the very active area
of particle filters and their many engineering applications (tracking, filtering), see
[36]. For political science-flavored applications, see [43]. Of course, statisticians
have also contributed to the design and analysis of these algorithms. An important
and readable source is [13].
Group theory. This is a much smaller application. It seems surprising, because
group theory (the mathematics of symmetry) seems so far from probability. How-
ever, computational group theory, as coded up in the online libraries Gap and
Magma, makes heavy use of randomized algorithms to do basic tasks such as deciding
whether a group (usually given as the group generated by a few permutations or
a few large matrices) is all of $S_n$ ($GL_n$). Is it solvable, can we find its lower central
series, normal closure, Sylow($p$) subgroups, etc.? Splendid accounts of this subject
are in [47] or [80]. Bounding the running time of widely available and widely used
algorithms, such as the "meat axe" or the product replacement algorithm [75], are important
open problems on the unlikely interface of group theory and probability.
Computer science (theory). The analysis of algorithms and complexity theory is
an important part of computer science. One central theme is the polynomial/non-
polynomial dichotomy. A large class of problems such as computing the permanent
of a matrix or the volume of a convex polyhedron have been proved to be #P-complete.
Theorists (Broder, Jerrum, Vazirani) have shown that while it may take
exponential time to get an exact answer to these problems, one can find provably
accurate approximations in a polynomial number of operations (in the size of the
input) provided one can find a rapidly mixing Markov chain to generate problem
instances at random. The above rough description is made precise in the readable
book [81]. This program calls for methods of bounding the running time of Markov
chains. Many clever analyses have been carried out in tough problems without
helpful symmetries. It would take us too far afield to develop this further. Three of
my favorite papers (which will lead the reader into the heart of this rich literature)
are the analysis [63] of the hit-and-run algorithm, the analysis [52] of the problem
of approximating permanents, and the analysis [70] of knapsack problems. All of
these contain deep, original mathematical ideas which seem broadly useful. As a
caveat, recent results of Wigderson suggest a dichotomy: either randomness can
be eliminated from these algorithms or P = NP. Since nobody believes P = NP,
there is true excitement in the air.
To close this section, I reiterate that almost none of these applications comes
with a useful running time estimate (and almost never with careful error estimates).
Also, for computer science, the applications are to computer science theory. Here
the challenge is to see if practically useful algorithms can be made from the elegant
mathematics.
Acknowledgments
This paper leans on 30 years of joint work with students and coauthors. It was
presented at the 25th anniversary of the amazing, wonderful MSRI. I particularly
thank Marc Coram, Susan Holmes, Kshitij Khare, Werner Krauth, Gilles Lebeau,
Laurent Michel, John Neuberger, Charles Radin, and Laurent Saloff-Coste for their
help with this paper. A helpful referee and the patience of editor Susan Friedlander
are gratefully acknowledged.
About the author
Persi Diaconis is the Mary Sunseri Professor of Statistics and Mathematics at
Stanford University. He is a member of the National Academy of Sciences and is a
recipient of a MacArthur Fellowship.
References
1. Aldous, D. and Fill, J. (2002). Reversible Markov chains and random walks on graphs. Mono-
graph.
2. Allen, M. P. and Tildesley, D. J. (1987). Computer simulation of liquids. Oxford University
Press, Oxford.
3. Anderson, W. J. (1991). Continuous-time Markov chains. An applications-oriented approach.
Springer Series in Statistics: Probability and its Applications. Springer-Verlag, New York.
MR1118840 (92k:60170)
4. Arias-Castro, E., Diaconis, P., and Stanley, R. (2004). A super-class walk on upper-triangular
matrices. J. Algebra, 278(2):739–765. MR2071663 (2005f:60101)
5. Bakry, D., Cattiaux, P., and Guillin, A. (2008). Rate of convergence for ergodic continuous
Markov processes: Lyapunov versus Poincaré. J. Funct. Anal., 254(3):727–759. MR2381160
6. Barthe, F., Bakry, P., Cattiaux, P., and Guillin, A. (2008). Poincaré inequalities for log-
concave probability measures: a Lyapounov function approach. Electron. Comm. Probab.,
13:60–66.
7. Bhattacharya, R. N. and Waymire, E. C. (1990). Stochastic processes with applications. Wiley
Series in Probability and Mathematical Statistics: Applied Probability and Statistics. John
Wiley & Sons Inc., New York. A Wiley-Interscience Publication. MR1054645 (91m:60001)
8. Billera, L. J. and Diaconis, P. (2001). A geometric interpretation of the Metropolis-Hastings
algorithm. Statist. Sci., 16(4):335–339. MR1888448 (2002m:60133)
9. Billingsley, P. (1995). Probability and measure. Wiley Series in Probability and Mathematical
Statistics. John Wiley & Sons Inc., New York, third edition. A Wiley-Interscience Publication.
MR1324786 (95k:60001)
10. Brémaud, P. (1999). Markov chains, volume 31 of Texts in Applied Mathematics. Springer-
Verlag, New York. Gibbs fields, Monte Carlo simulation, and queues. MR1689633
(2000k:60137)
11. Bubley, B. and Dyer, M. (1997). Path coupling: a technique for proving rapid mixing in
Markov chains. FOCS, pages 223–231.
12. Burdzy, K. and Kendall, W. S. (2000). Efficient Markovian couplings: examples and coun-
terexamples. Ann. Appl. Probab., 10(2):362–409. MR1768241 (2002b:60129)
13. Cappé, O., Moulines, E., and Rydén, T. (2005). Inference in hidden Markov models. Springer
Series in Statistics. Springer, New York. With Randal Douc's contributions to Chapter 9
and Christian P. Robert's to Chapters 6, 7 and 13, With Chapter 14 by Gersende Fort,
Philippe Soulier and Moulines, and Chapter 15 by Stéphane Boucheron and Élisabeth Gassiat.
MR2159833 (2006e:60002)
14. Ceccherini-Silberstein, T., Scarabotti, F., and Tolli, F. (2008). Harmonic analysis on finite
groups, volume 108 of Cambridge Studies in Advanced Mathematics. Cambridge University
Press, Cambridge. Representation theory, Gelfand pairs and Markov chains. MR2389056
15. Chen, M.-H., Shao, Q.-M., and Ibrahim, J. G. (2000). Monte Carlo methods in Bayesian com-
putation. Springer Series in Statistics. Springer-Verlag, New York. MR1742311 (2000k:65014)
16. Chen, Y., Diaconis, P., Holmes, S. P., and Liu, J. S. (2005). Sequential Monte Carlo methods
for statistical analysis of tables. J. Amer. Statist. Assoc., 100(469):109–120. MR2156822
(2006f:62062)
17. Conner, S. (2003). Simulation and solving substitution codes. Master's thesis, Department of
Statistics, University of Warwick.
18. Critchlow, D. E. (1985). Metric methods for analyzing partially ranked data, volume 34 of
Lecture Notes in Statistics. Springer-Verlag, Berlin. MR818986 (87c:62044)
19. Diaconis, P. (1988). Group representations in probability and statistics. Institute of Mathe-
matical Statistics Lecture Notes–Monograph Series, 11. Institute of Mathematical Statistics,
Hayward, CA. MR964069 (90a:60001)
20. Diaconis, P. and Graham, R. L. (1977). Spearman's footrule as a measure of disarray. J. Roy.
Statist. Soc. Ser. B, 39(2):262–268. MR0652736 (58:31575)
21. Diaconis, P. and Hanlon, P. (1992). Eigen-analysis for some examples of the Metropolis algo-
rithm. In Hypergeometric functions on domains of positivity, Jack polynomials, and applica-
tions (Tampa, FL, 1991), volume 138 of Contemp. Math., pages 99–117. Amer. Math. Soc.,
Providence, RI. MR1199117 (93h:33001)
22. Diaconis, P. and Isaacs, I. M. (2008). Supercharacters and superclasses for algebra groups.
Trans. Amer. Math. Soc., 360(5):2359–2392. MR2373317
23. Diaconis, P., Khare, K., and Saloff-Coste, L. (2008a). Gibbs sampling, exponential families
and orthogonal polynomials, with discussion. Statist. Sci., to appear.
24. Diaconis, P. and Lebeau, G. (2008). Micro-local analysis for the Metropolis algorithm. Math.
Z., to appear.
25. Diaconis, P., Lebeau, G., and Michel, L. (2008b). Geometric analysis for the Metropolis algo-
rithm on Lipschitz domains. Technical report, Department of Statistics, Stanford University,
preprint.
26. Diaconis, P. and Limic, V. (2008). Spectral gap of the hard-core model on the unit interval.
Technical report, Department of Statistics, Stanford University, preprint.
27. Diaconis, P. and Neuberger, J. W. (2004). Numerical results for the Metropolis algorithm.
Experiment. Math., 13(2):207–213. MR2068894
28. Diaconis, P. and Ram, A. (2000). Analysis of systematic scan Metropolis algorithms using
Iwahori-Hecke algebra techniques. Michigan Math. J., 48:157–190. Dedicated to William Ful-
ton on the occasion of his 60th birthday. MR1786485 (2001j:60132)
29. Diaconis, P. and Saloff-Coste, L. (1993). Comparison theorems for reversible Markov chains.
Ann. Appl. Probab., 3(3):696–730. MR1233621 (94i:60074)
30. Diaconis, P. and Saloff-Coste, L. (1996). Nash inequalities for finite Markov chains. J. Theoret.
Probab., 9(2):459–510. MR1385408 (97d:60114)
31. Diaconis, P. and Saloff-Coste, L. (1998). What do we know about the Metropolis algorithm?
J. Comput. System Sci., 57(1):20–36. 27th Annual ACM Symposium on the Theory of Com-
puting (STOC'95) (Las Vegas, NV). MR1649805 (2000b:68094)
32. Diaconis, P. and Shahshahani, M. (1981). Generating a random permutation with random
transpositions. Z. Wahrsch. Verw. Gebiete, 57(2):159–179. MR626813 (82h:60024)
33. Diaconis, P. and Sturmfels, B. (1998). Algebraic algorithms for sampling from conditional
distributions. Ann. Statist., 26(1):363–397. MR1608156 (99j:62137)
34. Diaconis, P. and Thiem, N. (2008). Supercharacter formulas for pattern groups. Trans. Amer.
Math. Soc., to appear.
35. Dobrushin, R. L. (1970). Prescribing a system of random variables by conditional distributions.
Theor. Probab. Appl. Engl. Tr., 15:453–486.
36. Doucet, A., de Freitas, N., and Gordon, N. (2001). Sequential Monte Carlo in Practice.
Springer-Verlag, New York.
37. Dress, C. and Krauth, W. (1995). Cluster algorithm for hard spheres and related systems. J.
Phys. A, 28(23):L597–L601. MR1381129
38. Ethier, S. N. and Kurtz, T. G. (1986). Markov processes. Wiley Series in Probability and
Mathematical Statistics: Probability and Mathematical Statistics. John Wiley & Sons Inc.,
New York. Characterization and convergence. MR838085 (88a:60130)
39. Feller, W. (1968). An introduction to probability theory and its applications. Vol. I. Third
edition. John Wiley & Sons Inc., New York. MR0228020 (37:3604)
40. Fill, J. A. (1991). Eigenvalue bounds on convergence to stationarity for nonreversible
Markov chains, with an application to the exclusion process. Ann. Appl. Probab., 1(1):62–87.
MR1097464 (92h:60104)
41. Frenkel, D. and Smit, B. (2002). Understanding molecular simulation: From algorithms to
applications, 2nd edition. Computational Science Series, Vol 1. Academic Press, San Diego.
42. Fukushima, M., Ōshima, Y., and Takeda, M. (1994). Dirichlet forms and symmetric Markov
processes, volume 19 of de Gruyter Studies in Mathematics. Walter de Gruyter & Co., Berlin.
43. Gill, J. (2007). Bayesian methods: a social and behavioral sciences approach, 2nd ed. Statistics
in the Social and Behavioral Sciences. Chapman & Hall/CRC. Second edition.
44. Hammersley, J. M. and Handscomb, D. C. (1965). Monte Carlo methods. Methuen & Co.
Ltd., London. MR0223065 (36:6114)
45. Hendrickson, A. O. F. (2008). Supercharacter theories of finite simple groups. PhD thesis,
University of Wisconsin.
46. Hobert, J. P., Jones, G. L., Presnell, B., and Rosenthal, J. S. (2002). On the applicability of
regenerative simulation in Markov chain Monte Carlo. Biometrika, 89(4):731–743. MR1946508
(2003m:60200)
47. Holt, D. F., Eick, B., and O'Brien, E. A. (2005). Handbook of computational group theory.
Discrete Mathematics and its Applications (Boca Raton). Chapman & Hall/CRC, Boca Raton,
FL. MR2129747 (2006f:20001)
48. Hora, A. and Obata, N. (2007). Quantum probability and spectral analysis of graphs. The-
oretical and Mathematical Physics. Springer, Berlin. With a foreword by Luigi Accardi.
MR2316893
49. Hoşten, S. and Meek, C. (2006). Preface. J. Symb. Comput., 41(2):123–124.
50. Jarner, S. F. and Hansen, E. (2000). Geometric ergodicity of Metropolis algorithms. Stochastic
Process. Appl., 85(2):341–361. MR1731030 (2001c:60108)
51. Jaster, A. (2004). The hexatic phase of the two-dimensional hard disks system. Phys. Lett.
A, 330(cond-mat/0305239):120–125.
52. Jerrum, M., Sinclair, A., and Vigoda, E. (2004). A polynomial-time approximation algorithm
for the permanent of a matrix with nonnegative entries. J. ACM, 51(4):671–697 (electronic).
MR2147852 (2006b:15013)
53. Jones, G. L. and Hobert, J. P. (2001). Honest exploration of intractable probability distribu-
tions via Markov chain Monte Carlo. Statist. Sci., 16(4):312–334. MR1888447
54. Kannan, R., Mahoney, M. W., and Montenegro, R. (2003). Rapid mixing of several Markov
chains for a hard-core model. In Algorithms and computation, volume 2906 of Lecture Notes
in Comput. Sci., pages 663–675. Springer, Berlin. MR2088246 (2005d:68160)
55. Kendall, W. S. (2004). Geometric ergodicity and perfect simulation. Electron. Comm. Probab.,
9:140–151 (electronic). MR2108860 (2006e:60098)
56. Kontoyiannis, I. and Meyn, S. P. (2003). Spectral theory and limit theorems for geometrically
ergodic Markov processes. Ann. Appl. Probab., 13(1):304–362. MR1952001 (2003m:60187)
57. Krauth, W. (2006). Statistical mechanics. Oxford Master Series in Physics. Oxford University
Press, Oxford. Algorithms and computations, Oxford Master Series in Statistical, Computa-
tional, and Theoretical Physics. MR2370557
58. Landau, D. P. and Binder, K. (2005). A Guide to Monte Carlo Simulations in Statistical
Physics. Cambridge University Press, Cambridge. MR1781083 (2001m:82051)
59. Lebeau, G. and Michel, L. (2008). Semiclassical analysis of a random walk on a manifold.
Ann. Probab., to appear (arXiv:0802.0644).
60. Liggett, T. M. (1985). Interacting particle systems, volume 276 of Grundlehren der Mathema-
tischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag,
New York. MR776231 (86e:60089)
61. Lindvall, T. (2002). Lectures on the coupling method. Dover Publications Inc., Mineola, NY.
Corrected reprint of the 1992 original. MR1924231
62. Liu, J. S. (2001). Monte Carlo Strategies in Scientific Computing. Springer Series in Statistics.
Springer-Verlag, New York. MR1842342 (2002i:65006)
63. Lovász, L. and Vempala, S. (2006). Hit-and-run from a corner. SIAM J. Comput., 35(4):985–
1005 (electronic). MR2203735 (2007h:60041)
64. Löwen, H. (2000). Fun with hard spheres. In Statistical physics and spatial statistics (Wupper-
tal, 1999), volume 554 of Lecture Notes in Phys., pages 295–331. Springer, Berlin. MR1870950
65. Macdonald, I. G. (1995). Symmetric functions and Hall polynomials. Oxford Mathematical
Monographs. The Clarendon Press Oxford University Press, New York, second edition. With
contributions by A. Zelevinsky, Oxford Science Publications. MR1354144 (96h:05207)
66. Mackenzie, P. (2005). The fundamental constants of nature from lattice gauge theory simula-
tions. J. Phys. Conf. Ser., 16(doi:10.1088/1742-6596/16/1/018):140–149.
67. Martinelli, F. (2004). Relaxation times of Markov chains in statistical mechanics and combi-
natorial structures. In Probability on discrete structures, volume 110 of Encyclopaedia Math.
Sci., pages 175–262. Springer, Berlin. MR2023653 (2005b:60260)
68. Meyn, S. P. and Tweedie, R. L. (1993). Markov chains and stochastic stability. Communi-
cations and Control Engineering Series. Springer-Verlag London Ltd., London. MR1287609
(95j:60103)
69. Montenegro, R. and Tetali, P. (2006). Mathematical aspects of mixing times in Markov chains.
Found. Trends Theor. Comput. Sci., 1(3):x+121. MR2341319
70. Morris, B. and Sinclair, A. (2004). Random walks on truncated cubes and sampling 0–1
knapsack solutions. SIAM J. Comput., 34(1):195–226 (electronic). MR2114310 (2005k:68095)
71. Neel, R. W. (2008). A martingale approach to minimal surfaces. J. Funct. Anal.,
(doi:10.1016/j.jfa.2008.06.033). arXiv:0805.0556v2 [math.DG] (in press).
72. Newman, M. E. J. and Barkema, G. T. (1999). Monte Carlo methods in statistical physics.
The Clarendon Press Oxford University Press, New York. MR1691513 (2000m:82030)
73. Ollivier, Y. (2008). Ricci curvature of Markov chains on metric spaces. Preprint, submitted,
2008.
74. Pachter, L. and Sturmfels, B., editors (2005). Algebraic statistics for computational biology.
Cambridge University Press, New York. MR2205865 (2006i:92002)
75. Pak, I. (2001). What do we know about the product replacement algorithm? In Groups and
computation, III (Columbus, OH, 1999), volume 8 of Ohio State Univ. Math. Res. Inst. Publ.,
pages 301–347. de Gruyter, Berlin. MR1829489 (2002d:20107)
76. Pistone, G., Riccomagno, E., and Wynn, H. P. (2001). Algebraic statistics, volume 89 of
Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, FL.
Computational commutative algebra in statistics. MR2332740 (2008f:62098)
77. Propp, J. G. and Wilson, D. B. (1996). Exact sampling with coupled Markov chains and
applications to statistical mechanics. In Proceedings of the Seventh International Confer-
ence on Random Structures and Algorithms (Atlanta, GA, 1995), volume 9, pages 223–252.
MR1611693 (99k:60176)
78. Ross, S. M. (2002). A First Course in Probability, 7th Edition. Cambridge University Press,
Cambridge.
79. Saloff-Coste, L. (1997). Lectures on finite Markov chains. In Lectures on probability theory
and statistics (Saint-Flour, 1996), volume 1665 of Lecture Notes in Math., pages 301–413.
Springer, Berlin. MR1490046 (99b:60119)
80. Seress, Á. (2003). Permutation group algorithms, volume 152 of Cambridge Tracts in Mathe-
matics. Cambridge University Press, Cambridge. MR1970241 (2004c:20008)
81. Sinclair, A. (1993). Algorithms for random generation and counting. Progress in Theoret-
ical Computer Science. Birkhäuser Boston Inc., Boston, MA. A Markov chain approach.
MR1201590 (93j:65011)
82. Stanley, R. P. (1999). Enumerative combinatorics. Vol. 2, volume 62 of Cambridge Studies in
Advanced Mathematics. Cambridge University Press, Cambridge. With a foreword by Gian-
Carlo Rota and appendix 1 by Sergey Fomin.
83. Taylor, H. M. and Karlin, S. (1984). An introduction to stochastic modeling. Academic Press
Inc., Orlando, FL. MR778728 (86j:60003)
84. Thiem, N. and Marberg, E. (2008). Superinduction for pattern groups. Technical report,
Department of Mathematics, University of Colorado, Boulder.
85. Thiem, N. and Venkateswaran, V. (2008). Restricting supercharacters of the finite group of
unipotent uppertriangular matrices. Technical report, Department of Mathematics, University
of Colorado, Boulder.
86. Thorisson, H. (2000). Coupling, stationarity, and regeneration. Probability and its Applica-
tions (New York). Springer-Verlag, New York. MR1741181 (2001b:60003)
87. Uhlenbeck, G. E. (1968). An outline of statistical mechanics. In Cohen, E. G. D., editor, Fun-
damental Problems in Statistical Mechanics, volume 2, pages 1–19. North-Holland Publishing
Co., Amsterdam.
88. Widom, B. (2002). Statistical Mechanics: A Concise Introduction for Chemists. Cambridge
University Press, Cambridge. MR1921032 (2004a:82001)
89. Yau, H.-T. (1997). Logarithmic Sobolev inequality for generalized simple exclusion processes.
Probab. Theory Related Fields, 109(4):507–538. MR1483598 (99f:60171)
Departments of Mathematics and Statistics, Stanford University, Stanford, California