
An introduction to quantum annealing

arXiv:1107.0794v1 [quant-ph] 5 Jul 2011

Diego de Falco and Dario Tamascelli


Dipartimento di Scienze dell’Informazione, Università degli Studi di Milano
Via Comelico, 39/41, 20135 Milano, Italy
e-mail: defalco@dsi.unimi.it, tamascelli@dsi.unimi.it

Abstract
Quantum Annealing, or Quantum Stochastic Optimization, is a
classical randomized algorithm which provides good heuristics for the
solution of hard optimization problems. The algorithm, suggested by
the behaviour of quantum systems, is an example of fruitful cross-
contamination between classical and quantum computer science. In
this survey paper we illustrate how hard combinatorial problems are
tackled by quantum computation and present some examples of the
heuristics provided by Quantum Annealing. We also present prelimi-
nary results about the application of quantum dissipation (as an alter-
native to Imaginary Time Evolution) to the task of driving a quantum
system toward its state of lowest energy.

1 Introduction
Quantum computation stems from a simple observation[23, 19]: any com-
putation is ultimately performed on a physical device; to any input-output
relation there must correspond a change in the state of the device. If the
device is microscopic, then its evolution is ruled by the laws of quantum
mechanics.
The question of whether the change of evolution rules can lead to a break-
through in computational complexity theory is still unanswered [11, 34, 40].
What is known is that a quantum computer would outperform a classi-
cal one in specific tasks such as integer factorization [47] and searching an
unordered database [28, 29]. In fact, Grover’s algorithm can search a key-
word quadratically faster than any classical algorithm, whereas in the case of
Shor’s factorization the speedup is exponential. However, since factorization

is in NP¹ but not believed to be NP-complete, the success of factorization
does not extend to the whole NP class.
Nonetheless, quantum computation is nowadays an active research field and
represents a fascinating example of two-way communication between com-
puter science and quantum physics. On one side, computer science used
quantum mechanics to define a new computational model; on the other, the
language of information and complexity theory has allowed a sharper un-
derstanding of some aspects of quantum mechanics.
Moreover, it is clear that quantum mechanics will sooner or later force its
way into our classical computing devices: Intel® has recently (Feb 2010²)
announced the first 25 nm NAND logical gate: 500 Bohr radii.
In this paper we present an introduction to quantum adiabatic computation
and quantum annealing, or quantum stochastic optimization, representing,
respectively, a quantum and a quantum-inspired optimization algorithm. In
Section 2 we introduce the class of problems that are hard for a quantum
computer. We consider the problem 3-SAT, its mapping into a quantum
problem and the quantum adiabatic technique. In Section 3 we describe
Quantum Annealing, a classical algorithm which captures the features of
the quantum ground state process to provide good heuristics for combinato-
rial problems. Section 4 is devoted to dissipative quantum annealing and to
the presentation of preliminary heuristic results on the dynamics of a dissi-
pative system. Concluding remarks and possible lines of future research are
presented in the last section.

2 Transition to the quantum world


The need to consider the features of quantum computing devices led to the
definition of the quantum counterpart of some classical complexity classes.
Quantum algorithms are intrinsically probabilistic, since the result of the
measurement of an observable of a quantum system is a random variable.
Not surprisingly, the quantum counterpart of P is BQP (Bounded-error
Quantum Polynomial-time), the class of decision problems solvable in poly-
nomial time by a quantum Turing machine, with error probability at most
1/3.
¹ To be more precise, factorization is in FNP, the function problem extension of NP.
The decision problem version of factorization (given an integer N and an integer M
with 1 ≤ M ≤ N, does N have a factor d with 1 < d < M?) is in NP.
² http://www.intel.com/pressroom/archive/releases/2010/20100201comp.htm

QMA (Quantum Merlin Arthur) is the quantum counterpart of the class
NP [50]. It is defined as the class of decision problems such that a “yes”
answer can be verified by a 1-message quantum interactive proof. That is:
a quantum state (the “proof”) is given to a BQP (i.e. quantum polynomial-
time) verifier. We require that if the answer to the decision problem is “yes”
then there exists a state such that the verifier accepts with probability at
least 2/3; if the answer is “no” then for all states the verifier rejects with
probability at least 2/3.
All the known complete problems for the class QMA are promise problems,
i.e. decision problems where the input is promised to belong to a subset
of all possible inputs. An example of a QMA-complete problem is the k-local
Hamiltonian problem:
INSTANCE: a collection {H1, H2, . . . , Hn} of Hamiltonians, each of which
acts on at most k qubits; real numbers a, b such that b − a = O(1/poly(n)).
QUESTION: is the smallest eigenvalue of $\sum_{j=1}^{n} H_j$ less than a or greater
than b, promised that this is the case?
It has been shown that this decision problem is complete for the class QMA
even in the case of 2-local Hamiltonians[34].
The well-known k-SAT problems, which are NP-complete for k ≥ 3, can
be encoded in a k-local Hamiltonian problem.
For instance, let us consider a given boolean formula:
C1 ∧ C2 ∧ . . . ∧ CM
over the variables x1, x2, . . . , xN; each Ci is the disjunction (bi1 ∨ bi2 ∨ . . . ∨ bik)
and each bij is either xij or its negation ¬xij. Our problem is to decide whether
there is an assignment to the boolean variables x1 , x2 , . . . , xN that satisfies
all the clauses simultaneously.
Given a clause Ci = (bi1 ∨ bi2 ∨ . . . ∨ bik) we define the local function:
$$ f_{C_i}(x_1, x_2, \ldots, x_N) = \prod_{i_j \in \Lambda^i_+} (1 - x_{i_j}) \cdot \prod_{i_j \in \Lambda^i_-} x_{i_j}, $$
with Λi+ and Λi− containing the indices of the variables that appear, respec-
tively, non-negated and negated in the clause Ci. Given an assignment
(x1, x2, . . . , xN), the function fCi returns 0 if the clause Ci is satisfied and
1 otherwise.
The cost V(x1, x2, . . . , xN) of an assignment can therefore be defined in
terms of the functions fCi as:
$$ V(x_1, x_2, \ldots, x_N) = \sum_{i=1}^{M} f_{C_i}(x_1, x_2, \ldots, x_N), $$

that is, the total number of violated clauses. An instance of k-SAT is there-
fore satisfiable if there exists a configuration of zero cost.
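A minimal Python sketch of this cost function, assuming a signed-literal
encoding of the clauses (+i for the literal xi, −i for ¬xi) and a 0/1 assignment
indexed by variable (both conventions are choices of this sketch):

def clause_violated(clause, assignment):
    """Local function f_{C_i}: 1 if the clause is violated, 0 if satisfied."""
    for literal in clause:
        value = assignment[abs(literal)]
        if (literal > 0 and value == 1) or (literal < 0 and value == 0):
            return 0          # one satisfied literal is enough
    return 1                  # no literal satisfied: clause violated

def cost(clauses, assignment):
    """V(x_1, ..., x_N): total number of violated clauses."""
    return sum(clause_violated(c, assignment) for c in clauses)

# Example: (x1 or not x2 or x3) and (not x1 or x2 or x3)
clauses = [(1, -2, 3), (-1, 2, 3)]
print(cost(clauses, {1: 0, 2: 1, 3: 0}))   # 1: only the first clause is violated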
The translation of a k-SAT problem to the corresponding k-local Hamiltonian
problem is straightforward.
To each variable xi we assign a qubit, i.e. a 2-level quantum system. For the
sake of definiteness we will consider spins (σ1 (i), σ2 (i), σ3 (i)), i = 1, 2, . . . , N
and their component σ3 (i) along the z axis of a given reference frame as com-
putational direction. Furthermore, we decide that if an assignment assigns
the value 1 (true) to the variable xi then the corresponding qubit is in the
state σ3 (i) = +1 (spin up) and in the state σ3 (i) = −1 otherwise.
To each clause we associate the Hamiltonian term:
$$ H_i = \prod_{i_j \in \Lambda^i_+} \frac{1 - \sigma_3(i_j)}{2} \cdot \prod_{i_j \in \Lambda^i_-} \frac{1 + \sigma_3(i_j)}{2}, \qquad (1) $$

which acts non-trivially only on k qubits. The total Hamiltonian
$$ H = \sum_{i=1}^{M} H_i \qquad (2) $$
is k-local and plays the role of a cost function: if an assignment satisfies all
the clauses simultaneously then the corresponding energy is zero; otherwise
the energy equals the number of violated clauses and is thus > 0.
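Since every Hi in (2) is diagonal in the computational basis, for a handful of
qubits the whole cost Hamiltonian can be written down explicitly. A sketch
(same assumed signed-literal clause encoding as above; the convention that
bit b of the basis-state index stores xb+1 is also an assumption of the sketch):

import numpy as np

def cost_hamiltonian(clauses, n_vars):
    """Diagonal matrix of H = sum_i H_i: entry (z, z) counts clauses violated by z."""
    diag = np.zeros(2 ** n_vars)
    for z in range(2 ** n_vars):
        bits = [(z >> b) & 1 for b in range(n_vars)]        # bit b encodes x_{b+1}
        for clause in clauses:
            satisfied = any((lit > 0) == (bits[abs(lit) - 1] == 1) for lit in clause)
            diag[z] += 0.0 if satisfied else 1.0
    return np.diag(diag)

H_T = cost_hamiltonian([(1, -2, 3), (-1, 2, 3)], n_vars=3)
print(np.min(np.diag(H_T)))    # 0.0: the toy instance is satisfiable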
The k-local Hamiltonian problem associated to k-SAT is a particular case
of the k-QSAT problem, which is formulated as follows:
INSTANCE: a Hamiltonian $H = \sum_{j=1}^{M} P_j$ acting on N qubits, where each Pj
is a projector onto a 2^k-dimensional subspace of the whole 2^N-dimensional
Hilbert space.
QUESTION: Is the ground state energy E0 of H zero, promised that either
E0 = 0 or E0 > 1/poly(N)?
If each Pj projects on an element of the computational basis, we obtain
the k-local Hamiltonian associated to k-SAT by the construction described
above.
Interestingly enough, it has been proved that 4-QSAT is QMA-complete whereas
2-QSAT is in P [14]. For 3-QSAT the answer is not known.
Quantum Adiabatic Computation [21] was welcomed by the quantum com-
puting community due to the preliminary good results produced when ap-
plied to small random instances of NP-complete problems [21, 22, 51] and
its equivalence to the standard quantum circuit model [1].
The computational paradigm is based on the well known adiabatic theorem
[13, 5]. Given a time T > 0 and two Hamiltonians HI and HT we consider
the time-dependent Hamiltonian:
$$ H(t) = t\,H_T + (T - t)\,H_I, \qquad 0 \le t \le T, \qquad (3) $$
or, equivalently, in terms of the rescaled time s = t/T,
$$ \tilde{H}(s) = H(sT), \qquad 0 \le s \le 1. $$
Let us denote by $|s; e_k(s)\rangle$ the instantaneous eigenvector of $\tilde{H}(s)$ corre-
sponding to the instantaneous eigenvalue ek(s), with e0(s) ≤ e1(s) ≤ . . . ≤
en(s), 0 ≤ s ≤ 1, and by $|\psi(s)\rangle$ the solution of the Cauchy problem:
$$ \begin{cases} \; i\,\dfrac{d}{ds}\,|\psi(s)\rangle = \tilde{H}(s)\,|\psi(s)\rangle \\[4pt] \; |\psi(0)\rangle = |0; e_0(0)\rangle. \end{cases} $$

The adiabatic theorem states that, if the time T satisfies
$$ T \gg \frac{\xi}{g_{\min}^{2}}, \qquad (4) $$
where gmin is the minimum gap
$$ g_{\min} = \min_{0 \le s \le 1} \bigl( e_1(s) - e_0(s) \bigr) $$
and
$$ \xi = \max_{0 \le s \le 1} \left| \left\langle s; e_1(s) \right| \frac{d\tilde{H}}{ds} \left| s; e_0(s) \right\rangle \right|, $$
then $|\langle 1; e_0(1) \,|\, \psi(1) \rangle|$ can be made arbitrarily close to 1. In other words, if
we start in the state $|0; e_0(0)\rangle$ we will end up in the ground state $|T; e_0(T)\rangle$
of the target Hamiltonian HT. In practical cases ξ is not too large; thus
the size of T is governed by $g_{\min}^{-2}$: the smaller gmin, the slower must be the
rate of change of the Hamiltonian if we want to avoid transitions (the so-called
Landau-Zener transitions [53]) from the ground state to excited states.
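The role of condition (4) is easy to see numerically on the smallest possible
toy model (our own illustration, not an example taken from the works cited
here): a two-level avoided crossing $\tilde{H}(s) = \Delta\,\sigma_1 + (2s-1)\,\sigma_3$, whose minimum
gap is 2Δ at s = 1/2. The final overlap with the target ground state approaches
1 only when T becomes large compared with 1/Δ²:

import numpy as np
from scipy.linalg import expm, eigh

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(s, delta):
    return delta * sx + (2.0 * s - 1.0) * sz      # minimum gap 2*delta at s = 1/2

def final_ground_state_overlap(T, delta, steps=2000):
    """Evolve the ground state of H(0) with piecewise-constant propagators."""
    dt = T / steps
    psi = eigh(H(0.0, delta))[1][:, 0]            # initial ground state
    for k in range(steps):
        s = (k + 0.5) / steps                     # annealing schedule s = t/T
        psi = expm(-1j * dt * H(s, delta)) @ psi
    target = eigh(H(1.0, delta))[1][:, 0]
    return abs(np.vdot(target, psi)) ** 2

delta = 0.05                                      # g_min = 0.1, 1/delta**2 = 400
for T in (10.0, 100.0, 1000.0):
    print(T, final_ground_state_overlap(T, delta))
# the overlap is small for fast sweeps and tends to 1 for T >> 1/delta**2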
The adiabatic method can be used to find the unknown ground state of a
Hamiltonian HT. We can start from the known (or easy to prepare) ground
state of an auxiliary Hamiltonian HI and consider the time-dependent con-
vex combination (3).

For example, let us consider the 3-SAT problem [22]: we need to understand
whether the ground state of the Hamiltonian (2) has energy 0 or not. We set
the target Hamiltonian HT to H. As initial Hamiltonian we consider
$$ H_I = -\sum_{i} \sigma_1(i), $$
whose ground state is easy to prepare (all the spins aligned along the positive
x direction).
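For very small instances the whole construction can be carried out by brute
force: build HI and the diagonal cost Hamiltonian HT, then follow the
instantaneous spectrum of the interpolation. The sketch below uses its own
toy clause set and signed-literal encoding; note that if the instance has several
satisfying assignments the lowest level at s = 1 is degenerate and the printed
gap closes trivially.

import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def sigma_x(i, n):
    """sigma_1 acting on qubit i (0-based) of an n-qubit register."""
    op = np.eye(1)
    for j in range(n):
        op = np.kron(op, sx if j == i else np.eye(2))
    return op

def initial_hamiltonian(n):
    return -sum(sigma_x(i, n) for i in range(n))

def cost_diagonal(clauses, n):
    """Diagonal cost Hamiltonian (2): violated-clause count per basis state."""
    d = np.zeros(2 ** n)
    for z in range(2 ** n):
        bits = [(z >> b) & 1 for b in range(n)]
        d[z] = sum(0 if any((l > 0) == (bits[abs(l) - 1] == 1) for l in c) else 1
                   for c in clauses)
    return np.diag(d)

n = 4
clauses = [(1, 2, -3), (-1, 3, 4), (2, -3, -4), (-2, 3, -4)]
H_I, H_T = initial_hamiltonian(n), cost_diagonal(clauses, n)

for s in np.linspace(0.0, 1.0, 11):
    e = np.linalg.eigvalsh((1.0 - s) * H_I + s * H_T)
    print(f"s = {s:.1f}   e1 - e0 = {e[1] - e[0]:.4f}")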
The time T required to fulfill the hypotheses of the adiabatic theorem is
then the cost in time of the algorithm and is determined by the gap gmin .
The results obtained in the seminal paper by Farhi et al. [22] on small
random instances of 3-SAT suggested that the gap gmin scaled polynomially
with the problem size. However, subsequent results [49, 44, 54, 3, 6] proved
that gmin can be exponentially small. It has even been shown that adiabatic
quantum computation can perform worse than other heuristic classical and
quantum algorithms on some instances of 3-SAT [30].
Though these results do not prove that 3-SAT is QMA-complete, they hint
nevertheless that 3-SAT is hard for quantum computers as well. Recent
studies focused on determining why 3-QSAT has instances which are hard
for adiabatic quantum computation and what general features they possess.
The idea is to use typical, but not necessarily worst-case, random instances
with a given clause density α = M/N (M number of clauses, N number of
variables) and to investigate how gmin varies (on average) as a function
of α. We refer the interested reader to [40, 39, 6] and references therein.

3 Quantum annealing
So far we have discussed whether and how quantum computational devices could be
used to tackle problems which are hard to solve by classical means. In this
section we present a class of classical heuristics (i.e. algorithms meant to
be run on classical computational devices) suggested by the behaviour of
quantum systems.
Many well-known heuristic optimization techniques [43] are based on natu-
ral metaphors: genetic algorithms, particle swarm optimization, ant-colony
algorithms, simulated annealing and tabu search. In simulated annealing
[35], for example, the space of admissible solutions to a given optimization
problem is visited by a temperature-dependent random walk. The cost func-
tion defines the potential energy profile of the solution space, and thermal
fluctuations prevent the exploration from getting stuck in a local minimum. A
suitably scheduled lowering of the temperature (annealing) then stabilizes
the walk around a, hopefully global, minimum of the potential profile.
The idea of using quantum, instead of thermal, jumps to explore the solution
space of a given optimization problem was proposed in Refs. [9, 8]³. It was
suggested by the behaviour of the stochastic process [2, 20] qν associated
with the ground state (the state of minimal energy) of a Hamiltonian of the
form:
$$ H_\nu = -\frac{\nu^2}{2}\,\frac{\partial^2}{\partial x^2} + V(x), \qquad (5) $$
where the potential function V encodes the cost function to be minimized.

³ For the connection with the QAC of the previous section we refer the reader to [46].

Figure 1: Thermal jumps, which in SA allow the exploration of the solution
space, are substituted, in QA, by quantum jumps (tunneling).

For any fixed ν, once the ground state ψν of Hν is given, the stochastic process
qν can be built by the ground state transformation [2]: let ψν ∈ L²(R, dx)
be the ground state of the Hamiltonian (5). Under quite general hypotheses
on the potential V, the ground state ψν can be taken strictly positive. The
transformation
$$ U : \psi \;\mapsto\; U\psi = \frac{\psi}{\psi_\nu}, $$

or ground state transformation, is well defined and unitary from L²(R, dx)
to L²(R, ψν² dx). Under this transformation, Hν takes the form:
$$ U H_\nu U^{-1} = -\nu L_\nu + E_\nu, $$
where
$$ L_\nu = \frac{\nu}{2}\,\frac{d^2}{dx^2} + b_\nu\,\frac{d}{dx} $$
has the form of the generator of a diffusion process qν on the real line, with
drift
$$ b_\nu(x) = \frac{1}{2}\,\frac{d}{dx}\,\ln \psi_\nu^2(x). $$
The behaviour of the sample paths of the stationary ground state process
qν is characterized by long sojourns around the stable configurations, i.e.
minima of V (x), interrupted by rare large fluctuations which carry qν from
one minimum to another: qν is thus allowed to “tunnel” away from local
minima to the global minimum of V (x) (see Fig.2). The diffusive behaviour
of qν is determined by the Laplacian term in (5), i.e. the kinetic energy,
which is controlled by the parameter ν. The deep analysis of the semi-
classical limit performed in [32] shows, indeed, that as ν ց 0 “the process
will behave much like a Markov chain whose state space is discrete and given
by the stable configurations”.
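A rough numerical sketch of this picture, under our own discretization choices
(the grid, the double-well potential and the normalization of the drift are
assumptions of the sketch, not taken from [2, 32]): discretize Hν, extract ψν,
and integrate the diffusion with an Euler-Maruyama scheme.

import numpy as np

nu = 0.5
x = np.linspace(-2.2, 2.2, 400)
dx = x[1] - x[0]
V = 2.0 * (x**2 - 1.0)**2 + 0.1 * x              # slightly tilted double well

# Finite-difference H_nu = -(nu^2/2) d^2/dx^2 + V(x)
lap = (np.diag(np.ones(len(x) - 1), 1) + np.diag(np.ones(len(x) - 1), -1)
       - 2.0 * np.eye(len(x))) / dx**2
H = -0.5 * nu**2 * lap + np.diag(V)
psi_nu = np.abs(np.linalg.eigh(H)[1][:, 0])       # ground state, taken positive

# Drift of the ground-state process; here normalized as (nu^2/2) d/dx ln psi_nu^2
b = 0.5 * nu**2 * np.gradient(np.log(psi_nu**2 + 1e-300), dx)

# Euler-Maruyama integration of dq = b(q) dt + nu dW
rng = np.random.default_rng(0)
dt, n_steps = 1e-3, 500_000
q = np.empty(n_steps)
q[0] = -1.0
for k in range(1, n_steps):
    drift = np.interp(q[k - 1], x, b)
    q[k] = q[k - 1] + drift * dt + nu * np.sqrt(dt) * rng.standard_normal()
# q makes long sojourns near the two stable configurations x = -1 and x = +1;
# jumps between them become increasingly rare as nu is decreased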
Quantum annealing, or Quantum Stochastic Optimization (QSO), in the
original proposals of Refs.[9, 8], did not intend to reproduce the dynamics
of a quantum mechanical system (a task computationally intractable [23]),
but rather to simulate the ground state process of the Hamiltonian Hν as
ν ց 0. The algorithm is completely classical and intends to capture the
features of the quantum process which can allow an efficient exploration of
the solution space described above.
The desired ground state estimation is obtained by means of imaginary time
evolution, that is by letting an arbitrarily chosen normalized initial state
ψ(x, 0) of the system evolve not under the action of e−itHν but under e−tHν .
This replacement of the Schrödinger equation by a heat equation has the
following property:
$$ \lim_{t\to\infty} \frac{1}{\alpha_t}\, e^{-t\frac{H_\nu}{\hbar}}\, |\psi(0)\rangle = |\psi_\nu\rangle, $$
where $\alpha_t = \langle \phi_0 \,|\, \psi(0)\rangle \exp(-t E_0/\hbar)$. The proof of this fact is straightforward:


given the eigenstates φ0 (= ψν ), φ1 , . . . , φN of Hν and the corresponding

8
x
4
x
-4 -2 2 4

t
100 200 300 400 500

-2

-4

Figure 2: A sample path of the ground state process qν for small ν. The po-
tential profile is reproduced on the vertical axes. In the inset we reproduce
the potential V (x) (dashed line) and the ground state for the this example
(solid line). For the example chosen the probability distribution is more
concentrated around a local rather than the global minimum. What mat-
ters, however, is that the process jumps between stable configurations, the
absolute minimum included.

eigenvalues E0 < E1 < . . . < EN , we have:


N Hν
1 −t Hν 1 X e−t h̄
lim e h̄ | ψ(0) i = lim E0 | φn ih φn | ψ(0) i (6)
t→∞ αt t→∞ h φ0 | ψ(0) i
e−t h̄
n=0
N
1 X En −E0
= lim e−t h̄ | φn ih φn | ψ(0) i
t→∞ h φ0 | ψ(0) i
n=0
N
1 X En −E0
= lim | φ0 i + e−t h̄ | φn ih φn | ψ(0) i.
t→∞ h φ0 | ψ(0) i
n=1

9
The excited states are “projected out” and, asymptotically, only the ground
state survives.
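A compact numerical illustration of the projection mechanism behind (6) (a
toy example with ħ = 1 and a small random symmetric matrix standing in
for Hν): applying $e^{-tH}$ to a generic normalized vector and renormalizing
drives the overlap with the true ground state to 1.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
H = (A + A.T) / 2.0                       # small symmetric stand-in for H_nu

ground = np.linalg.eigh(H)[1][:, 0]       # phi_0

psi0 = rng.standard_normal(6)
psi0 /= np.linalg.norm(psi0)              # arbitrary normalized initial state

for t in (0.0, 1.0, 5.0, 20.0):
    psi_t = expm(-t * H) @ psi0
    psi_t /= np.linalg.norm(psi_t)        # plays the role of the factor 1/alpha_t
    print(t, abs(np.dot(ground, psi_t)))  # overlap with phi_0 tends to 1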
We are nevertheless left with the problem of determining e−tHν . A direct
diagonalization of Hν is impossible, given the dimensionality of the operator
which scales exponentially with the dimension of the system. However, the
Feynman-Kac formula [33, 24]:
$$ \left( e^{-tH_\nu}\psi \right)(x) = \mathbb{E}\!\left[ \exp\!\left( -\int_0^t V(\varepsilon(\tau))\,d\tau \right) \psi(\varepsilon(t)) \,\Big|\, \varepsilon(0) = x \right], \qquad (7) $$

suggests a way to estimate the evolved state $e^{-tH_\nu}\psi$ for any initial state
ψ: the component x of the state vector corresponds to the expected value of
the exponential of the integral of the potential (cost) function V along the
stochastic trajectories ǫ(τ), weighted by the value of ψ at the endpoint ǫ(t).
Each trajectory ǫ(τ) starts at x and, in the time interval (0, t), makes N(t)
transitions toward nearest neighbours with uniform probability; the number
of random transitions N(t) is a Poisson process of intensity ν. A detailed
description of the sampling and of
the ground state estimation process is beyond the scope of this paper and
we refer the interested reader to Ref.[8]. Here it suffices to say that once
given an estimate of ψν (y) for every neighbour y of the current solution x,
the exploration proceeds with high probability toward the nearest-neighbour
solution having the highest estimated value of ψν. In Quantum Stochastic
Optimization, therefore, each move is local (i.e. to a nearest neighbour of
the current solution) but the decision rule on which neighbour to accept is
based on the inspection of an ensemble of long chains ǫ(τ).
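The sketch below spells out the sampling behind (7) under our own simplify-
ing choices (a ring of L sites with a toy cost V, and parameter values picked
only for illustration; it is not the implementation of Refs. [9, 8]): the number
of jumps is Poisson with mean νt, jump times are uniform on (0, t), and each
jump moves to one of the two nearest neighbours with equal probability.

import numpy as np

rng = np.random.default_rng(2)
L = 32
V = np.cos(2.0 * np.pi * np.arange(L) / L)        # toy cost function on a ring

def feynman_kac_estimate(x, t, nu, psi, n_paths=20_000):
    """Monte Carlo estimate of (e^{-t H} psi)(x) along the lines of formula (7)."""
    total = 0.0
    for _ in range(n_paths):
        n_jumps = rng.poisson(nu * t)
        jump_times = np.sort(rng.uniform(0.0, t, size=n_jumps))
        holding = np.diff(np.concatenate(([0.0], jump_times, [t])))
        state, action = x, 0.0
        for tau in holding[:-1]:                  # every interval but the last ends in a jump
            action += V[state] * tau
            state = (state + rng.choice((-1, 1))) % L
        action += V[state] * holding[-1]
        total += np.exp(-action) * psi[state]
    return total / n_paths

psi = np.ones(L) / np.sqrt(L)                     # arbitrary normalized initial state
print(feynman_kac_estimate(x=0, t=2.0, nu=1.0, psi=psi))

In QSO, estimates of this kind, computed for the neighbours of the current
solution, play the role of the quantities ψν(k) used by the decision rule of
Procedure 2.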
When implemented in a working computer program, the procedure described
above requires a variety of approximations. First of all, equation (7) repro-
duces the ground-state only in the limit t → ∞, corresponding to sample
paths of infinite length. The expected value of the right hand side of (7),
moreover, will be estimated by means of a finite size sample. Finally, the
number of neighbours of a configuration may be too large to allow the es-
timation of ψν on the whole neighbourhood of a given solution. The accuracy
of the approximation depends, for example, on the actual length νt of
the sample paths, the number of paths n per neighbour and the size
|Neigh| of the subset of the set of neighbours of a given solution to consider
at each move. Some “engineering” is also in order [9] (see the pseudo-code of
Quantum Stochastic Optimization reported here): for example, we can call
a local optimization procedure every tloc quantum transitions. In addition,
if the search appears to be stuck in a local minimum, we can force a jump
to another local minimum.
In Fig. 3 we show the results produced by QSO applied to random instances

of the Graph Partitioning problem: given a graph G = (V, E), where V de-
notes the set of vertices and E the set of edges, partition V into two subsets
such that the subsets have equal size and the number of edges with end-
points in different subsets is minimized. We extracted our instances from
the family G500,0.01 (i.e. random graphs with 500 nodes and an edge between
any two nodes with probability 0.01) used in Ref. [31] and compared the
results with those obtained on the same instances by Simulated Annealing.
The comparison was made by letting the programs implementing SA
and QA run for essentially the same (machine) time. Simulated Annealing
performs better than Quantum Stochastic Optimization: it finds, on aver-
age, better (smaller mean value and variance) approximations of the best
partition. What is interesting is that QSO goes down very fast toward a
local minimum and then relies on quantum transitions to escape from it. In
SA, on the contrary, a steep descent toward a local minimum can result
in an early freezing of the search. We also tested QSO on random satisfiable
instances of 3-SAT, with qualitatively similar results.
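Generating a G500,0.01 instance and scoring a balanced partition takes only
a few lines; the sketch below (with its own random seed and a random
balanced split) is meant only to fix the conventions, not to reproduce the
benchmark of Fig. 3.

import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 0.01

# Random graph G_{n,p}: symmetric boolean adjacency matrix, no self-loops
upper = np.triu(rng.random((n, n)) < p, k=1)
adj = upper | upper.T

# A random balanced partition encoded as +1/-1 labels, exactly n/2 each
labels = np.array([1] * (n // 2) + [-1] * (n // 2))
rng.shuffle(labels)

def cut_size(adj, labels):
    """Number of edges with endpoints in different subsets."""
    crossing = labels[:, None] != labels[None, :]
    return int(np.sum(adj & crossing) // 2)

print(adj.sum() // 2, cut_size(adj, labels))   # ~1250 edges, about half of them cut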
Recent variants of QSO, such as Imaginary Time Quantum Monte Carlo
(ITQMC) [26, 45, 46, 38], introduce a proper annealing schedule into the
algorithm: the control parameter ν(t) is suitably reduced during the run,
just as the temperature is in SA. Between annealing steps, the ground
state ψν is estimated by the same estimation procedure used in QSO. For
some optimization problems, such as the Graph Partitioning described
above, ITQMC performs better than SA; for others, e.g. 3-SAT, it does
not. There is therefore no a priori guarantee that QA will produce
better heuristics than SA [10].
Interestingly enough, it has been shown that ITQMC performs at least as
well as the real time adiabatic approximation [42] described in the previous
section. The intuitive reason behind this result has been clearly stated in
[4, 48]: the projection mechanism inherent in imaginary time makes the evo-
lution of the initial condition much more stable than the one determined
by the Schrödinger equation. The adiabatic theorem, in fact, relates the
energy gap between the ground state and the first excited state to the an-
nealing time: the smaller the energy gap, the slower must be the change
of the Hamiltonian HA (t) in order to avoid Landau-Zener transitions of the
system from the ground state to excited states (which do not correspond to
the solution we are looking for). Since the projection mechanism suppresses
the amplitude transferred to excited states by accidental Landau-Zener tran-
sitions, the interpolation between the initial Hamiltonian HA(0) and the
target Hamiltonian HT (the annealing) can be carried out more rapidly.

Procedure 1 Quantum annealing
Input: initial condition init; control parameter ν; duration tmax; tunnel
time tdrill; local opt. time tloc.
t ← 0;
ǫ ← init;
vmin ← cost(ǫ);
while t < tmax do
    j ← 0;
    repeat
        i ← 0;
        repeat
            ǫ ← Quantum Transition(ǫ, ν, tmax);
            if cost(ǫ) < vmin then
                vmin ← cost(ǫ);
                i, j ← 0;
            else
                i ← i + 1;
            end if
        until i > tloc
        ǫ ← Local Optimization(ǫ);
        if cost(ǫ) < vmin then
            vmin ← cost(ǫ);
            j ← 0;
        else
            j ← j + 1;
        end if
    until j > tdrill
    draw a trajectory of length νtmax and jump there;
    ǫ ← Local Optimization(ǫ);
    t ← elapsed time;
end while

Procedure 2 Quantum Transition
Input: initial condition ǫ; chain length νt; set of neighbours to estimate
Neigh.
for all neighbours k ∈ Neigh do
    estimate the wave function ψν(k);
end for
best ← select a neighbour in Neigh with probability proportional to ψν(k)
return best

Procedure 3 Local optimization
Input: initial condition ǫ.
return the best solution found by any steepest-descent strategy.

Figure 3: Quantum Stochastic Optimization (QSO) vs. Simulated Anneal-
ing (SA). Benchmark: 100 random instances of G500,0.01. Allocated CPU-
time: 600 seconds. Parametrization: QSO: νt = 20; n = 4; |Neigh| = 5; SA:
geometric scheduling, temperature at the k-th annealing step tk = t0(0.99)^k;
t0 = 3.0; number of proposed moves per annealing step = 16 · 500. (a) and
(b): distribution of Vmin for QSO and SA respectively. (c) and (d): values
of Vmin reached respectively by QSO and SA on each instance of Graph
Partitioning after every 50 (machine-time) seconds. In the insets of (c) and
(d): values of vmin along a single search path.

4 Dissipative dynamics
It is quite natural to ask whether a “projection” mechanism similar
to the one provided by the unphysical imaginary-time evolution is available
in some real-time quantum dynamics. If it were, we could hope
to exploit it to speed up quantum adiabatic computation without worrying
too much about Landau-Zener transitions. Dissipative quantum annealing,
a novel model proposed in [18, 16], represents a first step in answering
this question.
Given the usual Hamiltonian Hν, we add a non-linear term, the Kostin
friction [36, 37], modeling the effective interaction of the quantum system
with an environment which absorbs energy. In the continuous case the
Schrödinger-Kostin equation reads:
$$ i\,\frac{d}{dt}\psi = H_\nu \psi + \beta K(\psi), \qquad (8) $$
where Hν is the Hamiltonian of (5) and
$$ K(\psi) = \frac{1}{2i}\,\log\!\left(\frac{\psi}{\psi^{*}}\right). \qquad (9) $$

By rewriting the state ψ(x) à la de Broglie,
$$ \psi(x) = \sqrt{\rho(x)}\, e^{iS(x)}, \qquad (10) $$
the nonlinear part K of the Hamiltonian (8) assumes the form
$K\!\left(\sqrt{\rho(x)}\,e^{iS(x)}\right) = S(x)$, whose gradient corresponds to the current
velocity [41]. This justifies the names friction for the term K and friction
constant for β > 0.
Given a solution ψ(t) of the Schrödinger equation $i\,\frac{d}{dt}\psi = (H_\nu + K)\psi$, the
following inequality holds:
$$ \frac{d}{dt}\,\langle \psi(t)\,|\,H_\nu\,|\,\psi(t)\rangle \le 0. $$
It means that the energy of the system is a monotone non-increasing function
of time: the system dissipates energy.⁴

⁴ We point out that the full interaction with the environment would include a random
force describing the back-action of the environment on the system. We are thus well
aware of the fact that we are considering just the dissipative part of a fluctuating-
dissipating system. Indeed, the complete Kostin equation arose as the Schrödinger-
representation counterpart of the Quantum Langevin Equation, that is, the equation for
the observables of a harmonic oscillator coupled to a bath of oscillators [25] in the
thermodynamic limit. The proper language to describe such an interaction would be the
theory of open quantum systems [15]. We decided, however, to adopt a phenomenological
approach such as the one used in [27] and, more recently, in the context of time-dependent
density functional theory, in [52]. A proper analysis of the complete system-environment
interaction is currently under study.
In Ref.[18] we showed that friction can play a useful role in suppressing two
genuinely quantum effects which can affect the exploration of the solution
space: Bloch oscillations and Anderson localization. Bloch oscillations [12]
are due to the relation v = sin p between momentum p and velocity v of
a particle moving on a regular lattice: as the momentum of the particle
increases monotonically, the velocity can change sign and the particle starts
moving backward along the lattice.
Anderson localization [7], instead, appears when the lattice presents irreg-
ularities, manifesting themselves as a random potential on the sites of the
lattice. The state of a quantum particle moving in a highly irregular poten-
tial landscape is spatially localized and the probability of tunneling through
large regions is exponentially suppressed.
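A minimal numerical illustration of this effect, with our own toy parameters
and the single-excitation hopping matrix that will also be used below for the
XY chain: add i.i.d. Gaussian on-site energies to a clean 1-D hopping Hamil-
tonian and compare how much of an initially localized walker ever reaches
the far end.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
s = 100                                           # chain length
hop = np.diag(np.ones(s - 1), 1) + np.diag(np.ones(s - 1), -1)

def far_end_weight(disorder, t=200.0):
    """Probability of finding the walker in the last quarter of the chain at time t."""
    H = hop + np.diag(disorder * rng.standard_normal(s))
    psi = np.zeros(s, dtype=complex)
    psi[0] = 1.0                                  # walker starts at the first site
    psi = expm(-1j * t * H) @ psi
    return float(np.sum(np.abs(psi[3 * s // 4:]) ** 2))

print(far_end_weight(0.0))   # clean chain: ballistic spreading, sizeable far-end weight
print(far_end_weight(1.0))   # disordered chain: the far-end weight is exponentially suppressed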
In [18] we considered the simple case of a spin chain of finite size s governed
by an XY Hamiltonian, that is:
$$ H_{XY} = \lambda \sum_{j=1}^{s-1} \bigl( \sigma_1(j)\,\sigma_1(j+1) + \sigma_2(j)\,\sigma_2(j+1) \bigr). \qquad (11) $$

We took an initial configuration having a single spin up and all the others
down. The spin up plays the role of a quantum walker and, by a suitable
choice of the initial conditions [17], it behaves as an excitation moving ballis-
tically along the chain, as in Figure 4.

Figure 4: s = 100, λ = 1. The excitation, initially localized at the beginning
of the chain, moves ballistically back and forth.

We introduce a simple potential/cost function V(x) = −gx. The motion of
the walker is affected as shown in Figure 5(a): Bloch oscillations appear,
hindering an exhaustive exploration of the solution space. As Figure 5(b)
shows, the oscillations can be suppressed
by adding the frictional term, in the discretized form:
$$ (K\psi)(x) = \sum_{y=2}^{x} \sin\bigl( S(y) - S(y-1) \bigr), \qquad (12) $$
to the system Hamiltonian. Here the phase function S is defined as in
Eq.(10).
Moreover, the convergence toward the ground state is witnessed by a pro-
gressive concentration of the probability mass at the end of the chain where
the ground state is localized.
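A rough explicit time-stepping sketch of this dissipative dynamics (our own
integration scheme and parameter choices, not the numerics of [18]); in the
single-excitation sector, (11) reduces to an s × s hopping matrix with ampli-
tude 2λ, to which the linear potential and the state-dependent friction term
(12) are added:

import numpy as np
from scipy.linalg import expm

s, lam = 60, 1.0
g, beta = 6.0 / s, 8.0 / s
hop = 2.0 * lam * (np.diag(np.ones(s - 1), 1) + np.diag(np.ones(s - 1), -1))
H0 = hop + np.diag(-g * np.arange(1, s + 1))      # hopping + potential V(x) = -g x

def kostin_term(psi):
    """Discretized friction (12): (K psi)(x) = sum_{y=2..x} sin(S(y) - S(y-1))."""
    S = np.angle(psi)
    return np.concatenate(([0.0], np.cumsum(np.sin(np.diff(S)))))

psi = np.zeros(s, dtype=complex)
psi[0] = 1.0                                      # excitation at the start of the chain
dt, n_steps = 0.01, 4000
energies = []
for _ in range(n_steps):
    K = np.diag(kostin_term(psi))                 # friction enters as a state-dependent potential
    psi = expm(-1j * dt * (H0 + beta * K)) @ psi
    psi /= np.linalg.norm(psi)                    # guard against accumulated round-off
    energies.append(np.vdot(psi, H0 @ psi).real)
# energies should be approximately non-increasing; over long enough runs the
# probability mass drifts toward the low-potential end of the chain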
Potential profiles determined by optimization problems are usually quite ir-
regular. Figure 6(a) shows the consequence of the addition of a random
potential to the free Hamiltonian (11): the probability amplitude remains
trapped in the first half of the chain, because of Anderson localization.
Figure 6(b) shows how dissipation can counteract Anderson localization: the
probability mass percolates through the irregularities and concentrates in the
region where we chose to localize the ground state by adding a linear poten-
tial V(x) = −gx as in the previous example.

Figure 5: s = 100, 0 ≤ t ≤ 20s. Frame (a): g = 6/s, β = 0: the wave
packet gets confined due to Bloch reflection. Frame (b): g = 6/s, β = 8/s:
Bloch reflections are suppressed by friction.

Figure 6: s = 100, 0 ≤ t ≤ 20s. Frame (a): g = 0, β = 0: the wavepacket
is localized due to Anderson localization. Frame (b): g = 6/s, β = 8/s:
friction suppresses Anderson localization: the probability mass percolates
through the imperfections (an additional random potential extracted from a
normal population of mean 0 and standard deviation σ = 0.06) of the spin
chain.

5 Conclusion and outlook


Quantum Annealing is an example of cross-contamination between two dif-
ferent research areas: computer science and physics. The “constructive
interference” goes both ways: on one side, a genuinely quantum effect,
tunneling, suggested a completely classical approximation algorithm which

provides good heuristics for hard combinatorial problems. In this paper we
presented in detail the earliest version of Quantum Annealing, Quantum
Stochastic Optimization.
On the other side, a trick used to estimate the state of minimal energy of
a quantum system by classical means, Imaginary Time evolution (6), sug-
gested the idea of exploiting dissipation to stabilize the evolution of quantum
systems.
A mechanism able to suppress Anderson localization would be most welcome
in Quantum Adiabatic Computation. In fact, Altshuler et al. in Ref.[3]
advanced the conjecture that “Anderson localization casts clouds over adia-
batic quantum optimization”. The preliminary results presented here on the
effects of Kostin friction on simple toy models are promising: Anderson local-
ization is suppressed. Still, whether an analogous mechanism can be of some
use when solving hard optimization problems, together with a quantitative
assessment of the results presented in [16], is an open research problem.

References
[1] D. Aharonov et al. Adiabatic quantum computation is equivalent to standard quantum computation. SIAM J. on Computing, 37:166, 2007.
[2] S. Albeverio, R. Høegh-Krohn, and L. Streit. Energy forms, Hamiltonians and distorted Brownian paths. J. Math. Phys., 18:907–917, 1977.
[3] B. Altshuler, H. Krovi, and J. Roland. Anderson localization casts clouds over adiabatic quantum optimization. e-print arXiv:0912.0746v1, 2009.
[4] P. Amara, D. Hsu, and J. Straub. Global minimum searches using an approximate solution of the imaginary time Schrödinger equation. J. Chem. Phys., 97:6715–6721, 1993.
[5] A. Ambainis and O. Regev. An elementary proof of the quantum adiabatic theorem. arXiv:quant-ph/0411152, 2004.
[6] M. H. S. Amin and V. Choi. First order quantum phase transition in adiabatic quantum computation. arXiv:0904.1387v3 [quant-ph], Dec. 2009.
[7] P. Anderson. Absence of diffusion in certain random lattices. Phys. Rev., 109(5):1492–1505, 1958.
[8] B. Apolloni, C. Carvalho, and D. de Falco. Quantum stochastic optimization. Stoc. Proc. and Appl., 33:223–244, 1989.
[9] B. Apolloni, N. Cesa-Bianchi, and D. de Falco. A numerical implementation of Quantum Annealing. In Albeverio et al. (Eds.), Stochastic Processes, Physics and Geometry, Proceedings of the Ascona/Locarno Conference, 4–9 July 1988, pages 97–111. World Scientific, 1990.
[10] D. Battaglia, G. Santoro, L. Stella, E. Tosatti, and O. Zagordi. Deterministic and stochastic quantum annealing approaches, pages 171–206. Lecture Notes in Physics vol. 206. Springer-Verlag, 2005.
[11] E. Bernstein and U. Vazirani. Quantum complexity theory. SIAM J. on Computing, 26(5):1411–1473, 1997.
[12] F. Bloch. Über die Quantenmechanik der Elektronen in Kristallgittern. Z. Phys., 52:555–600, 1929.
[13] M. Born and V. Fock. Beweis des Adiabatensatzes. Z. Physik A, 51:165, 1928.
[14] S. Bravyi. Efficient algorithm for a quantum analogue of 2-SAT. arXiv:quant-ph/0602108, 2006.
[15] H. P. Breuer and F. Petruccione. The Theory of Open Quantum Systems. Oxford University Press, New York, 2002.
[16] D. de Falco, E. Pertoso, and D. Tamascelli. Dissipative quantum annealing. In Proceedings of the 29th Conference on Quantum Probability and Related Topics. World Scientific (in press), 2009.
[17] D. de Falco and D. Tamascelli. Speed and entropy of an interacting continuous time quantum walk. J. Phys. A: Math. Gen., 39:5873–5895, 2006.
[18] D. de Falco and D. Tamascelli. Quantum annealing and the Schrödinger-Langevin-Kostin equation. Phys. Rev. A, 79:012315, 2009.
[19] D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proc. Roy. Soc. London, Series A, 400:97–117, 1985.
[20] S. Eleuterio and S. Vilela Mendes. Stochastic ground-state processes. Phys. Rev. B, 50:5035–5040, 1994.
[21] E. Farhi et al. A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science, 292, 2001.
[22] E. Farhi et al. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106v1, 2000.
[23] R. Feynman. Simulating physics with computers. Int. J. Theor. Phys., 21:467–488, 1982.
[24] R. P. Feynman. Space-time approach to non-relativistic quantum mechanics. Rev. Mod. Phys., 20(2):367–387, 1948.
[25] G. Ford, M. Kac, and P. Mazur. Statistical mechanics of assemblies of coupled oscillators. J. Math. Phys., 6:504–515, 1965.
[26] T. Gregor and R. Car. Minimization of the potential energy surface of Lennard-Jones clusters by quantum optimization. Chem. Phys. Lett., 412:125–130, 2005.
[27] J. J. Griffin and K.-K. Kan. Colliding heavy ions: Nuclei as dynamical fluids. Rev. Mod. Phys., 48(3):467–477, 1976.
[28] L. Grover. A fast quantum-mechanical algorithm for database search. In Proc. 28th Annual ACM Symposium on the Theory of Computing. New York: ACM, 1996.
[29] L. Grover. From Schrödinger's equation to the quantum search algorithm. Am. J. Phys., 69:769–777, 2001.
[30] T. Hogg. Adiabatic quantum computing for random satisfiability problems. Phys. Rev. A, 67(2):022314, Feb 2003.
[31] D. S. Johnson, C. R. Aragon, L. A. McGeoch, and C. Schevon. Optimization by simulated annealing: an experimental evaluation; Part I, graph partitioning. Operations Research, 37:865–892, 1989.
[32] G. Jona-Lasinio, F. Martinelli, and E. Scoppola. New approach to the semiclassical limit of quantum mechanics. I. Multiple tunnelling in one dimension. Comm. Math. Phys., 80:223–254, 1981.
[33] M. Kac. On distributions of certain Wiener functionals. Trans. Am. Math. Soc., pages 1–13, 1949.
[34] J. Kempe, A. Kitaev, and O. Regev. The complexity of the local Hamiltonian problem. SIAM J. on Computing, 35(5):1070–1097, 2006.
[35] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
[36] M. Kostin. On the Schrödinger-Langevin equation. J. Chem. Phys., 57:3589–3591, 1972.
[37] M. Kostin. Friction and dissipative phenomena in quantum mechanics. J. Stat. Phys., 12:145–151, 1975.
[38] K. Kurihara, S. Tanaka, and S. Miyashita. Quantum annealing for clustering. arXiv:0905.3527v2 [quant-ph], 2009.
[39] C. Laumann et al. On product, generic and random generic quantum satisfiability. arXiv:0910.2058v1 [quant-ph], 2009.
[40] C. Laumann et al. Phase transitions and random quantum satisfiability. arXiv:0903.1904v1 [quant-ph], 2009.
[41] A. Messiah. Quantum Mechanics. John Wiley and Sons, 1958.
[42] S. Morita and H. Nishimori. Mathematical foundations of quantum annealing. J. Math. Phys., 49:125210, 2008.
[43] C. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. New York: Dover, 1998.
[44] B. Reichardt. The quantum adiabatic optimization algorithm and local minima. In Proc. 36th STOC, page 502, 2004.
[45] G. Santoro and E. Tosatti. Optimization using quantum mechanics: quantum annealing through adiabatic evolution. J. Phys. A: Math. Gen., 39:R393–R431, 2006.
[46] G. Santoro and E. Tosatti. Optimization using quantum mechanics: quantum annealing through adiabatic evolution. J. Phys. A: Math. Theor., 41:209801, 2008.
[47] P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. on Computing, 26:1484, 1997.
[48] L. Stella, G. Santoro, and E. Tosatti. Optimization by quantum annealing: lessons from simple cases. Phys. Rev. B, 72:014303, 2005.
[49] W. van Dam, M. Mosca, and U. Vazirani. How powerful is adiabatic quantum computation? In Proc. FOCS '01, 2001.
[50] J. Watrous. Succinct quantum proofs for properties of finite groups. In Proc. IEEE FOCS 2000, pages 537–546, 2000.
[51] A. P. Young, S. Knysh, and V. N. Smelyanskiy. Size dependence of the minimum excitation gap in the quantum adiabatic algorithm. Phys. Rev. Lett., 101(17):170503, Oct 2008.
[52] J. Yuen-Zhou et al. Time-dependent density functional theory for open quantum systems with unitary propagation. arXiv:0902.4505v3 [cond-mat.mtrl-sci], 2009.
[53] C. Zener. A theory of the electrical breakdown of solid dielectrics. Proc. R. Soc. Lond. A, 145:523–529, 1934.
[54] M. Žnidarič and M. Horvat. Exponential complexity of an adiabatic algorithm for an NP-complete problem. Phys. Rev. A, 73(2):022329, Feb 2006.
