Understanding quantum information and computation
By John Watrous
Lesson 3
Quantum circuits
© Copyright IBM Corp. 2024, 2025
Circuits
Circuits are models of computation:
• Wires carry information
• Gates represent operations
In this series, circuits are always acyclic — information flows from left to right.
Example: Boolean circuits
Wires store binary values and gates represent Boolean logic operations, such
as AND (∧), OR (∨), NOT (¬), and FANOUT.
[Figure: an example Boolean circuit with input bits X and Y, built from AND, OR, NOT, and FANOUT gates, shown evaluated gate by gate on the input X = 0, Y = 1.]
Example: arithmetic circuits
Wires store numbers and gates represent arithmetic operations, such as
addition (+) and multiplication (∗).
[Figure: an example arithmetic circuit with inputs x and y and the constant 1. A multiplication gate computes x·x = x², addition gates compute x² + y and y + 1, and a final multiplication gate outputs (x² + y)(y + 1) = x²y + x² + y² + y.]
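As a small illustration (not part of the lesson, and assuming the gate arrangement reconstructed above), the arithmetic circuit can be evaluated in Python by composing ordinary functions, one per gate:

# Evaluate the arithmetic circuit above, one gate at a time.
def multiply(a, b):      # a multiplication (*) gate
    return a * b

def add(a, b):           # an addition (+) gate
    return a + b

def circuit(x, y):
    w1 = multiply(x, x)          # x^2
    w2 = add(w1, y)              # x^2 + y
    w3 = add(y, 1)               # y + 1
    return multiply(w2, w3)      # (x^2 + y)(y + 1) = x^2 y + x^2 + y^2 + y

print(circuit(2, 3))             # 28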
Quantum circuits
In the quantum circuit model, the wires represent qubits and the gates represent
both unitary operations and measurements.
Example
[Circuit: a qubit initialized to ∣0⟩ passes through the gates H, S, H, and T, in that order, and the output state is (1+i)/2 ∣0⟩ + (1/√2) ∣1⟩.]

H = ⎛ 1/√2    1/√2 ⎞        S = ⎛ 1   0 ⎞        T = ⎛ 1       0      ⎞
    ⎝ 1/√2   −1/√2 ⎠            ⎝ 0   i ⎠            ⎝ 0   (1+i)/√2   ⎠

T H S H = ⎛ (1+i)/2    (1−i)/2 ⎞
          ⎝  1/√2       i/√2   ⎠
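The calculation can be checked numerically; a minimal NumPy sketch (illustrative, not from the lesson) multiplies the gate matrices in the order T H S H, since the leftmost gate in the circuit acts first, and applies the product to ∣0⟩:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
T = np.array([[1, 0], [0, (1 + 1j) / np.sqrt(2)]])

ket0 = np.array([1, 0], dtype=complex)

U = T @ H @ S @ H                     # the circuit's overall unitary
print(U @ ket0)                       # [(1+1j)/2, 0.7071...]
print((1 + 1j) / 2, 1 / np.sqrt(2))   # the amplitudes claimed above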
Quantum circuits
Example

[Circuit: a two-qubit circuit in which a Hadamard gate H is applied to the top qubit, followed by a controlled-NOT gate with the top qubit as control and the bottom qubit as target.]

Convention: In this series (and in Qiskit), ordering the qubits of a circuit from bottom to top is equivalent to ordering them from left to right in the Dirac notation.

The matrix of this circuit is

⎛ 1/√2    1/√2     0       0   ⎞
⎜   0       0     1/√2   −1/√2 ⎟
⎜   0       0     1/√2    1/√2 ⎟
⎝ 1/√2   −1/√2     0       0   ⎠
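A hedged NumPy sketch of the same matrix, assuming the ordering convention above (the bottom qubit is the leftmost tensor factor) and a controlled-NOT whose control is the top qubit:

import numpy as np

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]])      # |0><0|
P1 = np.array([[0, 0], [0, 1]])      # |1><1|
X = np.array([[0, 1], [1, 0]])

# Bottom wire = left tensor factor, top wire = right tensor factor.
hadamard_on_top = np.kron(I, H)
cnot = np.kron(I, P0) + np.kron(X, P1)   # control on top, target on bottom

U = cnot @ hadamard_on_top
print(np.round(U * np.sqrt(2), 3))       # the 4x4 matrix above, scaled by sqrt(2)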
Quantum circuits
Example

[Circuit: qubits Y (top wire) and X (bottom wire). A Hadamard gate is applied to Y, followed by a controlled-NOT gate with Y as control and X as target; both qubits are then measured, and the outcomes are stored in the classical bits B (for Y) and A (for X).]
Quantum circuits
Commonly used gates include:
• Single-qubit gates: X, Y, Z, H, S, T
• The controlled-NOT (CNOT) gate
• The swap gate
• The Toffoli (controlled-controlled-NOT) gate
• The Fredkin (controlled-swap) gate
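For readers following along in Qiskit, these gates correspond directly to QuantumCircuit methods; a minimal sketch (assuming Qiskit is installed, not part of the lesson):

from qiskit import QuantumCircuit

qc = QuantumCircuit(3)

# Single-qubit gates
qc.x(0); qc.y(0); qc.z(0); qc.h(0); qc.s(0); qc.t(0)

# Multi-qubit gates
qc.cx(0, 1)         # controlled-NOT
qc.swap(0, 1)       # swap
qc.ccx(0, 1, 2)     # Toffoli (controlled-controlled-NOT)
qc.cswap(0, 1, 2)   # Fredkin (controlled-swap)

print(qc.draw())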
Quantum circuits
It is also sometimes convenient to view arbitrary unitary operations as gates.
[Figure: a gate representing an arbitrary unitary operation U, and the corresponding controlled-U gate, in which a control qubit determines whether U is applied.]
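One common way to build a controlled-unitary matrix is ∣0⟩⟨0∣ ⊗ 1 + ∣1⟩⟨1∣ ⊗ U, with the control qubit as the leftmost tensor factor; a small NumPy sketch of this construction (an illustration, not the lesson's code):

import numpy as np

def controlled(U):
    """Controlled-U with the control qubit as the leftmost tensor factor."""
    P0 = np.array([[1, 0], [0, 0]])
    P1 = np.array([[0, 0], [0, 1]])
    return np.kron(P0, np.eye(U.shape[0])) + np.kron(P1, U)

X = np.array([[0, 1], [1, 0]])
print(controlled(X))    # reproduces the controlled-NOT matrix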
Inner products
When we use the Dirac notation, a ket is a column vector, and its corresponding
bra is a row vector:
∣ψ⟩ = ⎛ α1 ⎞          ⟨ψ∣ = ( ᾱ1  ⋯  ᾱn )
      ⎜ ⋮  ⎟
      ⎝ αn ⎠
Suppose that we have two kets:
∣ψ⟩ = ⎛ α1 ⎞          and          ∣φ⟩ = ⎛ β1 ⎞
      ⎜ ⋮  ⎟                             ⎜ ⋮  ⎟
      ⎝ αn ⎠                             ⎝ βn ⎠
We then have
⟨ψ∣φ⟩ = ( ᾱ1  ⋯  ᾱn ) ⎛ β1 ⎞
                      ⎜ ⋮  ⎟
                      ⎝ βn ⎠
      = ᾱ1 β1 + ⋯ + ᾱn βn
This is the inner product of ∣ψ⟩ and ∣φ⟩.
Inner products
Alternatively, suppose that we have two column vectors expressed like this:
∣ψ⟩ = ∑_{a∈Σ} αa ∣a⟩   and   ∣φ⟩ = ∑_{b∈Σ} βb ∣b⟩
Then the inner product of these vectors is as follows:
⟨ψ∣φ⟩ = ( ∑_{a∈Σ} ᾱa ⟨a∣ )( ∑_{b∈Σ} βb ∣b⟩ )
      = ∑_{a∈Σ} ∑_{b∈Σ} ᾱa βb ⟨a∣b⟩
      = ∑_{a∈Σ} ᾱa βa
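Numerically, the conjugation of the first argument is the detail to watch; NumPy's vdot conjugates its first argument, which matches ⟨ψ∣φ⟩. A quick illustrative check:

import numpy as np

psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # |psi>
phi = np.array([1, 0], dtype=complex)                 # |phi> = |0>

# <psi|phi> = sum over a of conj(alpha_a) * beta_a
print(np.vdot(psi, phi))                              # (0.7071...-0j)
print(np.sum(np.conj(psi) * phi))                     # the same value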
Inner products
Example

∣φ⟩ = (1/2) ∣0⟩ + (√3/2) ∣1⟩
∣ψ⟩ = (1/√2) ∣0⟩ − (1/√2) ∣1⟩

[Figure: the two vectors drawn in the real plane spanned by ∣0⟩ and ∣1⟩; the angle θ between them is 105°.]

The inner product of these two vectors is

⟨ψ∣φ⟩ = (1 − √3) / (2√2) = cos(105°) ≈ −0.2588
Inner products
Example

∣φ⟩ = (1/2) ∣0⟩ + (√3/2) ∣1⟩
∣ψ⟩ = (√3/2) ∣0⟩ − (1/2) ∣1⟩

[Figure: the two vectors drawn in the real plane spanned by ∣0⟩ and ∣1⟩; the angle between them is 90°.]

The inner product of these two vectors is

⟨ψ∣φ⟩ = 0 = cos(90°)
Inner products
Relationship to the Euclidean norm
The inner product of any vector
∣ψ⟩ = ∑_{a∈Σ} αa ∣a⟩
with itself is
⟨ψ∣ψ⟩ = ∑_{a∈Σ} ᾱa αa = ∑_{a∈Σ} ∣αa∣² = ∥∣ψ⟩∥²
That is, the Euclidean norm of a vector ∣ψ⟩ is given by
∥∣ψ⟩∥ = √⟨ψ∣ψ⟩
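A quick numerical check of this identity (illustrative):

import numpy as np

psi = np.array([0.6, 0.8j])                # a unit vector
print(np.linalg.norm(psi))                 # Euclidean norm: 1.0
print(np.sqrt(np.vdot(psi, psi).real))     # sqrt(<psi|psi>): also 1.0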
Inner products
Conjugate symmetry
For any two vectors
∣ψ⟩ = ∑_{a∈Σ} αa ∣a⟩   and   ∣φ⟩ = ∑_{b∈Σ} βb ∣b⟩
we have
⟨ψ∣φ⟩ = ∑_{a∈Σ} ᾱa βa   and   ⟨φ∣ψ⟩ = ∑_{a∈Σ} β̄a αa
and therefore ⟨ψ∣φ⟩ and ⟨φ∣ψ⟩ are complex conjugates of one another.
Inner products
Linearity in the second argument
Suppose that ∣ψ⟩, ∣φ1 ⟩, and ∣φ2 ⟩ are vectors and α1 and α2 are
complex numbers. If we define a new vector
∣φ⟩ = α1 ∣φ1 ⟩ + α2 ∣φ2 ⟩
then
⟨ψ∣φ⟩ = ⟨ψ∣(α1 ∣φ1 ⟩ + α2 ∣φ2 ⟩) = α1 ⟨ψ∣φ1 ⟩ + α2 ⟨ψ∣φ2 ⟩
Inner products
Conjugate linearity in the first argument
Suppose that ∣ψ1 ⟩, ∣ψ2 ⟩, and ∣φ⟩ are vectors and β1 and β2 are
complex numbers. If we define a new vector
∣ψ⟩ = β1 ∣ψ1 ⟩ + β2 ∣ψ2 ⟩
then
⟨ψ∣φ⟩ = (β̄1 ⟨ψ1∣ + β̄2 ⟨ψ2∣)∣φ⟩ = β̄1 ⟨ψ1∣φ⟩ + β̄2 ⟨ψ2∣φ⟩
Inner products
The Cauchy–Schwarz inequality
For every choice of vectors ∣ψ⟩ and ∣φ⟩ we have
∣⟨ψ∣φ⟩∣ ≤ ∥∣ψ⟩∥ ∥∣φ⟩∥
(Equality holds if and only if ∣ψ⟩ and ∣φ⟩ are linearly dependent.)
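A quick numerical sanity check with random vectors (illustrative):

import numpy as np

rng = np.random.default_rng(7)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

lhs = abs(np.vdot(psi, phi))
rhs = np.linalg.norm(psi) * np.linalg.norm(phi)
print(lhs <= rhs)    # True: |<psi|phi>| <= ||psi|| ||phi||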
Orthogonality and orthonormality
Two vectors ∣ψ⟩ and ∣φ⟩ are orthogonal if their inner product is zero:
⟨ψ∣φ⟩ = 0
An orthogonal set {∣ψ1⟩, . . . , ∣ψm⟩} is one in which all pairs of vectors are orthogonal:
⟨ψj∣ψk⟩ = 0   (for all j ≠ k)
An orthonormal set {∣ψ1⟩, . . . , ∣ψm⟩} is an orthogonal set of unit vectors:
⟨ψj∣ψk⟩ = 1 when j = k   and   ⟨ψj∣ψk⟩ = 0 when j ≠ k
An orthonormal basis {∣ψ1 ⟩, . . . , ∣ψm ⟩} is an orthonormal set that forms a
basis (of a given space).
Orthogonality and orthonormality
Example
For any classical state set Σ, the set of all standard basis vectors
{∣a⟩ ∶ a ∈ Σ}
is an orthonormal basis.
Example
The set {∣+⟩, ∣−⟩} is an orthonormal basis for the 2-dimensional space
corresponding to a single qubit.
Example
The Bell basis {∣φ⁺⟩, ∣φ⁻⟩, ∣ψ⁺⟩, ∣ψ⁻⟩} is an orthonormal basis for the
4-dimensional space corresponding to two qubits.
Example
The set {∣0⟩, ∣+⟩} is not an orthogonal set because
⟨0∣+⟩ = 1/√2 ≠ 0
Orthogonality and orthonormality
Fact
Suppose that
{∣ψ1 ⟩, . . . , ∣ψm ⟩}
is an orthonormal set of vectors in an n-dimensional space.
(Orthonormal sets are always linearly independent, so these vectors span
a subspace of dimension m ≤ n.)
If m < n, then there must exist vectors
∣ψm+1 ⟩, . . . , ∣ψn ⟩
so that {∣ψ1 ⟩, . . . , ∣ψn ⟩} forms an orthonormal basis.
(The Gram–Schmidt orthogonalization process can be used to construct
these vectors.)
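A minimal NumPy sketch of such a completion, running the Gram–Schmidt process against the standard basis vectors (an illustration of the fact above, not the lesson's construction):

import numpy as np

def complete_to_orthonormal_basis(vectors, dim, tol=1e-10):
    """Extend an orthonormal list of vectors to an orthonormal basis of C^dim."""
    basis = [np.asarray(v, dtype=complex) for v in vectors]
    for k in range(dim):
        e = np.zeros(dim, dtype=complex)
        e[k] = 1
        for b in basis:                       # remove components along known vectors
            e = e - np.vdot(b, e) * b
        if np.linalg.norm(e) > tol:           # keep whatever is left, normalized
            basis.append(e / np.linalg.norm(e))
    return basis

basis = complete_to_orthonormal_basis([np.array([1, 1, 0]) / np.sqrt(2)], 3)
print(np.round(np.array(basis), 3))           # three orthonormal vectors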
Orthogonality and orthonormality
Orthonormal bases are closely connected with unitary matrices.
These conditions on a square matrix U are equivalent:
1. The matrix U is unitary (i.e., U†U = 1 = UU†).
2. The rows of U form an orthonormal basis.
3. The columns of U form an orthonormal basis.
For example, consider a 3 × 3 matrix U:
U† = ⎛ ᾱ1,1  ᾱ2,1  ᾱ3,1 ⎞            U = ⎛ α1,1  α1,2  α1,3 ⎞
     ⎜ ᾱ1,2  ᾱ2,2  ᾱ3,2 ⎟                ⎜ α2,1  α2,2  α2,3 ⎟
     ⎝ ᾱ1,3  ᾱ2,3  ᾱ3,3 ⎠                ⎝ α3,1  α3,2  α3,3 ⎠
Forming vectors from the columns of U, we can express U†U like this:

∣ψ1⟩ = ⎛ α1,1 ⎞        ∣ψ2⟩ = ⎛ α1,2 ⎞        ∣ψ3⟩ = ⎛ α1,3 ⎞
       ⎜ α2,1 ⎟               ⎜ α2,2 ⎟               ⎜ α2,3 ⎟
       ⎝ α3,1 ⎠               ⎝ α3,2 ⎠               ⎝ α3,3 ⎠

U†U = ⎛ ⟨ψ1∣ψ1⟩  ⟨ψ1∣ψ2⟩  ⟨ψ1∣ψ3⟩ ⎞
      ⎜ ⟨ψ2∣ψ1⟩  ⟨ψ2∣ψ2⟩  ⟨ψ2∣ψ3⟩ ⎟
      ⎝ ⟨ψ3∣ψ1⟩  ⟨ψ3∣ψ2⟩  ⟨ψ3∣ψ3⟩ ⎠
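This is easy to confirm numerically for any unitary matrix; an illustrative NumPy check using a 4 × 4 unitary built from the gates seen earlier:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
U = np.kron(H @ S, H)                        # some 4x4 unitary matrix

gram = U.conj().T @ U                        # U^dagger U
cols = [U[:, k] for k in range(4)]

print(np.allclose(gram, np.eye(4)))          # True: the columns are orthonormal
print(np.allclose(gram, [[np.vdot(a, b) for b in cols] for a in cols]))  # True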
Orthogonality and orthonormality
Fact
Given any orthonormal set of n-dimensional vectors
{∣ψ1 ⟩, . . . , ∣ψm ⟩}
there is a unitary matrix U whose first m columns are these vectors:
U = ⎛   ⋮      ⋮           ⋮        ⋮            ⋮   ⎞
    ⎜ ∣ψ1⟩   ∣ψ2⟩   ⋯   ∣ψm⟩   ∣ψm+1⟩   ⋯   ∣ψn⟩ ⎟
    ⎝   ⋮      ⋮           ⋮        ⋮            ⋮   ⎠
Projections
A square matrix Π is called a projection if it satisfies two properties:
1. Π† = Π
2. Π² = Π
Example
If ∣ψ⟩ is a unit vector, then this matrix is a projection:
Π = ∣ψ⟩⟨ψ∣
Π† = (∣ψ⟩⟨ψ∣)† = (⟨ψ∣)† (∣ψ⟩)† = ∣ψ⟩⟨ψ∣ = Π        (using (AB)† = B†A†)
Π² = (∣ψ⟩⟨ψ∣)² = ∣ψ⟩⟨ψ∣ψ⟩⟨ψ∣ = ∣ψ⟩⟨ψ∣ = Π        (since ⟨ψ∣ψ⟩ = 1)
Example
If {∣ψ1 ⟩, . . . , ∣ψm ⟩} is an orthonormal set, then this is a projection:
Π = ∑_{k=1}^{m} ∣ψk⟩⟨ψk∣

Π† = ( ∑_{k=1}^{m} ∣ψk⟩⟨ψk∣ )† = ∑_{k=1}^{m} (∣ψk⟩⟨ψk∣)† = ∑_{k=1}^{m} ∣ψk⟩⟨ψk∣ = Π

Π² = ∑_{j=1}^{m} ∑_{k=1}^{m} ∣ψj⟩⟨ψj∣ψk⟩⟨ψk∣ = ∑_{k=1}^{m} ∣ψk⟩⟨ψk∣ = Π
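A small NumPy check of the two defining properties for a projection built this way (illustrative):

import numpy as np

# An orthonormal set of two vectors in a 4-dimensional space.
v1 = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
v2 = np.array([0, 1, 0, 0], dtype=complex)

Pi = np.outer(v1, v1.conj()) + np.outer(v2, v2.conj())

print(np.allclose(Pi, Pi.conj().T))   # property 1: Pi is equal to its adjoint
print(np.allclose(Pi @ Pi, Pi))       # property 2: Pi squared equals Pi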
Fact
Every projection matrix Π takes the form
Π = ∑_{k=1}^{m} ∣ψk⟩⟨ψk∣
for some orthonormal set {∣ψ1 ⟩, . . . , ∣ψm ⟩}.
(This includes the case Π = 0.)
Projective measurements
A collection of projections {Π1 , . . . , Πm } that satisfies
Π1 + ⋯ + Πm = 1
describes a projective measurement.
When such a measurement is performed on a system in the state ∣ψ⟩, two
things happen:
1. The outcome k ∈ {1, . . . , m} of the measurement is chosen randomly:
Pr(outcome is k) = ∥Πk ∣ψ⟩∥² = ⟨ψ∣Πk ∣ψ⟩
2. The state of the system becomes
Πk ∣ψ⟩ / ∥Πk ∣ψ⟩∥
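A minimal NumPy simulation of these two rules (an illustration, not the lesson's code):

import numpy as np

rng = np.random.default_rng()

def projective_measurement(projections, psi):
    """Sample an outcome k with probability <psi|Pi_k|psi> and collapse the state."""
    probs = np.array([np.vdot(psi, P @ psi).real for P in projections])
    k = rng.choice(len(projections), p=probs / probs.sum())
    post = projections[k] @ psi
    return k, post / np.linalg.norm(post)

# Example: the standard basis measurement {|0><0|, |1><1|} performed on |+>.
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(projective_measurement([P0, P1], plus))   # outcome 0 or 1, each with probability 1/2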
Projective measurements
We can also choose different names for the measurement outcomes. Any
collection of projections {Πa ∶ a ∈ Γ } that satisfies the condition
∑_{a∈Γ} Πa = 1
describes a projective measurement having outcomes in the set Γ . The rules are
the same as before:
1. The outcome a ∈ Γ of the measurement is chosen randomly:
Pr(outcome is a) = ∥Πa ∣ψ⟩∥²
2. The state of the system becomes
Πa ∣ψ⟩ / ∥Πa ∣ψ⟩∥
Projective measurements
Example
Standard basis measurements are projective measurements:
• The outcomes are the classical states of the system being
measured.
• The measurement is described by the set {∣a⟩⟨a∣ ∶ a ∈ Σ}.
Suppose that we measure the state
∣ψ⟩ = ∑_{a∈Σ} αa ∣a⟩
Each outcome a appears with probability ∥∣a⟩⟨a∣ψ⟩∥² = ∣αa∣².
Conditioned on the outcome a, the state becomes
∣a⟩⟨a∣ψ⟩ / ∥∣a⟩⟨a∣ψ⟩∥ = (αa / ∣αa∣) ∣a⟩
Example
Performing a standard basis measurement on a system X and doing
nothing to a system Y is equivalent to performing the projective
measurement
{∣a⟩⟨a∣ ⊗ 1Y ∶ a ∈ Σ}
on the system (X, Y).
Each measurement outcome a appears with probability
∥(∣a⟩⟨a∣ ⊗ 1)∣ψ⟩∥²
The state of the system (X, Y) then becomes
(∣a⟩⟨a∣ ⊗ 1)∣ψ⟩ / ∥(∣a⟩⟨a∣ ⊗ 1)∣ψ⟩∥
Projective measurements
Example
Define two projections as follows:
Π0 = ∣φ⁺⟩⟨φ⁺∣ + ∣φ⁻⟩⟨φ⁻∣ + ∣ψ⁺⟩⟨ψ⁺∣
Π1 = ∣ψ⁻⟩⟨ψ⁻∣
The projective measurement {Π0 , Π1 } is an interesting one…
Every projective measurement can be implemented using unitary operations
and standard basis measurements.
[Circuit: an ancillary qubit initialized to ∣0⟩ is put through a Hadamard gate, used as the control of a controlled-SWAP gate acting on the two qubits being measured, put through a second Hadamard gate, and then measured in the standard basis.]
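As a numerical illustration (not part of the lesson), the two projections can be built from the Bell states and checked to describe a valid projective measurement; measuring the state ∣01⟩, for instance, yields the outcome 1 with probability 1/2.

import numpy as np

labels = ["00", "01", "10", "11"]
ket = {s: np.eye(4, dtype=complex)[i] for i, s in enumerate(labels)}

phi_plus  = (ket["00"] + ket["11"]) / np.sqrt(2)
phi_minus = (ket["00"] - ket["11"]) / np.sqrt(2)
psi_plus  = (ket["01"] + ket["10"]) / np.sqrt(2)
psi_minus = (ket["01"] - ket["10"]) / np.sqrt(2)

def proj(v):
    return np.outer(v, v.conj())

Pi0 = proj(phi_plus) + proj(phi_minus) + proj(psi_plus)
Pi1 = proj(psi_minus)

print(np.allclose(Pi0 + Pi1, np.eye(4)))     # True: the projections sum to the identity
state = ket["01"]
print(np.vdot(state, Pi1 @ state).real)      # 0.5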
Irrelevance of global phases
Definition
Suppose that ∣ψ⟩ and ∣φ⟩ are quantum state vectors satisfying
∣φ⟩ = α∣ψ⟩
The states ∣ψ⟩ and ∣φ⟩ are then said to differ by a global phase.
(This requires ∣α∣ = 1. Equivalently, α = e^{iθ} for some real number θ.)
Imagine that two states that differ by a global phase are measured. If we start
with the state ∣φ⟩, the probability to obtain any chosen outcome a is
∣⟨a∣φ⟩∣² = ∣α⟨a∣ψ⟩∣² = ∣α∣² ∣⟨a∣ψ⟩∣² = ∣⟨a∣ψ⟩∣²
That’s the same probability as if we started with the state ∣ψ⟩.
The same is true for an arbitrary projective measurement: the probability of each outcome a is
∥Πa ∣φ⟩∥² = ∥α Πa ∣ψ⟩∥² = ∣α∣² ∥Πa ∣ψ⟩∥² = ∥Πa ∣ψ⟩∥²
Suppose we apply a unitary operation to two states that differ by a global phase:
U∣φ⟩ = U(α∣ψ⟩) = α(U∣ψ⟩)
They still differ by a global phase…
Consequently, two quantum state vectors ∣ψ⟩ and ∣φ⟩ that differ by a global
phase are completely indistinguishable and are considered to be equivalent.
Irrelevance of global phases
Example
The quantum states
∣−⟩ = (1/√2) ∣0⟩ − (1/√2) ∣1⟩   and   −∣−⟩ = −(1/√2) ∣0⟩ + (1/√2) ∣1⟩
differ by a global phase.
Irrelevance of global phases
Example
The quantum states
∣+⟩ = (1/√2) ∣0⟩ + (1/√2) ∣1⟩   and   ∣−⟩ = (1/√2) ∣0⟩ − (1/√2) ∣1⟩
do not differ by a global phase. (This is a relative phase difference.)
This is consistent with the observation that these states can be
discriminated perfectly:
∣⟨0∣H∣+⟩∣² = 1        ∣⟨0∣H∣−⟩∣² = 0
∣⟨1∣H∣+⟩∣² = 0        ∣⟨1∣H∣−⟩∣² = 1
No-cloning theorem
Theorem (No-cloning theorem)
Let X and Y both have the classical state set {0, . . . , d−1}, where d ≥ 2.
There does not exist a unitary operation U on the pair (X, Y) such that
∀∣ψ⟩ ∶ U(∣ψ⟩ ⊗ ∣0⟩) = ∣ψ⟩ ⊗ ∣ψ⟩
[Figure: a hypothetical cloning circuit: the state ∣ψ⟩, together with a register initialized to ∣0⋯0⟩, is fed into a unitary operation that outputs ∣ψ⟩ ⊗ ∣ψ⟩.]
The operation U must clone the standard basis states ∣0⟩ and ∣1⟩:
U(∣0⟩ ⊗ ∣0⟩) = ∣0⟩ ⊗ ∣0⟩
U(∣1⟩ ⊗ ∣0⟩) = ∣1⟩ ⊗ ∣1⟩
Therefore, by linearity,
U( ( (1/√2) ∣0⟩ + (1/√2) ∣1⟩ ) ⊗ ∣0⟩ ) = (1/√2) ∣0⟩ ⊗ ∣0⟩ + (1/√2) ∣1⟩ ⊗ ∣1⟩
But this is not the correct behavior; we must have
U( ( (1/√2) ∣0⟩ + (1/√2) ∣1⟩ ) ⊗ ∣0⟩ ) = ( (1/√2) ∣0⟩ + (1/√2) ∣1⟩ ) ⊗ ( (1/√2) ∣0⟩ + (1/√2) ∣1⟩ )
These two output states are not equal, so no unitary operation U of the required form can exist.
Remarks:
• Approximate forms of the no-cloning theorem, which place limits on how accurately states can be cloned, are known.
• Copying a standard basis state is possible — the no-cloning theorem does
not contradict this.
[Circuit: a controlled-NOT gate whose control qubit is in a standard basis state ∣a⟩ and whose target is initialized to ∣0⟩ outputs ∣a⟩ ⊗ ∣a⟩.]
• Cloning a probabilistic state (classically) is also impossible.
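The second remark can be seen concretely with a controlled-NOT gate, which copies standard basis states but does not clone ∣+⟩; a NumPy illustration (control qubit as the leftmost factor):

import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# Standard basis states are copied: |a>|0> -> |a>|a>.
print(np.allclose(CNOT @ np.kron(ket1, ket0), np.kron(ket1, ket1)))   # True

# But |+>|0> becomes an entangled state, not |+>|+>.
print(np.allclose(CNOT @ np.kron(plus, ket0), np.kron(plus, plus)))   # False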
Discriminating non-orthogonal states
It is not possible to perfectly discriminate two non-orthogonal quantum states.
Equivalently, if we can discriminate two quantum states perfectly, then they must be
orthogonal.
Two states ∣ψ⟩ and ∣φ⟩ can be discriminated perfectly if there is a unitary operation
U that works like this:
[Figure: two runs of the same procedure. The state ∣ψ⟩ (respectively ∣φ⟩), together with a register initialized to ∣0⋯0⟩, is fed into U, and a standard basis measurement then produces the outcome 0 (respectively 1) with certainty.]
U(∣0⋯0⟩∣ψ⟩) = ∣π0⟩∣0⟩        U(∣0⋯0⟩∣φ⟩) = ∣π1⟩∣1⟩

∣0⋯0⟩∣ψ⟩ = U†(∣π0⟩∣0⟩)        ∣0⋯0⟩∣φ⟩ = U†(∣π1⟩∣1⟩)

⟨ψ∣φ⟩ = ⟨0⋯0∣0⋯0⟩⟨ψ∣φ⟩ = (⟨π0∣⟨0∣)UU†(∣π1⟩∣1⟩) = ⟨π0∣π1⟩⟨0∣1⟩ = 0
Discriminating non-orthogonal states
Conversely, orthogonal quantum states can be perfectly discriminated.
In particular, if ∣ψ⟩ and ∣φ⟩ are orthogonal, then any unitary matrix whose first
two columns are ∣ψ⟩ and ∣φ⟩ will work.
U = ⎛  ⋮     ⋮       ⎞
    ⎜ ∣ψ⟩  ∣φ⟩   ?  ⎟
    ⎝  ⋮     ⋮       ⎠

[Figure: applying U† and then measuring in the standard basis maps ∣ψ⟩ to ∣0⋯00⟩ and ∣φ⟩ to ∣0⋯01⟩, so the measurement outcome identifies which state was given.]
Discriminating non-orthogonal states
Alternatively, we can define a projective measurement {Π0 , Π1 } like this:
Π0 = ∣ψ⟩⟨ψ∣ Π1 = 1 − ∣ψ⟩⟨ψ∣
If we measure the state ∣ψ⟩…
Pr[outcome is 0] = ∥Π0 ∣ψ⟩∥² = ∥∣ψ⟩∥² = 1
Pr[outcome is 1] = ∥Π1 ∣ψ⟩∥² = ∥0∥² = 0
If we measure any state ∣φ⟩ orthogonal to ∣ψ⟩…
Pr[outcome is 0] = ∥Π0 ∣φ⟩∥² = ∥0∥² = 0
Pr[outcome is 1] = ∥Π1 ∣φ⟩∥² = ∥∣φ⟩∥² = 1