Give change for a specific amount n with the least number of coins of the denominations d1 > d2 > ... > dm used in that locale. (Here, we assume that the denominations are ordered in decreasing order.) For example, the widely used coin denominations in the United States are d1 = 25 (quarter), d2 = 10 (dime), d3 = 5 (nickel), and d4 = 1 (penny). How would you give change with coins of these denominations of, say, 48 cents?
If you came up with the answer 1 quarter, 2 dimes, and 3 pennies, you followed, consciously or not, a logical strategy of making a sequence of best choices among the currently available alternatives.
Indeed, in the first step, you could have given one coin of any of the four
denominations. “Greedy” thinking leads to giving one quarter because it
reduces the remaining amount the most, namely, to 23 cents. In the second
step, you had the same coins at your disposal, but you could not give a
quarter, because it would have violated the problem’s constraints. So your
best selection in this step was one dime, reducing the remaining amount to
13 cents. Giving one more dime left you with 3 cents to be given with three
pennies.
Is this solution to the instance of the change-making
problem optimal? Yes, it is.
In fact, one can prove that the greedy algorithm yields
an optimal solution for every positive integer amount
with these coin denominations.
At the same time, it is easy to give an example of coin denominations for which the greedy algorithm does not yield an optimal solution for some amounts, e.g., d1 = 25, d2 = 10, d3 = 1 and n = 30: the greedy algorithm gives six coins (25 + 1 + 1 + 1 + 1 + 1), whereas three dimes (10 + 10 + 10) suffice.
The approach applied to the change-making problem is called greedy. Computer scientists consider it a general design technique even though it is applicable only to optimization problems.
The greedy approach suggests constructing a solution through
a sequence of steps, each expanding a partially constructed
solution obtained so far, until a complete solution to the
problem is reached.
On each step—and this is the central point of this
technique—the choice made must be:
feasible, i.e., it has to satisfy the problem’s constraints
locally optimal, i.e., it has to be the best local choice among all
feasible choices available on that step
irrevocable, i.e., once made, it cannot be changed on
subsequent steps of the algorithm
These requirements explain the technique’s name: on
each step, it suggests a “greedy” grab of the best
alternative available in the hope that a sequence of locally
optimal choices will yield a (globally) optimal solution to
the entire problem.
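To make the technique concrete, here is a minimal Python sketch of the greedy change-making algorithm described above; the function name and its interface are choices made for this illustration, not part of the original discussion.

def greedy_change(amount, denominations):
    """Greedy change-making: at each step, grab as many coins of the
    largest usable denomination as possible (the feasible, locally
    optimal, irrevocable choice).
    denominations must be sorted in decreasing order: d1 > d2 > ... > dm."""
    coins = []
    for d in denominations:
        count, amount = divmod(amount, d)  # take as many coins of d as fit
        coins.extend([d] * count)
    return coins

# With U.S. denominations the greedy answer for 48 cents is optimal:
print(greedy_change(48, [25, 10, 5, 1]))  # [25, 10, 10, 1, 1, 1], 6 coins
# With denominations 25, 10, 1 and n = 30, greedy is not optimal:
print(greedy_change(30, [25, 10, 1]))     # 6 coins, although 10+10+10 uses 3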
What is usually more difficult is to prove that a greedy
algorithm yields an optimal solution (when it does).
One of the common ways to do this is by mathematical induction: we show that a partially constructed solution obtained by the greedy algorithm on each iteration can be extended to an optimal solution to the problem.
The second way to prove optimality of a greedy algorithm
is to show that on each step it does at least as well as any
other algorithm could in advancing toward the problem’s
goal.
Application Areas
Many networking algorithms use the greedy approach. Here is a list of a few of them:
Travelling Salesman Problem
Prim's Minimum Spanning Tree Algorithm
Kruskal's Minimum Spanning Tree Algorithm
Dijkstra's Shortest Path Algorithm
Graph - Map Coloring
Graph - Vertex Cover
Knapsack Problem
Job Scheduling Problem
There are many similar problems that use the greedy approach to find an optimal solution.
Prim’s Algorithm
The following problem arises naturally in many
practical situations:
Given n points, connect them in the cheapest possible
way so that there will be a path between every pair of
points.
It has direct applications to the design of all kinds of networks, including communication, computer, transportation, and electrical networks, by providing the cheapest way to achieve connectivity.
We can represent the points given by vertices of a
graph, possible connections by the graph’s edges, and
the connection costs by the edge weights.
Then the question can be posed as the minimum
spanning tree problem, defined formally as follows.
DEFINITION: A spanning tree of an undirected
connected graph is its connected acyclic subgraph
(i.e., a tree) that contains all the vertices of the graph.
If such a graph has weights assigned to its edges, a
minimum spanning tree is its spanning tree of the
smallest weight, where the weight of a tree is
defined as the sum of the weights on all its edges.
The minimum spanning tree problem is the
problem of finding a minimum spanning tree for a
given weighted connected graph.
Prim’s Algorithm
Prim’s algorithm constructs a minimum spanning tree
through a sequence of expanding subtrees.
The initial subtree in such a sequence consists of a single
vertex selected arbitrarily from the set V of the graph’s
vertices. On each iteration, the algorithm expands the current
tree in the greedy manner by simply attaching to it the
nearest vertex not in that tree. (By the nearest vertex, we
mean a vertex not in the tree connected to a vertex in the tree
by an edge of the smallest weight. Ties can be broken
arbitrarily.)
The algorithm stops after all the graph’s vertices have been
included in the tree being constructed.
Since the algorithm expands a tree by exactly one vertex on
each of its iterations, the total number of such iterations is n
− 1, where n is the number of vertices in the graph. The tree
generated by the algorithm is obtained as the set of edges
used for the tree expansions.
Here is a sketch of this algorithm; the Python rendering below follows the description above, with the graph represented as an adjacency dictionary (an assumption made for this illustration).
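import heapq

def prim_mst(graph, start):
    """Prim's algorithm: grow a single tree, always attaching the nearest
    vertex not yet in the tree. graph is a dict mapping each vertex to a
    dict of {neighbor: edge weight}; this representation is assumed."""
    in_tree = set()
    tree_edges = []
    # each heap entry is a label: (weight of the shortest known edge to
    # the tree, fringe vertex, nearest tree vertex)
    heap = [(0, start, None)]
    while heap and len(in_tree) < len(graph):
        weight, u, parent = heapq.heappop(heap)
        if u in in_tree:
            continue  # stale label; u was already attached to the tree
        in_tree.add(u)
        if parent is not None:
            tree_edges.append((parent, u, weight))
        for v, w in graph[u].items():
            if v not in in_tree:
                heapq.heappush(heap, (w, v, u))  # a possibly better label for v
    return tree_edges

# A small assumed example:
g = {"a": {"b": 3, "c": 1}, "b": {"a": 3, "c": 2}, "c": {"a": 1, "b": 2}}
print(prim_mst(g, "a"))  # [('a', 'c', 1), ('c', 'b', 2)]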
The nature of Prim’s algorithm
The nature of Prim’s algorithm makes it necessary to provide each vertex not in the current tree with the information about the shortest edge connecting the vertex to a tree vertex. We can provide such information by attaching two labels to a vertex: the name of the nearest tree vertex and the length (the weight) of the corresponding edge. Vertices that are not adjacent to any of the tree vertices can be given the ∞ label indicating their “infinite” distance to the tree vertices and a null label for the name of the nearest tree vertex. (Alternatively, we can split the vertices that are not in the tree into two sets, the “fringe” and the “unseen.” The fringe contains only the vertices that are not in the tree but are adjacent to at least one tree vertex. These are the candidates from which the next tree vertex is selected. The unseen vertices are all the other vertices of the graph, called “unseen” because they are yet to be affected by the algorithm.)

With such labels, finding the next vertex to be added to the current tree T = (VT, ET) becomes a simple task of finding a vertex with the smallest distance label in the set V − VT. Ties can be broken arbitrarily.
After we have identified a vertex u* to be added to the tree, we need to perform two operations:
move u* from the set V − VT to the set of tree vertices VT
for each remaining vertex u in V − VT that is connected to u* by a shorter edge than u’s current distance label, update its labels by u* and the weight of the edge between u* and u, respectively
[Figure: a state-space tree explored from its start node; branches leading to dead ends are abandoned until a node marked success is reached]
Backtracking: Hamiltonian Circuit
Problem
The Hamiltonian path problem and the Hamiltonian cycle problem are problems of determining whether a Hamiltonian path (a path in an undirected or directed graph that visits each vertex exactly once) or a Hamiltonian cycle exists in a given graph (whether directed or undirected).
Without loss of generality, we can assume that if a
Hamiltonian circuit exists, it starts at vertex a.
Accordingly, we make vertex a the root of the state-space
tree (Figure ?????).
The first component of our future solution, if it exists, is a
first intermediate vertex of a Hamiltonian circuit to be
constructed.
Using alphabetical order to break the three-way tie among the vertices adjacent to a, we select vertex b.
From b, the algorithm proceeds to c, then to d, then to
e, and finally to f, which proves to be a dead end.
So the algorithm backtracks from f to e, then to d, and
then to c, which provides the first alternative for the
algorithm to pursue.
Going from c to e eventually proves useless, and the
algorithm has to backtrack from e to c and then to b.
From there, it goes to the vertices f , e, c, and d, from
which it can legitimately return to a, yielding the
Hamiltonian circuit a, b, f , e, c, d, a. If we wanted to
find another Hamiltonian circuit, we could continue
this process by backtracking from the leaf of the
solution found.
FIGURE ????? (a) Graph. (b) State-space tree for finding a Hamiltonian circuit. The
numbers above the nodes of the tree indicate the order in which the
nodes are generated.
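A compact backtracking sketch for the Hamiltonian circuit problem follows; the adjacency-dictionary representation is an assumption, and the example graph at the end is an assumed one rather than the graph from the figure.

def hamiltonian_circuit(graph, start):
    """Backtracking search for a Hamiltonian circuit beginning at start.
    graph maps each vertex to an iterable of adjacent vertices.
    Returns the circuit as a list of vertices, or None if none exists."""
    n = len(graph)
    path = [start]

    def extend(current):
        if len(path) == n:
            # every vertex visited; the circuit closes only if an edge
            # leads back to the start vertex
            return start in graph[current]
        for v in sorted(graph[current]):   # alphabetical order breaks ties
            if v not in path:              # feasibility: no repeated vertices
                path.append(v)
                if extend(v):
                    return True
                path.pop()                 # dead end: backtrack
        return False

    return path + [start] if extend(start) else None

# An assumed example (a 4-cycle):
g = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["a", "c"]}
print(hamiltonian_circuit(g, "a"))  # ['a', 'b', 'c', 'd', 'a']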
Branch-and-Bound
Recall that the central idea of backtracking, discussed in
the previous section, is to cut off a branch of the problem’s
state-space tree as soon as we can deduce that it cannot
lead to a solution.
This idea can be strengthened further if we deal with an
optimization problem.
Compared to backtracking, branch-and-bound requires
two additional items:
a way to provide, for every node of a state-space tree, a bound on the best value of the objective function on any solution that can be obtained by adding further components to the partially constructed solution represented by the node
the value of the best solution seen so far
Assignment Problem with
Branch-and-Bound
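As a hedged illustration of how the two ingredients above work together, here is a minimal best-first branch-and-bound sketch for the assignment problem; the cost matrix and the row-minimum bounding function are assumptions chosen for this example.

import heapq

def assign_bb(cost):
    """Best-first branch-and-bound for the assignment problem: person i
    is given job partial[i], and cost[i][j] is the cost of that pairing."""
    n = len(cost)

    def bound(partial):
        # ingredient 1: a lower bound on every solution extending `partial`
        # -- its cost so far plus, for each unassigned person, the cheapest
        # job still available (an optimistic but valid estimate)
        used = set(partial)
        b = sum(cost[i][j] for i, j in enumerate(partial))
        for i in range(len(partial), n):
            b += min(cost[i][j] for j in range(n) if j not in used)
        return b

    best_value, best = float("inf"), None  # ingredient 2: best seen so far
    heap = [(bound(()), ())]               # start from the empty assignment
    while heap:
        b, partial = heapq.heappop(heap)
        if b >= best_value:
            continue                        # prune: cannot beat the best so far
        if len(partial) == n:
            best_value, best = b, partial   # a leaf's bound is its true cost
            continue
        for j in range(n):                  # try each job for the next person
            if j not in partial:
                child = partial + (j,)
                cb = bound(child)
                if cb < best_value:
                    heapq.heappush(heap, (cb, child))
    return best_value, best

# An illustrative 4x4 cost matrix (assumed for this example):
C = [[9, 2, 7, 8],
     [6, 4, 3, 7],
     [5, 8, 1, 8],
     [7, 6, 9, 4]]
print(assign_bb(C))  # (13, (1, 0, 2, 3)): person 1 takes job 2, person 2 job 1, etc.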