DAA Part 5
APPROXIMATION ALGORITHMS
Chapter at a Glance
Approximation Algorithm:
Suppose we need to solve an NP-hard problem X, and no polynomial-time algorithm is known that solves it exactly. So we must sacrifice one of three desired features:
Find the exact optimal solution.
Solve the problem in polynomial time to ensure time feasibility.
Provide solutions to arbitrary instances of the problem.
An approximation algorithm gives up the first feature; by applying it, the following are achieved:
Guaranteed polynomial-time execution.
Guaranteed to find a "high quality" near-optimal solution within feasible time.
But there is an additional responsibility: we need to prove how close the value of the solution is to the optimum, without even knowing what the optimum value is.
Examples of approximation algorithms for NP-Complete problems
We now discuss the following examples of approximation algorithms for NP-Complete problems:
Approximation ratio
Polynomial-Time Approximation Schemes
2-Approximation for Vertex Cover
2-Approximation for TSP special case
log n-Approximation for Set Cover
As it turns out, the 2-approximation is the best approximation algorithm known for vertex cover. It is an open problem either to do better or to prove that this is a lower bound.
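A minimal sketch of the 2-approximation for vertex cover mentioned above (repeatedly pick an uncovered edge and take both of its endpoints); the edge-list representation of the graph is an assumption made for this illustration:

# Sketch of the standard 2-approximation for Vertex Cover.
# Assumption: the graph is given as a list of edges (u, v).
def approx_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        # Skip edges already covered by a previously chosen vertex.
        if u in cover or v in cover:
            continue
        # Take both endpoints of an uncovered edge; any optimal cover must
        # contain at least one of them, which gives the factor of 2.
        cover.add(u)
        cover.add(v)
    return cover

# Example: on the path 1-2-3-4 the returned cover has size at most twice the optimum.
print(approx_vertex_cover([(1, 2), (2, 3), (3, 4)]))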
d) Randomized algorithms:
A randomized algorithm is an algorithm that employs a degree of randomness as part of
its logic. The algorithm typically uses uniformly random bits as an auxiliary input to
guide its behavior, in the hope of achieving good performance in the "average case" over
all possible choices of random bits. An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array).
Analysis of Randomized Algorithm:
Some randomized algorithms have deterministic time complexity. For example, this implementation of Karger's algorithm has time complexity O(E). Such algorithms are called Monte Carlo algorithms and are easier to analyse for the worst case. On the other hand, the time complexity of some randomized algorithms depends on the values of the random variables used; such randomized algorithms are called Las Vegas algorithms. These algorithms are typically analysed for the expected worst case. To compute the expected time taken in the worst case, all possible values of the random variable used need to be considered, and the time taken for every possible value needs to be evaluated. The average of all the evaluated times is the expected worst-case time complexity.
Examples:
For example, consider below a randomized version of QuickSort.

// Sorts an array arr[low..high]
randQuickSort(arr[], low, high)
1. If low >= high, then EXIT.
2. While pivot 'x' is not a Central Pivot:
   (i)   Choose uniformly at random an index x from [low..high].
   (ii)  Count the elements in arr[low..high] that are smaller
         than arr[x]. Let this count be sc.
   (iii) Count the elements in arr[low..high] that are greater
         than arr[x]. Let this count be gc.
   (iv)  Let n = (high - low + 1). If sc >= n/4 and
         gc >= n/4, then x is a central pivot.
3. Partition arr[low..high] around the pivot x.
4. // Recur for the smaller elements
   randQuickSort(arr, low, low + sc - 1)
5. // Recur for the greater elements
   randQuickSort(arr, high - gc + 1, high)
The important thing in our analysis is that the time taken by step 2 is O(n).
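A minimal runnable Python translation of the above pseudocode; the slicing-based partition is an implementation choice, and distinct keys are assumed (as in the analysis above):

import random

def rand_quick_sort(arr, low=0, high=None):
    # Sorts arr[low..high] in place using a randomly chosen central pivot.
    # Assumes all keys are distinct, as in the pseudocode above.
    if high is None:
        high = len(arr) - 1
    if low >= high:
        return
    n = high - low + 1
    while True:
        x = random.randint(low, high)                  # step 2(i): random index
        pivot = arr[x]
        sc = sum(1 for i in range(low, high + 1) if arr[i] < pivot)   # step 2(ii)
        gc = sum(1 for i in range(low, high + 1) if arr[i] > pivot)   # step 2(iii)
        if sc >= n // 4 and gc >= n // 4:              # step 2(iv): central pivot
            break
    # Step 3: partition arr[low..high] around the pivot value.
    segment = arr[low:high + 1]
    arr[low:high + 1] = ([v for v in segment if v < pivot] + [pivot] +
                         [v for v in segment if v > pivot])
    rand_quick_sort(arr, low, low + sc - 1)            # step 4: smaller side
    rand_quick_sort(arr, high - gc + 1, high)          # step 5: greater side

data = [7, 3, 9, 1, 5, 8, 2]
rand_quick_sort(data)
print(data)   # [1, 2, 3, 5, 7, 8, 9]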
e) NP Complete problem:
The NP-Complete problems are an interesting class of problems whose status is unknown. The properties of NP-Complete problems are:
No polynomial-time algorithm has yet been discovered for solving any NP-Complete problem.
No superpolynomial lower bound on the running time has been proved for any NP-Complete problem, i.e. no one has shown that a polynomial-time algorithm cannot exist.
In figure 1 we have shown the complexity of algorithms.
Complexity of algorithms: polynomial-time (easy) problems versus NP-Complete and exponential-time (hard) problems
Fig: 1
NP class
The notation NP stands for "nondeterministic polynomial time", since originally NP was defined in terms of nondeterministic machines (that is, machines that have more than one possible move from a given configuration). However, it is now customary to give an equivalent definition using the notion of a checking relation, which is simply a binary relation R ⊆ Σ* × Σ₁* for some finite alphabets Σ and Σ₁. We associate with each such relation R a language L_R over Σ ∪ Σ₁ ∪ {#} defined by

L_R = { w#y | R(w, y) },

where the symbol # is not in Σ. We say that R is polynomial-time if and only if L_R ∈ P.
Now we define the class NP of languages by the condition that a language L over Σ is in NP if and only if there is k ∈ N (N is the set of natural numbers) and a polynomial-time checking relation R such that for all w ∈ Σ*,

w ∈ L  ⟺  ∃y (|y| ≤ |w|^k and R(w, y)),

where |w| and |y| denote the lengths of w and y, respectively.
For example, in the Hamiltonian-cycle problem, given a directed graph G = (V, E), a certificate would be a sequence (v_1, v_2, ..., v_|V|) of |V| vertices. It is easy to check in polynomial time that (v_i, v_(i+1)) ∈ E for i = 1, 2, 3, ..., |V| - 1 and that (v_|V|, v_1) ∈ E as well.
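As a hedged illustration of such a polynomial-time certificate check, the sketch below verifies a claimed Hamiltonian cycle; the edge-set representation of G is an assumption made for this example:

def check_hamiltonian_certificate(vertices, edges, cycle):
    # 'cycle' is the claimed certificate: a sequence of |V| vertices.
    if sorted(cycle) != sorted(vertices):              # each vertex exactly once
        return False
    m = len(cycle)
    # Every consecutive pair, plus the wrap-around pair, must be an edge.
    return all((cycle[i], cycle[(i + 1) % m]) in edges for i in range(m))

V = [1, 2, 3, 4]
E = {(1, 2), (2, 3), (3, 4), (4, 1)}
print(check_hamiltonian_certificate(V, E, [1, 2, 3, 4]))   # True
print(check_hamiltonian_certificate(V, E, [1, 3, 2, 4]))   # False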
Is P = NP?
It is easy to see that the answer is independent of the size of the alphabet Σ (we assume |Σ| ≥ 2), since strings over an alphabet of any fixed size can be efficiently coded by strings over a binary alphabet. (For |Σ| = 1 the question is still open, although it is possible that P = NP in this case but not in the general case.)
It is trivial to show that P ⊆ NP, since for each language L over Σ, if L ∈ P then we can define the polynomial-time checking relation R ⊆ Σ* × Σ* by

R(w, y)  ⟺  w ∈ L,  for all w, y ∈ Σ*.
Here are two simple examples, using decimal notation to code natural numbers: the set of perfect squares is in P, and the set of composite numbers is in NP (and not known to be in P). For the latter, the associated polynomial-time checking relation R is given by

R(a, b)  ⟺  1 < b < a and b divides a,

where a and b here stand for the decimal notations of the corresponding natural numbers.
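A small sketch of this checking relation, decoding the decimal strings to integers first:

def composite_check(a_str, b_str):
    # R(a, b): b is a nontrivial divisor of a, certifying that a is composite.
    a, b = int(a_str), int(b_str)
    return 1 < b < a and a % b == 0

print(composite_check("91", "7"))   # True: 91 = 7 * 13 is composite
print(composite_check("97", "5"))   # False: 5 does not divide 97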
MISCELLANEOUS
Multiple Choice Type Questions
1. Which of the following is solved by using Branch and Bound method? [WBUT 2015]
a) Knapsack Problem b) Hamiltonian Problem
c) Travelling Salesman Problem d) 15-Puzzle Problem
Answer: (d)
2. Define the Discrete Fourier Transform (DFT).
Answer:
Let a = (a_0, a_1, ..., a_{n-1}), where n > 1, be the coefficient vector of the polynomial A(x) = Σ_{i=0}^{n-1} a_i x^i. Let us define the results y_k for k = 0, 1, ..., n-1 by

y_k = A(ω_n^k) = Σ_{j=0}^{n-1} a_j ω_n^{kj},

where ω_n is a principal n-th root of unity. The vector y = (y_0, y_1, ..., y_{n-1}) is the Discrete Fourier Transform (DFT) of the coefficient vector a = (a_0, a_1, ..., a_{n-1}). We also write y = DFT_n(a).
3. Write the FFT algorithm and find the computational complexity of this algorithm. [WBUT 2010]
Answer: Refer to Question No. 4(b) of Long Answer Type Questions.
4. What is meant by LUP decomposition? [WBUT 2013]
Answer:
The idea behind LUP decomposition is to find three n × n matrices L, U, and P such that
PA = LU,
where
L is a unit lower-triangular matrix,
U is an upper-triangular matrix, and
P is a permutation matrix.
We call matrices L, U, and P satisfying the above equation an LUP decomposition of the matrix A. We shall show that every nonsingular matrix A possesses such a decomposition.
The advantage of computing an LUP decomposition for the matrix A is that linear systems can be solved more readily when they are triangular, as is the case for both matrices L and U. Having found an LUP decomposition for A, we can solve the equation Ax = b by solving only triangular linear systems, as follows.
Multiplying both sides of Ax = b by P yields the equivalent equation PAx = Pb. Using our decomposition, we obtain
LUx = Pb.
We can now solve this equation by solving two triangular linear systems. Let us define y = Ux, where x is the desired solution vector. First, we solve the lower-triangular system Ly = Pb for the unknown vector y by a method called "forward substitution". Having solved for y, we then solve the upper-triangular system
Ux = y
for the unknown x by a method called "back substitution". The vector x is our solution to Ax = b, since the permutation matrix P is invertible:
Ax = P⁻¹LUx
   = P⁻¹Ly
   = P⁻¹Pb
   = b.
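A minimal sketch of solving Ax = b from a given LUP decomposition by the forward and back substitution described above (plain Python lists; the permutation is represented by a list pi with pi[i] giving the row of b selected for row i, an assumption of this illustration):

def lup_solve(L, U, pi, b):
    # Solves Ax = b given PA = LU, with the permutation encoded by pi.
    n = len(L)
    y = [0.0] * n
    x = [0.0] * n
    # Forward substitution: solve Ly = Pb (L has a unit diagonal).
    for i in range(n):
        y[i] = b[pi[i]] - sum(L[i][j] * y[j] for j in range(i))
    # Back substitution: solve Ux = y.
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example: A = [[1, 2], [3, 4]] factors with the identity permutation pi = [0, 1],
# L = [[1, 0], [3, 1]], U = [[1, 2], [0, -2]]; solving Ax = [5, 11] gives x = [1, 2].
print(lup_solve([[1, 0], [3, 1]], [[1, 2], [0, -2]], [0, 1], [5, 11]))   # [1.0, 2.0]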
5. Discuss Strassen's matrix multiplication procedure and show that the time complexity is reduced from the conventional multiplication. [WBUT 2017]
Answer:
Refer to Question No. 1 of Long Answer Type Questions.
6. Write a recursive algorithm for finding maximum and minimum from a list of elements. Also find the complexity of your algorithm. [WBUT 2018]
Answer:
Refer to Question No. 2 of Long Answer Type Questions.
Long Answer Type Questions
1. Discuss the procedure for Strassen's matrix multiplication to evaluate the product of two n × n matrices. Find the resulting recurrence relation for the same and analyze its time-complexity. Is this method an improvement over the conventional matrix multiplication method? If so, why? [WBUT 2007, 2011, 2012, 2013]
Answer:
We have seen that the divide-and-conquer approach can reduce the number of one-digit multiplications in multiplying two integers; we should not be surprised that a similar feat can be accomplished for multiplying matrices. Such an algorithm was published by V. Strassen in 1969. The principal insight of the algorithm lies in the discovery that we can find the product C of two 2-by-2 matrices A and B with just seven multiplications as opposed to the eight required by the brute-force algorithm. This is accomplished by using the following formulas:

c00 = m1 + m4 - m5 + m7        c01 = m3 + m5
c10 = m2 + m4                  c11 = m1 + m3 - m2 + m6

where
m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 - b11)
m4 = a11 * (b10 - b00)
m5 = (a00 + a01) * b11
m6 = (a10 - a00) * (b00 + b01)
m7 = (a01 - a11) * (b10 + b11)

Thus, to multiply two 2-by-2 matrices, Strassen's algorithm makes seven multiplications and 18 additions/subtractions, whereas the brute-force algorithm requires eight multiplications and four additions. These numbers should not by themselves lead us to multiply 2-by-2 matrices by Strassen's algorithm; its importance stems from its asymptotic superiority as the matrix order n goes to infinity.
Let A and B be two n-by-n matrices where n is a power of two. (If n is not a power of two, the matrices can be padded with rows and columns of zeros.) We can divide A, B, and their product C into four n/2-by-n/2 sub-matrices each as follows:

[C00 C01]   [A00 A01] [B00 B01]
[C10 C11] = [A10 A11] [B10 B11]

It is not difficult to verify that one can treat these sub-matrices as numbers to get the correct product. For example, C00 can be computed either as A00*B00 + A01*B10 or as M1 + M4 - M5 + M7, where M1, M4, M5, and M7 are found by Strassen's formulas, with the numbers replaced by the corresponding sub-matrices. If the seven products of n/2-by-n/2 matrices are computed recursively by the same method, we have Strassen's algorithm for matrix multiplication.
Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the number of multiplications made by Strassen's algorithm in multiplying two n-by-n matrices (where n is a power of 2), we get the following recurrence relation for it:

M(n) = 7M(n/2) for n > 1,  M(1) = 1.

Since n = 2^k,

M(2^k) = 7M(2^(k-1)) = 7[7M(2^(k-2))] = 7^2 M(2^(k-2)) = ... = 7^i M(2^(k-i)) = ... = 7^k M(2^(k-k)) = 7^k.

Since k = log2 n,

M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.

Since this saving in the number of multiplications was achieved at the expense of making extra additions, we must check the number of additions A(n) made by Strassen's algorithm. To multiply two matrices of order n > 1, the algorithm needs to multiply seven matrices of order n/2 and make 18 additions of matrices of size n/2; when n = 1, no additions are made since two numbers are simply multiplied. These observations yield the following recurrence relation:

A(n) = 7A(n/2) + 18(n/2)^2 for n > 1,  A(1) = 0.

Though one can obtain a closed-form solution to this recurrence, here we simply establish the solution's order of growth. According to the Master Theorem, A(n) ∈ Θ(n^(log2 7)). In other words, the number of additions has the same order of growth as the number of multiplications. This puts Strassen's algorithm in Θ(n^(log2 7)), which is a better efficiency class than Θ(n^3) of the brute-force method.
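A compact sketch of the recursion just described, for matrices whose order is a power of two (pure Python lists; the padding step for other orders is omitted):

def strassen(A, B):
    # Multiplies two n-by-n matrices, n a power of two, by Strassen's method.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def split(M):   # quadrants M00, M01, M10, M11
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A00, A01, A10, A11 = split(A)
    B00, B01, B10, B11 = split(B)
    # The seven recursive products of Strassen's formulas.
    M1 = strassen(add(A00, A11), add(B00, B11))
    M2 = strassen(add(A10, A11), B00)
    M3 = strassen(A00, sub(B01, B11))
    M4 = strassen(A11, sub(B10, B00))
    M5 = strassen(add(A00, A01), B11)
    M6 = strassen(sub(A10, A00), add(B00, B01))
    M7 = strassen(sub(A01, A11), add(B10, B11))
    C00 = add(sub(add(M1, M4), M5), M7)
    C01 = add(M3, M5)
    C10 = add(M2, M4)
    C11 = add(sub(add(M1, M3), M2), M6)
    # Reassemble the four quadrants into the full product matrix.
    return ([C00[i] + C01[i] for i in range(h)] +
            [C10[i] + C11[i] for i in range(h)])

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]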
Let T(n) be the number of comparisons performed by the maxmin procedure. When n = 1, clearly there are no comparisons. Thus we have T(1) = 0. Similarly, T(2) = 1. Otherwise, when n > 2, clearly

T(n) = 2T(n/2) + 2,

since maxmin performs two recursive calls on partitions of roughly half of the total size of the list and then makes two further comparisons to sort out the max/min for the entire list. (Of course, to be pedantic, there should be floors and ceilings in the recurrence, and something should be said about the fact that the following proof is only for n which are powers of two and how this implies the general result. This is omitted.)
We next show that T(n) = 3n/2 - 2 for all n which are powers of 2. The proof is by induction on n.
Base Case (n = 2): from the recursive definition we have that T(2) = 1. Similarly, we have that 3(2)/2 - 2 = 3 - 2 = 1, thus verifying the base case.
Inductive Step: Let n > 2 and n = 2^j for some integer j ≥ 2. Assume that T(k) = 3k/2 - 2 for all k = 2^i for some integer i ≥ 1 with k < n. We want to show that this assumption implies that T(n) = 3n/2 - 2 for all positive n which are powers of 2.
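A minimal sketch of the recursive maxmin procedure analysed above (the exact splitting convention is an assumption; with n a power of two the comparison count matches T(n) = 3n/2 - 2):

def maxmin(a, low, high):
    # Returns (minimum, maximum) of a[low..high] by divide and conquer.
    if low == high:                        # one element: no comparison
        return a[low], a[low]
    if high == low + 1:                    # two elements: one comparison
        return (a[low], a[high]) if a[low] < a[high] else (a[high], a[low])
    mid = (low + high) // 2
    min1, max1 = maxmin(a, low, mid)       # recurse on the left half
    min2, max2 = maxmin(a, mid + 1, high)  # recurse on the right half
    # Two further comparisons combine the two halves.
    return min(min1, min2), max(max1, max2)

data = [22, 13, -5, -8, 15, 60, 17, 31]
print(maxmin(data, 0, len(data) - 1))      # (-8, 60)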
Initial state and goal state of the 15-puzzle
Fig: 1
Figure 2 shows the procedure and the order in which we can arrange the tiles so as to reach the goal state.
Step by step movement of the tiles to arrange the goal state
Fig: 2
Number the frame positions according to the goal state, and let P(k) be the position index of tile k in the initial arrangement, the empty cell being treated as tile 16.
Now, let L(k) be the number of tiles t where
t < k                        ...(1)
and P(t) > P(k)              ...(2)
The state space is the set of all possible combinations of the tiles within the frame. So, all the possible movements of the tiles are in the state space. Naturally, the goal state is also present in the state space if the goal state can be reached from the initial state.
Now we can construct the tree of the state space starting from the initial state. Here each node of the tree represents one of the possible movements from the initial state. In the state space of figure 3, we consider the movement of different tiles with respect to the empty tile.
State space of the 15-puzzle problem starting from the initial state (the empty tile can move left, up, right or down)
Fig: 3
The above state space denotes all the possible arrangements of the tiles within the frame starting from the initial state. We can remove from the state space those states which cannot reach the goal state. So, from a parent node only those leaf nodes are to be considered which can step forward to the goal state, i.e. those configurations which satisfy that

Σ_{k=1}^{16} L(k) + e is even,

where e accounts for the position of the empty tile in the initial arrangement.
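A small sketch computing the sum Σ L(k) for a given initial arrangement (the arrangement is listed in frame order with 16 standing for the empty cell; the term e is applied separately, as described above):

def inversion_sum(arrangement):
    # Sum of L(k) over all tiles: counts pairs (t, k) with t < k whose
    # positions are out of order in the given arrangement.
    pos = {tile: idx for idx, tile in enumerate(arrangement)}
    return sum(1 for k in arrangement for t in arrangement
               if t < k and pos[t] > pos[k])

# The goal arrangement 1, 2, ..., 15, 16 has no out-of-order pairs.
print(inversion_sum(list(range(1, 17))))   # 0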
[c11 c12]   [a11 a12] [b11 b12]
[c21 c22] = [a21 a22] [b21 b22]

We first compute the following products:
d1 = (a11 + a22)(b11 + b22)
d2 = (a21 + a22) b11
d3 = a11 (b12 - b22)
d4 = a22 (b21 - b11)
d5 = (a11 + a12) b22
d6 = (a21 - a11)(b11 + b12)
d7 = (a12 - a22)(b21 + b22)

Next, we compute C from the equation

[c11 c12]   [d1 + d4 - d5 + d7    d3 + d5          ]
[c21 c22] = [d2 + d4              d1 + d3 - d2 + d6]

Since commutativity of scalar products is not used here, the above formula holds for matrices as well.
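A quick numeric check of these seven products on a concrete 2-by-2 example (the values are chosen arbitrarily for illustration):

a11, a12, a21, a22 = 1, 2, 3, 4
b11, b12, b21, b22 = 5, 6, 7, 8
d1 = (a11 + a22) * (b11 + b22)
d2 = (a21 + a22) * b11
d3 = a11 * (b12 - b22)
d4 = a22 * (b21 - b11)
d5 = (a11 + a12) * b22
d6 = (a21 - a11) * (b11 + b12)
d7 = (a12 - a22) * (b21 + b22)
C = [[d1 + d4 - d5 + d7, d3 + d5],
     [d2 + d4,           d1 + d3 - d2 + d6]]
print(C)   # [[19, 22], [43, 50]], the same as the classical row-by-column product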
Time complexity: The number of additions used is 18 and the number of multiplications is 7. Writing m for the cost of multiplying two numbers and a for the cost of adding or subtracting two numbers, this gives rise to the following recurrence for the running time:

T(n) = m                           if n = 1
T(n) = 7T(n/2) + 18(n/2)^2 a       if n ≥ 2,

that is,

T(n) = m                           if n = 1
T(n) = 7T(n/2) + (9a/2) n^2        if n ≥ 2.

Assuming that n is a power of 2, say n = 2^k, we can write

T(n) = m 7^k + (9a/2) n^2 [1 + (7/4) + (7/4)^2 + ... + (7/4)^(k-1)]
     = m n^(log2 7) + 6a n^(log2 7) - 6a n^2,

which is O(n^(log2 7)) ≈ O(n^2.81).
c) Amortized Analysis:
Amortized analysis is a tool for analyzing algorithms that perform a sequence of similar
operations. It can be used to show that the average cost of an operation is small, if one
averages over a sequence of operations, even though a single operation within the
sequence might be expensive. In a sequence of operations the worst case does not occur
often in each operation - some operations may be cheap, some may be expensive.
Therefore, a traditional worst-case per-operation analysis can give an overly pessimistic bound. The amortized approach is to assign an "artificial cost" to each operation in the sequence, called the amortized cost of the operation. It requires that the total real cost of the sequence should be bounded by the total of the amortized costs of all the operations.
Three methods are used in amortized analysis
1. Aggregate Method (or brute force)
2. Accounting Method (or the banker's method)
3. Potential Method (or the physicist's method)
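As a small illustration of the aggregate method, consider appends to a dynamic array that doubles its capacity when full; this is a standard textbook example given here only as a hedged sketch, and the class below is illustrative rather than taken from the text:

class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None]
        self.copies = 0          # total element copies: the "expensive" work

    def append(self, value):
        if self.size == self.capacity:
            # Expensive operation: double the capacity and copy every element.
            self.capacity *= 2
            self.data = self.data[:self.size] + [None] * (self.capacity - self.size)
            self.copies += self.size
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
for i in range(1000):
    arr.append(i)
# The total number of copies over n appends is less than 2n, so the amortized
# cost per append is O(1) even though a single append may copy the whole array.
print(arr.copies, "<", 2 * arr.size)   # 1023 < 2000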