

APPROXIMATION ALGORITHMS
Chapter at a Glance
Approximation Algorithm:
Suppose we need to solve an NP-hard problem X. Since we cannot expect a polynomial-time algorithm that solves it exactly, we must sacrifice one of three desired features:
Find the exact optimal solution.
Solve the problem in polynomial time to ensure time feasibility.
Provide solutions to arbitrary instances of the problem.
By applying an approximation algorithm, the following are achieved:
Guaranteed polynomial-time execution.
Guaranteed to find a "high quality" near-optimal solution within feasible time.
But there is an additional responsibility: we need to prove how close the value of the
solution is to the optimum, without even knowing what the optimum value is.
Examples of approximation algorithms for NP-Complete problem
Now we discuss the following examples of approximation algorithms for NP-Complete
problems:
Approximation ratio
Polynomial-Time Approximation Schemes
2-Approximation for Vertex Cover
2-Approximation for TSP special case
log n-Approximation for Set Cover

Long Answer Type Questions


1. Write short notes on the following:
a) Approximation algorithms [WBUT 2008, 2012, 2014]
b) Approximation schemes [WBUT 2009, 2016]
c) Vertex Cover Algorithm [WBUT 2014, 2016]
d) Randomized algorithms [MODEL QUESTION]
e) NP Complete problem [MODEL QUESTION]
Answer:
a) Approximation algorithms:
If a problem is NP-complete, we are unlikely to find a polynomial-time algorithm for
solving it exactly. There are at least three approaches to getting around NP-completeness. First, if
the actual inputs are small, an algorithm with exponential running time may be perfectly
satisfactory. Second, we may be able to isolate important special cases that are solvable
in polynomial time. Third, it may still be possible to find near-optimal solutions in
polynomial time (either in the worst case or on average). In practice, near-optimality is
often good enough. An algorithm that returns near-optimal solutions is called an
approximation algorithm.

Suppose that we are working on an optimization problem in which each potential solution
has a positive cost, and we wish to find a near-optimal solution. Depending on the
problem, an optimal solution may be defined as one with maximum possible cost or one
with minimum possible cost; that is, the problem may be either a maximization or a
minimization problem.
We say that an algorithm for a problem has an approximation ratio of p(n) if, for any
input of size n, the cost C of the solution produced by the algorithm is within a factor of
p(n) of the cost C* of an optimal solution:
max(C/C*, C*/C) <= p(n)
We also call an algorithm that achieves an approximation ratio of p(n) a p(n)-approximation
algorithm. The definitions of approximation ratio and of p(n)-approximation algorithm
apply to both minimization and maximization problems. For a
maximization problem, 0 < C <= C*, and the ratio C*/C gives the factor by which the cost
of an optimal solution is larger than the cost of the approximate solution. Similarly, for a
minimization problem, 0 < C* <= C, and the ratio C/C* gives the factor by which the cost
of the approximate solution is larger than the cost of an optimal solution. Since all
solutions are assumed to have positive cost, these ratios are always well defined. The
approximation ratio of an approximation algorithm is never less than 1, since C/C* < 1
implies C*/C > 1. Therefore, a 1-approximation algorithm produces an optimal
solution, and an approximation algorithm with a large approximation ratio may return a
solution that is much worse than optimal.
b) Approximation schemes:
By applying an approximation algorithm, the following are achieved:
Guaranteed polynomial-time execution.
Guaranteed to find a "high quality" near-optimal solution within feasible time.
But there is an additional responsibility: we need to prove how close the value of the
solution is to the optimum, without even knowing what the optimum value is.

Different types of approximation algorithms and schemes

There are different types of approximation algorithms and schemes, described below:
p-approximation algorithm:
An algorithm A for problem P that runs in polynomial time.
For every problem instance, A outputs a feasible solution within ratio p of the true
optimum for that instance.
Polynomial-time approximation scheme (PTAS):
A family of approximation algorithms {A_e : e > 0} for a problem P.
A_e is a (1 + e)-approximation algorithm for P.
A_e runs in time polynomial in the input size for a fixed e.
Fully polynomial-time approximation scheme (FPTAS):
A PTAS where A_e runs in time polynomial both in the input size and in 1/e.

c) Vertex Cover Algorithm:


Given a graph G = (V, E), find a minimum subset C of V such that C "covers" all edges in E,
i.e., every edge (u, v) in E is incident to at least one vertex in C.

Fig.: An instance of the Vertex Cover problem. An optimal vertex cover is {b, c, e, i, g}.


Algorithm 1: Approx-Vertex-Cover(G)
1  C <- empty set
2  while E is not empty
3      pick any edge (u, v) in E
4      C <- C union {u, v}
5      delete all edges incident to either u or v
6  return C

As it turns out, this is the best approximation algorithm known for vertex cover. It is an
open problem to either do better or prove that this is a lower bound.
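
A minimal Python sketch of this 2-approximation, assuming the graph is given as a list of edges (the function name approx_vertex_cover and the edge-list representation are illustrative, not from the text):

def approx_vertex_cover(edges):
    """2-approximation: repeatedly take both endpoints of an uncovered edge."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]              # pick any remaining edge
        cover.update((u, v))             # add both endpoints to the cover
        # delete all edges incident to either u or v
        remaining = [(a, b) for (a, b) in remaining
                     if a not in (u, v) and b not in (u, v)]
    return cover

# Example: path a-b-c-d; the optimum is {b, c}, and the approximation
# returns at most twice as many vertices.
print(approx_vertex_cover([("a", "b"), ("b", "c"), ("c", "d")]))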
d) Randomized algorithms:
A randomized algorithm is an algorithm that employs a degree of randomness as part of
its logic. The algorithm typically uses uniformly random bits as an auxiliary input to
guide its behavior, in the hope of achieving good performance in the "average case" over
all possible choices of random bits. An algorithm that uses the random numbers to decide
what to do next anywhere in its logic is called Randomized Algorithm. For example, in
Randomized Quick Sort, we use random number to pick the next pivot (or we randomly
shuffle the array).
Analysis of Randomized Algorithms:
Some randomized algorithms have deterministic time complexity. For example, this
implementation of Karger's algorithm has time complexity O(E). Such algorithms are
called Monte Carlo Algorithms and are easier to analyse for the worst case. On the other
hand, the time complexity of other randomized algorithms is
dependent on the value of a random variable. Such randomized algorithms are called Las
Vegas Algorithms. These algorithms are typically analysed for expected worst case. To
compute the expected time taken in the worst case, all possible values of the used random

variable need to be considered in the worst case, and the time taken by every possible value
needs to be evaluated. The average of all the evaluated times is the expected worst-case time
complexity.
Examples:
For example, consider below a randomized version of QuickSort.

// Sorts an array arr[low..high]
randQuickSort(arr[], low, high)
1. If low >= high, then EXIT.
2. While pivot 'x' is not a central pivot:
   (i)   Choose uniformly at random a number from [low..high].
         Let the randomly picked number be x.
   (ii)  Count elements in arr[low..high] that are smaller
         than arr[x]. Let this count be sc.
   (iii) Count elements in arr[low..high] that are greater
         than arr[x]. Let this count be gc.
   (iv)  Let n = (high - low + 1). If sc >= n/4 and
         gc >= n/4, then x is a central pivot.
3. Partition arr[low..high] around the pivot x.
4. // Recur for the smaller elements
   randQuickSort(arr, low, low + sc - 1)
5. // Recur for the greater elements
   randQuickSort(arr, high - gc + 1, high)

The important thing in our analysis is that the time taken by step 2 is O(n).
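
A simplified runnable Python sketch in the spirit of the pseudo-code above; it picks the pivot uniformly at random but omits the central-pivot retry loop, which only affects constant factors (the function name is illustrative):

import random

def rand_quick_sort(arr, low=0, high=None):
    """Randomized quicksort: pick a pivot uniformly at random and partition around it."""
    if high is None:
        high = len(arr) - 1
    if low >= high:
        return
    # Choose a random pivot; retrying until it is "central" is optional here.
    p = random.randint(low, high)
    arr[p], arr[high] = arr[high], arr[p]        # move the pivot to the end
    pivot, i = arr[high], low
    for j in range(low, high):                   # partition around the pivot
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[high] = arr[high], arr[i]
    rand_quick_sort(arr, low, i - 1)             # recur for the smaller elements
    rand_quick_sort(arr, i + 1, high)            # recur for the greater elements

data = [9, 3, 7, 1, 8, 2]
rand_quick_sort(data)
print(data)    # [1, 2, 3, 7, 8, 9]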
e) NP Complete problem:
The NP-Complete problems are an interesting class of problems whose status is
unknown. The properties of NP-Complete problems are:
No polynomial-time algorithm has yet been discovered for solving any NP-Complete
problem.
No polynomial lower bound on the time overhead has been proved for any
NP-Complete problem.
Figure 1 shows the relative complexity of algorithms: polynomial-time (easy) problems, NP-Complete problems, and exponential-time (hard) problems.

Fig. 1: Complexity of algorithms

NP class
The notation NP stands for "nondeterministic polynomial time", since originally NP was
defined in terms of nondeterministic machines (that is, machines that have more than one
possible move from a given configuration). However, it is now customary to give an
equivalent definition using the notion of a checking relation, which is simply a binary
relation R ⊆ Σ* × Σ1* for some finite alphabets Σ and Σ1. We associate with each such
relation R a language L_R over Σ ∪ Σ1 ∪ {#} defined by
L_R = { w#y | R(w, y) }
where the symbol # is not in Σ. We say that R is polynomial-time if and only if L_R ∈ P.
Now we define the class NP of languages by the condition that a language L over Σ is in
NP if and only if there is k ∈ N (N is the set of natural numbers) and a polynomial-time
checking relation R such that for all w ∈ Σ*,
w ∈ L  if and only if  there exists y with |y| <= |w|^k and R(w, y),
where |w| and |y| denote the lengths of w and y, respectively.
For example, in the Hamiltonian-cycle problem, given a directed graph G = (V, E), a
certificate would be a sequence (v_1, v_2, ..., v_|V|) of |V| vertices. It is easy to check in
polynomial time that (v_i, v_{i+1}) ∈ E for i = 1, 2, 3, ..., |V| - 1 and that (v_|V|, v_1) ∈ E as well.
Is P = NP?
It is easy to see that the answer is independent of the size of the alphabet Σ (we assume
|Σ| >= 2), since strings over an alphabet of any fixed size can be efficiently coded by
strings over a binary alphabet. (For |Σ| = 1 the problem is still open, although it is
possible that P = NP in this case but not in the general case.)
It is trivial to show that P ⊆ NP, since for each language L over Σ, if L ∈ P then we can
define the polynomial-time checking relation R ⊆ Σ* × Σ* by
R(w, y)  if and only if  w ∈ L, for all w, y ∈ Σ*.
Here are two simple examples, using decimal notation to code natural numbers:
the set of perfect squares is in P, and the set of composite numbers is in NP (and not
known to be in P). For the latter, the associated polynomial-time checking relation R is
given by
R(ā, b̄)  if and only if  1 < b < a and b divides a.
In general, the decimal notation for a natural number c is denoted by c̄.


MISCELLANEOUS
Multiple Choice Type Questions
1. Which of the following is solved by using the Branch and Bound method? [WBUT 2015]
a) Knapsack Problem                b) Hamiltonian Problem
c) Travelling Salesman Problem     d) 15-Puzzle Problem
Answer: (d)

2. By applying Strassen's algorithm we can multiply two n x n matrices in
[WBUT 2017]
a) O(n^3) time   b) O(n) time   c) O(n^2) time   d) O(n^2.81) time
Answer: (d)

Short Answer Type Questions

1. Prove that if f(n) = a_m n^m + a_{m-1} n^{m-1} + ... + a_1 n + a_0, then f(n) = O(n^m).
[WBUT 2007]
Answer:
Let p(n) = a_m n^m + a_{m-1} n^{m-1} + ... + a_2 n^2 + a_1 n + a_0 = Σ_{i=0}^{m} a_i n^i,
taking the coefficients a_i to be nonnegative (otherwise replace each a_i by |a_i|).
For n >= 1 we have n^i <= n^m for every i = 0, 1, ..., m.
So, p(n) <= Σ_{i=0}^{m} a_i n^m = n^m (a_0 + a_1 + ... + a_m) = n^m . A,
where A = (a_0 + a_1 + ... + a_m).
Now, p(n) <= A . n^m, which shows that p(n) = O(n^m).
2. Write down the DFT algorithm and explain its method of execution. [WBUT 2008]
Answer:
We wish to evaluate a polynomial A(x) = Σ_{j=0}^{n-1} a_j x^j of degree-bound n at
ω_n^0, ω_n^1, ..., ω_n^{n-1} (that is, at the n complex n-th roots of unity). Without loss of
generality, we assume that n is a power of 2, since a given degree-bound can always be
raised; we can always add new high-order zero coefficients as necessary.
We assume that A is given in coefficient form: a = (a_0, a_1, ..., a_{n-1}). Let us define the
results y_k, for k = 0, 1, ..., n-1, by
y_k = A(ω_n^k) = Σ_{j=0}^{n-1} a_j ω_n^{kj}.
The vector y = (y_0, y_1, ..., y_{n-1}) is the Discrete Fourier Transform (DFT) of the coefficient
vector a = (a_0, a_1, ..., a_{n-1}).
We also write y = DFT_n(a).
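
A direct Python sketch of this definition, evaluating the polynomial at the n-th roots of unity in O(n^2) time (the function name dft is illustrative):

import cmath

def dft(a):
    """Direct O(n^2) DFT: y_k = sum_j a_j * w^(k*j), with w a primitive n-th root of unity."""
    n = len(a)
    w = cmath.exp(2j * cmath.pi / n)     # primitive n-th root of unity
    return [sum(a[j] * w ** (k * j) for j in range(n)) for k in range(n)]

print(dft([1, 2, 3, 4]))   # values of 1 + 2x + 3x^2 + 4x^3 at the 4th roots of unity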
3. Write the FFT algorithm and find the computational complexity of this algorithm.
[WBUT 2010]
Answer: Refer to Question No. 4(b) of Long Answer Type Questions.
4. What is meant by LUP decomposition? [WBUT 2013]
Answer:
The idea behind LUP decomposition is to find three n x n matrices L, U, and P such that
PA = LU
where
L is a unit lower-triangular matrix,
U is an upper-triangular matrix, and
P is a permutation matrix.
We call matrices L, U, and P satisfying these conditions an LUP decomposition of the matrix A.
We shall show that every nonsingular matrix A possesses such a decomposition.
The advantage of computing an LUP decomposition for the matrix A is that linear
systems can be solved more readily when they are triangular, as is the case for both
matrices L and U. Having found an LUP decomposition for A, we can solve the equation
Ax = b by solving only triangular linear systems, as follows.
Multiplying both sides of Ax = b by P yields the equivalent equation PAx = Pb.
Using our decomposition, we obtain
LUx = Pb
We can now solve this equation by solving two triangular linear systems. Let us define
y = Ux, where x is the desired solution vector. First, we solve the lower-triangular system
Ly = Pb for the unknown vector y by a method called "forward substitution." Having
solved for y, we then solve the upper-triangular system
Ux = y
for the unknown x by a method called "back substitution." The vector x is our solution to
Ax = b, since the permutation matrix P is invertible:
Ax = P^{-1} LUx
   = P^{-1} Ly
   = P^{-1} Pb
   = b.
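
A minimal Python sketch of the forward and back substitution steps, assuming the LUP decomposition itself is already given (here the permutation P is represented by an array pi, and the function name lup_solve is illustrative):

def lup_solve(L, U, pi, b):
    """Solve Ax = b given PA = LU, where pi[i] is the row of b that P maps to row i."""
    n = len(L)
    # Forward substitution: solve Ly = Pb for y (L is unit lower-triangular).
    y = [0.0] * n
    for i in range(n):
        y[i] = b[pi[i]] - sum(L[i][j] * y[j] for j in range(i))
    # Back substitution: solve Ux = y for x.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example: A = [[2, 1], [4, 3]] with P = I gives L = [[1, 0], [2, 1]], U = [[2, 1], [0, 1]].
print(lup_solve([[1, 0], [2, 1]], [[2, 1], [0, 1]], [0, 1], [3, 7]))   # [1.0, 1.0]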

5. Discuss Strassen's matrix multiplication procedure and show that the time
complexity is reduced from the conventional multiplication. [WBUT 2017]
Answer:
Refer to Question No. 1 (1st and 2nd parts) of Long Answer Type Questions.

6. Write a recursive algorithm for finding the maximum and minimum from a list of
elements. Also find the complexity of your algorithm. [WBUT 2018]
Answer:
Refer to Question No. 2 of Long Answer Type Questions.
Long Answer Type Questions

1. Discuss the procedure for Strassen's matrix multiplication to evaluate the
product of two n x n matrices. Find the resulting recurrence relation for the same and
analyze its time-complexity. Is this method an improvement over the conventional
matrix multiplication method? If so, why? [WBUT 2007, 2011, 2012, 2013]
Answer:
We have seen that the divide-and-conquer approach can reduce the number of one-digit
multiplications in multiplying two integers; we should not be surprised that a similar feat
can be accomplished for multiplying matrices. Such an algorithm was published by V.
Strassen in 1969. The principal insight of the algorithm lies in the discovery that we can
find the product C of two 2-by-2 matrices A and B with just seven multiplications as
opposed to the eight required by the brute-force algorithm. This is accomplished by using
the following formulas:
c00 = m1 + m4 - m5 + m7        c01 = m3 + m5
c10 = m2 + m4                  c11 = m1 + m3 - m2 + m6
where
m1 = (a00 + a11)(b00 + b11)
m2 = (a10 + a11) b00
m3 = a00 (b01 - b11)
m4 = a11 (b10 - b00)
m5 = (a00 + a01) b11
m6 = (a10 - a00)(b00 + b01)
m7 = (a01 - a11)(b10 + b11)
Thus, to multiply two 2-by-2 matrices, Strassen's algorithm makes seven multiplications
and 18 additions/subtractions, whereas the brute-force algorithm requires eight
multiplications and four additions. These numbers alone should not lead us to multiply 2-by-2
matrices by Strassen's algorithm; its importance stems from its asymptotic superiority
as the matrix order n goes to infinity.
Let A and B be two n-by-n matrices where n is a power of two. (If n is not a power of
two, the matrices can be padded with rows and columns of zeros.) We can divide A, B, and
their product C into four n/2-by-n/2 sub-matrices each as follows:
[C00  C01]   [A00  A01] [B00  B01]
[C10  C11] = [A10  A11] [B10  B11]
It is not difficult to verify that one can treat these sub-matrices as numbers to get the
correct product. For example, C00 can be computed either as A00*B00 + A01*B10 or as

M1 + M4 - M5 + M7, where M1, M4, M5, and M7 are found by Strassen's formulas, with the
numbers replaced by the corresponding sub-matrices. If the seven products of n/2-by-n/2
matrices are computed recursively by the same method, we have Strassen's algorithm
for matrix multiplication.
Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the number of
multiplications made by Strassen's algorithm in multiplying two n-by-n matrices (where n
is a power of 2), we get the following recurrence relation for it:
M(n) = 7M(n/2) for n > 1,   M(1) = 1.
Since n = 2^k,
M(2^k) = 7M(2^(k-1)) = 7[7M(2^(k-2))] = 7^2 M(2^(k-2)) = ... = 7^i M(2^(k-i)) = ... = 7^k M(2^(k-k)) = 7^k.
Since k = log2 n,
M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807.
Since this saving in the number of multiplications was achieved at the expense of making
extra additions, we must check the number of additions A(n) made by Strassen's
algorithm. To multiply two matrices of order n > 1, the algorithm needs to multiply seven
matrices of order n/2 and make 18 additions of matrices of size n/2; when n = 1, no
additions are made since two numbers are simply multiplied. These observations yield
the following recurrence relation:
A(n) = 7A(n/2) + 18(n/2)^2 for n > 1,   A(1) = 0.
Though one can obtain a closed-form solution to this recurrence, here we simply establish
the solution's order of growth. According to the Master Theorem, A(n) ∈ Θ(n^(log2 7)). In
other words, the number of additions has the same order of growth as the number of
multiplications. This puts Strassen's algorithm in Θ(n^(log2 7)), which is a better efficiency
class than Θ(n^3) of the brute-force method.
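
A compact Python sketch of this recursion, assuming n is a power of two (the helper names quad, add and sub are illustrative; the base case falls back to scalar multiplication):

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):                        # extract an h x h sub-matrix
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A00, A01, A10, A11 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B00, B01, B10, B11 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    m1 = strassen(add(A00, A11), add(B00, B11))     # the seven recursive products
    m2 = strassen(add(A10, A11), B00)
    m3 = strassen(A00, sub(B01, B11))
    m4 = strassen(A11, sub(B10, B00))
    m5 = strassen(add(A00, A01), B11)
    m6 = strassen(sub(A10, A00), add(B00, B01))
    m7 = strassen(sub(A01, A11), add(B10, B11))
    C00 = add(sub(add(m1, m4), m5), m7)             # combine into the four quadrants of C
    C01 = add(m3, m5)
    C10 = add(m2, m4)
    C11 = add(sub(add(m1, m3), m2), m6)
    top = [r0 + r1 for r0, r1 in zip(C00, C01)]
    bottom = [r0 + r1 for r0, r1 in zip(C10, C11)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]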

2. The solution of the recursive MAXMIN problem is based on some assumptions.
Briefly state the assumptions and their effect on the algorithm in comparison to
reality. [WBUT 2011]
Answer:
The divide and conquer algorithm we develop for this problem is motivated by the
following observation. Suppose we knew the maximum and minimum element in both of
the roughly n/2 sized partitions of an n-element (n > 2) list. Then in order to find the
maximum and minimum element of the entire list we simply need to see which of the two
maximum elements is the larger, and which of the two minimum elements is the smaller. We
also assume that in a 1-element list the sole element is both the maximum and the minimum
element. With this in mind we present the following pseudo-code for the max/min
problem.
procedure maxmin (A[1...n] of numbers) -> (min, max)
begin
    if (n == 1)
        return (A[1], A[1])
    else if (n == 2)
        if (A[1] < A[2])
            return (A[1], A[2])
        else
            return (A[2], A[1])
    else
        (min_left, max_left) = maxmin(A[1...(n/2)])
        (min_right, max_right) = maxmin(A[(n/2 + 1)...n])
        if (max_left < max_right)
            max = max_right
        else
            max = max_left
        if (min_left < min_right)
            min = min_left
        else
            min = min_right
        return (min, max)
end
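
A runnable Python version of the pseudo-code above, assuming 0-based list indexing (the built-in min/max on two values stand in for the explicit comparisons):

def maxmin(a, low, high):
    """Return (minimum, maximum) of a[low..high] using divide and conquer."""
    if low == high:                              # one element: it is both min and max
        return a[low], a[low]
    if high == low + 1:                          # two elements: one comparison
        return (a[low], a[high]) if a[low] < a[high] else (a[high], a[low])
    mid = (low + high) // 2
    min_l, max_l = maxmin(a, low, mid)           # solve the left half
    min_r, max_r = maxmin(a, mid + 1, high)      # solve the right half
    return min(min_l, min_r), max(max_l, max_r)  # two more comparisons

print(maxmin([7, 2, 9, 4, 1, 8], 0, 5))          # (1, 9)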

Let T(n) be the number of comparisons performed by the maxmin procedure. When
n = 1, clearly there are no comparisons, so T(1) = 0. Similarly, T(2) = 1.
Otherwise, when n > 2, clearly
T(n) = 2T(n/2) + 2,
since maxmin performs two recursive calls on partitions of roughly half of the total size
of the list and then makes two further comparisons to sort out the max/min for the entire
list. (Of course, to be pedantic there should be floors and ceilings in the recurrence,
and something should be said about the fact that the following proof is only for
n which are powers of two and how this implies the general result. This is omitted.)
We next show that T(n) = 3n/2 - 2 for all n which are powers of 2. The proof is by
induction on n.

Base Case:
(n = 2): from the recursive definition we have that T(2) = 1. Similarly,
3(2)/2 - 2 = 3 - 2 = 1, thus verifying the base case.

Inductive Step:
Let n > 2 and n = 2^j for some integer j >= 2. Assume that T(k) = 3k/2 - 2 for all k = 2^i
for some integer i >= 1 and k < n. We want to show that this assumption implies that
T(n) = 3n/2 - 2 for all positive n which are powers of 2.

We start with the given recurrence:
T(n) = 2T(n/2) + 2 = 2[3(n/2)/2 - 2] + 2
by the inductive hypothesis with k = n/2. This gives
T(n) = 3n/2 - 4 + 2 = 3n/2 - 2,
showing the desired result. By the principle of mathematical induction we are done.
3. Explain how you would attempt to solve the 15-puzzle problem using the branch and bound
strategy. Draw a portion of the state space generated by it. [WBUT 2012]
Answer:
The 15-puzzle has 15 tiles or squares, which are numbered from 1 to 15 and which are placed in
a 4 x 4 square frame, leaving one position out of the 16 empty. The
goal is to reposition the tiles from a given arbitrary starting arrangement by sliding them
one at a time into the configuration shown below in figure 1. The only legal moves are
ones in which a tile adjacent to the empty spot (ES) is moved to ES. Each move creates
a new arrangement of the tiles. These arrangements are called the states of the puzzle. For
some initial arrangements, this goal arrangement is reachable (admissible), but for others
it may not be (inadmissible).
Fig. 1: An initial state (left) and the goal state (right) of the 15-puzzle
Figure 2 shows the procedure and the order in which we can move the tiles so as to
reach the goal state.

Fig. 2: Step-by-step movement of squares to arrange the goal state

Condition to reach the goal state from the initial state

We first assign an index (an integer number) to each position of the frame. The numbering
starts from the top position of the frame and moves row-wise from left to right. So, the goal
state of figure 1 above shows the numbering of each tile in the frame, and the empty
position is numbered 16.
Let k be the position of tile k according to the frame position of the goal state, and let P(k) be the
position index of tile k in the initial arrangement, without considering the empty tile, i.e.
tile 16.
Now, let L(k) be the number of tiles t where
t < k                                   ...(1)
and P(k) > P(t)                         ...(2)
The state space is the set of all possible combinations of the tiles within the frame. So, all the
possible movements of the tiles are in the state space. Naturally, the goal state is also present
in the state space if the goal state can be reached from the initial state.
Now we can construct the tree of the state space starting from the initial state. Here every node
of the tree represents one of the possible arrangements reachable from the initial state. In the
state space of figure 3, we consider the movement of different tiles with respect to the empty
tile (left, up, right and down moves of the empty position).

Fig. 3: State space of the 15-puzzle problem starting from the initial state
The above state space denotes all possible arrangements of the tiles within the frame
starting from the initial state. We can remove from the state space those states which cannot
reach the goal state. So, only those leaf nodes are to be considered from a parent
node which can step forward to the goal state, i.e. those configurations for which
Σ L(k) + e is even, where the sum is taken over all tiles k and e depends on the position of the
empty tile in the arrangement.
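
The parity condition above is commonly stated in an equivalent form in terms of inversions and the row of the empty square. A small Python sketch of that standard test, assuming the state is a 16-element row-major list with 0 for the empty square rather than 16 as in the text (the function name is illustrative):

def is_solvable(tiles):
    """Parity test for the 4x4 15-puzzle.

    `tiles` is a 16-element list in row-major order, with 0 for the empty square.
    Solvable iff (#inversions among the numbered tiles) + (row of the blank,
    counted from the bottom starting at 1) is odd.
    """
    nums = [t for t in tiles if t != 0]
    inversions = sum(1 for i in range(len(nums))
                       for j in range(i + 1, len(nums)) if nums[i] > nums[j])
    blank_row_from_bottom = 4 - tiles.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

goal = list(range(1, 16)) + [0]
print(is_solvable(goal))                                # True
print(is_solvable([1, 2, 3, 4, 5, 6, 7, 8,
                   9, 10, 11, 12, 13, 15, 14, 0]))      # False: 14 and 15 swapped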

4. Write short notes on the following:
a) FFT [WBUT 2004, 2009]
b) Strassen's matrix multiplication [WBUT 2008, 2012, 2014]
c) Amortized Analysis [WBUT 2013]
d) 15-puzzle problem [WBUT 2013]
Answer:
a) FFT:
There are several ways to calculate the Discrete Fourier Transform (DFT), such as
solving simultaneous linear equations or the correlation method. The Fast Fourier
Transform (FFT) is another method for calculating the DFT. While it produces the same
result as the other approaches, it is incredibly more efficient, often greatly reducing the
computation time.
There exists an efficient algorithm to compute the Discrete Fourier Transform and its
inverse; it is called the Fast Fourier Transform (FFT). The Fast Fourier Transform has the
following properties:
The FFT is based on the divide and conquer principle.
It has time complexity O(n log n), instead of the O(n^2) of the direct DFT.
It is only applicable for composite sizes, i.e. n = n1 . n2 (and is often implemented for
powers of 2).

If n is even, we can divide a polynomial
p(x) = a_0 + a_1 x + a_2 x^2 + ... + a_{n-1} x^{n-1}
into two polynomials
p_even(x) = a_0 + a_2 x + a_4 x^2 + ... + a_{n-2} x^{n/2-1}
p_odd(x)  = a_1 + a_3 x + a_5 x^2 + ... + a_{n-1} x^{n/2-1}
and we can write
p(x) = p_even(x^2) + x . p_odd(x^2).
Fast Fourier Transform (FFT) algorithm
Input: An n-length coefficient vector a = [a_0, a_1, ..., a_{n-1}] and a primitive n-th root of
unity ω, where n is a power of 2.
Output: A vector y of values of the polynomial for a at the n-th roots of unity.

FFT(a, ω)
Step 1    if n = 1 then
Step 1.1      return y = a
Step 2    x <- 1
          // x will store powers of ω, so initially x = 1
          // Divide step, which separates even and odd indices
Step 3    a_even <- [a_0, a_2, a_4, ..., a_{n-2}]
Step 4    a_odd  <- [a_1, a_3, a_5, ..., a_{n-1}]
          // Recursive calls with ω^2 as the (n/2)-th root of unity, by the
          // reduction property
Step 5    y_even <- FFT(a_even, ω^2)
Step 6    y_odd  <- FFT(a_odd, ω^2)
          // Combine step using x = ω^i
Step 7    for i <- 0 to (n/2) - 1 do
Step 7.1      y_i       <- y_even_i + x . y_odd_i
Step 7.2      y_{i+n/2} <- y_even_i - x . y_odd_i     // use the reflexive property
Step 7.3      x <- x . ω
Step 8    return y
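
A short recursive Python sketch mirroring the divide and combine steps above, assuming the length of the coefficient vector is a power of 2:

import cmath

def fft(a):
    """Recursive FFT of a coefficient vector a, len(a) a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]                                  # Step 1: base case
    w_n = cmath.exp(2j * cmath.pi / n)               # primitive n-th root of unity
    y_even = fft(a[0::2])                            # Steps 3 and 5: even-indexed coefficients
    y_odd = fft(a[1::2])                             # Steps 4 and 6: odd-indexed coefficients
    y, x = [0] * n, 1                                # x stores successive powers of w_n
    for i in range(n // 2):                          # Step 7: combine
        y[i] = y_even[i] + x * y_odd[i]
        y[i + n // 2] = y_even[i] - x * y_odd[i]
        x *= w_n
    return y

print(fft([1, 2, 3, 4]))   # same values as the direct DFT at the 4th roots of unity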

b) Strassen's matrix multiplication:


Strassen's matrix multiplication algorithm consists in reducing the number of
multiplications at the expense of increasing the number of additions and subtractions. In
short, this algorithm uses 7 multiplications and 18 additions of n/2 x n/2 matrices.
Let
A = [a11  a12]      and      B = [b11  b12]
    [a21  a22]                   [b21  b22]
be two 2 x 2 matrices. To compute the matrix product
C = [c11  c12]
    [c21  c22]
we first compute the following products:
d1 = (a11 + a22)(b11 + b22)
d2 = (a21 + a22) b11
d3 = a11 (b12 - b22)
d4 = a22 (b21 - b11)
d5 = (a11 + a12) b22
d6 = (a21 - a11)(b11 + b12)
d7 = (a12 - a22)(b21 + b22)
Next, we compute C from the equation
C = [d1 + d4 - d5 + d7      d3 + d5          ]
    [d2 + d4                d1 + d3 - d2 + d6]
Since commutativity of scalar products is not used here, the above formulas hold for
matrices as well.
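
As a quick sanity check of these formulas, the Python sketch below evaluates the seven products d1..d7 for a pair of 2 x 2 matrices and compares the result against the direct definition of the product (illustrative only):

def strassen_2x2(a, b):
    """Seven-product formulas for a single 2x2 multiplication (1-based names kept)."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    d1 = (a11 + a22) * (b11 + b22)
    d2 = (a21 + a22) * b11
    d3 = a11 * (b12 - b22)
    d4 = a22 * (b21 - b11)
    d5 = (a11 + a12) * b22
    d6 = (a21 - a11) * (b11 + b12)
    d7 = (a12 - a22) * (b21 + b22)
    return [[d1 + d4 - d5 + d7, d3 + d5],
            [d2 + d4, d1 + d3 - d2 + d6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
direct = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(strassen_2x2(A, B) == direct)   # True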

Time complexity: The number of additions used is 18 and the number of multiplications
is 7. This gives rise to the following recurrence for the running time, where m is the cost of
one scalar multiplication and a the cost of one scalar addition:
T(n) = m                            if n = 1
T(n) = 7T(n/2) + 18(n/2)^2 . a      if n >= 2,
that is,
T(n) = m                            if n = 1
T(n) = 7T(n/2) + (9a/2) n^2         if n >= 2.
Assuming that n is a power of 2 and unrolling the recurrence, we can write
T(n) = m n^(log2 7) + 6a n^(log2 7) - 6a n^2,
i.e., the time complexity is Θ(n^(log2 7)) = O(n^2.81).

c) Amortized Analysis:
Amortized analysis is a tool for analyzing algorithms that perform a sequence of similar
operations. It can be used to show that the average cost of an operation is small, if one
averages over a sequence of operations, even though a single operation within the
sequence might be expensive. In a sequence of operations the worst case does not occur
often in each operation - some operations may be cheap, some may be expensive.
Therefore, a traditional worst-case per-operation analysis can give an overly pessimistic
bound. The amortized approach is going to assign an "artificial cost" to each operation in
the sequence, called the amortized cost of an operation. It requires that the total real cost
of the sequence should be bounded by the total of the amortized costs of all the
operations.
Three methods are used in amortized analysis
1. Aggregate Method (or brute force)
2. Accounting Method (or the banker's method)
3. Potential Method (or the physicist's method)
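
As a small illustration of the aggregate method, the sketch below (illustrative, not from the text) counts the element copies performed when n items are appended to an array that doubles its capacity whenever it is full; the total stays below 3n, so the amortized cost per append is O(1) even though individual appends can cost O(n):

def doubling_append_cost(n):
    """Total element copies when appending n items to an array that doubles when full."""
    capacity, size, total_copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:          # expensive append: copy everything into a bigger array
            total_copies += size
            capacity *= 2
        size += 1                     # the append itself writes one element
        total_copies += 1
    return total_copies

n = 1000
print(doubling_append_cost(n), "copies for", n, "appends")   # below 3n, so O(1) amortized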

d) 15-Puzzle Problem: Refer to Question No. 3 of Long Answer Type Questions.
