DESIGN AND ANALYSIS
OF ALGORITHM
Introduction 2
Fundamental Algorithmic Strategies 24
Graph and Tree Traversal Algorithms 82
Tractable and intractable Problems 106
Approximation Algorithms 122
Miscellaneous 127
NOTE:
MAKAUT course structure and syllabus of the 4th semester has been changed from 2020.
Previously DESIGN AND ANALYSIS OF ALGORITHM was in the 5th semester [CS & IT].
This subject has been completely redesigned and shifted to the 4th semester in the
present curriculum. A few new topics have also been introduced. Taking special care
of this matter, we are providing the relevant MAKAUT university solutions and some
model questions & answers for the newly introduced topics, so that students can get an
idea about university question patterns.
INTRODUCTION
= Chapter at a Glance
+ Algorithms are classified into two major groups. One is an "analytic" approach of
recognizing individual characters, while the other is a "holistic" approach. In some non-
heuristic studies, analysis consists of two phases:
1. A priori analysis: By a theoretical method we may calculate the computing time and space
required by an algorithm by formulating a function, and this is independent of the programming
language and the machine.
2. A posteriori analysis: In some cases we cannot calculate the time and space requirement of
an algorithm using a theoretical method. Then we observe its execution and measure the time
and space actually used by the algorithm. This is called a posteriori analysis.
+ Space complexity: It is defined as how much space is required to execute an algorithm. The
(space) complexity of an algorithm (for a given input) is the number of elementary objects
that the algorithm needs to store during its execution. This number is computed with
respect to the size n of the input data. This space is generally computer memory space.
Naturally, an algorithm that achieves a specific goal with less space requirement is considered
a better algorithm in terms of space complexity.
The space complexity of an algorithm X can be defined as S(X) = C(X) + I(X), where C(X)
is the required constant (fixed) space, whereas I(X) is the instance-dependent (variable) space requirement for
algorithm X.
+ Time complexity of an algorithm is the measure of the total time required to execute that
algorithm. Time complexity is independent of the computer, programming language,
programmer, and other implementation details. Usually, it depends on the size of the
input. The (time) complexity of an algorithm (for a given input) is the number of elementary
instructions that the algorithm executes.
+ Asymptotic Notations: The algorithm that takes minimum time to execute, given an input of
a specific size, is the asymptotically more efficient algorithm.
Worst-case complexity: The worst-case complexity is the complexity of an algorithm
when the input is the worst possible with respect to complexity.
Average-case complexity: The average complexity is the complexity of an algorithm
averaged over all possible inputs.
+ Master's Theorem:
The Master Theorem applies to recurrences of the following form: T(n) = aT(n/b) + f(n),
where a >= 1 and b > 1 are constants and f(n) is an asymptotically positive function.
There are 3 cases:
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a) · log^k n) with k >= 0, then T(n) = Θ(n^(log_b a) · log^(k+1) n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0 and f(n) satisfies the regularity condition, then T(n) = Θ(f(n)).
Regularity condition: a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n.

Multiple Choice Type Questions

11. ... if n > 1, and T(n) = p if n = 1, where p and q are constants.
The order of this algorithm is [WBUT 2011]
a) G(r") b) O(n") ©) O(n') ) O(n")
Answer: (c)
12. Complexity of the recurrence relation T(n) = 8T(n/2) + n is [WBUT 2012, 2016]
a) O(n)      b) O(n³)      c) O(log₂ n)      d) O(n²)
Answer: (b)
13. Complexity of Tower of Hanoi problem is [WBUT 2012]
a) O(n)      b) O(n²)      c) O(2ⁿ)      d) none of these
Answer: (c)
14. Consider the following three claims: [WBUT 2013]
I. (n + k)^m = Θ(n^m), where k and m are constants
II. 2^(n+1) = O(2^n)
III. 2^(2n) = O(2^n)
Which of the following claims are correct?
a)l and ) land it ©) and it 4), land m
Answer: (d)
15, Which of the following functions is asymptotically smallest? [WBUT 2013)
a2 by n'** ©) a @) Yiogn
Answer: (d)
16. An algorithm is made up of two independent time complexities f(n) and
g(n). Then the complexity of the algorithm is of the order of [WBUT 2014]
a) f(n) × g(n)            b) max(f(n), g(n))
c) min(f(n), g(n))        d) f(n) + g(n)
Answer: (b)
17. Which of the following is used to depict the working of an algorithm? [WBUT 2014]
a) flowchart      b) pseudo code      c) source code      d) all of these
Answer: (d)
18. The space requirement for the quick sort method depends on the [WBUT 2015]
a) number of nested recursive calls      b) size of the stack
c) both (a) and (b)                      d) none of these
Answer: (c)
19. Which of the following properties is/are necessary for an algorithm? [WBUT 2017]
a) Finiteness            b) Effectiveness
c) Both (a) and (b)      d) None of these
Answer: (c)
20. Time complexity for the recurrence relation T(n) = 2T(n-1) + c is [WBUT 2017]
a) O(n²)      b) O(log n)      c) O(n log n)      d) O(2ⁿ)
Answer: (d) [the recurrence expands to Θ(2ⁿ)]
21. The Big-O notation of the expression f(n) = n log₂ n + n² + e is
[WBUT 2018, 2019]
a) O(n²)      b) O(log₂ n)      c) O(n log₂ n)      d) O(eⁿ)
Answer: (a)
22. Time complexity of quick sort in worst case is [WBUT 2019]
a) O(n)      b) O(log n)      c) O(n²)      d) O(n log n)
Answer: (c)
Short Answer Type Questions
1. Prove that n! = O(nⁿ). [WBUT 2008, 2017]
Answer:
The factorial n! is defined for integers n >= 0 as
n! = 1               if n = 0
n! = n·(n-1)!        if n > 0
Thus, n! = 1·2·3·...·n.
A weak upper bound on the factorial function is n! <= nⁿ   ... (1)
since each of the n terms in the factorial product is at most n. Stirling's approximation,
n! = √(2πn)·(n/e)ⁿ·(1 + Θ(1/n)),
where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower
bound as well.
So, from the above equation (1), we can say that f(n) = n! and g(n) = nⁿ, and according to
the definition of O-notation, f(n) = n! <= nⁿ = O(nⁿ) for n > 0.
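The bound can also be sanity-checked numerically. A minimal sketch (the function name is ours, not from the text):

import math

def check_factorial_bound(limit=12):
    # Check numerically that n! <= n**n for 1 <= n <= limit.
    for n in range(1, limit + 1):
        assert math.factorial(n) <= n ** n, f"bound fails at n = {n}"
    print("n! <= n^n holds for n = 1 ..", limit)

check_factorial_bound()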
2. Why Recursion Tree method is better than the Substitution method for solving a
recurrence relation? Find asymptotic upper bound of the following recurrence
relation with the help of recursion tree method. [WBUT 2014]
T(n) = T(n/4) + T(n/2) + Θ(n²)
Answer:
Substitution Method: We make a guess for the solution and then we use mathematical
induction to prove the guess is correct or incorrect. So, this method consists of two steps:
• Guess the form of the solution.
• Use mathematical induction to find the constants in the form and show that the
solution works.
Recursion Tree Method: In this method, we draw a recursion tree and calculate the
time taken by every level of the tree. Finally, we sum the work done at all levels. To draw the
recursion tree, we start from the given recurrence and keep drawing till we find a pattern
among levels. The pattern is typically an arithmetic or geometric series.
So, the main problem of the substitution method is to find a good guess; otherwise, it
will not find the correct solution. But in the recursion tree method, we start from the
initial recurrence and expand the tree. So, the recursion tree method is better than the substitution
method.
The level-by-level costs of the recursion tree are
n², (5/16)n², (5/16)²n², ...
since the two children of a node of cost n² together cost (n/4)² + (n/2)² = (5/16)n².
Summing all levels,
T(n) <= n²·[1 + 5/16 + (5/16)² + ...] = n²·1/(1 - 5/16) = (16/11)·n² = O(n²)   [geometric series]
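The same conclusion can be checked numerically. A minimal sketch (assuming a base case T(n) = 1 for n < 4, which the question leaves unspecified, and integer division):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Assumed base case: constant work on tiny inputs.
    if n < 4:
        return 1
    # T(n) = T(n/4) + T(n/2) + n^2
    return T(n // 4) + T(n // 2) + n * n

for n in [16, 256, 4096, 65536]:
    print(n, T(n) / (n * n))   # the ratio settles near a constant (about 16/11), i.e. T(n) = Θ(n²)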
3. State Master theorem and find the time complexity for the recurrence relation
T(n) = 2T(n/4) + √n. [WBUT 2015]
Answer:
1st Part:
The master theorem often yields asymptotically tight bounds to some recurrences
from divide and conquer algorithms that partition an input into smaller sub-problems of
equal sizes, solve the sub-problems recursively, and then combine the sub-problem
solutions to give a solution to the original problem. The time for such an algorithm can be
expressed by adding the work that they perform at the top level of their recursion (to
divide the problems into sub-problems and then combine the sub-problem solutions)
together with the time taken in the recursive calls of the algorithm.
2nd Part:
T(n) = 2T(n/4) + √n
= 2[2T(n/16) + √(n/4)] + √n = 4T(n/16) + √n + √n = 4T(n/16) + 2√n
= 8T(n/64) + 3√n
= ...
= √n + √n + √n + ... + √n   [(log₄ n + 1) terms]
= √n·(log₄ n + 1) = O(√n·log n)
4. Solve the following recurrence relation using generating function:
a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} for n >= 3, with initial conditions a₀ = 1, a₁ = -1 and a₂ = 1.
[WBUT 2016]
Answer:
We have
a_n - 6a_{n-1} + 11a_{n-2} - 6a_{n-3} = 0   ... (1)
Let G(x) = Σ_{n>=0} a_n xⁿ.
Multiplying (1) by xⁿ and summing from n = 3 to ∞, we get
Σ_{n>=3} a_n xⁿ - 6x·Σ_{n>=3} a_{n-1} xⁿ⁻¹ + 11x²·Σ_{n>=3} a_{n-2} xⁿ⁻² - 6x³·Σ_{n>=3} a_{n-3} xⁿ⁻³ = 0
or, [G(x) - a₀ - a₁x - a₂x²] - 6x[G(x) - a₀ - a₁x] + 11x²[G(x) - a₀] - 6x³G(x) = 0
or, [G(x) - 1 + x - x²] - 6x[G(x) - 1 + x] + 11x²[G(x) - 1] - 6x³G(x) = 0
or, G(x)·(1 - 6x + 11x² - 6x³) = 1 - 7x + 18x²
or, G(x) = (1 - 7x + 18x²) / ((1 - x)(1 - 2x)(1 - 3x))
Writing G(x) = A/(1 - x) + B/(1 - 2x) + C/(1 - 3x) and solving for the constants gives A = 6, B = -8, C = 3.
So, G(x) = 6·Σ xⁿ - 8·Σ (2x)ⁿ + 3·Σ (3x)ⁿ, and hence
a_n = 6 - 8·2ⁿ + 3·3ⁿ.
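As a quick check, the closed form obtained above can be verified against the recurrence directly (a minimal sketch; the function name is ours):

def a_closed(n):
    # Closed form a_n = 6 - 8*2^n + 3*3^n derived above.
    return 6 - 8 * 2**n + 3 * 3**n

# Initial conditions a0 = 1, a1 = -1, a2 = 1
assert [a_closed(0), a_closed(1), a_closed(2)] == [1, -1, 1]

# Recurrence a_n = 6a_{n-1} - 11a_{n-2} + 6a_{n-3} for n >= 3
for n in range(3, 20):
    assert a_closed(n) == 6*a_closed(n-1) - 11*a_closed(n-2) + 6*a_closed(n-3)
print("the closed form satisfies the recurrence")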
10. Find the time complexity of an algorithm that computes the multiplication of n
numbers stored in an array. [MODEL QUESTION]
Answer:
Multiply (A, n)                          complexity
Step 1    Initialize m = 1;              O(1)
Step 2    for i = 1 to n do              O(n)
Step 2.1      m = m * A[i];              O(1)
Step 3    return m;                      O(1)
So the total time complexity of algorithm Multiply() is O(1) + O(n)·O(1) + O(1) = O(n).
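A direct Python rendering of Multiply() (the list name is arbitrary); the single pass over the n numbers is what gives the O(n) bound:

def multiply(a):
    # Product of the n numbers stored in list a: O(n) time, O(1) extra space.
    m = 1                   # Step 1
    for x in a:             # Step 2: the loop runs n times
        m = m * x           # Step 2.1
    return m                # Step 3

print(multiply([2, 3, 5, 7]))   # 210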
Long Answer Type Questions
1. Discuss different types of asymptotic notation. [WBUT 2005, 2011, 2013]
OR,
Define the following notations with example:
Big-oh (O), Little-oh (o), Ω, Θ [WBUT 2015]
OR,
Write the significance of different asymptotic notations (Big-O, Big-omega, Big-theta)
with graphical analysis. [WBUT 2017]
Answer:
Big-O notation: Big-O notation is a theoretical measure of the resource requirement of an
algorithm, usually the time and memory needed, given the problem size n (which is usually
the number of inputs). Informally speaking, f(n) = O(g(n)) means that
f(n) is at most some constant multiple of g(n) for large n.
So, f(n) = O(g(n)) means there are positive constants c and k such that
0 <= f(n) <= c·g(n) whenever n > k. The
values of c and k must be fixed for the function f and must not depend on n.
ω-notation: ω-notation is also a theoretical measure of the resource requirement of an
algorithm, usually the time and memory needed, given the problem
size n (which is usually the number of inputs). Informally, saying
f(n) = ω(g(n)) means g(n) relative to f(n) becomes insignificant as n goes to infinity. The
notation is read as "f of n is little-omega of g of n".
Given any c > 0, however small, there exists a k > 0 such that
g(n)/f(n) < c whenever n > k.
The value of k must not depend on n, but may depend on c. In other words,
lim_{n→∞} g(n)/f(n) = 0.
Θ-notation: This is also a theoretical measure of the execution of an algorithm, in terms
of the time or memory needed, given the problem size n. Informally, saying some
equation f(n) = Θ(g(n)) means it is within a band induced by a pair of constant
multiples of g(n). So, f(n) = Θ(g(n)) means there are positive constants c₁, c₂ and k
such that 0 <= c₁·g(n) <= f(n) <= c₂·g(n) whenever n > k.
Little-o notation: f(n) = o(g(n)) means that for any c > 0, however small, there exists a k > 0 such that
f(n)/g(n) < c whenever n > k.
The value of k must not depend on n, but may depend on c. This is equivalent
to lim_{n→∞} f(n)/g(n) = 0.
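As an illustration of the Big-O definition above, the sketch below checks that f(n) = 3n² + 10n + 5 is O(n²) using the particular witnesses c = 4 and k = 10 (our choice of constants; many others work):

def f(n): return 3*n*n + 10*n + 5     # the function being bounded
def g(n): return n*n                  # the comparison function

c, k = 4, 10                          # witnesses for f(n) = O(g(n))
assert all(f(n) <= c * g(n) for n in range(k + 1, 100000))
print("f(n) <= 4*g(n) for every tested n > 10, consistent with f(n) = O(n²)")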
2. a) Solve the following recurrence relation using generating function:
a_n = 6a_{n-1} - 11a_{n-2} + 4a_{n-3}, with initial conditions a₀ = 1, a₁ = 2, a₂ = 1.
[WBUT 2007, 2011]
Answer:
A(s) = Σ_{n>=0} a_n sⁿ is the generating function of the sequence {a_n, n >= 0}.
The relation is a_n = 6a_{n-1} - 11a_{n-2} + 4a_{n-3},
and a₀ = 1, a₁ = 2, a₂ = 1.
Multiplying both sides by sⁿ and summing over n >= 3,
A(s) - a₂s² - a₁s - a₀ = 6s(A(s) - a₁s - a₀) - 11s²(A(s) - a₀) + 4s³A(s)
or, A(s) - s² - 2s - 1 = 6s(A(s) - 2s - 1) - 11s²(A(s) - 1) + 4s³A(s)
or, A(s) - s² - 2s - 1 = 6sA(s) - 12s² - 6s - 11s²A(s) + 11s² + 4s³A(s)
or, A(s)·[1 - 6s + 11s² - 4s³] = -12s² + s² + 11s² - 6s + 2s + 1 = -4s + 1
or, A(s) = (1 - 4s)·(1 - T)⁻¹, where T = 6s - 11s² + 4s³
= (1 - 4s)·[1 + T + T² + T³ + ...]
= (1 - 4s)·[1 + (6s - 11s² + 4s³) + (6s - 11s² + 4s³)² + ...]
Collect the coefficient of sʳ in this expansion, in general for r >= 0, and that gives a_r.
b) What is the Tail Recursion? Give an example. [WBUT 2007]
Answer:
A function call is said to be tail recursive if there is nothing to do after the function
returns except return its value. This is a special form of recursion where the last operation
of a function is a recursive call. The recursion may be optimized away by executing the
call in the current stack frame and returning its result rather than creating a new stack
frame.
Although it may not save a lot of run time, it can save a lot of memory.
The following finds the maximum value in a list.
Here is an example written in two different ways. The first is written as a
general recursive method and the second as a tail recursive method; later, we
shall compare these two methods.
Max_list() is an algorithm to find the maximum element from a list A.
Max_list (A, max)
{
    if (A == null) then
        return max;
    if (max < head(A)) then
        return Max_list (tail(A), head(A));
    else
        return Max_list (tail(A), max);
}
The return value of the current call is just the return value of the recursive call. A
compiler could optimize it with the following implementation, so that it does not allocate
new space for A and max on each invocation, as is happening now.
Max_list (A, max)
{
    while (true)
    {
        if (A == null) then
            return max;
        if (max < head(A)) then
        {
            max = head(A);
            A = tail(A);
        }
        else
            A = tail(A);
    }
}
Now there is no need to allocate new memory for the parameters, so this version is more
efficient.
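The same two versions in Python, as a minimal sketch (Python lists stand in for the list A; note that CPython does not itself perform tail-call optimization, so the loop version is the one that actually avoids stack growth):

def max_list_recursive(a, current_max):
    # Tail-recursive style: the recursive call is the last operation.
    if not a:
        return current_max
    head, tail = a[0], a[1:]
    if current_max < head:
        return max_list_recursive(tail, head)
    return max_list_recursive(tail, current_max)

def max_list_iterative(a, current_max):
    # The optimized form: the tail call is replaced by a loop, so no new stack frames.
    while a:
        head, a = a[0], a[1:]
        if current_max < head:
            current_max = head
    return current_max

data = [3, 9, 2, 7]
print(max_list_recursive(data[1:], data[0]), max_list_iterative(data[1:], data[0]))   # 9 9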
3. State Master's theorem and find the time complexity for the following
recurrence: T(n) = 2T(n^(1/2)) + log n. [WBUT 2008, 2016]
Answer:
Master Theorem
The Master Theorem applies to recurrences of the following form:
T(n) = aT(n/b) + f(n),
where a >= 1 and b > 1 are constants and f(n) is an asymptotically positive function.
There are 3 cases:
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)·log^k n) with k >= 0, then T(n) = Θ(n^(log_b a)·log^(k+1) n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0 and f(n) satisfies the regularity condition, then T(n) = Θ(f(n)).
Regularity condition: a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n.
Time complexity for the recurrence T(n) = 2T(n^(1/2)) + log n:
Let us consider p = log n, so that n = 2^p.
So, T(2^p) = 2T(2^(p/2)) + p.
Now, we further consider K(p) = T(2^p).
So, K(p) = 2K(p/2) + p = 2[2K(p/4) + p/2] + p = 2²K(p/4) + 2p = ...
Subject to some terminating condition (i.e. not given here), K(p) = O(p log p).
So, T(n) = T(2^p) = K(p) = O(p log p) = O(log n·log(log n)).
4. Suppose we have a recurrence relation T(n) = aT(n/b) + f(n). Show the
following: [WBUT 2010, 2012]
a) If a·f(n/b) = k·f(n) for some constant k < 1, then T(n) = Θ(f(n)).
b) If a·f(n/b) = k·f(n) for some constant k > 1, then T(n) = Θ(n^(log_b a)).
c) If a·f(n/b) = k·f(n) for some constant k = 1, then T(n) = Θ(f(n)·log_b n).
Answer:
To prove this, we basically draw a general version of the recursion tree; the Master Theorem
is just a special case of the use of recursion trees. This statement of the Master Theorem
gives only upper bounds, although you can see that our analysis below allows us to
calculate T(n) explicitly (when n is a power of b), and it assumes f has a very
particular form. It is actually possible to give accompanying lower bounds even with
weaker restrictions on f.
Consider the recurrence T(n) = aT(n/b) + f(n).
We start by drawing a recursion tree.
• The root contains the value f(n).
• It has a children, each of which contains the value f(n/b).
• Each of these nodes has a children, containing the value f(n/b²).
• In general, level i contains aⁱ nodes with values f(n/bⁱ).
• Hence the sum of the nodes at the i-th level is aⁱ·f(n/bⁱ).
The tree stops when we get to the base case for the recurrence. We will assume that T(1)
= f(1) = Θ(1) is the base case. Thus the depth of the tree is log_b n and there are log_b n + 1 levels.
Let T(n) be the sum of all values stored in all levels of the tree. From the recursion tree,
once again assuming that n is a power of b, we see that
T(n) = f(n) + a·f(n/b) + a²·f(n/b²) + a³·f(n/b³) + ... + a^L·f(n/b^L)   ... (1)
where L = log_b n is the depth of the tree.
Since f(1) = Θ(1), the last term of this summation is
Θ(a^L) = Θ(a^(log_b n)) = Θ(n^(log_b a)).
Now,
a) If a·f(n/b) = k·f(n) for some constant k < 1, then f(n) is a constant factor larger than
a·f(n/b), and the sum is a descending geometric series. The sum of any geometric series is
a constant times its largest term. In this case, the largest term is the first term f(n). So,
from equation (1),
T(n) = f(n) + a·f(n/b) + a²·f(n/b²) + ... + a^L·f(n/b^L)
= f(n)·[1 + k + k² + ... + k^L].
If k < 1, then the series is O(1), as may be seen from the closed form of the sum, so
T(n) = Θ(f(n)).
b) If f(n) is a constant factor smaller than a·f(n/b) (k > 1), then the sum is an ascending geometric
series. The sum of any geometric series is a constant times its largest term. In this case,
this is the last term, which by our earlier argument is Θ(n^(log_b a)), so T(n) = Θ(n^(log_b a)).
c) If k = 1, then all terms in the series are equal, and there are log_b n + 1 of them, so
T(n) = Θ(f(n)·log_b n).
5. State Master's theorem and find out the time complexity for the recurrence
T(n) = T(2n/3) + 1. [WBUT 2012]
Answer:
T(n) = T(2n/3) + 1, in which a = 1, b = 3/2, f(n) = 1, and n^(log_b a) = n^(log_{3/2} 1) = n⁰ = 1.
[Let a >= 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the
nonnegative integers by the recurrence
T(n) = aT(n/b) + f(n),
where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded
asymptotically as follows:
if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a)·lg n).]
Since f(n) = Θ(n^(log_b a)) = Θ(1), this case applies, and thus the solution to the recurrence is
T(n) = Θ(lg n).
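Intuitively, the recurrence says the problem size shrinks to two-thirds at a cost of 1 per step, so the total cost is the number of such steps, roughly log_{3/2} n = Θ(lg n). A minimal sketch (the function name is ours):

import math

def shrink_steps(n):
    # Count how many times n can be shrunk to 2n/3 before reaching 1.
    count = 0
    while n > 1:
        n = (2 * n) // 3
        count += 1
    return count

for n in [10, 100, 1000, 10**6]:
    print(n, shrink_steps(n), round(math.log(n, 1.5), 1))   # the step count grows like log_{3/2} n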
6. Use the recursion tree to give an asymptotically tight solution to the recurrence
T(n) = T(n/3) + T(2n/3) + cn, where c > 0 is a constant. [WBUT 2012]
Answer:
The recursion tree is:
Level 0:   cn                                                    → cn
Level 1:   c(n/3)        c(2n/3)                                 → cn
Level 2:   c(n/9)   c(2n/9)   c(2n/9)   c(4n/9)                  → cn
...
Total: O(n log n)
The recursion tree is full for log₃ n levels, each contributing cn, so we guess Ω(n log n).
It has log_{3/2} n levels in total, each contributing at most cn, so we guess O(n log n). Hence T(n) = Θ(n log n).
7. Solve the recurrence relation a_n - 7a_{n-1} + 10a_{n-2} = 0, with a₀ = 10 and a₁ = 41, using generating function.
[WBUT 2013]
Answer:
The characteristic equation of the recurrence relation is r² - 7r + 10 = 0.
Its roots are r = 2 and r = 5.
Hence the sequence {a_n} is a solution to the recurrence relation if and only if
a_n = α₁·2ⁿ + α₂·5ⁿ
for some constants α₁ and α₂.
From the initial conditions, it follows that
a₀ = 10 = α₁ + α₂   ... (1)
a₁ = 41 = 2α₁ + 5α₂   ... (2)
Solving the equations (1) and (2), we get α₁ = 3, α₂ = 7.
Hence the solution is the sequence {a_n} with a_n = 3·2ⁿ + 7·5ⁿ.
Alternative method (generating functions):
First, find the closed form of the generating function G(x) = Σ_{n>=0} a_n xⁿ.
G(x) = a₀ + a₁x + Σ_{n>=2} a_n xⁿ = 10 + 41x + Σ_{n>=2} (7a_{n-1} - 10a_{n-2}) xⁿ
= 10 + 41x + 7x·Σ_{n>=2} a_{n-1} xⁿ⁻¹ - 10x²·Σ_{n>=2} a_{n-2} xⁿ⁻²
= 10 + 41x + 7x(G(x) - a₀) - 10x²G(x) = 7xG(x) - 10x²G(x) - 29x + 10
So, we have G(x)·(10x² - 7x + 1) = -29x + 10
G(x) = (-29x + 10) / (10x² - 7x + 1) = (-29x + 10) / ((1 - 2x)(1 - 5x)) = a/(1 - 2x) + b/(1 - 5x)
Then we can solve: a = 3 and b = 7.
So, G(x) = 3·(1 - 2x)⁻¹ + 7·(1 - 5x)⁻¹ = 3·Σ (2x)ⁿ + 7·Σ (5x)ⁿ
Thus, a_n = 3·2ⁿ + 7·5ⁿ.
8. If T(n) is given by [WBUT 2015]
T(n) = 1,               n = 1
T(n) = 2T(n/2) + 5n,    n > 1
then show that T(n) = O(n·log₂ n).
Show all steps of derivation.
Answer:
Let a >= 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative
integers by the recurrence
T(n) = aT(n/b) + f(n).
We can apply the second case of the Master Theorem:
if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a)·lg n).
Here, a = 2, b = 2 and f(n) = 5n, and n^(log_b a) = n^(log₂ 2) = n = Θ(f(n)).
Therefore T(n) = Θ(n^(log₂ 2)·log₂ n) = Θ(n·log₂ n), and in particular T(n) = O(n·log₂ n).
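A recurrence of this shape (two half-size subproblems plus linear work) is exactly the one produced by merge sort, which is why merge sort runs in Θ(n·log₂ n) time. A minimal merge sort sketch in Python, for illustration only (it is not part of the question):

def merge_sort(a):
    # T(n) = 2T(n/2) + Θ(n): split, sort the two halves, merge in linear time.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):       # linear-time merge
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]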
9. Write short notes on the following:
a) Asymptotic notation [WBUT 2008, 2012, 2014, 2016, 2019]
b) Turing machines [WBUT 2009]
c) Recursion tree [WBUT 2009, 2012, 2015, 2016]
Answer:
a) Asymptotic notation:
Suppose we are considering two algorithms, A and B, for solving a given problem.
Furthermore, let us say that we have done a careful analysis of the running times of each
of the algorithms and determined them to be T_A(n) and T_B(n), respectively, where n is a
measure of the problem size. Then it should be a fairly simple matter to compare the two
functions T_A(n) and T_B(n) to determine which algorithm is the better one.
One possibility arises if we know the problem size a priori. For example, suppose the
problem size is n₀ and T_A(n₀) < T_B(n₀). Then clearly algorithm A is better than algorithm
B for problem size n₀.
In the general case, we have no a priori knowledge of the problem size. However, if it
can be shown, say, that T_A(n) <= T_B(n) for all n >= 0, then algorithm A is better than
algorithm B regardless of the problem size.
Unfortunately, we usually don't know the problem size beforehand, nor is it true that one
of the functions is less than or equal to the other over the entire range of problem sizes. In
this case, we consider the asymptotic behaviour of the two functions for very large
problem sizes.
The notations we use to describe the asymptotic running time of an algorithm are defined
in terms of functions whose domains are the set of natural numbers N = {0, 1, 2, ...}.
Such notations are convenient for describing the worst-case running-time function T(n),
which is usually defined only on integer input sizes.
Different types of asymptotic notations are:
Big-O notation
Ω-notation
Θ-notation
ω-notation
Little-o notation
b) Turing machines:
A Turing machine (TM), denoted by M, is defined as a 7-tuple
M = (Q, Σ, Γ, δ, q₀, #, F),
where
Q = a finite non-empty set of states,
Σ = a non-empty set of input symbols (alphabet), which is a subset of Γ with # ∉ Σ,
Γ = a finite non-empty set of tape symbols (Σ ⊂ Γ), where # ∈ Γ is the blank symbol,
δ = the transition function, δ: Q × Γ → Q × Γ × {L, R}.
It is a mapping from the present state of the automaton and the scanned tape symbol to the next state, the tape
symbol to be written, and the movement of the head in the left or right direction along the tape. This tells us that a
Turing machine in some present state q ∈ Q, after scanning a symbol from
Γ (including #), goes to a next state q' ∈ Q by writing a symbol x ∈ Γ in the current cell of the input
tape and finally takes a left or right move.
q₀ = the initial state, and
F ⊆ Q = the set of final states.
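The 7-tuple definition can be animated with a tiny simulator. The sketch below is our own illustration (the machine shown, which appends a trailing 1 to a unary input, is a made-up example, not one from the text); δ is stored as a dictionary keyed by (state, symbol):

BLANK = "#"

def run_tm(delta, tape, start, finals, max_steps=1000):
    # Simulate a single-tape Turing machine M = (Q, Sigma, Gamma, delta, q0, #, F).
    tape = dict(enumerate(tape))           # sparse tape: position -> symbol
    state, head = start, 0
    for _ in range(max_steps):
        if state in finals:
            break
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1)]
    return state, "".join(cells)

# Example machine (hypothetical): append a '1' to a unary input string.
delta = {
    ("q0", "1"): ("q0", "1", "R"),         # move right over the 1s
    ("q0", BLANK): ("qf", "1", "R"),       # write 1 on the first blank, then accept
}
print(run_tm(delta, "111", "q0", {"qf"}))   # ('qf', '1111')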
FUNDAMENTAL ALGORITHMIC STRATEGIES

Multiple Choice Type Questions

3. The minimum number of colours needed to colour a graph having n (> 3) vertices
and 2 edges is [WBUT 2007, 2014, 2016]
a)2 bys ea at
Answer: (b)
4. Which of the following algorithm design techniques is used in the quick sort
algorithm? [WBUT 2008, 2010, 2016, 2018]
a) Dynamic programming        b) Backtracking
c) Divide and conquer         d) Greedy method
Answer: (c)
5. Kruskal algorithm is a [WBUT 2008, 2012]
a) Divide & conquer algorithm      b) Branch and bound algorithm
c) Greedy algorithm                d) Dynamic programming
Answer: (c)
6. Optimal substructure property is exploited by [WBUT 2008, 2013]
a) Dynamic programming       b) Greedy method
c) Both (a) & (b)            d) None of these
Answer: (a)
7. Which of the following approaches is adopted in Divide & Conquer algorithms?
a) Top-down        b) Bottom-up [WBUT 2009, 2018]
c) Both (a) & (b)        d) None of these
Answer: (a)
8. The fractional Knapsack problem can be solved by using [WBUT 2010, 2016]
a) Greedy method             b) Divide and Conquer method
c) Dynamic programming       d) none of these
Answer: (a)
9. Time complexity of Binary Search algorithm on n items is (WBUT 2010, 2016)
a) O(n) b) O(nlogn) ¢) O(n’) 4) O(nlogn)
Answer: The correct answer is O(log₂ n).
14. Which of the following cannot be performed recursively? [WBUT 2011, 2013]
a) binary search      b) quick sort      c) DFS      d) none of these
Answer: (d)
12. In which sorting technique is an element placed in its proper position at each
step? [WBUT 2011]
a) Bubble sort      b) Quick sort      c) Merge sort      d) Heap sort
Answer: (a)
13. The average number of comparisons performed by merge sort algorithm in
merging two sorted lists of length 2 is [WBUT 2011, 2018, 2019]
a) 8/5      b) 14/7      c) 11/6      d) 8/3
Answer: (d)
14, Which of the following design techniques is used in the quick-sort algorithm?
[WBUT 2011, 2013, 2018, 2019)
a) Dynamic programming b) Back tracking
c) Greedy method        d) Divide and conquer
Answer: (4)
15. The time-complexity of TSP is [WBUT 2011]
a) O(n²·2ⁿ)      b) Θ(n²·2ⁿ)      c) Ω(n²·2ⁿ)      d) none of these
Answer: (a)
16. Which of the following algorithms solves the All-Pair Shortest Path problem?
[WBUT 2011, 2016]
a) Dijkstra's      b) Floyd-Warshall's      c) Prim's      d) Kruskal's
Answer: (b)
17. Best case time complexity for Binary search in unsuccessful case is
[WBUT 2012, 2018]
a) O(1)      b) O(log n)      c) O(n)      d) None of these
Answer: (b)
18. The tight bound for building a max heap is [WBUT 2012, 2016]
a) O(n)      b) O(log₂ n)      c) O(n·log₂ n)      d) none of these
Answer: (a) [a max heap can be built in Θ(n) time]
19. The worst case running time of a quick sort algorithm is [weur 2012)
a) OW") b) Ovnlog, n) ©) O(n) d) O(log.)
Answer: (a)
20. Binary Search algorithm can't be applied to [WBUT 2012]
a) sorted linked lists        b) sorted binary trees
) sorted linear array 4) sorted integer array
Answer: (a)
21. Kruskal's algorithm uses __________ and Prim's algorithm uses __________ in
determining the MST. [WBUT 2013]
a) edges, vertex      b) vertex, edges      c) edges, edges      d) vertex, vertex
Answer: (a)
22. Match the following [wBur 2014)
a) Fractional Knapsack        i) Greedy Algorithm
b) 0-1 Knapsack               ii) Dynamic Programming Algorithm
a) a-i and b-i      b) a-i and b-ii      c) a-ii and b-i      d) a-ii and b-ii
Answer: (b)
23. The average number of comparisons performed by merge sort algorithm in
merging two sorted lists of 2 elements is [WBUT 2014]
a) 8/5      b) 14/7      c) 11/6      d) 8/3
Answer: (d)
24. Which of the following sorting methods would be most suitable for sorting a
list which is almost sorted [WBUT 2014]
a) bubble sort      b) insertion sort      c) selection sort      d) quick sort
Answer: (b)
25. Prim's algorithm is an example of [WBUT 2014]
a) backtracking              b) dynamic programming
c) greedy method             d) none of these
Answer: (c)
26. Kruskal's Algorithm for finding minimum spanning tree is an example of
a) Dynamic programming b) Greedy method [WBUT 2015)
¢) Both (a) and (b) @) None of these.
Answer: (b)
27. Given two sorted lists of sizes 'm' and 'n' respectively, the number of
comparisons needed in the worst case by merge sort will be [WBUT 2015]
a) m+n      b) max(m, n)      c) min(m, n)      d) m+n-1
Answer: (d) [merging lists of sizes m and n needs at most m + n - 1 comparisons]
28. The running time T(n), where 'n' is the input size of a recursive algorithm, is
given by [WBUT 2015]
T(n) = c + T(n-1),  if n > 1
     = d,           if n <= 1
The order of the algorithm is
a) b)n on dyn
Answer: (b)
29. Travelling salesman problem is [WeuT 2015, 2016)
a) P      b) NP      c) NP-complete      d) NP-Hard
Answer: (a)
30. Locally best computation is done in (WBUT 2017]
a) Dynamic programming        b) Greedy method
¢) both (a) and (b) 4) none of these
Answer: (b)
31. Which of the following algorithm design techniques is used for solving graph
coloring problem? [WBUT 2017]
a) Divide and conquer        b) Backtracking
¢) Dynamic programming d) Greedy method
Answer: (b)
32. The total running time of matrix chain multiplication of n matrices is
[WBUT 2017]
a)o(n') b) on’) ©) o(n") 4d) o(n)
Answer: (b)
33. The sub-problems in Divide and Conquer are considered tobe — [WBUT 2017]
a) distinct      b) overlapping      c) large size      d) small size
Answer: (a)
34. Which of the following algorithm design techniques is used in merge sort?
a) Dynamic programming        b) Backtracking [WBUT 2017]
c) Divide and conquer         d) Greedy method
Answer: (c)
35. Which of the following standard algorithm is not based on Dynamic
Programming? [WBUT 2018, 2019]
a) Bellman-Ford Algorithm for single source shortest path
b) Floyd Warshall Algorithm for all pairs shortest paths
¢) 0-1 Knapsack problem
4) Prim’s Minimum Spanning Tree
Answer: (d)
36. Time complexity of Quick sort in worst case is [WBUT 2018]
a) O(n)      b) O(log n)      c) O(n²)      d) O(n log n)
Answer: (c)
37. A machine needs a minimum of 100 ms to sort 1000 names by quick sort. The
minimum time needed to sort 100 names will be approximately [WBUT 2018, 2019]
a) 50.2 ms      b) 6.7 ms      c) 72.7 ms      d) 14.2 ms
Answer: (b) [time scales as n·log n, so 100 ms × (100·log 100)/(1000·log 1000) ≈ 6.7 ms]
38. What is the time complexity to insert an element into a heap?
[WBUT 2018, 2019]
a) O(n log n)      b) O(log n)      c) O(n)      d) None of these
Answer: (b)
39. Best case time complexity for Binary search in unsuccessful case is
[WBUT 2019]
a) O(n)      b) O(log n)      c) O(n²)      d) O(n log n)
Answer: (b)
40. Which of the following approaches is Divide & Conquer strategy? [WBUT 2019]
a)Top-down —b) Bottom-up ¢) Both (a) &(b) dd) None of these
Answer: (a)
41. A machine needs 200 ms to sort 200 names, using bubble sort. in 800 ms, it can
approximately sort (WBUT 2019)
a) 400 names      b) 800 names      c) 750 names      d) 1800 names
Answer: (a)
42. Which of the following is not a backtracking algorithm? [WBUT 2019]
a) N queen problem           b) Tower of Hanoi
c) M coloring problem        d) None of these
Answer: (b) [Tower of Hanoi is solved by plain recursion, not backtracking]
Short Answer Type Questions
1. What is union-find algorithm? [WBUT 2004, 2005, 2007, 2011, 2012]
OR,
What is union-find algorithm? Explain with an example. [WBUT 2015]
OR,
Write short note on Union Find Algorithm. [WBUT 2018, 2019]
Answer:
Union-Find Algorithm of disjoint sets
A disjoint-set structure maintains a partition of a set A = {A₁, A₂, ..., Aₖ} such that no
element belongs to more than one set simultaneously. For each set Aᵢ in the partition, a
representative, which is some member of the set, is kept track of. The initial partition has
each element in a set by itself. Suppose Aᵢ and Aⱼ are two different sets where i ≠ j. If
they are disjoint sets, then there is no element that is both in Aᵢ and Aⱼ.
Union of disjoint sets
Union(Aᵢ, Aⱼ), where i ≠ j, replaces Aᵢ and Aⱼ and specifies a representative for Aᵢ
∪ Aⱼ; i.e., the union of two disjoint sets Aᵢ and Aⱼ creates a new set in which all the
elements of Aᵢ and Aⱼ are kept and whose representative is x, where x ∈ Aᵢ or x ∈ Aⱼ is a
representative of either Aᵢ or Aⱼ.
Algorithm
Union (x, y)
    parent[x] := y
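A hedged Python sketch of the disjoint-set structure described above (the class and method names are ours; path compression and union by size are added for efficiency):

class DisjointSet:
    # Union-Find: maintains a partition of {0, ..., n-1} into disjoint sets.
    def __init__(self, n):
        self.parent = list(range(n))    # each element starts in its own set
        self.size = [1] * n

    def find(self, x):
        # Follow parent pointers to the representative, compressing the path as we go.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        # Merge the sets containing x and y (union by size).
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.size[rx] < self.size[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        self.size[rx] += self.size[ry]

ds = DisjointSet(5)
ds.union(0, 1); ds.union(3, 4)
print(ds.find(1) == ds.find(0), ds.find(2) == ds.find(0))   # True False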