B. TECH.
(SEM-V) THEORY EXAMINATION 2020-21
DESIGN AND ANALYSIS OF ALGORITHM
Time: 3 Hours Total Marks: 100
Note: 1. Attempt all Sections. If you require any missing data, then choose suitably.
SECTION A
1. Attempt all questions in brief. 2 x 10 = 20
Extended master theorem for divide-and-conquer recurrences of the form
T(n) = a T(n/b) + θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0 and p is a real number:
Case-01:
If a > b^k, then T(n) = θ(n^(log_b a))
Case-02:
If a = b^k and
If p < -1, then T(n) = θ(n^(log_b a))
If p = -1, then T(n) = θ(n^(log_b a) · log log n)
If p > -1, then T(n) = θ(n^(log_b a) · log^(p+1) n)
Case-03:
If a < b^k and
If p < 0, then T(n) = O(n^k)
If p >= 0, then T(n) = θ(n^k log^p n)
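As a quick check of Case-02 (using the recurrence form assumed above), consider the merge sort recurrence T(n) = 2T(n/2) + θ(n). Here a = 2, b = 2, k = 1 and p = 0, so a = b^k and p > -1, which gives T(n) = θ(n^(log_2 2) · log^(0+1) n) = θ(n log n).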
Omega (Ω) Notation:
The function f is said to be Ω(g), if there is a constant c > 0 and a natural
number n0 such that c*g(n) ≤ f(n) for all n ≥ n0
The naïve approach tests all the possible placements of the pattern P[1...m]
relative to the text T[1...n]. We try the shifts s = 0, 1, ..., n-m successively and,
for each shift s, compare T[s+1...s+m] to P[1...m].
The naïve algorithm finds all valid shifts using a loop that checks the
condition P[1...m] = T[s+1...s+m] for each of the n - m + 1 possible
values of s.
NAIVE-STRING-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. for s ← 0 to n -m
4. do if P [1.....m] = T [s + 1....s + m]
5. then print "Pattern occurs with shift" s
Analysis: The for loop in lines 3 to 5 executes n - m + 1 times (we need at least
m characters at the end), and in each iteration we do up to m
comparisons. So the total complexity is O((n - m + 1) · m).
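For reference, a minimal runnable sketch of the same naive scan in Python (the function name and the sample strings are mine, not from the question paper):

def naive_string_matcher(text, pattern):
    """Try every shift s and compare up to m characters: O((n - m + 1) * m)."""
    n, m = len(text), len(pattern)
    shifts = []
    for s in range(n - m + 1):           # shifts s = 0, 1, ..., n - m
        if text[s:s + m] == pattern:     # character-by-character comparison
            shifts.append(s)
    return shifts

print(naive_string_matcher("acaabc", "aab"))   # [2]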
A skip list stores a sorted list of elements using a hierarchy of linked
lists. It allows the elements to be searched efficiently: in a single step
the search can skip over several elements of the list, which is why it
is known as a skip list.
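As an illustration only (the node layout, the level cap and the promotion probability below are my assumptions, not part of the original answer), a small Python sketch of skip-list insert and search:

import random

class SkipNode:
    """A key plus one forward pointer per level."""
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * (level + 1)

class SkipList:
    """Minimal skip list: insert and search only, with a fixed maximum level."""
    MAX_LEVEL = 4
    P = 0.5   # probability of promoting a node one level up

    def __init__(self):
        self.header = SkipNode(None, self.MAX_LEVEL)
        self.level = 0   # highest level currently in use

    def _random_level(self):
        lvl = 0
        while random.random() < self.P and lvl < self.MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.header] * (self.MAX_LEVEL + 1)
        x = self.header
        for i in range(self.level, -1, -1):       # remember where we turned at each level
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x
        lvl = self._random_level()
        self.level = max(self.level, lvl)
        node = SkipNode(key, lvl)
        for i in range(lvl + 1):                  # splice the new node into levels 0..lvl
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node

    def search(self, key):
        x = self.header
        for i in range(self.level, -1, -1):       # skip ahead, then drop a level
            while x.forward[i] is not None and x.forward[i].key < key:
                x = x.forward[i]
        x = x.forward[0]
        return x is not None and x.key == key

sl = SkipList()
for k in [3, 6, 7, 9, 12, 19]:
    sl.insert(k)
print(sl.search(9), sl.search(10))   # True False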
j. Explain how an algorithm's performance is analyzed. (2 marks)
Analysis of algorithms is the determination of the amount of time and
space resources required to execute it. Usually, the efficiency or running
time of an algorithm is stated as a function relating the input length to the
number of steps, known as time complexity, or volume of memory,
known as space complexity. Asymptotic analysis is a way of analysing the
behaviour of an algorithm as the size of the input grows indefinitely. The
asymptotic analysis allows you to determine the best, worst, and average
case performance of an algorithm, but it does not provide information
about the algorithm's performance on specific inputs. Analysis of an
algorithm is the process of analysing its problem-solving capability in
terms of the time and space required (the size of memory needed for
storage during execution). However, the main concern of analysis of
algorithms is the required time or performance. Generally, we perform the
following types of analysis −
Worst-case − The maximum number of steps taken on any instance of size n.
Best-case − The minimum number of steps taken on any instance of size n.
Average case − The average number of steps taken on any instance of size n.
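To make the three cases concrete, here is a small illustrative Python example (linear search, with a comparison counter added by me purely for illustration):

def linear_search(a, target):
    """Return (index, comparisons) so the cost on a particular input is visible."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 4, 1]
print(linear_search(data, 7))   # best case: found at the first position, 1 comparison
print(linear_search(data, 1))   # worst case (present): n = 5 comparisons
print(linear_search(data, 8))   # worst case (absent): n = 5 comparisons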
SECTION B
2. Attempt any three of the following:
a. Write an algorithm for counting sort. Illustrate the operation of counting sort on the following array: A = {4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3}. (10 marks)
Counting sort performs sorting by counting the number of objects having each distinct key value, somewhat like hashing. After that, it performs some arithmetic on these counts to calculate each object's index position in the output sequence. Counting sort is not used as a general-purpose sorting algorithm; it is effective when the range of key values is not much greater than the number of objects to be sorted. It can also be extended to handle negative input values.
Algorithm
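The algorithm figure from the original answer copy is not reproduced here; the following is a minimal Python sketch of counting sort for integer keys in the range 0..k (function and variable names are mine), applied to the array from the question:

def counting_sort(a, k):
    """Stable counting sort for integers in 0..k, running in O(n + k) time."""
    count = [0] * (k + 1)
    for x in a:                    # count occurrences of each key value
        count[x] += 1
    for i in range(1, k + 1):      # prefix sums: count[i] = number of elements <= i
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):          # place from right to left to keep the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

A = [4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3]
print(counting_sort(A, 5))         # [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5]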
Time Complexity: O(n + k), where n is the number of input elements and k is the range of the key values. When k = O(n), counting sort runs in O(n) time.
b. Show the results of inserting the keys F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, Y, D, Z, E in order into an empty B-tree. Use t = 3, where t is the minimum degree of the B-tree. (10 marks)
The activity selection problem is a mathematical optimization problem. Our first illustration is the problem of scheduling a resource among several competing activities. A greedy algorithm provides a well-designed and simple method for selecting a maximum-size set of mutually compatible activities.
Suppose S = {1, 2, ..., n} is the set of n proposed activities. The activities share a resource which can be used by only one activity at a time, e.g., a tennis court, a lecture hall, etc. Each activity i has a start time si and a finish time fi, where si ≤ fi. If selected, activity i takes place during the half-open time interval [si, fi). Activities i and j are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The activity-selection problem chooses a maximum-size set of mutually compatible activities.
Example: Given 10 activities along with their start and end time as
S = (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)
Si = (1, 2, 3, 4, 7, 8, 9, 9, 11, 12)
fi = (3, 5, 4, 7, 10, 9, 11, 13, 12, 14)
Solution: The solution to the above Activity scheduling problem using a greedy strategy is
illustrated below:
Now, schedule A1 (it finishes earliest). Considering the remaining activities in order of finish time: schedule A3, skip A2 as it is interfering, schedule A4, schedule A6, skip A5 as it is interfering, schedule A7, schedule A9, skip A8 as it is interfering, and schedule A10. The selected set is {A1, A3, A4, A6, A7, A9, A10}, as the sketch below also reproduces.
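A greedy sketch in Python for this schedule (sorting by finish time; the function name is mine):

def activity_selection(start, finish):
    """Greedy: consider activities by increasing finish time, keep each one that
    starts no earlier than the finish time of the last chosen activity."""
    order = sorted(range(len(start)), key=lambda i: finish[i])
    chosen, last_finish = [], 0
    for i in order:
        if start[i] >= last_finish:         # compatible with the previous choice
            chosen.append("A" + str(i + 1))
            last_finish = finish[i]
    return chosen

s = [1, 2, 3, 4, 7, 8, 9, 9, 11, 12]
f = [3, 5, 4, 7, 10, 9, 11, 13, 12, 14]
print(activity_selection(s, f))   # ['A1', 'A3', 'A4', 'A6', 'A7', 'A9', 'A10']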
d. What is the sum of subsets problem? Draw a state space tree for the sum of subsets problem using backtracking. Let n = 6, m = 30 and w[1:6] = {5, 10, 12, 13, 15, 18}. (10 marks)
Let S = {S1, ..., Sn} be a set of n positive integers; then we have to find a subset whose sum is equal to a given positive integer d (here the target is m = 30). It is always convenient to sort the set's elements in ascending order, that is, S1 ≤ S2 ≤ ... ≤ Sn.
Algorithm:
Let, S is a set of elements and m is the expected sum of subsets. Then:
1. Start with an empty set.
2. Add to the subset, the next element from the list.
3. If the subset is having sum m then stop with that subset as solution.
4. If the subset is not feasible or if we have reached the end of the set then backtrack
through the subset until we find the most suitable value.
5. If the subset is feasible then repeat step 2.
6. If we have visited all the elements without finding a suitable subset and if no backtracking is possible, then stop without a solution. A backtracking sketch of this procedure, applied to the given instance, is shown below.
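A Python sketch of the backtracking search for the given instance (the pruning with a running remainder is my own illustrative choice; the state space tree itself is not reproduced here):

def subset_sum(w, m):
    """Backtracking for the sum-of-subsets problem on a sorted weight list,
    pruning branches that can no longer reach the target sum m."""
    w = sorted(w)
    n = len(w)
    remaining = [0] * (n + 1)
    for i in range(n - 1, -1, -1):           # remaining[i] = w[i] + ... + w[n-1]
        remaining[i] = remaining[i + 1] + w[i]
    solutions = []

    def backtrack(i, chosen, total):
        if total == m:                        # feasible subset found
            solutions.append(list(chosen))
            return
        if i == n or total + remaining[i] < m:
            return                            # dead end: cannot reach m any more
        if total + w[i] <= m:                 # branch 1: include w[i]
            chosen.append(w[i])
            backtrack(i + 1, chosen, total + w[i])
            chosen.pop()
        backtrack(i + 1, chosen, total)       # branch 2: exclude w[i]

    backtrack(0, [], 0)
    return solutions

print(subset_sum([5, 10, 12, 13, 15, 18], 30))
# [[5, 10, 15], [5, 12, 13], [12, 18]]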
e. Write the KMP algorithm for string matching. Perform the KMP algorithm to search for occurrences of the pattern abaab in the text string abbabaabaabab. (10 marks)
Knuth, Morris and Pratt introduced a linear-time algorithm for the string matching problem. A matching time of O(n) is achieved by avoiding comparisons with elements of 'S' that have previously been involved in a comparison with some element of the pattern 'p' to be matched, i.e., backtracking on the string 'S' never occurs.
1. The Prefix Function (Π): The prefix function Π for a pattern encapsulates knowledge about how the pattern matches against shifts of itself. This information can be used to avoid useless shifts of the pattern 'p'. In other words, it enables avoiding backtracking on the string 'S'.
2. The KMP Matcher: With the string 'S', the pattern 'p' and the prefix function 'Π' as inputs, it finds the occurrences of 'p' in 'S' and returns the shifts of 'p' at which occurrences are found.
In the pseudocode for computing the prefix function, the main loop runs m times and the remaining steps take constant time (the inner adjustment is amortized constant per iteration). Hence the running time of computing the prefix function is O(m). A runnable sketch of the prefix function and the KMP matcher, applied to the given pattern and text, is shown below.
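A minimal Python sketch (function names are mine; the matcher reports 0-based shifts):

def prefix_function(p):
    """pi[q] = length of the longest proper prefix of p[:q+1] that is also its suffix."""
    m = len(p)
    pi = [0] * m
    k = 0
    for q in range(1, m):
        while k > 0 and p[k] != p[q]:
            k = pi[k - 1]               # fall back to the next shorter border
        if p[k] == p[q]:
            k += 1
        pi[q] = k
    return pi

def kmp_matcher(t, p):
    """Return the 0-based shifts at which p occurs in t, in O(n + m) time."""
    pi = prefix_function(p)
    shifts, q = [], 0
    for i, c in enumerate(t):
        while q > 0 and p[q] != c:
            q = pi[q - 1]               # the text pointer never moves backwards
        if p[q] == c:
            q += 1
        if q == len(p):
            shifts.append(i - len(p) + 1)
            q = pi[q - 1]
    return shifts

print(prefix_function("abaab"))                # [0, 0, 1, 1, 2]
print(kmp_matcher("abbabaabaabab", "abaab"))   # occurrences at shifts [3, 6]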
SECTION C
3. Attempt any one part of the following:
Insertion sort is one of the simplest sorting algorithms, because it places a single element into its correct position at each step. It is not the best sorting algorithm in terms of performance, but it is slightly more efficient than selection sort and bubble sort in practical scenarios. It is an intuitive sorting technique.
Suppose we have a set of cards in our hand, such that we want to arrange
these cards in ascending order. To sort these cards, we have a number of
intuitive ways.
One intuitive approach is to initially hold all of the cards in our left hand and then take cards one after another from the left hand, building a sorted arrangement in the right hand.
Assuming the first card to be already sorted, we will select the next
unsorted card. If the unsorted card is found to be greater than the selected
card, we will simply place it on the right side, else to the left side. At any
stage during this whole process, the left hand will be unsorted, and the
right hand will be sorted.
In the same way, we will sort the rest of the unsorted cards by placing
them in the correct position. At each iteration, the insertion algorithm
places an unsorted element at its right place.
1. for j = 2 to A.length
2.     key = A[j]
3.     // Insert A[j] into the sorted sequence A[1 .. j - 1]
4.     i = j - 1
5.     while i > 0 and A[i] > key
6.         A[i + 1] = A[i]
7.         i = i - 1
8.     A[i + 1] = key
1. We will start by assuming that the very first element of the array is already sorted. We will store the second element in key.
Next, we will compare the key with the first element; if the key is found to be smaller than the first element, we will place the key at the first index. After doing this, we will notice that the first two elements are sorted.
2. Now, we will move on to the third element and compare it with the elements on its left. If it is the smallest element, then we will place the third element at the first index.
Else, if it is greater than the first element and smaller than the second element, then we will place it between the first and second elements. After doing this, we will have our first three elements in sorted order.
3. Similarly, we will sort the rest of the elements and place them in their correct position.
Consider the following example of an unsorted array that we will sort with
the help of the Insertion Sort algorithm.
Initially,
1st Iteration:
Set key = 22
Compare a1 with a0
2nd Iteration:
Set key = 63
3rd Iteration:
Set key = 14
Since a3 is the smallest among all the elements on the left-hand side, place
a3 at the beginning of the array.
4th Iteration:
Set key = 55
5th Iteration:
Set key = 36
Since a5 < a2, we will place the elements in their correct positions.
Logic: If we are given n elements, then in the first pass insertion sort makes n-1 comparisons; in the second pass, n-2; in the third pass, n-3; and so on. Thus, the total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2,
i.e., O(n²).
Time Complexities:
o Best Case Complexity: O(n), which occurs when the array is already sorted in ascending order.
o Average Case Complexity: O(n²), which occurs when the array elements are in jumbled order, i.e., neither in ascending order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n²), which occurs when the array is sorted in descending order and we sort it into ascending order.
In this algorithm, every individual element is compared with the elements of the sorted part to its left, due to which up to n-1 comparisons are made for the nth element.
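For completeness, the pseudocode above translated into a runnable Python sketch (the sample array is mine; the array used in the original figures is not fully shown):

def insertion_sort(a):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:    # shift larger elements one slot to the right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                  # drop the key into its correct position
    return a

print(insertion_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]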
1. function merge(a, b)
2.     if a.root.key <= b.root.key
3.         return a.add(b)
4.     else
5.         return b.add(a)
To perform the union of two binomial heaps, we have to consider the cases below (x is the current root and next[x] is the root after it):
Case 1: If degree[x] ≠ degree[next[x]], move the pointers ahead.
Case 2: If degree[x] = degree[next[x]] = degree[sibling[next[x]]], move the pointers ahead.
Case 3: If degree[x] = degree[next[x]] ≠ degree[sibling[next[x]]] and key[x] ≤ key[next[x]], remove next[x] from the root list and attach it to x.
Case 4: If degree[x] = degree[next[x]] ≠ degree[sibling[next[x]]] and key[x] > key[next[x]], remove x from the root list and attach it to next[x].
We can see that there are two binomial heaps, so first we have to combine them. To combine the heaps, we need to arrange their binomial trees in increasing order of degree.
In the above heap, first, the pointer x points to the node 12 of degree B0, and the pointer next[x] points to the node 18 of degree B0. Node 7 of degree B1 is the sibling of 18; therefore, it is represented as sibling[next[x]].
Now, first apply Case1 that says 'if degree[x] ≠ degree[next x] then move
pointer ahead' but in the above example, the degree[x] = degree[next[x]],
so this case is not valid.
degree[next[x]] = B1, degree[sibling[next[x]]] = B1
Now we will reapply the cases in the above binomial heap. First, we will apply Case 1. Since x is pointing to node 12 and next[x] is pointing to node 7, the degree of x is equal to the degree of next[x]; therefore, Case 1 is not valid.
Now, let's try to apply Case 3. Here, the first condition of Case 3 is satisfied, as degree[x] = degree[next[x]] ≠ degree[sibling[next[x]]], but the second condition (key[x] ≤ key[next[x]]) of Case 3 is not satisfied.
Now, let's try to apply Case 4. The first condition of Case 4 is satisfied and the second condition (key[x] > key[next[x]]) is also satisfied. Therefore, remove x from the root list and attach it to next[x].
Now, the pointer x points to node 3, next[x] points to node 15, and
sibling[next[x]] points to the node 6. Since, the degree of x is equal to the
degree of next[x] but not equal to the degree[sibling[next[x]]], and the key
value of x is less than the key value of next[x], so we have to remove
next[x] and attach it to x as shown below -
Now, x points to the node 3, and next[x] points to node 6. Since the degree of x and next[x] is not equal, Case 1 is valid. Therefore, move the pointer ahead. Now, the pointer x points to the node 6. B4 is the last binomial tree in the heap, so the loop terminates. The above tree is the final tree after the union of the two binomial heaps.
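The figures of the intermediate heaps are not reproduced here. As an illustrative sketch (the node layout and the key values are my assumptions), the CLRS-style union driven by the four cases described above can be written as:

class Node:
    """Binomial tree node: key, degree, leftmost child, next sibling, parent."""
    def __init__(self, key):
        self.key = key
        self.degree = 0
        self.child = None
        self.sibling = None
        self.parent = None

def link(y, z):
    """Make y (the root with the larger key) the leftmost child of z."""
    y.parent = z
    y.sibling = z.child
    z.child = y
    z.degree += 1

def merge_root_lists(h1, h2):
    """Merge two root lists that are sorted by degree into one sorted root list."""
    dummy = Node(None)
    tail = dummy
    while h1 and h2:
        if h1.degree <= h2.degree:
            tail.sibling, h1 = h1, h1.sibling
        else:
            tail.sibling, h2 = h2, h2.sibling
        tail = tail.sibling
    tail.sibling = h1 if h1 else h2
    return dummy.sibling

def union(h1, h2):
    """Union of two binomial heaps given by their root-list heads."""
    head = merge_root_lists(h1, h2)
    if head is None:
        return None
    prev, x, nxt = None, head, head.sibling
    while nxt is not None:
        if x.degree != nxt.degree or (nxt.sibling and nxt.sibling.degree == x.degree):
            prev, x = x, nxt                  # Cases 1 and 2: just move the pointers ahead
        elif x.key <= nxt.key:
            x.sibling = nxt.sibling           # Case 3: next[x] becomes a child of x
            link(nxt, x)
        else:
            if prev is None:                  # Case 4: x becomes a child of next[x]
                head = nxt
            else:
                prev.sibling = nxt
            link(x, nxt)
            x = nxt
        nxt = x.sibling
    return head

# Build a heap by repeatedly unioning single-node heaps (keys chosen arbitrarily).
heap = None
for k in [12, 7, 25, 15, 28, 33, 41]:
    heap = union(heap, Node(k))
roots, x = [], heap
while x:
    roots.append((x.key, x.degree))
    x = x.sibling
print(roots)   # [(41, 0), (28, 1), (7, 2)] : one B0, one B1 and one B2 tree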
Time Complexity:
A binomial heap is a collection of binomial trees, and every binomial tree satisfies the min-heap property, which means that its root node contains a minimum value. Therefore, we only have to compare the root nodes of all the binomial trees to find the minimum key. The time complexity of finding the minimum key in a binomial heap is O(log n).
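A small sketch of that minimum-finding step, assuming a root-list representation (the key values below are illustrative only):

class RootNode:
    """Just enough of a binomial-tree root for this illustration: a key and the next root."""
    def __init__(self, key, sibling=None):
        self.key = key
        self.sibling = sibling

def find_minimum(head):
    """Scan the root list only; each root already holds the minimum of its own tree."""
    best, x = None, head
    while x is not None:
        if best is None or x.key < best.key:
            best = x
        x = x.sibling
    return best

# A root list with three roots (standing in for B0, B1 and B2 trees; children omitted).
head = RootNode(15, RootNode(7, RootNode(12)))
print(find_minimum(head).key)   # 7

There are at most ⌊log n⌋ + 1 roots in a binomial heap of n keys, which is where the O(log n) bound quoted above comes from.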
Dijkstra's Algorithm:
Step 1: First, we will mark the source node with a current distance of 0 and
set the rest of the nodes to INFINITY.
Step 2: We will then set the unvisited node with the smallest current
distance as the current node, suppose X.
Step 3: For each neighbor N of the current node X, we will add the current distance of X to the weight of the edge joining X and N. If it is smaller than the current distance of N, set it as the new current distance of N (relaxation).
Step 4: We will then mark the current node X as visited.
Step 5: We will repeat the process from 'Step 2' if there is any unvisited node left in the graph.
Let us now understand the implementation of the algorithm with the help
of an example:
We will use the above graph as the input, with node A as the source. First, we will mark all the nodes as unvisited. We will set the path to 0 at node A and INFINITY for all the other nodes. We will now mark source node A as visited and access its neighboring nodes.
Note: We have only accessed the neighboring nodes, not visited them.
We will now update the path to node B to 4 with the help of relaxation, because the path to node A is 0, the path from node A to B is 4, and the minimum((0 + 4), INFINITY) is 4. We will also update the path to node C to 5 with the help of relaxation, because the path to node A is 0, the path from node A to C is 5, and the minimum((0 + 5), INFINITY) is 5. Both the neighbors of node A are now relaxed; therefore, we can move ahead.
We will now select the next unvisited node with the least path and visit it. Hence, we will visit node B and perform relaxation on its unvisited neighbors. After performing relaxation, the path to node C will remain 5, whereas the path to node E will become 11 and the path to node D will become 13.
Next, we will visit node C (the unvisited node with the least path, 5) and relax its neighbor E, so the path to node E improves to 8 (5 + 3).
We will now visit node E and perform relaxation on its neighboring nodes B, D, and F. Since only node F is unvisited, it will be relaxed. Thus, the path to node B will remain as it is, i.e., 4, the path to node D will also remain 13, and the path to node F will become 14 (8 + 6).
Now we will visit node D, and only node F will be relaxed. However, the path to node F will remain unchanged, i.e., 14.
Since only node F is remaining, we will visit it but not perform any relaxation, as all its neighboring nodes are already visited.
Once all the nodes of the graph are visited, the program will end. The final paths we concluded are:
1. A = 0
2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)
6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)
Pseudocode:
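The pseudocode image from the original answer copy is not reproduced here. Below is a hedged Python sketch of Dijkstra's algorithm using a binary heap; the edge weights in the graph literal are my assumptions, chosen only to be consistent with the distances quoted above (the original figure is not shown):

import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm; graph maps each node to a list of (neighbor, weight) pairs."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    visited = set()
    pq = [(0, source)]                   # min-heap of (distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:          # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Assumed edge weights (undirected), consistent with the final paths listed above.
graph = {
    "A": [("B", 4), ("C", 5)],
    "B": [("A", 4), ("C", 11), ("D", 9), ("E", 7)],
    "C": [("A", 5), ("B", 11), ("E", 3)],
    "D": [("B", 9), ("E", 13), ("F", 2)],
    "E": [("B", 7), ("C", 3), ("D", 13), ("F", 6)],
    "F": [("D", 2), ("E", 6)],
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 4, 'C': 5, 'D': 13, 'E': 8, 'F': 14}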
a. What is the travelling salesman problem (TSP)? Find the solution of the following TSP using dynamic programming. (10 marks)
Cost matrix (cities 1-4):
      1    2    3    4
1     0    1   15    6
2     2    0    7    3
3     9    6    0   12
4    10    4    8    0
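The travelling salesman problem asks for a minimum-cost tour that starts at one city, visits every other city exactly once, and returns to the starting city. The dynamic-programming table from the original answer copy is not reproduced here; as an illustrative sketch (function name mine), the Held-Karp formulation can be coded as follows. For the matrix above it reports a minimum tour cost of 21, corresponding to the tour 1 -> 2 -> 4 -> 3 -> 1:

from itertools import combinations

def tsp_dp(cost):
    """Held-Karp dynamic programming over subsets; city 0 is the fixed start and end."""
    n = len(cost)
    # dp[(S, j)] = cheapest path that starts at 0, visits every city in S, and ends at j
    dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + cost[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

cost = [
    [0, 1, 15, 6],
    [2, 0, 7, 3],
    [9, 6, 0, 12],
    [10, 4, 8, 0],
]
print(tsp_dp(cost))   # 21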
b. Discuss the n-queens problem. Solve the 4-queens problem using the backtracking method. (10 marks)
The n-queens problem is to place n queens on an n x n chessboard so that no two queens attack each other, i.e., no two queens share the same row, column, or diagonal. For n = 4, backtracking proceeds as follows.
Now, we place queen q1 in the very first acceptable position (1, 1). Next,
we put queen q2 so that both these queens do not attack each other. We
find that if we place q2 in column 1 or 2, then a dead end is
encountered. Thus the first acceptable position for q2 is column 3, i.e. (2,
3), but then no position is left for placing queen 'q3' safely. So we backtrack
one step and place the queen 'q2' in (2, 4), the next best possible solution.
Then we obtain the position for placing 'q3' which is (3, 2). But later this
position also leads to a dead end, and no place is found where 'q4' can be
placed safely. Then we have to backtrack till 'q1' and place it to (1, 2) and
then all other queens are placed safely by moving q2 to (2, 4), q3 to (3, 1)
and q4 to (4, 3). That is, we get the solution (2, 4, 1, 3). This is one possible
solution for the 4-queens problem. For another possible solution, the whole method is repeated for all partial solutions. The other solution for the 4-queens problem is (3, 1, 4, 2).
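A backtracking sketch in Python that finds both placements (column numbers are reported 1-based; the board figures are not reproduced here):

def solve_n_queens(n):
    """Place one queen per row; sets track occupied columns and the two diagonal directions."""
    solutions, placement = [], []
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        if row == n:
            solutions.append([c + 1 for c in placement])     # 1-based column numbers
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                                      # square is under attack
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            place(row + 1)                                    # try to extend; backtrack on return
            placement.pop()
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)

    place(0)
    return solutions

print(solve_n_queens(4))   # [[2, 4, 1, 3], [3, 1, 4, 2]]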
S1 = {4,1,3}, Cost(S1) = 5
S2 = {2,5}, Cost(S2) = 10
S3 = {1,4,3,2}, Cost(S3) = 3
There are two possible set covers {S1, S2} with cost 15
and {S2, S3} with cost 13.
Why is it useful?
It was one of Karp's NP-complete problems, shown to be so in 1972.
Other applications: edge covering, vertex cover.
Interesting example: IBM finds computer viruses (Wikipedia).
Elements - 5000 known viruses.
Sets - 9000 substrings of 20 or more consecutive bytes from viruses, not found in 'good' code.
A set cover of 180 was found. It suffices to search for these 180 substrings to verify the existence of known computer viruses.
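The greedy approximation picks, at each step, the set with the lowest cost per newly covered element. A hedged Python sketch for the instance above (the universe {1, 2, 3, 4, 5} is my assumption, inferred from the listed sets):

def greedy_set_cover(universe, sets, costs):
    """Greedy approximation: repeatedly choose the set with minimum cost per new element."""
    uncovered = set(universe)
    chosen, total = [], 0
    while uncovered:
        best = min(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: costs[name] / len(sets[name] & uncovered),
        )
        uncovered -= sets[best]
        chosen.append(best)
        total += costs[best]
    return chosen, total

universe = {1, 2, 3, 4, 5}
sets = {"S1": {4, 1, 3}, "S2": {2, 5}, "S3": {1, 4, 3, 2}}
costs = {"S1": 5, "S2": 10, "S3": 3}
print(greedy_set_cover(universe, sets, costs))   # (['S3', 'S2'], 13)

For this instance the greedy choice happens to find the cheaper of the two covers listed above.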