
Printed Page: 1 of 2

Subject Code: KCS503


Roll No:

B. TECH.
(SEM-V) THEORY EXAMINATION 2020-21
DESIGN AND ANALYSIS OF ALGORITHM
Time: 3 Hours Total Marks: 100
Note: 1. Attempt all Sections. If any data is missing, choose it suitably.
SECTION A
1. Attempt all questions in brief. 2 x 10 = 20

Qno. Question Marks CO
a. What is recurrence relation? How is a recurrence solved using 2
master’s theorem?

A recurrence relation is an equation which defines a sequence based on
a rule relating each term to the preceding terms. If we know the previous
terms in a given series, then we can determine the next term.

Master's Theorem:

Master's Theorem is a popular method for solving recurrence relations of
the form

T(n) = a T(n/b) + θ(n^k log^p n), where a ≥ 1, b > 1, k ≥ 0 and p is a real number.

Master Theorem Cases-

To solve recurrence relations using Master's theorem, we compare a with
b^k. Then, we follow the following cases-

Case-01:

If a > b^k, then T(n) = θ(n^(log_b a))

Case-02:

If a = b^k, then
 If p < -1, then T(n) = θ(n^(log_b a))
 If p = -1, then T(n) = θ(n^(log_b a) . log log n)
 If p > -1, then T(n) = θ(n^(log_b a) . log^(p+1) n)

Case-03:

If a < b^k, then
 If p < 0, then T(n) = O(n^k)
 If p >= 0, then T(n) = θ(n^k log^p n)
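
As a quick worked example of Case-02 (with p = 0), the merge sort recurrence can be read off directly:

```latex
T(n) = 2T(n/2) + n:\quad a = 2,\ b = 2,\ k = 1,\ p = 0
\;\Rightarrow\; a = b^k \text{ and } p > -1
\;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_2 2}\log^{p+1} n\right) = \Theta(n\log n)
```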

b. What is asymptotic notation? Explain Omega (Ω) notation? 2


Asymptotic Notation is used to describe the running time of an algorithm -
how much time an algorithm takes with a given input, n. There are three
different notations: big O, big Theta (Θ), and big Omega (Ω).

Omega (Ω) Notation:
The function f is said to be Ω(g) if there are a constant c > 0 and a natural
number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

c. Write down the properties of binomial tree. 2


A Binomial Tree of order 0 has 1 node. A Binomial Tree of order k can
be constructed by taking two binomial trees of order k-1 and making one
the leftmost child of the other. A Binomial Tree of order k has the
following properties.
 It has exactly 2^k nodes.
 It has depth k.
 There are exactly C(k, i) ("k choose i") nodes at depth i for i = 0, 1, . . . , k.
 The root has degree k and the children of the root are themselves
Binomial Trees with order k-1, k-2, ..., 0 from left to right.

d. Differentiate Backtracking algorithm with branch and bound algorithm. 2

Backtracking explores the state space tree depth-first and abandons a node
as soon as it cannot lead to a feasible solution; it is typically used for
decision and enumeration problems (e.g., n-queens). Branch and bound also
prunes the state space tree, but it uses bounding functions on the best
possible solution value and may explore nodes in BFS, DFS, or best-first
order; it is used for optimization problems (e.g., TSP, 0/1 knapsack).

e. Solve the recurrence T (n) = 4T(n/2) + n^2 2

Here a = 4, b = 2, k = 2 and n^(log_2 4) = n^2. Since a = b^k, Case-02 of
the Master theorem applies, so T(n) = θ(n^2 log n).

f. Explain Fast Fourier Transform in brief. 2


The Fast Fourier Transform (FFT) is an algorithm that computes the Discrete
Fourier Transform (DFT) of an n-length vector in O(n log n) time. In the
FFT algorithm, we apply the divide and conquer approach to polynomial
evaluation by observing that if n is even, we can split a degree (n – 1)
polynomial into its even-index and odd-index terms.
Application of Fast Fourier Transform :
1. Signal processing.
2. Image processing.
3. Fast multiplication of large integers.
4. Solving Poisson’s equation nearly optimally.
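
A minimal recursive (Cooley-Tukey) sketch of this divide and conquer step; a production implementation would normally be iterative and in-place:

```python
import cmath

def fft(a):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of 2."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2])          # evaluate the even-index polynomial
    odd = fft(a[1::2])           # evaluate the odd-index polynomial
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)   # n-th root of unity
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

print(fft([1, 2, 3, 4]))   # [10, -2+2j, -2, -2-2j] (as complex numbers)
```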

g. Write an algorithm for naive string matcher? 2

The naïve approach tests all possible placements of pattern P[1...m]
relative to text T[1...n]. We try shifts s = 0, 1, ..., n-m successively, and
for each shift s we compare T[s+1...s+m] to P[1...m].

The naïve algorithm finds all valid shifts using a loop that checks the
condition P [1.......m] = T [s+1.......s+m] for each of the n - m +1 possible
value of s.

NAIVE-STRING-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. for s ← 0 to n -m
4. do if P [1.....m] = T [s + 1....s + m]
5. then print "Pattern occurs with shift" s

Analysis: The for loop from lines 3 to 5 executes n - m + 1 times (we need at
least m characters at the end), and each iteration does up to m
comparisons. So the total complexity is O((n - m + 1)·m).
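
A runnable sketch of the matcher above (Python, 0-based shifts; the example text is illustrative):

```python
def naive_string_matcher(text, pattern):
    """Return every 0-based shift at which pattern occurs in text."""
    n, m = len(text), len(pattern)
    shifts = []
    for s in range(n - m + 1):          # try every shift s = 0 .. n-m
        if text[s:s + m] == pattern:    # compare P[1..m] with T[s+1..s+m]
            shifts.append(s)
    return shifts

print(naive_string_matcher("acaabc", "aab"))   # [2]
```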

1|Page
Printed Page: 4 of 2

h. Explain searching technique using divide and conquer approach. 2

The Divide and Conquer approach solves a problem using the following
three steps.

1. Divide the original problem into a set of subproblems.
2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the
solution to the whole problem.

The searching technique that uses the divide and conquer approach is
binary search.

Binary Search: The binary search algorithm, also called half-interval
search or logarithmic search, works by comparing the target value with
the middle element of a sorted array. If the values differ, the half that
cannot contain the target is eliminated, and the search continues on the
other half: we again take the middle element of that half and compare it
with the target value. The process keeps repeating until the target value
is found. If the remaining half becomes empty, we conclude that the
target is not present in the array.
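
The search described above can be sketched iteratively as:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1      # target can only be in the right half
        else:
            hi = mid - 1      # target can only be in the left half
    return -1                 # remaining half is empty: not present

print(binary_search([1, 3, 5, 7, 9], 7))   # 3
```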

i. Explain Skip list in brief. 2

The skip list stores a sorted list of elements using a hierarchy of linked
lists, which allows the elements to be searched efficiently. In a single
step it can skip over several elements of the list, which is why it is
known as a skip list.
j. Explain how algorithms performance is analyzed? 2
Analysis of algorithms is the determination of the amount of time and
space resources required to execute an algorithm. Usually, the efficiency or
running time of an algorithm is stated as a function relating the input
length to the number of steps, known as time complexity, or to the volume
of memory, known as space complexity. Asymptotic analysis is a way of
analysing the behaviour of an algorithm as the size of the input grows
indefinitely. The asymptotic analysis allows you to determine the best,
worst, and average case performance of an algorithm, but it does not
provide information about the algorithm's performance on specific inputs.
The main concern of the analysis of algorithms is the required time or
performance. Generally, we perform the following types of analysis −
Worst-case − The maximum number of steps taken on any instance of
size n.
Best-case − The minimum number of steps taken on any instance of size n.
Average case − The average number of steps taken over instances of
size n.

SECTION B
2. Attempt any three of the following:

Qno. Question Marks CO
a. Write an algorithm for counting sort? Illustrate the operation of counting sort on the 10
following array: A={4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3}
Counting sort performs sorting by counting objects having distinct key values, like hashing.
After that, it performs some arithmetic operations to calculate each object's index position in
the output sequence. Counting sort is not used as a general-purpose sorting algorithm.
Counting sort is effective when the range of keys is not much greater than the number of
objects to be sorted. By offsetting the keys, it can also be extended to sort negative input
values.

Algorithm

Counting Sort(array, n) // 'n' is the size of array


max = find maximum element in the given array
create count array with size maximum + 1
Initialize count array with all 0's
for i = 0 to n
find the count of every unique element and
store that count at ith position in the count array
for j = 1 to max
Now, find the cumulative sum and store it in count array
for i = n to 1
Restore the array elements
Decrease the count of every restored element by 1
end countingSort

Time Complexity:

Best Case O(n + k)

Average Case O(n + k)

Worst Case O(n + k)
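
A runnable sketch of the pseudocode above, applied to the array from the question:

```python
def counting_sort(a):
    """Stable counting sort for non-negative integer keys."""
    if not a:
        return []
    k = max(a)
    count = [0] * (k + 1)
    for x in a:                 # count occurrences of each key
        count[x] += 1
    for i in range(1, k + 1):   # cumulative sums: count[i] = # elements <= i
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):       # place elements from the back to keep stability
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3]))
# [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5]
```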

b. Show the results of inserting the keys F, S, Q, K, C, L, H, T, V, W, M, R, N, P, A, B, X, 10
Y, D, Z, E in order into an empty B-tree. Use t=3, where t is the minimum degree of the B-tree.


c. Discuss greedy approach to an activity selection problem of scheduling several 10
competing activities. Solve following activity selection problem S = {A1, A2, A3, A4,
A5, A6, A7, A8, A9, A10}
Si = {1, 2, 3, 4, 7, 8, 9, 9, 11, 12} Fi = {3, 5, 4, 7, 10, 9, 11, 13, 12, 14}

The activity selection problem is a mathematical optimization problem. Our first illustration
is the problem of scheduling a resource among several competing activities. A greedy
algorithm provides a well-designed and simple method for selecting a maximum-size set of
mutually compatible activities.

Suppose S = {1, 2, ..., n} is the set of n proposed activities. The activities share a resource
which can be used by only one activity at a time, e.g., a tennis court, a lecture hall, etc. Each
activity i has a start time si and a finish time fi, where si ≤ fi. If selected, activity i takes
place during the half-open time interval [si, fi). Activities i and j are compatible if the
intervals [si, fi) and [sj, fj) do not overlap (i.e., i and j are compatible if si ≥ fj or sj ≥ fi). The
activity-selection problem chooses a maximum-size set of mutually compatible activities.

Algorithm Of Greedy- Activity Selector:

GREEDY-ACTIVITY-SELECTOR (s, f)   // activities assumed sorted by finish time

1. n ← length [s]
2. A ← {1}
3. j ← 1
4. for i ← 2 to n
5. do if si ≥ fj
6. then A ← A ∪ {i}
7. j ← i
8. return A

Example: Given 10 activities along with their start and end time as

S = (A1 A2 A3 A4 A5 A6 A7 A8 A9 A10)
Si = (1,2,3,4,7,8,9,9,11,12)

fi = (3,5,4,7,10,9,11,13,12,14)

Compute a schedule where the greatest number of activities takes place.

Solution: The solution to the above Activity scheduling problem using a greedy strategy is
illustrated below:

Arranging the activities in increasing order of end time


Now, schedule A1

Next schedule A3 as A1 and A3 are non-interfering.

Next skip A2 as it is interfering.

Next, schedule A4 as A1 A3 and A4 are non-interfering, then next, schedule A6 as


A1 A3 A4 and A6 are non-interfering.

Skip A5 as it is interfering.

Next, schedule A7 as A1 A3 A4 A6 and A7 are non-interfering.

Next, schedule A9 as A1 A3 A4 A6 A7 and A9 are non-interfering.

Skip A8 as it is interfering.

Next, schedule A10 as A1 A3 A4 A6 A7 A9 and A10 are non-interfering.

Thus the final Activity schedule is: {A1, A3, A4, A6, A7, A9, A10}.
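
The greedy selector above can be sketched and checked against this instance (the code sorts by finish time first, since the given Fi is not sorted):

```python
def activity_selection(start, finish):
    """Greedy activity selection: sort by finish time, then take every
    activity compatible with the last one chosen."""
    order = sorted(range(len(start)), key=lambda i: finish[i])
    chosen, last_finish = [], 0
    for i in order:
        if start[i] >= last_finish:     # compatible with last chosen activity
            chosen.append(i + 1)        # report 1-based activity number
            last_finish = finish[i]
    return chosen

s = [1, 2, 3, 4, 7, 8, 9, 9, 11, 12]
f = [3, 5, 4, 7, 10, 9, 11, 13, 12, 14]
print(activity_selection(s, f))   # [1, 3, 4, 6, 7, 9, 10]
```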

d. What is sum of subset problem? Draw a state space tree for Sum of subset problem using 10
backtracking? Let n=6, m=30 and w [1:6] = {5, 10, 12, 13, 15, 18}
Let S = {S1, ..., Sn} be a set of n positive integers; we have to find a subset whose sum
is equal to a given positive integer m. It is always convenient to sort the set's elements in
ascending order. That is, S1 ≤ S2 ≤ ... ≤ Sn.
Algorithm:
Let, S is a set of elements and m is the expected sum of subsets. Then:
1. Start with an empty set.
2. Add to the subset, the next element from the list.
3. If the subset is having sum m then stop with that subset as solution.
4. If the subset is not feasible or if we have reached the end of the set then backtrack
through the subset until we find the most suitable value.
5. If the subset is feasible then repeat step 2.
6. If we have visited all the elements without finding a suitable subset and if no
backtracking is possible then stop without solution.

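
The backtracking procedure above can be sketched with the standard pruning tests at each node of the state space tree; for w = {5, 10, 12, 13, 15, 18} and m = 30 it finds the three solution subsets:

```python
def sum_of_subsets(w, m):
    """Backtrack over the (include, exclude) state space tree; w sorted ascending."""
    w = sorted(w)
    solutions = []

    def backtrack(i, chosen, s, remaining):
        if s == m:                      # feasible subset found
            solutions.append(list(chosen))
            return
        # prune: no elements left, next element overshoots (so do all later,
        # larger ones), or even taking everything left cannot reach m
        if i == len(w) or s + w[i] > m or s + remaining < m:
            return
        chosen.append(w[i])             # left branch: include w[i]
        backtrack(i + 1, chosen, s + w[i], remaining - w[i])
        chosen.pop()                    # right branch: exclude w[i]
        backtrack(i + 1, chosen, s, remaining - w[i])

    backtrack(0, [], 0, sum(w))
    return solutions

print(sum_of_subsets([5, 10, 12, 13, 15, 18], 30))
# [[5, 10, 15], [5, 12, 13], [12, 18]]
```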

e. Write KMP algorithm for string matching? Perform the KMP algorithm to search the 10
occurrences of the pattern abaab in the text string abbabaabaabab.

Knuth, Morris and Pratt introduced a linear time algorithm for the string matching problem.
A matching time of O(n) is achieved by avoiding comparisons with elements of 'S' that have
previously been involved in a comparison with some element of the pattern 'p' to be
matched, i.e., backtracking on the string 'S' never occurs.

Components of KMP Algorithm:

1. The Prefix Function (Π): The Prefix Function, Π for a pattern encapsulates knowledge
about how the pattern matches against the shift of itself. This information can be used to
avoid a useless shift of the pattern 'p.' In other words, this enables avoiding backtracking of
the string 'S.'

2. The KMP Matcher: With string 'S,' pattern 'p' and prefix function 'Π' as inputs, find the
occurrence of 'p' in 'S' and returns the number of shifts of 'p' after which occurrences are
found.

The Prefix Function (Π)

The following pseudocode computes the prefix function, Π:

COMPUTE- PREFIX- FUNCTION (P)


1. m ←length [P] //'p' pattern to be matched
2. Π [1] ← 0
3. k ← 0
4. for q ← 2 to m
5. do while k > 0 and P [k + 1] ≠ P [q]
6. do k ← Π [k]
7. If P [k + 1] = P [q]
8. then k← k + 1
9. Π [q] ← k
10. Return Π

Running Time Analysis:

In the above pseudo code for calculating the prefix function, the for loop from step 4 to step
10 runs 'm' times. Step1 to Step3 take constant time. Hence the running time of computing
prefix function is O (m).
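
Both components can be rendered as a runnable sketch (0-based indices; `pi` is the prefix function Π shifted to 0-based), applied to the instance from the question:

```python
def compute_prefix(p):
    """pi[q] = length of the longest proper prefix of p that is a suffix of p[:q+1]."""
    pi = [0] * len(p)
    k = 0
    for q in range(1, len(p)):
        while k > 0 and p[k] != p[q]:
            k = pi[k - 1]               # fall back to the next shorter border
        if p[k] == p[q]:
            k += 1
        pi[q] = k
    return pi

def kmp_search(t, p):
    """Return all 0-based shifts where pattern p occurs in text t."""
    pi, shifts, q = compute_prefix(p), [], 0
    for i, c in enumerate(t):
        while q > 0 and p[q] != c:
            q = pi[q - 1]               # reuse previous matches; never back up in t
        if p[q] == c:
            q += 1
        if q == len(p):                 # full match ending at position i
            shifts.append(i - len(p) + 1)
            q = pi[q - 1]
    return shifts

print(kmp_search("abbabaabaabab", "abaab"))   # [3, 6]
```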

SECTION C
3. Attempt any one part of the following:

Qno. Question Marks CO
a. Solve the following recurrence relations: 10
i. T (n) = T (n-1) + n^4
Expanding the recurrence: T(n) = n^4 + (n-1)^4 + ... + 1^4, a sum of n
fourth powers, so T(n) = θ(n^5).
ii. T (n) = T (n/4) + T (n/2) + n^2
By the recursion tree, the root level costs n^2 and each deeper level costs at
most (1/16 + 1/4) = 5/16 of the level above it, so the geometric series is
dominated by the root and T(n) = θ(n^2).
b. Write an algorithm for insertion sort. Find the time complexity of 10
Insertion sort in all cases.

Insertion sort is one of the simplest sorting algorithms, since it inserts a
single element into the sorted portion at each step. It is not the best sorting
algorithm in terms of performance, but it is slightly more efficient
than selection sort and bubble sort in practical scenarios. It is an intuitive
sorting technique.

Let's consider the example of cards to have a better understanding of the


logic behind the insertion sort.

Suppose we have a set of cards in our hand, such that we want to arrange
these cards in ascending order. To sort these cards, we have a number of
intuitive ways.

One such thing we can do is initially we can hold all of the cards in our left
hand, and we can start taking cards one after other from the left hand,
followed by building a sorted arrangement in the right hand.

Assuming the first card to be already sorted, we will select the next
unsorted card. If the unsorted card is found to be greater than the selected
card, we will simply place it on the right side, else to the left side. At any
stage during this whole process, the left hand will be unsorted, and the
right hand will be sorted.

In the same way, we will sort the rest of the unsorted cards by placing
them in the correct position. At each iteration, the insertion algorithm
places an unsorted element at its right place.

ALGORITHM: INSERTION SORT (A)

1. for j = 2 to A.length
2. key = A[j]
3. // Insert A[j] into the sorted sequence A[1.. j - 1]
4. i = j - 1
5. while i > 0 and A[i] > key
6. A[i + 1] = A[i]
7. i = i - 1
8. A[i + 1] = key
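
A 0-based Python rendering of the pseudocode above, applied to the example array:

```python
def insertion_sort(a):
    """In-place insertion sort (0-based version of the pseudocode above)."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # insert key into its correct place
    return a

print(insertion_sort([41, 22, 63, 14, 55, 36]))   # [14, 22, 36, 41, 55, 63]
```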

How Insertion Sort Works

1. We will start by assuming the very first element of the array is already
sorted. Inside the key, we will store the second element.

Next, we will compare our first element with the key, such that if the
key is found to be smaller than the first element, we will interchange their
indexes or place the key at the first index. After doing this, we will notice
that the first two elements are sorted.

2. Now, we will move on to the third element and compare it with the left-
hand side elements. If it is the smallest element, then we will place the
third element at the first index.

Else if it is greater than the first element and smaller than the second
element, then we will interchange its position with the third element and
place it after the first element. After doing this, we will have our first three
elements in a sorted manner.

3. Similarly, we will sort the rest of the elements and place them in their
correct position.

Consider the following example of an unsorted array that we will sort with
the help of the Insertion Sort algorithm.

A = (41, 22, 63, 14, 55, 36)

Initially,

1st Iteration:

Set key = 22

Compare a1 with a0

Since a0 > a1, swap both of them.

2nd Iteration:

Set key = 63

Compare a2 with a1 and a0

Since a2 > a1 > a0, keep the array as it is.

3rd Iteration:

Set key = 14

Compare a3 with a2, a1 and a0

Since a3 is the smallest among all the elements on the left-hand side, place
a3 at the beginning of the array.

4th Iteration:

Set key = 55

Compare a4 with a3, a2, a1 and a0.

As a4 < a3, swap both of them.

5th Iteration:

Set key = 36

Compare a5 with a4, a3, a2, a1 and a0.


Since a5 < a2, so we will place the elements in their correct positions.

Hence the array is arranged in ascending order, so no more swapping is


required.

Complexity Analysis of Insertion Sort

Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass, it will make n-
1 comparisons; in the second pass, it will do n-2; in the third pass, it will
do n-3 and so on. Thus, the total number of comparisons can be found by;

Output:
Sum = (n-1) + (n-2) + (n-3) + (n-4) + ...... + 1 = n(n-1)/2
i.e., O(n^2)

Therefore, the insertion sort algorithm has a time complexity
of O(n^2) and a space complexity of O(1), because it needs only a constant
amount of extra memory (the key variable) to perform the insertions.

Time Complexities:

o Best Case Complexity: The insertion sort algorithm has a best-


case time complexity of O(n) for the already sorted array because
here, only the outer loop is running n times, and the inner loop is
kept still.
o Average Case Complexity: The average-case time complexity for
the insertion sort algorithm is O(n2), which is incurred when the
existing elements are in jumbled order, i.e., neither in the ascending

order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is
also O(n2), which occurs when we sort the ascending order of an
array into the descending order.
In this algorithm, every individual element is compared with the
rest of the elements, due to which n-1 comparisons are made for
every nth element.

4. Attempt any one part of the following:

Qno. Question Marks CO
a. Write an algorithm for insertion of key in the Red-Black Tree. Discuss 10
the various cases for insertion of key in red-black tree for given sequence
of key in an empty red-black tree- 5, 16, 22, 25, 2, 10, 18, 30, 50, 12, 1.
Already Done
b. Explain and write an algorithm for union of two binomial heaps and also 10
write its time complexity?

It is the most important operation performed on the binomial heap.
Merging two trees is done by comparing the keys at their roots: the root
with the larger key becomes the child of the root with the smaller key. The
time complexity for finding a union is O(log n). The function to merge two
trees is given as follows -

1. function merge(a,b)
2. if a.root.key ≤ b.root.key
3. return a.add(b)
4. else
5. return b.add(a)
To perform the union of two binomial heaps, we have to consider the
below cases:

Case 1: If degree[x] is not equal to degree[next x], then move pointer


ahead.

Case 2: if degree[x] = degree[next x] = degree[sibling(next x)] then,Move


the pointer ahead.

Case 3: If degree[x] = degree[next x] but not equal to degree[sibling[next


x]] and key[x] < key[next x] then remove [next x] from root and attached
to x.

Case 4: If degree[x] = degree[next x] but not equal to degree[sibling[next
x]] and key[x] > key[next x], then remove x from the root list and attach it
to next[x].

Example: Consider two binomial heaps –


We can see that there are two binomial heaps, so, first, we have to combine
both heaps. To combine the heaps, first, we need to arrange their binomial
trees in increasing order.


In the above heap first, the pointer x points to the node 12 with degree B0,
and the pointer next[x] points the node 18 with degree B0. Node 7 with
degree B1 is the sibling of 18, therefore, it is represented as
sibling[next[x]].

Now, first apply Case1 that says 'if degree[x] ≠ degree[next x] then move
pointer ahead' but in the above example, the degree[x] = degree[next[x]],
so this case is not valid.

Now, apply Case2 that says 'if degree[x] = degree[next x] =


degree[sibling(next x)] then Move pointer ahead'. So, this case is also
not applied in the above heap.

Now, apply Case3 that says ' If degree[x] = degree[next x] ≠


degree[sibling[next x]] and key[x] < key[next x] then remove [next x]
from root and attached to x'. We will apply this case because the above
heap follows the conditions of case 3 -

degree[x] = degree[next x] ≠ degree[sibling[next x]] {as, B0 = B0¬ ≠ B1}


and key[x] < key[next x] {as 12 < 18}.

So, remove the node 18 and attach it to 12 as shown below -

x = 12, next[x] = 7, sibling[next[x]] = 3, and degree[x] = B1,

degree[next[x]] = B1, degree[sibling[next[x]]] = B1

Now we will reapply the cases in the above binomial heap. First, we will
apply case 1. Since x is pointing to node 12 and next[x] is pointing to node
7, the degree of x is equal to the degree of next x; therefore, case 1 is not
valid.

Here, case 2 is valid as the degree of x, next[x], and sibling[next[x]] is


equal. So, according to the case, we have to move the pointer ahead.

Therefore, x = 7, next[x] = 3, sibling[next[x]] = 15, and degree[x] = B1,


degree[next[x]] = B1, degree[sibling[next[x]]] = B2

Now, let's try to apply case 3, here, first condition of case3 is satisfied as
degree[x] = degree[next[x]] ≠ degree[sibling[next[x]]], but second
condition (key[x] < key[next x]) of case 3 is not satisfied.

Now, let's try to apply case 4. So, first condition of case4 is satisfied and
second condition (key[x] > key[next x]) is also satisfied. Therefore,
remove x from the root and attach it to [next[x]].

Now, the pointer x points to node 3, next[x] points to node 15, and
sibling[next[x]] points to the node 6. Since, the degree of x is equal to the
degree of next[x] but not equal to the degree[sibling[next[x]]], and the key
value of x is less than the key value of next[x], so we have to remove
next[x] and attach it to x as shown below -

Now, x represents to the node 3, and next[x] points to node 6. Since, the

degree of x and next[x] is not equal, so case1 is valid. Therefore, move the
pointer ahead. Now, the pointer x points the node 6. The B4 is the last
binomial tree in a heap, so it leads to the termination of the loop. The
above tree is the final tree after the union of two binomial heaps.

Time Complexity:

A binomial heap with n nodes is a collection of at most O(log n) binomial
trees, each satisfying the min-heap property. The union operation merges
the two root lists (each of length O(log n)) in order of degree and then links
trees of equal degree, doing constant work per root. Therefore, the time
complexity of the union of two binomial heaps is O(log n).

5. Attempt any one part of the following:

Qno. Question Marks CO
a. Define minimum spanning tree (MST). Write Prim’s algorithm to 10
generate a MST for any given weighted graph. Generate MST for the
following graph using Prim’s algorithm.
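
The graph from the question is an image that did not survive extraction, so only the definitions and the algorithm can be given here. A minimum spanning tree (MST) of a connected weighted graph is a subset of edges that connects all vertices with minimum total edge weight. Prim's algorithm grows the tree from one vertex, always adding the cheapest edge reaching a new vertex; a heap-based sketch (the small triangle graph below is illustrative, not the one from the paper):

```python
import heapq

def prim_mst(adj, start=0):
    """Prim's algorithm with a binary heap; adj[u] = list of (v, weight) pairs."""
    visited = [False] * len(adj)
    mst_edges, total = [], 0
    pq = [(0, start, -1)]                # (edge weight, vertex, parent)
    while pq:
        w, u, parent = heapq.heappop(pq)
        if visited[u]:
            continue                     # stale entry: u already in the tree
        visited[u] = True
        if parent != -1:
            mst_edges.append((parent, u, w))
            total += w
        for v, wt in adj[u]:
            if not visited[v]:
                heapq.heappush(pq, (wt, v, u))
    return mst_edges, total

# Illustrative graph: 0-1 (weight 1), 1-2 (weight 2), 0-2 (weight 3)
adj = [[(1, 1), (2, 3)], [(0, 1), (2, 2)], [(0, 3), (1, 2)]]
print(prim_mst(adj))   # MST uses edges (0,1) and (1,2), total weight 3
```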

b. Explain Dijkstra’s algorithm to solve single source shortest path problem 10


with suitable example.
Dijkstra's Algorithm is a graph algorithm that finds the shortest path from
a source vertex to all other vertices in the graph (single source shortest
path). It is a greedy algorithm that only works on weighted
graphs with non-negative weights. The time complexity of Dijkstra's
Algorithm is O(V^2) with the adjacency matrix representation of
the graph. This time complexity can be reduced to O((V + E) log V) with
an adjacency list representation and a min-heap, where V is the
number of vertices and E is the number of edges in the graph.

Dijkstra's Algorithm :

The following is the step that we will follow to implement Dijkstra's


Algorithm:

Step 1: First, we will mark the source node with a current distance of 0 and
set the rest of the nodes to INFINITY.

Step 2: We will then set the unvisited node with the smallest current

distance as the current node, suppose X.

Step 3: For each neighbor N of the current node X: We will then add the
current distance of X with the weight of the edge joining X-N. If it is
smaller than the current distance of N, set it as the new current distance of
N.

Step 4: We will then mark the current node X as visited.

Step 5: We will repeat the process from 'Step 2' if there is any node
unvisited left in the graph.

Let us now understand the implementation of the algorithm with the help
of an example:

We will use the above graph as the input, with node A as the source. First,
we will mark all the nodes as unvisited. We will set the path to 0 at node A
and INFINITY for all the other nodes. We will now mark source node A as
visited and access its neighboring nodes.

Note: We have only accessed the neighboring nodes, not visited them.

We will now update the path to node B to 4 with the help of relaxation,
because the path to node A is 0, the path from node A to B is 4, and
minimum((0 + 4), INFINITY) is 4. We will also update the path to node C
to 5, because the path to node A is 0, the path from node A to C is 5, and
minimum((0 + 5), INFINITY) is 5. Both the neighbors of node A are now
relaxed; therefore, we can move ahead.

We will now select the next unvisited node with the least path and visit it.
Hence, we will visit node B and perform relaxation on its unvisited
neighbors. After performing relaxation, the path to node C will remain 5,
whereas the path to node E will become 11, and the path to node D will
become 13.

Next, we visit node C (path 5) and relax its neighbor E: the path to node E
improves from 11 to 8 (5 + 3).

We will now visit node E and perform relaxation on its neighboring nodes
B, D, and F. Since only node F is unvisited, it will be relaxed. Thus, the
path to node B will remain as it is, i.e., 4, the path to node D will also
remain 13, and the path to node F will become 14 (8 + 6).

Now we will visit node D, and only node F will be relaxed. However, the
path to node F will remain unchanged, i.e., 14. Since only node F is
remaining, we will visit it, but not perform any relaxation, as all its
neighbouring nodes are already visited. Once all the nodes of the graph are
visited, the program will end. The final paths we concluded are:
1. A = 0

2. B = 4 (A -> B)
3. C = 5 (A -> C)
4. D = 4 + 9 = 13 (A -> B -> D)
5. E = 5 + 3 = 8 (A -> C -> E)
6. F = 5 + 3 + 6 = 14 (A -> C -> E -> F)

Pseudocode:

1. function Dijkstra_Algorithm(Graph, source_node)
2.   // set the distance of every node in Graph to INFINITY
3.   for each node N in Graph:
4.     distance[N] = INFINITY
5.     previous[N] = NULL
6.     if N != source_node, add N to Priority Queue G
7.   // set the distance of the source node of the Graph to 0
8.   distance[source_node] = 0
9.
10.  // iterate until the Priority Queue G is empty
11.  while G is NOT empty:
12.    // select the node Q having the least distance and mark it as visited
13.    Q = node in G with the least distance[]
14.    mark Q visited
15.
16.    // iterate through the unvisited neighboring nodes of Q and relax them
17.    for each unvisited neighbor node N of Q:
18.      temporary_distance = distance[Q] + distance_between(Q, N)
19.
20.      // if the temporary distance is less than the current distance of
21.      // the path to N, update the distance with the minimum value
22.      if temporary_distance < distance[N]
23.        distance[N] := temporary_distance
24.        previous[N] := Q
25.
26.  // return the final lists of distances and predecessors
27.  return distance[], previous[]
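
The pseudocode above can be rendered as a short Python sketch using a binary heap. The edge weights below are reconstructed from the walkthrough; the D-F and E-D weights are assumptions, chosen only to be consistent with the final distances obtained:

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a min-heap; graph[u] = {v: weight, ...}."""
    dist = {u: float('inf') for u in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale heap entry, skip
        for v, w in graph[u].items():
            if d + w < dist[v]:           # relaxation step
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Weights reconstructed from the walkthrough (D-F and E-D are assumed).
g = {
    'A': {'B': 4, 'C': 5},
    'B': {'A': 4, 'D': 9, 'E': 7},
    'C': {'A': 5, 'E': 3},
    'D': {'B': 9, 'E': 5, 'F': 2},
    'E': {'B': 7, 'C': 3, 'D': 5, 'F': 6},
    'F': {'D': 2, 'E': 6},
}
print(dijkstra(g, 'A'))
# {'A': 0, 'B': 4, 'C': 5, 'D': 13, 'E': 8, 'F': 14}
```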

6. Attempt any one part of the following:

Qno. Question Marks CO

2|Page
Printed Page: 9 of 2
a. What is travelling salesman problem (TSP)? Find the solution of 10
following TSP using dynamic programming.
0 1 15 6
2 0 7 3
9 6 0 12
10 4 8 0

Given a set of cities and the cost of travel between every pair of them, the
travelling salesman problem asks for the cheapest tour that visits every city
exactly once and returns to the starting city. Using dynamic programming
(the Held-Karp recurrence), the optimal tour for the above cost matrix is
1 → 2 → 4 → 3 → 1 with total cost 1 + 3 + 8 + 9 = 21.
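
A compact dynamic-programming (Held-Karp) sketch for a cost matrix like the one above (city 0 is the fixed start; runs in O(n^2 2^n) time):

```python
from itertools import combinations

def held_karp(cost):
    """dp[(S, j)] = min cost of a path from city 0 through set S, ending at j."""
    n = len(cost)
    dp = {(frozenset([j]), j): cost[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:            # last city visited in S
                dp[(S, j)] = min(dp[(S - {j}, k)] + cost[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(dp[(full, j)] + cost[j][0] for j in range(1, n))

c = [[0, 1, 15, 6],
     [2, 0, 7, 3],
     [9, 6, 0, 12],
     [10, 4, 8, 0]]
print(held_karp(c))   # 21
```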
b. Discuss n queen’s problem. Solve 4 queen’s problem using backtracking 10
method?

N-Queens problem is to place n queens in such a manner on an n x n
chessboard that no two queens attack each other by being in the same row,
column or diagonal. It can be seen that for n = 1 the problem has a trivial
solution, and no solution exists for n = 2 and n = 3. So first we will consider
the 4-queens problem and then generalize it to the n-queens problem. Given
a 4 x 4 chessboard, number the rows and columns of the chessboard 1
through 4.

Since we have to place 4 queens q1, q2, q3 and q4 on the chessboard such
that no two queens attack each other, each queen must be placed in a
different row, i.e., we put queen i on row i.

Now, we place queen q1 in the very first acceptable position (1, 1). Next,
we put queen q2 so that both these queens do not attack each other. We
find that if we place q2 in column 1 and 2, then the dead end is
encountered. Thus the first acceptable position for q2 in column 3, i.e. (2,
3) but then no position is left for placing queen 'q3' safely. So we backtrack
one step and place the queen 'q2' in (2, 4), the next best possible solution.
Then we obtain the position for placing 'q3' which is (3, 2). But later this
position also leads to a dead end, and no place is found where 'q4' can be
placed safely. Then we have to backtrack till 'q1' and place it to (1, 2) and
then all other queens are placed safely by moving q2 to (2, 4), q3 to (3, 1)
and q4 to (4, 3). That is, we get the solution (2, 4, 1, 3). This is one possible
solution for the 4-queens problem. For another possible solution, the whole
method is repeated for all partial solutions. The other solution for the
4-queens problem is (3, 1, 4, 2), i.e.


The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as


follows:
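
The backtracking procedure described above can be sketched as follows (`board[r]` holds the column of the queen in row r; solutions are reported as 1-based column tuples, so the two 4-queens solutions come out as (2, 4, 1, 3) and (3, 1, 4, 2)):

```python
def solve_n_queens(n):
    """Backtracking: place one queen per row, pruning unsafe columns."""
    solutions = []

    def safe(board, r, c):
        for pr, pc in enumerate(board):
            if pc == c or abs(pc - c) == abs(pr - r):   # same column or diagonal
                return False
        return True

    def place(board, r):
        if r == n:
            solutions.append([c + 1 for c in board])    # report 1-based columns
            return
        for c in range(n):
            if safe(board, r, c):
                board.append(c)         # tentatively place queen at (r, c)
                place(board, r + 1)
                board.pop()             # backtrack
    place([], 0)
    return solutions

print(solve_n_queens(4))   # [[2, 4, 1, 3], [3, 1, 4, 2]]
```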


7. Attempt any one part of the following:

Qno. Question Marks CO
a. Write short notes on following: 10
(i.) Randomized algorithm.
(ii.) NP- complete and NP hard.

A randomized algorithm is a different design approach in which a few
random bits are added to a part of the algorithm's logic. Randomized
algorithms differ from deterministic algorithms: a deterministic algorithm
follows a definite procedure and produces the same output every time an
input is passed, whereas a randomized algorithm may produce a different
output each time it is executed. It is important to note that it is not the
input that is randomized, but the logic of the algorithm.

Figure 1: Deterministic Algorithm


Unlike deterministic algorithms, randomized algorithms consider
randomized bits of the logic along with the input that in turn contribute
towards obtaining the output.

Figure 2: Randomized Algorithm


However, the probability of randomized algorithms providing incorrect
output cannot be ruled out either. Hence, the process
called amplification is performed to reduce the likelihood of these
erroneous outputs. Amplification is also an algorithm that is applied to
execute some parts of the randomized algorithms multiple times to
increase the probability of correctness. However, too much amplification
can also exceed the time constraints making the algorithm ineffective.
Classification of Randomized Algorithms
Randomized algorithms are classified based on whether they have time
constraints as the random variable or deterministic values. They are
designed in their two common forms − Las Vegas and Monte Carlo.

 Las Vegas − The Las Vegas method of randomized


algorithms never gives incorrect outputs, making the time
constraint as the random variable. For example, in string
matching algorithms, las vegas algorithms start from the
beginning once they encounter an error. This increases the
probability of correctness. Eg., Randomized Quick Sort
Algorithm.
 Monte Carlo − The Monte Carlo method of randomized
algorithms focuses on finishing the execution within the
given time constraint. Therefore, the running time of this
method is deterministic. For example, in string matching, if
monte carlo encounters an error, it restarts the algorithm
from the same point, thus saving time. Eg., Karger's
Minimum Cut Algorithm
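
As a concrete Las Vegas example, randomized quicksort always returns a correctly sorted list; only its running time depends on the random pivot choices (a minimal, non-in-place sketch):

```python
import random

def randomized_quicksort(a):
    """Las Vegas-style quicksort: random pivot, always-correct output."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)            # the random bits enter here
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))
# [1, 1, 2, 3, 4, 5, 6, 9]  (same output on every run, whatever the pivots)
```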
Need for Randomized Algorithms
This approach is usually adopted to reduce the time complexity and space
complexity. There might be some ambiguity about how adding randomness
can decrease the runtime and memory used instead of increasing them; this
can be understood using game theory.

NP-complete and NP-hard:
A problem is NP-hard if every problem in NP can be reduced to it in
polynomial time; an NP-hard problem need not itself belong to NP. A
problem is NP-complete if it is both in NP and NP-hard; examples include
SAT, vertex cover, and the decision version of the travelling salesman
problem. No polynomial-time algorithm is known for any NP-complete
problem, which is why approximation and randomized techniques are used
for such problems in practice.

b. What is approximation algorithm? Explain set cover problem using 10


approximation algorithm.
An approximation algorithm is a way of dealing with NP-
completeness for an optimization problem. This technique does not
guarantee the best solution. The goal of the approximation algorithm is to
come as close as possible to the optimal solution in polynomial time.
Such algorithms are called approximation algorithms or heuristic
algorithms.
Features of Approximation Algorithm :
Here, we will discuss the features of the Approximation Algorithm as
follows.
 An approximation algorithm guarantees to run in polynomial time
though it does not guarantee the most effective solution.
 An approximation algorithm guarantees to seek out high accuracy
and top quality solution (say within 1% of optimum)
 Approximation algorithms are used to get an answer near the
(optimal) solution of an optimization problem in polynomial time.
Given a universe U of n elements, a collection of subsets of U say S =
{S1, S2…,Sm} where every subset Si has an associated cost. Find a
minimum cost subcollection of S that covers all elements of U.
Example:
U = {1,2,3,4,5}
S = {S1,S2,S3}

S1 = {4,1,3}, Cost(S1) = 5
S2 = {2,5}, Cost(S2) = 10
S3 = {1,4,3,2}, Cost(S3) = 3

Output: Minimum cost of set cover is 13 and


set cover is {S2, S3}

There are two possible set covers {S1, S2} with cost 15
and {S2, S3} with cost 13.
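
The standard greedy approximation (repeatedly pick the set with the smallest cost per newly covered element, which gives a logarithmic-factor guarantee) can be sketched as follows; on the example above it recovers the cover {S3, S2} of cost 13:

```python
def greedy_set_cover(universe, subsets, costs):
    """Greedy set cover: at each step pick the set minimizing
    cost / (number of still-uncovered elements it contains)."""
    uncovered = set(universe)
    chosen, total = [], 0
    while uncovered:
        best = min((s for s in subsets if subsets[s] & uncovered),
                   key=lambda s: costs[s] / len(subsets[s] & uncovered))
        chosen.append(best)
        total += costs[best]
        uncovered -= subsets[best]      # mark newly covered elements
    return chosen, total

U = {1, 2, 3, 4, 5}
S = {'S1': {4, 1, 3}, 'S2': {2, 5}, 'S3': {1, 4, 3, 2}}
cost = {'S1': 5, 'S2': 10, 'S3': 3}
print(greedy_set_cover(U, S, cost))   # (['S3', 'S2'], 13)
```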
Why is it useful?
It was one of Karp's NP-complete problems, shown to be so in 1972.
Other applications: edge covering, vertex cover.
Interesting example: IBM finds computer viruses (Wikipedia).
Elements - 5000 known viruses.
Sets - 9000 substrings of 20 or more consecutive bytes from viruses, not
found in 'good' code.
A set cover of 180 was found. It suffices to search for these 180 substrings
to verify the existence of known computer viruses.
