B.E. Semester 5
(Computer Engineering)
230133107013
Government Engineering College, Gandhinagar
Certificate
This is to certify that Mr./Ms. DHRUV MODI, Enrollment No. 230133107013, of B.E. Semester 5 Computer Engineering of this Institute (GTU Code: 3150703) has satisfactorily completed the Practical / Tutorial work for the subject Analysis and Design of Algorithms (3150703) for the academic year 2022-23.
Place: __________
Date: __________
Analysis and Design of Algorithms (3150703)
Preface
The main motto of any laboratory/practical/field work is to enhance required skills as well as to create the ability amongst students to solve real-world problems by developing relevant competencies in the psychomotor domain. Keeping this in view, GTU has designed a competency-focused, outcome-based curriculum for engineering degree programs in which sufficient weightage is given to practical work. This reflects the importance of skill enhancement amongst students, and it pays attention to utilizing every second of the time allotted for practicals so that students, instructors and faculty members achieve the relevant outcomes by performing the experiments rather than merely studying them. For effective implementation of a competency-focused, outcome-based curriculum, it is essential that every practical is keenly designed to serve as a tool to develop and enhance the relevant competency required by industry in every student. These psychomotor skills are very difficult to develop through the traditional chalk-and-board content delivery method in the classroom. Accordingly, this lab manual is designed to focus on industry-defined, relevant outcomes rather than the old practice of conducting practicals merely to prove concepts and theory.
By using this lab manual, students can go through the relevant theory and procedure in advance before the actual performance, which creates interest and gives students a basic idea prior to performance. This in turn enhances the predetermined outcomes amongst students. Each experiment in this manual begins with the competency, industry-relevant skills, course outcomes and practical outcomes (objectives). The students will also learn the safety measures and necessary precautions to be taken while performing the practical.
This manual also provides guidelines to faculty members to facilitate student-centric lab activities through each experiment by arranging and managing the necessary resources, so that the students follow the procedures with the required safety and necessary precautions to achieve the outcomes. It also gives an idea of how students will be assessed by providing rubrics.
Algorithms are an integral part of computer science and play a vital role in solving complex problems efficiently. The goal of this subject is to equip students with the knowledge and skills required to design and analyze algorithms for various applications. Designing an algorithm is important before implementing any program or solving any problem. Analysis and Design of Algorithms is essential for efficient problem-solving, optimizing resource utilization, developing new technologies, and gaining a competitive advantage. This lab manual is designed to help you learn algorithms by doing. Each experiment is structured to provide you with step-by-step instructions on how to analyze and design a particular algorithm for a specific problem. You will learn how to analyze various algorithms and decide which is the most efficient in terms of time complexity. By the end of this lab, you will have a solid understanding of algorithm design and analysis.
Utmost care has been taken while preparing this lab manual; however, there is always scope for improvement. Therefore, we welcome constructive suggestions for improvement and the removal of errors, if any.
Analysis and Design of Algorithms (3150703)
The following industry relevant competencies are expected to be developed in the student by undertaking the practical work of this laboratory.
1. Expertise in algorithm analysis
2. Ability to judge the best algorithm among various alternatives
3. Ability to solve complex problems
4. Ability to design efficient algorithms for given problems
Sr. No. | Objective(s) of Experiment | Page No. | Date of Performance | Date of Submission | Assessment Marks | Sign. of Teacher with Date | Remarks
1. Implement a function for each of the following problems and count the number of steps executed/time taken by each function on various inputs (100 to 500) and write the time complexity of each function. Also draw a comparative chart of number of inputs versus steps executed/time taken. In each of the following functions, N will be passed by the user.
1. To calculate sum of 1 to N numbers using loop.
2. To calculate sum of 1 to N numbers using equation.
3. To calculate sum of 1 to N numbers using recursion.
2. Write user defined functions for the following sorting methods and compare their performance by steps executed/time taken for execution on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
3. Implement a function of sequential search and count the steps executed by the function on various inputs (1000 to 5000) for best case, average case and worst case. Also, write the time complexity in each case and draw a comparative chart of number of inputs versus steps executed by sequential search for each case.
Compare the performance of linear search and
4. binary search for Best case, Average case and
Worst case inputs.
5. Implement functions to print nth Fibonacci number using iterative and recursive methods. Compare the performance of the two methods by counting the number of steps executed on various inputs. Also, draw a comparative chart.
Experiment No: 1
Implement a function for each of the following problems and count the number of steps executed/time taken by each function on various inputs (100 to 500): sum of 1 to N using a loop, using an equation, and using recursion.
Date:
Theory:
Implement three functions based on the above steps and calculate the number of steps executed by each function on various inputs ranging from 100 to 500. Take a counter variable to count the steps and increment it for each statement in the function.
Code: Using loop
public class prac_1 {
    public static int sumUsingLoop(int N) {
        int sum = 0;
        int counter = 0;
        counter++;                   // count the loop initialization
        for (int i = 1; i <= N; i++) {
            counter++;               // count the loop test
            sum += i;
            counter++;               // count the addition
        }
        counter++;                   // count the final loop exit
        System.out.println("Steps in sumUsingLoop: " + counter);
        return sum;
    }
Using equation
    public static int sumUsingEquation(int N) {
        int counter = 0;
        int sum = N * (N + 1) / 2;
        counter += 3;                // addition, multiplication and division
        counter++;                   // assignment to sum
        System.out.println("Steps in sumUsingEquation: " + counter);
        return sum;
    }
Using recursion
    public static int sumUsingRecursion(int n, int counter) {
        counter++;                   // count the base-case test
        if (n == 1) {
            counter++;               // count the return
            System.out.println("Steps in sumUsingRecursion at base case: " + counter);
            return 1;
        } else {
            counter += 2;            // count the addition and the recursive call
            return n + sumUsingRecursion(n - 1, counter);
        }
    }
    public static void main(String[] args) {
        int N = 500;
        int sumLoop = sumUsingLoop(N);
        System.out.println("Sum using loop: " + sumLoop);
        int sumEquation = sumUsingEquation(N);
        System.out.println("Sum using equation: " + sumEquation);
        int counter = 0;
        int sumRecursion = sumUsingRecursion(N, counter);
        System.out.println("Sum using recursion: " + sumRecursion);
    }
}
Output:
Observations:
The Equation method is the most efficient, executing a constant 4 steps regardless of input size, making it ideal for large inputs. The Loop method scales linearly with input size, with steps increasing as 2N + 2, making it less efficient than the Equation method but still predictable. The Recursion method is the least efficient, scaling linearly as 3N − 1 with additional overhead from function calls, making it slower for large inputs.
Result: Complete the below table based on your implementation of functions and steps executed
by each function.
Chart: <Draw Comparative Chart of inputs versus number of steps executed by each
algorithm>
Conclusion:
In conclusion, the Equation method is the most optimal for calculating the sum from 1 to N, as it consistently executes a minimal number of steps, regardless of input size. While the Loop and Recursion methods both scale linearly, the Loop method is more efficient than Recursion but still less efficient than the Equation approach. For large inputs, the Equation method is clearly the best choice due to its constant time complexity.
Quiz:
1. What is the meaning of constant growth rate of an algorithm?
A constant growth rate in an algorithm refers to the situation where the execution time or the number of steps required by the algorithm does not change regardless of the size of the input data. This means that no matter how large the input size N is, the algorithm will always perform the same number of operations.
Key characteristics:
• Time Complexity: The time complexity of such an algorithm is represented as O(1), where "1" indicates that the time required is constant.
• Performance: Algorithms with constant growth rates are extremely efficient because their performance is unaffected by the size of the input. They execute in the same time whether the input size is small or very large.
• Example: A simple example is accessing an element in an array by its index. No matter how large the array is, accessing a specific element takes the same amount of time.
2. If one algorithm has a growth rate of n^2 and a second algorithm has a growth rate of n, then which algorithm executes faster? Why?
If one algorithm has a growth rate of O(n^2) and another algorithm has a growth rate of O(n), the algorithm with the growth rate of O(n) will generally execute faster.
Reason:
• Growth Rate Comparison: The growth rate O(n) means that the number of operations or the time required increases linearly with the size of the input n. In contrast, O(n^2) means that the number of operations increases quadratically as the input size grows.
• Impact on Performance: As the input size n increases, the O(n^2) algorithm will perform significantly more operations than the O(n) algorithm. For example, if n = 100, an O(n) algorithm would perform approximately 100 operations, whereas an O(n^2) algorithm would perform 10,000 operations.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
References used by the students:
Rubric wise marks obtained:
Experiment No: 2
Write user defined functions for the following sorting methods and compare their performance by steps executed/time taken for execution on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also, draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
1. Selection Sort
2. Bubble Sort
3. Insertion Sort
4. Merge Sort
5. Quick Sort
Date:
Theory:
Code:
import java.util.Arrays;
import java.util.Random;
System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}
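The selection sort routine itself fell across a page break in the extraction. A minimal sketch with per-statement step counting, assuming the same static steps counter used by the surrounding listings, is:

public static void selectionSort(int[] a) {
    int n = a.length;
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        steps++;                        // count the assignment
        for (int j = i + 1; j < n; j++) {
            steps++;                    // count the comparison
            if (a[j] < a[minIndex]) {
                minIndex = j;
                steps++;                // count the assignment
            }
        }
        int temp = a[i];                // swap the minimum into position
        a[i] = a[minIndex];
        a[minIndex] = temp;
        steps += 3;                     // count the three swap statements
    }
}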
swap_flag = 0;
for (int j = 0; j < n - i - 1; j++) {
    if (a[j] > a[j + 1]) {
        temp = a[j];
        a[j] = a[j + 1];
        a[j + 1] = temp;
        swap_flag = 1;               // a swap happened in this pass
    }
}
if (swap_flag == 0)                  // no swaps: array already sorted
    return;
}
}
Code:
import java.util.Arrays;
import java.util.Random;
public static void runAndPrintResults(int size, String orderType) {
    int[] arr = generateArray(size, orderType);
    steps = 0;
    long startTime = System.nanoTime();
    bubbleSort(arr);
    long endTime = System.nanoTime();
    System.out.println("Array size: " + size + ", Order: " + orderType + ", Steps: " + steps + ", Time: " + (endTime - startTime) + " ns");
}
public static int[] generateArray(int size, String orderType) {
    int[] arr = new int[size];
    Random rand = new Random();
    for (int i = 0; i < size; i++) {
        arr[i] = rand.nextInt(size);
    }
    if (orderType.equals("Ascending")) {
        Arrays.sort(arr);
    } else if (orderType.equals("Descending")) {
        Arrays.sort(arr);
        for (int i = 0; i < size / 2; i++) {   // reverse the sorted array
            int temp = arr[i];
            arr[i] = arr[size - i - 1];
            arr[size - i - 1] = temp;
        }
    }
    return arr;
}
}
Output:
for (j = 1; j < n; j++) {
    key = a[j];
    i = j - 1;
    while (i >= 0 && a[i] > key) {   // shift larger elements right
        a[i + 1] = a[i];
        i = i - 1;
    }
    a[i + 1] = key;                  // insert key in its place
}
}
Code:
import java.util.Arrays;
import java.util.Random;
runAndPrintResults(size, "Descending");
}
}
Output:
Code:
import java.util.Arrays;
import java.util.Random;
public class MergeSort {
static int steps = 0;
Random rand = new Random();
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size);
}
if (orderType.equals("Ascending")) {
Arrays.sort(arr);
} else if (orderType.equals("Descending")) {
Arrays.sort(arr);
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}
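The mergeSort and merge routines referenced by this class fell on the missing pages; a minimal sketch with step counting, intended to sit inside the MergeSort class above (the method names are assumptions), is:

public static void mergeSort(int[] arr, int left, int right) {
    steps++;                                   // count the comparison below
    if (left < right) {
        int mid = (left + right) / 2;
        mergeSort(arr, left, mid);             // sort the left half
        mergeSort(arr, mid + 1, right);        // sort the right half
        merge(arr, left, mid, right);          // merge the two sorted halves
    }
}

public static void merge(int[] arr, int left, int mid, int right) {
    int[] temp = new int[right - left + 1];
    int i = left, j = mid + 1, k = 0;
    while (i <= mid && j <= right) {
        steps++;                               // count each comparison
        if (arr[i] <= arr[j]) temp[k++] = arr[i++];
        else temp[k++] = arr[j++];
    }
    while (i <= mid) { temp[k++] = arr[i++]; steps++; }
    while (j <= right) { temp[k++] = arr[j++]; steps++; }
    for (k = 0; k < temp.length; k++) {
        arr[left + k] = temp[k];               // copy the merged run back
        steps++;
    }
}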
Output:
while (flag == 1) {
    i++;
    while (q[i] < key)
        i++;
    j--;
    while (q[j] > key)
        j--;
    if (i < j) {
        t1 = q[i];
        q[i] = q[j];
        q[j] = t1;
    } else {
        flag = 0;
    }
}
t2 = q[lb];
q[lb] = q[j];
q[j] = t2;
quicksort(q, lb, j - 1);
quicksort(q, j + 1, ub);
}
return;
}
Code:
import java.util.Arrays;
import java.util.Random;
steps++;
steps++;
if (arr[j] < pivot) {
    i++;
    int temp = arr[i];
    arr[i] = arr[j];
    arr[j] = temp;
    steps += 4;
    }
}
steps++;
int temp = arr[i + 1];
arr[i + 1] = arr[high];
arr[high] = temp;
steps += 4;
return i + 1;
}
for (int i = 0; i < size / 2; i++) {
int temp = arr[i];
arr[i] = arr[size - i - 1];
arr[size - i - 1] = temp;
}
}
return arr;
}
}
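The fragments above show only the partition routine; the recursive driver that calls it is lost. A standard sketch, assuming the partition(arr, low, high) signature implied by the fragment and the same static steps counter, is:

public static void quickSort(int[] arr, int low, int high) {
    steps++;                                    // count the comparison
    if (low < high) {
        int pivotIndex = partition(arr, low, high);
        quickSort(arr, low, pivotIndex - 1);    // sort the left part
        quickSort(arr, pivotIndex + 1, high);   // sort the right part
    }
}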
Output:
Write a main function to use the above sorting functions and calculate the number of steps executed by each function on various inputs ranging from 1000 to 5000. Take a counter variable to count the steps and increment it for each statement in the function.
a[j] = a[j + 1];
a[j + 1] = temp;
swap_flag = 1;
steps++;
steps++;
steps++;
steps++;
}
}
steps++;
steps++;
if (swap_flag == 0) {
    steps++;
    return;
}
}
steps++;
}
Observations:
Selection, Bubble, and Insertion Sort perform poorly with O(n^2) time complexity on random and descending data, but Insertion Sort performs best on ascending data with O(n) complexity. Merge Sort consistently shows O(n log n) time complexity across all data types, making it efficient regardless of input order. Quick Sort performs well on random data with O(n log n) but suffers from O(n^2) in the worst case (ascending/descending data) due to poor pivot choices.
Result: Complete the below table based on your implementation of functions and steps executed
by each function. Also, prepare similar tables for ascending order sorted data and descending
order sorted data.
Inputs | Selection Sort | Bubble Sort | Insertion Sort | Merge Sort | Quick Sort
2000 | 4009995 | 4002000 | 7998 | 158792 | 7541914
3000 | 9014995 | 9003000 | 11998 | 251492 | 16835228
4000 | 16019995 | 16004000 | 15998 | 343592 | 29715912
5000 | 25024995 | 25005000 | 19998 | 442844 | 47567794
Chart:
<Draw Comparative Charts of inputs versus number of steps executed on various data (Random, ascending
order sorted and descending order sorted data) by each algorithm.> Sample Chart is as below:
Chart:
Conclusion:
For smaller or nearly sorted datasets, Insertion Sort is the most efficient among quadratic algorithms. However, for larger datasets, Merge Sort is the most reliable due to its consistent O(n log n) performance across all cases. Quick Sort is highly efficient on random data but can degrade to O(n^2) in the worst case without optimized pivot selection. Selection and Bubble Sort consistently perform poorly on larger, unsorted data. For general use, Merge Sort or optimized Quick Sort are recommended.
Quiz:
1. Which sorting function executes faster (has the smallest step count) in case of ascending order sorted data?
In the case of ascending order sorted data, Insertion Sort executes the fastest with the smallest step count. It efficiently detects that the array is already sorted, resulting in a time complexity of O(n), making it the best choice for this scenario.
2. Which sorting function executes faster (has the smallest step count) in case of descending order sorted data?
In the case of descending order sorted data, Merge Sort generally executes the fastest with a time complexity of O(n log n), because its performance is unaffected by the initial data order. Quick Sort can also be efficient, but its performance may degrade to O(n^2) with poor pivot selection. Selection Sort and Insertion Sort both have a time complexity of O(n^2), but Insertion Sort tends to be slower with descending data due to more shifts. Thus, Merge Sort is usually the most efficient in this scenario.
3. Which sorting function executes faster (has the smallest step count) in case of random data?
Merge Sort and Quick Sort are faster with random data, with Merge Sort having consistently reliable performance. Quick Sort is also efficient on average but can be less predictable.
4. When does the worst case occur for Quick Sort?
The worst case for Quick Sort occurs when the pivot selection consistently results in highly unbalanced partitions. This typically happens when the pivot is the smallest or largest element in the partition, leading to partitions where one side is empty or nearly empty, and the other side contains most of the data. This scenario occurs with already sorted or reverse-sorted data if the pivot selection strategy is poor, such as always picking the first or last element. In this case, Quick Sort degrades to O(n^2) time complexity due to the deep recursion and inefficient partitioning.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 3
Implement a function of sequential search and count the steps executed by the function on various inputs (1000 to 5000) for best case, average case and worst case. Also, write the time complexity in each case and draw a comparative chart of number of inputs versus steps executed by sequential search for each case.
Date:
Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count on various inputs.
Theory:
The algorithm works by sequentially iterating through the elements of the array and comparing
each element to the target value. If a match is found, the algorithm exits the loop.
Implement the above function and calculate the number of steps executed on various inputs ranging from 1000 to 5000. Take a counter variable to count the steps and increment it for each statement. Based on the algorithm's logic, decide best-, worst- and average-case inputs for the algorithm and prepare a table of step counts.
Code:
import java.util.*;
public class searching {
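The listing above is truncated by the page break; a self-contained sketch of sequential search with a step counter (class, method and variable names are illustrative assumptions) is:

public class SequentialSearch {
    static int steps;

    public static int sequentialSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            steps++;                      // one comparison per element
            if (arr[i] == key) return i;  // match found: exit the loop
        }
        return -1;                        // key not present
    }

    public static void main(String[] args) {
        for (int size = 1000; size <= 5000; size += 1000) {
            int[] arr = new int[size];
            for (int i = 0; i < size; i++) arr[i] = i;

            steps = 0; sequentialSearch(arr, 0);         // best case: first element
            System.out.println(size + " best: " + steps);
            steps = 0; sequentialSearch(arr, size / 2);  // average case: middle element
            System.out.println(size + " average: " + steps);
            steps = 0; sequentialSearch(arr, -1);        // worst case: absent key
            System.out.println(size + " worst: " + steps);
        }
    }
}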
Output:
Observations:
Best Case: The best case occurs when the target element is found at the first position. The number of steps
is always 1.
Average Case: The average case occurs when the target element is found in the middle of the array. For an
array of size n, the number of steps is approximately n/2.
Worst Case: The worst case occurs when the target element is found at the last position or not found at all.
The number of steps is equal to the size of the array, i.e., n.
Result: Complete the below table based on your implementation of sequential search algorithm
and steps executed by the function.
Chart:
Conclusion:
• Sequential search is a simple algorithm with a linear growth in the number of steps as the input size increases.
• The best case occurs when the element is found at the first position, while the worst case occurs when the element is at the last position or not present at all.
• Time complexity for the worst and average cases is O(n), whereas the best case is O(1).
Quiz:
1. Which is the best case of an algorithm?
The best case occurs when the key is found at the first position, requiring the minimum number of
comparisons (i.e., 1 comparison).
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 4
Compare the performances of linear search and binary search for Best case, Average case and
Worst case inputs.
Date:
Objectives: (a) Identify Best, Worst and Average cases of given problem.
(b) Derive time complexity from steps count for different inputs.
Theory:
Implement a function for the binary search algorithm and use the linear search function implemented in the previous practical. Compare the step counts of both functions on various inputs ranging from 100 to 500 for each case (Best, Average, and Worst).
Code:
import java.util.Arrays;
import java.util.Random;
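The rest of the listing is not reproduced here; a self-contained sketch comparing the step counts of the two searches (class and method names are illustrative assumptions) is:

public class SearchComparison {
    static int steps;

    public static int binarySearch(int[] arr, int key) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {
            steps++;                           // one probe per halving
            int mid = (low + high) / 2;
            if (arr[mid] == key) return mid;
            if (arr[mid] < key) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }

    public static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            steps++;                           // one comparison per element
            if (arr[i] == key) return i;
        }
        return -1;
    }

    public static void main(String[] args) {
        for (int size = 100; size <= 500; size += 100) {
            int[] arr = new int[size];
            for (int i = 0; i < size; i++) arr[i] = i;   // sorted input
            steps = 0; linearSearch(arr, size - 1);      // worst case for linear
            int linearSteps = steps;
            steps = 0; binarySearch(arr, size - 1);
            System.out.println(size + ": linear = " + linearSteps + ", binary = " + steps);
        }
    }
}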
Output:
Observations:
In the best case, both Binary Search and Linear Search execute minimal steps, making them equally
efficient. However, in the average and worst cases, Binary Search significantly outperforms Linear
Search, as its steps grow logarithmically, while Linear Search's steps grow linearly with input size.
As input size increases, Binary Search remains much faster, especially for large datasets.
Result: Complete the below table based on your implementation of both searching algorithms and the steps executed by each function.
Chart:
<Draw Comparative Charts of inputs versus number of steps executed by both algorithms in various cases>
Conclusion: Comparing all cases of linear and binary search shows that linear search runs in O(1) best case and O(n) average and worst cases, while binary search runs in O(1) best case and O(log n) average and worst cases, making binary search far more efficient on large sorted inputs.
Quiz:
1. Which element should be searched for the best case of binary search algorithm?
For the best case of the binary search algorithm, the element to be searched is the middle element.
This is because binary search starts by checking the middle element of the array first. If the target
element matches this middle element on the first attempt, the algorithm finishes in a single step,
making it the best-case scenario.
2. Which element should be searched for the worst case of binary search algorithm?
For the worst case of the binary search algorithm, the element to be searched should be one that is
not present in the array or an element that forces the algorithm to repeatedly divide the search range
until there is only one element left to check.
• Linear Search has a time complexity of O(n) in the worst case, meaning it checks every
element in the array, resulting in the number of steps growing linearly with the size of the
input.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 5
Implement functions to print nth Fibonacci number using iteration and recursive method. Compare
the performance of two methods by counting number of steps executed on various inputs. Also draw
a comparative chart. (Fibonacci series 1, 1, 2, 3, 5, 8….. Here 8 is the 6th Fibonacci number).
Date:
Objectives: (a) Compare the performances of two different versions of same problem.
(b) Find the time complexity of algorithms.
(c) Understand the polynomial and non-polynomial problems
Theory:
The Fibonacci series is the sequence of numbers (also called Fibonacci numbers), where every
number is the sum of the preceding two numbers, such that the first two terms are '0' and '1'. In some
older versions of the series, the term '0' might be omitted. A Fibonacci series can thus be given as,
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, . . . It can thus be observed that every term can be calculated by adding
the two terms before it. We are ignoring initial zero in the series.
To represent any (n+1)th term in this series, we can give the expression as, Fn = Fn-1 + Fn-2. We can
thus represent a Fibonacci series as shown in the image below,
Recursive version to print nth Fibonacci number is as below:
Implement functions of above two versions of Fibonacci series and compare the steps count of
both the functions on various inputs ranging from 10 to 50 (if memory permits for recursive
version).
Code:
import java.util.Arrays;
public class FibonacciComparison {
public static void main(String[] args) {
int[] nValues = {10, 15, 20, 25, 30};
int[] iterativeSteps = new int[nValues.length];
int[] recursiveSteps = new int[nValues.length];
System.out.println("Comparing Fibonacci (Iterative vs Recursive) Steps:");
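The two functions that main measures are not visible in the extraction; a minimal sketch, with step counts accumulated in a one-element array (the counting mechanism is an assumption), is:

public static long fibIterative(int n, int[] steps) {
    long a = 1, b = 1;                  // F(1) = F(2) = 1, as in the series above
    steps[0] += 2;
    for (int i = 3; i <= n; i++) {
        long next = a + b;
        a = b;
        b = next;
        steps[0] += 3;                  // three statements per iteration
    }
    return n <= 2 ? 1 : b;
}

public static long fibRecursive(int n, int[] steps) {
    steps[0]++;                         // count every call
    if (n <= 2) return 1;
    return fibRecursive(n - 1, steps) + fibRecursive(n - 2, steps);
}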
Observations:
Write observation based on number of steps executed by both algorithms.
The iterative approach is more efficient for computing Fibonacci numbers, especially for large values of
n, as it avoids the overhead of function calls and redundant calculations. The recursive approach
becomes impractical for large values of n due to the exponential increase in the number of function
calls and redundant calculations.
Result: Complete the below table based on your implementation of the Fibonacci functions and the steps executed by each function.
Conclusion: In this practical, we implemented and analyzed two approaches for calculating the nth
Fibonacci number: iterative and recursive. By comparing the number of steps executed by each
algorithm, we observed clear differences in their performance.
Quiz:
1. What is the time complexity of iterative version of Fibonacci function?
The time complexity of the iterative version of the Fibonacci function is O(n).
2. What is the time complexity of recursive version of Fibonacci function?
The time complexity of the recursive version of the Fibonacci function is O(2^n).
3. Can you execute recursive version of Fibonacci function for more inputs?
The recursive version of the Fibonacci function, due to its exponential time complexity (O(2^n)),
becomes inefficient and slow for larger inputs, such as n > 30. As n increases, the number of
recursive calls grows rapidly, consuming significant memory and processing time, which may lead
to stack overflow errors or long execution times.
4. What do you mean by polynomial time algorithms and exponential time algorithms?
A polynomial time algorithm is one where the time complexity can be expressed as a polynomial function of the input size n; the number of steps grows at a rate proportional to n^k, where k is a constant. An exponential time algorithm, by contrast, has a step count that grows as c^n for some constant c > 1, as with the recursive Fibonacci function whose complexity is O(2^n).
Experiment No: 6
Implement a program for the randomized version of quick sort and compare its performance with the normal version of quick sort using steps count on various inputs (1000 to 5000) of random nature, ascending order and descending order sorted data. Also, draw a comparative chart of number of inputs versus steps executed/time taken for each case (random, ascending, and descending).
Date:
Theory:
Implement a function for the randomized version of quick sort as per the above instructions and use the basic version of quick sort (which selects the first element as the pivot element). Compare the step counts of both functions on various inputs ranging from 1000 to 5000 for each case (random, ascending, and descending).
Code:
import java.util.Random;
public static void swap(int[] arr, int i, int j) {
int temp = arr[i];
arr[i] = arr[j];
arr[j] = temp;
}
public static int[] generateRandomArray(int size) {
Random rand = new Random();
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = rand.nextInt(size * 2); }
return arr;
}
public static int[] generateAscendingArray(int size) {
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = i + 1;
}
return arr;
}
public static int[] generateDescendingArray(int size) {
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = size - i;
}
return arr;
}
System.out.println("Basic Quick Sort Steps (Ascending Data): " + basicQuickSortSteps);
randomizedQuickSortSteps = 0;
randomizedQuickSort(ascArrCopy, 0, size - 1);
System.out.println("Randomized Quick Sort Steps (Ascending Data): " + randomizedQuickSortSteps);
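The randomized driver itself is not shown in the fragments. A minimal sketch, assuming a partition(arr, low, high) that pivots on arr[low] (as in the basic version), the swap helper above, and the randomizedQuickSortSteps counter, is:

public static void randomizedQuickSort(int[] arr, int low, int high) {
    if (low < high) {
        // choose a pivot position uniformly at random and move it to the front
        int pivotIndex = low + new Random().nextInt(high - low + 1);
        swap(arr, low, pivotIndex);
        randomizedQuickSortSteps += 2;
        int p = partition(arr, low, high);     // partitions around arr[low]
        randomizedQuickSort(arr, low, p - 1);
        randomizedQuickSort(arr, p + 1, high);
    }
}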
Observations:
Write observation based on number of steps executed by both algorithms.
1. Basic Quick Sort tends to perform poorly on sorted (ascending or descending) data because
it always selects the first element as the pivot, leading to unbalanced partitions and,
consequently, higher step counts in such cases.
2. Randomized Quick Sort, by selecting a random pivot, significantly improves performance
in the worst-case scenarios (sorted data), making it more efficient in general.
Result: Complete the below table based on your implementation of both versions of quick sort and the steps executed by each function.
Random Data:
Inputs | Randomized Quick Sort | Basic Quick Sort
1000 | 17079 | 15831
Inputs | Ascending Order | Descending Order
1000 | 749999 | 999999
Prepare similar tables for descending order and ascending order sorted data.
Chart:
Conclusion: Overall, Randomized Quick Sort is the preferred algorithm for sorting large datasets,
as it mitigates the risks of worst-case performance and achieves better average-case efficiency
compared to its basic counterpart.
Quiz:
1. What is the time complexity of Randomized Quick Sort in worst case?
The worst-case time complexity of Randomized Quick Sort is O(n^2). However, due to
random pivot selection, the worst-case scenario (where the pivot divides the array very
unevenly) happens rarely, making it much less likely than with the basic version.
2. What is the time complexity of basic version of Quick Sort on sorted data? Give reason for your answer.
The time complexity of the basic version of Quick Sort on sorted data is O(n^2). This
happens because, in each recursive call, the first element is selected as the pivot, leading to
highly unbalanced partitions where one side has no elements, and the other has n-1
elements. This leads to a recursion depth of n, resulting in quadratic time complexity.
3. Can we always ensure O(n*lg n) time complexity for Randomized Quick Sort?
No, we cannot always ensure O(n log n) time complexity for Randomized Quick Sort. While
random pivot selection greatly reduces the likelihood of encountering the worst case, it is
still theoretically possible for the algorithm to end up with unbalanced partitions, resulting
in O(n^2) time complexity.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 7
Implement program for solution of Making Change problem using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …,dn} and if
we want to change for some amount N, many combinations are possible. Suppose {d1, d2, d5, d8},
{d0, d2, d4}, {d0, d5, d7} all are feasible solutions but the solution which selects the minimum
number of coins is considered to be an optimal solution. The aim of making a change is to find a
solution with a minimum number of coins / denominations. Clearly, this is an optimization problem.
General assumption is that infinite coins are available for each denomination. We can select any
denomination any number of times.
Sort all the denominations and start scanning from the smallest to the largest denomination. In every iteration i, if the current denomination di is acceptable, then one coin is added to the solution and the total amount is reduced by di. Hence,
C[i, j] = 1 + C[i, j – di]
C[i,j] is the minimum number of coins to make change for the amount j. Below figure shows the
content of matrix C.
If the current denomination is larger than the current problem size, then we have to skip the denomination and stick with the previously calculated solution. Hence, C[i, j] = C[i – 1, j]
If above cases are not applicable then we have to stick with choice which returns minimum
number of coin. Mathematically, we formulate the problem as,
C[i, j] = min {C[i – 1, j] , 1 + C[i, j – di]}
Algorithm MAKE_A_CHANGE(d,N)
// d[1…n] = [d1,d2,…,dn] is array of n denominations
// C[1…n, 0…N] is n x N array to hold the solution of sub problems
// N is the problem size, i.e. amount for which change is required
for i ← 1 to n do
    C[i, 0] ← 0
end
for i ← 1 to n do
    for j ← 1 to N do
        if i == 1 and j < d[i] then
            C[i, j] ← ∞
        else if i == 1 then
            C[i, j] ← 1 + C[1, j – d[1]]
        else if j < d[i] then
            C[i, j] ← C[i – 1, j]
        else
            C[i, j] ← min (C[i – 1, j], 1 + C[i, j – d[i]])
        end
    end
end
return C[n, N]
Implement the above algorithm and print the matrix C. Your program should return the number of coins required and the denominations used.
Code:
import java.util.Arrays;

public class CoinChange {
public static void makeChange(int[] denominations, int n, int N) {
int[][] C = new int[n + 1][N + 1];
for (int i = 0; i <= n; i++) {
C[i][0] = 0;
}
for (int j = 1; j <= N; j++) {
C[0][j] = Integer.MAX_VALUE;
}
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= N; j++) {
if (denominations[i - 1] > j) {
C[i][j] = C[i - 1][j];
} else {
C[i][j] = Math.min(C[i - 1][j], (C[i][j - denominations[i - 1]] ==
Integer.MAX_VALUE ? Integer.MAX_VALUE : 1 + C[i][j - denominations[i - 1]]));
}
}
}
if (C[n][N] == Integer.MAX_VALUE) {
System.out.println("No solution exists to make change for the given amount.");
} else {
System.out.println("Minimum coins required: " + C[n][N]);
}
System.out.println("Matrix C:");
for (int i = 0; i <= n; i++) {
for (int j = 0; j <= N; j++) {
if (C[i][j] == Integer.MAX_VALUE)
System.out.print(" INF ");
else
System.out.printf("%3d ", C[i][j]);
}
System.out.println();
}
}
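The listing omits a driver and the closing brace of the class; a small hedged example (the coin set and amount are assumptions) completes it:

    public static void main(String[] args) {
        int[] denominations = {1, 2, 5, 10};   // example coin set (assumed)
        makeChange(denominations, denominations.length, 13);
        // expected: Minimum coins required: 3  (10 + 2 + 1)
    }
}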
Result
Conclusion: The dynamic programming approach to the Making Change problem effectively finds
the optimal solution by breaking the problem into smaller sub-problems and storing intermediate
results. This algorithm not only yields the correct number of coins needed but also demonstrates the
power of dynamic programming in solving optimization problems efficiently.
Quiz:
1. What is the time complexity of above algorithm?
The time complexity of the above algorithm is O(n × N), where n is the number of denominations and N is the amount for which change is needed. This is because the algorithm fills a matrix of size n × N.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
3. https://codecrucks.com/making-change-problem-using-dynamic-programming/
Experiment No: 8
Implement program for solution of Matrix Chain Multiplication problem using dynamic programming.
Date:
Theory:
Given a sequence of matrices A1, A2,...,An and dimensions p0, p1,...,pn where Ai is of dimension pi−1
× pi, determine the order of multiplication (represented, say, as a binary tree) that minimizes the
number of operations.
This algorithm does not perform the multiplications; it just determines the best order in which to
perform the multiplications
Two matrices are called compatible only if the number of columns in the first matrix and the number of rows in the second matrix are the same. Matrix multiplication is possible only if they are compatible. Let A and B be two compatible matrices of dimensions p x q and q x r. Suppose the dimensions of three matrices are:
A1 = 5 x 4
A2 = 4 x 6
A3 = 6 x 2
Matrix multiplication is associative. So
(A1 A2) A3 = {(5 x 4) x (4 x 6)} x (6 x 2)
           = (5 x 4 x 6) + (5 x 6 x 2) = 120 + 60 = 180
A1 (A2 A3) = (5 x 4) x {(4 x 6) x (6 x 2)}
           = (4 x 6 x 2) + (5 x 4 x 2) = 48 + 40 = 88
The answer of both multiplication sequences would be the same, but the numbers of multiplications
are different. This leads to the question, what order should be selected for a chain of matrices to
minimize the number of multiplications?
Let us denote the number of alternative parenthesizations of a sequence of n matrices by p(n). When n = 1, there is only one matrix and therefore only one way to parenthesize the matrix. When n ≥ 2, a fully parenthesized matrix product is the product of two fully parenthesized matrix sub-products, and the split between the two sub-products may occur between the kth and (k + 1)st matrices for any k = 1, 2, 3, …, n – 1. Thus we obtain the recurrence
p(1) = 1;   p(n) = Σ k = 1 to n – 1 of p(k) · p(n – k), for n ≥ 2.
The solution to the recurrence is the sequence of Catalan numbers, which grows as Ω(4^n / n^(3/2)), i.e., at least exponentially in n. Thus, the number of solutions is exponential in n, and a brute-force attempt is infeasible.
Any parenthesizations of the product Ai Ai + 1 … Aj must split the product between Ak and Ak+1 for
some integer k in the range i ≤ k < j. That is for some value of k, we first compute the matrices Ai….k
and Ak + 1…j and then multiply them together to produce the final product Ai…j The cost of computing
these parenthesizations is the cost of computing Ai….k , plus the cost of computing Ak + 1…j plus the
cost of multiplying them together.
We can define m[i, j] recursively as follows. If i == j, the problem is trivial; the chain consists of only one matrix Ai…i = Ai. No scalar multiplications are required. Thus m[i, i] = 0 for i = 1, 2, …, n.
To compute m[i, j] when i < j, we take advantage of the structure of an optimal solution of the first
step. Let us assume that the optimal parenthesizations split the product Ai Ai + 1…Aj between Ak and
Ak + 1, where i ≤ k < j. Then m[i, j] is equal to the minimum cost for computing the subproducts
Ai…k and Ak + 1…j plus the cost of multiplying these two matrices together.
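In symbols, with the dimensions pi as defined above, this gives the standard recurrence:
m[i, i] = 0
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + p(i–1) · p(k) · p(j) }, for i < j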
}
}
matrixChainOrder(p, n, m, s);
System.out.println("Minimum number of multiplications: " + m[1][n]);
System.out.print("Optimal Parenthesization: ");
printOptimalParenthesization(s, 1, n);
System.out.println();
System.out.println("Matrix m:");
for (int i = 1; i <= n; i++) {
for (int j = 1; j <= n; j++) {
System.out.printf("%4d ", m[i][j]);
}
System.out.println();
}
}
}
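The matrixChainOrder and printOptimalParenthesization routines called by main fell on a missing page; a sketch following the standard dynamic-programming formulation (1-indexed arrays, matching the calls above) is:

public static void matrixChainOrder(int[] p, int n, int[][] m, int[][] s) {
    for (int i = 1; i <= n; i++) m[i][i] = 0;        // a single matrix costs nothing
    for (int len = 2; len <= n; len++) {             // chain length
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;                     // chain end
            m[i][j] = Integer.MAX_VALUE;
            for (int k = i; k < j; k++) {            // try every split point
                int cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j];
                if (cost < m[i][j]) {
                    m[i][j] = cost;
                    s[i][j] = k;                     // remember the best split
                }
            }
        }
    }
}

public static void printOptimalParenthesization(int[][] s, int i, int j) {
    if (i == j) {
        System.out.print("A" + i);
    } else {
        System.out.print("(");
        printOptimalParenthesization(s, i, s[i][j]);
        printOptimalParenthesization(s, s[i][j] + 1, j);
        System.out.print(")");
    }
}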
Result
Conclusion: The dynamic programming approach to the Chain Matrix Multiplication problem
effectively determines the optimal order of multiplication, resulting in a significant reduction in the
number of multiplications. This implementation showcases the power of dynamic programming in
solving optimization problems.
Quiz:
1. What is the time complexity of above algorithm?
The time complexity of the above algorithm is O(n^3), where n is the number of matrices. This is due to the three nested loops: one for the chain length, one for the starting matrix, and one for the split point.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 9
Implement program to solve LCS (Longest Common Subsequence) problem using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
The Longest Common Subsequence (LCS) problem is a classic computer science problem that
involves finding the longest subsequence that is common to two given sequences.
A subsequence is a sequence that can be derived from another sequence by deleting some or no
elements without changing the order of the remaining elements. For example, given the sequence
"ABCDE", "ACE" is a subsequence of "ABCDE", but "AEC" is not a subsequence.
Given two sequences X and Y, the LCS problem involves finding the longest common subsequence
(LCS) of X and Y. The LCS need not be contiguous in the original sequences, but it must be in the
same order. For example, given the sequences "ABCDGH" and "AEDFHR", the LCS is "ADH"
with length 3.
Naïve Method:
Let X be a sequence of length m and Y a sequence of length n. Check for every subsequence of X whether it is a subsequence of Y, and return the longest common subsequence found. There are 2^m subsequences of X. Testing whether a sequence is a subsequence of Y takes O(n) time. Thus, the naïve algorithm would take O(n · 2^m) time.
• Case 1 − If the counter encounters common element in both X and Y sequences, increment
the counter by 1.
• Case 2 − If the counter does not encounter common elements in X and Y sequences at T[i,
j], find the maximum value between T[i-1, j] and T[i, j-1] to fill it in T[i, j].
Step 3 − Once the table is filled, backtrack from the last value in the table. Backtracking here is done by tracing the path along which the counter was incremented.
Step 4 − The longest common subsequence is obtained by noting the elements in the traced path.
Consider the example, we have two strings X=BDCB and Y=BACDB to find the longest common
subsequence. Following table shows the construction of LCS table.
Once the values are filled, the path is traced back from the last value in the table at T[5, 4].
Algorithm is as below:
B[i, j] := 'L'
return C and B
Code:
public class LCS {
public static int lcs(String X, String Y) {
int m = X.length();
int n = Y.length();
int[][] dp = new int[m + 1][n + 1];
for (int i = 1; i <= m; i++) {
for (int j = 1; j <= n; j++) {
if (X.charAt(i - 1) == Y.charAt(j - 1)) {
dp[i][j] = dp[i - 1][j - 1] + 1;
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}
return dp[m][n];
}
public static String printLCS(String X, String Y) {
int m = X.length();
int n = Y.length();
int[][] dp = new int[m + 1][n + 1];
for (int i = 1; i <= m; i++) {
for (int j = 1; j <= n; j++) {
if (X.charAt(i - 1) == Y.charAt(j - 1)) {
dp[i][j] = dp[i - 1][j - 1] + 1;
} else {
dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
}
}
}
StringBuilder lcs = new StringBuilder();
int i = m, j = n;
while (i > 0 && j > 0) {
if (X.charAt(i - 1) == Y.charAt(j - 1)) {
lcs.append(X.charAt(i - 1));
i--;
j--;
} else if (dp[i - 1][j] > dp[i][j - 1]) {
i--;
} else {
j--;
}
}
return lcs.reverse().toString();
}
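A small driver for the two methods above, using the example strings from the theory section, closes the class:

    public static void main(String[] args) {
        String X = "ABCDGH", Y = "AEDFHR";
        System.out.println("LCS length: " + lcs(X, Y));   // prints 3
        System.out.println("LCS: " + printLCS(X, Y));     // prints ADH
    }
}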
Observations:
1. The algorithm constructs a 2D table C where each cell C[i][j] represents the length of the longest common subsequence of the first i characters of sequence X and the first j characters of sequence Y.
2. The algorithm fills the table based on whether characters match and tracks the direction from which each value was derived, allowing the reconstruction of the LCS.
3. The output shows both the length of the LCS and the LCS itself, confirming the algorithm's effectiveness in finding the optimal solution.
Result
Conclusion: The dynamic programming approach to solving the Longest Common Subsequence
(LCS) problem efficiently finds the longest subsequence shared between two sequences. The
implementation clearly demonstrates how to construct the solution step-by-step while also allowing
for the backtracking necessary to find the actual subsequence.
Quiz:
1. What is the time complexity of above algorithm?
The time complexity of the above algorithm is O(m × n), where m and n are the lengths of the two sequences X and Y. This complexity arises from the nested loops used to fill the LCS table.
3. Does the dynamic programming approach to find LCS perform well compared to the naïve approach?
Yes, the dynamic programming approach performs significantly better than the naïve approach. The naïve method has a time complexity of O(n · 2^m), making it impractical for larger sequences, whereas dynamic programming efficiently solves the problem in polynomial time, making it suitable for longer sequences.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 10
Implement program for solution of 0/1 Knapsack problem using dynamic programming.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
The knapsack problem is useful in solving resource allocation problem. Let X = < x1, x2, x3, . . . . .
, xn> be the set of n items. Sets W = <w1, w2, w3, . . . , wn> and V = < v1, v2, v3, . . . , vn> are weight
and value associated with each item in X. Knapsack capacity is M unit.
The knapsack problem is to find the set of items which maximizes the profit such that collective
weight of selected items does not cross the knapsack capacity. Select items from X and fill the
knapsack such that it would maximize the profit.
Knapsack problem has two variations. 0/1 knapsack, that does not allow breaking of items. Either
add an entire item or reject it. It is also known as a binary knapsack. Fractional knapsack allows
breaking of items. Profit will be earned proportionally.
Following are the steps to implement binary knapsack using dynamic programming.
The above algorithm will just tell us the maximum value we can earn with dynamic programming.
It does not speak anything about which items should be selected. We can find the items that give
optimum result using the following algorithm.
Algorithm TRACE_KNAPSACK(w, v, M)
// w is array of weight of n items
// v is array of value of n items
// M is the knapsack capacity
SW ← { }
SP ← { }
i ← n
j ← M
while (j > 0) do
    if (V[i, j] == V[i – 1, j]) then
        i ← i – 1
    else
        V[i, j] ← V[i, j] – v[i]
        j ← j – w[i]
        SW ← SW + w[i]
        SP ← SP + v[i]
    end
end
Implement the above algorithms for the solution of binary knapsack problem.
Code:
public class Knapsack {
public static void knapsack(int n, int W, int[] weights, int[] values) {
int[][] V = new int[n + 1][W + 1];
for (int i = 0; i <= n; i++) {
for (int w = 0; w <= W; w++) {
if (i == 0 || w == 0) {
V[i][w] = 0;
} else if (weights[i - 1] <= w) {
V[i][w] = Math.max(values[i - 1] + V[i - 1][w - weights[i - 1]], V[i - 1][w]);
} else {
V[i][w] = V[i - 1][w];
}
}
}
System.out.println("Maximum value: " + V[n][W]);
System.out.println("Selected items (index, weight, value):");
int w = W;
for (int i = n; i > 0 && w > 0; i--) {
    if (V[i][w] != V[i - 1][w]) {
        System.out.println("Item " + i + ": Weight = " + weights[i - 1] + ", Value = " + values[i - 1]);
        w -= weights[i - 1];
    }
}
}
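A small driver for the method above (the item data is an assumed example) completes the class:

    public static void main(String[] args) {
        int[] weights = {2, 3, 4, 5};
        int[] values = {3, 4, 5, 6};
        knapsack(4, 5, weights, values);   // capacity 5 -> maximum value 7 (items 1 and 2)
    }
}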
Observations:
1. The program builds a 2D dynamic programming table V[i][j], where i represents the number of items considered and j represents the weight capacity of the knapsack.
2. By comparing the inclusion and exclusion of each item, the algorithm computes the maximum value that can be obtained without exceeding the knapsack's capacity.
3. The algorithm also allows tracing back the selected items by checking where changes in the DP table values occurred.
Result
Conclusion:
The dynamic programming approach to solving the 0/1 knapsack problem efficiently computes the
maximum possible value while staying within the given weight limit. The program also helps in
identifying the set of items that achieve this optimal result.
Quiz:
1. What is the time complexity of above binary knapsack algorithm?
The time complexity of the binary knapsack algorithm using dynamic programming is O(n × W), where n is the number of items and W is the capacity of the knapsack. This complexity arises from the two nested loops used to fill the DP table.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 11
Implement program for solution of fractional Knapsack problem using greedy design technique.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Knapsack problem is as stated below:
Given a set of items, each having different weight and value or profit associated with it. Find the set
of items such that the total weight is less than or equal to a capacity of the knapsack and the total
value earned is as large as possible.
Brute-force approach: The brute-force approach tries all the possible solutions with all the
different fractions but it is a time-consuming approach.
Greedy approach: In Greedy approach, we calculate the ratio of profit/weight, and accordingly, we
will select the item. The item with the highest ratio would be selected first.
Following are the steps to implement fractional knapsack using greedy design strategy.
Implement the program based on above logic for the solution of fractional knapsack problem.
Code:
import java.util.Arrays;
import java.util.Comparator;
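The listing is truncated at the page break; a self-contained sketch of the greedy fractional knapsack, using the imports above (class and method names are assumptions), is:

public class FractionalKnapsack {
    public static double maxValue(double[] weights, double[] values, double capacity) {
        Integer[] idx = new Integer[weights.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // sort item indices by value-to-weight ratio, highest first
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> -values[i] / weights[i]));
        double total = 0, remaining = capacity;
        for (int i : idx) {
            if (remaining <= 0) break;
            double take = Math.min(weights[i], remaining);  // whole item or a fraction
            total += values[i] * (take / weights[i]);
            remaining -= take;
        }
        return total;
    }

    public static void main(String[] args) {
        double[] w = {10, 20, 30};
        double[] v = {60, 100, 120};
        System.out.println(maxValue(w, v, 50));  // classic example: prints 240.0
    }
}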
Observations:
1. The greedy approach efficiently selects items based on the value-to-weight ratio, ensuring that
the maximum value is obtained.
2. The program gives optimal results for various inputs.
Result
Conclusion: The fractional knapsack problem can be solved optimally using the greedy approach.
By selecting items with the highest value-to-weight ratio, the algorithm ensures that the maximum
value is achieved within the knapsack's weight limit. The algorithm performs efficiently and returns
the optimal solution.
Quiz:
1. What is the time complexity of above knapsack algorithm?
The time complexity of the fractional knapsack algorithm is O(nlogn)O(n \log n)O(nlogn) due to
the sorting step, where nnn is the number of items. The rest of the algorithm runs in linear time
O(n)O(n)O(n).
3. What is the time complexity of solving knapsack problem using brute-force method?
The time complexity of solving the knapsack problem using the brute-force method is exponential, i.e., O(2^n), where n is the number of items. This is because the brute-force approach examines all possible subsets of the items.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 12
Implement program for solution of Making Change problem using greedy design technique.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Making Change problem is to find change for a given amount using a minimum number of coins
from a set of denominations. If we are given a set of denominations D = {d0, d1, d2, …, dn} and if
we want to make change for some amount N, many combinations are possible. {d1, d2, d5, d8}, {d0, d2, d4}, {d0, d5, d7} can all be considered feasible solutions if the sum of their denominations is N. The aim of making change is to find a solution with a minimum number of coins / denominations.
Following are the steps to solve the coin change problem using the greedy design technique.
Implement the program based on the above steps for the solution of the making change problem.
Code:
public class CoinChange {
public static void findMinCoins(int[] denominations, int amount) {
int[] coinCount = new int[denominations.length];
for (int i = 0; i < denominations.length - 1; i++) {
for (int j = 0; j < denominations.length - i - 1; j++) {
if (denominations[j] < denominations[j + 1]) {
int temp = denominations[j];
denominations[j] = denominations[j + 1];
denominations[j + 1] = temp;
}
}
}
for (int i = 0; i < denominations.length; i++) {
while (amount >= denominations[i]) {
amount -= denominations[i];
coinCount[i]++;
}
}
System.out.println("Minimum number of coins needed:");
for (int i = 0; i < denominations.length; i++) {
if (coinCount[i] != 0) {
System.out.printf("%d coin(s) of denomination %d\n", coinCount[i], denominations[i]);
}
}
}
public static void main(String[] args) {
int[] denominations = {25, 10, 5, 1};
int amount = 87;
findMinCoins(denominations, amount);
}}
Observations:
The greedy approach works optimally when the denominations follow a pattern like U.S. coins,
where larger denominations can be broken down into smaller ones without losing efficiency. For
certain sets of denominations, however, the greedy algorithm may not give the optimal solution.
Result
Conclusion: The greedy algorithm efficiently solves the making change problem for typical coin systems
like U.S. currency. However, it may not always produce the optimal solution for arbitrary sets of
denominations.
Quiz:
1. What is the time complexity of above making change algorithm?
Ignoring the initial sort, the greedy selection runs in O(n + k) time, where n is the number of coin denominations and k is the number of coins returned, since each denomination is scanned once and each coin taken is one loop iteration. The bubble sort used above adds O(n^2), which a library sort would reduce to O(n log n).
2. Does above algorithm always return optimal answer?
No, the greedy algorithm does not always return an optimal answer. It works for certain sets of denominations (such as U.S. coins) but may fail for arbitrary denominations. For example, with denominations {1, 3, 4}, the greedy approach does not yield the optimal solution for making change for 6: greedy picks 4 + 1 + 1 (three coins), while 3 + 3 (two coins) is optimal.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 13
Implement program to find Minimum Spanning Tree of a graph using Kruskal's algorithm.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Objectives: (a) Understand how to use Kruskal's algorithm to find the minimum spanning
tree. (b) Solve the optimization based problem. (c) Find the time complexity of the
algorithm.
Theory:
In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other words,
an MST is a subset of the edges of the graph that form a tree and have the smallest sum of weights.
int e = 0;
int i = 0;
while (e < vertices - 1) {
Edge nextEdge = edges[i++];
int x = find(parent, nextEdge.source);
int y = find(parent, nextEdge.destination);
if (x != y) {
result[e++] = nextEdge;
union(parent, x, y);
}
if (i >= edges.length) {
break;
}
}
System.out.println("Minimum Spanning Tree:");
int totalWeight = 0;
for (Edge edge : result) {
if (edge != null) {
System.out.println(edge.source + " -- " + edge.destination + " == " + edge.weight);
totalWeight += edge.weight;
}
}
System.out.println("Total weight of MST: " + totalWeight);
}
}
public class KruskalAlgorithm {
public static void main(String[] args) {
int vertices = 4;
int edgesCount = 5;
Graph graph = new Graph(vertices, edgesCount);
graph.edges[0] = new Edge(0, 1, 10);
graph.edges[1] = new Edge(0, 2, 6);
graph.edges[2] = new Edge(0, 3, 5);
graph.edges[3] = new Edge(1, 3, 15);
graph.edges[4] = new Edge(2, 3, 4);
graph.kruskalMST();
}
}
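The Edge class and the union-find helpers referenced by kruskalMST fell on a missing page. A minimal sketch consistent with the calls above (note that the edges array must be sorted by weight, e.g. Arrays.sort(edges), before the main loop runs) is:

class Edge implements Comparable<Edge> {
    int source, destination, weight;

    Edge(int source, int destination, int weight) {
        this.source = source;
        this.destination = destination;
        this.weight = weight;
    }

    public int compareTo(Edge other) {            // order edges by weight
        return Integer.compare(this.weight, other.weight);
    }
}

// Union-find helpers, intended to sit inside the Graph class:
int find(int[] parent, int i) {
    if (parent[i] != i)
        parent[i] = find(parent, parent[i]);      // path compression
    return parent[i];
}

void union(int[] parent, int x, int y) {
    parent[find(parent, x)] = find(parent, y);    // merge the two components
}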
Observations:
Write observation based on whether this algorithm always returns minimum spanning tree or not
on various inputs.
Result
Conclusion: Kruskal's Algorithm is an efficient method for finding the Minimum Spanning Tree
(MST) of a connected, weighted graph. It operates by considering edges in order of their weight
and progressively adding them to the MST, provided they do not form a cycle.
Quiz:
1. What is the time complexity of Kruskal's algorithm?
O(E log E), where E is the number of edges; sorting the edges dominates the running time.
3. What data structure is typically used to keep track of the connected components in Kruskal's
algorithm?
The data structure typically used to track connected components in Kruskal's Algorithm is the
Union-Find data structure.
4. When does Kruskal's algorithm stop adding edges to the minimum spanning tree?
Kruskal's algorithm stops adding edges to the minimum spanning tree when it has added V − 1 edges, where V is the number of vertices in the graph.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.
Experiment No: 14
Implement program to find Minimum Spanning Tree of a graph using Prim's algorithm.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Objectives: (a) Understand how to use Prim's algorithm to find the minimum spanning
tree. (b) Solve the optimization based problem. (c) Find the time complexity of
the algorithm.
Theory:
In graph theory, a minimum spanning tree (MST) of an undirected, weighted graph is a tree that
connects all the vertices of the graph with the minimum possible total edge weight. In other words,
an MST is a subset of the edges of the graph that form a tree and have the smallest sum of weights.
for (int count = 0; count < V - 1; count++) {
int u = minKey(key, mstSet, V);
mstSet[u] = true;
for (int v = 0; v < V; v++) {
    if (graph[u][v] != 0 && !mstSet[v] && graph[u][v] < key[v]) {
        parent[v] = u;
        key[v] = graph[u][v];
    }
}
}
System.out.println("Edge \tWeight");
for (int i = 1; i < V; i++) {
System.out.println(parent[i] + " - " + i + "\t" + graph[i][parent[i]]);
}
}
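The loop above relies on a helper minKey(...) to select the next vertex. A minimal sketch of that
helper is given below, under the assumption that key[] holds the cheapest known edge weight
connecting each vertex to the growing tree (initialised to 0 for the start vertex and infinity
elsewhere) and mstSet[] marks vertices already included:

// Sketch only: returns the vertex not yet in the MST with the smallest key value.
// Assumes at least one unincluded vertex remains.
static int minKey(int[] key, boolean[] mstSet, int V) {
    int min = Integer.MAX_VALUE;
    int minIndex = -1;
    for (int v = 0; v < V; v++) {
        if (!mstSet[v] && key[v] < min) {
            min = key[v];
            minIndex = v;
        }
    }
    return minIndex;
}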
Observations:
• Prim's algorithm efficiently finds the MST for a given graph. It always returns a correct MST,
provided the graph is connected.
• Maintaining the key[] array of cheapest known connections is critical for efficiently selecting
the next vertex to add.
Result
Conclusion: Prim's algorithm is a greedy approach that efficiently finds the minimum spanning tree of
a graph by growing the tree one vertex at a time, always adding the cheapest edge that connects the
tree to a vertex outside it. It works well on dense graphs and guarantees an optimal solution.
Quiz:
1. What is the time complexity of Prim's algorithm?
With an adjacency matrix and a linear scan for the minimum key, as in the implementation above,
Prim's algorithm runs in O(V^2) time, where V is the number of vertices. With adjacency lists and a
binary heap it improves to O(E log V), where E is the number of edges.
2. Does the above Prim's algorithm always return an optimal answer?
Yes, Prim's algorithm always returns an optimal answer, i.e. a minimum spanning tree, for a
connected, undirected, weighted graph.
3. When does Prim's algorithm stop adding edges to the minimum spanning tree?
Prim's algorithm stops once every vertex has been included in the tree, i.e. after V − 1 edges have
been added, where V is the number of vertices.
4. What data structure is typically used to keep track of the vertices in Prim's algorithm?
A priority queue (min-heap) keyed on the cheapest known connection is typically used; the simple
implementation above instead uses a key array and a boolean mstSet array scanned linearly.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein 2. “Fundamentals of Algorithms”by E. Horowitz et al.
Experiment No: 15
Implement DFS and BFS graph traversal techniques and write their time complexities.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
Depth First Search is a graph traversal algorithm that explores as far as possible along each branch
before backtracking. It is used to search for a node or a path in a graph, and is implemented
recursively or iteratively.
The algorithm starts at a specified node and visits all the nodes in the graph by exploring each branch
as far as possible before backtracking to the previous node. When a node is visited, it is marked as
visited to prevent loops. Breadth First Search (BFS), by contrast, explores the graph level by level:
starting from a source node, it visits all adjacent nodes before moving on to their neighbours, using a
queue to hold the frontier.
i. If the adjacent node has not been visited, mark it as visited and enqueue it
into the queue Q.
Code:
import java.util.LinkedList;
import java.util.Stack;

class Graph {
    private int vertices;
    private int[][] adjMatrix;
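    // --- Sketch only: the constructor and traversal methods below are a minimal
    // --- completion of the Graph class above; the method bodies are assumptions,
    // --- not the original listing. With an adjacency matrix both traversals cost
    // --- O(V^2); with adjacency lists they cost O(V + E).
    Graph(int vertices) {
        this.vertices = vertices;
        this.adjMatrix = new int[vertices][vertices];
    }

    void addEdge(int u, int v) {
        adjMatrix[u][v] = 1;
        adjMatrix[v][u] = 1; // undirected graph
    }

    // Iterative DFS using an explicit stack: go as deep as possible, then backtrack.
    void dfs(int start) {
        boolean[] visited = new boolean[vertices];
        Stack<Integer> stack = new Stack<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int node = stack.pop();
            if (!visited[node]) {
                visited[node] = true;
                System.out.print(node + " ");
                // Push neighbours in reverse order so lower-numbered ones are visited first.
                for (int v = vertices - 1; v >= 0; v--) {
                    if (adjMatrix[node][v] == 1 && !visited[v]) {
                        stack.push(v);
                    }
                }
            }
        }
        System.out.println();
    }

    // BFS using a queue: visit all neighbours of a node before going deeper.
    void bfs(int start) {
        boolean[] visited = new boolean[vertices];
        LinkedList<Integer> queue = new LinkedList<>();
        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int node = queue.poll();
            System.out.print(node + " ");
            for (int v = 0; v < vertices; v++) {
                if (adjMatrix[node][v] == 1 && !visited[v]) {
                    visited[v] = true;
                    queue.add(v);
                }
            }
        }
        System.out.println();
    }
}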
Conclusion: DFS and BFS are both essential traversal techniques for exploring graphs. DFS explores as
deep as possible before backtracking, while BFS explores level by level. These traversal techniques have
different applications depending on the problem's requirements.
Quiz:
1. What data structure is typically used in the iterative implementation of DFS and BFS?
DFS typically uses a stack, and BFS uses a queue.
2. What is the time complexity of DFS on a graph with V vertices and E edges?
The time complexity of DFS is O(V + E) with an adjacency-list representation, where V is the
number of vertices and E is the number of edges; with an adjacency matrix, as in the code above,
each vertex scans all V columns, giving O(V^2).
3. What is the time complexity of BFS on a graph with V vertices and E edges?
The time complexity of BFS is also O(V + E), where V is the number of vertices and E is the
number of edges.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein 2. “Fundamentals of Algorithms”by E. Horowitz et al.
Experiment No: 16
Implement the Rabin-Karp string matching algorithm and write its time complexity.
Date:
Competency and Practical Skills: Algorithmic thinking, Programming Skills, Problem solving
Theory:
It is a string searching algorithm named after its authors Richard M. Karp and Michael O.
Rabin. This algorithm is used to find all the occurrences of a given pattern ‘P’ in a given string ‘S’
in O(Ns + Np) time in the average case, where ‘Ns’ and ‘Np’ are the lengths of ‘S’ and ‘P’, respectively.
For example, the pattern “xyz” occurs in the string “cxyzghxyzvjkxyz” at three positions, so we
report that pattern ‘P’ occurs in string ‘S’ at indices 1, 6, and 12.
The naive pattern searching (brute force) algorithm slides the pattern over the text one position at a
time and checks for a match at each position. If a match is found, it again slides by 1 to check for
subsequent matches. This approach has a time complexity of O(Np * (Ns − Np + 1)).
The Rabin-Karp algorithm starts by computing, at each index of the text, the hash value of the
substring starting at that index with the same length as the pattern. If that hash value equals the
hash value of the given pattern, it performs a full comparison at that particular index.
The Rabin-Karp algorithm first computes the hash value of the pattern P and of the first Np
characters of S. If the hash values are the same, we check the equality of the actual strings: if the
strings match, it is called a hit; if the hash values match but the strings do not, it is called a
spurious hit. If the hash values differ, there is no need to compare the actual strings.
1. Calculate the hash value of the pattern: The hash value of the pattern is calculated using a hash
function, which takes the pattern as input and produces a hash value as output.
2. Calculate the hash values of all the possible substrings of the same length in the text: The hash
values of all the possible substrings of the same length as the pattern are calculated using the
same hash function.
3. Compare the hash value of the pattern with the hash values of all the possible substrings: If a
match is found, the algorithm checks the characters of the pattern and the substring to verify
that they are indeed equal.
4. Move on to the next possible substring: If the characters do not match, the algorithm moves on
to the next possible substring and repeats the process until all possible substrings have been
compared.
Implement the Rabin-Karp algorithm based on above steps and give different input text and
pattern to check its correctness. Also, find the time complexity of your implemented algorithm.
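The excerpt further below shows only the rolling-hash update and the driver from one such
implementation. For context, a minimal sketch of the part of the method that excerpt continues from
is given here. The class name RabinKarp, the base d = 256, and the prime modulus q = 101 are
assumed illustrative choices, and the variable names (Ns, Np, textHash, h, matches) are chosen to
match the excerpt; none of this is the original listing.

public class RabinKarp { // class name is an assumption
    static void rabinKarp(String pattern, String text) {
        int d = 256; // hash base: size of the character alphabet (assumed choice)
        int q = 101; // a small prime modulus (assumed choice)
        int Np = pattern.length();
        int Ns = text.length();
        int patternHash = 0; // hash of the pattern
        int textHash = 0;    // hash of the current window of the text
        int h = 1;           // d^(Np-1) mod q, the weight of the leading character
        int matches = 0;
        for (int k = 0; k < Np - 1; k++) {
            h = (h * d) % q;
        }
        // Initial hash values of the pattern and the first window of the text.
        for (int k = 0; k < Np; k++) {
            patternHash = (d * patternHash + pattern.charAt(k)) % q;
            textHash = (d * textHash + text.charAt(k)) % q;
        }
        for (int i = 0; i <= Ns - Np; i++) {
            // On a hash match, verify character by character to rule out spurious hits.
            if (patternHash == textHash && pattern.equals(text.substring(i, i + Np))) {
                System.out.println("Pattern found at index " + i);
                matches++;
            }
            // The rolling-hash update that slides the window to the next index
            // continues in the excerpt below.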
            if (i < Ns - Np) {
                // Rolling update: remove the leading character text[i],
                // bring in the next character text[i + Np].
                textHash = (d * (textHash - text.charAt(i) * h) + text.charAt(i + Np)) % q;
                if (textHash < 0) {
                    textHash = textHash + q; // keep the hash non-negative
                }
            }
        }
        if (matches == 0) {
            System.out.println("Pattern not found in the given text.");
        }
    }

    public static void main(String[] args) {
        String text = "cxyzghxyzvjkxyz";
        String pattern = "xyz";
        rabinKarp(pattern, text);
    }
}
Observations: The Rabin-Karp algorithm effectively finds the pattern in the given text by
comparing the hash values of the pattern and the substrings of the text. If a match is found, it ensures
correctness by doing a character-by-character comparison.
Result:
Conclusion: The Rabin-Karp algorithm is an efficient algorithm for string matching, especially
when the hash values can be used to quickly eliminate unnecessary character comparisons. It
performs well in the average case with time complexity O(Ns + Np), but can degrade to O(Ns * Np)
in the worst case due to hash collisions.
Quiz:
1. What is the Rabin-Karp algorithm used for?
The Rabin-Karp algorithm is used for finding all occurrences of a pattern in a given text by using
hash values to speed up the comparison process.
2. What is the time complexity of the Rabin-Karp algorithm in the average case?
The average case time complexity of the Rabin-Karp algorithm is O(Ns + Np), where Ns is the
length of the text and Np is the length of the pattern.
3. What is the main advantage of the Rabin-Karp algorithm over the brute force algorithm for
string matching?
The main advantage of the Rabin-Karp algorithm is that it uses hash values to avoid unnecessary
character comparisons, making it faster than the brute force algorithm in the average case.
4. What is the worst-case time complexity of the Rabin-Karp algorithm?
The worst-case time complexity of the Rabin-Karp algorithm is O(Ns * Np) due to hash
collisions, which may require character-by-character comparisons.
Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein 2. “Fundamentals of Algorithms”by E. Horowitz et al.