DAA by Tech Move - Merged

The document discusses the Design and Analysis of Algorithms (DAA), emphasizing the importance of algorithms in problem-solving within computer science. It covers key concepts such as algorithm properties, time and space complexity, types of algorithms, and methods for analyzing their efficiency. Additionally, it introduces asymptotic notation and the Master Theorem for solving recurrence relations.

Introduction

Full Form of DAA:- Design and Analysis of Algorithm


Why Study DAA:- Design and Analysis of Algorithms helps us design algorithms for solving different types of problems in Computer Science. It also helps us design and analyze the logic of how a program will work before developing its actual code.

Algorithm:- An Algorithm is a set of well-defined instructions designed to perform a specific set of tasks.

Note:- Algorithms are used in Computer Science to perform calculations, automatic reasoning, data processing, computations, and problem-solving. Designing an algorithm is important before writing the program code, as the algorithm explains the logic even before the code is developed.
Characteristics of an Algorithm:
➢ Clear and Unambiguous:- The algorithm should be clear and unambiguous. Each of its steps should be clear in all aspects and must lead to only one meaning.
➢ Well-Defined Inputs:- If an algorithm says to take inputs, they should be well-defined inputs.
➢ Well-Defined Outputs:- The algorithm must clearly define what output will be yielded, and it should be well-defined as well.
➢ Finiteness:- The algorithm must be finite, i.e. it should terminate after a finite time.
➢ Feasible:- The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not rely on some future technology.
➢ Language Independent:- The algorithm designed must be language-independent, i.e. it must be just plain instructions that can be implemented in any language, and yet the output will be the same, as expected.
Properties of Algorithm:
• It should terminate after a finite time.
• It should produce at least one output.
• It should take zero or more input.
• Every step in the algorithm must be effective i.e. every step should do some work.

Need of Algorithm
➢ To understand the basic idea of the problem.
➢ To find an approach to solve the problem.
➢ To improve the efficiency of existing techniques.
➢ To understand the basic principles of designing the algorithms.
➢ To compare the performance of the algorithm with respect to other techniques.
➢ It is the best method of description without describing the implementation detail.
➢ A good design can produce a good solution.
➢ To understand the flow of the problem.
➢ To measure the behavior (or performance) of the methods in all cases (best cases,
worst cases, average cases)
➢ With the help of an algorithm, we can also identify the resources (memory, input/output cycles) required by the algorithm.
➢ With the help of algorithms, we convert an art into a science.
Analysis of Algorithm

The analysis is a process of estimating the efficiency of an algorithm. There are two
fundamental parameters based on which we can analyze an algorithm:

Space Complexity:- The space complexity can be understood as the amount of space
required by an algorithm to run to completion.

Time Complexity:- Time complexity is a function of input size n that refers to the
amount of time needed by an algorithm to run to completion.
Types of Algorithm Analysis
➢ Best case:- Defines the input for which the algorithm takes the least (minimum) time. The best case gives the lower bound of an algorithm's running time.
✓ Example:- In linear search, the best case occurs when the search data is present at the first location of large data.
➢ Worst Case:- Defines the input for which the algorithm takes the longest (maximum) time. The worst case gives the upper bound of an algorithm's running time.
✓ Example:- In linear search, the worst case occurs when the search data is not present at all.
➢ Average case:- In the average case, we take all random inputs, calculate the computation time for all of them, and then divide it by the total number of inputs.

Average case = sum of times over all random cases / total number of cases


Types of Algorithm

➢ Sorting algorithms:- Bubble Sort, insertion sort, and many more. These algorithms
are used to sort the data in a particular format.

➢ Searching algorithms:- Linear search, binary search, etc. These algorithms are
used in finding a value or record that the user demands.

➢ Graph Algorithms:- These are used to find solutions to problems like finding the
shortest path between cities, and real-life problems like the traveling salesman problem.
Advantages & Disadvantages of Algorithm
Advantages of Algorithms:-
➢ It is easy to understand.
➢ An algorithm is a step-wise representation of a solution to a given problem.
➢ Language Independent- It is not dependent on any programming language, so it can easily
be understood by anyone.
➢ Debug / Error Finding- Every step is independent / in a flow so it will be easy to spot and fix
the error.
➢ Sub-Problems- It is written in a flow so now the programmer can divide the tasks which
makes them easier to code.

Disadvantages of Algorithms:-
➢ Writing an algorithm takes a long time, so it is time-consuming.
➢ Understanding complex logic through algorithms can be very difficult.
➢ Branching and looping statements are difficult to show in algorithms. (Important)
Time Complexity
Time Complexity:- The time complexity of an algorithm is the total amount of time
required by an algorithm to complete its execution.

Generally, the running time of an algorithm depends upon the following...


➢ Whether it is running on Single processor machine or Multi processor machine.
➢ Whether it is a 32 bit machine or 64 bit machine.
➢ Read and Write speed of the machine.
➢ The amount of time required by an algorithm to perform arithmetic operations, logical operations, return values, assignment operations etc.
➢ Input data
To calculate the time complexity of an algorithm, we need to define a model machine. Let
us assume a machine with following configuration...
➢ It is a Single processor machine
➢ It is a 32 bit Operating System machine
➢ It performs sequential execution
➢ It requires 1 unit of time for Arithmetic and Logical operations
➢ It requires 1 unit of time for Assignment and Return value
➢ It requires 1 unit of time for Read and Write operations

Example 1:-

int sum(int a, int b)
{
    return a + b;
}

It requires 1 unit of time to calculate a+b and 1 unit of time to return the value, so the total is 2 units regardless of the input.
Remark:- If any program requires a fixed amount of time for all input values, then its time
complexity is said to be Constant Time Complexity, written O(1).

Example 2:-

int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

Here the loop body executes n times, so the total time grows in proportion to n.
Remark:- If the amount of time required by an algorithm increases with the increase of
input size, then that time complexity is said to be Linear Time Complexity, written O(n).
Logarithmic Complexity:- It imposes a complexity of O(log N). The algorithm executes on the
order of log(N) steps. When performing operations on N elements, the logarithm is usually
taken base 2.

Quadratic Complexity:- It imposes a complexity of O(n^2). For input size N, it performs on the order of N^2 operations to solve a given problem.

Cubic Complexity:- It imposes a complexity of O(n^3). For input size N, it executes on the order of N^3 steps to solve a given problem.

Exponential Complexity:- It imposes a complexity such as O(2^n) or O(N!). For N elements, the number of operations executed grows exponentially with the input data size.
Space Complexity
Space Complexity:- The total amount of computer memory required by an algorithm to
complete its execution is called the space complexity of that algorithm.

When we design an algorithm to solve a problem, it needs some computer memory to
complete its execution. For any algorithm, memory is required for the following purposes...
➢ To store program instructions.
➢ To store constant values.
➢ To store variable values.
➢ And for a few other things like function calls, jumping statements etc.
Generally, when a program is under execution it uses the computer memory for THREE
reasons. They are as follows...

➢ Instruction Space:- It is the amount of memory used to store the compiled version of instructions.
➢ Environmental Stack:- It is the amount of memory used to store information of partially executed functions at the time of a function call.
➢ Data Space:- It is the amount of memory used to store all the variables and constants.
To calculate the space complexity, we must know the memory required to store values of different data types
(this depends on the compiler). For example, an older 16-bit C compiler, which these notes assume, requires the following...

➢ 2 bytes to store an Integer value.
➢ 4 bytes to store a Floating Point value.
➢ 1 byte to store a Character value.
➢ 6 (or) 8 bytes to store a Double value.

Example 1:-

int square(int a)
{
    return a * a;
}

Remark:- If any algorithm requires a fixed amount of space for all input values, then that
space complexity is said to be Constant Space Complexity.
Example 2:-

int sum(int A[], int n)
{
    int sum = 0, i;
    for(i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}

➢ 'n*2' bytes of memory to store the array parameter 'A[]'
➢ 2 bytes of memory for the integer parameter 'n'
➢ 4 bytes of memory for the local integer variables 'sum' and 'i' (2 bytes each)
➢ 2 bytes of memory for the return value.

Remark:- If the amount of space required by an algorithm is increased with the increase
of input value, then that space complexity is said to be Linear Space Complexity.
Asymptotic Notation
Asymptotic Notation:- The asymptotic notation of an algorithm is a mathematical
representation of its complexity.

Note:- In asymptotic notation, when we want to represent the complexity of an algorithm, we use only the most significant term in its complexity and ignore the less significant terms (here complexity can be Space Complexity or Time Complexity).
[Figure omitted: a plot of time complexity versus input data size, showing the actual running time lying between a lower bound and an upper bound.]
Majorly, we use THREE types of Asymptotic Notations and those are as follows...
➢ Big - Oh (O)
➢ Big - Omega (Ω)
➢ Big - Theta (Θ)

Big - Oh Notation (O):- Big - Oh notation is used to define the upper bound of an algorithm
in terms of Time Complexity. That means Big - Oh notation always indicates the maximum
time required by an algorithm for all input values. That means Big - Oh notation describes
the worst case of an algorithm time complexity.

Consider a function f(n) as the time complexity of an algorithm, where g(n) is its most significant term. If f(n) <= C·g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)):
f(n) = O(g(n))
Big - Omega Notation (Ω):- Big - Omega notation is used to define the lower bound of an
algorithm in terms of Time Complexity. That means Big-Omega notation always indicates
the minimum time required by an algorithm for all input values. That means Big-Omega
notation describes the best case of an algorithm time complexity.

Consider a function f(n) as the time complexity of an algorithm, where g(n) is its most significant term. If f(n) >= C·g(n) for all n >= n0, where C > 0 and n0 >= 1, then we can represent f(n) as Ω(g(n)):
f(n) = Ω(g(n))
Big - Theta Notation (Θ):- Big - Theta notation is used to define a tight bound of an algorithm
in terms of Time Complexity. That means Big - Theta notation bounds the time required by
an algorithm from both above and below, sandwiching the exact order of growth of the
algorithm's time complexity.

Consider a function f(n) as the time complexity of an algorithm, where g(n) is its most significant term. If C1·g(n) <= f(n) <= C2·g(n) for all n >= n0, where C1 > 0, C2 > 0 and n0 >= 1, then we can represent f(n) as Θ(g(n)):
f(n) = Θ(g(n))
Recurrence Relation
A recurrence relation is an equation that defines a sequence based on a rule that gives the
next term as a function of the previous term(s).

There are four methods for solving recurrences:

➢ Substitution Method
➢ Iteration Method
➢ Recursion Tree Method
➢ Master Method

Of these, the Recursion Tree Method and the Master Method are the most popular.
Master Theorem
Master's Theorem is a popular method for solving recurrence relations.

Master's theorem solves recurrence relations of the form:

T(n) = a·T(n/b) + θ(n^k · log^p n)

Here, a >= 1, b > 1, k >= 0 and p is a real number.

Master Theorem Cases:- To solve recurrence relations using Master's theorem, we compare
a with b^k.
Case-01:

If a > b^k, then T(n) = θ(n^(log_b a))

Case-02:

If a = b^k, then:
➢ If p < -1, then T(n) = θ(n^(log_b a))
➢ If p = -1, then T(n) = θ(n^(log_b a) · log log n)
➢ If p > -1, then T(n) = θ(n^(log_b a) · log^(p+1) n)

Case-03:

If a < b^k, then:
➢ If p < 0, then T(n) = O(n^k)
➢ If p >= 0, then T(n) = θ(n^k · log^p n)
Problem-01: Solve the following recurrence relation using Master's theorem-
T(n) = 3T(n/2) + n^2
Solution:- We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).
Then, we have-
a = 3
b = 2
k = 2
p = 0

Now, a = 3 and b^k = 2^2 = 4.
Clearly, a < b^k.
So, we follow Case-03.
Since p = 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n^2 · log^0 n)

Thus,
T(n) = θ(n^2)
Problem-02: Solve the following recurrence relation using Master's theorem-
T(n) = 2T(n/2) + n·logn
Solution:- We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).
Then, we have-
a = 2
b = 2
k = 1
p = 1

Now, a = 2 and b^k = 2^1 = 2.
Clearly, a = b^k.
So, we follow Case-02.
Since p = 1 > -1, we have-
T(n) = θ(n^(log_b a) · log^(p+1) n)
T(n) = θ(n^(log_2 2) · log^(1+1) n)

Thus,
T(n) = θ(n · log^2 n)
Problem-03: Solve the following recurrence relation using Master's theorem-
T(n) = 2T(n/4) + n^0.51
Solution:- We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).
Then, we have-
a = 2
b = 4
k = 0.51
p = 0

Now, a = 2 and b^k = 4^0.51 ≈ 2.0279.
Clearly, a < b^k.
So, we follow Case-03.
Since p = 0, we have-
T(n) = θ(n^k · log^p n)
T(n) = θ(n^0.51 · log^0 n)

Thus, T(n) = θ(n^0.51)


Problem-04: Solve the following recurrence relation using Master's theorem-
T(n) = √2·T(n/2) + logn
Solution:- We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).
Then, we have-
a = √2
b = 2
k = 0
p = 1

Now, a = √2 ≈ 1.414 and b^k = 2^0 = 1.
Clearly, a > b^k.
So, we follow Case-01.
So, we have-
T(n) = θ(n^(log_b a))
T(n) = θ(n^(log_2 √2))
T(n) = θ(n^(1/2))

Thus,
T(n) = θ(√n)
Problem-05: Solve the following recurrence relation using Master's theorem-
T(n) = 8T(n/4) – n^2·logn
Solution:-

➢ The given recurrence relation does not correspond to the general form of Master's theorem, since the function f(n) = –n^2·logn is negative.
➢ So, it cannot be solved using Master's theorem.
Problem-06: Solve the following recurrence relation using Master's theorem-
T(n) = 3T(n/3) + n/2
Solution:-
➢ We write the given recurrence relation as T(n) = 3T(n/3) + n.
➢ This is because in the general form we have θ for the function f(n), which hides constant factors in it.
➢ Now, we can easily apply Master's theorem.

We compare the given recurrence relation with T(n) = a·T(n/b) + θ(n^k · log^p n).
Then, we have-
a = 3
b = 3
k = 1
p = 0

Now, a = 3 and b^k = 3^1 = 3.
Clearly, a = b^k.
So, we follow Case-02.
Since p = 0 > -1, we have-
T(n) = θ(n^(log_b a) · log^(p+1) n)
T(n) = θ(n^(log_3 3) · log^(0+1) n)
T(n) = θ(n^1 · log^1 n)

Thus,
T(n) = θ(n·logn)
Recursion Tree Method
➢ Like Master’s Theorem Recursion Tree is another method for solving the recurrence
relations.
➢ A recursion tree is a tree where each node represents the cost of a certain recursive
sub-problem.
➢ We sum up the values in each node to get the cost of the entire algorithm.

Steps to solve a recurrence relation using the recursion tree method:

Step-01:- Draw a recursion tree based on the given recurrence relation.
Step-02:- Determine-
➢ Cost of each level
➢ Total number of levels in the recursion tree
➢ Number of nodes in the last level
➢ Cost of the last level
Step-03:- Count the total number of nodes in the last level and calculate the cost of the last level.
Step-04:- Sum up the costs of all the levels in the recursion tree.

PRACTICE PROBLEMS BASED ON RECURSION TREE-

Problem-01: Solve the following recurrence relation using recursion tree method-
T(n) = 2T(n/2) + n
Solution:

Step-1: Draw a recursion tree

➢ The cost of dividing a problem of size n into its 2 sub-problems and then combining their solutions is n.
➢ The cost of dividing a problem of size n/2 into its 2 sub-problems and then combining their solutions is n/2, and so on.

Step-2: Determine cost of each level-

➢ Cost of level-0 = n
➢ Cost of level-1 = n/2 + n/2 = n
➢ Cost of level-2 = n/4 + n/4 + n/4 + n/4 = n, and so on.

Step-3: Determine total number of levels in the recursion tree-

➢ Size of sub-problem at level-0 = n/2^0
➢ Size of sub-problem at level-1 = n/2^1
➢ Size of sub-problem at level-2 = n/2^2
➢ Size of sub-problem at level-i = n/2^i

Suppose at level-x (the last level), the size of the sub-problem becomes 1. Then-
n / 2^x = 1
2^x = n
Taking log on both sides, we get-
x·log 2 = log n
x = log₂n

∴ Total number of levels in the recursion tree = log₂n + 1

Step-04: Determine number of nodes in the last level-

➢ Level-0 has 2^0 nodes, i.e. 1 node
➢ Level-1 has 2^1 nodes, i.e. 2 nodes
➢ Level-2 has 2^2 nodes, i.e. 4 nodes

Continuing in a similar manner, we have-
Level-log₂n has n nodes.

Step-05: Determine cost of last level-
Cost of last level = n x T(1) = θ(n)

Step-06: Add the costs of all the levels of the recursion tree and simplify the expression so obtained in terms of asymptotic notation-

T(n) = n x log₂n + θ(n)
     = n·log₂n + θ(n)
     = θ(n·log₂n)
Problem-02: Solve the following recurrence relation using recursion tree method-
T(n) = 2T(n/2) + c.n
Solution:

Step-1: Draw a recursion tree. [Figure omitted: the tree is the same as in Problem-01 except every level costs c·n, so summing log₂n + 1 levels again gives T(n) = θ(n·logn).]

Problem-03: Solve the following recurrence relation using recursion tree method-
T(n) = 3T(n/4) + cn2
Solution:

Step-1: Draw a recursion tree. [Figure omitted: level-i has 3^i nodes, each costing c·(n/4^i)^2, so level-i costs (3/16)^i · c·n^2; the level costs form a decreasing geometric series, giving T(n) = θ(n^2).]

B-Tree Data Structure
B-Tree was developed in the year 1972 as the Height Balanced m-way Search Tree.
Later it was named the B-Tree.
A B-Tree is a self-balanced search tree in which every node can contain
multiple keys and can have more than two children.

B-Tree Properties
➢ All leaf nodes must be at the same level.
➢ Every node in a B-Tree contains at most m children.
➢ The root node must have at least 2 children (if it is not a leaf).
➢ Every node in a B-Tree, except the root node and the leaf nodes, contains at least ⌈m/2⌉ children.
➢ All the key values in a node must be in ascending order.

Examples

[Figure omitted.]

A B-Tree of Order 4 contains a maximum of 3 key values in a node and a maximum of 4
children for a node.

Operations on a B-Tree
➢ Searching
➢ Insertion
➢ Deletion

Search Operation on a B-Tree

➢ Read the search element from the user.
➢ Compare the search element with the first key value of the root node in the tree.
➢ If both match, then display "Given node is found!!!" and terminate the function.
➢ If they do not match, then check whether the search element is smaller or larger than that key value.
➢ If the search element is smaller, then continue the search process in the left subtree.
➢ Repeat steps 3, 4, 5 and 6 until the search element is compared with the last key value in a leaf node.
➢ If the last key value in the leaf node also does not match, then display "Element is not found" and terminate the function.

Insertion Operation on a B-Tree

➢ Check whether the tree is empty.
➢ If the tree is empty, then create a new node as the root node.
➢ If the tree is not empty, then find the suitable leaf node to which the new key value can be added, using the BST rules.
➢ If that leaf node has an empty position, add the new key value to that leaf node in ascending order of key values within the node.
➢ If that leaf node is already full, split that leaf node by sending the middle value to its parent node.

Examples
Construct a B-Tree of Order 3 by inserting numbers from 1 to 10.

Insert-1

Insert-2

Insert-3

Insert-4

Insert-5

Insert-6

Insert-7

Insert-8

Insert-9

Insert-10
Red-Black Tree Data Structure

A Red Black Tree is a category of the self-balancing binary search tree.

It was created in 1972 by Rudolf Bayer.

Red Black Tree is a Binary Search Tree in which every node is colored

either RED or BLACK.

Red-Black Tree Property

Red - Black Tree must be a Binary Search Tree.

The ROOT node must be colored BLACK.

The children of Red colored node must be colored BLACK.

In all the paths of the tree, there should be same number of BLACK

colored nodes.

Every new node must be inserted with RED color.

Every leaf (i.e. NULL node) must be colored BLACK.

Examples

[Figure omitted.]

Every Red-Black Tree is a binary search tree, but every binary search tree
need not be a Red-Black Tree.

Operations on a Red-Black Tree

Recolor

Rotation

Rotation followed by Recolor

Insertion into Red-Black Tree

Check whether tree is Empty.

If tree is Empty then insert the newNode as Root node with color Black

and exit from the operation.

If tree is not Empty then insert the newNode as leaf node with color Red.

If the parent of newNode is Black then exit from the operation.

If the parent of newNode is Red, then check the color of the sibling of

newNode's parent.

If it is colored Black or NULL then make suitable Rotation and Recolor it.

If it is colored Red then perform Recolor.

Repeat the same until tree becomes Red Black Tree.

Examples

Construct a Red-Black Tree by inserting following sequence of numbers

8, 18, 5, 15, 17, 25, 40 & 80.

[Figure omitted: step-by-step construction of the Red-Black Tree.]
Delete operation in Red-Black Tree

After every deletion operation, we need to re-check the Red-Black Tree properties.

The deletion operation in a Red-Black Tree is similar to the deletion operation

in a BST.

Reference: www.btechsmartclass.com
Merge Sort Algorithm

Merge sort is a famous sorting algorithm.

It uses a divide and conquer paradigm for sorting.

It divides the problem into sub problems and solves them individually.

It then combines the results of sub problems to get the solution of the
original problem.

Step by Step Process

[Figure omitted: the list is repeatedly divided into halves, the halves are sorted recursively, and the sorted halves are merged.]
Merge Sort Algorithm
step 1: start

step 2: declare the array and the left, right, and mid variables

step 3: perform the mergesort function:

    if left >= right
        return
    mid = (left + right) / 2
    mergesort(array, left, mid)
    mergesort(array, mid+1, right)
    merge(array, left, mid, right)

step 4: stop

Examples

Consider the following elements have to be sorted in ascending order-

6, 2, 11, 7, 5, 4

The merge sort algorithm works as- [Figure omitted.]

Final sorted list: 2, 4, 5, 6, 7, 11

Time complexity Analysis

Solving the recurrence T(n) = 2T(n/2) + Θ(n), the time complexity of the merge sort algorithm is T(n) = Θ(n·logn).

Space complexity Analysis

Merge sort uses additional memory for the left and right sub-arrays.

Hence, a total of Θ(n) extra memory is needed.


Bubble Sort Algorithm

Bubble sort is the easiest sorting algorithm to implement.

It is inspired by observing the behavior of air bubbles over foam.

It is an in-place sorting algorithm.

Step by Step Process

➢ Select the first element of the list (i.e., the element at the first position in the list).
➢ Compare the selected element with all the other elements in the list.
➢ In every comparison, if any element is found smaller than the selected element (for ascending order), then both are swapped.
➢ Repeat the same procedure with the element at the next position in the list till the entire list is sorted.

Bubble Sort Algorithm

for(i = 0; i < size; i++) {
    for(j = i+1; j < size; j++) {
        if(list[i] > list[j]) {
            temp = list[i];
            list[i] = list[j];
            list[j] = temp;
        }
    }
}

Examples

Consider the following unsorted list of elements. [Figure omitted.]

Select the element at the first position in the list, compare it with all other
elements in the list, and whenever we find an element smaller than the
element at the first position, swap those two elements.

Iteration-01

List after 1st Iteration:-

Iteration-02

Select the element at the second position in the list, compare it with all the
following elements in the list, and whenever we find an element smaller than
the element at the second position, swap those two elements.

List after 2nd Iteration:-

Iteration-03

Select the element at the third position in the list, compare it with all the
following elements in the list, and whenever we find an element smaller than
the element at the third position, swap those two elements.

List after 3rd Iteration:-

Iteration-04

List after 4th Iteration:-

Iteration-05

List after 5th Iteration:-

Iteration-06

List after 6th Iteration:-

Iteration-07

List after 7th Iteration:-

Final sorted list

Time complexity Analysis

Bubble sort uses two loops: an inner loop and an outer loop.

The inner loop deterministically performs O(n) comparisons.

Best Case: O(n)

Average Case: Θ(n^2)

Worst Case: O(n^2)

Space complexity Analysis

Bubble sort uses only a constant amount of extra space for variables

like temp, i, n.

Hence, the space complexity of bubble sort is O(1).

Insertion Sort Algorithm

Insertion sort is an in-place sorting algorithm.

It is inspired by the way in which we sort playing cards.

In the insertion sort algorithm, every iteration moves an element from the
unsorted portion to the sorted portion until all the elements in the list are sorted.

Step by Step Process

➢ Assume that the first element in the list is in the sorted portion and all the remaining elements are in the unsorted portion.
➢ Take the first element from the unsorted portion and insert it into the sorted portion in the order specified.
➢ Repeat the above process until all the elements from the unsorted portion are moved into the sorted portion.

Insertion sort Algorithm

for(i = 1; i < n; i++)
{
    key = A[i];
    j = i - 1;
    while(j >= 0 && A[j] > key)
    {
        A[j+1] = A[j];
        j--;
    }
    A[j+1] = key;
}

Here,
i = variable to traverse the array A
key = variable to store the new number to be inserted into the sorted sub-array
j = variable to traverse the sorted sub-array
Examples

Consider the following unsorted list of elements. [Figure omitted.]

Final sorted list

Time complexity Analysis

Insertion sort algorithm consists of two nested loops.

Owing to the two nested loops, it has O(n^2) time complexity.

Best Case: O(n)

Average Case: O(n^2)

Worst Case: O(n^2)

Space complexity Analysis

It performs all computation in the original array and no other array is

used.

Hence, the space complexity works out to be O(1).

Important Notes

Insertion sort is not a very efficient algorithm when data sets are large.

This is indicated by the average and worst case complexities.


Selection Sort Algorithm

Time complexity Analysis

Selection sort algorithm consists of two nested loops.

Owing to the two nested loops, it has O(n^2) time complexity.

Best Case: O(n^2)

Average Case: O(n^2)

Worst Case: O(n^2)

Space complexity Analysis

It performs all computation in the original array and no other array is used.

Hence, the space complexity works out to be O(1).


Radix Sort Algorithm
Radix sort is another linear-time sorting technique.
It organizes data based on digit position, beginning with the least
significant digit and progressing to the most significant digit.
In each iteration i, the keys are sorted on their i-th digit.

Step by Step Process

Radix sort Algorithm

RADIX_SORT(A, k)

// A is the array to be sorted

// k is the number of digits in the largest element in A

for i ← 1 to k do

    Apply a stable sort (such as counting sort) on digit i of array A

end

Examples
Sort the following sequence of data using radix sort: 165, 958, 635, 694, 480, 637, 598.
The largest element in the list is 958, and its width is 3. So the radix sort
algorithm needs to perform three scans, one on each digit from LSD to MSD.

Time complexity Analysis

Each counting-sort pass costs O(n+k); with d digits the total is O(d·(n+k)).

Best: O(n+k)

Worst: O(n+k)

Average: O(n+k)

Space complexity Analysis

O(max)
Heap Sort Algorithm
Heapsort is an in-place comparison-based sorting algorithm.
It is similar to selection sort, where we first find the extreme element
and place it in its final position.
Heapsort is not a stable algorithm.

What is meant by Heapify

Heapify is the process of creating a heap data structure from a binary
tree represented using an array. It is used to create a Min-Heap or a Max-Heap.
We start from the first non-leaf node, whose index is given by n/2 – 1.
Heapify uses recursion.

heapify(array, i)
    Root = array[i]
    Largest = largest of array[i], array[2*i + 1] and array[2*i + 2]
    if (Root != Largest)
        Swap(Root, Largest)
        heapify the affected subtree

Heap Representation of an array

[Figure omitted.]
Heap Data Structure
It is a complete binary tree data structure.
Except possibly for the last level, all levels are entirely filled.
There are two types of heap tree.
Max Heap:
In a max heap, the value of the parent element is greater than its child
node.
The largest element is at the root.
Max heap is used in heap sort.

Min Heap:
In min-heap, the value of the parent element is less than its child node.
The smallest element is at the root.
Min heap is used in a priority queue.

Examples

Sort given data using heap sort: 17, 32, 45, 76, 40, 13, 55, 30, 7, 11

Heap sort works in two stages :

> Build Max heap

> Sorting
STAGE 1: Build max heap
Here, the number of elements n = 10, so i = n/2 – 1 = 10/2 – 1 = 4.

Step 1: Construct complete binary tree.

Step 2 : For node having index i=4, check if it satisfies the max heap

property. Here node 4 has value 76, which is greater than both of its child,

which satisfies the max heap property.

Step 3 : Check if the 3rd node satisfies the max heap property.

Step 4 : Check if the 2nd node satisfies the max heap property.

Step 5: Check if the 1st node satisfies the max heap property.

Step 6: Now, node 2 violates the max heap property, so it is heapified.

After this, all nodes satisfy the max heap property, so this is the desired max-heap.

STAGE 2: Sorting

In each iteration, the value of the 1st node is swapped with the last
node in the unsorted part, and the heap is adjusted again.

This process is repeated n – 1 times. Each pass puts
the largest remaining element at the end of the unsorted list.

Step 1: Swap A[1] with A[10], and heapify the tree again.
Step 2 : Swap A[1] with A[9], and heapify the tree again.

Step 3 : Swap A[1] with A[8], and heapify the tree again.

Step 4 : Swap A[1] with A[7], and heapify the tree again.

Step 5 : Swap A[1] with A[6], and heapify the tree again.

Step 6 : Swap A[1] with A[5], and heapify the tree again.

Step 7 : Swap A[1] with A[4], and heapify the tree again.

Step 8 : Swap A[1] with A[3], and heapify the tree again

Step 9 : Swap A[1] with A[2], and heapify the tree again.

Step 10: Read the elements; they are sorted.

Time complexity Analysis

Best: O(n·log n)

Worst: O(n·log n)

Average: O(n·log n)

Space complexity Analysis
O(1)
Quick Sort Algorithm

Quick Sort is a famous sorting algorithm.

It sorts the given data items in ascending order.

It uses the idea of divide and conquer approach.

It follows a recursive algorithm.

St ep by Step Algorithm
Consider the first element of the list as pivot

Define two variables i and j. Set i and j to first and last elements of the

list respectively.

Increment i until list[i] > pivot then stop.

Decrement j until list[j] < pivot then stop.

If i < j then exchange list[i] and list[j].

Repeat steps 3, 4 & 5 until i ≥ j.

Exchange the pivot element with list[j] element.
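The steps above can be sketched as a minimal Python implementation, with the first element of each sublist as the pivot (an illustrative sketch, not code from the notes):

```python
def partition(a, low, high):
    # The first element of the sublist is the pivot.
    pivot = a[low]
    i, j = low, high
    while i < j:
        # Increment i until an element greater than the pivot is found.
        while i < high and a[i] <= pivot:
            i += 1
        # Decrement j until an element not greater than the pivot is found.
        while a[j] > pivot:
            j -= 1
        # If i < j, exchange a[i] and a[j] and keep scanning.
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Exchange the pivot with a[j]; the pivot is now in its final position.
    a[low], a[j] = a[j], a[low]
    return j

def quick_sort(a, low=0, high=None):
    if high is None:
        high = len(a) - 1
    if low < high:
        p = partition(a, low, high)
        quick_sort(a, low, p - 1)   # sort elements smaller than the pivot
        quick_sort(a, p + 1, high)  # sort elements greater than the pivot
    return a

print(quick_sort([25, 10, 30, 5, 40, 15]))
```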

Examples
Consider that the following elements have to be sorted in ascending order -

Step -01:

Left and Loc (pivot) points to the first element of the array.

Right points to the last element of the array.

So to begin with, we set loc = 0, left = 0 and right = 5 as-

Step -02:

Since loc points at left, so algorithm starts from right and move towards

left.

As a[loc] < a[right], so algorithm moves right one position towards left as-

Step -03:

Since loc points at left, so algorithm starts from right and move towards

left.

As a[loc] > a[right], so algorithm swaps a[loc] and a[right] and loc points at
right as-

Now,

loc, left and right point at the same element.

This indicates the termination of the procedure.

The pivot element 25 is placed in its final position.

All elements to the right side of element 25 are greater than it.

All elements to the left side of element 25 are smaller than it.

The remaining sublists are solved similarly........................

Time complexity Analysis


Best : O(n log n)

Worst : O(n^2)

Average : O(n log n)
Space complexity Analysis
O(log n)
Binomial Heap
A Binomial Heap is a collection of Binomial trees.
Binomial Heap is used to implement priority queues.
It is an extension of Binary Heap that provides faster union or merge
operation together with other operations provided by Binary Heap.

Binomial Heap Property


Every binomial tree in the heap must follow the min/max heap property.
For any non-negative integer k, there is at most one binomial tree

of order k in the heap.

Order-0

Order-1

Order-2

Order-3
A binomial heap with n nodes has at most 1 + floor(log2 n) binomial trees, where log2 is

the binary logarithm.

Representation of Binomial Heap

>>>>>>Represent trees using left-child, right sibling pointers.

– three links per node (parent, left, right)

>>>>>> Roots of trees connected with singly linked list.

– degrees of trees strictly decreasing from left to right

Each node X stores: Parent of X, Key, Degree of X, Child of X, Sibling of X



Binomial Tree
A Binomial Tree is a unique structure tree which follows the following properties:
A Binomial Tree of order 0 has exactly 1 node.

A Binomial Tree of order k can be constructed by taking two binomial

trees of order k-1 and making one the leftmost child of the other.
Binomial-Tree Property
It has exactly 2^k nodes.
It has a height of k.
There are exactly C(k, i) nodes at depth i, for i = 0, 1, 2, ..., k-1, k.
The root of the tree has degree k, and the children of the root are themselves

binomial trees of order k-1, k-2, ..., 1, 0 from left to right.
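These properties can be checked numerically with a small Python sketch (a hypothetical tuple-based tree representation, used only for illustration):

```python
from math import comb

def build_binomial_tree(k):
    # A node is (key, [children]); the order-0 tree is a single node.
    if k == 0:
        return (0, [])
    t1 = build_binomial_tree(k - 1)
    t2 = build_binomial_tree(k - 1)
    # Make one order-(k-1) tree the leftmost child of the other.
    key, children = t1
    return (key, [t2] + children)

def nodes_per_depth(tree, depth=0, counts=None):
    # Count how many nodes sit at each depth of the tree.
    counts = counts if counts is not None else {}
    counts[depth] = counts.get(depth, 0) + 1
    for child in tree[1]:
        nodes_per_depth(child, depth + 1, counts)
    return counts

k = 4
counts = nodes_per_depth(build_binomial_tree(k))
assert sum(counts.values()) == 2 ** k                       # exactly 2^k nodes
assert all(counts[i] == comb(k, i) for i in range(k + 1))   # C(k, i) nodes at depth i
print(counts)
```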


Operation of Binomial Heap

The operations that can be performed on binomial heap are listed as

follows -

Creating a binomial heap

Finding the minimum key

Union or merging of two binomial heaps

Inserting a node

Extracting minimum key

Decreasing a key

Deleting a node

Creating a Binomial Heap

When we create a new binomial heap, it simply takes O(1) time because

creating a heap will create the head of the heap in which no elements

are attached.

Finding the minimum key Binomial Heap

Binomial heap is the collection of binomial trees, and every binomial

tree satisfies the min-heap property. It means that the root node

contains a minimum value.

Union/Merging of Two Binomial Heap

It is the most important operation performed on the binomial heap.

The time complexity for finding a union is O(logn).

Case 1: If degree[x] ≠ degree[next x] then move pointer ahead.

Case 2: If degree[x] = degree[next x] = degree[sibling(next x)], then move

the pointer ahead.

Case 3: If degree[x] = degree[next x] ≠ degree[sibling[next x]] and

key[x] < key[next x], then remove [next x] from the root list and attach it to x.

Case 4: If degree[x] = degree[next x] ≠ degree[sibling[next x]]

and key[x] > key[next x], then remove x from the root list and attach it to [next x].


Inserting an element in Binomial Heap

Inserting an element in the heap can be done by simply creating a new

heap only with the element to be inserted, and then merging it with the

original heap.

Due to the merging, the single insertion in a heap takes O(logn) time.

Insert 15

Extracting the minimum key Operation in

Binomial Heap

It means that we have to remove an element with the minimum key

value.

We know, in min-heap, the root element contains the minimum key

value.

So, we have to compare the key value of the root node of all the
binomial trees.

Let's see an example of extracting the minimum key from the heap.

12, 7, and 15 are the key values of the root node in the above heap in
which 7 is minimum; therefore, remove node 7 from the tree

The degree of node 12 and node 25 is B0, and the degree of node 15 is
B2, so use the Merging/Union Operation.

The above heap is the final heap after extracting the minimum key.

Decreasing a key Operation in Binomial Heap

Once the value of the key is decreased, it might be smaller than its

parent's key, which results in the violation of the min-heap property.

If such case occurs after decreasing the key, then exchange the

element with its parent, grandparent, and so on until the min-heap

property is satisfied.

Let's understand the process of decreasing a key in a binomial heap


using an example.

After decreasing 45 by 7, the heap will be -

After decreasing the key, the min-heap property of the above heap is
violated.

Now, compare 7 with its parent 30; as it is lesser than the parent, swap
7 with 30, and after swapping, the heap will be -

Swap the element 7 with its parent 8, after swapping the heap will be -

Now, the min-heap property of the above heap is satisfied. So, the
above heap is the final heap after decreasing a key.

Deleting a node from the Binomial heap

First, we have to decrease its key to negative infinity (or - ∞).


Then delete the minimum in the heap(Extracting the minimum key

Operation).

Example : Suppose we have to delete the node 41 from the heap -

First, replace the node with negative infinity (or - ∞)

Now, swap the negative infinity with its root node in order to maintain
the min-heap property.

Now, again swap the negative infinity with its root node.

Extract the minimum key from the heap. Since the minimum key in the
above heap is -infinity so we will extract this key, and the heap would be :

The final heap after deleting the node 41.

Time Complexity of the Binomial heap

Operations Time complexity

Finding the minimum key O(log n)

Inserting a node O(log n)

Extracting minimum key O(log n)

Decreasing a key O(log n)

Union or merging O(log n)

Deleting a node O(log n)

Space Complexity of the Binomial heap

The space complexity of a binomial heap with n elements is O(n).
Fibonacci Heap
A fibonacci heap is a data structure that consists of a collection of
trees which follow the min heap or max heap property.
In a fibonacci heap, a node can have more than two children or no
children at all.
It has more efficient heap operations than that supported by the
binomial and binary heaps.

Fibonacci Heap Property


It is a set of min heap-ordered trees.
A pointer is maintained at the minimum element node.
It consists of a set of marked nodes. (Decrease key operation)
It can have multiple trees of equal degrees, and each tree doesn't need
to have 2^k nodes.
All the roots and siblings are stored in a separated circular-doubly-linked list.

All the trees in the Fibonacci Heap are rooted but not ordered.

Remark:-

Each node x stores the fields: Key[x], Degree[x], P[x], Mark[x],

Left[x], Right[x], Child[x]

Operation on Fibonacci Heap


Insert - O(1)
Find Min - O(1)
Union - O(1)
Extract Min - O(log n)
Decrease Key - O(1)

Delete Node - O(log n)

Insertion operation on Fibonacci Heap


Create a new node for the element.
Check if the heap is empty.
If the heap is empty, set the new node as a root node and mark it min.
Else, insert the node into the root list and update min.

Find Min operation on Fibonacci Heap


The minimum element is always given by the min pointer.

Union operation on Fibonacci Heap


Concatenate the roots of both the heaps.
Update min by selecting a minimum key from the new root lists.

Extract Min operation on Fibonacci Heap


It is the most important operation on a fibonacci heap.
In this operation, the node with minimum value is removed from the
heap and the tree is re-adjusted.
The following steps are followed:
Delete the min node.

Set the min-pointer to the next root in the root list.


Create an array of size equal to the maximum degree of the trees in the
heap before deletion.
Do the following (steps 5-7) until there are no multiple roots with the
same degree.
Map the degree of current root (min-pointer) to the degree in the array.
Map the degree of next root to the degree in array.
If there is more than one mapping for the same degree, then apply the
union operation to those roots such that the min-heap property is
maintained.

Example:- We will perform an extract-min operation on the heap below.

Delete the min node, add all its child nodes to the root list and set the
min-pointer to the next root in the root list.

The maximum degree in the tree is 3. Create an array of size 4 and map
degree of the next roots with the array.

Here, 23 and 7 have the same degrees, so unite them.

Again, 7 and 17 have the same degrees,

Again 7 and 24 have the same degree,

Map the next nodes.

Again, 52 and 21 have the same degree,

Similarly, unite 21 and 18.

Map the remaining root.

The final heap is.


Decreasing a Key operation on Fibonacci Heap


In decreasing a key operation, the value of a key is decreased to a lower
value.

Decrease-Key:-

Select the node to be decreased, x, and change its value to the new
value k.
If the parent of x, y, is not null and the key of parent is greater than that
of the k then call Cut(x) and Cascading-Cut(y) subsequently.
If the key of x is smaller than the key of min, then mark x as min.
Cut:-

Remove x from the current position and add it to the root list.
If x is marked, then mark it as false.
Cascading-Cut:-

If the parent of y is not null then follow the following steps.


If y is unmarked, then mark y.
Else, call Cut(y) and Cascading-Cut(parent of y).

Example: Decreasing 46 to 15.

Cut part: Since the parent of the node (24) is not NIL and 15 < its parent, cut it

and add it to the root list.

Cascading-Cut part: mark 24.

Example: Decreasing 35 to 5

Cut part: Since the parent of the node (26) is not NIL and 5 < its parent, cut it and add it to the root list.

Cascading-Cut part: Since 26 is marked, the flow goes to Cut and

Cascading-Cut.

Cut(26): Cut 26 and add it to the root list and mark it as false.

Cascading-Cut(24):

Since the 24 is also marked, again call Cut(24) and Cascading-Cut(7).

These operations result in the tree below.

Since 5 < 7, mark 5 as min.

Deleting a Node operation on Fibonacci Heap
This process makes use of decrease-key and extract-min operations.
The following steps are followed for deleting a node.
Let k be the node to be deleted

Apply decrease-key operation to decrease the value of k to the lowest


possible value (i.e. -∞).
Apply extract-min operation to remove this node.
Trie Data Structure

A Trie is an advanced data structure.

It is also known as prefix tree or digital tree.

It is a tree that stores the data in an ordered and efficient way.

We generally use tries to store strings.

Each node of a trie can have as many as 26 references (pointers).

Application of Trie Data Structure

Spell Checker:

Spell checking is a three-step process. First, look for that word in a

dictionary, generate possible suggestions, and then sort the suggestion

words with the desired word at the top.
Auto-complete:

Auto-complete functionality is widely used on text editors, mobile applications,

and the Internet. It provides a simple way to find an alternative word to complete

the word for the following reasons.

It provides an alphabetical filter of entries by the key of the node.

We trace pointers only to get the node that represents the string entered

by the user.

As soon as you start typing, it tries to complete your input.
Browser history:

It is also used to complete the URL in the browser. The browser keeps a

history of the URLs of the websites you've visited.

Trie Data Structure Property


The root node of the trie always represents the null node.

Each child of nodes is sorted alphabetically.

Each node can have a maximum of 26 children (A to Z).

Each node (except the root) can store one letter of the alphabet.

A trie representation for the bell, bear, bore, bat, ball, stop, stock, and stack.

Advantages of Trie Data Structure

Insertion and search of strings can be faster than with hash tables and binary

search trees.

It provides an alphabetical filter of entries by the key of the node.

Disadvantages of Trie Data Structure

It requires more memory to store the strings.

It is slower than the hash table.
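A minimal Python sketch of a trie with insert, search and prefix lookup (it uses a dictionary of children instead of a fixed 26-slot array, purely for brevity):

```python
class TrieNode:
    def __init__(self):
        # Up to 26 children, one per lowercase letter.
        self.children = {}
        self.is_end = False   # marks that a complete word ends here

class Trie:
    def __init__(self):
        self.root = TrieNode()   # the root represents the null (empty) prefix

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def search(self, word):
        # True only if the exact word was inserted.
        node = self._walk(word)
        return node is not None and node.is_end

    def starts_with(self, prefix):
        # True if any inserted word begins with the prefix (auto-complete).
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node

t = Trie()
for w in ["bell", "bear", "bore", "bat", "ball", "stop", "stock", "stack"]:
    t.insert(w)
print(t.search("bear"), t.search("be"), t.starts_with("sto"))  # True False True
```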


Divide and Conquer

A divide and conquer algorithm is a strategy of solving a large problem by

Breaking the problem into smaller sub-problems.

Solving the sub-problems, and

Combining them to get the desired output.

How Divide and Conquer Algorithms Work?

Divide the original problem into a set of subproblems.

Conquer: Solve every subproblem individually, recursively.

Combine: Put together the solutions of the subproblems to get the solution

to the whole problem.

If we expand out two more recursive steps, it looks like this:


Searching Algorithms

Searching is the process of finding some particular element in the list.

If the element is present in the list, then the process is called successful,

and the process returns the location of that element; otherwise, the search

is called unsuccessful.

Linear Search

It is the simplest searching algorithm.

We start from one end and check every element of the list until the

desired element is found.

Working of Linear search

Search for an element k = 1 in the list.

Start from the first element, compare k with each element x.

If x == k, return the index.

Else, return not found.

Linear Search Algorithm

LinearSearch(array, key)
    for each item in the array
        if item == key
            return its index
    return not found

Time complexity Analysis

Best Case : O(1)

Average Case : O(n)

Worst Case : O(n)

The space complexity of linear search is O(1).
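The pseudocode translates directly into a minimal Python sketch (returning -1 for "not found" is a convention chosen here, not taken from the notes):

```python
def linear_search(arr, key):
    # Compare key with each element in turn; return the index on a match.
    for index, item in enumerate(arr):
        if item == key:
            return index
    return -1   # unsuccessful search

print(linear_search([2, 4, 0, 1, 9], 1))  # 3
```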

Binary Search Algorithms


Binary Search is a searching algorithm for finding an element's position

in a sorted array.

In this approach, the element is always searched in the middle of a

portion of an array.

Binary search can be implemented only on a sorted list of items. If the

elements are not sorted already, we need to sort them first.

Binary Search Working


Binary Search Algorithm can be implemented in two ways

Iterative Method

Recursive Method
The recursive method follows the divide and conquer approach.

Let x = 4 be the element to be searched.

Set two pointers low and high at the lowest and the highest positions

respectively.

Find the middle element mid of the array, i.e. arr[(low + high)/2] = 6

If x == arr[mid], then return mid.

If x > arr[mid], then set the value of low = mid + 1.

If x < arr[mid], then set the value of high = mid - 1.

Repeat steps 3 to 6 until low meets high.

x = 4 is found.
Time Complexities
Best case complexity: O(1)

Average case complexity: O(log n)

Worst case complexity: O(log n)

Space Complexity


The space complexity of the binary search is O(1).
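The iterative method above can be sketched in Python (an illustrative sketch; the sorted example array is made up):

```python
def binary_search(arr, x):
    # Iterative binary search on a sorted list; returns the index or -1.
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2   # middle of the current portion
        if arr[mid] == x:
            return mid
        elif x > arr[mid]:
            low = mid + 1         # search the right half
        else:
            high = mid - 1        # search the left half
    return -1

print(binary_search([3, 4, 5, 6, 7, 8, 9], 4))  # 1
```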

Convex Hull
A polygon is called a convex polygon if the angle between any of its two

adjacent edges is always less than 180°. Otherwise, it is called a
concave polygon.

Convex hull is the smallest region covering given set of points.

Complex polygons are self-intersecting polygons.

(a) Concave polygon (b) Convex polygon (c) Complex polygon

Given a set of points in the plane, the convex hull of the set is the

smallest convex polygon that contains all the points of it.
Process to find Convex Hull

There exist multiple approaches to solve the convex hull problem. We will

discuss how to solve it using the divide and conquer approach.

Sort all of the points by their X coordinates. The tie is broken by ranking

points according to their Y coordinate.
Determine two extreme points A and B, where A represents the leftmost
point and B represents the rightmost point. A and B would be the


convex hull's vertices. Lines AB and BA should be added to the solution

set.

Find the point C that is the farthest away from line AB.

Calculate the convex hull of the points on the line AC's right and left
sides. Remove line AB from the original solution set and replace it with
AC and CB.

Process the points on the right side of line BA in the same way.

Find the convex hull of the points on the left and right of the line
connecting the two farthest points of that specific convex hull

recursively.
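This divide and conquer procedure (often called quickhull) can be sketched in Python. A cross product stands in for the "orthogonally farthest" test, since only which point is farthest matters; the sample points are made-up illustration data:

```python
def cross(o, a, b):
    # Twice the signed area of triangle (o, a, b);
    # positive when b lies strictly to the left of the directed line o -> a.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def find_hull(points, a, b, hull):
    # Points strictly to the left of the directed line a -> b.
    left = [p for p in points if cross(a, b, p) > 0]
    if not left:
        hull.append(a)   # edge a-b is a hull edge; record its start vertex
        return
    # Point farthest from line a -> b; replace edge AB with AC and CB.
    c = max(left, key=lambda p: cross(a, b, p))
    find_hull(left, a, c, hull)
    find_hull(left, c, b, hull)

def quickhull(points):
    pts = sorted(set(points))   # sort by X, ties broken by Y
    a, b = pts[0], pts[-1]      # leftmost point A and rightmost point B
    hull = []
    find_hull(pts, a, b, hull)  # points on one side of AB
    find_hull(pts, b, a, hull)  # points on the other side (BA)
    return hull

pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
print(sorted(quickhull(pts)))  # the interior point (1, 1) is excluded
```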

Find the convex hull for a given set of points using divide and conquer approach.

Find left most and rightmost points from the set P and label them as A
and B. Label all the points on the right of AB as S1 and all the points on

the right of BA as S2.

Solution = {AB, BA}

Make a recursive call to FindHull (S1, A, B) and FindHull(S2 ,B, A)


FindHull(S1 ,A, B)

Find point C orthogonally farthest from line AB

Solution = Solution – { AB } U {AC, CB}

= {AC, CB, BA}

Make recursive calls: FindHull (X1, A, C) and FindHull (X2, C, B)

FindHull(X1, A, C)

Find point D orthogonally farthest from line AC

Solution = Solution – {AC} U {AD, DC}

= {AD, DC, CB, BA}

Make recursive calls: FindHull (X1, A, D) and FindHull (X2, D, C)

But X1 and X2 sets are empty, so algorithm returns

FindHull(X2, C, B)

Find point E orthogonally farthest from line CB

Solution = Solution – {CB} U {CE, EB}

= {AD, DC, CE, EB, BA}

Make recursive calls: FindHull (X1, C, E) and FindHull (X2, E, B).

But X1 and X2 sets are empty, so algorithm returns

Now we will explore the points in S2, on the right-hand side of the line
BA

FindHull(S2 ,B, A)

Find point F orthogonally farthest from line BA

Solution = Solution – {BA} U {BF, FA}

= {AD, DC, CE, EB, BF, FA}

Make recursive calls : FindHull (X1, B, F) and FindHull (X2, F, A)

But X1 set is empty, so call to FindHull (X1, B, F) returns

FindHull (X2, F, A)

Find point G orthogonally farthest from line FA

Solution = Solution – {FA} U {FG, GA}

= {AD, DC, CE, EB, BF, FG, GA}

Make recursive calls:

FindHull (X1, F, G) and FindHull (X2, G, A).

But X1 and X2 sets are empty, so algorithm returns. And no more


recursive calls are left. So polygon with edges (AD, DC, CE, EB, BF, FG,

GA) is the convex hull of given points.

Matrix Multiplication using Divide and Conquer


Matrix Multiplication using Divide and Conquer along with the
conventional method.

Conventional Approach

Matrix multiplication is an important operation in many mathematical

and image processing applications.

Suppose we want to multiply two matrices A and B, each of size n x n;

the multiplication operation is defined as

C11 = A11 B11 + A12 B21

C12 = A11 B12 + A12 B22

C21 = A21 B11 + A22 B21

C22 = A21 B12 + A22 B22


This approach is very costly as it requires 8 multiplications and 4 additions.

Matrix Multiplication using Strassen’s Method


It is a divide and conquer strategy-based matrix multiplication technique

that requires fewer multiplications than the traditional method.

The multiplication operation is defined as follows using Strassen s ’


method:

C11 = S1 + S4 – S5 + S7

C12 = S3 + S5

C21 = S2 + S4

C22 = S1 + S3 – S2 + S6

Where,

S1 = (A11 + A22) * (B11 + B22)

S2 = (A21 + A22) * B11

S3 = A11 * (B12 – B22)

S4 = A22 * (B21 – B11)

S5 = (A11 + A12) * B22

S6 = (A21 – A11) * (B11 + B12)

S7 = (A12 – A22) * (B21 + B22)

Let us check if it is same as the conventional approach.

C12 = S3 + S5

= A11 * (B12 – B22) + (A11 + A12) * B22

= A11 * B12 – A11 * B22 + A11 * B22 + A12 * B22

= A11 * B12 + A12 * B22

This is same as C12 derived using the conventional approach. Similarly we

can derive all Cij for Strassen’s matrix multiplication.
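The S1–S7 formulas can be checked numerically with a small Python sketch that treats the blocks as plain numbers (a 2x2 scalar case, for illustration):

```python
def strassen_2x2(A, B):
    # A and B are 2x2 matrices given as [[a11, a12], [a21, a22]].
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # Seven products instead of eight.
    s1 = (a11 + a22) * (b11 + b22)
    s2 = (a21 + a22) * b11
    s3 = a11 * (b12 - b22)
    s4 = a22 * (b21 - b11)
    s5 = (a11 + a12) * b22
    s6 = (a21 - a11) * (b11 + b12)
    s7 = (a12 - a22) * (b21 + b22)
    return [[s1 + s4 - s5 + s7, s3 + s5],
            [s2 + s4, s1 + s3 - s2 + s6]]

def conventional_2x2(A, B):
    # Conventional approach: 8 multiplications and 4 additions.
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    return [[a11*b11 + a12*b21, a11*b12 + a12*b22],
            [a21*b11 + a22*b21, a21*b12 + a22*b22]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == conventional_2x2(A, B)
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]]
```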

Example of Matrix Multiplication


Multiply given two matrices A and B using Strassen’s approach, where

Solution:

let,

S1 = (A11 + A22) x (B11 + B22)

S2 = (A21 + A22) x B11

S3 = A11 x (B12 – B22)

S4 = A22 x (B21 – B11)

S5 = (A11 + A12) x B22

S6 = (A21 – A11) x (B11 + B12)

S7 = (A12 – A22) x (B21 + B22)
Greedy Algorithms
The greedy method is one of the strategies like Divide and conquer used

to solve the problems.


This method is used for solving optimization problems.
An optimization problem is a problem that demands either maximum or
minimum results.
The Greedy method is the simplest and most straightforward approach.
It is not an algorithm, but it is a technique.
It is used in finding the shortest path.

Components of Greedy Algorithm


A candidate set − A solution is created from this set.
A selection function − Used to choose the best candidate to be added
to the solution.
A feasibility function − Used to determine whether a candidate can be
used to contribute to the solution.
An objective function − Used to assign a value to a solution or a partial
solution.
A solution function − Used to indicate whether a complete solution has
been reached.

Drawback of Greedy Approach


The greedy algorithm doesn't always produce the optimal solution. This

is the major disadvantage of the algorithm.

For example, suppose we want to find the longest path in the graph below

from root to leaf. Let's use the greedy algorithm here.

Our problem is to find the largest path. And, the optimal solution at the

moment is 3. So, the greedy algorithm will choose 3.

Finally, the weight of the only child of 3 is 1. This gives us our final result

20 + 3 + 1 = 24.

However, it is not the optimal solution. There is another path that carries
more weight (20 + 2 + 10 = 32).

Spanning Tree
A spanning tree is a subset of an undirected Graph that has all the vertices
connected by minimum number of edges.
If all the vertices are connected in a graph, then there exists at least one
spanning tree. In a graph, there may exist more than one spanning tree.

Properties:
A spanning tree does not have any cycle.
Any vertex can be reached from any other vertex.

Minimum Spanning Tree


A Minimum Spanning Tree (MST) is a subset of edges of a connected
weighted undirected graph that connects all the vertices together with
the minimum possible total edge weight.
To derive an MST, Prim’s algorithm or Kruskal’s algorithm can be used.

Prim’s Algorithm
Prim’s Algorithm is a famous greedy algorithm.
It is used for finding the Minimum Spanning Tree (MST) of a given graph.
To apply Prim’s algorithm, the given graph must be weighted, connected
and undirected.

Prim’s Algorithm Implementation


Step-01:

Randomly choose any vertex.

The vertex connecting to the edge having least weight is usually
selected.
Step-02:

Find all the edges that connect the tree to new vertices.
Find the least weight edge among those edges and include it in the
existing tree.
If including that edge creates a cycle, then reject that edge and look for
the next least weight edge.

Step-03:

Keep repeating step-02 until all the vertices are included and Minimum
Spanning Tree (MST) is obtained.

PRACTICE PROBLEMS BASED ON PRIM’S ALGORITHM

Construct the minimum spanning tree (MST) for the given graph using

Prim’s Algorithm-

Solution: The above discussed steps are followed


Step 1

Step 2

Step 3

Step 4

Step 5

Step 6

Since all the vertices have been included in the MST, we stop.

Now, Cost of Minimum Spanning Tree

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units
Time Complexities
Worst case time complexity of Prim’s Algorithm is-

O(ElogV) using binary heap

O(E + VlogV) using Fibonacci heap

If adjacency list is used to represent the graph, then using breadth first search,

all the vertices can be traversed in O(V + E) time.


Min heap operations like extracting minimum element and decreasing key

value takes O(logV) time.


So, overall time complexity

= O(E + V) x O(logV)

= O((E + V)logV)

= O(ElogV)
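The steps above can be sketched with a binary heap in Python (the small example graph and its weights are made-up illustration data, not the graph from the practice problem):

```python
import heapq

def prim_mst_cost(graph, start):
    # graph: adjacency dict {u: [(weight, v), ...]} for an undirected graph.
    visited = {start}               # vertices already in the tree
    edges = list(graph[start])      # candidate edges leaving the tree
    heapq.heapify(edges)
    cost = 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)  # least-weight edge leaving the tree
        if v in visited:
            continue                 # it would create a cycle; reject it
        visited.add(v)
        cost += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(edges, edge)
    return cost

g = {
    'a': [(1, 'b'), (3, 'c')],
    'b': [(1, 'a'), (3, 'c'), (6, 'd')],
    'c': [(3, 'a'), (3, 'b'), (4, 'd')],
    'd': [(6, 'b'), (4, 'c')],
}
print(prim_mst_cost(g, 'a'))  # 1 + 3 + 4 = 8
```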

PRACTICE PROBLEMS BASED ON PRIM’S ALGORITHM


Using Prim’s Algorithm, find the cost of minimum spanning tree (MST) of the

given graph-

Solution:

Now, Cost of Minimum Spanning Tree

= Sum of all edge weights

= 1 + 4 + 2 + 6 + 3 + 10

= 26 units

Kruskal’s Algorithm
Kruskal’s Algorithm is a famous greedy algorithm.
It is used for finding the Minimum Spanning Tree (MST) of a given graph.
To apply Kruskal’s Algorithm, the given graph must be weighted,
connected and undirected.

Kruskal’s Algorithm Implementation


Step-01:

Sort all the edges from low weight to high weight.
Step-02:

Take the edge with the lowest weight and use it to connect the vertices

of the graph.
If adding an edge creates a cycle, then reject that edge and go for the
next least weight edge.

Step-03:

Keep adding edges until all the vertices are connected and a Minimum
Spanning Tree (MST) is obtained.

PRACTICE PROBLEMS BASED ON KRUSKAL’s ALGORITHM


Construct the minimum spanning tree (MST) for the given graph using

Kruskal’s Algorithm-

Solution: To construct MST using Kruskal’s Algorithm,

Simply draw all the vertices on the paper.


Connect these vertices using edges with minimum weights such that no

cycle gets formed.


Step 1

Step 2

Step 3

Step 4

Step 5

Step 6

Step 7

Since all the vertices have been connected / included in the MST, we stop.

Weight of the MST

= Sum of all edge weights

= 10 + 25 + 22 + 12 + 16 + 14

= 99 units

Time Complexities
Worst case time complexity of Kruskal’s Algorithm

= O(ElogV) or O(ElogE)
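The three steps above can be sketched in Python with a union-find structure for cycle detection (the edge list is made-up illustration data):

```python
def kruskal_mst_cost(n, edges):
    # edges: list of (weight, u, v); vertices numbered 0 .. n-1.
    parent = list(range(n))

    def find(x):
        # Union-find with path halving detects cycles cheaply.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    cost, used = 0, 0
    for w, u, v in sorted(edges):   # Step 1: sort edges from low to high weight
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                # Step 2: reject edges that form a cycle
        parent[ru] = rv
        cost += w
        used += 1
        if used == n - 1:           # Step 3: stop when all vertices connect
            break
    return cost

edges = [(1, 0, 1), (3, 0, 2), (3, 1, 2), (6, 1, 3), (4, 2, 3)]
print(kruskal_mst_cost(4, edges))  # 1 + 3 + 4 = 8
```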

Dijkstra’s Algorithm
Dijkstra's algorithm allows us to find the shortest path between any two
vertices of a graph.
Dijkstra algorithm is a single-source shortest path algorithm.
Here, single-source means that only one source is given, and we have to
find the shortest path from the source to all the nodes.

Dijkstra’s Algorithm Implementation

Dijkstra's Algorithm works on the basis that any subpath B -> D of the
shortest path A -> D between vertices A and D is also the shortest path
between vertices B and D.
PRACTICE PROBLEMS BASED ON DIJKSTRA’S ALGORITHM
It is easier to start with an example and then think about the algorithm.

Start with a weighted graph

Choose a starting vertex and assign infinity path values to all other vertices

Go to each vertex and update its path length

If the path length of the adjacent vertex is lesser than new path length, don't

update it

Avoid updating path lengths of already visited vertices

After each iteration, we pick the unvisited vertex with the least path length.

So we choose 5 before 7.
Notice how the rightmost vertex has its path length updated twice

Repeat until all the vertices have been visited

Time Complexities
Time Complexity: O(E Log V)

where, E is the number of edges and V is the number of vertices.


Space Complexities
Space Complexity: O(V)

Dijkstra’s Algorithm Application


To find the shortest path
In social networking applications
In a telephone network
To find the locations in the map

PRACTICE PROBLEMS BASED ON DIJKSTRA’S ALGORITHM


It is easier to start with an example and then think about the algorithm.

(Figure: a weighted graph with vertices A, B, C, D, E and F.)

Source            Destination

                  A     B     C     D     E     F

A                 0     ∞     ∞     ∞     ∞     ∞

A->B              0     7     9     ∞     ∞     14

A,B,C             0     7     9     22    ∞     14

A,B,C,F           0     7     9     20    ∞     11

A,B,C,F,D         0     7     9     20    20    11

A,B,C,F,D,E       0     7     9     20    20    11

Final solution:

(Figure: the resulting shortest-path tree.)

A->B, A->C, A->C->F, A->C->D, A->C->F->E
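This example can be checked with a minimal heap-based Python sketch. Note the edge list below is reconstructed from the figure's weights, so treat it as an assumption; it does reproduce the distance table above:

```python
import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, weight), ...]}; returns shortest distance to each vertex.
    dist = {v: float('inf') for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)   # unvisited vertex with least path length
        if d > dist[u]:
            continue                # stale queue entry; u was finalized already
        for v, w in graph[u]:
            if d + w < dist[v]:     # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Undirected edges, assumed from the figure above.
edges = [('A', 'B', 7), ('A', 'C', 9), ('A', 'F', 14), ('B', 'C', 10),
         ('B', 'D', 15), ('C', 'D', 11), ('C', 'F', 2), ('D', 'E', 6),
         ('E', 'F', 9)]
graph = {v: [] for v in 'ABCDEF'}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))
print(dijkstra(graph, 'A'))  # B: 7, C: 9, D: 20, E: 20, F: 11
```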

Bellman Ford Algorithm


It is a single-source shortest path algorithm.
This algorithm helps us find the shortest path from a vertex to all other vertices
of a weighted graph.
There are various other algorithms used to find the shortest path like
Dijkstra algorithm, etc.
If the weighted graph contains the negative weight values, then the
Dijkstra algorithm does not confirm whether it produces the correct
answer or not.
In contrast to Dijkstra algorithm, bellman ford algorithm guarantees the
correct answer even if the weighted graph contains the negative weight
values.

(Figure: a small example graph with three vertices A, B and C and edge weights 10, -8 and 5, showing the distance estimates being relaxed step by step.)

Every edge is relaxed (V - 1) + 1 times: V - 1 passes to compute the distances, plus one extra pass to detect negative cycles.

How Bellman Ford's algorithm works


Bellman Ford algorithm works by overestimating the length of the path
from the starting vertex to all other vertices. Then it iteratively relaxes
those estimates by finding new paths that are shorter than the
previously overestimated paths.


Bellman Ford Algorithm Application


For calculating shortest paths in routing algorithms

For finding the shortest path

Time Complexities
Best Case Complexity: O(E)

Average Case Complexity: O(VE)

Worst Case Complexity: O(VE)

Space Complexities
Space Complexity: O(V)

PRACTICE PROBLEMS BASED ON Bellman Ford’s ALGORITHM


To find the shortest path of the above graph, the first step is to note down all

the edges, which are given below:

(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)

The source vertex is 'A'; therefore, the distance value at vertex A is 0 and
the distance value at all the other vertices is infinity.

Since the graph has six vertices, it will have five iterations.

First iteration

Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now
use the relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 6

Since (0 + 6) is less than ∞, so update

d(v) = d(u) + c(u , v)

d(v) = 0 + 6 = 6

Therefore, the distance of vertex B is 6.

Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now
use the relaxing formula:

d(u) = 0

d(v) = ∞

c(u , v) = 4

Since (0 + 4) is less than ∞, so update

d(v) = d(u) + c(u , v)

d(v) = 0 + 4 = 4

Therefore, the distance of vertex C is 4.

similar we perform for all vertex.......

Second iteration

In the second iteration, we again check all the edges. The first edge is (A, B).
Since (0 + 6) is greater than 1 so there would be no updation in the vertex B.

The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no
updation in the vertex C.

The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no


updation in the vertex D.

The next edge is (B, E). Since (1 - 1) equals to 0 which is less than 5 so
update:

d(v) = d(u) + c(u, v)

d(E) = d(B) +c(B , E)

= 1 - 1 = 0

The next edge is (C, E). Since (3 + 3) equals to 6 which is greater than 5 so
there would be no updation in the vertex E.

The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no


updation in the vertex C.

The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no


updation in the vertex F.

The next edge is (E, F). Since (5 + 3) equals 8, which is greater than 4,
there would be no updation in the vertex F.

The next edge is (C, B). Since (3 - 2) equals 1, there would be no
updation in the vertex B.
Third iteration

We will perform the same steps as we did in the previous iterations. We will
observe that there will be no updation in the distance of vertices.

The following are the distances of vertices:

A: 0

B: 1

C: 3

D: 5

E: 0

F: 3
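The worked example can be checked with a minimal Python sketch; the edge weights below are taken from the relaxation steps above:

```python
def bellman_ford(vertices, edges, source):
    # Relax every edge |V| - 1 times; edges is a list of (u, v, weight).
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5), ('B', 'E', -1),
         ('C', 'E', 3), ('D', 'C', -2), ('D', 'F', -1), ('E', 'F', 3),
         ('C', 'B', -2)]
print(bellman_ford('ABCDEF', edges, 'A'))
# A: 0, B: 1, C: 3, D: 5, E: 0, F: 3
```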

Drawbacks of Bellman ford algorithm

The Bellman-Ford algorithm does not produce a correct answer if the graph contains a cycle whose total edge weight is negative, because distances along such a cycle can be decreased indefinitely (the algorithm can, however, detect this case).
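The relaxation passes above can be sketched as a small Bellman-Ford routine (an illustrative helper, not from the notes); the edge list below is inferred from the weights used in the walkthrough.

```python
# Bellman-Ford sketch (illustrative helper, not from the notes). Edges are
# (u, v, cost) tuples; the edge list below is inferred from the weights used
# in the walkthrough above.
INF = float("inf")

def bellman_ford(vertices, edges, source):
    d = {v: INF for v in vertices}
    d[source] = 0
    for _ in range(len(vertices) - 1):        # |V| - 1 iterations
        for u, v, c in edges:
            if d[u] + c < d[v]:               # relaxing formula
                d[v] = d[u] + c
    for u, v, c in edges:                     # one extra pass: if an edge is
        if d[u] + c < d[v]:                   # still relaxable, there is a
            raise ValueError("graph contains a negative-weight cycle")
    return d

edges = [("A", "B", 6), ("A", "C", 4), ("A", "D", 5), ("B", "E", -1),
         ("C", "E", 3), ("D", "C", -2), ("D", "F", -1), ("E", "F", 3),
         ("C", "B", -2)]
dist = bellman_ford("ABCDEF", edges, "A")
```

Running it reproduces the final distances listed above (A: 0, B: 1, C: 3, D: 5, E: 0, F: 3), and the extra pass raises an error exactly in the negative-cycle case described in the drawback.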

Knapsack Problem
Given a knapsack (a kind of shoulder bag) with a limited weight capacity and a few items, each having some weight and value, the problem is to fill the knapsack such that-
The value or profit obtained by putting the items into the knapsack is maximum
And the weight limit of the knapsack is not exceeded.

Example:

Item Weight Value
1 18 25
2 15 24
3 10 15

Knapsack capacity = 20 kg

Greedy about value: take item-1 first (value 25), then 2 kg of item-2: 25 + (2/15) × 24 = 28.2

Greedy about weight: take item-3 first (weight 10), then 10 kg of item-2: 15 + (10/15) × 24 = 31

Greedy about both weight and value (value / weight ratio):

Item Weight Value V/W
1 18 25 1.3
2 15 24 1.6
3 10 15 1.5

Sorting the ratios in decreasing order: 1.6, 1.5, 1.3

Sol: take item-2 first (value 24), then 5 kg of item-3: 24 + (5/10) × 15 = 31.5

Knapsack Problem Variants


Knapsack problem has the following two variants-
Fractional Knapsack Problem
0/1 Knapsack Problem

Fractional Knapsack Problem


In Fractional Knapsack Problem
As the name suggests, items are divisible here
We can even put the fraction of any item into the knapsack if taking
the complete item is not possible
It is solved using Greedy Method.

Fractional Knapsack Problem Using Greedy Method


For each item, compute its value / weight ratio.
Arrange all the items in decreasing order of their value / weight ratio.
Start putting the items into the knapsack beginning from the item with
the highest ratio.

Time Complexity
The main time-consuming step is sorting all the items in decreasing order of their value / weight ratio.
If the items are already arranged in the required order, then the while loop takes O(n) time.
The average time complexity of Quick Sort is O(nlogn).
Therefore, the total time taken including the sort is O(nlogn).

PRACTICE PROBLEM BASED ON FRACTIONAL

KNAPSACK PROBLEM
For the given set of items and knapsack capacity = 60 kg, find the optimal
solution for the fractional knapsack problem making use of greedy approach.

Item Weight Value

1 5 30

2 10 40

3 15 45

4 22 77

5 25 90

OR

Find the optimal solution for the fractional knapsack problem making use of
greedy approach. Consider-

n = 5

w = 60 kg

(w1, w2, w3, w4, w5) = (5, 10, 15, 22, 25)

(b1, b2, b3, b4, b5) = (30, 40, 45, 77, 90)

Solution-

Step-01: Compute the value / weight ratio for each item-

Step-02: Sort all the items in decreasing order of their value / weight ratio-

I1 I2 I5 I4 I3
(6) (4) (3.6) (3.5) (3)

Step-03: Start filling the knapsack by putting the items into it one by one.

Now, knapsack weight left to be filled is 20 kg but item-4 has a weight of 22 kg.
Since in the fractional knapsack problem even a fraction of any item can be taken,
the knapsack will contain the following items-

< I1 , I2 , I5 , (20/22) I4 >

Total cost of the knapsack

= 160 + (20/22) x 77

= 160 + 70

= 230 units
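The three greedy steps above can be sketched as a short routine (an illustrative helper, not from the notes): sort by value/weight ratio, take whole items while they fit, then take a fraction of the first item that does not fit.

```python
# Greedy fractional-knapsack sketch (illustrative helper, not from the notes).
def fractional_knapsack(weights, values, capacity):
    # Item indices sorted by value/weight ratio, highest first.
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0.0
    for i in order:
        if capacity >= weights[i]:       # the whole item fits
            total += values[i]
            capacity -= weights[i]
        else:                            # take only the fitting fraction
            total += values[i] * (capacity / weights[i])
            break
    return total

# Data from the practice problem above; the optimal value is 230 units.
best = fractional_knapsack([5, 10, 15, 22, 25], [30, 40, 45, 77, 90], 60)
```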

Dynamic Programming

Dynamic programming is one of the techniques used to solve the

optimization problems.


The five steps used in dynamic programming to solve an optimization problem:

It breaks down the complex problem into simpler sub-problems.

We find the optimal solution to these sub-problems.

We store the results of these sub-problems.

We reuse the solutions of these sub-problems so that we do not need to re-calculate them.

Finally, we calculate the result of the complex problem.

Dynamic programming is applicable to problems that have the following properties:

Overlapping subproblems: the same sub-problems appear repeatedly, so their solutions can be computed once and shared.

Optimal substructure: an optimal solution of the problem can be obtained by combining the optimal solutions of its subproblems.
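A small, hypothetical illustration (not from the notes) of the overlapping-subproblems property: naive recursive Fibonacci recomputes the same subproblems exponentially often, while memoization stores each result and reuses it, exactly as the storing and reusing steps above describe.

```python
from functools import lru_cache

# Memoized Fibonacci: each fib(k) is computed once and then looked up,
# turning exponential recursion into linear work.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # overlapping subproblems, solved once
```

Without the cache this takes exponential time; with it, even fib(30) finishes instantly.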

0/1(Absent/Present) Knapsack Problem

In 0/1 Knapsack Problem

As the name suggests, items are indivisible here

We can not take the fraction of any item

We have to either take an item completely or leave it completely

It is solved using dynamic programming approach.

0/1 Knapsack Problem Using Dynamic Programming

Consider-

Knapsack weight capacity = w

Number of items, each having some weight and value = n

Draw a table, say ‘T’, with (n+1) rows and (w+1) columns.

Fill all the boxes of 0th row and 0th column with zeroes as shown-

Start filling the table row wise top to bottom from left to right .

Use the following formula-

T (i , j) = max { T ( i-1 , j ) , valuei + T( i-1 , j – weighti ) }

Here, T(i , j) = maximum value of the selected items if we can take items
1 to i and have weight restrictions of j

This step leads to completely filling the table

Then, value of the last box represents the maximum possible value

that can be put into the knapsack.

To identify the items that must be put into the knapsack to obtain that

maximum profit

Consider the last column of the table

Start scanning the entries from bottom to top

On encountering an entry whose value is not same as the value

stored in the entry immediately above it, mark the row label of that

entry

After all the entries are scanned, the marked labels represent the

items that must be put into the knapsack.

PRACTICE PROBLEM BASED ON 0/1 KNAPSACK PROBLEM

For the given set of items and knapsack capacity = 5 kg, find the optimal
solution for the 0/1 knapsack problem making use of dynamic programming

approach.
Item Weight Value

1 2 3

2 3 4

3 4 5

4 5 6

OR

Find the optimal solution for the 0/1 knapsack problem making use of

dynamic programming approach. Consider -

n = 4

w = 5 kg

(w1, w2, w3, w4) = (2, 3, 4, 5)


(b1, b2, b3, b4) = (3, 4, 5, 6)

Solution-

Given-
Knapsack capacity (w) = 5 kg
Number of items (n) = 4

Step-01:

Draw a table, say ‘T’, with (n+1) = 4 + 1 = 5 rows and (w+1) = 5 + 1 = 6 columns.
Fill all the boxes of the 0th row and 0th column with 0.

Step-02:

Start filling the table row wise top to bottom from left to right using the

formula -

T (i , j) = max { T ( i-1 , j ) , valuei + T( i-1 , j – weighti ) }

Finding T(1,1)-

We have-

i = 1
j = 1
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2

Substituting the values, we get-

T(1,1) = max { T(1-1 , 1) , 3 + T(1-1 , 1-2) }

T(1,1) = max { T(0,1) , 3 + T(0,-1) }

T(1,1) = T(0,1) { Ignore T(0,-1) }

T(1,1) = 0

Finding T(1,2)-

We have-

i = 1
j = 2
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2

Substituting the values, we get-

T(1,2) = max { T(1-1 , 2) , 3 + T(1-1 , 2-2) }

T(1,2) = max { T(0,2) , 3 + T(0,0) }

T(1,2) = max {0 , 3+0}

T(1,2) = 3

Finding T(1,3)-

We have-

i = 1
j = 3
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2

Substituting the values, we get-

T(1,3) = max { T(1-1 , 3) , 3 + T(1-1 , 3-2) }

T(1,3) = max { T(0,3) , 3 + T(0,1) }

T(1,3) = max {0 , 3+0}

T(1,3) = 3

Finding T(1,4)-

We have-

i = 1
j = 4
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2

Substituting the values, we get-

T(1,4) = max { T(1-1 , 4) , 3 + T(1-1 , 4-2) }

T(1,4) = max { T(0,4) , 3 + T(0,2) }

T(1,4) = max {0 , 3+0}

T(1,4) = 3

Finding T(1,5)-

We have-

i = 1
j = 5
(value)i = (value)1 = 3
(weight)i = (weight)1 = 2

Substituting the values, we get-

T(1,5) = max { T(1-1 , 5) , 3 + T(1-1 , 5-2) }

T(1,5) = max { T(0,5) , 3 + T(0,3) }

T(1,5) = max {0 , 3+0}

T(1,5) = 3

Finding T(2,1)-

We have-

i = 2
j = 1
(value)i = (value)2 = 4
(weight)i = (weight)2 = 3

Substituting the values, we get-

T(2,1) = max { T(2-1 , 1) , 4 + T(2-1 , 1-3) }

T(2,1) = max { T(1,1) , 4 + T(1,-2) }

T(2,1) = T(1,1) { Ignore T(1,-2) }

T(2,1) = 0

Similarly, compute all the entries.

After all the entries are computed and filled in the table, we get the following table-

The last entry represents the maximum possible value that can be put into

the knapsack

So, maximum possible value that can be put into the knapsack = 7.

We mark the rows labelled “1” and “2”.

Thus, items that must be put into the knapsack to obtain the maximum value 7

are- Item-1 and Item-2
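The table-filling formula and the traceback rule above can be sketched as follows (an illustrative helper, not from the notes); run on the practice problem it reproduces the maximum value 7 with Item-1 and Item-2.

```python
# DP-table sketch of the formula above:
#   T(i, j) = max( T(i-1, j), value_i + T(i-1, j - weight_i) )
def knapsack_01(weights, values, capacity):
    n = len(weights)
    # Row 0 and column 0 are filled with zeroes.
    T = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            T[i][j] = T[i - 1][j]                      # leave item i
            if weights[i - 1] <= j:                    # or take item i
                T[i][j] = max(T[i][j],
                              values[i - 1] + T[i - 1][j - weights[i - 1]])
    # Traceback: a row whose entry differs from the entry above it was taken.
    chosen, j = [], capacity
    for i in range(n, 0, -1):
        if T[i][j] != T[i - 1][j]:
            chosen.append(i)
            j -= weights[i - 1]
    return T[n][capacity], sorted(chosen)

# Practice problem above: weights (2, 3, 4, 5), values (3, 4, 5, 6), capacity 5.
best, items = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)
```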

All Pair Shortest Paths

The all pairs shortest path algorithm, also known as the Floyd-Warshall algorithm, is used to solve the all pairs shortest path problem on a given weighted graph.

Floyd Warshall Algorithm

Floyd Warshall Algorithm is a famous algorithm.

It is used to solve All Pairs Shortest Path Problem.

It computes the shortest path between every pair of vertices of the given

graph.

Floyd Warshall Algorithm is an example of dynamic programming

approach.

Time Complexity

Floyd Warshall Algorithm consists of three loops over all the nodes

The inner most loop consists of only constant complexity

operations

Hence, the asymptotic complexity of Floyd Warshall algorithm is


O(n3)

Here, n is the number of nodes in the given graph.

Floyd Warshall Algorithm has the following main advantages

It is extremely simple

It is easy to implement.
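The three nested loops described above can be sketched as follows (an illustrative helper, not from the notes; the 3-vertex weight matrix is a made-up example, since the practice graph appears only as a figure).

```python
# Floyd-Warshall sketch: three nested loops over the n vertices, O(n^3).
INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    d = [row[:] for row in dist]            # copy the initial distance matrix
    for k in range(n):                      # k = intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative 3-vertex matrix: 0 on the diagonal, INF where no direct edge.
m = [[0, 4, 11],
     [6, 0, 2],
     [3, INF, 0]]
result = floyd_warshall(m)
```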

PRACTICE PROBLEM BASED ON FLOYD WARSHALL ALGORITHM-

Problem- Consider the following directed weighted graph-

Using Floyd Warshall Algorithm, find the shortest path distance between every
pair of vertices.

Solution-

Step-01

Remove all the self loops and parallel edges (keeping the lowest weight
edge) from the graph

In the given graph, there are neither self edges nor parallel edges.

Step-02

Write the initial distance matrix

It represents the distance between every pair of vertices in the form of

given weights

For diagonal elements (representing self-loops), distance value = 0


For vertices having a direct edge between them, distance value = weight of

that edge

For vertices having no direct edge between them, distance value = ∞.

Initial distance matrix for the given graph is -

Step-03 :

Using Floyd Warshall Algorithm, write the following 4 matrices-

The last matrix D4 represents the shortest path distance between every pair of vertices.

Remember
In the above problem, there are 4 vertices in the given graph
So, there will be total 4 matrices of order 4 x 4 in the solution excluding the

initial distance matrix

Diagonal elements of each matrix will always be 0.

Backtracking Algorithm

A backtracking algorithm is a problem-solving algorithm that uses a brute force approach for finding the desired output.

The brute force approach tries out all the possible solutions and chooses the desired/best solution.

The term backtracking suggests that if the current solution is not suitable, then backtrack and try other solutions. Thus, recursion is used in this approach.

c
This approach is used to solve problems that have multiple solutions. If

you want an optimal solution, you must go for dynamic programming .

State Space Tree

A state space tree is a tree representing all the possible states (solution or non-solution) of the problem, from the root as an initial state to the leaves as terminal states.

Example: sorting 3 1 2 in ascending order.

Using Brute Force Method:

Here are the possible solutions:

3,1,2
3,2,1
2,3,1
2,1,3
1,3,2
1,2,3

[State space tree: starting from the root, every permutation of 3, 1, 2 is expanded down to the leaves]

Using Backtracking Method:

[State space tree: branches that cannot lead to the sorted order 1,2,3 are abandoned (backtracked) as soon as they are detected]
Applications:

N-queen problem
Sum of subset problem
Graph coloring
Hamiltonian cycle

Graph Coloring

Graph Coloring is a process of assigning colors to the vertices of a graph such that no two adjacent vertices are assigned the same color.
Graph Coloring is also called Vertex Coloring.
Such a graph is called a properly colored graph.

Graph Coloring Applications:

Map Coloring
Scheduling the tasks
Preparing Time Tables
Assignment
Conflict Resolution
Sudoku

Chromatic Number
Chromatic Number is the minimum number of colors required to
properly color any graph.

Chromatic Number Of Graphs

Chromatic Number of some common types of graphs are as follows-

1. Cycle Graph
In a cycle graph, all the vertices are of degree 2

If number of vertices in cycle graph is even, then its chromatic

number =2
If number of vertices in cycle graph is odd, then its chromatic

number = 3.

2. Planar Graphs
A Planar Graph is a graph that can be drawn in a plane such that none of its edges cross each other.

Chromatic Number of any Planar Graph = Less than or equal to 4

All the above cycle graphs are also planar graphs.

3. Complete Graphs
A complete graph is a graph in which every two distinct vertices are

joined by exactly one edge


In a complete graph, each vertex is connected with every other

vertex

So to properly color it, as many different colors are needed as there are vertices in the given graph.

Chromatic Number of any Complete Graph = Number of vertices in that Complete Graph

4. Bipartite Graphs

A Bipartite Graph consists of two sets of vertices X and Y
The edges only join vertices in X to vertices in Y, not vertices within the same set.

Chromatic Number of any Bipartite Graph = 2

5. Tree
A Tree is a special type of connected graph in which there are no

circuits

Every tree is a bipartite graph.

Chromatic Number of any tree = 2

Graph Coloring Algorithm


There exists no efficient algorithm for coloring a graph with

minimum number of colors

Graph Coloring is an NP-complete problem.
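A minimal backtracking sketch of the coloring idea (hypothetical helper names, not from the notes): try colors 1..m for each vertex, rejecting any color already used by an adjacent vertex, and grow m until a proper coloring exists. Because the problem is NP-complete, this takes exponential time in the worst case.

```python
# Backtracking m-coloring: colors[v] = 0 means "not yet colored".
def can_color(adj, m, colors=None, v=0):
    if colors is None:
        colors = [0] * len(adj)
    if v == len(adj):
        return True                               # every vertex colored
    for c in range(1, m + 1):
        if all(colors[u] != c for u in adj[v]):   # no adjacent conflict
            colors[v] = c
            if can_color(adj, m, colors, v + 1):
                return True
            colors[v] = 0                         # backtrack
    return False

def chromatic_number(adj):
    m = 1
    while not can_color(adj, m):                  # smallest workable m
        m += 1
    return m

# Hypothetical example: an odd cycle (C5) needs 3 colors, as stated above.
c5 = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
```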

PRACTICE PROBLEMS BASED ON FINDING CHROMATIC


NUMBER OF A GRAPH-

Problem-01 :

Find chromatic number of the following graph-

Solution: Minimum number of colors used to color the given graph are 2
Therefore, Chromatic Number of the given graph = 2

Problem-02 :

Find chromatic number of the following graph-

Solution: Minimum number of colors used to color the given graph are 3
Therefore, Chromatic Number of the given graph = 3.

Problem-03:

Find chromatic number of the following graph-

Solution: Minimum number of colors used to color the given graph are 4
Therefore, Chromatic Number of the given graph = 4

Problem-04:

Find chromatic number of the following graph-

Solution: Minimum number of colors used to color the given graph are 3
Therefore, Chromatic Number of the given graph = 3.

Problem-05 :

Find chromatic number of the following graph-

Solution: Minimum number of colors required to color the given graph are 3
Therefore, Chromatic Number of the given graph = 3.

N-Queens Problem

The N-Queens problem is to place n queens on an n x n chessboard in such a manner that no two queens attack each other by being in the same row, column or diagonal.

It can be seen that for n = 1, the problem has a trivial solution.
No solution exists for n = 2 and n = 3.

The n-queens problem must follow the following rules:

There must be at most one queen in each column
There must be at most one queen in each row
There must be at most one queen in each diagonal.

Example :

The implicit tree for 4 - queen problem for a solution (2, 4, 1, 3) is as follows:
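The backtracking search behind that implicit tree can be sketched as follows (an illustrative helper, not from the notes); for n = 4 it finds exactly the two known solutions.

```python
# Backtracking N-queens sketch: one queen per row; a column is safe if no
# earlier queen shares that column or either diagonal.
def n_queens(n):
    solutions = []

    def place(row, cols):
        if row == n:
            solutions.append(tuple(cols))       # complete placement found
            return
        for col in range(n):
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(row + 1, cols + [col])    # extend the partial solution
            # otherwise: dead end for this column; backtrack and try the next

    place(0, [])
    return solutions

# For n = 4 there are exactly two solutions; in 1-based column numbers they
# are (2, 4, 1, 3) -- the solution drawn above -- and (3, 1, 4, 2).
sols = n_queens(4)
```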

Hamiltonian Cycle
A Hamiltonian graph is the directed or undirected graph containing a
Hamiltonian cycle.
The Hamiltonian cycle is the cycle that traverses all the vertices of the

given graph G exactly once and then ends at the starting vertex.

The Hamiltonian problem involves checking if the Hamiltonian cycle is


present in a graph G or not.

Practice Problems Based On Hamiltonian Graph

Examples: Consider a graph G = (V, E) shown in the figure. We have to find a Hamiltonian circuit using the Backtracking method.

Solution: Firstly, we start our search with vertex 'a'. This vertex 'a' becomes the root of our implicit tree.

Next, we choose vertex 'b' adjacent to 'a' as it comes first in lexicographical order (b, c, d).

e
v
Next, we select 'c' adjacent to 'b.'
o
M
h
c
e
T
Next, we select 'd' adjacent to 'c.'

Next, we select 'e' adjacent to 'd.'

Next, we select vertex 'f' adjacent to 'e'. The vertices adjacent to 'f' are 'd' and 'e', but they have already been visited. Thus, we get a dead end, so we backtrack one step and remove the vertex 'f' from the partial solution.

From backtracking, the vertices adjacent to 'e' are 'b', 'c', 'd', and 'f', of which vertex 'f' has already been checked, and 'b', 'c', and 'd' have already been visited.

Now, adjacent to 'c' is 'e', adjacent to 'e' is 'f', adjacent to 'f' is 'd', and adjacent to 'd' is 'a'. Here, we get the Hamiltonian cycle, as every vertex other than the start vertex 'a' is visited only once: (a - b - c - e - f - d - a).

Here we have generated one Hamiltonian circuit, but another Hamiltonian


circuit can also be obtained by considering another vertex.
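The walkthrough above can be sketched in code. The adjacency list below is inferred from the walkthrough (the figure itself is not reproduced), so treat it as an assumption; with it, the search hits the same dead end at 'f' and then finds the same cycle.

```python
# Backtracking Hamiltonian-cycle sketch: extend a path vertex by vertex in
# lexicographical order, backtracking at dead ends.
def hamiltonian_cycle(adj, start):
    n = len(adj)

    def extend(path):
        if len(path) == n:                       # all vertices visited:
            return path + [start] if start in adj[path[-1]] else None
        for v in sorted(adj[path[-1]]):          # lexicographical order
            if v not in path:
                result = extend(path + [v])
                if result:
                    return result
        return None                              # dead end: backtrack

    return extend([start])

# Adjacency inferred from the example's walkthrough (an assumption).
adj = {"a": {"b", "c", "d"}, "b": {"a", "c", "e"},
       "c": {"a", "b", "d", "e"}, "d": {"a", "c", "e", "f"},
       "e": {"b", "c", "d", "f"}, "f": {"d", "e"}}
cycle = hamiltonian_cycle(adj, "a")
```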

Sum of Subset Problem


Given a set of elements and a sum value. We need to find all possible
subsets of the elements with a sum equal to the sum value.

Example:

Set: {10, 20, 30, 40}

Sum: 50

Output: [ {10, 40}, {20, 30} ]

There are two ways to solve the Subset Sum Problem:

Brute Force – Slow
Backtracking – Fast

In the brute force approach, we test every combination starting from one element, then two, then three, and so on for the required sum.

Using backtracking we can reduce the time complexity to a great extent. Let's see how.

Include: Here include means that we are selecting the element from the
array .

Exclude: Here, exclude means that we are rejecting the element from the
array .

Example:

Set: {a = 10, b = 20, c = 30, d = 40}

Sum: 50

Output: [ {10, 40}, {20, 30} ]

[State space tree for the include/exclude search: each level decides one element (1 = include, 0 = exclude), carrying the running sum downward; leaves with sum 50 are solutions]

Solution paths in the state space tree:

a=1, b=0, c=0, d=1 → {10, 40}

a=0, b=1, c=1, d=0 → {20, 30}
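The include/exclude search above can be sketched as follows (an illustrative helper, not from the notes; the pruning step assumes all elements are positive).

```python
# Include/exclude backtracking for subset sum: each level of the state space
# tree decides one element, and branches that overshoot the target are pruned.
def subset_sums(items, target):
    found = []

    def search(i, chosen, total):
        if total == target:
            found.append(set(chosen))            # a leaf with the target sum
            return
        if i == len(items) or total > target:    # dead end: backtrack
            return
        search(i + 1, chosen + [items[i]], total + items[i])   # include
        search(i + 1, chosen, total)                           # exclude

    search(0, [], 0)
    return found

# Example above: {10, 20, 30, 40} with sum 50.
subsets = subset_sums([10, 20, 30, 40], 50)
```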

Branch And Bound Algorithm


Branch and bound is one of the techniques used for problem solving.

It is similar to the backtracking since it also uses the state space tree.

A state space tree is a tree where the solution is constructed by adding

elements one by one, starting from the root. Note that the root contains
no element.

It is used for solving the optimization problems and minimization

problems.

Examples:

[Figure: a weighted graph with vertices S, A, B, C, D, E and its state space tree; each node of the tree carries the accumulated path cost, and branches whose bound exceeds the best cost found so far are pruned]

Let's understand through an example

For a given set {A, B, C, D}, the state space tree will be constructed as follows :

Traveling Salesman Problem


The salesman has to visit each one of the cities starting from a certain

one (e.g. the hometown) and returning to the same city.


A salesman has to visit every city exactly once.

He has to come back to the city from where he starts his journey.
What is the shortest possible route that the salesman must follow to

complete his tour?

Let's understand through an example-

If the salesman's starting city is A, then a TSP tour in the graph is-

A → B → D → C → A

Cost of the tour

= 10 + 25 + 30 + 15

= 80 units
e
Traveling Salesperson Problem Using Branch and Bound


Solve Travelling Salesman Problem using Branch and Bound Algorithm in

the following graph-

Step-01 :

Write the initial cost matrix and reduce it-

Row Reduction-

Consider the rows of above matrix one by one .

If the row already contains an entry ‘0’, then-
There is no need to reduce that row.

If the row does not contain an entry ‘0’, then-
Reduce that particular row:
Select the least value element from that row
Subtract that element from each element of that row
This will create an entry ‘0’ in that row, thus reducing that row.

Following this, we have

Reduce the elements of row-1 by 4


Reduce the elements of row-2 by 5
Reduce the elements of row-3 by 6
Reduce the elements of row-4 by 2.

Performing this, we obtain the following row-reduced matrix-

Column Reduction-

Consider the columns of above row-reduced matrix one by one .

If the column already contains an entry ‘0’, then-
There is no need to reduce that column.

If the column does not contain an entry ‘0’, then-
Reduce that particular column:
Select the least value element from that column
Subtract that element from each element of that column
This will create an entry ‘0’ in that column, thus reducing that column.

Following this, we have

There is no need to reduce column-1

There is no need to reduce column-2

Reduce the elements of column-3 by 1


There is no need to reduce column-4.

Performing this, we obtain the following column-reduced matrix-

Finally, the initial distance matrix is completely reduced.

Now, we calculate the cost of node-1 by adding all the reduction elements.

Cost(1)

= Sum of all reduction elements

= 4 + 5 + 6 + 2 + 1

= 18
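The row and column reduction above can be sketched as follows. The input matrix is illustrative: the notes show the graph only as a figure, so this matrix was chosen to be consistent with the reductions listed above (rows reduced by 4, 5, 6, 2 and column-3 by 1, giving Cost(1) = 18).

```python
# Row/column reduction used to compute the branch-and-bound cost bound.
INF = float("inf")

def reduce_matrix(m):
    m = [row[:] for row in m]                    # work on a copy
    cost = 0
    for row in m:                                # row reduction
        least = min(row)
        if least not in (0, INF):
            cost += least
            for j in range(len(row)):
                if row[j] != INF:
                    row[j] -= least
    for j in range(len(m)):                      # column reduction
        least = min(row[j] for row in m)
        if least not in (0, INF):
            cost += least
            for row in m:
                if row[j] != INF:
                    row[j] -= least
    return m, cost

# Illustrative matrix consistent with the reductions in the text (assumption).
matrix = [[INF, 4, 12, 7],
          [5, INF, INF, 18],
          [11, 9, INF, 6],
          [10, 2, 3, INF]]
reduced, cost = reduce_matrix(matrix)
```

The returned reduction cost is the sum of all subtracted minima, matching the Cost(1) computation above.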

Choosing To Go To Vertex-B: Node-2 (Path A → B)


From the reduced matrix of step-01, M[A,B] = 0
Set row-A and column-B to ∞
Set M[B,A] = ∞

Now, resulting cost matrix is-

Reduce all the elements of row-2 by 13.

Reduce the elements of column-1 by 5.

Now, we calculate the cost of node-2.

Cost(2)

= Cost(1) + Sum of reduction elements + M[A,B]

= 18 + (13 + 5) + 0

= 36

Choosing To Go To Vertex-C: Node-3 (Path A → C)


From the reduced matrix of step-01, M[A,C] = 7
Set row-A and column-C to ∞
Set M[C,A] = ∞

The matrix is already row-reduced .

The matrix is already column reduced

Now, we calculate the cost of node-3.

Cost(3)

= Cost(1) + Sum of reduction elements + M[A,C]

= 18 + 0 + 7

= 25

Choosing To Go To Vertex-D: Node-4 (Path A → D)


From the reduced matrix of step-01, M[A,D] = 3
Set row-A and column-D to ∞
Set M[D,A] = ∞

Reduce all the elements of row-3 by 5.

The matrix is already column reduced

Now, we calculate the cost of node-4.

Cost(4)

= Cost(1) + Sum of reduction elements + M[A,D]

= 18 + 5 + 3

= 26

Thus, we have-

Cost(2) = 36 (for Path A → B)
Cost(3) = 25 (for Path A → C)
Cost(4) = 26 (for Path A → D)

We choose the node with the lowest cost.

Thus, we choose node-3 i.e. path A → C.

Step-03:

We explore the vertices B and D from node-3.

We now start from the cost matrix at node-3 which is-

Choosing To Go To Vertex-B: Node-5 (Path A → C → B)


From the reduced matrix of step-02, M[C,B] = ∞
Set row-C and column-B to ∞
Set M[B,A] = ∞

Reduce all the elements of row-2 by 13


Reduce all the elements of row-4 by 8.

The matrix is already column reduced.

Now, we calculate the cost of node-5.

Cost(5)

= cost(3) + Sum of reduction elements + M[C,B]

= 25 + (13 + 8) + ∞

= ∞

Choosing To Go To Vertex-D: Node-6 (Path A → C → D)


From the reduced matrix of step-02, M[C,D] = 0
Set row-C and column-D to ∞
Set M[D,A] = ∞

The matrix is already row-reduced .

The matrix is already column reduced

Now, we calculate the cost of node-6.

Cost(6)

= cost(3) + Sum of reduction elements + M[C,D]

= 25 + 0 + 0

= 25

Thus, we have-

Cost(5) = ∞ (for Path A → C → B)
Cost(6) = 25 (for Path A → C → D)

We choose the node with the lowest cost.

Thus, we choose node-6 i.e. path A → C → D.

Step-04:

We explore vertex B from node-6.

We start with the cost matrix at node-6 which is-

Choosing To Go To Vertex-B: Node-7 (Path A → C → D → B)


From the reduced matrix of step-03, M[D,B] = 0
Set row-D and column-B to ∞
Set M[B,A] = ∞

The matrix is already row-reduced .

The matrix is already column reduced

Now, we calculate the cost of node-7.

Cost(7)

= cost(6) + Sum of reduction elements + M[D,B]

= 25 + 0 + 0

= 25

Thus-

Optimal path is: A → C → D → B → A

Cost of the optimal path = 25 units


[State space tree summary: root cost 18; children B = 36, C = 25, D = 26; from node C, child D = 25; from node D, child B = 25]