DSSlides M1
DATA STRUCTURES
UNIT –I
Operations on Data Structures
• Add an element
• Delete an element
• Traverse the elements
• Sort the elements
• Search for a data element
• Update an element
• Merge two data structures
Algorithm
Definition:
An algorithm is a finite series of step-by-step instructions that
solve a particular problem.
• The word algorithm originates from Algorism, which is linked to
the name of the mathematician Abu Abdullah Muhammad ibn Musa
Al-Khawarizmi.
• Al-Khawarizmi is considered to be the first algorithm designer,
for his procedure for adding numbers.
Structure of an Algorithm
In the above program, 3 integer variables are used. The size of the integer
data type is 2 or 4 bytes, depending on the compiler. Assuming the size
is 4 bytes, the total space occupied by the program is 4 * 3 = 12 bytes.
Since no additional variables are used, no extra space is required.
Example -2:
#include <stdio.h>

int main()
{
    int n, i, sum = 0;
    scanf("%d", &n);
    int arr[n];
    for (i = 0; i < n; i++)
    {
        scanf("%d", &arr[i]);
        sum = sum + arr[i];
    }
    printf("%d", sum);
    return 0;
}
How to calculate Space Complexity of an Algorithm?
In the above code, the array consists of n integer elements, so the
space occupied by the array is 4 * n bytes. We also have the integer
variables n, i, and sum. Assuming 4 bytes for each variable, the total
space occupied by the program is 4n + 12 bytes.
Total Frequency Count of Program Segment B
Total Frequency Count of Program Segment C
f(n)                  g(n)    Result
16n^3 + 45n^2 + 12n   n^3     f(n) = O(n^3)
34n - 40              n       f(n) = O(n)
50                    1       f(n) = O(1)
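The first row of the table can be sanity-checked numerically. The sketch below (Python, not from the slides) shows that the ratio f(n)/n^3 stays bounded by a constant (around the leading coefficient 16), which is exactly what f(n) = O(n^3) asserts.

```python
# f(n) = 16n^3 + 45n^2 + 12n from the table above
def f(n):
    return 16 * n**3 + 45 * n**2 + 12 * n

# The ratio f(n)/n^3 approaches 16 as n grows, so a constant
# c = 17 bounds f(n) <= c * n^3 for all sufficiently large n.
for n in [10, 100, 1000, 10000]:
    print(n, f(n) / n**3)
```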
Asymptotic Notations
Time Complexity
Complexity    Notation               Description
Constant      O(1)                   Constant number of operations, not depending
                                     on the input data size.
Quadratic     O(n^2)                 Number of operations proportional to the
                                     square of the size of the input data.
Cubic         O(n^3)                 Number of operations proportional to the
                                     cube of the size of the input data.
Exponential   O(2^n), O(k^n), O(n!)  Exponential number of operations, fast growing.
Time Complexities of various Algorithms
Recursion
Definition:
Recursion is the process of solving a problem by reducing it
to smaller instances of the same problem.
Types of Recursions
• Direct Recursion
- a function calls itself from within itself.
• Indirect Recursion
- two functions call one another mutually.
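A minimal Python sketch of the two types (the function names countdown, is_even, and is_odd are illustrative, not from the slides):

```python
# Direct recursion: countdown() calls itself from within its own body.
def countdown(n):
    if n == 0:          # base case stops the recursion
        return
    print(n)
    countdown(n - 1)    # the function calls itself

# Indirect recursion: is_even() and is_odd() call one another mutually.
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

countdown(3)
print(is_even(10), is_odd(10))
```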
Differences Between Recursion and Iteration
• A function is said to be recursive if it calls itself again and again within its
body, whereas iterative functions are loop-based imperative functions.
• Recursion uses the call stack, whereas iteration does not.
• Recursion uses more memory than iteration because each call adds a stack frame.
• Recursion is comparatively slower than iteration due to the overhead of
maintaining the stack.
• Recursion makes code shorter, whereas iteration makes code longer.
• Iteration terminates when the loop-continuation condition fails, whereas
recursion terminates when a base case is recognized.
• Infinite recursion can crash the program through stack overflow, whereas an
infinite loop consumes CPU cycles indefinitely.
• Recursion uses a selection structure, whereas iteration uses a repetition structure.
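The contrast in the list above can be seen in one computation done both ways. A Python sketch (illustrative names, not from the slides) computing factorial recursively and iteratively:

```python
# Recursive factorial: each call adds a stack frame until the base case.
def fact_recursive(n):
    if n == 0:                  # base case terminates the recursion
        return 1
    return n * fact_recursive(n - 1)

# Iterative factorial: a loop and one accumulator, no call-stack growth.
def fact_iterative(n):
    result = 1
    for i in range(2, n + 1):   # loop-continuation condition terminates it
        result *= i
    return result

print(fact_recursive(5), fact_iterative(5))
```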
Types of Recursion
Recursion may be further categorized as:
• Linear Recursion
• Binary Recursion
• Multiple Recursion
Linear Recursion
It is the most common type of recursion, in which a
function calls itself repeatedly until the base condition
(termination case) is reached. Once the base case is
reached, the results are returned to the caller. If a
recursive function makes only one recursive call, it is
called a linear recursion.
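A minimal Python sketch of linear recursion, assuming a list-summing example (not from the slides); note there is exactly one recursive call per invocation:

```python
# Linear recursion: one recursive call per invocation.
def list_sum(a):
    if not a:                        # base (termination) case: empty list
        return 0
    return a[0] + list_sum(a[1:])    # single recursive call on a smaller list

print(list_sum([1, 2, 3, 4, 5]))
```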
Binary Recursion
Some recursive functions don't just have one call to themselves; they have two
(or more). Functions with two recursive calls are referred to as binary recursive
functions.
fib(n) = 0 if n is 0,
= 1 if n is 1,
= fib(n-1) + fib(n-2) otherwise
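The fib definition above translates directly into code; a Python sketch:

```python
# Binary recursion: fib() makes two recursive calls per invocation.
def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n - 1) + fib(n - 2)   # two recursive calls

print([fib(i) for i in range(8)])
```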
Binary Recursion
void print(int n)
{
    if (n < 0) return;
    cout << " " << n;
    print(n - 1);   // first recursive call
    print(n - 1);   // second recursive call: two calls make this binary recursion
}
Sorting Techniques
• Bubble Sort
• Selection Sort
• Insertion Sort
• Quick Sort
• Merge Sort
Bubble Sort
This sorting technique is also known as exchange sort. It
arranges values by iterating over the list several times; in each
iteration the larger value bubbles up to the end of the list.
• This algorithm uses multiple passes; in each pass the first and
second data items are compared.
• If the first data item is bigger than the second, the two items
are swapped.
• Next, the items in the second and third positions are compared; if the
first one is larger than the second, they are swapped, otherwise
their order is unchanged.
• This process continues for each successive pair of data items until
all items are sorted.
Example: Pass 1 (step-by-step diagram not reproduced)
Algorithm
Step 1: Repeat Steps 2 and 3 for i = 1 to n-1
Step 2: Set j = 1
Step 3: Repeat while j <= n-i
(A) if a[j] > a[j+1]
Then interchange a[j] and a[j+1]
[End of if]
(B) Set j = j+1
[End of Inner Loop]
[End of Step 1 Outer Loop]
Step 4: Exit
Program
def bubbleSort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n-1):
        # Last i elements are already in place
        for j in range(0, n-i-1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]

# Driver code to test above
arr = [64, 34, 25, 12, 22, 11, 90]
print("Unsorted array is:")
for i in range(len(arr)):
    print("%d" % arr[i], end=' ')
bubbleSort(arr)
print("\nSorted array is:")
for i in range(len(arr)):
    print("%d" % arr[i], end=' ')
Time Complexity of Bubble Sort
The efficiency of the Bubble sort algorithm depends on the number of
data items in the array. For an array containing n data items, the
outer loop executes n-1 times, as the algorithm requires n-1 passes.
In the first pass, the inner loop executes n-1 times; in the second
pass, n-2 times; in the third pass, n-3 times; and so on. The total
number of iterations results in a running time of O(n^2).
Worst Case Performance: O(n^2)
Best Case Performance: O(n)
Average Case Performance: O(n^2)
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the
minimum element (considering ascending order) from unsorted part
and putting it at the beginning. The algorithm maintains two
subarrays in a given array.
1)The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element
(considering ascending order) from the unsorted subarray is picked
and moved to the sorted subarray.
Program
A = [64, 25, 12, 22, 11]
# Traverse through all array elements
for i in range(len(A)):
    # Find the minimum element in remaining unsorted array
    min_idx = i
    for j in range(i+1, len(A)):
        if A[min_idx] > A[j]:
            min_idx = j
    # Swap the found minimum element with the first element
    # of the unsorted subarray
    A[i], A[min_idx] = A[min_idx], A[i]
print("Sorted array:", A)
Merge Sort Algorithm
MergeSort() function:
➢ It takes the array and the left-most and right-most indices of the part
to be sorted as arguments.
➢ The middle index (mid) of the array is calculated as (left + right)/2.
➢ Check if (left < right), because we need to sort only when left < right;
when left = right the part is trivially sorted.
➢ Sort the left part by calling the MergeSort() function again over the left
part, MergeSort(array, left, mid), and the right part by the recursive call
MergeSort(array, mid + 1, right). Lastly, merge the two sorted parts
using the Merge() function.
Merge Sort Algorithm
Merge() function:
➢ It takes the array and the left-most, middle, and right-most indices of the
part to be merged as arguments.
➢ Finally copy back the sorted array to the original array.
Merge Sort Algorithm
MergeSort(arr[], l, r)
If l < r:
    1. Find the middle point to divide the array into two halves:
       m = l + (r - l)/2
    2. Call mergeSort for the first half: mergeSort(arr, l, m)
    3. Call mergeSort for the second half: mergeSort(arr, m+1, r)
    4. Merge the two halves sorted in steps 2 and 3:
       merge(arr, l, m, r)
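The pseudocode above can be sketched as runnable Python. The names mergeSort and merge follow the slides; the body of merge() is filled in by assumption, using the standard two-pointer merge of two sorted halves:

```python
def merge(arr, l, m, r):
    # Copy the two sorted halves arr[l..m] and arr[m+1..r]
    left = arr[l:m+1]
    right = arr[m+1:r+1]
    i = j = 0
    k = l
    # Merge back into arr[l..r], always taking the smaller front element
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # Copy any leftover elements from either half
    while i < len(left):
        arr[k] = left[i]; i += 1; k += 1
    while j < len(right):
        arr[k] = right[j]; j += 1; k += 1

def mergeSort(arr, l, r):
    if l < r:
        m = l + (r - l) // 2        # middle point
        mergeSort(arr, l, m)        # sort first half
        mergeSort(arr, m + 1, r)    # sort second half
        merge(arr, l, m, r)         # merge the two sorted halves

a = [38, 27, 43, 3, 9, 82, 10]
mergeSort(a, 0, len(a) - 1)
print(a)
```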
Time Complexity of Merge Sort