Unit - 4 Sorting
Now, compare 32 and 35.
Now, move to the second iteration.
Second Pass
The same process is followed in the second pass.
Third Pass
The same process is followed in the third pass.
Bubble Sort compares adjacent elements. The number of comparisons in each pass:
Pass 1: (n-1)
Pass 2: (n-2)
Pass 3: (n-3)
....... ......
Last pass: 1
Case            Time Complexity
Worst Case      O(n²)
Space Complexity: O(1)

This algorithm is not suitable for large data sets, as its average- and worst-case time complexities are O(n²), where n is the number of items.
Selection sort is generally used when:
•A small array is to be sorted
•Swapping cost doesn't matter
•It is compulsory to check all elements
Selection Sort
The same process is applied to the rest of
the items in the array.
[Figure: pictorial depiction of the entire sorting process]
Complexity = O(n²)
We can analyze the complexity by simply observing the number of loops. There are two nested loops, so the complexity is n × n = n².
Time Complexities:
•Worst Case Complexity: O(n²)
Occurs when we want to sort in ascending order and the array is in descending order.
•Best Case Complexity: O(n²)
Occurs when the array is already sorted.
•Average Case Complexity: O(n²)
Occurs when the elements of the array are in jumbled order (neither ascending nor descending).
Case            Time Complexity
Best Case       O(n²)
Average Case    O(n²)
Worst Case      O(n²)
Space Complexity    O(1)
Stable              NO

Space complexity is O(1) because only a single extra variable temp is used. The standard swap-based selection sort is not stable, since a swap can move an element past an equal element.
Choosing the pivot in Quick Sort:
1. First element as pivot
2. Last element as pivot
3. Random element as pivot
4. Median as pivot
Algorithm for Quick Sort
Step 1: Choose the element at one end of the list as the pivot.
Step 2: Take two variables to point left and right of the list, excluding the pivot.
Step 3: left points to the low index.
Step 4: right points to the high index.
Step 5: While the value at left < (less than) pivot, move right.
Step 6: While the value at right > (greater than) pivot, move left.
Step 7: If both Step 5 and Step 6 do not match, swap the values at left and right.
Step 8: If left >= (greater than or equal to) right, the point where they met is the new pivot position.
Implementation of Quick Sort Algorithm using C
Programming Language

#include <stdio.h>

void quickSort(int list[], int first, int last);

int main()
{
    int list[20], size, i;

    printf("Enter size of the list: ");
    scanf("%d", &size);
    printf("Enter %d integer values: ", size);
    for (i = 0; i < size; i++)
        scanf("%d", &list[i]);

    quickSort(list, 0, size - 1);

    printf("List after sorting is: ");
    for (i = 0; i < size; i++)
        printf(" %d", list[i]);
    return 0;
}

void quickSort(int list[], int first, int last)
{
    int pivot, i, j, temp;
    if (first < last) {
        pivot = first;          /* first element as pivot */
        i = first;
        j = last;
        while (i < j) {
            while (list[i] <= list[pivot] && i < last)
                i++;
            while (list[j] > list[pivot])
                j--;
            if (i < j) {        /* swap out-of-place pair */
                temp = list[i];
                list[i] = list[j];
                list[j] = temp;
            }
        }
        /* place the pivot at its final position */
        temp = list[pivot];
        list[pivot] = list[j];
        list[j] = temp;
        quickSort(list, first, j - 1);
        quickSort(list, j + 1, last);
    }
}
Complexity of the Quick Sort Algorithm
Best and average case: O(n log n). Worst case: O(n²), which occurs for an already sorted (or reverse-sorted) input when the first element is chosen as the pivot.
What is a heap?
A heap is a complete binary tree. A binary tree is a tree in which each node can have at most two children. A complete binary tree is a binary tree in which all levels except possibly the last are completely filled, and the nodes of the last level are as far to the left as possible.
What is heap sort?
Heapsort is a popular and efficient sorting algorithm. The idea of heap sort is to remove the elements one by one from the heap part of the list and insert them into the sorted part of the list.
Heapsort is an in-place sorting algorithm.
#include <stdio.h>

/* Function to heapify (restore the max-heap property of) the
   subtree rooted at index i, where n is the size of the heap */
void heapify(int a[], int n, int i)
{
    int largest = i;            /* assume the root is largest */
    int left = 2 * i + 1;
    int right = 2 * i + 2;

    if (left < n && a[left] > a[largest])
        largest = left;
    if (right < n && a[right] > a[largest])
        largest = right;

    /* If the root is not the largest, swap and heapify the affected subtree */
    if (largest != i) {
        int temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;
        heapify(a, n, largest);
    }
}

/* Function to implement the heap sort */
void heapSort(int a[], int n)
{
    /* Build a max-heap */
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);

    /* One by one extract an element from the heap */
    for (int i = n - 1; i >= 0; i--) {
        /* Move the current root element to the end
           (swap a[0] with a[i]) */
        int temp = a[0];
        a[0] = a[i];
        a[i] = temp;

        heapify(a, i, 0);
    }
}

/* Function to print the array elements */
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; ++i) {
        printf("%d", arr[i]);
        printf(" ");
    }
}

int main()
{
    int a[] = {48, 10, 23, 43, 28, 26, 1};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    heapSort(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}
Output

Before sorting array elements are -
48 10 23 43 28 26 1
After sorting array elements are -
1 10 23 26 28 43 48
Heap sort complexity
We will look at the time complexity of heap sort in the best case, average case, and worst case, and also at its space complexity.

1. Time Complexity
•Best Case Complexity - Occurs when no sorting is required, i.e., the array is already sorted. The best-case time complexity of heap sort is O(n log n).
•Average Case Complexity - Occurs when the array elements are in jumbled order, neither properly ascending nor properly descending. The average-case time complexity of heap sort is O(n log n).
•Worst Case Complexity - Occurs when the array elements need to be sorted in reverse order; for example, you have to sort the elements in ascending order, but they are in descending order. The worst-case time complexity of heap sort is O(n log n).
The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and worst case), because the height of a complete binary tree with n elements is log n, and each of the n extraction steps performs one heapify along that height.

2. Space Complexity
•The space complexity of heap sort is O(1).
Hashing
A hash table is one of the most important data structures. It uses a special function, known as a hash function, that maps a given key to an index so that elements can be accessed faster.
A hash table stores information with two main components: a key and a value.
The hash table can be implemented with the help of an associative array. The efficiency of the mapping depends on the efficiency of the hash function used.
Hash function:
A hash function is a mathematical formula that produces an integer, which can be used as an index for the key in the hash table.
•Perfect hash function - each key is transformed into a unique storage location.
•Imperfect hash function - maps more than one key to the same storage location.
For example, suppose the key is john and the value is his phone number. When we pass the key to the hash function:
Hash(key) = index;
the function returns the index at which to store the entry.
Hash(john) = 3;
This example stores john's entry at index 3.
Drawback of a hash function
Ideally, a hash function assigns each key a unique index. In practice, a hash table may use an imperfect hash function that causes a collision, because the hash function generates the same index for two different keys.
Operations on a hash table:
Insert - T[ h(key) ] = value;
It calculates the hash, uses it as the index, and stores the value in the hash table.
Delete - T[ h(key) ] = NULL;
It calculates the hash, resets the value stored
in the hash table for that key.
Search - return T[ h(key) ];
It calculates the hash, finds and returns the
value stored in the hash table for that key.
How to choose a hash function
A lookup involves two steps: computing the hash of the key, and traversing the entries stored at that index.
For the first step, the time taken depends on the key K and the hash function.
For example, if the key is a string "abcd", then its hash may depend on the length of the string. But for very large values of n, the number of entries in the map, the length of the keys is almost negligible in comparison to n, so hash computation can be considered to take constant time, i.e., O(1).
For the second step, the list of key-value pairs present at that index must be traversed. In the worst case, all n entries may be at the same index, so the time complexity would be O(n).
However, much research has gone into making hash functions distribute keys uniformly across the array, so this almost never happens.
On average, if there are n entries and b is the size of the array, there will be n/b entries at each index. This value n/b is called the load factor, and it represents the load on the map.
The load factor needs to be kept low, so that the number of entries at one index stays small and the complexity remains almost constant, i.e., O(1).
Rehashing