
MODULE-3

Searching Techniques

1. Linear Search:

 Algorithm for Linear Search:

 Linear Search Time Complexity Analysis:

Time Complexity: O(n) - Linear Search checks each element of the array exactly once in
the worst case, so its time complexity is linear in the size of the array (n).
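The code listing for this algorithm is not reproduced in this copy; a minimal C sketch (the function name linearSearch is illustrative) might look like this:

```c
/* Returns the index of key in arr[0..n-1], or -1 if it is not present. */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;   /* found: stop at the first match */
    }
    return -1;          /* key is not in the array */
}
```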
2. Binary Search:

Algorithm for Binary Search:

 Binary Search Time Complexity Analysis:

Time Complexity: O(log n) - Binary Search repeatedly halves the search interval,
so its time complexity is logarithmic in the size of the array (n).
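The listing itself is missing here; an iterative C sketch of Binary Search on a sorted array (naming is our own) is:

```c
/* Iterative Binary Search on a sorted array.
   Returns the index of key, or -1 if absent. */
int binarySearch(const int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  /* avoids overflow of low + high */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;    /* discard the left half */
        else
            high = mid - 1;   /* discard the right half */
    }
    return -1;
}
```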
3. Fibonacci Search:
 Definition: Fibonacci Search is an algorithm for searching a sorted array using a divide
& conquer approach. It uses Fibonacci numbers to determine the split points in the
array.
 Explanation: Fibonacci Search divides the search interval into smaller subintervals
using Fibonacci numbers. It is similar to Binary Search but uses a more elaborate rule
for selecting the split point.

Algorithm for Fibonacci Search:
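The listing is missing from this copy; a C sketch that matches the variables in the trace below (fibM, fibMm1, fibMm2, offset) is:

```c
static int minInt(int a, int b) { return a < b ? a : b; }

/* Fibonacci Search on a sorted array arr[0..n-1]. Returns the
   (0-based) index of x, or -1 if x is not present. */
int fibonacciSearch(const int arr[], int n, int x) {
    int fibMm2 = 0;               /* (m-2)'th Fibonacci number */
    int fibMm1 = 1;               /* (m-1)'th Fibonacci number */
    int fibM = fibMm2 + fibMm1;   /* m'th Fibonacci number */

    while (fibM < n) {            /* smallest Fibonacci number >= n */
        fibMm2 = fibMm1;
        fibMm1 = fibM;
        fibM = fibMm1 + fibMm2;
    }

    int offset = -1;              /* front of array already eliminated */

    while (fibM > 1) {
        int i = minInt(offset + fibMm2, n - 1);
        if (arr[i] < x) {         /* cut off the range before i */
            fibM = fibMm1;
            fibMm1 = fibMm2;
            fibMm2 = fibM - fibMm1;
            offset = i;           /* move down by one, reset offset */
        } else if (arr[i] > x) {  /* cut off the range after i */
            fibM = fibMm2;
            fibMm1 = fibMm1 - fibMm2;
            fibMm2 = fibM - fibMm1;  /* move down by two */
        } else {
            return i;             /* return i */
        }
    }
    /* one element may remain just past offset */
    if (fibMm1 && offset + 1 < n && arr[offset + 1] == x)
        return offset + 1;
    return -1;
}
```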


Example: let n=11, x=85

i 1 2 3 4 5 6 7 8 9 10 11
arr[i] 10 22 35 40 45 50 80 82 85 90 100

Following is the Fibonacci number table for reference.

F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 F10
Fib.no 0 1 1 2 3 5 8 13 21 34 55

Smallest Fibonacci number greater than or equal to 11 is 13.


Therefore fibM = 13, fibMm1 = 8, and fibMm2 = 5

offset=-1

fibM  fibMm1  fibMm2  offset  i  arr[i]  Inference
13    8       5       -1      4  45      Move down by one, reset offset
8     5       3       4       7  82      Move down by one, reset offset
5     3       2       7       9  90      Move down by two
2     1       1       9       8  85      return i

 Fibonacci Search Time Complexity Analysis:

Time Complexity: O(log n) - Fibonacci Search also has logarithmic time complexity,
similar to Binary Search.
Each of the algorithms for Linear Search, Binary Search, and Fibonacci Search has its own
characteristics and use cases: Binary Search is the most efficient for sorted data, while
Linear Search is simple but less efficient. Linear Search has linear time complexity, while
Binary Search and Fibonacci Search both have logarithmic time complexity. Binary Search is
generally preferred for searching sorted arrays due to its simplicity and efficiency; Fibonacci
Search is less commonly used.

Sorting Techniques:

1. Selection Sort:

 Algorithm for Selection Sort:
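The listing referenced by the notes below is missing; a minimal C sketch of Selection Sort is:

```c
/* Sorts arr[0..n-1] in ascending order using Selection Sort. */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIdx = i;                 /* assume arr[i] is the minimum */
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[minIdx])
                minIdx = j;             /* remember the smaller element */
        }
        /* swap the true minimum into position i */
        int tmp = arr[i];
        arr[i] = arr[minIdx];
        arr[minIdx] = tmp;
    }
}
```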


In this code:

 selectionSort is a function that takes an integer array arr and its length n as
parameters and sorts the array using the Selection Sort algorithm.
 The outer loop iterates through each position of the array from left to right, treating
the current element as the minimum initially; the inner loop finds the true minimum of the
remaining elements and swaps it into place.

The main function demonstrates the use of the selectionSort function on an example array.

After sorting, the array will be in ascending order.

2. Insertion Sort:

When you run this code, it will output the original array followed by the sorted array.
 Algorithm for Insertion Sort:
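The listing is not reproduced here; a C sketch of Insertion Sort is:

```c
/* Sorts arr[0..n-1] in ascending order using Insertion Sort. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];        /* element to insert into the sorted prefix */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]; /* shift larger elements one slot right */
            j--;
        }
        arr[j + 1] = key;        /* drop key into its gap */
    }
}
```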

3. Bubble Sort:
 Algorithm for Bubble Sort:
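The listing is missing here; a C sketch of Bubble Sort with the usual early-exit optimization is:

```c
/* Sorts arr[0..n-1] in ascending order using Bubble Sort. */
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int swapped = 0;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {   /* bubble the larger element right */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                swapped = 1;
            }
        }
        if (!swapped) break;   /* no swaps: array already sorted */
    }
}
```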

When you run this code, it will output the original array followed by the sorted array.
4. Merge Sort:

 Merge Sort Example: Given the array 6, 5, 12, 10, 9, 1


 Algorithm for Merge Sort:
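The code described below is not reproduced; a C sketch of merge and mergeSort (using C99 variable-length scratch arrays for brevity) is:

```c
/* Merges the sorted runs arr[l..m] and arr[m+1..r]. */
void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];                       /* C99 VLA scratch buffers */
    for (int i = 0; i < n1; i++) L[i] = arr[l + i];
    for (int j = 0; j < n2; j++) R[j] = arr[m + 1 + j];

    int i = 0, j = 0, k = l;
    while (i < n1 && j < n2)                /* take the smaller head element */
        arr[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    while (i < n1) arr[k++] = L[i++];       /* copy any leftovers */
    while (j < n2) arr[k++] = R[j++];
}

/* Recursively sorts arr[l..r] with Merge Sort. */
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = l + (r - l) / 2;
        mergeSort(arr, l, m);       /* sort the left half */
        mergeSort(arr, m + 1, r);   /* sort the right half */
        merge(arr, l, m, r);        /* merge the two sorted halves */
    }
}
```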
This code defines the merge function to merge two subarrays and the mergeSort function to
perform Merge Sort on an array of integers.

5. Quick Sort:

 Definition: Quick Sort is a divide & conquer sorting algorithm. It selects a "pivot"
element, partitions the array into elements less than and greater than the pivot, and
recursively sorts these partitions.
 Explanation: Quick Sort selects a pivot element (often the last or middle element) and
partitions the array into two subarrays: one with elements less than the pivot and one
with elements greater than the pivot. It then recursively sorts these subarrays. The
partitioning process is the key step.
 Time Complexity: O(n^2) in the worst case and O(n log n) in the average and best cases.
In practice Quick Sort is often faster than other O(n log n) algorithms.

 Algorithm for Quick Sort:
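The listing is missing; a C sketch using the last element as the pivot (Lomuto partition scheme) is:

```c
/* Partitions arr[low..high] around arr[high] as the pivot;
   returns the pivot's final index. */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;              /* end of the "less than pivot" zone */
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    /* place the pivot between the two partitions */
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;
}

/* Recursively sorts arr[low..high] with Quick Sort. */
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);    /* elements left of the pivot */
        quickSort(arr, p + 1, high);   /* elements right of the pivot */
    }
}
```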


When you run this code, it will output the original array followed by the sorted array.
6. Heap Sort:

 Example: Given array, build a tree for it and do heapify.






 After heapify, the heap represents the following array.
 Next, we delete the root element (89) from the max-heap. To delete this node, we swap it
with the last node, i.e. (11). After deleting the root element, we heapify again to restore
the max-heap property.

 After swapping the array elements 89 and 11 and converting the heap back into a max-heap,
the array elements are as follows.

 Repeat deleting the root and heapifying until no elements are left.

 Algorithm for Heap Sort:
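The code described below is not reproduced; a C sketch of heapify and heapSort is:

```c
/* Sifts the subtree rooted at index i down into max-heap order,
   for a heap occupying arr[0..n-1]. */
void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2 * i + 1, r = 2 * i + 2;   /* children of node i */
    if (l < n && arr[l] > arr[largest]) largest = l;
    if (r < n && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        heapify(arr, n, largest);       /* continue sifting down */
    }
}

/* Sorts arr[0..n-1] in ascending order using Heap Sort. */
void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)  /* build the max-heap */
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {
        /* move the current maximum to the end, then shrink the heap */
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;
        heapify(arr, i, 0);
    }
}
```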


This code defines the heapify function to heapify a subtree, and the heapSort function to
perform Heap Sort on an array of integers. The main function demonstrates how to use the
heapSort function to sort an example array.

When you run this code, it will output the original array followed by the sorted array.

7. Radix Sort or Bin Sort or Bucket Sort:

 Definition: Radix Sort is a non-comparative sorting algorithm that works by processing
the individual digits of the numbers in the array, from the least significant digit
(LSD) to the most significant digit (MSD) or vice versa.
 Explanation: Radix Sort processes the array by considering the digits of the numbers. It
starts by sorting based on the least significant digit and proceeds to the most
significant digit.
 Time Complexity: O(n * k) in the worst, average, and best cases, where n is the number
of elements and k is the number of digits in the largest number. It is efficient when k is
small compared to n.
 Radix Sort Example:

Radix-Sort Algorithm
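The algorithm listing is missing; a C sketch of LSD Radix Sort for non-negative integers, using a stable counting sort per decimal digit, is:

```c
/* Stable counting sort on the digit at the given decimal place
   (exp = 1 for units, 10 for tens, 100 for hundreds, ...). */
void countingSortByDigit(int arr[], int n, int exp) {
    int output[n];                /* C99 VLA scratch buffer */
    int count[10] = {0};

    for (int i = 0; i < n; i++)              /* count digit occurrences */
        count[(arr[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++)             /* prefix sums give end positions */
        count[d] += count[d - 1];
    for (int i = n - 1; i >= 0; i--)         /* place back-to-front for stability */
        output[--count[(arr[i] / exp) % 10]] = arr[i];
    for (int i = 0; i < n; i++)
        arr[i] = output[i];
}

/* LSD Radix Sort: sort by each digit from least to most significant. */
void radixSort(int arr[], int n) {
    int max = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > max) max = arr[i];
    for (int exp = 1; max / exp > 0; exp *= 10)
        countingSortByDigit(arr, n, exp);
}
```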

Hashing:

A hash table is a data structure which stores data in an associative manner. In a hash table,
data is stored in an array format, where each data value has its own unique index value.
Access of data becomes very fast if we know the index of the desired data. A hash table uses
an array as the storage medium and uses a hash technique to generate an index where an
element is to be inserted or located.
Hashing is commonly used for data retrieval and data storage, as it allows for quick and
efficient data lookup.

Hash Functions: A hash function is a mathematical function that takes an input (or "message")
and returns a fixed-size value. It should be deterministic, meaning that for the same input it
always generates the same hash value.

1. Division Method:

 Description: The division method is a simple hashing technique where you divide the
key by the size of the hash table and use the remainder as the hash code. It's expressed as
hash(key) = key % table_size.
 Example: Suppose you have a hash table with 10 slots (0 to 9). To hash the key 42 using
the division method, you calculate hash(42) = 42 % 10, which gives a hash code of 2.
So, key 42 would be stored in slot 2 of the hash table.
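The example above can be expressed directly in C:

```c
/* Division-method hash: the remainder of key / table_size. */
int divisionHash(int key, int table_size) {
    return key % table_size;
}
```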

2. Multiplication-Method:

 Description: The multiplication method involves multiplying the key by a fixed constant
A (0 < A < 1) and taking the fractional part of the result, scaled by the table size, as the
hash code. It's expressed as hash(key) = floor(table_size * (key * A % 1)), where A is a
constant chosen based on data characteristics.

 Example: Suppose you have a hash table with 8 slots (0 to 7). To hash the key 25 using
the multiplication method with A = 0.618, you calculate hash(25) = floor(8 * (25 * 0.618
% 1)) = floor(8 * 0.45) = floor(3.6) = 3. The key 25 would be stored in slot 3 of the hash
table.
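A C sketch of the multiplication method (extracting the fractional part without the math library):

```c
/* Multiplication-method hash with constant A (0 < A < 1). */
int multiplicationHash(int key, int table_size, double A) {
    double prod = key * A;
    double frac = prod - (long long)prod;   /* fractional part of key * A */
    return (int)(table_size * frac);        /* floor for non-negative values */
}
```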

3. Folding Method:

 Description: In the folding method, the key is split into parts, the parts are added
together, and any carry beyond the desired number of digits is discarded; the remaining
digits give the hash address.

Example - 1

= 5 (discard carry over)

Thus, h(k) = 5
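The digits of the worked example above are not reproduced, so the following C sketch only illustrates the idea, assuming two-digit parts and a one-digit hash address ("discarding the carry" is modeled as keeping the last digit of the sum):

```c
/* Folding-method hash (illustrative sketch): split the key into
   two-digit parts, sum the parts, keep the last digit of the sum. */
int foldingHash(int key) {
    int sum = 0;
    while (key > 0) {
        sum += key % 100;   /* take the next two-digit part */
        key /= 100;
    }
    return sum % 10;        /* discard the carry, keep one digit */
}
```

For instance, k = 1522 splits into 15 and 22, which sum to 37; discarding the carry leaves h(k) = 7.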
4. Mid Square Method:

 Description: In the mid-square method, the key k is squared and the middle digits of
k*k are taken as the hash address h(k).
 Examples:

k     3205      7148      2345
k*k   10272025  51093904  5499025
h(k)  72        93        99
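A C sketch of the mid-square method; note that which two digits count as "the middle" varies by convention, so the extraction rule below is simply one choice that reproduces the table above:

```c
/* Mid-square hash (illustrative sketch): square the key, then
   extract two middle digits of the square as the hash address. */
int midSquareHash(int key) {
    long long sq = (long long)key * key;
    int digits = 0;
    for (long long t = sq; t > 0; t /= 10)
        digits++;                      /* count digits of the square */
    int shift = (digits - 1) / 2;      /* drop the trailing digits */
    for (int i = 0; i < shift; i++)
        sq /= 10;
    return (int)(sq % 100);            /* keep two middle digits */
}
```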

In hashing, a collision occurs when two different keys generate the same hash code or index
in the hash table. Since hash functions map a potentially infinite set of input values to a
finite range of hash codes (the size of the hash table), collisions are inevitable, especially
when dealing with a large number of keys. Handling collisions is a critical aspect of hash
table design.

There are several collision resolution techniques to address collisions:

Separate Chaining:

Each bucket (or slot) in the hash table maintains a linked list to store the multiple values
that hash to the same index.
 Pros: Easy to implement, allows efficient handling of multiple collisions, and works
well when collisions are infrequent.
 Cons: Requires additional memory for storing the linked lists or other secondary data
structures.
Example:
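A minimal C sketch of separate chaining, assuming a 7-slot table and the division method for the hash (both are illustrative choices):

```c
#include <stdlib.h>

#define TABLE_SIZE 7

/* One node in a bucket's linked list. */
struct Node {
    int key;
    struct Node *next;
};

struct Node *buckets[TABLE_SIZE];   /* each slot heads a chain */

/* Insert a key by prepending it to its bucket's chain;
   colliding keys simply share the same chain. */
void chainInsert(int key) {
    int idx = key % TABLE_SIZE;
    struct Node *node = malloc(sizeof(struct Node));
    node->key = key;
    node->next = buckets[idx];
    buckets[idx] = node;
}

/* Search the chain at the key's bucket. Returns 1 if found, else 0. */
int chainSearch(int key) {
    for (struct Node *p = buckets[key % TABLE_SIZE]; p; p = p->next)
        if (p->key == key)
            return 1;
    return 0;
}
```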

Open Addressing:

Open addressing is a technique where, when a collision occurs, the algorithm searches
for the next available slot within the same table.
Linear Probing:
 Description: In linear probing, when a collision occurs, the algorithm searches for the
next available slot by incrementing the index linearly until an empty slot is found.
 Example: Consider a hash table with seven slots and the keys 76, 93, 40, 47, 10, 55.
When a collision occurs, linear probing checks the next available
index.
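A C sketch of linear probing for the seven-slot example (insertion only, keys hashed with key % 7):

```c
#define SIZE 7
#define EMPTY -1

int slots[SIZE] = {EMPTY, EMPTY, EMPTY, EMPTY, EMPTY, EMPTY, EMPTY};

/* Insert with linear probing: on collision, step forward one slot
   at a time (wrapping around) until an empty slot is found.
   Returns the index used, or -1 if the table is full. */
int linearProbeInsert(int key) {
    int idx = key % SIZE;
    for (int i = 0; i < SIZE; i++) {
        int probe = (idx + i) % SIZE;
        if (slots[probe] == EMPTY) {
            slots[probe] = key;
            return probe;
        }
    }
    return -1;   /* every slot is occupied */
}
```

Inserting the example keys in order places 76 in slot 6, 93 in slot 2, 40 in slot 5, 47 (which collides at 5, then 6) in slot 0, 10 in slot 3, and 55 (colliding at 6, then 0) in slot 1.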

Quadratic Probing:
 Description: Quadratic probing is similar to linear probing, but instead of incrementing
linearly, it uses a quadratic function to determine the next index to probe. This helps
reduce clustering and can lead to a more even distribution. Consider a record R with key n
and hash address hash(n) = h. Then instead of searching the locations with addresses h,
h+1, h+2, …, we search the locations with addresses
h, (h+1)%T, (h+4)%T, (h+9)%T, …, (h+i^2)%T, …
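The probe sequence above can be sketched in C (table size T = 11 is an illustrative choice):

```c
#define T 11
#define Q_EMPTY -1

int qslots[T];

/* Mark every slot empty before use. */
void qInit(void) {
    for (int i = 0; i < T; i++)
        qslots[i] = Q_EMPTY;
}

/* Insert with quadratic probing: try h, (h+1)%T, (h+4)%T, (h+9)%T, ...
   Returns the index used, or -1 if no empty slot was reached. */
int quadProbeInsert(int key) {
    int h = key % T;
    for (int i = 0; i < T; i++) {
        int probe = (h + i * i) % T;   /* offset grows as i^2 */
        if (qslots[probe] == Q_EMPTY) {
            qslots[probe] = key;
            return probe;
        }
    }
    return -1;
}
```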
Double Hashing:

 Description: Double hashing uses a secondary hash function to calculate the step
size for probing. When a collision occurs, the algorithm adds the step size to the current
index and checks the resulting index, repeating until an empty slot is found.
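A C sketch of double hashing; the pairing h1(k) = k % 7 and h2(k) = 5 - (k % 5) is a common textbook choice, not something specified in these notes:

```c
#define DH_SIZE 7
#define DH_EMPTY -1

int dhTable[DH_SIZE];

/* Mark every slot empty before use. */
void dhInit(void) {
    for (int i = 0; i < DH_SIZE; i++)
        dhTable[i] = DH_EMPTY;
}

/* Double hashing: the second hash gives the probe step size.
   Returns the index used, or -1 if no empty slot was reached. */
int doubleHashInsert(int key) {
    int h1 = key % DH_SIZE;
    int step = 5 - (key % 5);   /* h2 is never 0, so probing advances */
    for (int i = 0; i < DH_SIZE; i++) {
        int probe = (h1 + i * step) % DH_SIZE;
        if (dhTable[probe] == DH_EMPTY) {
            dhTable[probe] = key;
            return probe;
        }
    }
    return -1;
}
```

Because the step size depends on the key, two keys that collide at the same initial slot follow different probe sequences, which avoids the clustering seen with linear probing.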
