module-3-1
Searching Techniques
1. Linear Search:
Time Complexity: O(n) - Linear Search checks each element in the array once in the worst
case, so its time complexity is linear in the size of the array (n).
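A minimal C sketch of linear search (illustrative; the function name linearSearch and the
sample values are placeholders, not from the original notes):

#include <stdio.h>

/* Returns the index of key in arr[0..n-1], or -1 if it is not present. */
int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++)        /* check each element once */
        if (arr[i] == key)
            return i;
    return -1;                         /* worst case: all n elements checked */
}

int main(void) {
    int arr[] = {10, 22, 35, 40, 45};
    printf("%d\n", linearSearch(arr, 5, 40));   /* prints 3 */
    return 0;
}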
2. Binary Search:
Time Complexity: O(log n) - Binary Search repeatedly halves the search interval, so its
time complexity is logarithmic in the size of the array (n).
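A minimal C sketch of binary search on a sorted array (illustrative; the function name
binarySearch and the sample values are placeholders):

#include <stdio.h>

/* Returns the index of key in the sorted array arr[0..n-1], or -1 if not found. */
int binarySearch(int arr[], int n, int key) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* split the interval in half */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            low = mid + 1;                  /* search the right half */
        else
            high = mid - 1;                 /* search the left half */
    }
    return -1;
}

int main(void) {
    int arr[] = {10, 22, 35, 40, 45, 50, 80};
    printf("%d\n", binarySearch(arr, 7, 45));   /* prints 4 */
    return 0;
}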
3. Fibonacci Search:
Definition: Fibonacci Search is an algorithm for searching a sorted array using a
divide-and-conquer approach. It uses Fibonacci numbers to determine the split points in
the array.
Explanation: Fibonacci Search divides the search interval into smaller subintervals using
Fibonacci numbers. It is similar to Binary Search, but uses a more involved rule for
selecting the split point.
i         1    2    3    4    5    6    7    8    9    10   11
arr[i]    10   22   35   40   45   50   80   82   85   90   100

          F0   F1   F2   F3   F4   F5   F6   F7   F8   F9   F10
Fib. no   0    1    1    2    3    5    8    13   21   34   55

offset = -1
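A minimal C sketch of Fibonacci Search on the array tabulated above (illustrative; it
uses 0-based array indexing, whereas the table numbers the elements from 1):

#include <stdio.h>

static int minimum(int a, int b) { return (a < b) ? a : b; }

/* Returns the index of key in the sorted array arr[0..n-1], or -1 if not found. */
int fibonacciSearch(int arr[], int n, int key) {
    int fib2 = 0;                 /* (m-2)th Fibonacci number */
    int fib1 = 1;                 /* (m-1)th Fibonacci number */
    int fib  = fib2 + fib1;       /* m-th Fibonacci number */

    while (fib < n) {             /* smallest Fibonacci number >= n */
        fib2 = fib1;
        fib1 = fib;
        fib  = fib2 + fib1;
    }

    int offset = -1;              /* marks the range already eliminated from the front */

    while (fib > 1) {
        int i = minimum(offset + fib2, n - 1);   /* split point from Fibonacci numbers */
        if (arr[i] < key) {          /* key lies in the upper part: drop one step */
            fib  = fib1;
            fib1 = fib2;
            fib2 = fib - fib1;
            offset = i;
        } else if (arr[i] > key) {   /* key lies in the lower part: drop two steps */
            fib  = fib2;
            fib1 = fib1 - fib2;
            fib2 = fib - fib1;
        } else {
            return i;
        }
    }
    if (fib1 && offset + 1 < n && arr[offset + 1] == key)   /* last remaining element */
        return offset + 1;
    return -1;
}

int main(void) {
    int arr[] = {10, 22, 35, 40, 45, 50, 80, 82, 85, 90, 100};
    printf("%d\n", fibonacciSearch(arr, 11, 85));   /* prints 8 (0-based index of 85) */
    return 0;
}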
Sorting Techniques:
1. Selection Sort:
The selectionSort function takes an integer array arr and its length n as parameters and
sorts the array using the Selection Sort algorithm.
The outer loop iterates through each element of the array from left to right, treating
the current element as the minimum initially; the inner loop finds the actual minimum of
the remaining unsorted portion and swaps it into place.
The main function demonstrates the use of selectionSort on an example array, as in the
sketch below.
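A minimal C sketch of the selectionSort function described above (illustrative; the
example array values are placeholders, not from the original notes):

#include <stdio.h>

/* Sorts arr[0..n-1] in ascending order using Selection Sort. */
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {       /* outer loop: position to fill */
        int minIndex = i;                   /* assume the current element is the minimum */
        for (int j = i + 1; j < n; j++)     /* find the actual minimum of the rest */
            if (arr[j] < arr[minIndex])
                minIndex = j;
        int tmp = arr[i];                   /* swap the minimum into position i */
        arr[i] = arr[minIndex];
        arr[minIndex] = tmp;
    }
}

int main(void) {
    int arr[] = {64, 25, 12, 22, 11};
    selectionSort(arr, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", arr[i]);              /* prints 11 12 22 25 64 */
    printf("\n");
    return 0;
}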
2. Insertion Sort:
Algorithm for Insertion Sort: build the sorted portion of the array one element at a
time, taking the next element and inserting it into its correct position among the
already sorted elements to its left.
When you run the code below, it will output the original array followed by the sorted
array.
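The code listing referred to above is not reproduced in these notes; a minimal C sketch
of insertion sort is given here instead (illustrative; the array values are placeholders):

#include <stdio.h>

/* Sorts arr[0..n-1] in ascending order using Insertion Sort. */
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];                   /* element to insert */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {    /* shift larger sorted elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                   /* insert into its correct position */
    }
}

int main(void) {
    int arr[] = {12, 11, 13, 5, 6};
    insertionSort(arr, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", arr[i]);              /* prints 5 6 11 12 13 */
    printf("\n");
    return 0;
}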
3. Bubble Sort:
Algorithm for Bubble Sort: repeatedly step through the array, compare adjacent elements,
and swap them if they are out of order; after each pass the largest remaining element
settles at the end.
When you run the code below, it will output the original array followed by the sorted
array.
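A minimal C sketch of bubble sort (illustrative; the array values are placeholders):

#include <stdio.h>

/* Sorts arr[0..n-1] in ascending order using Bubble Sort. */
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++)             /* one pass per outer iteration */
        for (int j = 0; j < n - 1 - i; j++)     /* compare adjacent pairs */
            if (arr[j] > arr[j + 1]) {          /* swap if they are out of order */
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
}

int main(void) {
    int arr[] = {5, 1, 4, 2, 8};
    bubbleSort(arr, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", arr[i]);                  /* prints 1 2 4 5 8 */
    printf("\n");
    return 0;
}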
4. Merge Sort:
5. Quick Sort:
6. Heap Sort:
After heapify, the heap represents the following array.
Next, we delete the root element (89) from the max heap. To delete this node, we swap it
with the last node, i.e. (11). After deleting the root element, we heapify again to
restore the max heap.
After swapping the array elements 89 and 11 and re-heapifying into a max heap, the
elements of the array are as follows.
Repeat deleting the root and heapifying until no elements are left.
When you run the code below, it will output the original array followed by the sorted
array.
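The heap diagrams referenced above are not reproduced here; as a substitute, this is a
minimal C sketch of Heap Sort showing the delete-root-and-heapify loop just described
(illustrative; the values 89 and 11 come from the description, the others are
placeholders):

#include <stdio.h>

/* Sift the element at index i down so the subtree rooted at i is a max heap of size n. */
void heapify(int arr[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;
    if (left < n && arr[left] > arr[largest])
        largest = left;
    if (right < n && arr[right] > arr[largest])
        largest = right;
    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        heapify(arr, n, largest);              /* continue sifting down */
    }
}

/* Heap Sort: build a max heap, then repeatedly move the root to the end and re-heapify. */
void heapSort(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)       /* build the initial max heap */
        heapify(arr, n, i);
    for (int i = n - 1; i > 0; i--) {          /* delete the root repeatedly */
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;   /* swap root with last node */
        heapify(arr, i, 0);                    /* heapify the reduced heap */
    }
}

int main(void) {
    int arr[] = {89, 11, 40, 76, 45};
    heapSort(arr, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", arr[i]);                 /* prints 11 40 45 76 89 */
    printf("\n");
    return 0;
}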
Radix Sort Algorithm
Hashing:
A hash table is a data structure which stores data in an associative manner. In a hash
table, data is stored in an array format, where each data value has its own unique index
value. Access of data becomes very fast if we know the index of the desired data. A hash
table uses an array as a storage medium and uses a hash technique to generate the index
at which an element is to be inserted or from which it is to be located.
Hashing is commonly used for data retrieval and data storage, as it allows for quick and efficient
data lookup.
Hash Functions: A hash function is a mathematical function that takes an input (or
"message") and returns a fixed-size string of bytes. It should be deterministic, meaning
that for the same input it will always generate the same hash value.
1. Division Method:
Description: The division method is a simple hashing technique where you divide the key
by the size of the hash table and use the remainder as the hash code. It is expressed as
hash(key) = key % table_size.
Example: Suppose you have a hash table with 10 slots (0 to 9). To hash the key "42" using
the division method, you calculate hash(42) = 42 % 10, which results in a hash code of 2.
So, key "42" would be stored in slot 2 of the hash table.
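A minimal C sketch of the division method (illustrative):

#include <stdio.h>

/* Division method: hash(key) = key % table_size */
unsigned int divisionHash(unsigned int key, unsigned int table_size) {
    return key % table_size;
}

int main(void) {
    printf("%u\n", divisionHash(42, 10));   /* prints 2: key 42 goes to slot 2 */
    return 0;
}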
2. Multiplication Method:
Description: The multiplication method involves multiplying the key by a fixed constant A
(0 < A < 1) and taking the fractional part of the result to form the hash code. It is
expressed as hash(key) = floor(table_size * (key * A % 1)), where A is a constant chosen
based on the data characteristics.
Processing Steps: multiply the key by A, take the fractional part of the product,
multiply it by the table size, and take the floor of the result.
Example: Suppose you have a hash table with 8 slots (0 to 7). To hash the key "25" using
the multiplication method with A = 0.618, you calculate hash(25) = floor(8 * (25 * 0.618
% 1)) = floor(8 * 0.45) = 3, so the key "25" would be stored in slot 3 of the hash table.
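A minimal C sketch of the multiplication method with the A = 0.618 constant from the
example (illustrative):

#include <stdio.h>
#include <math.h>

/* Multiplication method: hash(key) = floor(table_size * frac(key * A)) */
unsigned int multiplicationHash(unsigned int key, unsigned int table_size) {
    const double A = 0.618;                     /* constant from the example above */
    double product = key * A;
    double frac = product - floor(product);     /* fractional part of key * A */
    return (unsigned int)floor(table_size * frac);
}

int main(void) {
    printf("%u\n", multiplicationHash(25, 8));  /* prints 3: key 25 goes to slot 3 */
    return 0;
}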
3. Mid-Square Method:
Description: the key k is squared and the middle digits of k*k are taken as the hash
code h(k).
Example:
k        3205       7148       2345
k*k      10272025   51093904   5499025
h(k)     72         93         99
Here h(k) is the fourth and fifth digits of k*k, counting from the right.
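A minimal C sketch that reproduces the table above (illustrative; it assumes the hash
code is the fourth and fifth digits of k*k counting from the right, as in the example):

#include <stdio.h>

/* Mid-square method: square the key and take the middle two digits of k*k. */
unsigned int midSquareHash(unsigned long k) {
    unsigned long square = k * k;
    return (unsigned int)((square / 1000) % 100);   /* 4th and 5th digits from the right */
}

int main(void) {
    printf("%u %u %u\n",
           midSquareHash(3205),    /* 3205*3205 = 10272025 -> 72 */
           midSquareHash(7148),    /* 7148*7148 = 51093904 -> 93 */
           midSquareHash(2345));   /* 2345*2345 =  5499025 -> 99 */
    return 0;
}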
In hashing, a collision occurs when two different keys generate the same hash code or
index in the hash table. Since hash functions map a potentially infinite set of input
values to a finite range of hash codes (the size of the hash table), collisions are
inevitable, especially when dealing with a large number of keys. Handling collisions is a
critical aspect of hash table design.
Separate Chaining:
Each bucket (or slot) in the hash table maintains a linked list to store the multiple
values that hash to the same index.
Pros: Easy to implement, allows for efficient handling of multiple collisions, and works
well when collisions are infrequent.
Cons: Requires additional memory for storing linked lists or other data structures.
Example:
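A minimal C sketch of separate chaining (illustrative; the table size, the hash function
key % TABLE_SIZE, and the sample keys are assumptions, not from the original notes):

#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 10

/* Each slot holds a linked list of the keys that hash to that index. */
struct Node {
    int key;
    struct Node *next;
};

struct Node *table[TABLE_SIZE];    /* all slots start out as empty lists */

void insertKey(int key) {
    int index = key % TABLE_SIZE;              /* division-method hash */
    struct Node *node = malloc(sizeof *node);
    node->key = key;
    node->next = table[index];                 /* prepend to the chain at this slot */
    table[index] = node;
}

int searchKey(int key) {
    for (struct Node *p = table[key % TABLE_SIZE]; p != NULL; p = p->next)
        if (p->key == key)
            return 1;                          /* found in the chain */
    return 0;                                  /* not found */
}

int main(void) {
    insertKey(42);
    insertKey(52);                             /* collides with 42: both chain in slot 2 */
    printf("%d %d\n", searchKey(52), searchKey(99));   /* prints 1 0 */
    return 0;
}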
Open Addressing:
Open addressing is one such technique: when a collision occurs, the algorithm searches
for the next available slot within the same table.
Linear Probing:
Description: In linear probing, when a collision occurs, the algorithm searches for the
next available slot by incrementing the index linearly until an empty slot is found.
Example: Suppose you have a hash table with seven slots and the keys 76, 93, 40, 47, 10,
55. When a collision occurs, linear probing checks the next available index, as in the
sketch below.
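A minimal C sketch of linear probing using the keys from the example (illustrative; it
assumes the hash function key % 7, which the notes do not state explicitly):

#include <stdio.h>

#define SIZE 7
#define EMPTY -1

int table[SIZE];

/* Linear probing: on a collision, step to the next index until an empty slot is found.
   (Assumes the table is never completely full.) */
void insertKey(int key) {
    int index = key % SIZE;
    while (table[index] != EMPTY)          /* slot taken: probe the next one */
        index = (index + 1) % SIZE;
    table[index] = key;
}

int main(void) {
    int keys[] = {76, 93, 40, 47, 10, 55};     /* keys from the example above */
    for (int i = 0; i < SIZE; i++)
        table[i] = EMPTY;
    for (int i = 0; i < 6; i++)
        insertKey(keys[i]);
    for (int i = 0; i < SIZE; i++)
        printf("slot %d: %d\n", i, table[i]);  /* 47 55 93 10 -1 40 76 */
    return 0;
}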
Quadratic Probing:
Description: Quadratic probing is similar to linear probing, but instead of incrementing
linearly, it uses a quadratic function to determine the next index to probe. This helps
reduce clustering and can lead to a more even distribution. Suppose a record R with key n
has the hash address hash(n) = h. Then, instead of probing the locations with addresses
h, h+1, h+2, ..., we probe the locations with addresses
h, (h+1) % T, (h+4) % T, (h+9) % T, ..., (h+i²) % T, ...
where T is the size of the table.
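A minimal C sketch of quadratic probing following the probe sequence above (illustrative;
the table size T = 11 and the keys are assumptions, not from the notes):

#include <stdio.h>

#define T 11          /* table size; a prime size works best */
#define EMPTY -1

int table[T];

/* Quadratic probing: probe h, (h+1)%T, (h+4)%T, (h+9)%T, ..., (h+i*i)%T. */
int insertKey(int key) {
    int h = key % T;
    for (int i = 0; i < T; i++) {
        int index = (h + i * i) % T;
        if (table[index] == EMPTY) {
            table[index] = key;
            return index;                  /* slot where the key was placed */
        }
    }
    return -1;                             /* no free slot found on the probe sequence */
}

int main(void) {
    for (int i = 0; i < T; i++)
        table[i] = EMPTY;
    printf("%d\n", insertKey(22));   /* 22 % 11 = 0 -> slot 0 */
    printf("%d\n", insertKey(33));   /* collides at 0 -> (0 + 1) % 11 = slot 1 */
    printf("%d\n", insertKey(44));   /* collides at 0 and 1 -> (0 + 4) % 11 = slot 4 */
    return 0;
}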
Double Hashing: