Sort explanation

The document provides an overview of various sorting algorithms including selection sort, bubble sort, quick sort, and flash sort, detailing their core concepts, step-by-step processes, time and space complexities, and potential variants and optimizations. Selection sort and bubble sort both have a time complexity of O(n^2), while quick sort is more efficient with O(n log n) in the average case. Flash sort is a bucket sort variant with a best-case time complexity of O(n), though it can degrade to O(n^2) in the worst case.


Selection sort

Core Concepts:
- Starting with the first index, selection sort finds the index holding the smallest
value (the min for short)

- It then swaps that value with the first index of the unsorted part of the array

- After swapping, the process continues by searching for the next index with the
smallest value

- The algorithm loops until only one unsorted element is left (a short code sketch
follows)
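
A minimal C++ sketch of this procedure (function and variable names are illustrative):

#include <utility>
#include <vector>

// Selection sort: repeatedly find the index of the smallest value in the
// unsorted suffix and swap it to the front of that suffix.
void selectionSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {    // stop when one element is left
        std::size_t minIdx = i;                          // first unsorted index starts as min
        for (std::size_t j = i + 1; j < a.size(); ++j)   // scan the rest of the unsorted part
            if (a[j] < a[minIdx]) minIdx = j;            // remember a smaller value
        std::swap(a[i], a[minIdx]);                      // move min to the front of the unsorted part
    }
}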

Step-by-step explanation:
Original array: | 67 25 18 33 55

- Step 1: | 67 25 18 33 55 (select the first index, which currently holds the min)

- Step 2: | 67 25 18 33 55 (the next index, 25, is smaller than the current min, so it becomes the min)

- Step 3: | 67 25 18 33 55 (continue searching; 18 becomes the min)

- Step 4: | 67 25 18 33 55 (the next index holds a higher value than the min, so we move forward)

- Step 5: 18 | 25 67 33 55 (after reaching the end, swap the min with the first unsorted index; the min is now sorted)

- Step 6: 18 | 25 67 33 55 (start again from the beginning of the unsorted part)
Complexity analysis:
- Time: To find the min, selection sort first scans the n elements (with n being
the number of elements) and then swaps the min into the first index. Next, it
scans the (n – 1) remaining elements, then (n – 2), and so on. The total number of
comparisons is therefore (n – 1) + (n – 2) + … + 1 = n(n – 1)/2, giving a time
complexity of O(n^2)

Best case: O(n^2)

Worst case: O(n^2)

Average case: O(n^2)

- Space: Because selection sort modifies the array in place, its space
complexity is O(1)

Variants and optimizations:

- Variants:

 A popular variant is double selection sort (sometimes called cocktail
selection sort): rather than finding only the min, each pass searches for both
the min and the max of the unsorted part
 Selection sort can also be made stable if, instead of swapping the min with the
first unsorted index, the min is inserted there and the elements in between are
shifted forward by one position
 Another popular alternative is bingo sort, which repeatedly finds the largest
remaining value and moves every element equal to it into place at once

- Optimizations: Since selection sort is one of the least efficient comparison
sorts, there is not much to optimize besides using the variants mentioned above

Bubble sort

Core Concepts:
- Starting with the first index, bubble sort checks whether the value at the
next index is higher or lower than the value at the current index

 If the next index holds a lower value, the two values are swapped and the sort
moves on to the next index
 If the next index holds a higher value, the sort simply moves on to the next index
- This process repeats until it reaches the final index, which then holds the
largest value and becomes the first sorted index

- The sort then repeats from the first index, going up to the final unsorted
index (excluding the indexes sorted in previous passes); a short code sketch
follows
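
A minimal C++ sketch of these passes (names are illustrative):

#include <utility>
#include <vector>

// Bubble sort: compare adjacent elements and swap them when the left one is
// larger; each pass pushes the largest unsorted value to the end.
void bubbleSort(std::vector<int>& a) {
    if (a.empty()) return;
    for (std::size_t pass = 0; pass + 1 < a.size(); ++pass) {
        // The last `pass` elements are already sorted, so the inner loop stops before them.
        for (std::size_t j = 0; j + 1 < a.size() - pass; ++j)
            if (a[j] > a[j + 1]) std::swap(a[j], a[j + 1]);
    }
}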

Step-by-step explanation:
Original array: 7 2 5 8 4 |

- Step 1: 7 2 5 8 4 | (the first index is selected)

- Step 2: 7 2 5 8 4 | (its value is compared with the next index)

- Step 3: 2 7 5 8 4 | (the current index holds the higher value, so the two values are swapped)

- Step 4: 2 7 5 8 4 | (move on to the next index)

- Step 5: 2 5 7 8 4 | (7 and 5 are swapped; the comparisons continue to the end of the array)

- Step 6: 2 5 7 4 | 8 (after the last comparison swaps 8 and 4, the previous last index is sorted)

- Step 7: 2 5 7 4 | 8 (start again from the beginning, excluding the sorted index)

Complexity analysis:
- Time: Like selection sort, bubble sort has a time complexity of O(n^2). The
difference is that instead of searching for the min across n, (n – 1), (n – 2), …
elements, bubble sort compares adjacent pairs across (n – 1), (n – 2), … elements
and swaps them whenever the current index holds the greater value

Best case: O(n) (for sorted data)

Worst case: O(n^2) (for reversed data)

Average case: O(n^2)

- Space: Bubble sort has a space complexity of O(1) since it also modifies the
array in place

Variants and optimizations:


- Variants:

 Odd-even sort, or simply brick sort, is one of the variants of bubble sort.
It alternates between comparing the odd/even index pairs and the even/odd index
pairs, rather than sweeping one index at a time
 Bubble sort also has a bidirectional variant called cocktail shaker sort. It
proceeds like bubble sort but alternates direction: a left-to-right pass checks
whether the current index is greater, then a right-to-left pass checks whether
the current index is smaller

- Optimization: The inner loop can be shortened so that, on each pass, it skips
the elements already placed at the end of the array instead of both loops
repeating n times, because those elements are already sorted. A further common
improvement, sketched below, is to stop as soon as a full pass makes no swaps,
which is what gives the O(n) best case on sorted data
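
A hedged C++ sketch of that early-exit check (the swapped flag and function name are illustrative):

#include <utility>
#include <vector>

// Optimized bubble sort: shrink the inner loop past the sorted tail and stop
// early when a whole pass performs no swaps, so already-sorted input finishes
// after a single O(n) pass.
void bubbleSortOptimized(std::vector<int>& a) {
    if (a.empty()) return;
    for (std::size_t pass = 0; pass + 1 < a.size(); ++pass) {
        bool swapped = false;
        for (std::size_t j = 0; j + 1 < a.size() - pass; ++j) {
            if (a[j] > a[j + 1]) {
                std::swap(a[j], a[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break;   // no swaps: the rest of the array is already in order
    }
}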

Quick sort

Core Concepts:
- First, a pivot is chosen (it can be the first, last or middle index)

- The array is divided into two smaller ones: elements smaller than (<) the
pivot's value and elements greater than or equal to (≥) it

- The pivot selection is repeated in both subarrays, which keep dividing until
only one element remains in each
- All elements are then combined into one sorted array (a short code sketch
follows)
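
A minimal C++ sketch of this scheme, with illustrative names. The pivot itself is kept out of both parts so the recursion always shrinks; this out-of-place form is the easiest to read, while the in-place partition used in practice is what gives the O(log n) space figure in the analysis below:

#include <vector>

// Quick sort as described above: pick the middle element as the pivot, split
// the remaining elements into "< pivot" and ">= pivot", sort each part
// recursively, then concatenate.
std::vector<int> quickSort(const std::vector<int>& a) {
    if (a.size() <= 1) return a;                   // zero or one element is already sorted
    std::size_t pivotIdx = a.size() / 2;
    int pivot = a[pivotIdx];
    std::vector<int> less, geq;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (i == pivotIdx) continue;               // the pivot is placed separately
        (a[i] < pivot ? less : geq).push_back(a[i]);
    }
    std::vector<int> result = quickSort(less);
    result.push_back(pivot);                       // the pivot lands in its final position
    std::vector<int> right = quickSort(geq);
    result.insert(result.end(), right.begin(), right.end());
    return result;
}

Called on the example array 9 10 3 18 27, this returns 3 9 10 18 27, matching the walkthrough below.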

Step-by-step explanation:
Original array: 9 10 3 18 27

- Step 1: 9 10 3 18 27 (choose the middle element, 3, as the pivot)

- Step 2: 3 | 10 9 18 27 (after separating the elements into < or ≥ the pivot's value)

- Step 3: 3 | 10 9 18 27 (continue by choosing a new pivot, 9, in the right part)

- Step 4: 3 | 9 | 10 18 27 (after the 2nd separation)

- Step 5: 3 | 9 | 10 18 27 (choose the final pivot, 18)

- Step 6: 3 9 10 18 27 (every pivot is in its correct spot; the previous pivots and the current subarray are combined)

Complexity analysis:
- Time: The partitioning step, which goes through nearly all the elements, takes
O(n) time, while each of the two subarrays of roughly n/2 elements is sorted
recursively. This gives the recurrence T(n) = 2T(n/2) + O(n): the recursion is
about log n levels deep and each level does O(n) partitioning work in total, so
quick sort runs in O(n log n)

Best case: O(n log n) (for random or nearly-sorted data and a middle pivot)

Worst case: O(n^2) (for sorted or reversed data and a first/last pivot)

Average case: O(n log n) (for random or nearly-sorted data and a roughly central
pivot)

- Space: In the best/average case, quick sort splits the array into two subarrays
of half (or nearly half) the original size; this happens recursively, so the
recursion depth, and hence the space complexity, is O(log n). On the other hand,
if the data is sorted/reversed and the pivot is the first or last index, each
subarray has (n – 1) elements, resulting in a space complexity of O(n)

Best case: O(log n) (for random or nearly-sorted data and a middle pivot)

Worst case: O(n) (for sorted or reversed data and a last/first pivot)

Average case: O(log n) (for random or nearly-sorted data and a roughly central
pivot)

Variants and optimizations:

- Variants:

 Multi-pivot quick sort has the same premise as normal quick sort but picks
multiple pivots per partition instead of one
 Quick radix sort is, as expected, a combination of quick sort and radix sort:
instead of comparing whole element values, it partitions on the elements' digits

- Optimization: The most popular optimization is choosing the pivot as the median
of 3 elements. Another is tail-call elimination: the second recursive call is
converted into a loop wherever possible in order to save stack space, so each
level makes one recursive call instead of two (both ideas are sketched below)
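
A hedged C++ sketch combining both optimizations, with illustrative names: the pivot is the median of the first, middle and last values, and the routine recurses only into the smaller part while looping on the larger one, which keeps the stack depth at O(log n):

#include <algorithm>
#include <utility>
#include <vector>

// In-place quick sort on the half-open range a[lo, hi).
void quickSortOpt(std::vector<int>& a, std::size_t lo, std::size_t hi) {
    while (hi - lo > 1) {
        // Median-of-three pivot value from the first, middle and last elements.
        std::size_t mid = lo + (hi - lo) / 2;
        int pivot = std::max(std::min(a[lo], a[mid]),
                             std::min(std::max(a[lo], a[mid]), a[hi - 1]));
        // Three-way partition: [lo, lt) < pivot, [lt, gt) == pivot, [gt, hi) > pivot.
        std::size_t lt = lo, i = lo, gt = hi;
        while (i < gt) {
            if (a[i] < pivot)      std::swap(a[i++], a[lt++]);
            else if (a[i] > pivot) std::swap(a[i], a[--gt]);
            else                   ++i;
        }
        // Recurse into the smaller side, then loop on the larger side.
        if (lt - lo < hi - gt) { quickSortOpt(a, lo, lt); lo = gt; }
        else                   { quickSortOpt(a, gt, hi); hi = lt; }
    }
}

Call it as quickSortOpt(v, 0, v.size()).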

Flash sort

Core Concepts:

Flash sort is a variant of bucket sort

- First, flash sort searches for the min and max values to determine the range of
the array

- Then it assigns each element to a bucket/class (using a classification
function) and counts the number of elements in each bucket

- Next, the bucket counts are accumulated into a cumulative sum, which gives each
bucket's starting position in the array

- Finally, the elements are rearranged into their buckets and each bucket is
sorted using insertion sort (a short code sketch follows)
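
A simplified C++ sketch of this idea, with illustrative names. It stores the buckets explicitly for readability, which costs O(n + m) extra space; the classic in-place formulation permutes elements directly and only needs the O(m) bucket bookkeeping quoted in the analysis below:

#include <algorithm>
#include <vector>

// Simplified flash sort: classify each value into one of m ~ 0.43*n buckets
// using the min/max range, write the buckets back in order, then finish with
// an insertion sort pass over the nearly-sorted array.
void flashSort(std::vector<int>& a) {
    std::size_t n = a.size();
    if (n < 2) return;
    int mn = *std::min_element(a.begin(), a.end());
    int mx = *std::max_element(a.begin(), a.end());
    if (mn == mx) return;                            // all values equal: nothing to do
    std::size_t m = std::max<std::size_t>(2, static_cast<std::size_t>(0.43 * n));
    std::vector<std::vector<int>> buckets(m);
    for (int v : a) {
        // Classification function: map the value range [mn, mx] onto buckets 0..m-1.
        std::size_t k = static_cast<std::size_t>(
            static_cast<long long>(m - 1) * (v - mn) / (mx - mn));
        buckets[k].push_back(v);
    }
    std::size_t pos = 0;
    for (const auto& b : buckets)                    // gather the buckets in order
        for (int v : b) a[pos++] = v;
    for (std::size_t i = 1; i < n; ++i) {            // cheap insertion sort finish
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
        a[j] = key;
    }
}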

Step-by-step explanation:

Original array: 35 8 17 26 7
- Step 1: 35 | 8 17 26 7 [separate the array into 2 buckets (using the formula
m = 0.43 * n for the number of buckets) and give each element a bucket index]

- Step 2: 35 | 8 17 26 7 (after counting the elements in each bucket, bucket 0
covers positions 0 to 3 and bucket 1 covers position 4)

- Step 3: 7 8 17 26 | 35 (since bucket 1 covers position 4, swap its only
element, 35, into that position)

- Step 4: 7 8 17 26 | 35 [the elements of bucket 0 are already inside their
region (positions 0 to 3), so nothing moves]

- Step 5: 7 8 17 26 | 35 [after sorting buckets 0 and 1 with insertion sort
(bucket 0 happens to be sorted already and bucket 1 has only one element)]

Complexity analysis:

- Time:

Best case: O(n) (the buckets hold nearly the same number of elements)

Worst case: O(n^2) (almost all elements fall into a few buckets)

Average case: O(n^(3/2))

- Space: O(m), with m being the number of buckets (stored as vectors)
Variants and optimizations:
