DSA Notes
DSA-1 - contains important concepts of 2nd and 3rd unit

Problem Solving:
Problem-solving is a critical skill for success in any field.
Problem solving involves using critical thinking, logic, creativity, and analysis to identify and
overcome obstacles in order to reach a desired outcome.
Steps in Problem Solving:
- Identify the problem.
- Develop a plan of action.
- Implement the chosen solution.
- Evaluate the effectiveness of the solution.
To solve a problem, you need to follow a logical approach, which involves breaking down the
problem into smaller sub-problems, creating an algorithm to solve each sub-problem, and
implementing the algorithm using an appropriate data structure.
Effective problem solving requires a combination of critical thinking skills, creativity, and
perseverance. It is important to remain open-minded, flexible, and willing to adapt to new
information or circumstances.
Problem: A problem is a task or a question that needs to be solved or answered.
The key to solving a problem is to understand the problem clearly and break it down into
smaller, more manageable parts.
Logic
Once you understand the problem, you need to develop a logical approach to solving it.
Logic involves thinking through the problem step by step and determining the most efficient way
to solve it.
This requires a deep understanding of the problem and the ability to break it down into smaller,
more manageable parts.
Algorithm
An algorithm is a set of instructions that tells a computer what steps to take to solve a problem.
An algorithm is a well-defined, step-by-step procedure for solving a problem or achieving a
specific goal.
The main characteristics of algorithms are:
1. Well-Defined: An algorithm must have a clear and unambiguous definition that leaves no room for interpretation. Each step of the algorithm must be precisely defined so that it can be executed in a systematic and predictable way.
2. Input: An algorithm must take input values, which are the data or information to be processed by the algorithm. The input can be of any type, such as numbers, characters, strings, or even other algorithms.
3. Output: An algorithm must produce an output, which is the result of the processing performed on the input. The output can be of any type, depending on the purpose of the algorithm, such as a number, a string, a list, or even another algorithm.
4. Finiteness: An algorithm must terminate after a finite number of steps. It must not continue indefinitely; otherwise, it would be useless for practical purposes.
5. Effectiveness: An algorithm must be effective, meaning that it must always produce the correct output for any given input within a reasonable amount of time.
6. Generality: An algorithm must be general, meaning that it must be applicable to a wide range of problems, not just a specific one.
7. Uniqueness: The result of each step of an algorithm is uniquely defined and depends only on the input and the results of the preceding steps.
8. Clear and unambiguous: Each step of the algorithm must be clear and unambiguous, with no room for interpretation.
An algorithm needs to be clear, concise, and unambiguous. It should be designed to solve the
problem efficiently, with the fewest possible steps.
Algorithm design tools such as pseudocode and flowcharts can be used to aid in the design and
communication of algorithms.
There are several tools used to design and represent algorithms, including pseudocode,
flowcharts, and programming languages.
Pseudocode
Pseudocode is a high-level description of an algorithm that uses a combination of natural
language and programming language syntax.
It is used to help programmers understand the steps required to solve a problem and to
communicate those steps to others.
Pseudocode is particularly useful in the early stages of algorithm design, as it allows
programmers to quickly and easily experiment with different approaches and make changes
without worrying about the specifics of a particular programming language.
Pseudocode can also be used as a way to communicate ideas to non-technical stakeholders,
such as project managers or clients.
Pseudocode can be written in any format, but it typically uses a mix of English language
phrases and symbols from programming languages such as variables, loops, and conditional
statements.
Example
# Pseudocode for finding odd and even numbers.
1. Start
2. Get an input number from the user and store it in a variable called num.
3. If num modulo 2 is equal to 0:
   a. Print "num is even"
4. Else:
   a. Print "num is odd"
5. End
# Pseudocode for finding the factorial of a number.
1. Start
2. Take an input number from the user and store it in a variable called num.
3. Initialise a variable called factorial to 1.
4. Use a loop to multiply factorial by each integer from 1 to num:
   a. Set a variable called i to 1.
   b. While i is less than or equal to num, do the following:
      i. Multiply factorial by i.
      ii. Increment i by 1.
5. Print the value of factorial.
6. End
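As a sketch of how the two pseudocode fragments above might translate into Python (the names num, factorial, and i follow the pseudocode; wrapping the steps in functions is a choice made here for testability):

```python
def parity(num):
    # Steps 3-4 of the first pseudocode: check num modulo 2.
    if num % 2 == 0:
        return f"{num} is even"
    else:
        return f"{num} is odd"

def factorial(num):
    # Steps 3-4 of the second pseudocode: multiply factorial by 1..num.
    factorial = 1
    i = 1
    while i <= num:
        factorial = factorial * i
        i = i + 1
    return factorial

print(parity(7))      # 7 is odd
print(factorial(5))   # 120
```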
Data Structure
A data structure is a way of organising and storing data in computer programs so that it can be
accessed and used efficiently.
It is a way to manage and manipulate data to make it more accessible and useful.
Think of it like a toolbox with different compartments for storing different tools. Each compartment is designed to hold a specific type of tool and make it easy to find and use when you need it.
Similarly, a data structure is designed to store data in a way that makes it easy to access,
manipulate, and perform operations on it.
Some common examples include arrays, linked lists, stacks, queues, trees, and graphs.
Each data structure has its own set of operations that can be performed on it, such as adding or
removing elements, searching for elements, and sorting the data.
Choosing the right data structure is important for the efficiency and effectiveness of a program
or system.
Some data structures and their general uses:
Array - Efficient random access and iteration over a contiguous block of memory.
Linked List - Efficient insertion and deletion operations, especially in the middle of the list.
Stack - Last-in, first-out (LIFO) data storage and retrieval.
Queue - First-in, first-out (FIFO) data storage and retrieval.
Hash Table - Fast key-value lookup and insertion.
Tree - Hierarchical data storage and retrieval, efficient searching and sorting.
Graph - Non-linear data storage and retrieval, efficient traversal and pathfinding algorithms.
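Two of these uses can be sketched quickly in Python (a plain list serves as the stack, and collections.deque as the queue; the variable names are illustrative):

```python
from collections import deque

# Stack: last-in, first-out (LIFO).
stack = []
stack.append(1)    # push
stack.append(2)    # push
top = stack.pop()  # pop -> 2, the most recently added element

# Queue: first-in, first-out (FIFO).
queue = deque()
queue.append("a")        # enqueue
queue.append("b")        # enqueue
front = queue.popleft()  # dequeue -> "a", the earliest added element

print(top, front)  # 2 a
```

A deque is used for the queue because popping from the front of a plain list is a linear-time operation, while deque.popleft() is constant-time.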
Data
Data refers to raw, unprocessed facts or figures.
It is a collection of individual facts, such as numbers, letters, or symbols, that do not provide any
meaning or context on their own.
For example, a list of numbers or a collection of characters would be considered data.
Data can be structured or unstructured, and can come from a variety of sources, including
sensors, databases, or human input.
Information
Information is data that has been processed, organised, or structured in a meaningful way to
provide context or insight.
Information is data that has been given meaning, either through relationships or associations,
and can be used to answer a specific question or make a decision.
For example, a graph or chart that displays the data in a visual format, or a report that
summarises the data into key findings would be considered information.
Knowledge
Knowledge is the understanding or insight that is gained from information.
It is the application of information to solve problems, make decisions, or gain insights.
Knowledge is gained by analysing and synthesising information, and understanding how it fits
into a larger context.
Data, information, and knowledge are terms that are closely related but have different
meanings.
The relationship between data, information, and knowledge can be seen as a progression from
raw facts (data), to processed information (data with meaning), to insights and understanding
(knowledge).
Data is the building block, information is the result of processing data, and knowledge is the
application of information to gain deeper understanding and insights.
Abstract Data Types (ADTs)
Abstract Data Types (ADTs) are a way of defining a data structure by specifying a set of
operations that can be performed on that structure, without necessarily specifying how those
operations are implemented.
ADTs provide a way to separate the interface of a data structure from its implementation.
This separation allows different implementations of the same ADT to be used interchangeably,
as long as they conform to the same interface.
This makes it possible to change the implementation of a data structure without affecting the
code that uses it.
Example: it's like having a set of instructions for a game. As long as all players follow the same set of rules, it doesn't matter what materials or equipment they use to play the game. Similarly, ADTs provide a set of rules for how data can be stored and manipulated. As long as different implementations of an ADT follow these rules, they can be used interchangeably in a program without affecting how the program works.
The interface of an ADT refers to the set of operations or methods that can be performed on the
data structure. It defines what the data structure can do, but not how it does it.
These operations may include creating and destroying the data structure, inserting and deleting
elements, accessing and modifying elements, and iterating over the elements.
For example, the interface of a list ADT might include methods like add, remove, and get, which
allow you to add items to the list, remove items from the list, and retrieve items from the list,
respectively.
Some examples of commonly used ADTs include lists, stacks, queues, and maps.
A list is an ordered collection of elements
A stack is a collection of elements that can be accessed in a last-in-first-out (LIFO) order.
A queue is a collection of elements that can be accessed in a first-in-first-out (FIFO) order.
A map is a collection of key-value pairs that can be accessed by key.
Difference Between ADTs and Data Structure:
The ADT specifies what its operations should do, but not how they should be implemented. On the other hand, a data structure is a concrete implementation of an ADT.
A data structure specifies how the operations defined by the ADT are actually carried out using a specific set of data elements and relationships between them.
ADTs define the abstract operations that a data structure can perform, while data structures are the actual implementations of those operations using specific data elements and structures.
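To illustrate the distinction, here is a minimal sketch (the class and function names are made up for this example): two different data structures implementing the same stack ADT interface, interchangeable from the caller's point of view:

```python
class ListStack:
    """Stack ADT implemented on top of a Python list."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

class LinkedStack:
    """The same stack ADT implemented with linked nodes."""
    class _Node:
        def __init__(self, value, nxt):
            self.value, self.next = value, nxt
    def __init__(self):
        self._top = None
    def push(self, item):
        self._top = self._Node(item, self._top)
    def pop(self):
        node = self._top
        self._top = node.next
        return node.value
    def is_empty(self):
        return self._top is None

# Code written against the ADT interface works with either implementation.
def reverse(items, stack):
    for x in items:
        stack.push(x)
    return [stack.pop() for _ in items]

print(reverse([1, 2, 3], ListStack()))    # [3, 2, 1]
print(reverse([1, 2, 3], LinkedStack()))  # [3, 2, 1]
```

The reverse function only relies on push and pop, so swapping one implementation for the other changes nothing in the calling code.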
Use of ADTs:
ADTs are an important tool in software development because they provide a way to encapsulate
data and operations in a single unit.
This encapsulation makes it easier to reason about the behaviour of a program, and reduces
the likelihood of errors caused by the misuse of data structures.
Classification Of Data Structure
Data structures can be classified based on several criteria, including their organisation and
behaviour.
1) Linear & Non-Linear Data structure
Linear and non-linear data structures refer to how the data elements are arranged and
connected within a data structure.
Linear Data Structure: Linear data structures are those in which the data elements are
arranged in a sequential order, with a single predecessor and a single successor element.
This means that the elements can be traversed in a linear or sequential manner, either forward
or backward.
Some examples of linear data structures include:
- Arrays: A fixed-size collection of elements, where each element is identified by an index.
- Linked lists: A dynamic collection of elements, where each element is connected to the next element via a link or pointer.
- Stacks: A collection of elements that supports two operations: push, which adds an element to the top of the stack, and pop, which removes the top element from the stack.
- Queues: A collection of elements that supports two operations: enqueue, which adds an element to the end of the queue, and dequeue, which removes the first element from the queue.
Non-Linear Data Structure: Non-linear data structures have elements arranged in a hierarchical or non-sequential manner, with multiple predecessor and/or successor elements.
This means that the elements cannot be traversed in a linear or sequential manner, but instead
require a specialised traversal algorithm.
Some examples of non-linear data structures include:
- Trees: A collection of elements arranged in a hierarchical manner, with each element having a single predecessor (except for the root node) and multiple successor elements.
- Graphs: A collection of nodes and edges that represent a set of relationships between the nodes. Graphs can be directed or undirected, and can have cycles or be acyclic.
Linear data structures are useful for storing and accessing data in a specific order, while
non-linear data structures are useful for representing complex relationships or hierarchies
between elements.
2) Static & Non-Static Data structure
These terms refer to how the size of the data structure is determined and whether it can be
changed after it has been created.
Static Data Structure: Static data structures are fixed in size and cannot be resized once they
are created.
The size of the data structure must be specified in advance and cannot be changed during
runtime.
If the size of the data structure is known in advance and will not change during runtime, a static
data structure may be more efficient.
This is because the memory allocation for a static data structure can be done at compile-time,
making it faster to access the elements in the structure during runtime.
Examples of static data structures: arrays and static linked lists.
Non-Static Data structure: Dynamic or Non-static data structures can change their size during
runtime.
They can be resized to accommodate new data elements or to remove existing ones. The size
of a dynamic data structure is not fixed at the time of creation and can be adjusted as needed.
If the size of the data structure is not known in advance or may change during runtime, a dynamic data structure is more appropriate.
‘wedormersownsnn EY studocu
Downloaded by Harshad Pakhal
learntech2215@gmallcom)Dynamic data structures provide more flexibility and can adjust to the changing requirements of
the problem being solved. The downside is that dynamic data structures may require more
memory allocation during runtime, which can affect performance.
Examples of Dynamic data structures: Dynamic arrays, linked lists, stacks, and queues.
3) Persistent & Ephemeral Data Structures
These terms refer to how the data structure behaves with respect to time and updates.
Ephemeral Data Structure: Ephemeral data structures are those in which the original structure
is modified every time a change is made.
This means that after an update is made, the original data structure is lost, and only the updated
version remains.
Examples of ephemeral data structures include arrays, linked lists, stacks, and queues.
Persistent Data Structure: Persistent data structures are those in which the original data
structure remains unchanged after updates.
This means that multiple versions of the same data structure can exist at different points in time,
with each version representing a different state of the structure.
Persistent data structures are particularly useful in applications where multiple versions of the same data structure need to be maintained.
There are two main types of persistent data structures:
1) Fully Persistent Data Structure: Fully persistent data structures allow you to access
and modify any version of the data structure that has been created, while still keeping
the original version intact.
This means you can go back in time and see what the data looked like at any point in the past.
2) Partially Persistent Data Structure: Partially persistent data structures allow you to read any earlier version of the data structure, but only the most recent version can be modified.
This means you can see what the data looked like at any point in the past, but new changes can only be applied to the latest version.
Persistent data structures are like a time machine that allows you to go back in time and see what the data looked like at an earlier point.
Implementing persistent data structures can be more challenging than implementing ephemeral data structures, as it requires keeping track of multiple versions of the same data structure.
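A very simple sketch of the idea in Python: each update returns a new version instead of modifying the old one, so every earlier version remains readable. (Copying the whole list, as done here, is the naive approach; real persistent structures share unchanged parts between versions to avoid that cost.)

```python
def persistent_append(version, item):
    """Return a new version of the list; the old version is untouched."""
    return version + [item]  # builds a copy rather than mutating

v0 = []
v1 = persistent_append(v0, "a")
v2 = persistent_append(v1, "b")

# All versions remain available, like a time machine over the data.
print(v0, v1, v2)  # [] ['a'] ['a', 'b']
```

An ephemeral version of the same operation would be v0.append("a"), after which the original empty list no longer exists.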
Complexity Of Algorithms
In computer science, the Complexity of an Algorithm refers to the amount of time and space
resources required for the algorithm to solve a particular problem.
This complexity can be analysed and measured in terms of two main types: Time complexity
and Space complexity.
Time Complexity:
Time complexity refers to the amount of time it takes for an algorithm to solve a problem.
Time complexity is a measure of the amount of time or number of operations required by an
algorithm to solve a problem, as the size of the input data grows.
The time complexity of an algorithm depends on several factors, including the size of the input
data, the number of operations performed by the algorithm, and the efficiency of the operations
themselves.
Space Complexity:
Space complexity is a measure of the amount of memory required by an algorithm to solve a
particular problem. It refers to the amount of memory used by an algorithm to store data,
variables, and other information during its execution.
There are two main types of space complexity:
- Auxiliary space complexity refers to the amount of extra memory required by an algorithm beyond the input data. This includes any additional space needed for variables, temporary data structures, and other overhead.
- Total space complexity refers to the total amount of memory required by an algorithm, including both the input data and any additional memory used by the algorithm.
Time complexity Analysis:
Running time analysis is an important aspect of algorithm analysis, as it helps in determining the
efficiency of an algorithm.
It is also known as time complexity analysis, and it involves determining how the processing
time of a problem increases as the size of the input increases.
Comparing the Algorithms:
When comparing algorithms, it is important to use objective measures that are independent of machine time (execution time is specific to a particular computer), programming style (the number of statements varies with programming languages as well as with the style of the individual programmer), or other factors that can influence execution times.
One common approach is to express the running time of an algorithm as a function of the
input size, and compare these functions to determine which algorithm is more efficient for a
given problem.
Some objective measures that can be used to compare algorithms:
1. Asymptotic analysis: Asymptotic analysis involves analyzing the growth rate of an algorithm's running time as the input size increases.
2. Counting basic operations: Counting the total number of basic operations (additions, subtractions, multiplications, etc.) performed by an algorithm as a function of the input size is another objective measure.
Rate of Growth:
The rate at which running time increases as a function of input is called rate of growth.
Some commonly used rates of growth:

Time Complexity   Name
O(1)              Constant
O(log n)          Logarithmic
O(n)              Linear
O(n log n)        Linear Logarithmic
O(n^2)            Quadratic
O(n^3)            Cubic
O(2^n)            Exponential
O(n!)             Factorial
One common technique for approximation is to ignore the low-order terms in a polynomial expression.
In general, as the input size of an algorithm increases, the contribution of low-order terms becomes relatively insignificant compared to the highest-order term. For example, consider the following polynomial expression:
n^4 + 2n^3 + 100n^2 + 500 ≈ n^4 (for large values of input size, n)
We can express the time complexity using big-O notation, where we focus on the dominant term
of the expression representing the running time. Here are some common types of time
complexities and their characteristics:
1. Linear time complexity (O(n)):
This means that the running time of the algorithm increases linearly with the input size.
For example, if we have a loop that iterates n times and does constant work on each iteration, then the time complexity would be O(n). As the input size doubles, the running time would also double.
2. Quadratic time complexity (O(n^2)):
This means that the running time of the algorithm grows with the square of the input size.
For example, if we have a loop nested inside another loop, where both loops iterate n times, then the time complexity would be O(n^2). As the input size doubles, the running time would increase by a factor of four.
3. Cubic time complexity (O(n^3)):
This means that the running time of the algorithm grows with the cube of the input size.
For example, if we have three nested loops, where each loop iterates n times, then the time complexity would be O(n^3). As the input size doubles, the running time would increase by a factor of eight.
4. Logarithmic time complexity (O(log n)):
This means that the running time of the algorithm grows logarithmically with the input size.
For example, if we have a binary search algorithm, where we divide the input in half at each
step, then the time complexity would be O(log n). As the input size doubles, the running time
would only increase by a constant amount.
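The loop patterns described above can be sketched directly in Python. These functions simply count iterations, so the returned counts show how the amount of work grows with n (the function names are chosen for this illustration):

```python
def linear_work(n):
    # O(n): a single pass over the input.
    count = 0
    for _ in range(n):
        count += 1
    return count

def quadratic_work(n):
    # O(n^2): a loop nested inside another loop.
    count = 0
    for _ in range(n):
        for _ in range(n):
            count += 1
    return count

def logarithmic_work(n):
    # O(log n): the problem size is halved at each step,
    # as in binary search.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

print(linear_work(8), quadratic_work(8), logarithmic_work(8))  # 8 64 3
```

Doubling n to 16 would give counts of 16, 256, and 4: the linear count doubles, the quadratic count quadruples, and the logarithmic count rises by just one.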
Types of analysis:
When analyzing algorithms, there are three types of analysis that are commonly used:
1. Worst-Case Analysis:
Determines the maximum amount of time an algorithm will take to complete for any given
input.
Provides an upper bound on the running time of the algorithm, which is useful in situations
where the algorithm must complete within a certain time frame.
2. Best-Case Analysis:
Determines the minimum amount of time an algorithm will take to complete for any given
input.
Provides a lower bound on the running time of the algorithm, which is useful in situations
where the algorithm must complete as quickly as possible.
3. Average-Case Analysis:
Determines the expected running time of an algorithm over a range of inputs.
Provides a more realistic estimate of the running time of the algorithm and is often used in situations where the input is likely to be random or unpredictable.
Asymptotic notation:
Asymptotic notation is a mathematical notation used to describe the behavior of a function as its input grows to infinity.
It is commonly used to analyze the running time or space complexity of algorithms.
1. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the function, indicating the maximum growth rate of the function.
2. Omega Notation (Ω-notation):
Omega notation represents the lower bound of the function, indicating the minimum growth rate of the function.
3. Theta Notation (Θ-notation):
Theta notation represents both the upper and lower bounds of the function, indicating the exact growth rate of the function.
Asymptotic notation is useful because it allows us to compare the performance of algorithms without being concerned about the exact values of the input or the constants involved in the running time or space complexity.
Algorithmic strategies:
Algorithmic strategies refer to the techniques and approaches used to solve problems using
algorithms.
1. Divide and Conquer:
Divide and Conquer is a popular algorithmic strategy that involves breaking down a complex problem into smaller subproblems that can be more easily solved, and then combining the results of these subproblems to obtain the solution to the original problem.
Divide and Conquer is used in many well-known algorithms, such as merge sort, quicksort, and binary search.
These algorithms are typically efficient, as they can reduce the time complexity of the problem from O(n^2) to O(n log n) or even O(log n).
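A sketch of merge sort, one of the divide-and-conquer algorithms named above: the list is split in half (divide), each half is sorted recursively (conquer), and the sorted halves are combined (merge), giving O(n log n) overall:

```python
def merge_sort(items):
    if len(items) <= 1:  # base case: a list of 0 or 1 items is sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # divide, then conquer each half
    right = merge_sort(items[mid:])
    # Combine: repeatedly take the smaller front element of the halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # whatever remains is already sorted
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # [1, 2, 3, 5, 8, 9]
```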
2. Greedy Strategy:
The greedy strategy is a popular algorithmic strategy that involves making the locally optimal
choice at each step in the hope of finding a global optimum.
In other words, it involves making the best possible decision at the current time without worrying
about the future consequences of that decision.
The key steps in the Greedy strategy are:
- Selection: Choose the locally optimal choice at the current time, based on the available options.
- Feasibility: Check if the selected option satisfies all the constraints of the problem.
- Solution: Update the solution with the selected option and repeat the process until the problem is solved.
The Greedy strategy is often used in optimization problems, where the goal is to find the best possible solution from a set of choices.
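A classic sketch of the greedy strategy: making change with the fewest coins by always selecting the largest coin that still fits (the locally optimal choice). The coin denominations are an assumption for this example; this greedy approach happens to give an optimal answer for denominations like these, but not for every coin system:

```python
def greedy_change(amount, denominations=(25, 10, 5, 1)):
    coins = []
    for coin in sorted(denominations, reverse=True):
        # Selection: take the largest remaining coin;
        # Feasibility: only while it does not exceed the remaining amount.
        while amount >= coin:
            coins.append(coin)
            amount -= coin
    return coins  # Solution: built up one greedy choice at a time

print(greedy_change(63))  # [25, 25, 10, 1, 1, 1]
```

With denominations (4, 3, 1) and amount 6, the same code would pick [4, 1, 1] even though [3, 3] uses fewer coins, which is why the greedy strategy only "hopes" for a global optimum.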
3. Brute Force:
Brute force is a straightforward algorithmic strategy that involves trying every possible solution
to a problem in a systematic manner.
In other words, it involves searching through all possible solutions to find the best one.
The brute force strategy is often used in problems where the number of possible solutions is relatively small.
For example, in a game of tic-tac-toe, there are only a limited number of possible moves, so the
brute force strategy can be used to find the optimal move for a given situation.
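A minimal sketch of brute force on a different small problem (the pair-sum task is chosen here purely for illustration; enumerating tic-tac-toe moves follows the same pattern of systematically trying every possibility):

```python
def pair_with_sum(numbers, target):
    n = len(numbers)
    # Systematically try every possible pair: O(n^2) candidates.
    for i in range(n):
        for j in range(i + 1, n):
            if numbers[i] + numbers[j] == target:
                return (numbers[i], numbers[j])
    return None  # exhausted every possibility without success

print(pair_with_sum([3, 8, 4, 6], 10))  # (4, 6)
```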
Linear Data Structure Using Sequential
Organization
A linear data structure is a type of data structure where data elements are arranged in a
sequential order, and each element has a unique successor and predecessor, except for the
first and last elements.
Sequential organization is a method used to arrange data elements in a linear data structure.
In this method, the data elements are placed in a row, one after another, in a continuous block
of memory.
Each element is placed at a fixed amount of space in the memory block.
The order of the elements is maintained in the memory, with the first element placed at the
beginning of the block and the rest of the elements placed in increasing memory addresses as
we move ahead in the block.
Examples of linear data structures that use sequential organization:
1. Array:
In an array, elements are stored in a sequence, and each element can be accessed using its
index position.
The index position of an element in an array is its distance from the beginning of the array,
measured in units of the size of each element.
2. Linked list:
In a linked list, each element is stored in a node, which contains a reference to the next element
in the sequence.
The nodes can be stored in a contiguous block of memory, with each node containing a fixed
amount of space for the data element and a reference to the next node.
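A minimal sketch of the node idea in Python (here the "reference to the next node" is simply an attribute holding another node object; in Python the nodes need not actually occupy contiguous memory):

```python
class Node:
    def __init__(self, data):
        self.data = data   # the stored element
        self.next = None   # reference to the next node, or None at the end

# Build a three-element list: 1 -> 2 -> 3
head = Node(1)
head.next = Node(2)
head.next.next = Node(3)

# Traverse sequentially by following the next references.
values = []
node = head
while node is not None:
    values.append(node.data)
    node = node.next
print(values)  # [1, 2, 3]
```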
Sequential organization is a simple and efficient way to implement linear data structures, as it
allows for fast access to any element in the sequence using its index position.
However, it has limitations in terms of flexibility and scalability, as the size of the data structure
must be fixed at the time of allocation, and adding or removing elements can require expensive
memory reallocation and copying.
Array:
An array is a linear data structure that is used to store a collection of elements of the same data
type.
In Python, an array can be created using the built-in list data type. For example, the following
code creates an array of integers:
my_array = [1, 2, 3, 4, 5]
In this example, my_array is an array of 5 integers, with the first element being 1, the second
element being 2, and so on.
One important thing to note about arrays is that they have a fixed size, meaning that once an array is created, its size cannot be changed. (Python's built-in list is actually dynamic; the fixed-size property describes classic arrays.)
It is a fixed-size container that can store elements in contiguous memory locations.
Each element in an array is identified by an index, which is a non-negative integer that
represents the position of the element in the array.
An array can be thought of as an abstract data type (ADT) because it provides a set of
operations that can be performed on it, regardless of the specific implementation details.
The most common operations performed on an array include:
- Accessing an element at a specific index position
- Updating an element at a specific index position
- Adding an element to the end of the array
- Removing an element from the array
As an Abstract Data Type (ADT), arrays can be implemented using various data structures,
such as linked lists, trees, or hash tables, and the selection of the data structure for
implementation depends on the requirements of the application.
For example, if we need to frequently add or delete elements from the array, linked lists can be a better choice because they allow efficient insertion and deletion.
Similarly, if we want to perform efficient searching and sorting operations, we can choose data
structures like trees or hash tables.
The choice of implementation is crucial for the performance and efficiency of the program.
Accessing Elements: Elements in an array can be accessed using their index.
In Python, arrays are zero-indexed, which means that the first element in the array has an index of 0, the second element has an index of 1, and so on.
The syntax for accessing an element of an array in Python is:
arr[index]
Here, arr is the name of the array, and index is the index of the element that we want to access.
Inserting Elements: Elements can be inserted into an array at a specific position using the
insert() method in Python.
This method takes two arguments: the index where the new element should be inserted, and
the value of the new element.
For example, if arr = [1, 2, 3], to insert the value 4 at index 2 of the arr array, we can use the following code:
arr.insert(2, 4)
print(arr)  # Output: [1, 2, 4, 3]
Note that inserting an element at a position other than the end of the array requires shifting all
subsequent elements by one position, which can be an expensive operation for large arrays.
Deleting Elements: Elements can be removed from an array using the pop() method in Python.
This method takes an optional argument, which is the index of the element to be removed.
If no index is specified, pop() removes and returns the last element of the array.
For example, to remove the second element (which has an index of 1) from the arr array, we
can use the following code:
arr.pop(1)
print(arr) # Output: [1, 4, 3]
Updating Elements: Elements in an array can be updated by assigning a new value to the
corresponding index.
For example, to update the second element of the arr array to have a value of 5, we can use the
following code:
arr[1] = 5
print(arr) # Output: [1, 5, 3]
Sorting Elements: Elements in an array can be sorted using the built-in sorted() function in
Python, which returns a new sorted list without modifying the original array. Alternatively, the
sort() method can be used to sort the array in place.
For example, to sort the arr array in ascending order, we can use the following code:
sorted_arr = sorted(arr)
print(sorted_arr) # Output: [1, 3, 5]
arr.sort()
print(arr) # Output: [1, 3, 5]
Merging Of Two Arrays:
Merging of two arrays involves combining the elements of two arrays into a single array in a
specific order.
Process of Merging two arrays:
1. Merging two arrays means taking two arrays and creating a new array that contains all the
elements of both the original arrays in a sorted order.
2. To do this, we first compare the first elements of both arrays and select the smaller element.
We add the selected element to the new array and move to the next element of the array from
which the selected element was taken. We continue this process until all elements of both
arrays are included in the new array.
3. Once all the elements from one of the arrays have been compared and appended to the
merged_array, there may still be some elements remaining in the other array. To include these
remaining elements, we use two additional while loops.
4. First, we check if there are any elements remaining in array1. If yes, we append them to the
merged_array and increment the index variable for array1. We continue this until all the
remaining elements from array1 have been added to the merged_array.
5. Then, we check if there are any elements remaining in array2. If yes, we append them to the
merged_array and increment the index variable for array2. We continue this until all the
remaining elements from array2 have been added to the merged_array.
This process is commonly used in sorting algorithms, database operations, and other data
manipulation tasks.
# Defining two arrays to be merged (sample values for illustration; any sorted arrays work)
array1 = [1, 3, 5, 7]
array2 = [2, 4, 6, 8]
# Initializing the merged array
merged_array = []
# Initializing the index variables for the two arrays
i = 0
j = 0
# Merging the two arrays
while i < len(array1) and j < len(array2):
    if array1[i] < array2[j]:
        merged_array.append(array1[i])
        i += 1
    else:
        merged_array.append(array2[j])
        j += 1
# Appending the remaining elements of the first array
while i < len(array1):
    merged_array.append(array1[i])
    i += 1
# Appending the remaining elements of the second array
while j < len(array2):
    merged_array.append(array2[j])
    j += 1
print(merged_array)

Fibonacci Search:
def fibonacci_search(arr, target):
    n = len(arr)
    # Initialize the Fibonacci numbers: fib_n = fib_n_1 + fib_n_2
    fib_n_2 = 0
    fib_n_1 = 1
    fib_n = fib_n_1 + fib_n_2
    # Find the smallest Fibonacci number greater than or equal to n
    while fib_n < n:
        fib_n_2 = fib_n_1
        fib_n_1 = fib_n
        fib_n = fib_n_1 + fib_n_2
    # Marks the portion of the array eliminated from the front
    offset = -1
    while fib_n > 1:
        # Compute the index of the next element to compare
        i = min(offset + fib_n_2, n - 1)
        # If the target is greater than the current element, move the offset and Fibonacci numbers to the right
        if arr[i] < target:
            fib_n = fib_n_1
            fib_n_1 = fib_n_2
            fib_n_2 = fib_n - fib_n_1
            offset = i
        # If the target is less than the current element, move the Fibonacci numbers to the left
        elif arr[i] > target:
            fib_n = fib_n_2
            fib_n_1 = fib_n_1 - fib_n_2
            fib_n_2 = fib_n - fib_n_1
        # If the target is found, return the index
        else:
            return i
    # Check the last remaining element; otherwise the target is not found
    if fib_n_1 and offset + 1 < n and arr[offset + 1] == target:
        return offset + 1
    else:
        return -1
This implementation takes an array arr and a target element target, and returns the index of the
target element in the array using the Fibonacci search algorithm. The algorithm works by first
computing the smallest Fibonacci number that is greater than or equal to the size of the array,
and then using that number to divide the array into subarrays to be searched. The search is
similar to binary search, but the indexes of the elements to be compared are chosen based on
the Fibonacci numbers. The algorithm has a time complexity of O(log n).
Indexed sequential search
Indexed sequential search is a search algorithm that improves the efficiency of sequential
search by dividing the list into blocks and creating an index for each block.
The index contains the key value of the first element of each block and its position in the list.
This way, when searching for an element, we can first check the index to determine the block
that may contain the element and then search within that block only, instead of searching the
entire list.
The algorithm works as follows:
1. Divide the list into blocks of fixed size.
2. Create an index for each block by storing the key value of the first element of the block
and its position in the list.
3. Sort the index by the key value.
4. Search for an element by first checking the index to determine the block that may
contain the element, then search sequentially within that block only.
5. If the element is not found in the block, move to the next block and repeat step 4 until the
element is found or the end of the list is reached.
The performance of indexed sequential search depends on the size of the blocks and the size of
the index.
A larger block size reduces the number of blocks and hence reduces the size of the index, but it
also increases the number of elements that need to be searched within each block.
A smaller block size increases the size of the index but reduces the number of elements that
need to be searched within each block.
Indexed sequential search can be an efficient search algorithm for large lists if the block size
and index size are chosen appropriately.
Example implementation of the indexed sequential search algorithm in Python:
def indexed_seq_search(lst, target):
    # Define the size of the index
    idx_size = int(len(lst) ** 0.5)
    # Create the index
    index = [0] * idx_size
    for i in range(idx_size):
        index[i] = i * idx_size
    # Find the block where the target may be located
    block_num = -1
    for i in range(idx_size - 1):
        if lst[index[i]] <= target < lst[index[i + 1]]:
            block_num = i
            break
    if block_num == -1:
        block_num = idx_size - 1
    # Search within the block using sequential search
    block_start = index[block_num]
    block_end = len(lst) - 1 if block_num == idx_size - 1 else index[block_num + 1] - 1
    for i in range(block_start, block_end + 1):
        if lst[i] == target:
            return i
    # Target not found
    return -1
In this implementation, the size of the index is determined by taking the square root of the length
of the list. The index is then created by setting each element to the starting index of each block.
The algorithm then finds the block where the target element may be located by comparing it to
the first element of each block in the index. If the target element is not found within the block,
the algorithm returns -1 to indicate that the target element is not in the list.
Sorting:
Sorting is the process of arranging items in a collection in a specific order.
The items can be anything - numbers, strings, or even more complex data structures like arrays,
lists, or trees. Sorting is an essential operation for many computer algorithms and applications.
There are two main types of sorting: Internal Sorting and External Sorting.
Internal Sorting:
Internal sorting algorithms are used to sort data that can be stored entirely in memory, without
requiring any additional disk space.
Internal sorting algorithms are used for smaller data sets that can be stored in memory.
These algorithms work on the principle of comparing and swapping individual elements until the
entire list is sorted.
Examples of internal sorting algorithms include Bubble Sort, Insertion Sort, Selection Sort,
Quick Sort, Merge Sort, Heap Sort, and Shell Sort.
External Sorting:
External sorting algorithms are used to sort data that is too large to fit entirely in memory, and
must be stored on disk.
External sorting algorithms are used for larger data sets that must be stored on disk.
These algorithms work by dividing the data into smaller chunks that can be sorted in memory,
and then merging these sorted chunks into a final sorted output.
Examples of external sorting algorithms include External Merge Sort and Polyphase Merge
Sort.
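The chunk-and-merge idea behind External Merge Sort can be sketched in memory. A real implementation would write each sorted chunk to a temporary file on disk; here, plain lists stand in for the chunk files, and heapq.merge performs the k-way merge:

```python
import heapq

def external_merge_sort(data, chunk_size):
    # Phase 1: split the data into chunks that "fit in memory" and sort each one
    chunks = [sorted(data[i:i + chunk_size])
              for i in range(0, len(data), chunk_size)]
    # Phase 2: k-way merge the sorted chunks into one sorted output
    return list(heapq.merge(*chunks))

print(external_merge_sort([9, 4, 7, 1, 8, 2, 6, 3, 5], chunk_size=3))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

heapq.merge keeps only one element per chunk in the heap at a time, which is what lets the merge phase run with memory proportional to the number of chunks rather than the total data size.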
General Sort Concepts:
1) Sort Order: Sort order refers to the order in which elements are sorted in a list. There
are two types of sort order:
● Ascending order: In ascending order, elements are sorted from smallest to largest.
● Descending order: In descending order, elements are sorted from largest to smallest.
2) Stability: Stability refers to the ability of a sorting algorithm to maintain the relative order
of equal elements in the sorted list.
If a sorting algorithm is stable, then the order of elements that have the same value will remain
the same in the sorted list as it was in the unsorted list.
If two elements in the list have the same value, a stable sorting algorithm will ensure that they
are ordered in the same way in the sorted list as they were in the original list.
For example, suppose we have a list of objects with two attributes: "name" and "age". We want
to sort the list by "age" in ascending order. If two objects have the same age, a stable sorting
algorithm will ensure that they are ordered by "name" as well. That means that if "Alice" and
"Bob" both have an age of 25, and “Alice” comes before "Bob" in the original list, then “Alice” will
also come before "Bob" in the sorted list.
Many common sorting algorithms, such as bubble sort, insertion sort, and merge sort, are
stable. However, some algorithms, such as quicksort, are not inherently stable, but can be
modified to be made stable.
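Python's built-in sorted() uses Timsort, which is stable, so the "Alice"/"Bob" behaviour described above can be demonstrated directly (the sample tuples are illustrative):

```python
people = [("Alice", 25), ("Carol", 22), ("Bob", 25)]
# Sort by age only; because the sort is stable, the two 25-year-olds
# keep their original relative order ("Alice" before "Bob").
by_age = sorted(people, key=lambda p: p[1])
print(by_age)  # [('Carol', 22), ('Alice', 25), ('Bob', 25)]
```

Sorting by a single key while preserving ties is exactly the property that lets multi-key sorts be built from successive stable single-key sorts.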
3) Efficiency: Efficiency refers to the speed of a sorting algorithm.
It is usually measured in terms of time complexity or space complexity.
Time complexity refers to the amount of time it takes to execute the algorithm, and space
complexity refers to the amount of memory space required by the algorithm.
4) Number of Passes: The number of passes refers to the number of times the entire list is
iterated over during the sorting process.
Each pass involves comparing and swapping the elements in the list to move them closer to
their correct sorted position.
Some sorting algorithms require multiple passes to fully sort the list, while others can sort the list
in a single pass.
Sorting algorithms with fewer passes tend to be more efficient than those with more passes.
The number of passes required for a sorting algorithm can also depend on the initial state of the
list. For example, a list that is already partially sorted may require fewer passes than a
completely unsorted list.
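That dependence on the initial state can be made visible with a bubble sort that stops early once a pass makes no swaps and reports how many passes it made. The helper name bubble_sort_passes is a hypothetical one chosen for this sketch:

```python
def bubble_sort_passes(arr):
    """Bubble sort that returns (sorted_list, number_of_passes_made)."""
    arr = list(arr)  # work on a copy
    passes = 0
    while True:
        passes += 1
        swapped = False
        for j in range(len(arr) - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # a clean pass means the list is sorted
            break
    return arr, passes

print(bubble_sort_passes([1, 2, 4, 3, 5]))  # nearly sorted: finishes quickly
print(bubble_sort_passes([5, 4, 3, 2, 1]))  # reversed: needs many passes
```

The nearly sorted input needs only two passes (one that fixes the single inversion and one clean confirmation pass), while the fully reversed input needs a pass per element.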
Comparison Based Sorting Methods-
Comparison-based sorting methods are the algorithms that compare the elements of the input
list with each other and reorder them accordingly.
These algorithms work on the principle of comparing and swapping individual elements
until the entire list is sorted.
Bubble sort:
Bubble sort is a simple comparison-based sorting algorithm.
The basic idea behind this algorithm is to repeatedly swap adjacent elements if they are in the
wrong order. It gets its name from the way smaller elements "bubble" to the top of the list as the
algorithm progresses.
The algorithm works as follows:
1. Start at the beginning of the list.
2. Compare the first two elements. If the first element is larger than the second element,
swap them.
3. Move to the next pair of elements (i.e., the second and third elements) and repeat step
2.
4. Continue this process until the end of the list is reached.
5. If any swaps were made during the previous pass, repeat steps 2-4.
6. If no swaps were made during the previous pass, the list is sorted and the algorithm
terminates.
Bubble sort is a relatively inefficient sorting algorithm, with a worst-case time complexity of
O(n^2), where n is the size of the list. However, it has the advantage of being simple to
understand and implement.
Implementation of Bubble Sort:
def bubble_sort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Last i elements are already sorted
        for j in range(n - i - 1):
            # Swap if the element found is greater than the next element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
This implementation takes an input array arr and iterates over it multiple times, swapping
adjacent elements if they are in the wrong order. It does this by comparing each element with
the next one and swapping them if the first one is greater than the second one. This process is
repeated for every element in the array, with the largest elements gradually "bubbling" to the top
of the array. The algorithm continues until no more swaps are necessary, indicating that the
array is fully sorted.
Insertion sort:
Insertion sort is a simple sorting algorithm that works by iteratively building a sorted sublist from
the elements of an unsorted list.
Insertion sort is a simple sorting algorithm that works by sorting a list of items one element at a
time, similar to how we sort playing cards in our hand.
The algorithm maintains a partially sorted list to the left of the current element and a remaining
unsorted list to the right of the current element. It picks the first element from the unsorted list
and inserts it in its correct position in the sorted list.
The algorithm works as follows:
1. Start with the second element in the list, as the first element is considered to be a
sorted list of size 1.
2. Compare the current element with the elements to its left in the sorted list.
3. If the current element is smaller than the element to its left, swap them.
4. Repeat step 3 until the current element is in its correct position in the sorted list.
5. Move to the next element in the unsorted list and repeat steps 2-4 until all elements in
the unsorted list have been sorted and added to the sorted list.
Insertion sort has a worst-case time complexity of O(n^2), but its best-case time complexity is
O(n), making it efficient for small lists or partially sorted lists.
It is also an in-place sorting algorithm, meaning that it does not require any additional memory
other than the original list.
Implementation of the insertion sort algorithm in Python:
def insertion_sort(arr):
    # Iterate over each element in the array starting from the second element
    for i in range(1, len(arr)):
        # Store the current element in a temporary variable
        key = arr[i]
        # Find the correct position for the current element in the sorted subarray
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr
In this implementation, we first iterate over each element in the array starting from the second
element. We store the current element in a temporary variable. We then find the correct position
for the current element in the sorted subarray. We do this by iterating over the sorted subarray
to the left of the current element and shifting each element one position to the right until we find
the correct position for the current element. Once we have found the correct position, we insert
the current element at that position and move on to the next element in the unsorted part of the
array.

Selection sort:
Selection sort is a simple sorting algorithm that works by selecting the minimum element from
an unsorted list in each iteration and placing it at the beginning of the list. It then repeats this
process for the remaining unsorted elements until the entire list is sorted.
The selection sort algorithm can be described in the following steps:
1. Set the first element of the list as the minimum value.
2. Compare the minimum value with the second element of the list. If the second element is
smaller than the minimum, set it as the new minimum.
3. Repeat step 2 for every element in the list, comparing each element with the minimum
and updating the minimum as necessary.
4. After iterating through the entire list, swap the minimum element with the first element of
the list.
5. Repeat steps 1-4 for the remaining unsorted elements until the entire list is sorted.
Selection sort has a worst-case time complexity of O(n^2) and is not suitable for large lists.
However, it has the advantage of being an in-place algorithm, meaning it does not require
additional memory beyond the original list.
Implementation of selection sort in Python:
def selection_sort(arr):
    n = len(arr)
    # Traverse through all array elements
    for i in range(n):
        # Find the minimum element in the remaining unsorted array
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap the found minimum element with the first element
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
Here, we start with the first element of the array and traverse through the entire array to find the
minimum element. Once we find the minimum element, we swap it with the first element. We
then repeat this process with the remaining unsorted part of the array (excluding the first
element which is already sorted) until the entire array is sorted in ascending order.
Quick Sort:
Quick Sort is a widely used comparison-based sorting algorithm that is known for its efficiency
and speed.
It works by selecting a pivot element from the list and partitioning the list into two sub-lists, one
with elements smaller than the pivot and the other with elements greater than the pivot.
The pivot element is then placed in its correct position in the sorted list. This process is repeated
recursively for each sub-list until the entire list is sorted.
Here are the detailed steps for implementing the Quick Sort algorithm:
1. Choose a pivot element from the list. This can be any element in the list, but a common
choice is the first element in the list.
2. Partition the list into two sub-lists: one with elements less than or equal to the pivot, and one
with elements greater than the pivot. This can be done by iterating through the list and
comparing each element to the pivot. If an element is less than or equal to the pivot, it is added
to the first sub-list. If it is greater than the pivot, it is added to the second sub-list.
3. Recursively apply the above steps to each sub-list until the entire list is sorted. This can be
done by selecting a new pivot element from each sub-list and partitioning it into two more
sub-lists.
4. Combine the sorted sub-lists into a single sorted list.
The worst-case time complexity of quicksort is O(n^2), which occurs when the pivot selection
leads to unbalanced partitions. However, on average, the time complexity is O(n log n), making
it a very efficient sorting algorithm. Additionally, quicksort has a space complexity of O(log n)
due to the use of recursion in the algorithm.
Here is an example implementation of the Quick Sort algorithm in Python:
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    # Choose the first element of the list as the pivot
    pivot = arr[0]
    left = []
    right = []
    for i in range(1, len(arr)):
        if arr[i] <= pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)
This implementation uses recursion to apply the Quick Sort algorithm to each sub-list. The base
case is when the sub-list has only one or zero elements, which is already sorted.
The pivot element is chosen as the first element in the list, and the sub-lists are partitioned
using a loop that compares each element to the pivot.
The sorted sub-lists are then combined with the pivot element to form the final sorted list.
Shell Sort:
Shell Sort is a sorting algorithm that uses the concept of Insertion Sort to sort elements in a list
or an array.
The algorithm sorts elements by comparing them with other elements that are not next to each
other. It is an extension of the Insertion Sort algorithm.
The Shell Sort algorithm works by first dividing the list into smaller sub-lists. It does this by
choosing a gap value, which is typically half the length of the list. Then, the algorithm compares
and sorts elements that are spaced gap values apart. It continues to sort elements using smaller
and smaller gap values until the entire list is sorted.
The basic steps of the Shell Sort algorithm are:
1. Start with a gap value, which is typically half the length of the list.
2. Divide the list into smaller sub-lists of size gap.
3. Sort the sub-lists using the Insertion Sort algorithm.
4. Repeat steps 2 and 3 with smaller gap values until the entire list is sorted.
The idea behind the Shell Sort algorithm is that by sorting elements that are spaced farther
apart first, it can reduce the amount of data movement needed to complete the final sort. The
algorithm has a time complexity of O(n^(3/2)), which is better than the O(n^2) time complexity of
the Insertion Sort algorithm.
Here is an implementation of the Shell Sort algorithm in Python:
def shellSort(arr):
    # Start with a large gap value
    gap = len(arr) // 2
    # Perform Shell Sort
    while gap > 0:
        for i in range(gap, len(arr)):
            temp = arr[i]
            j = i
            while j >= gap and arr[j - gap] > temp:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = temp
        gap //= 2
In this implementation, the algorithm starts with a gap value of half the length of the list. It then
divides the list into sub-lists of size gap and sorts them using the Insertion Sort algorithm. The
gap value is halved in each iteration until the entire list is sorted.
Non-Comparison Based Sorting
Radix Sort:
Radix Sort is a non-comparison-based sorting algorithm that sorts elements by examining each
digit of the elements in a specific order, such as from the least significant digit to the most
significant digit.
It is often used for sorting elements that have a fixed number of digits.
The Radix Sort algorithm can be performed in two ways: least significant digit (LSD) and most
significant digit (MSD). In LSD Radix Sort, the algorithm starts sorting from the least significant
digit of each element, while in MSD Radix Sort, it starts sorting from the most significant digit.
The Radix Sort algorithm works as follows:
1. Determine the maximum number of digits among all the elements in the list.
2. Starting from the least significant digit, sort the list using a stable sorting algorithm such as
Counting Sort.
3. Move to the next significant digit and repeat step 2 until all digits have been sorted.
The counting sort algorithm is used to sort each digit. It works by counting the number of
occurrences of each digit and using this information to determine the position of each element in
the sorted list.
The time complexity of Radix Sort is O(kn), where k is the maximum number of digits in the
elements and n is the number of elements in the list. Radix Sort is efficient for sorting elements
with a fixed number of digits, but it can be less efficient than comparison-based sorting
algorithms for elements with variable-length representations.
Here's an example implementation of Radix Sort in Python:
def radix_sort(arr):
    # Find the maximum number to know the number of digits
    max_num = max(arr)
    # Do counting sort for every digit
    exp = 1
    while max_num // exp > 0:
        counting_sort(arr, exp)
        exp *= 10
def counting_sort(arr, exp):
    n = len(arr)
    output = [0] * n
    count = [0] * 10
    # Store count of occurrences in count[]
    for i in range(n):
        index = (arr[i] // exp)
        count[index % 10] += 1
    # Change count[i] so that count[i] now contains the actual position of this digit in output[]
    for i in range(1, 10):
        count[i] += count[i - 1]
    # Build the output array
    i = n - 1
    while i >= 0:
        index = (arr[i] // exp)
        output[count[index % 10] - 1] = arr[i]
        count[index % 10] -= 1
        i -= 1
    # Copy the output array to arr[], so that arr[] now contains sorted numbers
    for i in range(n):
        arr[i] = output[i]
In this implementation, the radix_sort() function takes an unsorted list as input and sorts it using
the radix sort algorithm. It first finds the maximum number in the list to determine the maximum
number of digits for counting sort. It then loops through each digit, calling counting_sort() to
sort the list based on that digit.
counting_sort() is a helper function that performs the actual sorting based on the given digit. It
creates an array for output and a count array to store the occurrence count of each digit. It then
counts the occurrence of each digit and calculates the actual position of the digit in the output
array. Finally, it builds the output array and copies it back to the original array.
Overall, this implementation has a time complexity of O(kn), where k is the maximum number of
digits in the input list.
Counting Sort:
Counting Sort is a sorting algorithm that works by counting the number of occurrences of each
element in the list and then sorting them in ascending order based on their counts.
The algorithm works as follows:
1. Identify the maximum value in the list.
2. Create a count list with length equal to the maximum value + 1, initialized to all 0s.
3. Count the number of occurrences of each element in the input list and store it in the
corresponding index of the count list.
4. Modify the count list by adding the value of the previous element to the current element, to
get the number of elements that come before each element in the sorted list.
5. Create a result list with the same length as the input list.
6. Iterate through the input list backwards and use the count list to place each element in the
correct position in the result list.
7. Return the result list as the sorted list.
Counting Sort has a time complexity of O(n+k), where n is the length of the input list and k is the
maximum value in the list. It is an efficient algorithm when the range of values in the list is not
too large, but it requires extra space to store the count list.
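The seven steps above can be sketched directly in Python — a minimal version, assuming the inputs are non-negative integers:

```python
def counting_sort_simple(arr):
    if not arr:
        return []
    # Steps 1-2: count list sized to the maximum value + 1
    max_val = max(arr)
    count = [0] * (max_val + 1)
    # Step 3: count occurrences of each value
    for x in arr:
        count[x] += 1
    # Step 4: prefix sums give each value's position range in the output
    for i in range(1, len(count)):
        count[i] += count[i - 1]
    # Steps 5-6: walk the input backwards so equal keys keep their order (stability)
    result = [0] * len(arr)
    for x in reversed(arr):
        count[x] -= 1
        result[count[x]] = x
    # Step 7: return the result list as the sorted list
    return result

print(counting_sort_simple([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```

The backwards pass in steps 5-6 is what makes the sort stable, which is also why counting sort works as the per-digit subroutine inside Radix Sort.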
Bucket sort:
Bucket sort is a sorting algorithm that works by dividing an array into a set of buckets. Each
bucket is then sorted individually, either using another sorting algorithm or by recursively
applying the bucket sort algorithm. Finally, the sorted buckets are concatenated to form the
sorted array.
The bucket sort algorithm can be described in the following steps:
1. Create an array of empty buckets.
2. Iterate through the input array and place each element in the appropriate bucket based on
some criteria, such as its value or some other key.
3. Sort each bucket individually using a sorting algorithm of your choice, such as insertion sort
or quicksort.
4. Concatenate the sorted buckets to form the sorted array.
The efficiency of the bucket sort algorithm depends on how uniformly the input values are
distributed among the buckets. If the values are evenly distributed, the algorithm can achieve a
linear time complexity of O(n+k), where n is the number of elements in the input array and k is
the number of buckets. However, if the values are not evenly distributed, the algorithm may
require more passes or sorting algorithms, resulting in a higher time complexity.
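A minimal sketch of the four steps above, assuming (as in the common textbook setup) that the inputs are floats uniformly distributed in [0, 1):

```python
def bucket_sort(arr, num_buckets=10):
    # Step 1: create an array of empty buckets
    buckets = [[] for _ in range(num_buckets)]
    # Step 2: scatter each value into a bucket based on its magnitude
    for x in arr:
        buckets[min(int(x * num_buckets), num_buckets - 1)].append(x)
    # Steps 3-4: sort each bucket individually and concatenate the results
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47, 0.51]))
```

For uniformly distributed inputs each bucket stays small, so the per-bucket sorts are cheap; a skewed distribution would pile values into a few buckets and degrade toward the per-bucket sort's own complexity.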
The choice of a sorting algorithm depends on several factors such as the size of the input data,
the distribution of the data, the desired time complexity, and the available memory. Here are
some general guidelines for choosing a sorting algorithm:
1. For small inputs (less than 10-20 elements), simple algorithms such as insertion sort,
bubble sort, or selection sort may be sufficient. These algorithms have low overhead and are
easy to implement.
2. For medium-sized inputs (up to a few thousand elements), quicksort, mergesort, or
heapsort are good choices. These algorithms have an average-case time complexity of O(n log
n) and are efficient in practice.
3. For large inputs (millions or billions of elements), external sorting algorithms such as
external merge sort, radix sort, or bucket sort may be needed. These algorithms are
designed to work with data that is too large to fit into memory and use disk-based operations for
sorting.
4. If the input data is uniformly distributed and has a known range, radix sort or counting
sort can be very efficient as they have a time complexity of O(n+k), where k is the range of the
input data.
5. If the input data is partially sorted or nearly sorted, algorithms such as insertion sort,
bubble sort, or shell sort can be very efficient as they have a time complexity of O(n) in the
best case.
1. Bubble Sort:
- Time Complexity: O(n^2)
- Space Complexity: O(1)
- Stable Sorting: Yes
- Number of Passes: n-1
2. Insertion Sort:
- Time Complexity: O(n^2)
- Space Complexity: O(1)
- Stable Sorting: Yes
- Number of Passes: n-1
3. Selection Sort:
- Time Complexity: O(n^2)
- Space Complexity: O(1)
- Stable Sorting: No
- Number of Passes: n-1
4. Merge Sort:
- Time Complexity: O(n log n)
- Space Complexity: O(n)
~ Stable Sorting: Yes
- Number of Passes: log n
5. Quick Sort:
- Time Complexity: O(n log n) (average case), O(n^2) (worst case)
- Space Complexity: O(log n)
- Stable Sorting: No
- Number of Passes: log n
6. Heap Sort:
- Time Complexity: O(n log n)
- Space Complexity: O(1)
- Stable Sorting: No
- Number of Passes: log n
7. Radix Sort:
- Time Complexity: O(d(n+k)), where d is the number of digits in the largest number and k is
the range of values.
- Space Complexity: O(n+k)
- Stable Sorting: Yes
- Number of Passes: d
8. Counting Sort:
- Time Complexity: O(n+k), where k is the range of values.
- Space Complexity: O(n+k)
- Stable Sorting: Yes
- Number of Passes: 1
9. Bucket Sort:
- Time Complexity: O(n+k), where k is the number of buckets used.
- Space Complexity: O(n+k)
- Stable Sorting: Yes
- Number of Passes: 1
It is important to note that the time and space complexity of these algorithms may vary
depending on the implementation, and the above values represent the average or worst-case
scenario. Additionally, the choice of sorting algorithm depends on the size of the input, the range
of values, and the need for stability or efficiency.
Here is a comparison of the time complexities of some common sorting algorithms:
1. Bubble Sort:
- Best case: O(n)
- Average case: O(n^2)
- Worst case: O(n^2)
2. Insertion Sort:
- Best case: O(n)
- Average case: O(n^2)
- Worst case: O(n^2)
3. Selection Sort:
- Best case: O(n^2)
- Average case: O(n^2)
- Worst case: O(n^2)
4. Quick Sort:
- Best case: O(n log n)
- Average case: O(n log n)
- Worst case: O(n^2)
5. Shell Sort:
- Best case: O(n log n)
- Average case: depends on the gap sequence used
- Worst case: O(n^2)
6. Radix Sort:
- Best case: O(nk)
- Average case: O(nk)
- Worst case: O(nk)
7. Counting Sort:
- Best case: O(n+k)
- Average case: O(n+k)
- Worst case: O(n+k)
8. Bucket Sort:
- Best case: O(n+k)
- Average case: O(n+k)
- Worst case: O(n^2)
Note that n is the number of elements in the list to be sorted, and k is the range of values that
the elements can take.
It is important to note that the best, average, and worst-case time complexities for each
algorithm can vary depending on the specific implementation and the data being sorted.
Additionally, some algorithms may have better performance in certain scenarios or for certain
types of data. Therefore, it is important to carefully consider the specific requirements of a
sorting problem before selecting an algorithm to use.
Bubble Sort, Insertion Sort, and Selection Sort are simple sorting algorithms that have a time
complexity of O(n^2) and are suitable for small-sized arrays.
Quick Sort is a widely used sorting algorithm that has an average time complexity of O(n log n).
It uses the divide-and-conquer approach and is suitable for larger arrays.
Shell Sort is a variation of Insertion Sort with a time complexity of about O(n^(3/2)) for common
gap sequences, and is suitable for medium-sized arrays. It sorts elements at specific intervals
and gradually reduces the interval until the entire array is sorted.
Radix Sort, Counting Sort, and Bucket Sort are non-comparison-based sorting algorithms that
typically run in O(n+k) time. They are suitable for large-sized arrays and have
better performance than comparison-based algorithms when the range of data is known
beforehand.
Radix Sort sorts the elements digit by digit, grouping them by the value of each digit in turn, while Counting Sort counts the frequency of each element and places them in their correct position. Bucket Sort divides the array into smaller buckets and then sorts each bucket individually, combining them to form the sorted array.
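Counting Sort's frequency-counting idea can be sketched for non-negative integers, assuming the maximum value k is known in advance as the notes describe (function and parameter names are illustrative):

```python
def counting_sort(items, k):
    # counts[v] will hold how many times value v appears (0 <= v <= k).
    counts = [0] * (k + 1)
    for x in items:
        counts[x] += 1
    # Emit each value as many times as it was counted: O(n + k) overall.
    result = []
    for value, freq in enumerate(counts):
        result.extend([value] * freq)
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1], k=8))  # [1, 2, 2, 3, 3, 4, 8]
```

Note that no elements are ever compared with each other, which is why Counting Sort can beat the O(n log n) lower bound that applies to comparison-based sorting.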
Linked List
Static and Dynamic Memory Allocation:
Static and dynamic memory allocation are two different methods for reserving memory space in
computer memory.
Static Memory Allocation:
Static memory allocation is the process of reserving a fixed amount of memory at compile time. The memory is allocated on the stack and remains unchanged throughout the program's execution. This type of memory allocation is used when the amount of memory required is known beforehand and does not change during execution, so it is commonly used for fixed-size arrays or structures.
Static memory allocation is efficient, as the memory is already reserved and there is no need to dynamically allocate or deallocate it during program execution. However, it can limit the flexibility of the program, as the size of the allocated memory is fixed.
Dynamic Memory Allocation:
Dynamic memory allocation is the process of allocating memory space during runtime. In this method, the memory is allocated on the heap, which is a larger and more flexible memory storage space than the stack.
Programs use dynamic memory allocation when they do not know how much memory they need ahead of time, or when the amount of memory they need can change while the program is running. It lets programs use only as much memory as they need, which makes them more efficient.
Dynamic memory allocation is commonly used when the size of the data structures is unknown or can change during runtime. It allows programs to allocate and deallocate memory as needed, making it more efficient and flexible.
In C, dynamic memory allocation is implemented through functions like malloc(), calloc(), and realloc(), while static memory allocation is implemented through variables and data types that are allocated a fixed amount of memory at compile time. However, the use of dynamic memory allocation requires careful memory management, as memory leaks or errors can cause the program to crash or behave unexpectedly.
Linked List:
A linked list is a linear data structure that is used to store a collection of data elements.
In a linked list, each element, known as a node, contains a piece of data and a reference to the next node in the list.
A linked list is like a chain made up of individual links. Each link is called a node and has two parts: a piece of data, and a reference (or pointer) to the next node in the chain.
The first node is called the head, and the last node is called the tail.
Unlike arrays, linked lists do not require contiguous blocks of memory. Instead, each node can
be located anywhere in memory, and only requires enough space to store its data and
reference. This makes linked lists more flexible in terms of memory usage.
Linked lists are often used to implement other data structures, such as stacks, queues, and
hash tables. They are also useful for handling large amounts of data that need to be stored
dynamically, such as in some database applications. However, linked lists can be less efficient
than arrays for certain operations, such as random access to elements, due to their sequential
nature.
Types Of Linked List:
1. Singly Linked List:
- In a singly linked list, each node contains a data element and a reference (or pointer) to the next node in the sequence.
- It forms a unidirectional sequence where traversal is possible only in one direction, typically from the head to the tail.
- The last node in the list has a reference to 'None' to indicate the end of the list.
- A linear linked list is a common term used to refer to a singly linked list, emphasizing the unidirectional nature of the list.
2. Doubly Linked List:
- In a doubly linked list, each node contains a data element and references (or pointers) to
both the next and previous nodes in the sequence.
- It forms a bidirectional sequence where traversal is possible in both directions, forward and backward.
- Each node has two pointers: 'next' pointing to the next node and 'prev' pointing to the previous node.
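A minimal doubly linked node in Python, with the 'next' and 'prev' pointers described above (class and variable names are illustrative), might look like:

```python
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None   # reference to the previous node
        self.next = None   # reference to the next node

# Build a small list by hand: 1 <-> 2 <-> 3
head, mid, tail = DNode(1), DNode(2), DNode(3)
head.next, mid.prev = mid, head
mid.next, tail.prev = tail, mid

# Forward traversal from the head...
forward = []
node = head
while node:
    forward.append(node.data)
    node = node.next
print(forward)   # [1, 2, 3]

# ...and backward traversal from the tail.
backward = []
node = tail
while node:
    backward.append(node.data)
    node = node.prev
print(backward)  # [3, 2, 1]
```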
3. Circular Linked List:
- In a circular linked list, the last node in the list has a reference that points back to the first node, forming a loop.
- Traversal in a circular linked list can start from any node, and it can go around the list
indefinitely until a stopping condition is reached.
4. Doubly Circular Linked List:
- A doubly circular linked list combines the properties of a doubly linked list and a circular
linked list.
- Each node has a reference to both the next and previous nodes, and the last node in the list
has a reference that points back to the first node.
- This allows for bidirectional traversal and forming a loop where traversal can start from any
node.
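The loop structure of a circular (singly) linked list, and a stopping condition for its traversal, can be sketched as follows (the node values and the starting node are arbitrary examples):

```python
class CNode:
    def __init__(self, data):
        self.data = data
        self.next = None

# Build a circular list: 1 -> 2 -> 3 -> back to 1
a, b, c = CNode(1), CNode(2), CNode(3)
a.next, b.next, c.next = b, c, a   # the last node points back to the first

# Traversal can start from any node; stop after one full loop.
start = b
values = []
node = start
while True:
    values.append(node.data)
    node = node.next
    if node is start:              # back where we began: one full cycle done
        break
print(values)  # [2, 3, 1]
```

Without the `node is start` check, the while loop would run forever, which is exactly why the notes stress having a stopping condition.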
Each type of linked list has its own advantages and use cases.
Singly linked lists are simple and efficient for most operations, while doubly linked lists provide more flexibility with bidirectional traversal.
Circular linked lists are useful in scenarios where continuous traversal is required, and doubly
circular linked lists combine the features of both doubly linked lists and circular linked lists.
The choice of linked list type depends on the specific requirements of the application and the
operations to be performed on the list.
Creating Linked List In Python:
Here's a step-by-step guide to creating a linked list in Python from the beginning:
1. Define a class for the linked list node:
class Node:
def __init__(self, data):
self.data = data
self.next = None
2. Create a head variable to keep track of the first node in the linked list:
head = None
3. Create a function to insert a new node at the beginning of the linked list:
def insert_at_beginning(data):
global head
new_node = Node(data)
new_node.next = head
head = new_node
4. Repeat step 3 as needed to insert more nodes at the beginning.
5. Create a function to display the linked list:
def display():
global head
current = head
while current:
print(current.data, end=" ")
current = current.next
print()
6. Test the linked list by inserting nodes at the beginning and displaying the list:
insert_at_beginning(3)
insert_at_beginning(2)
insert_at_beginning(1)
display()
The output will be:
1 2 3
You can continue inserting nodes at the beginning or perform other operations on the linked list using similar methods.
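Building on the same pattern, an insert_at_end function (a hypothetical extension of the steps above, shown self-contained with the Node class repeated) could be sketched as:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

head = None

def insert_at_end(data):
    """Append a new node by walking to the current tail."""
    global head
    new_node = Node(data)
    if head is None:          # empty list: the new node becomes the head
        head = new_node
        return
    current = head
    while current.next:       # walk until the last node
        current = current.next
    current.next = new_node

def to_list():
    """Collect the node data into a Python list for easy checking."""
    items, current = [], head
    while current:
        items.append(current.data)
        current = current.next
    return items

insert_at_end(1)
insert_at_end(2)
insert_at_end(3)
print(to_list())  # [1, 2, 3]
```

Unlike insert_at_beginning, which is O(1), this version is O(n) per insertion because it must traverse the whole list; keeping a separate tail reference would make appending O(1) as well.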