DSA Notes

A stack is a linear data structure that operates on a Last In, First Out (LIFO) basis, supporting operations like push, pop, and peek. It can be implemented using arrays or linked lists, each with its own advantages and disadvantages regarding memory management and performance. A queue, on the other hand, follows a First In, First Out (FIFO) principle and is used in various applications such as CPU scheduling and task management, with implementations also available in array and linked list forms.


STACK

What is a Stack?

A stack is a linear data structure that follows the Last In, First Out (LIFO) principle. This means
the last element added to the stack will be the first one to be removed.

Basic Operations

1. Push: Add an element to the top of the stack.
2. Pop: Remove and return the top element of the stack.
3. Peek/Top: Return the top element without removing it.
4. isEmpty: Check if the stack is empty.
5. isFull: (In fixed-size implementations) Check if the stack is full.
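As a rough illustration, the operations above can be sketched in Python on top of a built-in list (the `Stack` class and method names here are illustrative, not part of these notes):

```python
# Minimal stack sketch using a Python list as the underlying storage.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):          # add an element to the top
        self._items.append(item)

    def pop(self):                 # remove and return the top element
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):                # return the top element without removing it
        if self.is_empty():
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(1); s.push(2); s.push(3)
print(s.pop())    # 3 (last in, first out)
print(s.peek())   # 2
```

Because a Python list grows dynamically, this sketch has no meaningful isFull; a fixed-capacity version is shown under the array-based implementation below.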

Applications of Stacks

1. Function Call Management: Stacks manage function calls in programming languages. Each function call is pushed onto the call stack, and when a function returns, it is popped off the stack.
2. Expression Evaluation: Stacks are used to evaluate expressions and convert infix
expressions to postfix or prefix expressions.
3. Undo Mechanisms: Most text editors use a stack to keep track of the history of
operations for the undo functionality.
4. Depth-First Search (DFS): DFS in graph algorithms can be implemented using a stack.
5. Balancing Symbols: Checking for balanced parentheses in an expression.
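Application 5 is the classic introductory example. A possible sketch (the function name and bracket set are illustrative): push every opening symbol, and on each closing symbol check that it matches the most recently opened one.

```python
# Checking balanced parentheses/brackets/braces with a stack.
def is_balanced(expr):
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in expr:
        if ch in '([{':
            stack.append(ch)                 # remember the opener
        elif ch in pairs:
            # a closer must match the most recent unmatched opener
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack                         # balanced only if nothing is left open

print(is_balanced("(a+[b*c]-{d/e})"))  # True
print(is_balanced("(a+b]"))            # False
```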

Array-Based Implementation of Stack

Characteristics:

 Fixed Size: The stack is created with a fixed size (capacity). If the stack is full (i.e., the
array is full), no more elements can be pushed onto the stack unless some are popped
first.
 Contiguous Memory: The elements are stored in contiguous memory locations, which
can lead to efficient use of CPU cache and quick access times.
 Index-Based Access: Elements are accessed using their index positions in the array,
making operations like push and pop straightforward with constant time complexity
O(1).

Advantages:

 Simplicity: Easy to implement and understand.
 Performance: Index-based access provides O(1) time complexity for push and pop operations due to direct access to array elements.
Disadvantages:

 Fixed Size: The stack size is fixed at the time of creation, which can lead to either wasted
memory if the stack is under-utilized or overflow errors if the stack exceeds its capacity.
 Resizing: If dynamic resizing is implemented (e.g., doubling the array size when it's full),
it introduces complexity and potential overhead due to copying elements to a new array.
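A fixed-capacity, array-style stack as described above might look like this in Python (the `ArrayStack` name and a list used as a fixed-size array are assumptions for the sketch):

```python
# Array-based stack with a fixed capacity and a top index.
class ArrayStack:
    def __init__(self, capacity):
        self._data = [None] * capacity   # contiguous, fixed-size storage
        self._top = -1                   # index of the current top element

    def is_empty(self):
        return self._top == -1

    def is_full(self):
        return self._top == len(self._data) - 1

    def push(self, item):
        if self.is_full():
            raise OverflowError("stack overflow")   # no room: capacity is fixed
        self._top += 1
        self._data[self._top] = item

    def pop(self):
        if self.is_empty():
            raise IndexError("stack underflow")
        item = self._data[self._top]
        self._data[self._top] = None     # clear the slot
        self._top -= 1
        return item
```

Both push and pop only adjust the top index, which is why they run in O(1) when no resizing is involved.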

Linked List-Based Implementation of Stack

Characteristics:

 Dynamic Size: The stack does not have a fixed size. It grows and shrinks as elements are
pushed and popped.
 Non-contiguous Memory: Elements (nodes) are stored in non-contiguous memory
locations, with each node containing a reference (pointer) to the next node.
 Node-Based Access: Operations involve manipulating node pointers, which may involve
more complex memory management compared to arrays.

Advantages:

 Dynamic Size: The stack can grow and shrink as needed, which eliminates the problem
of overflow and under-utilization seen in fixed-size arrays.
 Flexibility: Memory is allocated as needed, which can be more efficient in terms of
memory usage, especially if the maximum size of the stack is not known in advance.

Disadvantages:

 Pointer Overhead: Each element in the stack requires additional memory for storing
pointers, which can lead to higher memory usage compared to an array.
 Performance: Operations involve pointer dereferencing, which might be slower than
direct index access in arrays due to potential cache misses.
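The linked-list variant keeps the top of the stack at the head of the list, so both operations are pointer updates (the `Node` and `LinkedStack` names are illustrative):

```python
# Linked-list-based stack: the head of the list is the top of the stack.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next       # pointer to the node below

class LinkedStack:
    def __init__(self):
        self._top = None

    def push(self, value):     # O(1): the new node becomes the head
        self._top = Node(value, self._top)

    def pop(self):             # O(1): unlink the head
        if self._top is None:
            raise IndexError("pop from empty stack")
        value = self._top.value
        self._top = self._top.next
        return value

    def is_empty(self):
        return self._top is None
```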

Summary

 Array-Based Stack:
o Pros: Simple implementation, efficient access due to contiguous memory, O(1) push/pop operations.
o Cons: Fixed size, potential overflow/under-utilization, resizing overhead if
dynamic resizing is implemented.
 Linked List-Based Stack:
o Pros: Dynamic size, no overflow, memory allocated as needed.
o Cons: Higher memory usage due to pointers, potentially slower due to pointer
dereferencing, more complex memory management.
Differences b/w Array-Based and Linked-List-Based Stack

1. Memory Allocation:

 Array-Based Stack: Uses contiguous memory allocation, where elements are stored in
adjacent memory locations.
 Linked List-Based Stack: Utilizes dynamic memory allocation, where each element
(node) is allocated independently, and pointers connect them.

2. Dynamic Resizing:

 Array-Based Stack: Typically has a fixed size, unless dynamic resizing is implemented.
Resizing involves creating a new array and copying elements, which can be inefficient
(O(n) time complexity).
 Linked List-Based Stack: Can grow or shrink dynamically without requiring resizing
operations, as memory allocation is flexible.

3. Access Time:

 Array-Based Stack: Provides constant-time access O(1) for push and pop operations due
to direct index-based access.
 Linked List-Based Stack: Offers constant-time insertion and deletion O(1), but traversal
may be slower due to pointer dereferencing.

4. Memory Efficiency:

 Array-Based Stack: May be more memory-efficient, especially for large stacks, as there's no overhead for pointers.
 Linked List-Based Stack: Requires additional memory for storing pointers, potentially
leading to higher memory usage compared to arrays.

5. Insertion and Deletion:

 Array-Based Stack: Efficient for push and pop operations but less efficient for inserting
or deleting elements in the middle.
 Linked List-Based Stack: Efficient for insertion and deletion at any position due to the
nature of linked lists.

6. Flexibility:

 Array-Based Stack: Less flexible in terms of memory allocation and resizing. Requires
a fixed size or dynamic resizing mechanism.
 Linked List-Based Stack: More flexible, allowing for dynamic growth or shrinkage
without the need for resizing.

7. Traversal:

 Array-Based Stack: Traversal is straightforward and efficient due to contiguous memory
allocation, providing better cache performance.
 Linked List-Based Stack: Traversal involves following pointers, which may result in
slower access, especially for large stacks.

8. Implementation Complexity:

 Array-Based Stack: Generally simpler to implement and understand, with direct index-
based access for operations.
 Linked List-Based Stack: Involves more complex memory management due to pointers
but offers more flexibility in terms of dynamic resizing.

Function Call Management(Application of Stack)

1. Function Invocation:
o When a function is called, the program needs to keep track of where to return after
the function finishes executing. This is achieved by pushing the return address
onto the call stack.
o The call stack also stores the local variables of the function, the parameters
passed to the function, and other function-specific information such as the frame
pointer.
2. Stack Frame:
o Each function call creates a new stack frame (or activation record) on the call
stack. A stack frame contains:
 Return Address: The address to return to once the function execution is
complete.
 Local Variables: Variables that are declared within the function.
 Function Parameters: Values passed to the function.
 Saved Registers: Registers that need to be restored after the function call.
3. Function Execution:
o The function executes, utilizing the local variables and parameters stored in its
stack frame.
o If the function calls another function, a new stack frame is pushed onto the stack,
and the process repeats.
4. Function Return:
o When a function completes its execution, its stack frame is popped off the stack.
o The program counter is set to the return address stored in the popped stack frame,
and execution resumes from there.
o Local variables and parameters from the function are discarded as the stack frame
is removed.
QUEUE

What is a Queue?

A queue is a linear data structure that follows the First In, First Out (FIFO) principle. It's quite
similar to a queue you might encounter in real life, like waiting in line at a ticket counter or a
fast-food restaurant.

Here are the key characteristics of a queue:

1. FIFO Principle: The element that is inserted first is the one that is removed first. Think
of it as people entering a line; the person who arrives first is served first.

2. Two Main Operations: Queues typically support two primary operations:

o Enqueue: Adding an element to the back or rear of the queue.
o Dequeue: Removing an element from the front or head of the queue.

3. No Random Access: Unlike arrays, you can't access elements in a queue by their index.
You only have access to the front and back elements.

4. Limited Access: Generally, queues only allow access to the element at the front (head)
and the element at the back (rear). This prevents arbitrary insertion and deletion,
maintaining the FIFO order.

5. Usage: Queues are widely used in scenarios where tasks or data must be handled in the
order they were added. Examples include job scheduling in operating systems, printer
queues, handling requests in web servers, breadth-first search algorithms, etc.

6. Implementations: Queues can be implemented using arrays, linked lists, or other data
structures. Each implementation has its advantages and disadvantages in terms of
efficiency and ease of use.
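As a quick FIFO illustration, Python's standard-library `collections.deque` can serve as a queue (the job names are made up for the example):

```python
from collections import deque

q = deque()
q.append("job1")     # enqueue at the rear
q.append("job2")
q.append("job3")
print(q.popleft())   # dequeue from the front -> "job1" (first in, first out)
```

`append` and `popleft` are both O(1), which is why `deque` is preferred over a plain list (where removing from the front is O(n)).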

Circular Queue: A circular queue is a variation of the basic queue data structure where the last
element is connected back to the first element. This circular arrangement allows for efficient
use of memory and enables continuous usage of the queue without the need for resizing.

Differences b/w Linear and Circular Queue

1. Structure:

 Linear Queue: This type of queue is a simple, straightforward implementation where elements are inserted at the rear and removed from the front. It is typically represented as a linear array.
 Circular Queue: This type of queue is a more complex structure where the last position is connected back to the first position, forming a circle. It allows for efficient use of space by reusing empty spaces freed by dequeuing elements.

2. Overflow Handling:

 Linear Queue: In a linear queue, when the rear reaches the end
of the array, no more elements can be inserted even if there is
space available at the front. This situation is known as overflow.
 Circular Queue: A circular queue handles overflow more
efficiently. When the rear reaches the end of the array, it wraps
around to the front if there is empty space available, making
better use of available memory.

3. Space Utilization:

 Linear Queue: May suffer from inefficient space utilization because once the rear reaches the end, even if there are empty slots at the front due to dequeuing, they cannot be used.
 Circular Queue: Makes efficient use of the available space by wrapping around, allowing the queue to use all available slots in the array.

4. Pointers/Indices Management:

 Linear Queue: Typically maintains two pointers or indices (front and rear) that move in one direction. Once the rear reaches the end, the queue needs to be reset or resized.
 Circular Queue: Also maintains two pointers or indices (front and rear), but they wrap around to the beginning when they reach the end of the array, making index management more complex but efficient.

5. Performance:

 Linear Queue: Can lead to wasted space and potential inefficiency in scenarios where many enqueue and dequeue operations happen, causing the need for frequent resizing or data shifting.
 Circular Queue: Offers better performance in terms of space utilization and often avoids the need for resizing by effectively reusing space, leading to potentially lower overhead.
Advantages of using circular queue instead of linear queue

1. Avoids Wastage of Space: In a linear queue, once elements are dequeued from the front,
the space they occupied remains unused, potentially leading to inefficient use of memory.
A circular queue reuses this space by wrapping around, thus making full use of the
allocated memory.

2. No Need for Shifting Elements: In a linear queue implemented with an array, when the
rear reaches the end of the array and elements are dequeued from the front, you might
need to shift elements to the front to utilize free space, which is an expensive operation.
In a circular queue, there’s no need for shifting elements, as the rear simply wraps around
to the beginning of the array.

3. Circular Wrapping: In a circular queue, when the rear reaches the end of the array, it
wraps around to the beginning, making it easier to handle the queue being "full" without
extra operations to manage array indices.

4. Constant Time Complexity: Both enqueue and dequeue operations in a circular queue
are O(1) operations, as they simply involve updating the front and rear indices. In
contrast, a linear queue might require O(n) time to shift elements if implemented with an
array and if not managed properly.

5. Better Performance in Fixed Size Scenarios: For fixed-size data structures, circular
queues often perform better because they avoid the overhead of shifting elements and
make better use of available space.
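The wrap-around behaviour described above comes down to one modulo operation on the indices. A possible sketch (the `CircularQueue` name and the front-plus-size bookkeeping are choices made for this example; another common variant tracks front and rear indices directly):

```python
# Circular queue over a fixed-size array: indices wrap with the modulo operator.
class CircularQueue:
    def __init__(self, capacity):
        self._data = [None] * capacity
        self._front = 0          # index of the front element
        self._size = 0           # number of stored elements

    def is_empty(self):
        return self._size == 0

    def is_full(self):
        return self._size == len(self._data)

    def enqueue(self, item):
        if self.is_full():
            raise OverflowError("queue is full")
        rear = (self._front + self._size) % len(self._data)   # wrap around
        self._data[rear] = item
        self._size += 1

    def dequeue(self):
        if self.is_empty():
            raise IndexError("queue is empty")
        item = self._data[self._front]
        self._data[self._front] = None
        self._front = (self._front + 1) % len(self._data)     # wrap around
        self._size -= 1
        return item
```

After a dequeue, the freed slot at the old front is immediately reusable by the next enqueue, with no shifting of elements.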

Applications of Queue

1. CPU Scheduling: Operating systems often use queues to manage processes. For
example, in Round Robin scheduling, processes are placed in a queue and each process is
given a fixed time slot (time quantum).

2. Printer Spooling: Print jobs are placed in a queue and processed in the order they arrive.

3. Graph Traversal: In BFS (Breadth-First Search), a queue is used to explore nodes level by level. Nodes are enqueued as they are discovered and dequeued as they are processed, ensuring that each level is fully explored before moving to the next.

4. Task Scheduling: Real-time operating systems use priority queues to manage tasks that
must be executed according to their urgency.

Characteristics of Array-Based and Linked-List-Based Queue


Array-Based Queue

1. Structure:
o Uses a fixed-size array to store elements.
o Typically requires additional variables to keep track of the front and rear indices.
2. Size:
o Fixed size. The maximum number of elements is determined when the array is
created.
o May require resizing (costly operation) if the queue exceeds the initial capacity.
3. Memory Usage:
o Efficient memory usage when the queue is full.
o May waste memory if the queue is mostly empty.
4. Access Time:
o O(1) time complexity for enqueue and dequeue operations, assuming no resizing is
needed.
o Enqueue operation might be O(n) in the worst case if resizing the array is
necessary.
5. Implementation Complexity:
o Simple and straightforward to implement.
o Requires handling wrap-around when using a circular array to efficiently use
space.
6. Performance:
o Better cache performance due to contiguous memory allocation.
o Overhead of shifting elements or managing wrap-around logic in a circular buffer.

Linked-List-Based Queue

1. Structure:
o Uses nodes where each node contains the element and a reference to the next node.
o Requires pointers for the front and rear nodes.
2. Size:
o Dynamic size. Can grow or shrink as needed, limited only by system memory.
o No need for resizing operations.
3. Memory Usage:
o More flexible memory usage as it can expand or contract.
o Slightly higher memory overhead per element due to storing additional pointers.
4. Access Time:
o O(1) time complexity for both enqueue and dequeue operations.
o No resizing operation required.
5. Implementation Complexity:
o Slightly more complex to implement compared to an array-based queue.
o Requires careful handling of pointers to avoid memory leaks and ensure proper
linking of nodes.
6. Performance:
o May have lower cache performance due to non-contiguous memory allocation.
o No overhead for resizing or shifting elements.

Summary
Array-Based Queue:

 Pros: Simpler to implement, better cache performance, constant-time operations if no resizing is needed.
 Cons: Fixed size or requires resizing, potential wasted space if not fully utilized.

Linked-List-Based Queue:

 Pros: Dynamic size, constant-time operations, no resizing needed.
 Cons: More complex implementation, slightly higher memory overhead, potentially lower cache performance.

Differences b/w Array-based and Linked-List-Based Queue

 Structure: Array-based uses a fixed-size array; linked-list-based uses nodes with elements and pointers.
 Size: Array-based is fixed (requires resizing for expansion); linked-list-based is dynamic (grows and shrinks as needed).
 Memory Usage: Array-based is efficient when full, with potential wastage when not full; linked-list-based is flexible but has higher overhead per element due to pointers.
 Access Time: Array-based is O(1) for enqueue and dequeue (O(n) if resizing); linked-list-based is O(1) for both enqueue and dequeue.
 Implementation Complexity: Array-based is simple but requires handling wrap-around logic; linked-list-based is more complex and involves managing pointers.
 Performance: Array-based has better cache performance (contiguous memory); linked-list-based has potentially lower cache performance (non-contiguous memory).
 Memory Management: Array-based requires handling array resizing and potential wasted space; linked-list-based needs no resizing and wastes no space.
 Resizing: Array-based needs resizing when capacity is reached; linked-list-based simply grows with its elements.

Priority Queue: A priority queue is a type of data structure that, unlike a regular queue,
associates each element with a priority. Elements are dequeued not in the order they were
enqueued, but according to their priority. Higher priority elements are dequeued before lower
priority ones.

Applications of Priority Queue


1. CPU Scheduling: Operating systems use priority queues to manage processes. Processes
with higher priority are scheduled for execution before those with lower priority.

2. Task Scheduling: In multi-threaded applications, tasks can be scheduled based on priority.

3. Interrupt Handling: Priority queues are used to manage interrupt requests, with higher
priority interrupts being handled before lower priority ones.

4. Real-Time Scheduling: Tasks are scheduled based on deadlines and priorities to ensure
timely execution in real-time applications.
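Python's standard-library `heapq` module gives a simple priority queue: pushing `(priority, task)` pairs keeps the smallest priority at the front (the task names here are invented for the example; `heapq` is a min-heap, so lower numbers mean higher priority).

```python
import heapq

tasks = []
heapq.heappush(tasks, (3, "low-priority logging"))
heapq.heappush(tasks, (1, "handle interrupt"))
heapq.heappush(tasks, (2, "schedule process"))

# Elements come out by priority, not insertion order.
while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)
# 1 handle interrupt
# 2 schedule process
# 3 low-priority logging
```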

Deque: A deque, also known as a "double-ended queue," is a data structure that allows insertion and deletion of elements from both ends: the front and the rear. It combines the features of both stacks and queues, providing more flexibility for different operations.

Applications of Deque

1. Palindrome Checking: A deque can be used to check if a string is a palindrome by comparing characters from the front and rear ends.

2. Job Scheduling: In some scheduling scenarios, tasks may need to be added or removed from both ends, and a deque provides an efficient way to manage this.

Key Operations of a Deque

 Insert at Front: Add an element to the front of the deque.
 Insert at Rear: Add an element to the rear of the deque.
 Delete from Front: Remove an element from the front of the deque.
 Delete from Rear: Remove an element from the rear of the deque.
 Get Front: Access the front element without removing it.
 Get Rear: Access the rear element without removing it.
 Is Empty: Check if the deque is empty.
 Is Full: Check if the deque is full (in fixed-size implementations).
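The palindrome application maps directly onto these operations: delete from the front and rear and compare, until one or zero characters remain. A sketch using Python's `collections.deque` (the function name is illustrative):

```python
from collections import deque

def is_palindrome(s):
    chars = deque(s)
    while len(chars) > 1:
        # compare and remove one character from each end
        if chars.popleft() != chars.pop():
            return False
    return True

print(is_palindrome("level"))   # True
print(is_palindrome("stack"))   # False
```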

Key Differences b/w Stack and Queue

 Principle: A stack is LIFO (Last In, First Out); a queue is FIFO (First In, First Out).
 Primary Operations: A stack supports push, pop, and peek; a queue supports enqueue, dequeue, and peek.
 Access Order: In a stack, the last element added is the first removed; in a queue, the first element added is the first removed.
 Typical Use Cases: Stacks are used for function call management, expression evaluation, backtracking algorithms, and DFS traversal; queues are used for task scheduling, BFS traversal, and the producer-consumer problem.

Linked List

What is Linked List?

A linked list is a fundamental data structure that consists of a sequence of elements, each element containing a reference (or link) to the next element in the sequence. Unlike arrays, linked lists do not store elements in contiguous memory locations, allowing for efficient insertions and deletions.

Types of Linked List

1. Singly Linked List:

 Each node contains data and a reference to the next node.
 Traversal is unidirectional, from the head (first node) to the tail (last node).

2. Doubly Linked List:

 Each node contains data, a reference to the next node, and a reference to the previous
node.
 Allows bidirectional traversal.

3. Circular Linked List:

 The last node points to the first node, forming a circular structure.
 Can be singly or doubly linked.

A circular linked list is a type of linked list in which the last node points back to the first
node, creating a circular structure. This unique arrangement offers several advantages:

1. Efficient Circular Traversal:

 In a circular linked list, it is easy to traverse the entire list from any node, as
you can keep moving to the next node and eventually return to the starting
node. This makes it useful for applications where you need to cycle through
the elements repeatedly, such as in round-robin scheduling or buffering.

2. Consistent Handling of Lists with No Beginning or End:

 Circular linked lists are well-suited for situations where there isn't a clear
start or end point. This is particularly useful in implementing data structures
like buffers, where the data may continuously wrap around.
3. Simplified Insertion and Deletion:

 Inserting or deleting a node can be easier compared to other linked list structures because you don't need to handle special cases for inserting at the end of the list or deleting the last node. All nodes are treated equally, and the circular nature ensures there is always a node before and after any given node.

4. Use in Advanced Data Structures:

 Circular linked lists form the basis for more complex data structures such as
circular queues (also known as ring buffers), which are widely used in
scenarios requiring fixed-size buffers with efficient data management.

5. Memory Utilization:

 Circular linked lists can be more memory-efficient in some scenarios where linked lists need to be dynamically resized. They allow easy reuse of nodes and can avoid some of the overhead associated with more complex list management.

Applications of Circular Linked Lists

1. Round-Robin Scheduling:

 In operating systems, round-robin scheduling is a common algorithm for managing processes. A circular linked list allows the scheduler to cycle through processes repeatedly in a fair manner.

2. Circular Buffers:

 Circular buffers use circular linked lists to manage data in a fixed-size buffer
that can overwrite old data when new data is added, making them useful for
streaming data applications like multimedia processing.

3. Implementing Free Lists in Memory Allocators:

 Memory allocators can use circular linked lists to manage free memory
blocks, ensuring efficient allocation and deallocation without fragmentation.

4. Multiplayer Board Games:

 For games where players take turns in a circular fashion, a circular linked list
can easily manage the sequence of turns.
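The round-robin and turn-taking applications above rely on one property: following next pointers eventually wraps back to the start. A small sketch (the `make_circular` helper and player names are invented for the example):

```python
# Circular singly linked list: the last node's next pointer is the head.
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def make_circular(values):
    head = Node(values[0])
    current = head
    for v in values[1:]:
        current.next = Node(v)
        current = current.next
    current.next = head        # close the circle
    return head

# Cycle through players round-robin style: traversal never runs off the end.
turn = make_circular(["Alice", "Bob", "Carol"])
for _ in range(5):
    print(turn.value)
    turn = turn.next
```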
Basic Operations on Linked Lists

 Insertion:
o At the beginning
o At the end
o At a specific position
 Deletion:
o From the beginning
o From the end
o From a specific position
 Traversal:
o Visiting each node to perform operations such as searching or displaying the list.
 Searching:
o Finding a node with a specific value.
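These basic operations can be sketched for a singly linked list as follows (the class and method names are illustrative; only insertion at the beginning, traversal, and searching are shown):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, value):   # insertion at the beginning: O(1)
        node = Node(value)
        node.next = self.head
        self.head = node

    def search(self, target):        # searching: O(n) linear scan
        current = self.head
        while current:
            if current.value == target:
                return True
            current = current.next
        return False

    def to_list(self):               # traversal: visit each node in order
        out, current = [], self.head
        while current:
            out.append(current.value)
            current = current.next
        return out
```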

Applications of Linked Lists

1. Dynamic Memory Allocation:
o Used in managing free memory in systems where blocks of memory are allocated and deallocated frequently.
2. Implementing Other Data Structures:
o Used to implement stacks, queues, graphs, and hash tables.
3. Real-Time Systems:
o Used in systems that require predictable timing, as linked lists provide consistent
performance for insertion and deletion.
4. File Systems:
o Used to maintain directory structures and manage free disk space.
5. Music and Video Players:
o Used to maintain playlists where elements can be added or removed dynamically.

Differences between Singly and Doubly Linked List

 Structure: In a singly linked list, each node contains data and a reference (link) to the next node; in a doubly linked list, each node contains data, a link to the next node, and a link to the previous node.
 Memory Usage: A singly linked list requires less memory per node, as it only stores a single link; a doubly linked list requires more memory per node because it stores two links, one to the next node and one to the previous node.
 Insertion/Deletion (at ends): A singly linked list allows efficient insertion/deletion at the beginning but is less efficient at the end due to traversal; a doubly linked list allows efficient insertion/deletion at both the beginning and the end.
 Traversal: Singly linked list traversal is unidirectional, typically from the head to the tail; doubly linked list traversal can be bidirectional, from head to tail or from tail to head.
 Reverse Traversal: Not possible in a singly linked list without additional data structures or modifications to the list; possible in a doubly linked list, as each node has a link to the previous node.
 Complexity of Operations: A singly linked list has a simpler implementation with fewer pointers to manage; a doubly linked list is more complex due to managing both next and previous pointers.
 Use Cases: Singly linked lists suit simple lists or applications where traversal is primarily forward; doubly linked lists suit applications requiring bidirectional traversal, such as navigating browser history or implementing a deque.
 Operation Complexity: In a singly linked list, insertion at the beginning is O(1) and at the end O(n), and deletion at the beginning is O(1) and at the end O(n); in a doubly linked list, insertion and deletion at both ends are O(1).

Advantages of doubly linked list over singly linked list

1. Bidirectional Traversal: Doubly linked lists allow traversal in both forward and backward
directions. This makes operations like reverse traversal or accessing the previous node more
efficient compared to singly linked lists, where reverse traversal requires traversing the list from
the beginning.

2. Easier Deletion: In a doubly linked list, deleting a given node only requires updating the references of its previous and next nodes, whereas in a singly linked list you must traverse the list to find the node before the one being deleted, which is less efficient.

3. Insertion/Deletion at Both Ends: Doubly linked lists facilitate efficient insertion and
deletion at both the beginning and end of the list. In a singly linked list, inserting at the end or
deleting the last element requires traversing the entire list to reach the end.

4. Previous Node Access: In a doubly linked list, each node stores a reference to its previous
node, allowing direct access to the previous node without traversing the list. This can be
beneficial in various scenarios, such as implementing undo operations or navigating through
data structures efficiently.
Insertion Sort

Definition: Insertion Sort is a simple sorting algorithm that builds the final sorted array (or list)
one element at a time. It iterates through the input array, removing one element each time and
then finding the position where it belongs within the sorted part of the array.

Steps to sort an array—

1. Start with the second element (index 1) and consider it as the key.

2. Compare the key with the elements before it, moving larger elements one position to the
right to make space for the key.

3. Repeat step 2 until you find the correct position for the key, where all sorted elements to its left are smaller than or equal to the key and all sorted elements to its right are larger.

4. Move to the next element (index 2) and repeat steps 2 and 3 until the entire array is
sorted.

Complexity of Insertion Sort: The time complexity of the Insertion Sort algorithm is O(n²) in the worst case, where n is the number of elements in the array, but it performs efficiently for small datasets or nearly sorted arrays.
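The steps above can be sketched directly in Python (this is the standard in-place formulation; the function name is illustrative):

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                     # element to place into the sorted prefix
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements one position right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # drop the key into its correct slot
    return arr

print(insertion_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```

On an already-sorted input the inner while loop never runs, which is why the best case is O(n).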

Recursion

Recursion is a fundamental concept where a function calls itself in order to solve a problem.
This approach is particularly useful for solving problems that can be broken down into smaller,
similar subproblems. A recursive function typically has two main components:

1. Base Case: The condition under which the function stops calling itself. This prevents
infinite recursion and typically handles the simplest, smallest instance of the problem.
2. Recursive Case: The part of the function where it calls itself with a modified argument,
gradually working towards the base case.
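The standard first example of both components is the factorial function: `n <= 1` is the base case and `n * factorial(n - 1)` is the recursive case.

```python
def factorial(n):
    if n <= 1:                        # base case: stops the recursion
        return 1
    return n * factorial(n - 1)       # recursive case: smaller subproblem

print(factorial(5))  # 120
```

Each pending call sits in its own frame on the call stack (see Function Call Management above), which is why very deep recursion can exhaust stack space.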
Tree

A tree is a widely used abstract data type that simulates a hierarchical tree structure, with a set
of connected nodes. Here is the standard definition and some key properties of a tree:

Definition:

A tree is a collection of nodes connected by edges that satisfies the following properties:

1. Rooted Structure: It has a root node, which is the topmost node in the hierarchy.
2. Parent-Child Relationship: Each node (except the root) has exactly one parent, and zero
or more children.
3. Acyclic: There are no cycles in a tree. In other words, there is exactly one path between
any two nodes.
4. Connected: All nodes are connected by edges, ensuring that there is a path between any
two nodes.

Types of Trees

1. Binary Tree: A binary tree is a type of data structure in which each node has at most two children, referred to as the left child and the right child. This constraint makes binary trees a simpler and more structured form of tree data structure.

2. Binary Search Tree (BST): A binary tree where for each node, the left subtree contains
nodes with keys less than the node’s key, and the right subtree contains nodes with keys
greater than the node’s key.

Complete binary tree: A complete binary tree is a binary tree in which all levels, except
possibly the last, are completely filled, and all nodes in the last level are as far left as possible.

Properties

1. Nodes: For a complete binary tree with height h:
o The number of nodes is at least 2^h and at most 2^(h+1) − 1.

Full Binary Tree: A full binary tree (also known as a proper or strict binary tree) is a binary
tree in which every node has either 0 or 2 children. This means that no node in the tree has only
one child.

Properties

1. Nodes: For a perfect binary tree with i levels (i = 1, 2, 3, …), the total number of nodes is 2^i − 1, and the number of nodes at level i is 2^(i−1).
2. Leaves: For a full binary tree with i internal nodes, the number of leaves is i + 1.

3. Perfect Binary Tree Subtype: A perfect binary tree is a special case of a full binary tree
where all leaf nodes are at the same level, and all internal nodes have two children.

Binary Search Tree

A Binary Search Tree (BST) is a specific type of binary tree where each node has at most two
children and the tree maintains a specific order property that facilitates efficient search,
insertion, and deletion operations.

BST Property: For any given node:

 All keys in the left subtree are less than the node's key.
 All keys in the right subtree are greater than the node's key.
 Both the left and right subtrees must also be binary search trees.
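The BST property makes insertion and search simple recursive descents: go left for smaller keys, right for larger ones (the `BSTNode` name and function signatures are choices made for this sketch):

```python
class BSTNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:                          # empty spot found: place the key here
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)    # smaller keys go left
    elif key > root.key:
        root.right = insert(root.right, key)  # larger keys go right
    return root                               # duplicates are ignored

def search(root, key):
    if root is None:
        return False
    if key == root.key:
        return True
    return search(root.left, key) if key < root.key else search(root.right, key)

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6))   # True
print(search(root, 7))   # False
```

Both operations take time proportional to the tree's height, which motivates the balanced search trees discussed next.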

Balanced Search Tree: A balanced search tree is a type of binary search tree (BST) that
maintains a low height to ensure efficient search, insertion, and deletion operations. The
primary goal of a balanced search tree is to keep the tree height logarithmic relative to the
number of nodes, ensuring that these operations can be performed in O(log n) time on average.

Heaps: A heap is a specialized tree-based data structure that satisfies the heap property. Heaps
are used in various applications, such as implementing priority queues, heapsort algorithms, and
for managing resources in memory management. There are two main types of heaps: max-heaps
and min-heaps.

Types of Heap-

1. Max-Heap: In a max-heap, for any given node i, the value of i is greater than or equal to
the values of its children. The largest element is at the root.

2. Min-Heap: In a min-heap, for any given node i, the value of i is less than or equal to the
values of its children. The smallest element is at the root.

Binary Heap: A binary heap is a complete binary tree, which means all levels are fully filled
except possibly the last level, which is filled from left to right.

Applications of Heaps
1. Priority Queues: Heaps are often used to implement priority queues, where the highest
(or lowest) priority element is always at the root and can be accessed quickly.

2. Heap Sort: An efficient comparison-based sorting algorithm that uses a heap to sort
elements.
3. Graph Algorithms: Heaps are used in algorithms like Dijkstra's shortest path algorithm
and Prim's minimum spanning tree algorithm for efficiently selecting the next node to
process.

4. Memory Management: Heaps are used in memory management systems for dynamic
memory allocation (e.g., the heap in the context of the heap and stack).
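Application 2 (heap sort) can be demonstrated with the standard-library `heapq` module, which maintains a binary min-heap inside a Python list (the `heap_sort` wrapper is illustrative; a textbook heapsort instead sifts elements in place):

```python
import heapq

def heap_sort(values):
    heap = list(values)
    heapq.heapify(heap)       # build a min-heap in O(n)
    # repeatedly extract the minimum: n pops at O(log n) each
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([9, 4, 7, 1, 2]))   # [1, 2, 4, 7, 9]
```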
