Top 100 DSA Interview Questions & Answers
4. What is a stack?
A stack is an abstract data type that follows the Last-In-First-Out (LIFO) principle. It
supports two main operations: push (inserting an element onto the top of the stack) and
pop (removing the topmost element from the stack). Stacks are often used for
managing function calls, expression evaluation, and undo mechanisms.
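The push/pop behavior can be sketched with a Python list, which is one common (though not the only) way to back a stack; `append` and `pop` at the end are amortized O(1):

```python
# Minimal stack sketch using a Python list.
stack = []
stack.append(1)     # push
stack.append(2)     # push
stack.append(3)     # push
top = stack[-1]     # peek at the topmost element -> 3
item = stack.pop()  # pop removes the last element pushed -> 3 (LIFO)
```

After the pop, the stack holds `[1, 2]`, illustrating that the most recently pushed element leaves first.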
5. What is a queue?
A queue is an abstract data type that follows the First-In-First-Out (FIFO) principle. It
supports two primary operations: enqueue (adding an element to the end of the queue)
and dequeue (removing the element at the front of the queue). Queues are commonly
used in scenarios where data needs to be processed in the order it arrives, such as
scheduling tasks or handling requests.
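A minimal FIFO sketch, assuming Python's standard-library `collections.deque` as the backing structure (it gives O(1) operations at both ends, unlike `list.pop(0)` which is O(n)):

```python
from collections import deque

# Minimal FIFO queue sketch backed by a deque.
queue = deque()
queue.append("a")        # enqueue at the rear
queue.append("b")
queue.append("c")
first = queue.popleft()  # dequeue from the front -> "a" (FIFO)
```

The first element enqueued (`"a"`) is the first one dequeued, matching the FIFO principle.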
6. What is a tree?
A tree is a hierarchical data structure consisting of nodes connected by edges. It has a root
node at the top and child nodes below it, forming a branching structure. Trees are used to
represent hierarchical relationships, such as file systems, organization structures,
and decision-making processes.
7. What is a graph?
A graph is a non-linear data structure consisting of nodes (vertices) and edges that
connect them. It is a powerful tool for representing relationships between objects. Graphs
can be directed (edges have a specific direction) or undirected (edges have no
direction). They are widely used in network analysis, social networks, and pathfinding
algorithms.
14. What is the time complexity of inserting an element into a linked list?
Inserting an element into a linked list involves updating the references of the
adjacent nodes. Insertion at the beginning is constant time, O(1), since only the head
pointer and the new node's reference change; insertion at the end is O(1) only if a tail
pointer is maintained, and O(n) otherwise. Inserting in the middle requires traversing
the list to the desired position, resulting in a time complexity of O(n), where n is the
number of elements in the linked list.
15. What is the time complexity of searching for an element in a linked list?
The time complexity of searching for an element in a linked list is O(n), where n is the
number of elements in the linked list. Since linked lists do not provide random access, we
need to traverse the list from the beginning until we find the desired element or reach the
end. This linear traversal makes the search time proportional to the size of the linked list.
20. What is the time complexity of inserting an element into a hash table?
The time complexity of inserting an element into a hash table is typically O(1), assuming
a well-designed hash function and an evenly distributed hash table. The hash function
calculates the index where the element will be stored, and the element is inserted at that
position. In the best case, insertion can be constant time. However, in the worst case,
when collisions occur and chaining is used to resolve them, the time complexity can be
O(n), where n is the number of elements in the hash table.
21. What is the time complexity of searching for an element in a hash table?
The time complexity of searching for an element in a hash table is typically O(1),
assuming a well-designed hash function and an evenly distributed hash table. The hash
function calculates the index of the element, and a lookup is performed at that position.
In the best case, the element is found immediately. However, in the worst case, when
collisions occur and chaining is used, the time complexity can be O(n), where n is the
number of elements in the hash table.
22. What is a trie data structure?
A trie, also known as a prefix tree, is a tree-based data structure used to efficiently store
and search for strings. Each node in the trie represents a common prefix of multiple
strings, and the edges represent individual characters. Tries are particularly useful for
tasks such as autocomplete, spell checking, and IP routing.
32. What is the time complexity of removing an element from a dynamic array?
The time complexity of removing an element from a dynamic array depends on the
position of the element. If the element is removed from the end of the array, the time
complexity is constant, O(1), as it only requires updating the array's size. However, if the
element is removed from the middle, all subsequent elements need to be shifted,
resulting in a time complexity of O(n), where n is the number of elements in the array.
34. What is the time complexity of inserting an element into a red-black tree?
The time complexity of inserting an element into a red-black tree is O(log n), where n is
the number of nodes in the tree. The balancing operations performed during insertion
take logarithmic time because the tree height remains balanced, thanks to the red-
black tree properties. The self-balancing nature ensures that the worst-case height of
the tree remains proportional to log n.
35. What is the time complexity of searching for an element in a red-black tree?
The time complexity of searching for an element in a red-black tree is O(log n), where n
is the number of nodes in the tree. Similar to other balanced binary search trees, the
height of the red-black tree remains balanced due to its properties. As a result, the
search operation efficiently narrows down the search space, leading to a logarithmic
time complexity.
40. What is the difference between a priority queue and a regular queue?
The main difference between a priority queue and a regular queue lies in the ordering of
elements. In a regular queue, elements are stored and retrieved in a First-In-First-Out
(FIFO) order. However, in a priority queue, elements are associated with priorities and
retrieved based on the priority order. The element with the highest (or lowest) priority is
dequeued first.
41. What is the time complexity of inserting an element into a priority queue
implemented with a binary heap?
The time complexity of inserting an element into a priority queue implemented with a
binary heap is O(log n), where n is the number of elements in the heap. During insertion,
the element is appended to the end of the heap, and then it "bubbles up" by swapping
with its parent until the heap property is restored. The maximum number of swaps
required is proportional to the height of the heap, which is logarithmic.
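The "bubble up" step can be sketched as follows for a min-heap stored in a plain Python list, where the parent of index i sits at index (i - 1) // 2:

```python
# Min-heap insertion sketch: append at the end, then "bubble up" by swapping
# with the parent while the heap property is violated.
def heap_push(heap, item):
    heap.append(item)
    i = len(heap) - 1
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break  # heap property restored
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent  # continue bubbling up from the parent's position

h = []
for x in [5, 3, 8, 1]:
    heap_push(h, x)
# The minimum ends up at the root, h[0].
```

Each iteration of the loop moves the new element one level up, so the number of swaps is bounded by the heap's height, which is logarithmic in its size.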
44. What is the time complexity of sorting elements using heap sort?
The time complexity of sorting elements using heap sort is O(n log n), where n is the
number of elements in the input array. Heap sort involves building a binary heap from
the array (O(n)), repeatedly removing the maximum element from the heap (O(log n))
and placing it in the sorted portion of the array. The overall time complexity is dominated
by the O(log n) removal operation, performed n times.
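The two phases can be sketched with Python's standard-library `heapq` module: `heapify` builds the heap in O(n), then each of the n `heappop` calls costs O(log n):

```python
import heapq

# Heap sort sketch: build a min-heap, then pop the minimum n times.
def heap_sort(items):
    heap = list(items)
    heapq.heapify(heap)  # O(n) heap construction
    # n pops at O(log n) each dominate, giving O(n log n) overall.
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

Note this sketch uses a min-heap and extra space for clarity; the classic in-place variant uses a max-heap and swaps the root to the end of the array instead.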
46. What is the difference between BFS and DFS graph traversal algorithms?
The main difference between breadth-first search (BFS) and depth-first search (DFS) lies
in the order in which they explore nodes in a graph. BFS visits all the neighbors of a node
before moving to the next level, resembling a wave expanding from the starting point.
DFS explores as far as possible along each branch before backtracking, going deeper
into the graph. As a result, BFS finds the shortest path in an unweighted graph, while
DFS explores paths deeply.
50. What is the time complexity of topological sort in a directed acyclic graph?
The time complexity of topological sort in a directed acyclic graph (DAG) is O(V + E),
where V is the number of vertices (nodes) and E is the number of edges in the graph. The
algorithm performs a depth-first search (DFS) with some modifications, resulting in a
linear time complexity.
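The DFS-based approach can be sketched as follows: record each vertex in post-order (after all of its descendants) and reverse the result, touching every vertex and edge once:

```python
# DFS-based topological sort sketch for a DAG given as an adjacency dict.
def topological_sort(graph):
    visited, order = set(), []

    def dfs(u):
        visited.add(u)
        for v in graph.get(u, []):
            if v not in visited:
                dfs(v)
        order.append(u)  # post-order: appended after all descendants

    for u in graph:
        if u not in visited:
            dfs(u)
    return order[::-1]  # reverse post-order is a valid topological order

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```

Every vertex is visited once and every edge examined once, which is where the O(V + E) bound comes from; the graph must be acyclic for the result to be meaningful.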
51. What is a linked list?
A linked list is a linear data structure consisting of nodes, where each node contains a
value and a reference (or pointer) to the next node in the sequence. Linked lists allow for
efficient insertion and deletion at any position, but accessing elements requires
traversing the list from the beginning.
52. What is the time complexity of inserting an element at the beginning of a linked list?
The time complexity of inserting an element at the beginning of a linked list is O(1). Since
the new element becomes the head of the list, it simply requires updating the head
pointer to point to the new node.
53. What is the time complexity of inserting an element at the end of a linked list?
The time complexity of inserting an element at the end of a linked list is O(n), where n is
the number of nodes in the list. To insert at the end, we need to traverse the entire list to
reach the last node and then update its reference to point to the new node.
54. What is the time complexity of searching for an element in a linked list?
The time complexity of searching for an element in a linked list is O(n), where n is the
number of nodes in the list. In the worst case, we may need to traverse the entire list to
find the desired element.
55. What is the time complexity of removing an element from a linked list?
The time complexity of removing an element from a linked list depends on the position of
the element. If the element is at the beginning, the removal operation can be done in
O(1) time by updating the head pointer. If the element is in the middle or at the end, it
requires traversing the list to find the element (O(n)) and updating the references
accordingly.
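A minimal singly linked list sketch illustrating these complexities: O(1) insertion at the head, O(n) insertion at the tail (no tail pointer is kept here), and O(n) search:

```python
# Minimal singly linked list sketch.
class Node:
    def __init__(self, value, next=None):
        self.value, self.next = value, next

class LinkedList:
    def __init__(self):
        self.head = None

    def insert_front(self, value):  # O(1): only the head pointer changes
        self.head = Node(value, self.head)

    def insert_end(self, value):    # O(n): must traverse to the last node
        if self.head is None:
            self.head = Node(value)
            return
        node = self.head
        while node.next:
            node = node.next
        node.next = Node(value)

    def search(self, value):        # O(n): linear traversal from the head
        node = self.head
        while node:
            if node.value == value:
                return True
            node = node.next
        return False

lst = LinkedList()
lst.insert_front(2)
lst.insert_front(1)
lst.insert_end(3)  # list is now 1 -> 2 -> 3
```

Maintaining an extra tail pointer would make `insert_end` O(1) at the cost of slightly more bookkeeping on every update.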
58. What is the time complexity of removing an element from a stack?
The time complexity of removing (popping) an element from a stack is O(1). It involves
removing the element from the top of the stack by updating the top pointer.
59. What is the time complexity of accessing the top element of a stack?
The time complexity of accessing (peeking) the top element of a stack is O(1). It involves
retrieving the element from the top of the stack without modifying the stack itself.
60. What is a queue?
A queue is an abstract data type that follows the First-In-First-Out (FIFO) principle. It can
be visualized as a horizontal line of elements, where insertion occurs at one end (rear)
and removal occurs at the other end (front). The first element inserted is the first one to
be removed.
61. What is the time complexity of inserting an element into a queue?
The time complexity of inserting (enqueueing) an element into a queue is O(1). It involves
adding the element to the rear of the queue.
62. What is the time complexity of removing an element from a queue?
The time complexity of removing (dequeuing) an element from a queue is O(1). It
involves removing the element at the front of the queue by updating the front pointer.
64. What is a hash table?
A hash table is a data structure that implements an associative array abstract data
type. It uses a hash function to map keys to array indices, allowing for efficient insertion,
deletion, and retrieval of key-value pairs. Hash tables provide constant-time average
case complexity for these operations.
65. What is a hash function?
A hash function is a function that takes an input (such as a key) and returns a fixed-size
numerical value, known as a hash code or hash value. The hash function is designed to
evenly distribute the hash codes across the available indices of the hash table,
minimizing collisions and maximizing efficiency.
66. What is collision handling in a hash table?
Collision handling in a hash table refers to the process of dealing with situations where
two or more keys result in the same hash code, leading to a collision. Common collision
handling techniques include chaining (using linked lists or arrays to store multiple values
at the same index) and open addressing (probing for alternative locations when a
collision occurs).
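Chaining can be sketched with a toy hash table in which each bucket is a list of (key, value) pairs, so colliding keys simply share a bucket:

```python
# Toy hash table sketch using separate chaining.
class ChainedHashTable:
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _bucket(self, key):
        # The hash code modulo the table size picks the bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # update an existing key in place
                return
        bucket.append((key, value))       # otherwise chain onto the bucket

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

table = ChainedHashTable()
table.put("apple", 1)
table.put("banana", 2)
```

With a good hash function the chains stay short and operations are O(1) on average; if every key landed in one bucket, the linear scan of the chain would degrade to O(n).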
67. What is the time complexity of inserting an element into a hash table?
The time complexity of inserting an element into a hash table is typically O(1) on
average. However, in the worst case, when collisions are frequent and extensive chaining
or probing is required, the time complexity can increase to O(n), where n is the number
of elements in the hash table.
68. What is the time complexity of retrieving an element from a hash table?
The time complexity of retrieving an element from a hash table is typically O(1) on
average. However, in the worst case, when collisions are frequent and extensive chaining
or probing is involved, the time complexity can increase to O(n), where n is the number
of elements in the hash table.
69. What is the time complexity of removing an element from a hash table?
The time complexity of removing an element from a hash table is typically O(1) on
average. However, in the worst case, when collisions are frequent and extensive chaining
or probing is required, the time complexity can increase to O(n), where n is the number
of elements in the hash table.
70. What is a binary search tree (BST)?
A binary search tree (BST) is a binary tree data structure in which each node has a key
greater than all the keys in its left subtree and smaller than all the keys in its right
subtree. This property enables efficient searching, insertion, and deletion operations. In-
order traversal of a BST yields a sorted sequence of keys.
71. What is the time complexity of searching for an element in a binary search tree
(BST)?
The time complexity of searching for an element in a binary search tree (BST) is O(h),
where h is the height of the tree. In a balanced BST, the height is logarithmic (h = log n,
where n is the number of nodes), resulting in an average case time complexity of O(log
n). However, in the worst case, when the tree is skewed and resembles a linked list, the
height is linear (h = n), leading to a time complexity of O(n).
72. What is the time complexity of inserting an element into a binary search tree (BST)?
The time complexity of inserting an element into a binary search tree (BST) is O(h),
where h is the height of the tree. In a balanced BST, the height is logarithmic (h = log n,
where n is the number of nodes), resulting in an average case time complexity of O(log
n). However, in the worst case, when the tree is skewed and resembles a linked list, the
height is linear (h = n), leading to a time complexity of O(n).
73. What is the time complexity of removing an element from a binary search tree (BST)?
The time complexity of removing an element from a binary search tree (BST) is O(h),
where h is the height of the tree. In a balanced BST, the height is logarithmic (h = log n,
where n is the number of nodes), resulting in an average case time complexity of O(log
n). However, in the worst case, when the tree is skewed and resembles a linked list, the
height is linear (h = n), leading to a time complexity of O(n).
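Both search and insert walk a single root-to-leaf path, which is why their cost is O(h); a minimal sketch:

```python
# BST sketch: search and insert each follow one root-to-leaf path, O(h).
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicate keys are ignored in this sketch

def search(root, key):
    while root and root.key != key:
        # The BST property tells us which subtree can hold the key.
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [5, 3, 8, 1]:
    root = insert(root, k)
```

Inserting keys in sorted order (e.g. 1, 2, 3, 4) would produce the degenerate linked-list shape described above, which is the case self-balancing trees are designed to prevent.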
74. What is a self-balancing binary search tree?
A self-balancing binary search tree is a binary search tree that automatically maintains
a balanced structure during insertions and deletions. It achieves this balance by
performing rotations or other operations to ensure that the height of the tree remains
logarithmic, optimizing the time complexity of search, insert, and delete operations.
75. What is an AVL tree?
An AVL tree is a self-balancing binary search tree named after its inventors, Adelson-
Velsky and Landis. It maintains the balance factor (the height difference between left
and right subtrees) of each node, ensuring that it is always in the range of -1, 0, or 1. AVL
trees perform rotations to maintain balance and achieve efficient operations with a
worst-case time complexity of O(log n).
76. What is a red-black tree?
A red-black tree is a self-balancing binary search tree with an additional color attribute
for each node, either red or black. The color properties and rotations maintain a balance
between the left and right subtrees, ensuring that the longest path is no more than twice
the length of the shortest path. Red-black trees offer efficient operations with a worst-
case time complexity of O(log n).
77. What is a heap?
A heap is a complete binary tree data structure that satisfies the heap property. In a max
heap, for every node, the value of the node is greater than or equal to the values of its
children. In a min heap, the value of each node is smaller than or equal to the values of
its children. Heaps are commonly used to implement priority queues and heap sort.
78. What is the time complexity of finding the maximum (or minimum) element in a
heap?
The time complexity of finding the maximum (or minimum) element in a heap is O(1).
The maximum (or minimum) element is always located at the root of the heap, allowing
for direct access without the need for traversal or comparison with other elements.
80. What is the time complexity of removing the maximum (or minimum) element from
a heap?
The time complexity of removing the maximum (or minimum) element from a heap is
O(log n), where n is the number of elements in the heap. The removal process involves
swapping the root with the last element, removing the last element, and "bubbling down"
the new root by swapping it with its larger (or smaller) child until the heap property is
satisfied. The number of swaps required is proportional to the height of the heap, which is
logarithmic.
81. What is a trie?
A trie, also known as a prefix tree, is a tree-based data structure commonly used for
efficient string searching and retrieval operations. It stores a set of strings, where each
node represents a prefix or a complete string. Trie nodes typically have multiple child
pointers, each associated with a character. Tries are useful in applications such as
autocomplete, spell-checking, and IP routing.
82. What is the time complexity of searching for a string in a trie?
The time complexity of searching for a string in a trie is O(m), where m is the length of
the string. The search process involves traversing the trie from the root to the leaf node
corresponding to the last character of the string. The number of comparisons required is
proportional to the length of the string.
83. What is the time complexity of inserting a string into a trie?
The time complexity of inserting a string into a trie is O(m), where m is the length of the
string. The insertion process involves traversing the trie based on the characters of the
string and creating new nodes as necessary. The number of operations is proportional to
the length of the string.
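Both operations can be sketched with nested dicts as trie nodes; the `"$"` key here is an assumed sentinel marking where a stored word ends:

```python
# Trie sketch: each node is a dict mapping a character to a child node.
class Trie:
    def __init__(self):
        self.root = {}

    def insert(self, word):       # O(m) in the length m of the word
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})  # create child nodes as needed
        node["$"] = True          # "$" is an assumed end-of-word sentinel

    def search(self, word):       # O(m): follow one edge per character
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node        # prefix exists; is it a complete word?

trie = Trie()
trie.insert("car")
trie.insert("cart")
```

Note that `search("ca")` returns False even though "ca" is a prefix of stored words; dropping the sentinel check would turn `search` into a prefix query, which is the basis of autocomplete.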
84. What is a directed acyclic graph (DAG)?
A directed acyclic graph (DAG) is a directed graph that does not contain any directed
cycles. In other words, it is impossible to traverse from a vertex and return back to it by
following the directions of the edges. DAGs are used in various applications, including
task scheduling, dependency resolution, and representing precedence relationships.
87. What is a minimum spanning tree (MST)?
A minimum spanning tree (MST) is a subset of the edges of a weighted undirected graph
that connects all the vertices with the minimum possible total edge weight. MSTs are
used to find the most cost-effective way to connect a set of nodes. Common algorithms
for finding MSTs include Prim's algorithm and Kruskal's algorithm.
88. What is Dijkstra's algorithm?
Dijkstra's algorithm is a graph traversal algorithm used to find the shortest path between
a starting vertex and all other vertices in a weighted graph with non-negative edge
weights. It maintains a priority queue to continuously select the vertex with the smallest
distance from the starting vertex and updates the distances of adjacent vertices
accordingly. Dijkstra's algorithm guarantees the shortest paths when all edge weights
are non-negative.
89. What is the time complexity of Dijkstra's algorithm?
The time complexity of Dijkstra's algorithm depends on the data structure used to
implement the priority queue. With a binary heap, the time complexity is O((V + E) log V),
where V is the number of vertices and E is the number of edges in the graph; with a
Fibonacci heap, it improves to O(E + V log V).
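A minimal sketch using Python's `heapq` as the binary-heap priority queue, with the graph as an adjacency dict of (neighbor, weight) pairs and non-negative weights:

```python
import heapq

# Dijkstra sketch with a binary-heap priority queue.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]  # (distance-so-far, vertex)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry: a shorter path to u was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd  # relax the edge and reschedule v
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
```

Rather than a decrease-key operation, this sketch pushes duplicate heap entries and skips stale ones on pop, a common simplification when using `heapq`.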
90. What is the difference between a breadth-first search (BFS) and a depth-first search
(DFS)?
BFS and DFS are graph traversal algorithms with different exploration strategies. BFS
explores all the vertices at the current depth level before moving to the next depth level,
while DFS explores as far as possible along each branch before backtracking. BFS uses a
queue data structure, while DFS uses a stack or recursion.
91. What is dynamic programming?
Dynamic programming is a technique for solving problems by breaking them down into
overlapping subproblems, solving each subproblem once, and storing its result for reuse
(via top-down memoization or bottom-up tabulation). The time complexity of a recursive
algorithm with memoization depends on the number of distinct subproblems
encountered: if there are n subproblems and each is solved in O(1) time once its
sub-results are available, the overall time complexity is O(n).
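Fibonacci is the classic illustration: the naive recursion is exponential, but caching each of the n distinct subproblems makes the whole computation linear. A sketch using the standard-library `functools.lru_cache`:

```python
from functools import lru_cache

# Memoized Fibonacci: n distinct subproblems, each solved once -> O(n).
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, `fib(30)` would recompute the same subproblems exponentially many times; with it, each `fib(k)` is evaluated exactly once.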
92. What is the difference between an array and a linked list?
An array is a contiguous block of memory that stores elements of the same type.
Accessing elements in an array is fast and constant time (O(1)) because they can be
accessed directly using their indices. However, inserting or deleting elements in the
middle of an array requires shifting subsequent elements, resulting in a time complexity
of O(n).
On the other hand, a linked list is a data structure where elements (nodes) are scattered
in memory and connected through pointers. Insertion and deletion operations in a linked
list can be done in constant time (O(1)) by adjusting pointers, but accessing elements
requires traversing the list, resulting in a time complexity of O(n).
95. What is the difference between a stack and a queue?
A stack follows the Last-In-First-Out (LIFO) principle, allowing insertion and deletion only
at one end (top). The last element inserted is the first one to be removed.
A queue follows the First-In-First-Out (FIFO) principle, allowing insertion at one end
(rear) and deletion at the other end (front). The first element inserted is the first one to be
removed.
96. What is the difference between a hash table and a binary search tree?
A hash table uses a hash function to map keys to array indices and provides constant-
time average case complexity for insertion, deletion, and retrieval operations. However,
hash tables do not naturally maintain order and may experience collisions, affecting
performance.
A binary search tree (BST) maintains elements in a sorted order based on their keys.
BSTs provide efficient searching, insertion, and deletion operations with a time
complexity of O(log n) in balanced trees. However, the time complexity can degrade to
O(n) in worst-case scenarios.
97. What is the difference between a graph and a tree?
A tree is a special case of a graph: it is a connected, acyclic graph with a hierarchical
structure, typically with a designated root node, and a tree with n nodes has exactly
n - 1 edges. A general graph may contain cycles, may be disconnected, and has no
inherent root or hierarchy; its edges may also be directed or weighted.
98. What is the difference between a breadth-first search (BFS) and a depth-first search
(DFS) in a graph?
BFS and DFS are graph traversal algorithms with different exploration strategies:
BFS explores all the vertices at the current depth level before moving to the next
depth level. It uses a queue to store the vertices and visits them in the order of their
discovery.
DFS explores as far as possible along each branch before backtracking. It uses a
stack or recursion to store the vertices and visits them in a depth-first manner.
BFS is typically used to find the shortest path between two vertices or to visit all vertices
in a connected component. DFS is useful for tasks such as finding cycles, topological
sorting, and exploring paths in a graph.
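The queue-versus-stack difference can be sketched side by side over an adjacency dict, with DFS using recursion (the call stack) in place of an explicit stack:

```python
from collections import deque

# BFS: FIFO queue -> vertices visited level by level.
def bfs(graph, start):
    visited, order, queue = {start}, [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return order

# DFS: recursion (implicit stack) -> each branch explored fully first.
def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    order = [start]
    for v in graph.get(start, []):
        if v not in visited:
            order.extend(dfs(graph, v, visited))
    return order

g = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(bfs(g, "a"))  # ['a', 'b', 'c', 'd']
print(dfs(g, "a"))  # ['a', 'b', 'd', 'c']
```

Both visit every reachable vertex once; only the order differs — BFS reaches `c` (depth 1) before `d` (depth 2), while DFS follows the `a → b → d` branch to its end before backtracking to `c`.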
99. What is the difference between a spanning tree and a minimum spanning tree?
A spanning tree of a graph is a subgraph that includes all the vertices of the graph while
forming a tree structure without any cycles. It preserves the connectivity of the original
graph. A minimum spanning tree (MST) is a spanning tree with the minimum possible total
edge weight. It connects all the vertices with the least overall cost. MSTs are useful in
scenarios such as designing network infrastructure or connecting a set of locations with
minimal expenses.