DAA

The document surveys classic problems at each common time-complexity class, from O(1) through O(n), O(log n), O(n log n), O(n²) and other polynomial bounds, up to exponential time, giving examples and the underlying principle for each. O(1) operations include array indexing and hash table lookups; O(n) covers linear search and summing elements; O(log n) is exemplified by binary search and balanced binary search trees; later sections treat sorting, dynamic programming, and brute-force problems.


Classic Problems with O(1) Time Complexity and Their Principles

An O(1) time complexity means that the algorithm runs in constant time regardless
of the size of the input. These algorithms typically involve direct calculations,
lookups, or operations that do not depend on input size.

1. Array Indexing (Direct Access)


Problem: Retrieve an element from an array.
Principle: Direct memory access via indexing.
Example:
arr = [10, 20, 30, 40]
x = arr[2] # O(1) operation
Why O(1)? Retrieving an element using an index takes constant time because it
directly maps to a memory address.
2. Hash Table Lookup (Average Case)
Problem: Access a value in a hash table (dict in Python).
Principle: Hashing and direct access.
Example:
hashmap = {"apple": 1, "banana": 2, "cherry": 3}
x = hashmap["banana"] # O(1) average case
Why O(1)? The hash function computes an index in constant time, allowing for fast
lookups (except in worst-case collisions).
3. Stack Push and Pop
Problem: Insert or remove an element from a stack.
Principle: Last-In-First-Out (LIFO) structure.
Example:
stack = []
stack.append(10) # Push (O(1))
x = stack.pop() # Pop (O(1))
Why O(1)? Both operations only modify the top element without iterating over the
structure.
4. Queue Enqueue and Dequeue (Using Linked List or Circular Queue)
Problem: Add or remove an element from a queue.
Principle: First-In-First-Out (FIFO) structure.
Example:
from collections import deque
queue = deque()
queue.append(10) # Enqueue (O(1))
x = queue.popleft() # Dequeue (O(1))
Why O(1)? These operations update only the front or rear of the queue.
5. Checking Even/Odd of a Number
Problem: Determine if a number is even or odd.
Principle: Modulus operation.
Example:
def is_even(n):
    return n % 2 == 0  # O(1)
Why O(1)? The modulus operation is a single arithmetic operation.
6. Swap Two Variables
Problem: Swap values of two variables.
Principle: Direct assignment.
Example:
a, b = 5, 10
a, b = b, a # O(1)
Why O(1)? The swap is a constant-time operation regardless of input size.
7. Bitwise Operations
Problem: Perform operations like AND, OR, XOR.
Principle: Hardware-level bit manipulation.
Example:
a, b = 5, 3
result = a & b # O(1) (Bitwise AND)
Why O(1)? Bitwise operations are processed in constant time by the CPU.
8. Mathematical Computations
Problem: Calculate power, square root, log, etc.
Principle: Direct function computation.
Example:
import math
x = math.sqrt(16) # O(1)
Why O(1)? These functions execute in constant time due to efficient algorithms in
the CPU.
9. Finding Maximum or Minimum in a Fixed-Size Set
Problem: Given a fixed set of numbers, find the max or min.
Principle: Predefined lookup.
Example:
nums = [3, 7, 2, 9]
max_val = max(nums) # O(1) if size is fixed
Why O(1)? max() scans every element, so it is O(n) in general; it counts as constant time here only because the set size is a fixed constant independent of the input.
10. Random Access in a Fixed-Size List
Problem: Retrieve a random element.
Principle: Direct indexing.
Example:
import random
nums = [3, 7, 2, 9]
x = random.choice(nums) # O(1)
Why O(1)? A single index lookup is constant time.
Summary of Principles Behind O(1) Algorithms
Direct Memory Access – Array indexing, hash table lookup.
Fixed-Size Operations – Predefined max/min in a fixed set.
Bitwise Manipulation – AND, OR, XOR.
Constant Arithmetic – Modulus, square root, power functions.
Efficient Data Structures – Stack, queue, hash maps.

Classic Problems with O(n) Time Complexity and Their Principles


An O(n) time complexity means the execution time grows linearly with the size of
the input. If the input size doubles, the time taken approximately doubles as well.
These algorithms typically involve iterating through the input once.

1. Finding the Maximum or Minimum Element in an Unsorted List


Problem: Given an unsorted list, find the maximum or minimum value.
Principle: Linear search through all elements.
Example:
def find_max(arr):
    max_val = arr[0]
    for num in arr:
        if num > max_val:
            max_val = num
    return max_val
Why O(n)? We traverse all n elements once.
2. Linear Search
Problem: Find an element in an unsorted list.
Principle: Check each element one by one.
Example:
def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i  # Found at index i
    return -1  # Not found
Why O(n)? In the worst case, we examine all n elements.
3. Sum of All Elements in an Array
Problem: Compute the sum of all elements in an array.
Principle: Iterate and accumulate the sum.
Example:
def sum_array(arr):
    total = 0
    for num in arr:
        total += num
    return total
Why O(n)? Each element is processed once.
4. Checking for Duplicates in an Unsorted List
Problem: Determine if a list contains duplicate values.
Principle: Store seen elements in a set.
Example:
def has_duplicates(arr):
    seen = set()
    for num in arr:
        if num in seen:
            return True
        seen.add(num)
    return False
Why O(n)? We iterate over n elements and perform constant-time set lookups.
5. Copying an Array
Problem: Create a duplicate of an array.
Principle: Element-by-element copy.
Example:
def copy_array(arr):
    new_arr = []
    for num in arr:
        new_arr.append(num)
    return new_arr
Why O(n)? Each element is copied once.
6. Prefix Sum Computation
Problem: Compute prefix sums for an array.
Principle: Iterate once, storing cumulative sums.
Example:
def prefix_sum(arr):
    result = [0] * len(arr)
    result[0] = arr[0]
    for i in range(1, len(arr)):
        result[i] = result[i - 1] + arr[i]
    return result
Why O(n)? Each element is processed once.
7. Removing Duplicates from an Unsorted List
Problem: Remove duplicate values while maintaining order.
Principle: Use a set for tracking unique elements.
Example:
def remove_duplicates(arr):
    seen = set()
    unique_arr = []
    for num in arr:
        if num not in seen:
            seen.add(num)
            unique_arr.append(num)
    return unique_arr
Why O(n)? We iterate once, and set operations are O(1).
8. Reversing a String
Problem: Reverse a given string.
Principle: Swap characters from both ends.
Example:
def reverse_string(s):
    return s[::-1]  # Slicing copies every character, so this is O(n)
Why O(n)? Each character is accessed once.
9. Counting Occurrences of Each Element
Problem: Count how many times each element appears.
Principle: Use a dictionary to track counts.
Example:
def count_occurrences(arr):
    count_dict = {}
    for num in arr:
        count_dict[num] = count_dict.get(num, 0) + 1
    return count_dict
Why O(n)? Each element is processed once.
10. Checking if a String is a Palindrome
Problem: Determine if a string reads the same forward and backward.
Principle: Compare characters from both ends.
Example:
def is_palindrome(s):
    n = len(s)
    for i in range(n // 2):
        if s[i] != s[n - i - 1]:
            return False
    return True
Why O(n)? We check up to n/2 characters, which simplifies to O(n).
11. Rotating an Array by k Positions
Problem: Rotate an array left or right by k positions.
Principle: Use slicing.
Example:
def rotate_array(arr, k):
    k = k % len(arr)  # Ensure k is within bounds
    return arr[-k:] + arr[:-k]  # Right rotation
Why O(n)? The slicing and concatenation require processing all n elements.
12. Finding the First Non-Repeating Character
Problem: Find the first character that does not repeat.
Principle: Use a dictionary to store counts.
Example:
def first_non_repeating(s):
    char_count = {}
    for char in s:
        char_count[char] = char_count.get(char, 0) + 1
    for char in s:
        if char_count[char] == 1:
            return char
    return None
Why O(n)? We scan the string twice, which is still O(n).
13. Traversing a Linked List
Problem: Visit each node in a linked list.
Principle: Move node by node.
Example:
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

def traverse(head):
    current = head
    while current:
        print(current.data)
        current = current.next
Why O(n)? Each node is visited once.
14. Removing an Element from an Unordered List
Problem: Remove the first occurrence of an element.
Principle: Scan for the element and delete it.
Example:
def remove_element(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return arr[:i] + arr[i+1:]
    return arr
Why O(n)? The worst case requires scanning all elements.
15. Merging Two Sorted Lists
Problem: Merge two sorted lists into one sorted list.
Principle: Compare elements and merge sequentially.
Example:
def merge_sorted(arr1, arr2):
    i, j = 0, 0
    merged = []
    while i < len(arr1) and j < len(arr2):
        if arr1[i] < arr2[j]:
            merged.append(arr1[i])
            i += 1
        else:
            merged.append(arr2[j])
            j += 1
    return merged + arr1[i:] + arr2[j:]
Why O(n)? Each element is processed once.
Summary of O(n) Principles
Single Iteration – Processing all elements once.
Dictionary or Set Operations – Storing and checking items.
String or Array Manipulation – Reversal, rotation, merging.
Linked List Traversal – Visiting each node once.

Classic Problems with O(log n) Time Complexity and Their Principles


An O(log n) time complexity means the execution time grows logarithmically with the
input size: each step cuts the remaining problem roughly in half, so doubling the
input adds only one more step. O(log n) algorithms typically arise in divide and
conquer strategies, binary search, and balanced tree structures.
1. Binary Search 🔎
Problem: Find an element in a sorted array.
Principle: Repeatedly divide the array in half.
Example:
def binary_search(arr, target):
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid  # Found at index mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1  # Not found
Why O(log n)? Each step halves the remaining search space.
2. Binary Search on an Infinite Array
Problem: Search for an element in an infinite sorted array.
Principle: Double the search range iteratively.
Example:
def search_in_infinite_array(arr, target):
    index = 1
    while arr[index] < target:
        index *= 2  # Double the search range
    return binary_search(arr[:index + 1], target)
Why O(log n)? Doubling brackets the target's position in O(log n) steps, and the
binary search over that range is also O(log n).
3. Finding an Element in a Sorted Matrix
Problem: Search for an element in a sorted matrix.
Principle: Binary search in rows or columns.
Example:
def search_matrix(matrix, target):
    rows, cols = len(matrix), len(matrix[0])
    left, right = 0, rows * cols - 1
    while left <= right:
        mid = (left + right) // 2
        mid_value = matrix[mid // cols][mid % cols]
        if mid_value == target:
            return True
        elif mid_value < target:
            left = mid + 1
        else:
            right = mid - 1
    return False
Why O(log n)? The matrix is treated as one sorted list of rows × cols elements, so
binary search takes O(log(rows · cols)).
4. Fast Exponentiation (Exponentiation by Squaring)
Problem: Compute a^b efficiently.
Principle: Divide exponent by 2 at each step.
Example:
def fast_exponentiation(a, b):
    result = 1
    while b > 0:
        if b % 2 == 1:
            result *= a
        a *= a
        b //= 2
    return result
Why O(log n)? The exponent b is halved in each step.
5. Finding the GCD using Euclidean Algorithm
Problem: Find the greatest common divisor (GCD) of two numbers.
Principle: Use modulo operation to reduce size.
Example:
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a
Why O(log n)? Every two iterations at least halve the smaller number, so the loop
runs O(log min(a, b)) times.
6. Searching in a Balanced Binary Search Tree (BST)
Problem: Find an element in a BST.
Principle: Traverse left or right based on comparison.
Example:
class Node:
    def __init__(self, value):
        self.value = value
        self.left = self.right = None

def search_bst(root, target):
    if not root or root.value == target:
        return root
    if target < root.value:
        return search_bst(root.left, target)
    return search_bst(root.right, target)
Why O(log n)? Each comparison discards one subtree, and a balanced BST has height
O(log n).
7. Inserting & Deleting from a Balanced BST (e.g., AVL Tree, Red-Black Tree)
Problem: Insert or delete an element while maintaining balance.
Principle: Logarithmic depth adjustments.
Example: AVL tree insertions rebalance the tree with rotations after each insert (see the sketch below).
Why O(log n)? The tree remains balanced, limiting height.
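A minimal sketch of that rotation machinery, assuming a standard AVL tree (the class and function names below are illustrative, not from the text above):

class AVLNode:
    def __init__(self, value):
        self.value = value
        self.left = self.right = None
        self.height = 1

def height(node):
    return node.height if node else 0

def rotate_right(y):
    # Right rotation: y's left child x becomes the new subtree root.
    x = y.left
    y.left, x.right = x.right, y
    y.height = 1 + max(height(y.left), height(y.right))
    x.height = 1 + max(height(x.left), height(x.right))
    return x

def rotate_left(x):
    # Mirror image of rotate_right.
    y = x.right
    x.right, y.left = y.left, x
    x.height = 1 + max(height(x.left), height(x.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def avl_insert(root, value):
    # Ordinary BST insert, then rebalance on the way back up; the tree
    # height stays O(log n), so insertion is O(log n).
    if not root:
        return AVLNode(value)
    if value < root.value:
        root.left = avl_insert(root.left, value)
    else:
        root.right = avl_insert(root.right, value)
    root.height = 1 + max(height(root.left), height(root.right))
    balance = height(root.left) - height(root.right)
    if balance > 1 and value < root.left.value:    # left-left case
        return rotate_right(root)
    if balance < -1 and value >= root.right.value: # right-right case
        return rotate_left(root)
    if balance > 1:                                # left-right case
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if balance < -1:                               # right-left case
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root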
8. Heap Operations (Insert, Delete, Extract Min/Max)
Problem: Maintain a min-heap or max-heap.
Principle: Upheap or downheap percolation.
Example:
import heapq
heap = []
heapq.heappush(heap, 5) # Insert O(log n)
heapq.heappop(heap) # Extract min O(log n)
Why O(log n)? Adjusting the heap takes logarithmic swaps.
9. Finding the k-th Smallest Element in an Array
Problem: Find the k-th smallest element in an unsorted array.
Principle: Use QuickSelect (partitioning like quicksort).
Example:
def quickselect(arr, k):
    # k is 0-indexed: k = 0 returns the smallest element
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    right = [x for x in arr if x > pivot]
    if k < len(left):
        return quickselect(left, k)
    elif k < len(arr) - len(right):
        return pivot
    return quickselect(right, k - (len(arr) - len(right)))
Why is this O(n) rather than O(log n)? Each recursive call halves the array on
average, but partitioning costs O(n), and n + n/2 + n/4 + … ≈ 2n, so QuickSelect
averages O(n) (worst case O(n²)). It is listed here because the search space
shrinks geometrically, the same principle behind the O(log n) algorithms above.
10. Finding Powers of 2 Less Than n
Problem: Find all powers of 2 ≤ n.
Principle: Multiply by 2 at each step.
Example:
def powers_of_two(n):
    power = 1
    while power <= n:
        print(power)
        power *= 2
Why O(log n)? The values double each time.
Summary of O(log n) Principles
Divide and Conquer – Breaking problems into smaller subproblems.
Binary Search – Reducing search space by half.
Exponential Growth – Doubling or halving numbers.
Tree-Based Operations – Traversing, inserting, or deleting in balanced trees.
Heap Operations – Maintaining heap structure.
Modular Reduction – Using GCD and modular arithmetic.
Conclusion
O(log n) algorithms are much faster than O(n) for large inputs. If n = 1,000,000,
then:

O(n) takes 1,000,000 steps


O(log n) takes ~20 steps

Classic Problems with O(n log n) Time Complexity and Their Principles
An O(n log n) time complexity means the execution time grows proportionally to the
input size (n) multiplied by the logarithm of the input size (log n). It is
commonly found in sorting algorithms, divide and conquer techniques, and tree-based
problems.

🔥 1. Merge Sort
Problem: Sort an array efficiently.
Principle: Divide the array into halves, sort each half, and merge them.
Example:
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
Why O(n log n)?
Log n levels of recursion (splitting the array in half).
n operations to merge at each level.
🔥 2. QuickSort (Average Case)
Problem: Sort an array efficiently using partitioning.
Principle: Pick a pivot, partition the array, and recursively sort subarrays.
Example:
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)
Why O(n log n)?
On average, each partition step splits the array roughly in half.
The expected recursion depth is log n, and each level does O(n) partitioning work.
With consistently bad pivots the worst case degrades to O(n²).
🔥 3. Heap Sort
Problem: Sort an array using a heap.
Principle: Convert the array into a heap and repeatedly extract the
minimum/maximum.
Example:
import heapq

def heap_sort(arr):
    heapq.heapify(arr)  # O(n) operation
    return [heapq.heappop(arr) for _ in range(len(arr))]  # n pops, O(log n) each
Why O(n log n)?
Heapify takes O(n).
Extracting elements takes O(log n) per element; for n elements the total is O(n log n).
🔥 4. Counting Inversions in an Array
Problem: Count how many times a number appears before a larger number (indicating
disorder).
Principle: Modify merge sort to count swaps while merging.
Example:
def count_inversions(arr):
    def merge_count(left, right):
        i = j = count = 0
        result = []
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                result.append(left[i])
                i += 1
            else:
                result.append(right[j])
                j += 1
                count += len(left) - i  # Count inversions
        result.extend(left[i:])
        result.extend(right[j:])
        return result, count

    def merge_sort_count(arr):
        if len(arr) <= 1:
            return arr, 0
        mid = len(arr) // 2
        left, left_count = merge_sort_count(arr[:mid])
        right, right_count = merge_sort_count(arr[mid:])
        merged, merge_count_val = merge_count(left, right)
        return merged, left_count + right_count + merge_count_val

    return merge_sort_count(arr)[1]
Why O(n log n)?
Uses merge sort, which runs in O(n log n).
Counting inversions only adds O(n) overhead per merge.
🔥 5. Building a Balanced Binary Search Tree (BST) from a Sorted Array
Problem: Convert a sorted array into a height-balanced BST.
Principle: Pick the middle element as the root and recursively construct subtrees.
Example:
class TreeNode:
    def __init__(self, value):
        self.value = value
        self.left = self.right = None

def sorted_array_to_bst(arr):
    if not arr:
        return None
    mid = len(arr) // 2
    root = TreeNode(arr[mid])
    root.left = sorted_array_to_bst(arr[:mid])
    root.right = sorted_array_to_bst(arr[mid+1:])
    return root
Why O(n log n)?
Each of the n nodes is created once, but the slice copies (arr[:mid], arr[mid+1:])
cost O(n) per recursion level across log n levels. Passing index bounds instead of
slices brings this down to O(n).
🔥 6. Fast Fourier Transform (FFT) for Polynomial Multiplication
Problem: Multiply two polynomials efficiently.
Principle: Convert polynomials into point values, multiply them, then inverse
transform.
Why O(n log n)?
Divide and conquer splits the polynomial in half at each of log n levels.
Each level does O(n) work combining the halves.
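The text above only names the idea, so here is a minimal recursive radix-2 sketch for illustration (function names are our own; a production version would use an iterative FFT or numpy.fft):

import cmath

def fft(a, invert=False):
    # Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    result = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor
        result[k] = even[k] + w * odd[k]
        result[k + n // 2] = even[k] - w * odd[k]
    return result

def multiply_polynomials(p, q):
    # Evaluate both polynomials at roots of unity, multiply pointwise,
    # then interpolate back -- O(n log n) overall.
    size = 1
    while size < len(p) + len(q) - 1:
        size *= 2
    fa = fft([complex(x) for x in p] + [0] * (size - len(p)))
    fb = fft([complex(x) for x in q] + [0] * (size - len(q)))
    point_products = [x * y for x, y in zip(fa, fb)]
    coeffs = fft(point_products, invert=True)
    return [round(c.real / size) for c in coeffs[:len(p) + len(q) - 1]]

For example, multiply_polynomials([1, 1], [1, 1]) returns [1, 2, 1], i.e. (1 + x)² = 1 + 2x + x².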
🔥 7. K-th Largest Element Using a Heap
Problem: Find the k-th largest element in an unsorted array.
Principle: Use a Min Heap of size k and extract the smallest.
Example:
import heapq

def kth_largest(arr, k):
    heap = arr[:k]
    heapq.heapify(heap)  # min-heap of the first k elements
    for num in arr[k:]:
        if num > heap[0]:
            heapq.heappushpop(heap, num)
    return heap[0]
Why O(n log n)?
Heapify of k elements takes O(k).
Each push/pop takes O(log k), repeated up to n times, giving O(n log k) total
(O(n log n) when k is close to n).
🔥 8. External Sorting (Sorting Large Files That Don't Fit in Memory)
Problem: Sort a huge dataset that cannot be loaded into memory at once.
Principle:
Divide the data into chunks that fit in memory (O(n) passes).
Sort each chunk and store on disk (O(n log n) for sorting).
Merge sorted chunks using a k-way merge algorithm (O(n log n)).
Why O(n log n)?
Divide and conquer + merging costs O(n log n).
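A compact sketch of the idea, assuming input and output files with one integer per line (file format and chunk size are illustrative assumptions):

import heapq
import os
import tempfile

def external_sort(input_path, output_path, chunk_size=100_000):
    # Phase 1: read, sort, and write chunks that fit in memory.
    chunk_paths = []
    with open(input_path) as f:
        while True:
            chunk = [int(line) for _, line in zip(range(chunk_size), f)]
            if not chunk:
                break
            chunk.sort()  # O(c log c) per in-memory chunk
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".chunk")
            tmp.write("\n".join(map(str, chunk)) + "\n")
            tmp.close()
            chunk_paths.append(tmp.name)
    # Phase 2: k-way merge of the sorted chunk files via a min-heap.
    files = [open(p) for p in chunk_paths]
    with open(output_path, "w") as out:
        for value in heapq.merge(*((int(line) for line in f) for f in files)):
            out.write(f"{value}\n")
    for f in files:
        f.close()
    for p in chunk_paths:
        os.remove(p)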
Summary of O(n log n) Principles
Sorting Algorithms – Merge Sort, QuickSort, Heap Sort.
Divide and Conquer – Recursively breaking problems into subproblems.
Heap Operations – HeapSort, K-th largest element.
Balanced BSTs – Converting a sorted array into a tree.
Polynomial Multiplication – Fast Fourier Transform.
External Sorting – Handling large datasets in chunks.
Conclusion
O(n log n) algorithms are significantly faster than O(n²) for large inputs.
If n = 1,000,000:

O(n²) takes ~10¹² operations (infeasible)


O(n log n) takes ~20,000,000 operations (manageable)

Classic Problems with O(n²) Time Complexity and Their Principles


O(n²) time complexity means that the execution time grows quadratically as the
input size (n) increases. This is often found in nested loops, brute-force
approaches, and certain dynamic programming problems. (A few closely related O(n³)
and O(n!) brute-force problems are included at the end of this section for
contrast.)
🔥 1. Bubble Sort
Problem: Sort an array by repeatedly swapping adjacent elements if they are in the
wrong order.
Principle: Brute-force pairwise comparison.
Example:
def bubble_sort(arr):
    n = len(arr)
    for i in range(n):
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
Why O(n²)?
Two nested loops, each running O(n) times → O(n × n) = O(n²).
🔥 2. Selection Sort
Problem: Sort an array by repeatedly finding the minimum element and placing it at
the correct position.
Principle: Find the minimum in each pass and swap.
Example:
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
Why O(n²)?
Two nested loops performing O(n) operations each → O(n²).
🔥 3. Insertion Sort
Problem: Sort an array by picking one element at a time and inserting it into its
correct position.
Principle: Shift elements to insert in sorted order.
Example:
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
Why O(n²)?
Worst case (reverse sorted array) requires O(n) shifts for each of the n elements.
🔥 4. Checking for Duplicates in an Unsorted Array (Brute Force)
Problem: Given an array, check if it contains duplicate elements.
Principle: Compare every element with every other element.
Example:
def has_duplicates(arr):
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False
Why O(n²)?
Two nested loops checking every pair → O(n²) comparisons.
🔥 5. Finding All Pairs That Sum to a Target
Problem: Find all pairs (i, j) such that arr[i] + arr[j] == target.
Principle: Brute-force check for all possible pairs.
Example:
def find_pairs(arr, target):
    pairs = []
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] + arr[j] == target:
                pairs.append((arr[i], arr[j]))
    return pairs
Why O(n²)?
Two nested loops checking all pairs.
🔥 6. Matrix Multiplication (Naïve Approach)
Problem: Multiply two n × n matrices.
Principle: Compute each entry in the result matrix using the dot product of a row
and a column.
Example:
def matrix_multiply(A, B):
    n = len(A)
    result = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                result[i][j] += A[i][k] * B[k][j]
    return result
Why O(n³)?
Three nested loops → O(n³).
Optimized Strassen’s algorithm reduces this to O(n².81).
🔥 7. Floyd-Warshall Algorithm (All-Pairs Shortest Path in a Graph)
Problem: Find the shortest path between all pairs of nodes in a graph.
Principle: Dynamic programming approach updating shortest distances iteratively.
Example:
def floyd_warshall(graph):
    n = len(graph)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                graph[i][j] = min(graph[i][j], graph[i][k] + graph[k][j])
    return graph
Why O(n³)?
Three nested loops processing every pair of nodes.
🔥 8. Longest Common Subsequence (LCS) - Dynamic Programming Approach
Problem: Find the longest common subsequence of two strings.
Principle: Use a DP table to store solutions to subproblems.
Example:
def lcs(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
Why O(n²)?
Filling up an m × n DP table.
🔥 9. Traveling Salesman Problem (TSP) – Brute Force Approach
Problem: Find the shortest route visiting all cities exactly once and returning to
the start.
Principle: Try all possible permutations.
Example:
from itertools import permutations

def tsp_brute_force(graph):
    n = len(graph)
    cities = range(n)
    min_cost = float('inf')
    for perm in permutations(cities):
        cost = sum(graph[perm[i]][perm[i + 1]] for i in range(n - 1)) + graph[perm[-1]][perm[0]]
        min_cost = min(min_cost, cost)
    return min_cost
Why O(n!)?
Trying all n! permutations.
Summary of O(n²) (and Related) Principles
Problem | Why this complexity?
Bubble Sort, Selection Sort, Insertion Sort | Two nested loops scanning n elements → O(n²)
Duplicate Checking (Brute Force) | Two nested loops comparing all pairs → O(n²)
Finding Pairs for a Target Sum | Checking all pairs of elements → O(n²)
Matrix Multiplication (Naïve) | Three nested loops for row-column multiplications → O(n³)
Floyd-Warshall Algorithm | Three nested loops updating distances → O(n³)
Longest Common Subsequence (LCS) | Filling up an m × n DP table → O(mn)
Traveling Salesman Problem (Brute Force) | Trying all n! permutations → O(n!)
Conclusion
O(n²) algorithms are inefficient for large inputs, but work well for small inputs
(n < 10,000).
If n = 1,000,000:

O(n²) takes ~10¹² operations (very slow).


O(n log n) takes ~20,000,000 operations (manageable).
For large inputs, prefer O(n log n) or O(n) solutions instead of O(n²).


Polynomial Time Complexity (O(n^k))


Polynomial time complexity means that the running time of an algorithm grows as a
power (k) of the input size (n).
This is generally written as O(n^k), where k is a constant.

If k = 1, the algorithm is O(n) (Linear Time)


If k = 2, the algorithm is O(n²) (Quadratic Time)
If k = 3, the algorithm is O(n³) (Cubic Time)
If k > 3, it is still Polynomial Time but less efficient
Polynomial time is considered efficient for moderate input sizes but becomes slow
for very large inputs.

🔥 Classic Problems Solvable in Polynomial Time (O(n^k))


1. Sorting Algorithms
Bubble Sort, Insertion Sort, Selection Sort → O(n²)
Merge Sort, Quick Sort (Average Case) → O(n log n); Quick Sort's worst case is O(n²)
Heap Sort → O(n log n)
✅ Sorting large datasets efficiently is critical in databases and computer systems.

2. Graph Algorithms
✅ Dijkstra’s Algorithm (Shortest Path) → O(V²) or O(E log V) with priority queue
Used to find the shortest path from a source node to all other nodes.
Example:

import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, weight), ...]}
    pq = [(0, start)]
    distances = {node: float('inf') for node in graph}
    distances[start] = 0
    while pq:
        curr_distance, node = heapq.heappop(pq)
        for neighbor, weight in graph[node]:
            distance = curr_distance + weight
            if distance < distances[neighbor]:
                distances[neighbor] = distance
                heapq.heappush(pq, (distance, neighbor))
    return distances
✅ Used in GPS navigation systems.

✅ Floyd-Warshall Algorithm (All-Pairs Shortest Path) → O(V³)


Used to find shortest paths between all nodes in a graph.

def floyd_warshall(graph):
    V = len(graph)
    for k in range(V):
        for i in range(V):
            for j in range(V):
                graph[i][j] = min(graph[i][j], graph[i][k] + graph[k][j])
    return graph
✅ Used in network routing protocols.

3. Dynamic Programming Problems (O(n²) or O(n³))


✅ Longest Common Subsequence (LCS) → O(n²)
Finds the longest subsequence common in two strings.

def lcs(X, Y):
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
✅ Used in bioinformatics for DNA sequence matching.

✅ Matrix Chain Multiplication → O(n³)


Finds the most efficient way to multiply matrices.

✅ Used in compiler optimizations for matrix computations.
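For illustration, the standard O(n³) DP sketch (dims holds the matrix dimensions, so matrix i has shape dims[i-1] × dims[i]):

def matrix_chain_order(dims):
    # dp[i][j] = minimum scalar multiplications to compute matrices i..j.
    n = len(dims) - 1  # number of matrices in the chain
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # subchain length
        for i in range(1, n - length + 2):      # subchain start
            j = i + length - 1                  # subchain end
            dp[i][j] = min(
                dp[i][k] + dp[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)            # split point
            )
    return dp[1][n]

For example, matrix_chain_order([1, 2, 3, 4]) returns 18: multiplying the last two matrices first is cheaper than going left to right.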

4. Computational Geometry
✅ Convex Hull (Graham’s Scan) → O(n log n)
Finds the smallest convex boundary enclosing a set of points.

✅ Used in computer graphics and collision detection.
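A short sketch using Andrew's monotone chain, an O(n log n) relative of Graham's scan (points are (x, y) tuples; the dominant cost is the initial sort):

def convex_hull(points):
    points = sorted(set(points))
    if len(points) <= 2:
        return points

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a non-left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in points:                    # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(points):          # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # drop the duplicated endpoints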

5. Polynomial-Time Approximation Algorithms (O(n^k))


Some NP-hard problems have no known polynomial-time exact algorithms but can be
solved approximately in O(n²), O(n³), etc.

✅ Traveling Salesman Problem (TSP) Approximation → O(n²)


Finds an approximate shortest route to visit all cities.

✅ Used in logistics and delivery optimization.
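The text does not name a specific heuristic; one common O(n²) illustration is nearest neighbor, which always visits the closest unvisited city next (no general quality guarantee):

def tsp_nearest_neighbor(graph, start=0):
    # graph is an n x n cost matrix; O(n) work per city gives O(n²) total.
    n = len(graph)
    visited = {start}
    tour, cost, current = [start], 0, start
    while len(visited) < n:
        next_city = min(
            (c for c in range(n) if c not in visited),
            key=lambda c: graph[current][c],
        )
        cost += graph[current][next_city]
        visited.add(next_city)
        tour.append(next_city)
        current = next_city
    cost += graph[current][start]  # close the tour
    return tour + [start], cost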

🔍 Why Polynomial Time is Important?


✅ Efficient for many practical problems
✅ Can handle large inputs (compared to exponential time O(2^n))
❌ Becomes slow for large inputs if k is high (O(n⁵) is much worse than O(n²))
❌ Cannot efficiently solve NP-Hard problems like TSP or Knapsack optimally

⏳ Comparison with Other Complexities


Complexity | Example Problems
O(1) (Constant) | Accessing an array element
O(log n) (Logarithmic) | Binary Search
O(n) (Linear) | Finding max/min in an array
O(n log n) (Linearithmic) | Merge Sort, Quick Sort (Average Case)
O(n²) (Quadratic) | Bubble Sort, LCS
O(n³) (Cubic) | Matrix Multiplication, Floyd-Warshall
O(2ⁿ) (Exponential) | Subset Sum, Recursive Fibonacci
O(n!) (Factorial) | Traveling Salesman Problem (Exact Solution)
Conclusion
O(n²), O(n³) algorithms are feasible for moderate input sizes.
For large inputs, prefer O(n log n) or O(n).
If a problem is NP-hard, we usually need approximation algorithms.

Exponential Time Complexity (O(2ⁿ), O(n!), etc.)
🔹 What is Exponential Time?
An algorithm runs in exponential time if its running time grows as a constant
raised to the power of the input size, e.g. doubling with every unit increase in n
for O(2ⁿ). The number of operations therefore explodes as n grows, making the
algorithm infeasible for large inputs.

The most common forms of exponential time are:

O(2ⁿ) → Doubles for each increase in n


O(3ⁿ) → Triples for each increase in n
O(n!) → Factorial growth (worst possible case)
🔹 Why is Exponential Time Considered Bad?
For large values of n, exponential-time algorithms become extremely slow.
For example:

n = 10 → O(2ⁿ) = 1,024 operations (manageable)


n = 20 → O(2ⁿ) = 1,048,576 operations (already slow)
n = 50 → O(2ⁿ) = 1.13 × 10¹⁵ operations (impossible to compute in a reasonable
time)
That’s why exponential-time algorithms are only practical for very small inputs.

🔥 Classic Problems with Exponential Time Complexity


1. Recursive Fibonacci Sequence (O(2ⁿ))
The naive recursive approach to computing Fibonacci numbers grows exponentially.

Example
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
Time Complexity Analysis
Each call splits into two recursive calls
Leads to O(2ⁿ) exponential growth
✅ Efficient alternative: Use dynamic programming or memoization to reduce it to
O(n).
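For comparison, a memoized sketch: caching each result makes every fib(k) cost O(1) after its first computation.

from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each value 0..n is computed exactly once, so the total work is O(n).
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)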

2. Subset Sum Problem (O(2ⁿ))


Given a set of numbers, determine if a subset exists whose sum equals a target
value.
This is solved using backtracking, which checks all 2ⁿ subsets.

Example
def subset_sum(arr, n, target):
    if target == 0:
        return True
    if n == 0:
        return False
    # Either exclude arr[n - 1], or include it and reduce the target
    return subset_sum(arr, n - 1, target) or subset_sum(arr, n - 1, target - arr[n - 1])
Time Complexity Analysis
For each element, we either include or exclude it
Leads to O(2ⁿ) complexity
NP-complete problem (hard to solve optimally for large n)
✅ Efficient alternative: Dynamic programming (O(n × sum))
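A sketch of that DP, using a 1-D boolean table over achievable sums (O(n × target) time, O(target) space):

def subset_sum_dp(arr, target):
    reachable = [False] * (target + 1)
    reachable[0] = True  # the empty subset sums to 0
    for num in arr:
        # Iterate backwards so each number is used at most once.
        for s in range(target, num - 1, -1):
            if reachable[s - num]:
                reachable[s] = True
    return reachable[target]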

3. Traveling Salesman Problem (O(n!))


Find the shortest route that visits every city exactly once and returns to the
starting point.
A brute-force solution checks all possible orders of cities (n! permutations).

Example
from itertools import permutations
import math

def tsp(graph, start=0):
    n = len(graph)
    cities = list(range(n))
    min_cost = math.inf
    for perm in permutations(cities):
        if perm[0] == start:
            cost = sum(graph[perm[i]][perm[i + 1]] for i in range(n - 1)) + graph[perm[-1]][perm[0]]
            min_cost = min(min_cost, cost)
    return min_cost
Time Complexity Analysis
n! permutations are checked
O(n!) is extremely slow even for n = 20
✅ Efficient alternative: Dynamic programming (Held-Karp Algorithm, O(n² × 2ⁿ))
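A compact Held-Karp sketch over a cost matrix (assumes n ≥ 2; a dictionary keyed by (visited-set bitmask, last city) keeps it readable rather than fast):

from itertools import combinations

def tsp_held_karp(graph):
    # dp[(mask, j)] = cheapest path that starts at city 0, visits exactly
    # the cities in mask, and ends at city j.
    n = len(graph)
    dp = {(1 << j, j): graph[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            mask = 0
            for c in subset:
                mask |= 1 << c
            for j in subset:
                prev_mask = mask ^ (1 << j)
                dp[(mask, j)] = min(
                    dp[(prev_mask, k)] + graph[k][j]
                    for k in subset if k != j
                )
    full = (1 << n) - 2  # every city except 0
    return min(dp[(full, j)] + graph[j][0] for j in range(1, n))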

4. Hamiltonian Cycle (O(2ⁿ × n))


Determines whether a cycle exists in a graph that visits every vertex exactly once
and returns to the start.
This is solved using backtracking or dynamic programming (Held-Karp).

✅ NP-complete problem (hard to solve optimally for large graphs).
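As an illustration of the backtracking approach, a minimal sketch over an adjacency matrix (exponential in the worst case, as expected):

def hamiltonian_cycle_exists(adj):
    # Grow a path from vertex 0; backtrack whenever it cannot be extended.
    n = len(adj)
    path = [0]
    used = [False] * n
    used[0] = True

    def extend():
        if len(path) == n:
            return bool(adj[path[-1]][0])  # can we close the cycle?
        for v in range(n):
            if not used[v] and adj[path[-1]][v]:
                used[v] = True
                path.append(v)
                if extend():
                    return True
                path.pop()
                used[v] = False
        return False

    return extend()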

5. Boolean Satisfiability (SAT) Problem (O(2ⁿ))


Given a Boolean expression with n variables, check if there exists an assignment
that satisfies it.

Brute-force solution tries all 2ⁿ possible assignments.


SAT solvers like DPLL or CDCL improve performance but are still exponential in
worst case.
✅ NP-complete problem (used in cryptography and circuit design).
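A tiny brute-force sketch, assuming a DIMACS-style clause encoding (the integer k stands for variable k, and -k for its negation):

from itertools import product

def brute_force_sat(num_vars, clauses):
    # Try all 2^n assignments; usable only for very small num_vars.
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        ):
            return assignment
    return None

# (x1 OR NOT x2) AND (x2 OR x3)
print(brute_force_sat(3, [[1, -2], [2, 3]]))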

🔍 Key Takeaways
Exponential-time algorithms are impractical for large inputs.
Most exponential-time problems are NP-complete or NP-hard.
They are often solved using heuristics, approximations, or dynamic programming.
For n > 20–30, exponential algorithms are infeasible.