
Time Complexity Overview

Here is a table summarizing the time complexities (Best, Average, and Worst cases)
of common Searching and Sorting algorithms:

**Searching Algorithms**

| Algorithm | Best Case | Average Case | Worst Case |
|---|---|---|---|
| Linear Search | O(1) | O(n) | O(n) |
| Binary Search | O(1) | O(log n) | O(log n) |
| Jump Search | O(1) | O(√n) | O(√n) |
| Interpolation Search | O(1) | O(log log n) | O(n) |
| Exponential Search | O(1) | O(log n) | O(log n) |
| Fibonacci Search | O(1) | O(log n) | O(log n) |

**Sorting Algorithms**

| Algorithm | Best Case | Average Case | Worst Case |
|---|---|---|---|
| Bubble Sort | O(n) | O(n²) | O(n²) |
| Selection Sort | O(n²) | O(n²) | O(n²) |
| Insertion Sort | O(n) | O(n²) | O(n²) |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) |
| Quick Sort | O(n log n) | O(n log n) | O(n²) |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) |
| Counting Sort | O(n + k) | O(n + k) | O(n + k) |
| Radix Sort | O(nk) | O(nk) | O(nk) |
| Bucket Sort | O(n + k) | O(n + k) | O(n²) |
| Shell Sort | O(n log n) | O(n log n) | O(n²) |

🚀 Powered by ChatGPT Exporter 1 / 24


n → Number of elements in the array.

k → Range of numbers (used in non-comparison-based sorting algorithms like Counting Sort, Radix Sort, and Bucket Sort).

log n → Represents logarithmic time complexity.

Would you like me to add any specific details or explanations for these algorithms? 😊

Let's discuss the Best, Average, and Worst-case time complexity of Searching
and Sorting algorithms with suitable examples.

Understanding Time Complexities


Time complexity represents the computational cost of an algorithm based on input
size (n). The three main cases are:

1. Best Case (Ω - Omega Notation): The minimum time an algorithm takes to run, typically when the input is already in its most favorable form.

2. Average Case (Θ - Theta Notation): The expected time the algorithm takes
over random inputs.

3. Worst Case (O - Big-O Notation): The maximum time an algorithm takes, often
when input is the least favorable.

1. Searching Algorithms
(a) Linear Search



Best Case: O(1) → The element is found at the first index.

Average Case: O(n) → The element is found in the middle or randomly located.

Worst Case: O(n) → The element is at the last index or not present.

Example:
Array: `[2, 4, 6, 8, 10]`, Search for `2`

Best Case: Found at index `0` → O(1)

Worst Case: Search for `10` → Traverses the entire array O(n)
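The two cases above can be sketched with a minimal implementation (illustrative code; the function name is ours, not from the source):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i  # best case: target at index 0 → O(1)
    return -1         # worst case: full traversal → O(n)

print(linear_search([2, 4, 6, 8, 10], 2))   # 0 (best case)
print(linear_search([2, 4, 6, 8, 10], 10))  # 4 (worst case)
```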

(b) Binary Search (Only for Sorted Arrays)


Best Case: O(1) → The middle element is the target.

Average Case: O(log n) → Recursively reduces the problem size.

Worst Case: O(log n) → The element is near an end of the array, or not present at all.

Example:
Sorted Array: `[1, 3, 5, 7, 9]`, Search for `5`

Best Case: `5` is the middle element, found on the first comparison → O(1)

Worst Case: Searching `9` → Binary search halves the range each step → O(log n)
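The halving step can be written as a short iterative binary search (illustrative code, not from the source):

```python
def binary_search(arr, target):
    """Search a sorted array by repeatedly halving the range."""
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2  # probe the middle element
        if arr[mid] == target:
            return mid           # best case: first probe hits → O(1)
        elif arr[mid] < target:
            low = mid + 1        # discard the left half
        else:
            high = mid - 1       # discard the right half
    return -1                    # target not present

print(binary_search([1, 3, 5, 7, 9], 5))  # 2 (middle element, best case)
print(binary_search([1, 3, 5, 7, 9], 9))  # 4 (needs O(log n) halvings)
```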

2. Sorting Algorithms
(a) Bubble Sort



Best Case: O(n) → If the array is already sorted.

Average Case: O(n²) → Elements are in random order.

Worst Case: O(n²) → The array is in reverse order.

Example:
Array: `[1, 2, 3, 4, 5]` (Sorted) → Best case: O(n)
Array: `[5, 4, 3, 2, 1]` (Reverse Order) → Worst case: O(n²)
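The O(n) best case relies on an early-exit flag: if a full pass makes no swaps, the array is already sorted. A sketch (our own code):

```python
def bubble_sort(arr):
    """Bubble sort with the early-exit flag that yields O(n) on sorted input."""
    arr = arr[:]  # work on a copy
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps: already sorted, so one pass suffices → O(n)
    return arr

print(bubble_sort([5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5]
```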

(b) Quick Sort


Best Case: O(n log n) → The pivot divides the array equally.

Average Case: O(n log n) → The pivot partitions reasonably well.

Worst Case: O(n²) → The pivot is always the smallest or largest element.

Example:
Array: `[5, 3, 8, 6, 2, 7]`, Pivot: `5`

Best Case: Balanced partitioning → O(n log n)

Worst Case: The pivot is repeatedly the smallest or largest element (e.g., always choosing `2` or `8`), giving unbalanced partitions → O(n²)
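A compact (not in-place) sketch that shows the partitioning idea (illustrative code, not from the source):

```python
def quick_sort(arr):
    """Partition around a pivot, then recurse on each side."""
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]            # middle element as pivot
    left = [x for x in arr if x < pivot]  # elements smaller than pivot
    equal = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot] # elements larger than pivot
    # Balanced splits give O(n log n); a pivot that is always the
    # minimum or maximum degrades this to O(n²).
    return quick_sort(left) + equal + quick_sort(right)

print(quick_sort([5, 3, 8, 6, 2, 7]))  # [2, 3, 5, 6, 7, 8]
```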

(c) Merge Sort


Best Case: O(n log n)

Average Case: O(n log n)

Worst Case: O(n log n) (Always splits evenly)



Example:
Array: `[4, 1, 3, 9, 7]` → Always breaks into smaller subarrays, sorts, and merges.
It consistently runs in O(n log n).
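The split-sort-merge recursion described above, as a short sketch (our own code):

```python
def merge_sort(arr):
    """Split in half, sort each half recursively, then merge."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # merge the two halves in sorted order
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]  # append the leftover tail

print(merge_sort([4, 1, 3, 9, 7]))  # [1, 3, 4, 7, 9]
```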

(d) Insertion Sort


Best Case: O(n) → Already sorted array.

Average Case: O(n²) → Random order elements.

Worst Case: O(n²) → Reverse sorted array.

Example:
Array: `[1, 2, 3, 4, 5]` (Sorted) → Best case: O(n)
Array: `[5, 4, 3, 2, 1]` (Reverse order) → Worst case: O(n²)
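A sketch showing why sorted input is O(n): the inner shifting loop never runs when each key is already in place (our own code):

```python
def insertion_sort(arr):
    """Insert each element into the sorted prefix to its left."""
    arr = arr[:]
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:  # shift larger elements right
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                # drop key into its slot
    return arr

print(insertion_sort([5, 4, 3, 2, 1]))  # [1, 2, 3, 4, 5]
```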

Conclusion
Best case occurs when the input is optimal (sorted or element found quickly).

Average case considers random input scenarios.

Worst case happens when the input is in the least favorable order.

Would you like a detailed analysis of any specific algorithm? 😊

Algorithm for the Knapsack Problem using the Greedy Technique (Fractional Knapsack)
The Greedy approach is used for solving the Fractional Knapsack Problem, where we can take fractions of items. This is different from the 0/1 Knapsack Problem, which requires Dynamic Programming.

Problem Statement
Given n items, each having:

Weight (w[i])

Value (v[i])

Find the maximum value that can be obtained by filling a knapsack of capacity W,
where fractions of items are allowed.

Greedy Choice
Compute value/weight ratio (v[i] / w[i]) for each item.

Sort items in descending order of this ratio.

Pick items one by one in that order:

If the item fits completely, take it.

Else, take the fraction that fits.

Algorithm (Greedy Approach)

1. Input: Array of weights w[], values v[], number of items n, and knapsack
capacity W.
2. Compute value/weight ratio for each item: ratio[i] = v[i] / w[i].
3. Sort items in **descending** order of ratio.



4. Initialize: total_value = 0, current_weight = 0.
5. for each item i in sorted order:
if (current_weight + w[i] ≤ W):
- Take full item
- total_value += v[i]
- current_weight += w[i]
else:
- Take fraction (W - current_weight) / w[i]
- total_value += (fraction * v[i])
- Break (knapsack is full)
6. Output total_value.

Python Code Implementation

```python
def fractional_knapsack(weights, values, capacity):
    # Step 1: Calculate value/weight ratio and sort items in descending order
    items = [(values[i] / weights[i], weights[i], values[i])
             for i in range(len(weights))]
    items.sort(reverse=True, key=lambda x: x[0])  # sort by value/weight ratio

    total_value = 0  # maximum value we can obtain

    for ratio, weight, value in items:
        if capacity >= weight:
            # Take the full item
            total_value += value
            capacity -= weight
        else:
            # Take only the fraction that still fits
            total_value += ratio * capacity
            break  # knapsack is full

    return total_value

# Example usage
weights = [10, 20, 30]
values = [60, 100, 120]
capacity = 50
max_value = fractional_knapsack(weights, values, capacity)
print("Maximum value in Knapsack =", max_value)  # 240.0
```

Example Execution
Input:

Weights = [10, 20, 30]
Values = [60, 100, 120]
Capacity = 50

Step-by-Step Calculation

1. Compute value/weight ratio:

Item 1: 60/10 = 6

Item 2: 100/20 = 5

Item 3: 120/30 = 4

2. Sort: (6, 5, 4)

3. Pick items:

Take Item 1 (10 kg) → Total Value = 60

Take Item 2 (20 kg) → Total Value = 160

Take 2/3 of Item 3 (20 kg of 30 kg) → Total Value = 160 + (120 × 20/30) = 160 + 80 = 240

4. Maximum Value = 240



Time Complexity
O(n log n) (Sorting items by value/weight ratio)

O(n) (Picking items)

Overall: O(n log n)

Would you like me to explain the 0/1 Knapsack (Dynamic Programming) approach
too? 😊

I'll break down each module from your syllabus and explain the topics with real-life
examples in an easy-to-understand way. Let's go step by step.

Module 1: Functional Blocks of a Computer (10 hours)
This module introduces the fundamental components of a computer system and
their operations.

1. Functional Blocks of a Computer


A computer consists of:

CPU (Central Processing Unit): Think of it as the brain of the computer.

Memory (RAM & Storage): Like a person's short-term and long-term memory.

Input/Output (I/O) Subsystems: Devices like keyboards (input) and monitors (output).

Control Unit: Acts like a traffic controller, ensuring everything happens in order.



🔹 Example:
When you type a Google search, your keyboard sends input signals, the CPU
processes them, fetches results from memory, and the monitor displays the results.

2. Instruction Set Architecture (ISA)


ISA defines how the CPU processes instructions, including:

Registers: Small, fast storage inside the CPU.

Instruction Execution Cycle: The process of fetching, decoding, executing, and storing results.

🔹 Example:
When you press the volume up button on your phone, the CPU follows an instruction
cycle:

1. Fetch: Retrieve the volume-up instruction.

2. Decode: Understand what to do.

3. Execute: Increase the volume.

4. Store: Update the new volume setting.

3. Data Representation
Computers represent data using:

Signed Numbers: Positive and negative numbers.

Floating Point Representation: Used for decimal numbers.



Character Representation (ASCII, Unicode): Storing text as numbers.

🔹 Example:
A text message like "Hello" is stored as numbers in memory using Unicode.

4. Computer Arithmetic
Operations like:

Addition & Subtraction: Ripple carry adder, carry look-ahead adder.

Multiplication: Shift-and-add, Booth’s multiplier.

Division: Restoring and non-restoring methods.

🔹 Example:
When you calculate "12 × 5" on a calculator, the CPU executes multiplication
algorithms behind the scenes.

Module 2: x86 Architecture & Memory System (14 hours)
This module covers how modern processors are designed.

1. Introduction to x86 Architecture


The x86 architecture is used in most personal computers today. It includes:

Registers (EAX, EBX, ECX, etc.) for temporary storage.

Instruction Set for operations like addition and memory access.



🔹 Example:
When you open a new browser tab, the CPU uses x86 instructions to allocate memory
and manage data flow.

2. CPU Control Unit Design


There are two types:

1. Hardwired: Uses fixed logic circuits.

2. Microprogrammed: Uses software to control execution.

🔹 Example:
A hardwired CPU is like a basic calculator with dedicated circuits for each function.
A microprogrammed CPU is like a smartphone that can update its software to add
new features.

3. Memory System Design


RAM & Cache Memory (Temporary storage for fast access)

Semiconductor Memory Technologies (DRAM, SRAM, Flash)

🔹 Example:
Cache memory is like a small notebook where you jot down frequently used
numbers instead of looking them up every time.

4. Input/Output (I/O) Subsystems



Handles communication between the CPU and external devices like USB, keyboard,
mouse, etc.

🔹 Example:
When you plug in a USB drive, the CPU communicates with it through the I/O
subsystem.

Module 3: Pipelining & Parallel Processors (10 hours)
1. Pipelining
Pipelining allows multiple instructions to be processed simultaneously.

🔹 Example:
Making a sandwich assembly line:

1. One person spreads butter.

2. Another adds vegetables.

3. Another cuts the sandwich.

This speeds up the process, just like pipelining speeds up CPU execution.

2. Parallel Processing
Multiple processors working together.

🔹 Example:
Modern gaming PCs use multi-core processors to run graphics, sound, and physics
calculations simultaneously.



3. Cache Coherency
Ensures that multiple processors have updated memory data.

🔹 Example:
If two people edit the same Google Doc, cache coherency ensures they see the
latest version.

Module 4: Memory Organization (6 hours)


1. Memory Interleaving
Divides memory into modules for faster access.

🔹 Example:
A supermarket with multiple checkout counters. If one line is too long, you can go to
another.

2. Hierarchical Memory
Registers (Fastest)

Cache Memory (L1, L2, L3)

RAM (Slower)

Hard Disk (Slowest)

🔹 Example:
Registers are like a scratchpad for quick notes.



Cache memory is like a sticky note for frequently used information.

RAM is like a notebook for temporary storage.

Hard Disk is like a bookshelf for long-term storage.

3. Cache Mapping & Replacement Algorithms


Methods for organizing and replacing cache memory data.

🔹 Example:
When watching Netflix, your device caches recently watched scenes so they load
faster. If memory fills up, the least-used scenes get replaced.

Conclusion
These modules cover the building blocks of computer architecture, from how
CPUs work to how memory is managed. Understanding these topics helps in
optimizing performance and designing efficient systems.

Would you like detailed explanations on any specific topic? 😊

Got it! I'll now provide a detailed breakdown of each module with in-depth
explanations, diagrams, real-world examples, and relevant technical insights.

📌 Module 1: Functional Blocks of a Computer (10 Hours)
This module introduces the structure and operation of a computer system,
including how data is represented and processed.

1.1 Functional Blocks of a Computer


A computer system consists of five major components:

1. Central Processing Unit (CPU)

Executes instructions and processes data.

Components of CPU:

Arithmetic and Logic Unit (ALU): Performs calculations (+, -, *, /) and logical operations (AND, OR, NOT).

Control Unit (CU): Directs how data moves within the CPU.

Registers: Small storage locations inside the CPU.

Cache Memory: Stores frequently used data for faster access.

2. Memory Unit (Primary Storage)

Stores data and instructions for quick access.

Types:

RAM (Random Access Memory): Temporary storage for running programs.

ROM (Read-Only Memory): Permanent storage for boot instructions.

3. Input Devices

Devices used to input data into the computer.



Examples: Keyboard, Mouse, Microphone, Scanner.

4. Output Devices

Devices that display or output results.

Examples: Monitor, Printer, Speaker.

5. Secondary Storage

Long-term data storage.

Examples: Hard Drives (HDD/SSD), Flash Drives.

🔹 Real-Life Example:
When you search for something on Google:

1. Keyboard (Input Device) sends the query.

2. CPU processes the request.

3. RAM holds temporary data.

4. Monitor (Output Device) displays results.

1.2 Instruction Set Architecture (ISA)


ISA defines how the CPU understands and processes instructions.

✅ Types of Instructions in ISA:


Data Transfer Instructions (MOV, LOAD, STORE)

Arithmetic Instructions (ADD, SUB, MUL, DIV)

Logical Instructions (AND, OR, NOT, XOR)



Control Instructions (JUMP, CALL, RETURN)

🔹 Example:
When a game loads, the CPU executes LOAD instructions to fetch data from
memory.

1.3 Data Representation


Computers use binary (0s and 1s) to store and process information.

Number Systems:
Binary (Base 2): Used in all digital circuits.

Decimal (Base 10): Human-friendly.

Octal (Base 8) and Hexadecimal (Base 16): Used in memory addressing.

✅ Binary to Decimal Conversion Example:


(1101)₂ = (1 × 2³) + (1 × 2²) + (0 × 2¹) + (1 × 2⁰) = (13)₁₀

✅ Character Representation:
ASCII (American Standard Code for Information Interchange)

Example: 'A' = 65 in ASCII

Unicode (Used for multiple languages)
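These conversions can be checked directly in Python using standard built-ins:

```python
# Positional expansion of (1101)2, matching the worked example above
value = 1 * 2**3 + 1 * 2**2 + 0 * 2**1 + 1 * 2**0
print(value)           # 13
print(int("1101", 2))  # 13, via Python's base-2 parser

# Character codes: ASCII/Unicode code points
print(ord("A"))        # 65
print(chr(65))         # A
```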

1.4 Computer Arithmetic


Operations like addition, multiplication, and division use different algorithms:



✅ Addition & Subtraction:
Ripple Carry Adder, Carry Look-Ahead Adder.

✅ Multiplication Algorithms:
Shift-and-Add Method

Booth’s Algorithm (Efficient for negative numbers)

✅ Division Algorithms:
Restoring Division

Non-Restoring Division

🔹 Example:
When a calculator performs 45 × 20, it uses Booth’s multiplication algorithm.

📌 Module 2: x86 Architecture & Memory System (14 Hours)
2.1 x86 Architecture Overview
The x86 architecture is used in Intel and AMD processors.

✅ Registers in x86:
General Purpose Registers: EAX, EBX, ECX, EDX

Segment Registers: CS, DS, SS, ES

Instruction Pointer (IP): Stores the next instruction to be executed.



🔹 Example:
When you open an application, the Instruction Pointer (IP) fetches and executes
instructions.

2.2 CPU Control Unit Design


The Control Unit manages CPU operations.

✅ Types of Control Units:


1. Hardwired Control:

Uses logic circuits for fixed instruction execution.

Fast but inflexible.

2. Microprogrammed Control:

Uses software-like instructions (Microinstructions).

More flexible but slower.

🔹 Example:
A basic calculator uses a hardwired control unit, while a smartphone processor
uses a microprogrammed control unit.

2.3 Memory System Design


Memory is essential for data storage.

✅ Types of Memory:



1. Primary Memory (RAM, ROM)

2. Secondary Storage (HDD, SSD)

3. Cache Memory (L1, L2, L3)

🔹 Example:
A gaming PC uses SSD (Fast Storage) + RAM (Temporary Data) + Cache
(Frequent Data Access) for smooth performance.

2.4 Input/Output (I/O) Subsystems


Handles data transfer between CPU and external devices.

✅ Types of I/O:
Programmed I/O: CPU controls data transfer.

Interrupt-driven I/O: Devices notify CPU when they need processing.

DMA (Direct Memory Access): Data is transferred without CPU intervention.

🔹 Example:
When a printer prints a document, it uses interrupt-driven I/O.

📌 Module 3: Pipelining & Parallel Processing (10 Hours)
3.1 Pipelining



Pipelining speeds up processing by executing multiple instructions simultaneously.

✅ Stages in Pipelining:
1. Fetch: Get the instruction.

2. Decode: Understand the instruction.

3. Execute: Perform the operation.

4. Write-back: Store the result.

🔹 Example:
A car assembly line works similarly—different tasks happen at the same time.

✅ Types of Pipeline Hazards:


Structural Hazards: Resource conflicts.

Data Hazards: Dependency on previous instructions.

Control Hazards: Branching issues.

3.2 Parallel Processing


Uses multiple processors for faster computing.

✅ Types:
1. SISD (Single Instruction, Single Data)

2. SIMD (Single Instruction, Multiple Data)

3. MIMD (Multiple Instruction, Multiple Data)



🔹 Example:
Modern supercomputers use parallel processing to perform complex simulations.

📌 Module 4: Memory Organization (6 Hours)
4.1 Memory Interleaving
Divides memory into banks for faster access.

🔹 Example:
A supermarket with multiple checkout lanes allows customers to be served
faster.

4.2 Hierarchical Memory


Organizes memory from fastest to slowest:

1. Registers (Fastest)

2. Cache Memory

3. RAM

4. Hard Drive (Slowest)

🔹 Example:
Your web browser caches frequently visited websites for faster loading.



4.3 Cache Mapping & Replacement
Determines where data is stored in cache and when it should be replaced.

✅ Cache Replacement Algorithms:


LRU (Least Recently Used)

FIFO (First In, First Out)

🔹 Example:
Netflix uses caching to store frequently watched shows for quick access.
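The LRU policy above can be sketched with an ordered dict (the class name and interface here are our own illustration, not a standard API):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: evicts the least recently used key when full."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a" so it becomes most recent
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("c"))  # 3
```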

Conclusion
This detailed explanation covers every aspect of Computer Architecture, from
how the CPU processes data to how memory and I/O systems function.

Would you like diagrams or code examples for better understanding? 😊
