Operating System
In operating systems, process management and CPU scheduling are critical components for efficient
execution and resource utilization.
1. Process Management:
Process management involves the creation, scheduling, and termination of processes within the
system. It is a fundamental part of an OS to ensure that all active tasks are performed efficiently.
• Process: A program in execution. It includes the program code, its current activity, and
resources like CPU registers, memory, and input/output devices.
• Process Control Block (PCB): This is the data structure the OS uses to store the information
about each process, such as:
o Process ID (PID)
o Process state
o CPU registers
o Memory limits
o Scheduling information
• Process Scheduling: The OS uses scheduling algorithms to decide which process should be
executed by the CPU next. This ensures efficient CPU utilization.
2. CPU Scheduling:
CPU scheduling is a crucial part of the OS to manage the execution of processes. It allocates CPU time
to different processes to ensure fairness and efficiency.
• Schedulers:
o Long-term scheduler: Decides which processes are admitted into the system (from new
to ready state).
o Short-term scheduler: Decides which of the ready processes gets the CPU next.
o Medium-term scheduler: Sometimes involved in swapping processes in and out of
memory.
• Common Scheduling Algorithms:
o First Come, First Served (FCFS): Processes are executed in the order they arrive.
o Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest
execution time is given priority.
o Round Robin (RR): Each process is given a fixed time slice (quantum), and after that,
it’s moved to the end of the queue.
o Priority Scheduling: Processes are scheduled based on priority, and the one with the
highest priority is executed first.
o Multilevel Queue Scheduling: Processes are divided into multiple queues based on
their characteristics (e.g., interactive, batch) and then scheduled accordingly.
• Performance Metrics:
o Turnaround Time: The total time taken by a process from arrival to completion.
o Waiting Time: The total time a process spends waiting in the ready queue.
o Response Time: The time between the submission of a request and the first response.
Example (Round Robin, time quantum = 2):
Process   Burst Time
P1        4
P2        3
P3        5
• Execution:
o P1 runs for 2 units, then P2 runs for 2 units, then P3 runs for 2 units.
o After that, P1 gets another 2 units, P2 runs for its remaining time (1 unit), then P3
finishes its remaining 3 units.
This results in a fair CPU time distribution, with each process getting an equal opportunity to run.
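To make this concrete, here is a minimal C sketch (not part of the original notes) that simulates the Round Robin schedule above and prints each process's completion, turnaround, and waiting times. It cycles over the processes in array order, which matches the trace above since all processes arrive at time 0:

#include <stdio.h>

int main(void) {
    const char *name[] = {"P1", "P2", "P3"};
    int burst[]     = {4, 3, 5};              // burst times from the example
    int remaining[] = {4, 3, 5};
    int completion[3] = {0};
    int quantum = 2, time = 0, done = 0, n = 3;

    while (done < n) {                        // repeat until every process finishes
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;  // already finished
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                    // run this process for one slice
            remaining[i] -= slice;
            if (remaining[i] == 0) {          // process just completed
                completion[i] = time;
                done++;
            }
        }
    }
    for (int i = 0; i < n; i++) {
        int turnaround = completion[i];       // arrival time is 0 for all three
        int waiting = turnaround - burst[i];
        printf("%s: completion=%d turnaround=%d waiting=%d\n",
               name[i], completion[i], turnaround, waiting);
    }
    return 0;
}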
1. In-depth Understanding of Algorithms
Scheduling Algorithms:
• First Come, First Served (FCFS):
o Explanation: The simplest scheduling algorithm. In FCFS, processes are executed in the order they arrive in the ready queue.
o Disadvantage: It can cause the convoy effect, where short processes have to wait behind longer ones, leading to poor average waiting time.
o Example:
▪ In FCFS, if Process A arrives first and runs for 5 units, Process B (3 units) must wait the full 5 units before it starts.
• Shortest Job First (SJF):
o Explanation: The process with the shortest burst time is executed first. SJF can be preemptive (Shortest Remaining Time First) or non-preemptive.
o Disadvantage: It can lead to starvation of long processes if short processes keep arriving and always take precedence.
o Example:
▪ In non-preemptive SJF with burst times ordered B < C < A, Process B runs first, followed by Process C, and then Process A.
• Round Robin (RR):
o Explanation: In this preemptive algorithm, each process gets a fixed time slice (quantum). If a process doesn't complete within the quantum, it is preempted and sent back to the ready queue, and the next process is executed.
o Disadvantage: The performance depends heavily on the time quantum. A very small quantum leads to excessive context switching, and a large quantum behaves similarly to FCFS.
o Example:
▪ Burst times: P1 = 4, P2 = 5; Quantum = 2
▪ Execution: P1 runs for 2 units, then P2 runs for 2 units, then P1 runs for its remaining 2 units, then P2 runs for its remaining 3 units.
• Priority Scheduling:
o Explanation: Each process is assigned a priority, and the CPU is allocated to the process
with the highest priority. It can be either preemptive (higher-priority processes can
interrupt lower-priority ones) or non-preemptive (a running process completes its CPU burst before a waiting higher-priority process is dispatched).
o Example:
▪ If P1 has priority 1 (highest) and P2 has priority 2, P1 is scheduled first; under preemptive priority scheduling, a newly arriving P1 would interrupt a running P2.
Performance Metrics:
• Turnaround Time:
o Definition: Time taken by a process to complete execution, from the time it arrives to
the time it finishes.
• Waiting Time:
o Definition: Time a process spends waiting in the ready queue before getting CPU time.
o Significance: Lower waiting time indicates better CPU utilization and efficiency.
• Response Time:
o Definition: The time between the submission of a request (process arrival) and the first
response (first time the process gets CPU time).
o Significance: This is especially important for interactive systems, where fast responses
are crucial.
• Real-world Application:
o Example: In an operating system handling multiple tasks (e.g., video rendering, email
checking, etc.), SJF would prioritize the rendering task if it takes less time, reducing
overall completion time. However, interactive tasks may be prioritized with Round Robin
to maintain system responsiveness.
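The three metrics above are connected by two standard identities: Turnaround Time = Completion Time - Arrival Time, and Waiting Time = Turnaround Time - Burst Time. Applied to the Round Robin example earlier (quantum = 2, all processes arriving at time 0), P1, P2, and P3 complete at times 8, 9, and 12, giving turnaround times of 8, 9, and 12 and waiting times of 4, 6, and 7.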
Process States:
• Explanation: A process can exist in one of several states during its lifecycle:
o New: The process is being created.
o Ready: The process is in memory, waiting to be assigned the CPU.
o Running: The process's instructions are being executed on the CPU.
o Waiting (Blocked): The process is waiting for some event (e.g., I/O operation) to complete.
o Terminated: The process has finished execution.
• Key Transitions:
o Ready to Running: The short-term scheduler dispatches the process to the CPU.
o Running to Ready: The process is preempted and placed back in the ready queue (e.g., in Round Robin).
o Running to Waiting: The process requests I/O or waits for an event.
o Waiting to Ready: The awaited event or I/O operation completes.
Schedulers:
• Long-term Scheduler: Decides which processes to admit into the system (to transition from new
to ready state).
• Short-term Scheduler: Decides which of the ready processes should be assigned to the CPU
next. This is the main scheduling function for time-sharing systems.
• Medium-term Scheduler: Moves processes in and out of memory to manage memory resources
efficiently, especially in systems with limited memory.
Preemptive vs. Non-Preemptive Scheduling:
• Preemptive Scheduling: The OS can interrupt a running process to assign CPU time to another
process. Round Robin is an example.
• Non-Preemptive Scheduling: A running process will continue until it voluntarily releases the
CPU, typically when it completes its burst time.
3. Time Complexity:
o FCFS: O(1) per scheduling decision with a FIFO ready queue; O(n) to compute completion times for n processes.
o SJF: O(n) per decision (O(n^2) overall) if the scheduler scans the ready queue for the shortest burst; O(log n) per decision (O(n log n) overall) with a min-heap keyed on burst time.
o Round Robin: O(1) per decision with a circular queue; total overhead grows as the quantum shrinks, since more context switches occur.
o Priority Scheduling: O(n log n) overall if using a priority queue, otherwise O(n) per decision for simpler list-based implementations.
• Priority Scheduling:
o Preemptive: A newly arrived higher-priority process can interrupt the running lower-priority one (common in real-time systems).
o Non-preemptive: The running process finishes its CPU burst before the waiting higher-priority process is dispatched.
• Edge Case: All Processes with the Same Burst Time: In this case, Round Robin and FCFS behave similarly, and SJF offers no advantage, since every job is equally "short."
• Question 1: Calculate the waiting time and turnaround time for the following set of processes
using FCFS:
o Answer: Under FCFS, processes run in arrival order; each process's waiting time is its start time minus its arrival time (the start time being the completion time of the previous process), and its turnaround time is its waiting time plus its burst time.
• Question 2: Calculate response time and waiting time for processes using Round Robin with a
time quantum of 2.
• Question 3: Determine the average waiting time and turnaround time for a set of processes
with various burst times using SJF (non-preemptive).
o Example (illustrative, all processes arriving at time 0): for burst times 6, 8, 7, and 3, non-preemptive SJF runs them in the order 3, 6, 7, 8, giving waiting times 0, 3, 9, 16 (average 7) and turnaround times 3, 9, 16, 24 (average 13).
Process Synchronization
Process synchronization is an essential concept in operating systems that ensures processes execute in a
coordinated manner, especially when they share resources like memory or files. It prevents race
conditions, which occur when two or more processes attempt to modify shared data concurrently,
leading to unpredictable results.
1. Critical Section Problem:
o Critical Section: The segment of code in which a process accesses shared resources.
o Race Condition: Occurs when multiple processes access shared data and the final result depends on the order in which the processes access the data.
o The goal is to design mechanisms that prevent multiple processes from simultaneously executing in their critical sections.
2. Synchronization Requirements:
o Mutual Exclusion: Only one process can be in its critical section at a time.
o Progress: If no process is in the critical section, the system should allow some process to
enter.
o Bounded Waiting: There should be a limit on how long a process waits to enter the
critical section.
Synchronization Mechanisms:
1. Locks:
o Lock: A basic synchronization primitive that ensures mutual exclusion by allowing only one process to access the critical section at a time.
2. Semaphores:
o An integer variable accessed through two atomic operations (a sketch of a possible implementation follows this list):
▪ P(S) or wait(S): Decreases the value of semaphore S. If S becomes negative, the process is blocked.
▪ V(S) or signal(S): Increases the value of semaphore S. If S was negative, one blocked process is awakened.
3. Monitors:
o A high-level construct that bundles shared data with the procedures that operate on it; at most one process can be active inside a monitor at a time.
4. Mutexes:
o A lock with ownership: only the thread that acquired a mutex may release it (contrasted with semaphores later in this section).
5. Condition Variables:
o Used in conjunction with locks or mutexes to allow processes to wait for a certain condition to be true before continuing execution.
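As promised above, here is one way a counting semaphore can be implemented with POSIX mutex and condition-variable primitives. This is a sketch using the common variant in which the value never drops below zero (the notes above use the negative-count convention instead):

#include <stdio.h>
#include <pthread.h>

typedef struct {
    int value;                            // number of available "units"
    pthread_mutex_t lock;
    pthread_cond_t  cond;
} semaphore_t;

static semaphore_t s = {2, PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER};

void sem_wait_op(semaphore_t *sem) {      // P(S) / wait(S)
    pthread_mutex_lock(&sem->lock);
    while (sem->value == 0)               // block while nothing is available
        pthread_cond_wait(&sem->cond, &sem->lock);
    sem->value--;                         // take one unit
    pthread_mutex_unlock(&sem->lock);
}

void sem_signal_op(semaphore_t *sem) {    // V(S) / signal(S)
    pthread_mutex_lock(&sem->lock);
    sem->value++;                         // release one unit
    pthread_cond_signal(&sem->cond);      // wake one blocked waiter, if any
    pthread_mutex_unlock(&sem->lock);
}

int main(void) {
    sem_wait_op(&s); sem_wait_op(&s);     // both units taken; a third wait would block
    sem_signal_op(&s);                    // give one back
    printf("value is now %d\n", s.value); // prints 1
    return 0;
}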
Classical Synchronization Problems:
1. Producer-Consumer Problem:
o Involves two types of processes: producers (which create data) and consumers (which
consume data). The problem is to ensure that producers do not overwrite data that has
not been consumed, and consumers do not consume data that hasn't been produced
yet.
o This is typically solved using a shared buffer and semaphores to ensure mutual exclusion
and synchronization.
2. Reader-Writer Problem:
o In this problem, multiple processes need to read and write to a shared resource. The
challenge is to allow multiple readers to access the resource simultaneously but ensure
that only one writer can access the resource at a time.
o Two types:
▪ First readers-writers problem (readers preference): no reader waits unless a writer already holds the resource; writers may starve.
▪ Second readers-writers problem (writers preference): once a writer is waiting, no new readers are admitted; readers may starve.
• Deadlock: A situation where two or more processes are waiting indefinitely for resources held by
each other, creating a cycle of dependency.
▪ Hold and Wait (one of the four necessary conditions, covered in detail in the deadlock section below): A process is holding at least one resource and waiting for additional resources held by other processes.
• Starvation: A process is perpetually denied the resources it needs to proceed due to other
processes continuously taking precedence.
o Can occur in scheduling algorithms if processes are not fairly handled (e.g., a low-priority
process may never get CPU time).
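The classic semaphore-based solution to the producer-consumer problem can be sketched as follows (pseudocode using the wait/signal operations defined earlier, with a bounded buffer of size N):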
semaphore mutex = 1, empty = N, full = 0;   // lock, free slots, filled slots
void producer() {
    while (true) {
        produce_item();
        wait(empty); wait(mutex);           // block if buffer is full; lock the buffer
        add_item_to_buffer();
        signal(mutex); signal(full);        // unlock; announce one more filled slot
    }
}
void consumer() {
    while (true) {
        wait(full); wait(mutex);            // block if buffer is empty; lock the buffer
        remove_item_from_buffer();
        signal(mutex); signal(empty);       // unlock; announce one more free slot
        consume_item();
    }
}
• Critical Section: Protect shared resources from simultaneous access by multiple processes.
• Semaphores & Mutexes: Core synchronization primitives for ensuring mutual exclusion and
handling process coordination.
• Semaphore Operations:
o Ensure you are comfortable with how wait() and signal() work with semaphores.
Practice scenarios where processes wait for a semaphore to become available (e.g.,
when a process enters the critical section or waits for resource availability).
o Learn about binary semaphores (mutexes) vs. counting semaphores and practice
problems involving counting semaphores for resource allocation.
• Mutex vs Semaphore:
o Understand that mutexes are used for mutual exclusion in single-resource access, while
semaphores can be used for controlling access to a set of resources. Compare them in
terms of their usage and limitations.
• Producer-Consumer Problem:
o Understand the critical section between the producer (inserts data into a buffer) and the
consumer (removes data from the buffer).
o Be comfortable with buffer management and how to handle empty and full conditions
using semaphores.
o Practice problem: Given a buffer of size N, write code for both producer and consumer
using semaphores, considering edge cases such as buffer overflow and underflow.
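One possible solution sketch for this practice problem, using POSIX threads and semaphores (buffer size, item count, and names are illustrative choices, not from the original notes):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define N 8                                  // buffer capacity (illustrative)

int buffer[N];
int in = 0, out = 0;                         // next write / next read positions
sem_t empty_slots, full_slots;               // counting semaphores
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);              // block if the buffer is full (no overflow)
        pthread_mutex_lock(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);               // one more item available
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);               // block if the buffer is empty (no underflow)
        pthread_mutex_lock(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);              // one more free slot
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);            // all N slots start empty
    sem_init(&full_slots, 0, 0);             // no items yet
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}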
• Reader-Writer Problem:
o Practice with multiple readers accessing the data concurrently while ensuring mutual
exclusion for writers.
o Practice problem: Design a solution for a multi-threaded system where multiple threads
can read the same data concurrently, but writers should have exclusive access to the
resource.
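A sketch of the readers-preference variant using POSIX primitives (illustrative; read_count tracks active readers, and the wrt semaphore gives writers exclusive access; note that this variant can starve writers):

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t wrt;                                   // writers' exclusion
pthread_mutex_t rc_lock = PTHREAD_MUTEX_INITIALIZER;
int read_count = 0;                          // number of active readers
int shared_data = 0;

void *reader(void *arg) {
    pthread_mutex_lock(&rc_lock);
    if (++read_count == 1) sem_wait(&wrt);   // first reader locks out writers
    pthread_mutex_unlock(&rc_lock);
    printf("read %d\n", shared_data);        // many readers may be here at once
    pthread_mutex_lock(&rc_lock);
    if (--read_count == 0) sem_post(&wrt);   // last reader readmits writers
    pthread_mutex_unlock(&rc_lock);
    return NULL;
}

void *writer(void *arg) {
    sem_wait(&wrt);                          // exclusive access to the resource
    shared_data++;
    sem_post(&wrt);
    return NULL;
}

int main(void) {
    sem_init(&wrt, 0, 1);
    pthread_t r1, r2, w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r1, NULL, reader, NULL);
    pthread_create(&r2, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return 0;
}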
3. Deadlock Handling:
o Detection: Learn how to identify deadlock conditions using resource allocation graphs
or process state diagrams.
• Practice problem: Given a system of processes and resources, determine if deadlock exists using
a resource allocation graph or wait-for graph.
4. Mathematical Foundation:
• Counting Semaphore:
o If a counting semaphore is initialized to k and p wait() operations and v signal() operations have completed, its current value is k - p + v; a negative value means that many processes are blocked on it.
• Mutex vs Semaphores:
o Learn where mutexes are preferred (e.g., protecting single shared resources) and where
semaphores are needed (e.g., managing multiple identical resources like file access).
o Practice problem: Write a solution that uses mutexes for single-resource
synchronization and semaphores for multi-resource synchronization.
5. Practice:
o Search for previous GATE questions involving process synchronization, and write the
code or explain the solution.
o Focus on how different algorithms like round-robin or priority scheduling interact with
critical sections and how the performance might be impacted by process
synchronization.
• Practice Problems:
1. Producer-Consumer using Semaphore: Write a solution to the problem, ensuring you handle
buffer overflow and underflow conditions.
2. Reader-Writer Problem: Implement a solution where multiple threads of readers can access
data concurrently but a writer can only access it exclusively.
3. Dining Philosophers Problem: Simulate this classic problem and avoid deadlock and starvation
while ensuring philosophers get a chance to eat.
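One possible sketch in C with POSIX threads (illustrative): it prevents deadlock by imposing a resource ordering, so every philosopher always picks up the lower-numbered fork first and no circular wait can form:

#include <stdio.h>
#include <pthread.h>

#define N 5                                      // philosophers and forks (illustrative)

pthread_mutex_t fork_lock[N];

void *philosopher(void *arg) {
    int id = *(int *)arg;
    int left = id, right = (id + 1) % N;
    int first  = left < right ? left : right;    // lower-numbered fork first
    int second = left < right ? right : left;
    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&fork_lock[first]);   // ordered acquisition breaks
        pthread_mutex_lock(&fork_lock[second]);  // the circular-wait condition
        printf("philosopher %d eats\n", id);
        pthread_mutex_unlock(&fork_lock[second]);
        pthread_mutex_unlock(&fork_lock[first]);
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int ids[N];
    for (int i = 0; i < N; i++) pthread_mutex_init(&fork_lock[i], NULL);
    for (int i = 0; i < N; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, philosopher, &ids[i]);
    }
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    return 0;
}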
Deadlock is a critical issue in operating systems, where a set of processes are blocked because each
process is holding a resource and waiting for another resource held by another process. It is essential to
understand how deadlocks arise and how to handle or prevent them in a system.
Deadlock Basics
1. Deadlock Definition: A deadlock occurs when a set of processes are in a state where they
cannot proceed because each process is waiting for another process in the set to release a
resource.
2. Deadlock Conditions: A deadlock can only occur if the following four necessary conditions hold
simultaneously:
o Mutual Exclusion: At least one resource must be held in a non-shareable mode (i.e., only
one process can use the resource at any given time).
o Hold and Wait: A process that is holding at least one resource is waiting for additional
resources held by other processes.
o No Preemption: Resources cannot be forcibly taken from processes holding them; they
must be released voluntarily.
o Circular Wait: A set of processes are waiting for each other in a circular chain. For
example, Process 1 is waiting for a resource held by Process 2, Process 2 is waiting for a
resource held by Process 3, and Process 3 is waiting for a resource held by Process 1.
Deadlock Handling Strategies
1. Deadlock Prevention:
In this approach, we design the system to prevent deadlock from ever occurring by ensuring that at least
one of the necessary conditions for deadlock is eliminated.
• Eliminating Mutual Exclusion: This is often impractical because certain resources (e.g., printers)
are inherently non-shareable.
• Eliminating Hold and Wait: Processes must request all the resources they need at once, before
starting execution. However, this can lead to resource wastage.
• Eliminating Circular Wait: One way to prevent circular wait is to impose an ordering of resources
and require processes to request resources in an increasing order.
Example:
If a process requests resources in a fixed order (e.g., first Resource A, then Resource B), it can prevent
circular wait by ensuring that no process is caught in a cycle.
2. Deadlock Avoidance:
This strategy keeps the system out of states from which deadlock could arise: before granting any resource request, the system checks whether doing so could lead to deadlock, and makes the process wait if it might.
• Banker's Algorithm:
The Banker’s algorithm is a well-known deadlock avoidance algorithm. It works by checking if
granting a resource request would leave the system in a safe state.
o Safe State: A state is safe if there exists a sequence of processes that can finish executing
without causing deadlock. In other words, all processes can eventually complete even if
they are denied some resources.
o Unsafe State: A state is unsafe if no such sequence exists, and it may lead to a deadlock.
• The algorithm works by checking if a request can be granted and still leave the system in a safe
state.
• If the system can allocate resources without causing deadlock, the request is granted. If not, the
process must wait.
Example:
If a process requests resources, the algorithm checks if granting the request leaves the system in a safe
state. If so, the resources are allocated, and the system proceeds. If not, the request is denied, and the
process must wait.
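A compact C sketch of the safety check at the core of the Banker's algorithm (the process count, resource count, and matrices below are illustrative placeholders, not data from the notes):

#include <stdio.h>
#include <stdbool.h>

#define P 3                                       // number of processes (illustrative)
#define R 2                                       // number of resource types (illustrative)

// Returns true if a safe sequence exists for the given state.
bool is_safe(int available[R], int max[P][R], int alloc[P][R]) {
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int count = 0; count < P; ) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;                  // need = max - alloc
            for (int r = 0; r < R; r++)
                if (max[p][r] - alloc[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                        // p can finish; reclaim its resources
                for (int r = 0; r < R; r++) work[r] += alloc[p][r];
                finished[p] = true;
                progress = true;
                count++;
            }
        }
        if (!progress) return false;              // no process can proceed: unsafe
    }
    return true;
}

int main(void) {
    int available[R] = {3, 3};
    int max[P][R]   = {{7, 3}, {3, 2}, {9, 0}};
    int alloc[P][R] = {{0, 1}, {2, 0}, {5, 0}};
    printf("state is %s\n", is_safe(available, max, alloc) ? "safe" : "unsafe");
    return 0;
}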
3. Deadlock Detection and Recovery:
In this approach, the system allows deadlocks to occur but detects them and takes action to recover from them.
• Deadlock Detection:
The system periodically checks for deadlocks by building a resource allocation graph (RAG) or
wait-for graph.
o Resource Allocation Graph (RAG): The nodes represent processes and resources, and
edges represent resource allocation or request.
o Wait-for Graph: A simplified version of the RAG where only processes are represented.
An edge from process A to process B indicates that process A is waiting for a resource
held by process B.
• Cycle Detection:
If there is a cycle in the wait-for graph, deadlock has occurred.
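A minimal DFS-based cycle check over a wait-for graph (illustrative adjacency matrix; an edge from i to j means process i is waiting for a resource held by process j):

#include <stdio.h>
#include <stdbool.h>

#define N 3                                      // number of processes (illustrative)

bool dfs(bool g[N][N], int u, bool visited[N], bool on_stack[N]) {
    visited[u] = on_stack[u] = true;
    for (int v = 0; v < N; v++) {
        if (!g[u][v]) continue;
        if (on_stack[v]) return true;            // back edge: cycle, hence deadlock
        if (!visited[v] && dfs(g, v, visited, on_stack)) return true;
    }
    on_stack[u] = false;                         // finished exploring u
    return false;
}

bool has_deadlock(bool g[N][N]) {
    bool visited[N] = {false}, on_stack[N] = {false};
    for (int u = 0; u < N; u++)
        if (!visited[u] && dfs(g, u, visited, on_stack)) return true;
    return false;
}

int main(void) {
    // P0 waits for P1, P1 waits for P2, P2 waits for P0: a cycle
    bool g[N][N] = {{0, 1, 0}, {0, 0, 1}, {1, 0, 0}};
    printf("deadlock: %s\n", has_deadlock(g) ? "yes" : "no");
    return 0;
}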
• Deadlock Recovery: Once a deadlock is detected, the system must recover from it. There are a few strategies:
o Process Termination: Abort one or more of the deadlocked processes, either all at once or one at a time until the cycle is broken.
o Resource Preemption: Take resources away from processes and assign them to other processes to break the deadlock. The preempted process may need to be rolled back to a safe state.
• Scenario: Consider two processes, P1 and P2, and two resources, R1 and R2, where P1 holds R1 and requests R2 while P2 holds R2 and requests R1.
In this case, a circular wait occurs. To prevent deadlock, we can eliminate the hold and wait condition by requiring both processes to request both resources at the same time, or by forbidding a process from requesting a new resource while holding one.
• Scenario: Consider 3 processes (P1, P2, P3) and 3 resources (R1, R2, R3), where P1 holds R1 and waits for R2, P2 holds R2 and waits for R3, and P3 holds R3 and waits for R1.
A cycle is formed in the wait-for graph, indicating a deadlock. The system can either terminate a process
or preempt resources to break the cycle.
Summary of Deadlock Handling:
• Prevention: Avoid at least one of the deadlock conditions (e.g., hold and wait, circular wait).
• Avoidance: Use algorithms like Banker’s Algorithm to ensure the system remains in a safe state.
• Detection and Recovery: Allow deadlocks to occur, detect them using resource allocation
graphs, and recover by terminating processes or preempting resources.
Operating System: File System and Disk Management
The file system and disk management components of an operating system are responsible for
organizing, storing, retrieving, and managing data on storage devices like hard drives, SSDs, and other
storage media. These concepts are fundamental for GATE preparation, as they are frequently tested in
both conceptual and problem-solving questions.
File System Basics
1. File Concept:
o A file is a logical storage unit that provides a way to store and retrieve data.
2. File Attributes:
o Name and Type: The human-readable identifier and the kind of file.
o Size: The current size of the file.
o Location: Physical address or pointer to the file's data on the storage device.
o Protection: Access-control information (e.g., read/write/execute permissions).
o Time, Date, and User Information: Metadata related to file creation, access, and modification.
3. File Operations:
o Common operations include create, open, read, write, reposition (seek), delete, and close.
File Allocation Methods
1. Contiguous Allocation:
o Each file occupies a contiguous set of blocks on the disk; this gives fast sequential and direct access but suffers from external fragmentation.
2. Linked Allocation:
o Each file is stored as a linked list of blocks; each block contains a pointer to the next block (see the FAT-style sketch after this list).
3. Indexed Allocation:
o A special block, the index block, contains pointers to all blocks of the file.
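As a small illustration of linked allocation, here is a FAT-style sketch in C (table size and block numbers are illustrative). It also shows why direct access is slow under linked allocation: reaching the k-th block of a file requires following k pointers:

#include <stdio.h>

#define NUM_BLOCKS 16
#define END_OF_FILE -1

// FAT-style table: fat[b] holds the number of the block that follows b in its file.
int fat[NUM_BLOCKS];

int main(void) {
    for (int i = 0; i < NUM_BLOCKS; i++) fat[i] = END_OF_FILE;
    // A file occupying blocks 2 -> 9 -> 5 (illustrative chain)
    fat[2] = 9; fat[9] = 5; fat[5] = END_OF_FILE;
    int start_block = 2;                         // stored in the directory entry
    for (int b = start_block; b != END_OF_FILE; b = fat[b])
        printf("block %d\n", b);                 // sequential access walks the chain
    return 0;
}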
Directory Structure
1. Single-Level Directory:
o All files reside in one directory; simple, but names must be unique system-wide and there is no grouping.
2. Two-Level Directory:
o Each user gets a separate directory; this resolves naming conflicts between users.
3. Hierarchical Directory:
o A tree of directories and subdirectories; it allows logical grouping and is the structure used by most modern operating systems.
File Access Methods
1. Sequential Access:
o Data is read or written in order from the beginning of the file; this is the common model for logs and tape-style processing.
2. Direct Access:
o Blocks can be read or written in any order given their block number; suited to databases.
3. Indexed Access:
o Access is via an index structure, enabling both sequential and direct access.
Disk Management
Disk management focuses on organizing and managing the storage device efficiently.
1. Disk Scheduling Algorithms:
o These algorithms manage the order in which I/O requests are serviced to optimize performance.
o Types:
▪ FCFS (First Come, First Served): Services requests in arrival order. Simple, but may cause long waiting times for requests far from the current disk head position.
▪ SSTF (Shortest Seek Time First): Services the pending request closest to the current head position; it reduces total seek time but can starve distant requests.
▪ SCAN (Elevator): Moves the disk head in one direction, servicing requests, then reverses.
▪ C-SCAN (Circular SCAN): Similar to SCAN but treats the disk as circular, returning to the start without servicing requests on the return sweep.
o LOOK and C-LOOK:
▪ Variants of SCAN and C-SCAN that reverse direction at the last request instead of
the disk's end.
GATE Tip: Solve problems to calculate seek times for these algorithms.
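For seek-time practice, here is a small C sketch (illustrative request queue and starting head position) that computes total head movement under FCFS and SSTF:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 5

int main(void) {
    int req[N] = {98, 183, 37, 122, 14};         // illustrative request queue
    int head = 53;                               // illustrative starting position

    // FCFS: service requests strictly in arrival order.
    int fcfs = 0, pos = head;
    for (int i = 0; i < N; i++) {
        fcfs += abs(req[i] - pos);
        pos = req[i];
    }

    // SSTF: always service the pending request closest to the head.
    bool done[N] = {false};
    int sstf = 0;
    pos = head;
    for (int served = 0; served < N; served++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (!done[i] && (best < 0 || abs(req[i] - pos) < abs(req[best] - pos)))
                best = i;
        sstf += abs(req[best] - pos);
        pos = req[best];
        done[best] = true;
    }

    printf("FCFS total head movement: %d\n", fcfs);
    printf("SSTF total head movement: %d\n", sstf);
    return 0;
}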
2. Disk Partitioning:
o Dividing a disk into partitions for better management and isolation of data.
3. Disk Formatting:
o Prepares a disk for use: low-level formatting divides it into sectors, and logical formatting writes the initial file-system structures.
4. RAID (Redundant Array of Independent Disks):
o Combines multiple physical disks into a single logical unit for performance or fault tolerance.
o Levels: Common levels include RAID 0 (striping), RAID 1 (mirroring), and RAID 5 (striping with distributed parity).
Free Space Management
1. Bit Vector:
o Uses a bitmap where each bit represents a block's status (free or allocated); see the sketch after this list.
2. Linked List:
o Free blocks are chained together, each free block pointing to the next; only a pointer to the first free block must be kept.
3. Grouping:
o Stores addresses of free blocks in groups for faster access.
4. Counting:
o Stores the address of the first free block together with the count of contiguous free blocks that follow it.
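A minimal C sketch of the bit-vector scheme referenced above (sizes are illustrative): each bit records whether a block is free, and allocation scans for the first free bit:

#include <stdio.h>
#include <stdint.h>

#define NUM_BLOCKS 64

uint8_t bitmap[NUM_BLOCKS / 8];                  // 1 bit per block: 1 = free, 0 = allocated

int allocate_block(void) {
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (bitmap[b / 8] & (1u << (b % 8))) {   // block b is free
            bitmap[b / 8] &= ~(1u << (b % 8));   // mark it allocated
            return b;
        }
    return -1;                                   // no free block left
}

void free_block(int b) {
    bitmap[b / 8] |= 1u << (b % 8);              // mark block b free again
}

int main(void) {
    for (int i = 0; i < NUM_BLOCKS / 8; i++) bitmap[i] = 0xFF;  // all blocks start free
    int b1 = allocate_block(), b2 = allocate_block();
    printf("allocated blocks %d and %d\n", b1, b2);
    free_block(b1);
    printf("next allocation reuses block %d\n", allocate_block());
    return 0;
}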
GATE Focus Areas:
1. File Allocation: Understand contiguous, linked, and indexed allocation, and their trade-offs.
2. Disk Scheduling: Be ready to solve problems on FCFS, SSTF, SCAN, C-SCAN, and LOOK
algorithms.
3. Directory Structures: Know the advantages and disadvantages of single-level, two-level, and
hierarchical directory structures.
4. Free Space Management: Learn the different techniques for tracking free space.