
Operating System: Process Management and CPU Scheduling

In operating systems, process management and CPU scheduling are critical components for efficient
execution and resource utilization.

1. Process Management:

Process management involves the creation, scheduling, and termination of processes within the
system. It is a fundamental part of an OS to ensure that all active tasks are performed efficiently.

• Process: A program in execution. It includes the program code, its current activity, and
resources like CPU registers, memory, and input/output devices.

• Process States: Processes go through several states during their lifetime:

o New: The process is being created.

o Ready: The process is waiting to be assigned to a processor.

o Running: The process is being executed by the CPU.

o Waiting (Blocked): The process is waiting for an event, like I/O.

o Terminated: The process has finished execution.

• Process Control Block (PCB): This is the data structure the OS uses to store the information
about each process, such as:

o Process ID (PID)

o Process state

o CPU registers

o Memory limits

o Scheduling information

• Process Scheduling: The OS uses scheduling algorithms to decide which process should be
executed by the CPU next. This ensures efficient CPU utilization.

2. CPU Scheduling:

CPU scheduling is a crucial part of the OS to manage the execution of processes. It allocates CPU time
to different processes to ensure fairness and efficiency.

• Schedulers:

o Long-term scheduler: Decides which processes are admitted into the system (from new
to ready state).

o Short-term scheduler: Decides which of the ready processes gets the CPU next.

o Medium-term scheduler: Sometimes involved in swapping processes in and out of
memory.

• CPU Scheduling Algorithms:

o First Come, First Served (FCFS): Processes are executed in the order they arrive.

o Shortest Job Next (SJN) or Shortest Job First (SJF): The process with the shortest
execution time is given priority.

o Round Robin (RR): Each process is given a fixed time slice (quantum), and after that,
it’s moved to the end of the queue.

o Priority Scheduling: Processes are scheduled based on priority, and the one with the
highest priority is executed first.

o Multilevel Queue Scheduling: Processes are divided into multiple queues based on
their characteristics (e.g., interactive, batch) and then scheduled accordingly.

• Performance Metrics:

o Turnaround Time: The total time taken by a process from arrival to completion.

o Waiting Time: The total time a process spends waiting in the ready queue.

o Response Time: The time between the submission of a request and the first response.

o Throughput: The number of processes completed per unit of time.

CPU Scheduling Example: Round Robin (RR)

Consider 3 processes with burst times (CPU time required):

Process Burst Time

P1 4

P2 3

P3 5

Assume a time quantum of 2.

• Execution:

o P1 runs for 2 units, then P2 runs for 2 units, then P3 runs for 2 units.

o After that, P1 gets another 2 units and finishes, P2 runs for its remaining time (1 unit),
then P3 finishes its remaining 3 units (one more full quantum, then the final unit).

This results in a fair CPU time distribution, with each process getting an equal opportunity to run.
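To make the trace concrete, here is a minimal C sketch (an illustration, not part of the original notes; all three processes are assumed to arrive at time 0) that simulates Round Robin for these processes and prints each completion time:

#include <stdio.h>

int main(void) {
    int burst[] = {4, 3, 5};   // Remaining burst times for P1, P2, P3
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {         // Cycle through the processes until all finish
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;                  // Already finished
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;                                // Run one quantum (or less)
            burst[i] -= slice;
            if (burst[i] == 0) {                          // Process just completed
                printf("P%d completes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}

Running it reproduces the trace above: P1 completes at time 8, P2 at time 9, and P3 at time 12.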
1. In-depth Understanding of Algorithms

Scheduling Algorithms:

• FCFS (First Come First Served):

o Explanation: The simplest scheduling algorithm. In FCFS, processes are executed in the
order they arrive in the ready queue.

o Disadvantage: It can cause convoy effect, where short processes have to wait behind
longer ones, leading to poor average waiting time.

o Example:

▪ Process A arrives at time 0, with burst time 5 units.

▪ Process B arrives at time 1, with burst time 3 units.

▪ In FCFS, Process A runs for 5 units, then Process B runs for 3 units. Process B,
which arrived at time 1, waits 4 units before it starts.

• SJF (Shortest Job First):

o Explanation: The process with the shortest burst time is executed first. SJF can be
preemptive or non-preemptive.

o Disadvantage: It can lead to starvation if long processes keep arriving and the shorter
ones always take precedence.

o Example:

▪ Process A: Burst time = 4, Arrival time = 0

▪ Process B: Burst time = 2, Arrival time = 1

▪ Process C: Burst time = 3, Arrival time = 2

▪ In non-preemptive SJF, only Process A has arrived at time 0, so A runs first (0-4);
then Process B, with the shortest remaining burst, runs (4-6), followed by
Process C (6-9). If all three arrived at time 0, the order would instead be B, C, A.

• Round Robin (RR):

o Explanation: In this preemptive algorithm, each process gets a fixed time slice
(quantum). If a process doesn't complete within the quantum, it is preempted and sent
back to the ready queue, and the next process is executed.

o Disadvantage: The performance depends heavily on the time quantum. A very small
quantum leads to excessive context switching, and a large quantum behaves similarly to
FCFS.

o Example:

▪ Process A: Burst time = 4

▪ Process B: Burst time = 5

▪ Quantum = 2

▪ Execution: Process A runs for 2 units, then Process B runs for 2 units, then A runs
for its remaining 2 units, then B runs its remaining 3 units (one full quantum,
then the final unit).

• Priority Scheduling:

o Explanation: Each process is assigned a priority, and the CPU is allocated to the process
with the highest priority. It can be either preemptive (a newly arrived higher-priority
process can interrupt the running lower-priority one) or non-preemptive (a running
process completes its CPU burst before any waiting higher-priority process is scheduled).

o Disadvantage: Starvation occurs if lower-priority processes never get a chance to
execute.

o Example:

▪ Process A: Priority = 2, Burst time = 4

▪ Process B: Priority = 1, Burst time = 3

▪ In this case, Process B will be executed before Process A.

Performance Metrics:

• Turnaround Time:

o Definition: Time taken by a process to complete execution, from the time it arrives to
the time it finishes.

o Formula: Turnaround Time = Completion Time - Arrival Time.

o Significance: Short turnaround time means quick completion of processes.

• Waiting Time:

o Definition: Time a process spends waiting in the ready queue before getting CPU time.

o Formula: Waiting Time = Turnaround Time - Burst Time.

o Significance: Lower waiting time indicates better CPU utilization and efficiency.

• Response Time:

o Definition: The time between the submission of a request (process arrival) and the first
response (first time the process gets CPU time).

o Formula: Response Time = Time of first execution - Arrival Time.

o Significance: This is especially important for interactive systems, where fast responses
are crucial.

• Real-world Application:
o Example: In an operating system handling multiple tasks (e.g., video rendering, email
checking, etc.), SJF would prioritize the rendering task if it takes less time, reducing
overall completion time. However, interactive tasks may be prioritized with Round Robin
to maintain system responsiveness.
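Worked example (applying the formulas above to three FCFS-scheduled processes: A with arrival 0 and burst 4, B with arrival 1 and burst 3, C with arrival 2 and burst 2, the same data as Question 1 below): the completion times are 4, 7, and 9, so the turnaround times are 4 - 0 = 4, 7 - 1 = 6, and 9 - 2 = 7, and the waiting times are 4 - 4 = 0, 6 - 3 = 3, and 7 - 2 = 5. Under FCFS the response time equals the waiting time, because each process runs exactly once. Throughput here is 3 processes in 9 time units, or 1/3 process per unit.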

2. Focus on Key Concepts

Process States:

• Explanation: A process can exist in one of several states during its lifecycle:

o New: The process is being created.

o Ready: The process is waiting for the CPU to be allocated.

o Running: The process is actively being executed by the CPU.

o Waiting (Blocked): The process is waiting for some event (e.g., I/O operation) to
complete.

o Terminated: The process has finished execution.

• Key Transitions:

o Ready to Running: The scheduler selects the process to run.

o Running to Waiting: The process is waiting for an I/O operation.

o Running to Ready: The process is preempted and placed back in the ready queue (e.g.,
in Round Robin).

Process Control Block (PCB):

• Explanation: Each process is represented by a PCB. It contains:

o Process ID (PID)

o Process state (Ready, Running, etc.)

o Program counter (holds the address of the next instruction to execute)

o CPU registers (stores the state of the CPU)

o Memory management information (e.g., base and limit registers)

o Accounting information (e.g., CPU time used)

o I/O status information (e.g., devices allocated to the process)

Schedulers:

• Long-term Scheduler: Decides which processes to admit into the system (to transition from new
to ready state).
• Short-term Scheduler: Decides which of the ready processes should be assigned to the CPU
next. This is the main scheduling function for time-sharing systems.

• Medium-term Scheduler: Moves processes in and out of memory to manage memory resources
efficiently, especially in systems with limited memory.

Preemptive vs Non-Preemptive Scheduling:

• Preemptive Scheduling: The OS can interrupt a running process to assign CPU time to another
process. Round Robin is an example.

• Non-Preemptive Scheduling: A running process will continue until it voluntarily releases the
CPU, typically when it completes its burst time.

3. Time Complexity:

• Analyze and compare the time complexities of different scheduling algorithms:

o FCFS: O(1) per scheduling decision with a FIFO ready queue; computing all n
completion times takes O(n).

o SJF: O(n log n) when burst times are known in advance (sort the jobs, or keep them in
a min-heap); a naive implementation that rescans the ready queue on every decision is
O(n^2). A sorting-based sketch follows this list.

o Round Robin: Typically O(n) as each process is allocated a fixed time quantum, but
performance depends on the quantum.

o Priority Scheduling: Can be O(n log n) if using a priority queue, otherwise O(n) for
simpler implementations.
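As a small illustration of the O(n log n) bound mentioned above, here is a C sketch of non-preemptive SJF for jobs that all arrive at time 0 (the burst times are made up, matching Question 3 below): sort by burst time, then accumulate waiting times.

#include <stdio.h>
#include <stdlib.h>

static int by_burst(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;    // Ascending burst time
}

int main(void) {
    int burst[] = {6, 2, 4};                     // All jobs arrive at time 0
    int n = 3, elapsed = 0, total_wait = 0;

    qsort(burst, n, sizeof burst[0], by_burst);  // O(n log n): shortest job first
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;                   // This job waited for all earlier ones
        elapsed += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

For bursts {6, 2, 4} the program prints 2.67 (execution order B, C, A).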

4. Edge Cases and Variants:

• Priority Scheduling:

o Preemptive: A new high-priority process can interrupt a lower-priority one (e.g., real-
time systems).

o Non-preemptive: A running process continues to completion even if a higher-priority
process arrives (e.g., batch systems).

• Example Edge Cases:

o All Processes with the Same Burst Time: In this case, Round Robin and FCFS behave
similarly, and SJF offers no advantage because every job is the same length.

o Identical Arrival Times: When processes arrive simultaneously, Round Robin is useful to
provide fair CPU time to each process.
5. Practice Questions:

• Question 1: Calculate the waiting time and turnaround time for the following set of processes
using FCFS:

o Process A: Arrival time = 0, Burst time = 4

o Process B: Arrival time = 1, Burst time = 3

o Process C: Arrival time = 2, Burst time = 2

o Answer: A runs 0-4 (waiting 0, turnaround 4), B runs 4-7 (waiting 3, turnaround 6),
C runs 7-9 (waiting 5, turnaround 7). Average waiting time = (0 + 3 + 5) / 3 ≈ 2.67;
average turnaround time = (4 + 6 + 7) / 3 ≈ 5.67.

• Question 2: Calculate response time and waiting time for processes using Round Robin with a
time quantum of 2.

o Use processes like:

▪ Process A: Burst time = 4

▪ Process B: Burst time = 3

▪ Process C: Burst time = 5

• Question 3: Determine the average waiting time and turnaround time for a set of processes
with various burst times using SJF (non-preemptive).

o Example:

▪ Process A: Burst time = 6

▪ Process B: Burst time = 2

▪ Process C: Burst time = 4

Operating System: Process Synchronization

Process synchronization is an essential concept in operating systems that ensures processes execute in a
coordinated manner, especially when they share resources like memory or files. It prevents race
conditions, which occur when two or more processes attempt to modify shared data concurrently,
leading to unpredictable results.

Key Concepts in Process Synchronization:

1. Critical Section Problem:


o Critical Section: The part of the program where shared resources are accessed.

o Race Condition: Occurs when multiple processes access shared data and the final result
depends on the order in which the processes access the data.

o The goal is to design mechanisms that prevent multiple processes from simultaneously
executing in their critical sections.

2. Synchronization Requirements:

o Mutual Exclusion: Only one process can be in its critical section at a time.

o Progress: If no process is in the critical section, the system should allow some process to
enter.

o Bounded Waiting: There should be a limit on how long a process waits to enter the
critical section.

Synchronization Mechanisms:

1. Locks and Semaphores:

o Lock: A basic synchronization primitive that ensures mutual exclusion by allowing only
one process to access the critical section at a time.

o Semaphore: A more powerful synchronization tool. It’s an integer variable used to
manage access to shared resources. There are two types:

▪ Binary Semaphore (Mutex): Takes values 0 or 1. Used for mutual exclusion.

▪ Counting Semaphore: Takes a non-negative integer value, often used for
managing access to a resource pool (e.g., a limited number of printers).

Semaphore Operations:

o P(S) or wait(S): Decreases the value of semaphore S. If S < 0, the process is blocked.

o V(S) or signal(S): Increases the value of semaphore S. If S was negative, one blocked
process is awakened.

2. Monitors:

o A higher-level synchronization primitive that provides a safe way for processes to
communicate and synchronize. A monitor defines a set of operations that must be
executed in mutual exclusion.

3. Mutexes:

o Mutex (Mutual Exclusion) is a simpler mechanism compared to semaphores. It’s used to
lock a critical section so that only one process can access it at a time. A mutex typically
provides operations like lock() and unlock().
4. Condition Variables:

o Used in conjunction with locks or semaphores to allow processes to wait for a certain
condition to be true before continuing execution (sketches of the semaphore operations
and a condition variable follow this list).
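Two short sketches of the mechanisms above. First, the classic textbook implementation of wait() and signal(), under the convention that a negative semaphore value counts the blocked processes; Queue, block(), and wakeup() are placeholders for whatever queueing primitives the kernel provides:

typedef struct {
    int value;        // Negative value = number of blocked processes
    Queue waiting;    // Placeholder: queue of processes blocked on S
} Semaphore;

void wait(Semaphore *S) {
    S->value--;
    if (S->value < 0)
        block(&S->waiting);      // Caller sleeps until a signal arrives
}

void signal(Semaphore *S) {
    S->value++;
    if (S->value <= 0)
        wakeup(&S->waiting);     // S was negative: wake one blocked process
}

Second, a condition variable paired with a lock, shown here with POSIX pthreads; the flag name ready is made up for the example:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int ready = 0;                        // The condition being waited on

void waiter(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    // Loop guards against spurious wakeups
        pthread_cond_wait(&cv, &m);   // Atomically releases m and sleeps
    /* ... proceed now that the condition holds ... */
    pthread_mutex_unlock(&m);
}

void notifier(void) {
    pthread_mutex_lock(&m);
    ready = 1;                        // Make the condition true
    pthread_cond_signal(&cv);         // Wake one waiting thread
    pthread_mutex_unlock(&m);
}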

Classical Problems in Process Synchronization:

1. Producer-Consumer Problem:

o Involves two types of processes: producers (which create data) and consumers (which
consume data). The problem is to ensure that producers do not overwrite data that has
not been consumed, and consumers do not consume data that hasn't been produced
yet.

o This is typically solved using a shared buffer and semaphores to ensure mutual exclusion
and synchronization.

2. Reader-Writer Problem:

o In this problem, multiple processes need to read and write to a shared resource. The
challenge is to allow multiple readers to access the resource simultaneously but ensure
that only one writer can access the resource at a time.

o Two types:

▪ Reader Preference: Prioritizes readers over writers.

▪ Writer Preference: Prioritizes writers over readers.

3. Dining Philosophers Problem:

o A synchronization problem where philosophers sit at a table and alternate between
thinking and eating. They must share forks, and the challenge is to avoid deadlock
(where all philosophers are waiting indefinitely) and ensure that each philosopher gets a
chance to eat. Sketches for this problem and the Reader-Writer problem above follow
this list.
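These problems can be sketched in the same semaphore notation used earlier. First, a standard reader-preference solution to the Reader-Writer problem: read_count tracks the active readers, the first reader locks writers out, and the last reader lets them back in.

Semaphore rw = 1;       // Held by a writer, or by the group of readers
Semaphore mutex = 1;    // Protects read_count
int read_count = 0;

void reader() {
    wait(mutex);
    read_count++;
    if (read_count == 1) wait(rw);    // First reader blocks writers
    signal(mutex);
    /* ... read the shared data ... */
    wait(mutex);
    read_count--;
    if (read_count == 0) signal(rw);  // Last reader readmits writers
    signal(mutex);
}

void writer() {
    wait(rw);                         // Exclusive access
    /* ... write the shared data ... */
    signal(rw);
}

Second, one common deadlock-free approach to the Dining Philosophers problem: impose a global order on the forks and have every philosopher pick up the lower-numbered fork first, which breaks the circular wait condition discussed in the next section.

Semaphore fork[N];     // One binary semaphore per fork, each initialized to 1

void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    int first = left < right ? left : right;     // Lower-numbered fork
    int second = left < right ? right : left;    // Higher-numbered fork
    while (true) {
        think();
        wait(fork[first]);      // Always acquire forks in global order
        wait(fork[second]);
        eat();
        signal(fork[second]);
        signal(fork[first]);
    }
}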

Deadlock and Starvation in Synchronization:

• Deadlock: A situation where two or more processes are waiting indefinitely for resources held by
each other, creating a cycle of dependency.

o Conditions for deadlock:

▪ Mutual Exclusion: At least one resource is held in a non-shareable mode.

▪ Hold and Wait: A process is holding at least one resource and waiting for
additional resources held by other processes.

▪ No Preemption: Resources cannot be forcibly taken from processes.

▪ Circular Wait: A set of processes are waiting for each other in a circular chain.

Deadlock Prevention and Avoidance strategies can help mitigate this.

• Starvation: A process is perpetually denied the resources it needs to proceed due to other
processes continuously taking precedence.

o Can occur in scheduling algorithms if processes are not fairly handled (e.g., a low-priority
process may never get CPU time).

Example: Producer-Consumer Problem with Semaphores

Semaphore mutex = 1;   // Mutex to ensure mutual exclusion
Semaphore empty = N;   // Counts empty slots in the buffer
Semaphore full = 0;    // Counts full slots in the buffer

void producer() {
    while (true) {
        produce_item();
        wait(empty);     // Decrease empty slots; block if the buffer is full
        wait(mutex);     // Enter critical section
        add_item_to_buffer();
        signal(mutex);   // Leave critical section
        signal(full);    // Increase full slots
    }
}

void consumer() {
    while (true) {
        wait(full);      // Decrease full slots; block if the buffer is empty
        wait(mutex);     // Enter critical section
        remove_item_from_buffer();
        signal(mutex);   // Leave critical section
        signal(empty);   // Increase empty slots
        consume_item();
    }
}
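For reference, here is a runnable POSIX translation of the same logic. This is a sketch under assumptions not in the original notes: integer items, a buffer of size 5, one producer and one consumer, and unnamed semaphores via sem_init (which some platforms deprecate in favor of named semaphores).

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                           // Buffer capacity

int buffer[N], in = 0, out = 0;       // Circular buffer and its two cursors
sem_t empty_slots, full_slots;
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

void *producer(void *arg) {
    (void)arg;
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);       // Block if the buffer is full
        pthread_mutex_lock(&mutex);
        buffer[in] = item;            // add_item_to_buffer()
        in = (in + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&full_slots);
    }
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);        // Block if the buffer is empty
        pthread_mutex_lock(&mutex);
        int item = buffer[out];       // remove_item_from_buffer()
        out = (out + 1) % N;
        pthread_mutex_unlock(&mutex);
        sem_post(&empty_slots);
        printf("consumed %d\n", item);   // consume_item()
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, N);     // N empty slots initially
    sem_init(&full_slots, 0, 0);      // No full slots initially
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}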

Summary of Process Synchronization Key Points:

• Critical Section: Protect shared resources from simultaneous access by multiple processes.

• Semaphores & Mutexes: Core synchronization primitives for ensuring mutual exclusion and
handling process coordination.

• Classical Problems: Producer-Consumer, Reader-Writer, and Dining Philosophers demonstrate
real-world synchronization challenges.

• Deadlock & Starvation: Key issues to manage in concurrent systems.

1. Detailed Explanation of Synchronization Algorithms:

• Semaphore Operations:

o Ensure you are comfortable with how wait() and signal() work with semaphores.
Practice scenarios where processes wait for a semaphore to become available (e.g.,
when a process enters the critical section or waits for resource availability).

o Learn about binary semaphores (mutexes) vs. counting semaphores and practice
problems involving counting semaphores for resource allocation.

• Mutex vs Semaphore:

o Understand that mutexes are used for mutual exclusion in single-resource access, while
semaphores can be used for controlling access to a set of resources. Compare them in
terms of their usage and limitations.

2. Classical Synchronization Problems:

• Producer-Consumer Problem:

o Understand the critical section between the producer (inserts data into a buffer) and the
consumer (removes data from the buffer).

o Be comfortable with buffer management and how to handle empty and full conditions
using semaphores.
o Practice problem: Given a buffer of size N, write code for both producer and consumer
using semaphores, considering edge cases such as buffer overflow and underflow.

• Reader-Writer Problem:

o Understand reader-preference and writer-preference scenarios and how they affect
resource allocation.

o Practice with multiple readers accessing the data concurrently while ensuring mutual
exclusion for writers.

o Practice problem: Design a solution for a multi-threaded system where multiple threads
can read the same data concurrently, but writers should have exclusive access to the
resource.

3. Deadlock Handling:

• Deadlock Detection and Prevention:

o Detection: Learn how to identify deadlock conditions using resource allocation graphs
or process state diagrams.

o Prevention: Familiarize yourself with deadlock prevention strategies such as ensuring
that one of the four necessary conditions (mutual exclusion, hold-and-wait, no
preemption, circular wait) is eliminated.

o Deadlock Avoidance: Understand how Banker's Algorithm works to avoid deadlock by
safely allocating resources based on available and requested resources.

• Practice problem: Given a system of processes and resources, determine if deadlock exists using
a resource allocation graph or wait-for graph.

4. Mathematical Foundation:

• Counting Semaphore:

o Understand how counting semaphores manage access to a limited number of identical
resources, like a pool of printers or buffer slots.

o Practice problems on how to allocate resources using counting semaphores.

o Practice problem: Simulate the allocation of multiple printers to multiple processes
using a counting semaphore and ensure processes get a printer without conflicts.

• Mutex vs Semaphores:

o Learn where mutexes are preferred (e.g., protecting single shared resources) and where
semaphores are needed (e.g., managing multiple identical resources like file access).
o Practice problem: Write a solution that uses mutexes for single-resource
synchronization and semaphores for multi-resource synchronization.

5. Practice:

• Solve GATE-Level Problems:

o Search for previous GATE questions involving process synchronization, and write the
code or explain the solution.

o Pay attention to edge cases and learn how to handle them.

• Time Complexity Analysis:

o For problems involving synchronization algorithms, practice analyzing their time
complexity.

o Focus on how different algorithms like round-robin or priority scheduling interact with
critical sections and how the performance might be impacted by process
synchronization.

• Practice Problems:

1. Producer-Consumer using Semaphore: Write a solution to the problem, ensuring you handle
buffer overflow and underflow conditions.

2. Reader-Writer Problem: Implement a solution where multiple threads of readers can access
data concurrently but a writer can only access it exclusively.

3. Dining Philosophers Problem: Simulate this classic problem and avoid deadlock and starvation
while ensuring philosophers get a chance to eat.

Operating System: Deadlocks

Deadlock is a critical issue in operating systems, where a set of processes are blocked because each
process is holding a resource and waiting for another resource held by another process. It is essential to
understand how deadlocks arise and how to handle or prevent them in a system.

Deadlock Basics
1. Deadlock Definition: A deadlock occurs when a set of processes are in a state where they
cannot proceed because each process is waiting for another process in the set to release a
resource.

2. Deadlock Conditions: A deadlock can only occur if the following four necessary conditions hold
simultaneously:

o Mutual Exclusion: At least one resource must be held in a non-shareable mode (i.e., only
one process can use the resource at any given time).

o Hold and Wait: A process that is holding at least one resource is waiting for additional
resources held by other processes.

o No Preemption: Resources cannot be forcibly taken from processes holding them; they
must be released voluntarily.

o Circular Wait: A set of processes are waiting for each other in a circular chain. For
example, Process 1 is waiting for a resource held by Process 2, Process 2 is waiting for a
resource held by Process 3, and Process 3 is waiting for a resource held by Process 1.

Deadlock Handling Techniques

There are three main approaches to handling deadlocks:

1. Deadlock Prevention:

In this approach, we design the system to prevent deadlock from ever occurring by ensuring that at least
one of the necessary conditions for deadlock is eliminated.

• Eliminating Mutual Exclusion: This is often impractical because certain resources (e.g., printers)
are inherently non-shareable.

• Eliminating Hold and Wait: Processes must request all the resources they need at once, before
starting execution. However, this can lead to resource wastage.

• Eliminating No Preemption: If a process is holding resources and is waiting for additional
resources, we can preempt resources from it. This may require rolling back some operations.

• Eliminating Circular Wait: One way to prevent circular wait is to impose an ordering of resources
and require processes to request resources in an increasing order.

Example:
If a process requests resources in a fixed order (e.g., first Resource A, then Resource B), it can prevent
circular wait by ensuring that no process is caught in a cycle.

2. Deadlock Avoidance:
This strategy keeps the system out of states from which deadlock could arise. Unlike prevention,
it does not restrict how resources may be requested; instead, the system checks each resource
request before granting it and grants it only if the resulting state is safe.

• Banker's Algorithm:
The Banker’s algorithm is a well-known deadlock avoidance algorithm. It works by checking if
granting a resource request would leave the system in a safe state.

o Safe State: A state is safe if there exists a sequence of processes that can finish executing
without causing deadlock. In other words, all processes can eventually complete even if
they are denied some resources.

o Unsafe State: A state is unsafe if no such sequence exists, and it may lead to a deadlock.

Banker's Algorithm for Resource Allocation:

• Work and Finish arrays are used to track resources.

• The algorithm works by checking if a request can be granted and still leave the system in a safe
state.

• If the system can allocate resources without causing deadlock, the request is granted. If not, the
process must wait.

Example:
If a process requests resources, the algorithm checks if granting the request leaves the system in a safe
state. If so, the resources are allocated, and the system proceeds. If not, the request is denied, and the
process must wait.
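A hedged C sketch of the safety check at the heart of the algorithm, using the Work and Finish arrays described above; NPROC, NRES, and the matrices are illustrative placeholders:

#define NPROC 5
#define NRES  3

// Returns 1 if some completion order exists (safe state), 0 otherwise.
int is_safe(int available[NRES],
            int alloc[NPROC][NRES], int need[NPROC][NRES]) {
    int work[NRES], finish[NPROC] = {0};
    for (int r = 0; r < NRES; r++) work[r] = available[r];

    for (int done = 0; done < NPROC; ) {
        int progressed = 0;
        for (int p = 0; p < NPROC; p++) {
            if (finish[p]) continue;
            int ok = 1;                          // Can p's needs be met by work?
            for (int r = 0; r < NRES; r++)
                if (need[p][r] > work[r]) { ok = 0; break; }
            if (ok) {                            // Let p finish, reclaim its allocation
                for (int r = 0; r < NRES; r++) work[r] += alloc[p][r];
                finish[p] = 1;
                done++;
                progressed = 1;
            }
        }
        if (!progressed) return 0;               // No process can proceed: unsafe
    }
    return 1;                                    // All processes can finish: safe
}

To decide an actual request, tentatively grant it (subtract it from available and need, add it to alloc), run is_safe(), and roll the grant back if the answer is 0.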

3. Deadlock Detection and Recovery:

In this approach, the system allows deadlocks to occur but detects them and takes action to recover
from them.

• Deadlock Detection:
The system periodically checks for deadlocks by building a resource allocation graph (RAG) or
wait-for graph.

o Resource Allocation Graph (RAG): The nodes represent processes and resources, and
edges represent resource allocation or request.

o Wait-for Graph: A simplified version of the RAG where only processes are represented.
An edge from process A to process B indicates that process A is waiting for a resource
held by process B.

• Cycle Detection:
If there is a cycle in the wait-for graph, deadlock has occurred (a DFS-based sketch follows
this list).
• Deadlock Recovery: Once a deadlock is detected, the system must recover from it. There are a
few strategies:

o Process Termination: Kill one or more processes involved in the deadlock.

▪ Abort All: Terminate all processes involved in the deadlock.

▪ Abort One: Terminate one process to break the cycle.

o Resource Preemption: Take resources away from processes and assign them to other
processes to break the deadlock. The preempted process may need to be rolled back to
a safe state.
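Here is the cycle-detection sketch referred to above: a depth-first search over a wait-for graph stored as an adjacency matrix. NP and the matrix contents are illustrative placeholders.

#define NP 4

int waits_for[NP][NP];       // waits_for[a][b] = 1 means process a waits for b

// DFS colors: 0 = unvisited, 1 = on the current path, 2 = fully explored.
static int dfs(int p, int color[NP]) {
    color[p] = 1;
    for (int q = 0; q < NP; q++) {
        if (!waits_for[p][q]) continue;
        if (color[q] == 1) return 1;             // Back edge: cycle, hence deadlock
        if (color[q] == 0 && dfs(q, color)) return 1;
    }
    color[p] = 2;
    return 0;
}

int has_deadlock(void) {
    int color[NP] = {0};
    for (int p = 0; p < NP; p++)
        if (color[p] == 0 && dfs(p, color)) return 1;
    return 0;
}

For the three-process detection example below (P1 waits for P2, P2 for P3, P3 for P1), has_deadlock() returns 1.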

Deadlock Prevention Example:

• Scenario: Consider two processes, P1 and P2, requesting two resources, R1 and R2.

o P1 holds R1 and requests R2.

o P2 holds R2 and requests R1.

In this case, a circular wait can occur. To prevent deadlock, we can prevent the hold and wait condition
by requiring both processes to request both resources at the same time, or not to request a new
resource while holding one.
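The resource-ordering fix from the prevention section can be written directly in code. A sketch with pthread mutexes standing in for R1 and R2 (the worker body is illustrative): both contenders acquire the locks in the same global order, so the P1-holds-R1 / P2-holds-R2 cycle above can never form.

#include <pthread.h>

pthread_mutex_t R1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t R2 = PTHREAD_MUTEX_INITIALIZER;

// Both threads use the same order: R1 before R2. Whichever thread gets R1
// first proceeds; the other simply waits, so no circular wait can arise.
void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&R1);
    pthread_mutex_lock(&R2);
    /* ... use both resources ... */
    pthread_mutex_unlock(&R2);
    pthread_mutex_unlock(&R1);
    return NULL;
}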

Deadlock Detection Example:

• Scenario: Consider 3 processes (P1, P2, P3) and 3 resources (R1, R2, R3).

o P1 holds R1 and waits for R2.

o P2 holds R2 and waits for R3.

o P3 holds R3 and waits for R1.

A cycle is formed in the wait-for graph, indicating a deadlock. The system can either terminate a process
or preempt resources to break the cycle.

Summary of Deadlock Handling Approaches:

• Prevention: Avoid at least one of the deadlock conditions (e.g., hold and wait, circular wait).

• Avoidance: Use algorithms like Banker’s Algorithm to ensure the system remains in a safe state.

• Detection and Recovery: Allow deadlocks to occur, detect them using resource allocation
graphs, and recover by terminating processes or preempting resources.
Operating System: File System and Disk Management

The file system and disk management components of an operating system are responsible for
organizing, storing, retrieving, and managing data on storage devices like hard drives, SSDs, and other
storage media. These concepts are fundamental for GATE preparation, as they are frequently tested in
both conceptual and problem-solving questions.

Key Concepts of File System

1. File System Basics:

o A file is a logical storage unit that provides a way to store and retrieve data.

o Files can be organized in directories, which form a hierarchical structure.

2. File Attributes:

o Name: Human-readable name of the file.

o Type: Type of file (e.g., text, binary, executable).

o Location: Physical address or pointer to the file's data on the storage device.

o Size: The current size of the file.

o Protection: Access permissions (e.g., read, write, execute).

o Time, Date, and User Information: Metadata related to file creation, access, and
modification.

3. File Operations:

o Create: Allocate space for the file.

o Open: Load the file’s metadata into memory.

o Read/Write: Access or modify file data.

o Delete: Remove the file and free up associated space.

File Allocation Methods

1. Contiguous Allocation:

o Each file occupies a set of contiguous blocks on disk.

o Advantages: Fast access, simple implementation.

o Disadvantages: Prone to external fragmentation; resizing files is difficult.

o Example: A file requiring 5 blocks is stored in blocks 3–7.

2. Linked Allocation:

o Each file is stored as a linked list of blocks; each block contains a pointer to the next
block.

o Advantages: No external fragmentation; files can grow dynamically.

o Disadvantages: Slower access due to sequential traversal; prone to pointer overhead.

o Example: File stored as Block 1 → Block 4 → Block 8.

3. Indexed Allocation:

o A special block, the index block, contains pointers to all blocks of the file.

o Advantages: Direct access to any block; no fragmentation issues.

o Disadvantages: Overhead of maintaining index blocks.

o Example: A file has an index block pointing to blocks 3, 7, 12.
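Direct access under indexed allocation is a single table lookup. A tiny illustrative C sketch using the index block from the example above and an assumed 512-byte block size:

#define BLOCK_SIZE 512

int index_block[] = {3, 7, 12};   // Physical blocks of the file, in order

// Which physical block holds a given byte offset of the file?
int physical_block(long offset) {
    return index_block[offset / BLOCK_SIZE];   // One lookup, no traversal
}

For instance, physical_block(1000) returns 7, because byte 1000 falls in the file's second logical block.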

Directory Structure

1. Single-Level Directory:

o A flat structure where all files are stored in a single directory.

o Advantages: Simple to implement.

o Disadvantages: No organization; naming conflicts.

2. Two-Level Directory:

o Each user has their own directory.

o Advantages: No naming conflicts between users.

o Disadvantages: Limited to two levels; less flexible.

3. Hierarchical Directory:

o A tree-like structure with multiple levels.

o Advantages: Organizes files systematically.

o Disadvantages: Complex implementation.

4. Acyclic Graph Directory:

o Allows files or directories to have multiple parents (links).

o Advantages: Efficient sharing.

o Disadvantages: Potential issues with maintaining consistency.

File Access Methods

1. Sequential Access:

o Data is accessed in a linear order.

o Example: Reading a text file from start to end.

2. Direct Access:

o Data can be accessed directly using logical block addresses.

o Example: Accessing a database record.

3. Indexed Access:

o Access is via an index structure, enabling both sequential and direct access.

o Example: Indexed file systems used in databases.

Disk Management

Disk management focuses on organizing and managing the storage device efficiently.

1. Disk Scheduling Algorithms:

o These algorithms manage the order in which I/O requests are serviced to optimize
performance.

Types:

o FCFS (First-Come, First-Served):

▪ Simple, but may cause long waiting times for requests far from the current disk
head position.

o SSTF (Shortest Seek Time First):

▪ Selects the request with the shortest seek time.

▪ Disadvantage: May cause starvation for far-away requests.

o SCAN (Elevator Algorithm):

▪ Moves the disk head in one direction, servicing requests, then reverses.

▪ C-SCAN (Circular SCAN): Similar to SCAN, but services requests in one direction
only and then jumps back to the start, treating the disk as circular.

o LOOK and C-LOOK:

▪ Variants of SCAN and C-SCAN that reverse direction at the last request instead of
the disk's end.

GATE Tip: Solve problems to calculate seek times for these algorithms (a worked FCFS seek
calculation follows this list).

2. Disk Partitioning:

o Dividing a disk into partitions for better management and isolation of data.

o Types: Primary, Extended, Logical.

3. Disk Formatting:

o Preparing a partition for use by creating a file system structure.

4. RAID (Redundant Array of Independent Disks):

o Combines multiple physical disks into a single logical unit for performance or fault
tolerance.

o Levels:

▪ RAID 0: Striping for performance (no redundancy).

▪ RAID 1: Mirroring for redundancy.

▪ RAID 5: Striping with parity for fault tolerance.

▪ RAID 10: Combination of RAID 1 and RAID 0.
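Here is the worked seek-time sketch promised in the GATE tip above: total head movement under FCFS for a made-up request queue, with the head starting at cylinder 50.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {82, 170, 43, 140, 24};   // Pending cylinder numbers
    int n = 5, head = 50, total = 0;

    for (int i = 0; i < n; i++) {              // FCFS: service in arrival order
        total += abs(requests[i] - head);      // Seek distance for this request
        head = requests[i];
    }
    printf("Total head movement (FCFS) = %d cylinders\n", total);
    return 0;
}

For this queue the total is 32 + 88 + 127 + 97 + 116 = 460 cylinders; SSTF, SCAN, and the LOOK variants reorder the queue and typically give smaller totals.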

Free Space Management

The OS must track free space on the disk to allocate it efficiently.

1. Bit Vector:

o Uses a bitmap where each bit represents a block's status (free or allocated); a short
sketch follows this list.

o Advantages: Compact representation.

o Disadvantages: Slower access.

2. Linked List:

o Free blocks are linked together.

o Advantages: Easy to implement.

o Disadvantages: Requires sequential traversal.

3. Grouping:

o Stores addresses of free blocks in groups for faster access.

4. Counting:

o Tracks free blocks in consecutive ranges.
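A small C sketch of the bit-vector technique from item 1 (sizes are illustrative): a bitmap byte-array in which bit i marks block i as allocated, scanned linearly for the first free block.

#define NBLOCKS 64

unsigned char bitmap[NBLOCKS / 8];   // Bit i set = block i allocated

// Returns the index of the first free block, or -1 if none is free.
int find_free_block(void) {
    for (int i = 0; i < NBLOCKS; i++)
        if (!(bitmap[i / 8] & (1 << (i % 8))))
            return i;
    return -1;
}

void allocate_block(int i) {
    bitmap[i / 8] |= 1 << (i % 8);   // Mark block i as in use
}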

Summary for GATE Preparation

1. File Allocation: Understand contiguous, linked, and indexed allocation, and their trade-offs.

2. Disk Scheduling: Be ready to solve problems on FCFS, SSTF, SCAN, C-SCAN, and LOOK
algorithms.

3. Directory Structures: Know the advantages and disadvantages of single-level, two-level, and
hierarchical directory structures.

4. RAID Levels: Understand how RAID improves performance and reliability.

5. Free Space Management: Learn the different techniques for tracking free space.

Practice Tips:

• Solve GATE-level problems on:

o Disk scheduling algorithms (e.g., seek time calculations).

o File allocation strategies.

o Directory structures and file system questions.

• Study questions from previous GATE papers to identify patterns.
