The document outlines the Operating System Lab Manual (CSE 3003) for the Winter Semester 2024-2025 at VIT Bhopal University, detailing various lab experiments focused on operating system components, CPU scheduling, and resource management techniques. Key labs include the implementation of CPU scheduling algorithms, the Banker's Algorithm for deadlock avoidance, and dynamic storage allocation strategies. Each lab includes objectives, theoretical background, code implementations, and conclusions on the effectiveness of the techniques studied.

SCHOOL OF COMPUTER SCIENCE & ENGINEERING

(Winter Semester 2024-2025)

CSE 3003: OPERATING SYSTEM LAB MANUAL

Class No: BL2024250500635

Slot: B21+E14+E22

Course Instructor: MOHD RAFI LONE

Name: Tushar Kumar Tiwari

Registration Number: 23BAI10551

VIT BHOPAL UNIVERSITY, Sehore, Madhya Pradesh
LIST OF OS LABS (CSE 3003)

LAB-1   Study of Operating System Components (UNIX, Linux, Windows 10)
LAB-2   CPU Scheduling Policies
LAB-3   File Storage Allocation Techniques
LAB-4   Contiguous Allocation Techniques
LAB-5   Implementation of Banker's Algorithm to Create a Safe Order of Process Execution Without Deadlock
LAB-6   External and Internal Fragmentation
LAB-7   Resource Allocation Graph (RAG)
LAB-8   Banker's Algorithm
LAB-9   Wait-For Graph
LAB-10  Inter-process Communication – Semaphore
LAB-11  FORK and JOIN Construct

LAB 01: OS LAB (CSE 3003)

Integrate all the lab implementation work along with idea explorations.

Title:

Study of Operating System Components (UNIX, Linux, Windows 10)

Objective:

To study and analyze the components of operating systems—UNIX, Linux, and Windows 10—including their
development, distribution, compatibility, security issues, and graphical user interface (GUI). Additionally, to implement
practical demonstrations of OS concepts in a laboratory setting.

Theory:

An operating system is software that acts as an interface between computer hardware and users, managing system
resources efficiently. Key components include:

• Process Management: Scheduling, execution, and termination of processes.

• Memory Management: Allocation and deallocation of memory.

• File System: Handling data storage and retrieval.

• Security: Protection against unauthorized access.

• User Interface: Interaction via CLI or GUI.

This report compares UNIX, Linux, and Windows 10 OS in terms of their features, strengths, and limitations.

Operating Systems Comparison

1. Development and Distribution

• UNIX:
Developed in 1969 at Bell Labs by Ken Thompson and Dennis Ritchie. Proprietary system with modern derivatives such as Solaris, AIX, and HP-UX.

• Linux:
Developed by Linus Torvalds in 1991 as an open-source alternative to UNIX. Distributed through community-driven and organizational distros like Ubuntu and Red Hat.

• Windows 10:
Released by Microsoft in 2015. Proprietary software pre-installed on most PCs and available via licenses.

2. Compatibility

• UNIX: Best suited for servers and enterprise systems. Supports POSIX standards.

• Linux: Highly compatible with various hardware, from desktops to servers and embedded systems.

• Windows 10: Optimized for desktop and consumer software, with excellent compatibility for legacy applications.

3. Security Issues

• UNIX: Strong multi-user environment with permission-based access but may lack modern updates.

• Linux: Frequent patches, with tools like SELinux enhancing security. Risks involve misconfigurations or
unofficial repositories.

• Windows 10: Includes Windows Defender and BitLocker but faces higher exposure to malware due to
widespread use.

4. Graphical User Interface (GUI)

• UNIX: CLI-focused, with optional GUIs like the X Window System.

• Linux: Flexible GUIs (GNOME, KDE, Xfce) with extensive customization options.

• Windows 10: Known for its user-friendly GUI featuring the Start Menu, Cortana, and a unified experience
across devices.

Lab Implementation

Experiment 1: Process Management

Objective: Simulate multi-threading in Linux.

Procedure:

• Write a multi-threaded program (using C's pthread library, or Python's threading module).

• Create threads to perform concurrent tasks.

• Measure execution time and CPU utilization.

Code:
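A minimal sketch along the lines of the Final Notes, which state that the multi-threading example uses Python's threading module; the worker task (summing squares) and the thread count are illustrative:

```python
import threading
import time

def worker(task_id, results):
    """Simulate a CPU-bound task and record its result."""
    results[task_id] = sum(i * i for i in range(100_000))

def run_threads(n_threads=4):
    """Run n_threads concurrent workers and measure total elapsed time."""
    results = {}
    start = time.perf_counter()
    threads = [threading.Thread(target=worker, args=(i, results))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait for every thread to finish
    elapsed = time.perf_counter() - start
    return results, elapsed

if __name__ == "__main__":
    results, elapsed = run_threads()
    print(f"Completed {len(results)} tasks in {elapsed:.4f} s")
```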
OUTPUT-

Experiment 2: File System (Demonstrating File Permissions in Python)

Objective: Demonstrate file permissions in Linux using Python's os and stat modules.

Code:
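A minimal sketch of the described approach, using os.chmod() to set rw------- and stat.filemode() to verify it; the temporary file used here is illustrative (on a POSIX system):

```python
import os
import stat
import tempfile

def set_owner_rw(path):
    """Restrict a file to owner read/write only (rw-------)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

def permission_string(path):
    """Return the symbolic permission string, e.g. '-rw-------'."""
    return stat.filemode(os.stat(path).st_mode)

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
    set_owner_rw(path)
    print(f"Permissions of {path}: {permission_string(path)}")
    os.remove(path)
```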

OUTPUT-

Explanation:
• The os.chmod() function is used to change the file permissions to rw-------, meaning the owner has read and
write permissions only.

• The stat.filemode() function is used to verify the permissions of the file.

Experiment 3: Security (Configuring Basic Firewall Using Python)

Objective: While we cannot directly configure firewall rules in Python, we can simulate basic security checks like port
scanning using the socket module. Below is an example to check if a port is open on a system.

Code:
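A minimal sketch of the described port check using the socket module; connect_ex() returns 0 when a TCP connection succeeds. The host and ports probed are illustrative:

```python
import socket

def check_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (port open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for port in (22, 80):
        state = "open" if check_port("localhost", port) else "closed"
        print(f"Port {port} on localhost is {state}")
```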

OUTPUT-

Explanation:

• The check_port() function attempts to connect to a specified port on the given host. If the connection is
successful, the port is considered open; otherwise, it's closed.

• This simulates checking if specific ports like HTTP (80) and SSH (22) are open or closed.

Final Notes:

• The multi-threading example uses the threading module to create concurrent tasks in Python, mimicking
process management.
• The file permissions experiment demonstrates how to change file permissions and verify them using the os
and stat modules.

• The firewall simulation checks port accessibility on a host to simulate security management, though in a real-world scenario, actual firewall rules would be configured outside Python.

LAB 02: OS LAB (CSE 3003)

Producer-Consumer Problem in Python

Objective:

The objective of this experiment is to simulate the Producer-Consumer problem using Python’s threading and queue
modules. This problem demonstrates synchronization between threads where:

• Producers generate data.

• Consumers consume the data.

• A buffer (queue) is used to store the data between the producer and consumer threads.

Theory:

The Producer-Consumer Problem is a classic example of a multi-threading synchronization problem. It involves two
types of threads:

1. Producer: A thread that generates data and places it in a shared buffer.

2. Consumer: A thread that consumes data from the shared buffer.

There is a buffer (commonly a queue) that stores the data. The producer adds items to the buffer, and the consumer
removes items from it. Both threads must be synchronized to prevent issues like:

• The consumer consuming data from an empty buffer.

• The producer attempting to add data when the buffer is full.

This problem is typically solved using:

• Mutexes (to protect shared resources).

• Condition Variables (to signal threads when they can proceed).

Materials Required:

1. Python 3.x or higher


2. threading module

3. queue module

Code Implementation:
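A minimal sketch of the setup described above, using threading and a bounded queue.Queue; the buffer size and item count are illustrative:

```python
import threading
import queue

BUFFER_SIZE = 5
NUM_ITEMS = 10

buffer = queue.Queue(maxsize=BUFFER_SIZE)  # bounded shared buffer
consumed = []

def producer():
    """Generate items; put() blocks when the buffer is full."""
    for i in range(NUM_ITEMS):
        buffer.put(i)
        print(f"Produced {i}")

def consumer():
    """Consume items; get() blocks when the buffer is empty."""
    for _ in range(NUM_ITEMS):
        item = buffer.get()
        consumed.append(item)
        print(f"Consumed {item}")
        buffer.task_done()

if __name__ == "__main__":
    p = threading.Thread(target=producer)
    c = threading.Thread(target=consumer)
    p.start(); c.start()
    p.join(); c.join()
    print("All items processed:", consumed)
```

The bounded queue provides both the mutual exclusion and the blocking behaviour, so no explicit mutex or condition variable is needed in this sketch.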

OUTPUT-
Observations:

• The producer and consumer threads synchronize with each other using the queue. The producer thread will
wait when the queue is full, and the consumer thread will wait when the queue is empty.

• The producer and consumer perform tasks asynchronously, but the queue ensures that the shared resource
(the buffer) is accessed safely without any data corruption or race conditions.

Conclusion:

The experiment demonstrates the Producer-Consumer problem using Python’s threading and queue modules. By
utilizing these modules, we can efficiently manage thread synchronization while ensuring that resources are shared
safely between the producer and consumer threads. This problem is a good example of inter-thread communication
and synchronization in multi-threaded programming.
LAB 03: OS LAB (CSE 3003)

Implementation of the Dining-Philosophers Problem in Python.

Objective:

The objective of this experiment is to simulate the Dining Philosophers Problem using Python. This problem
demonstrates the challenges of synchronization and resource sharing in a multi-threaded environment. The goal is to
implement a solution to ensure that multiple philosophers can eat without deadlock and avoid resource contention
when accessing shared resources (forks).

Theory:

The Dining Philosophers Problem is a classic synchronization problem involving five philosophers who sit at a round
table. Each philosopher:

1. Thinks for a while.

2. When hungry, they try to pick up two forks (one from the left and one from the right).

3. Once they have both forks, they eat.

4. After eating, they put the forks down and continue thinking.

Challenges:

• Deadlock: A situation where no philosopher can proceed because they are all waiting for each other to release
a fork.

• Starvation: A philosopher may not be able to eat if the other philosophers are continuously taking the forks.

The solution requires synchronization to avoid deadlock and ensure that all philosophers get a chance to eat.

CODE-
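A minimal sketch of one standard solution, using one threading.Lock per fork and a consistent global acquisition order (the lower-numbered fork first) to prevent circular wait; the think-time and round count are illustrative:

```python
import threading
import random
import time

NUM_PHILOSOPHERS = 5
forks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]
meals = [0] * NUM_PHILOSOPHERS

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % NUM_PHILOSOPHERS
    # Always acquire the lower-numbered fork first to prevent circular wait.
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        time.sleep(random.uniform(0, 0.01))  # thinking
        with forks[first]:
            with forks[second]:
                meals[i] += 1                # eating with both forks held
        # forks are released automatically on leaving the with-blocks

if __name__ == "__main__":
    threads = [threading.Thread(target=philosopher, args=(i,))
               for i in range(NUM_PHILOSOPHERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("Meals eaten:", meals)
```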
OUTPUT-

Observations:
• The philosophers are able to eat and think independently, but they synchronize their actions when trying to
pick up forks.

• By using the threading.Lock for the forks, we ensure that only one philosopher can use a fork at any given
time, thus avoiding race conditions.

• This simple solution avoids deadlock by acquiring the forks in a consistent global order (always the lower-numbered fork first), which prevents circular waiting.

Conclusion:

In this experiment, we successfully implemented the Dining Philosophers Problem using Python’s threading module.
The implementation demonstrated how threads can be synchronized using locks to avoid resource contention,
deadlock, and starvation. Through this problem, we learned how to handle synchronization issues in multi-threaded
environments effectively.

The solution works efficiently for this scenario, and it provides a foundation for solving similar synchronization
problems in concurrent systems.
LAB 04: OS LAB (CSE 3003)

Implement the CPU scheduling algorithms.

Objective:

The objective of this lab is to implement and analyze common CPU scheduling algorithms. The goal is to
understand how different scheduling algorithms affect the performance of a system in terms of response
time, throughput, and CPU utilization.

Theory:

CPU scheduling algorithms determine the order in which processes are executed by the CPU. The key goals
of CPU scheduling are to maximize CPU utilization, minimize response time, and ensure fairness. Below are
some common CPU scheduling algorithms:

1. FCFS (First Come First Serve):

o Processes are executed in the order they arrive in the ready queue.

o Simple but inefficient, especially in cases of long processes.

2. SJF (Shortest Job First):

o The process with the shortest burst time (execution time) is selected next.

o Non-preemptive, and it can lead to starvation of longer processes.

3. Round Robin (RR):

o Each process gets a fixed time slice (quantum), and the CPU switches between processes
after each time slice.

o Preemptive and ensures fairness.

4. Priority Scheduling:

o Each process is assigned a priority, and the process with the highest priority is selected next.

o Can be preemptive or non-preemptive, but may lead to starvation of low-priority processes.

Problem Scenario:

Given a set of processes with their arrival times and burst times, we will apply the following CPU scheduling
algorithms to determine their respective completion times, turnaround times, and waiting times.

Consider the following set of processes:

Process ID   Arrival Time   Burst Time   Priority
P1           0              5            3
P2           1              4            1
P3           2              2            4
P4           3              1            2

The arrival time is when the process enters the ready queue, and the burst time is the time the process
requires for CPU execution.

Code Implementation:
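A minimal sketch of FCFS and non-preemptive SJF for the process set above; the (pid, arrival, burst) tuple representation is illustrative:

```python
def fcfs(processes):
    """First Come First Serve: run processes in arrival order.
    processes: list of (pid, arrival, burst) tuples."""
    time, results = 0, {}
    for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):
        time = max(time, arrival) + burst  # completion time
        results[pid] = (time, time - arrival, time - arrival - burst)
    return results  # pid -> (completion, turnaround, waiting)

def sjf(processes):
    """Non-preemptive Shortest Job First: among arrived processes,
    run the one with the smallest burst time."""
    pending = sorted(processes, key=lambda p: p[1])
    time, results = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= time] or [pending[0]]
        job = min(ready, key=lambda p: p[2])
        pending.remove(job)
        pid, arrival, burst = job
        time = max(time, arrival) + burst
        results[pid] = (time, time - arrival, time - arrival - burst)
    return results

if __name__ == "__main__":
    procs = [("P1", 0, 5), ("P2", 1, 4), ("P3", 2, 2), ("P4", 3, 1)]
    for name, algo in (("FCFS", fcfs), ("SJF", sjf)):
        print(name)
        for pid, (ct, tat, wt) in algo(procs).items():
            print(f"  {pid}: completion={ct} turnaround={tat} waiting={wt}")
```

Round Robin and Priority scheduling follow the same accounting (completion, turnaround = completion - arrival, waiting = turnaround - burst) with a different selection rule.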
OUTPUT:-

Conclusion:

The CPU scheduling algorithms help determine the order in which processes are executed in a system.
From the lab:

1. FCFS: Simple but may lead to high waiting times.

2. SJF: Minimizes waiting time but may cause starvation of longer processes.

LAB 05: OS LAB (CSE 3003)


Lab Report: Implementation of Banker's Algorithm to Create a Safe Order of Process Execution Without
Deadlock

1. Title

Investigation of the Banker's Algorithm Implementation for Safe Process Execution Order and Deadlock
Avoidance

2. Introduction

The Banker's Algorithm is a well-known deadlock avoidance algorithm used in operating systems to allocate
resources to processes while ensuring that the system remains in a safe state. A safe state means that there
is a sequence of processes such that each process can complete without causing a deadlock, even if all
processes request their maximum resources at once.

This report aims to investigate and implement the Banker's Algorithm, which ensures that a process
execution order is safe and avoids deadlock in a multi-process system. The algorithm operates by simulating
resource allocation and checking whether a safe sequence of processes exists.

3. Objective

• To investigate the implementation of the Banker's Algorithm.

• To ensure safe execution of processes without causing any deadlock.

• To provide a scenario demonstrating the Banker's Algorithm in action.

4. Theory
The Banker's Algorithm works by checking for a "safe state" by simulating resource allocation to processes.
It involves the following concepts:

• Available Resources: The number of resources currently available in the system.

• Maximum Demand: The maximum number of resources each process may require.

• Allocated Resources: The number of resources already allocated to each process.

• Need: The remaining resources each process needs to finish execution.

The algorithm works as follows:

1. Initial Request: When a process requests resources, the Banker's Algorithm checks if the request
can be granted immediately.

2. Safe Sequence Check: The system checks whether granting the request results in a safe state by
verifying if there exists a sequence of processes that can complete without causing deadlock.

3. Grant or Deny: If the request results in a safe state, resources are allocated. Otherwise, the request
is denied.

5. Problem Scenario

Consider a system with 3 types of resources: A, B, and C. We have 5 processes in the system. The number of
instances of each resource is given, and each process has a maximum demand and has already been
allocated some resources.

Resources:

• A: 10 units

• B: 5 units

• C: 7 units

Processes:

Process   Maximum Demand (A, B, C)   Allocation (A, B, C)
P0        (7, 5, 3)                  (0, 1, 0)
P1        (3, 2, 2)                  (2, 0, 0)
P2        (9, 0, 2)                  (3, 0, 2)
P3        (2, 3, 3)                  (2, 1, 1)
P4        (4, 3, 3)                  (0, 0, 2)
The system has the following available resources after this allocation:

• A: 3 units

• B: 3 units

• C: 2 units

We need to determine if the system is in a safe state and find a safe sequence of processes.

6. Algorithm Implementation
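A minimal sketch of the safety check at the heart of the Banker's Algorithm. The matrices below encode a consistent instance of the scenario (allocation values chosen so that no process's allocation exceeds its maximum demand and the totals match the available counts):

```python
def is_safe(available, max_demand, allocation):
    """Banker's safety check. Returns (safe, sequence of process indices)."""
    n, m = len(max_demand), len(available)
    # Need = Maximum Demand - Allocation, per process and resource type.
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = list(available)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pretend process i runs to completion and releases everything.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []  # no remaining process can proceed: unsafe
    return True, sequence

if __name__ == "__main__":
    available = [3, 3, 2]
    max_demand = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 3, 3], [4, 3, 3]]
    allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
    safe, seq = is_safe(available, max_demand, allocation)
    print("Safe state:", safe)
    if safe:
        print("Safe sequence:", " -> ".join(f"P{i}" for i in seq))
```

With these matrices the system is in a safe state, and the sketch reports the safe sequence P1 -> P3 -> P4 -> P0 -> P2.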

8. Output

The output for the given scenario would be:

9. Conclusion
The Banker's Algorithm successfully ensures that the system remains in a safe state by evaluating whether a
safe sequence of processes exists. In this scenario, the system is in a safe state, and the safe sequence is
identified. The implementation provides an effective way to avoid deadlocks by simulating resource
allocation and ensuring that each process has sufficient resources to complete its execution.

LAB 06: OS LAB (CSE 3003)


Investigate the implementations of the First Fit, Best Fit, and Worst Fit dynamic storage-allocation algorithms [Contiguous Allocation Techniques] and prepare your implementation report along with your analysis.

1. Title
Investigation of Dynamic Storage Allocation Algorithms: First Fit, Best Fit, and Worst Fit

2. Introduction
Dynamic storage allocation is a memory management technique used to allocate memory blocks to
processes. The goal is to efficiently utilize memory while minimizing fragmentation. Contiguous allocation
techniques such as First Fit, Best Fit, and Worst Fit are widely used to allocate memory dynamically.

This report investigates and implements these algorithms, analyzes their behavior, and compares their
performance in terms of memory utilization and fragmentation.

3. Objective

• To implement First Fit, Best Fit, and Worst Fit dynamic storage allocation algorithms.

• To demonstrate the allocation process for each algorithm.

• To analyze and compare their performance using a sample problem.

4. Theory

Algorithms Overview

1. First Fit:

o Allocates the first available memory block that is large enough to accommodate the process.

o Fast, but may lead to external fragmentation.

2. Best Fit:

o Allocates the smallest available memory block that fits the process size.

o Reduces leftover space but increases allocation time due to searching.

3. Worst Fit:

o Allocates the largest available memory block to minimize leftover memory.

o Can lead to larger fragmented spaces over time.

Problem Scenario

Consider memory blocks of varying sizes, and processes with specific memory requirements. Each
algorithm attempts to allocate memory to the processes while keeping track of unused spaces.

5. Problem Statement

Given memory blocks of sizes [100, 500, 200, 300, 600] and processes with sizes [212, 417, 112, 426],
simulate the allocation using First Fit, Best Fit, and Worst Fit algorithms.

6. Algorithm Implementation
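A minimal sketch of the three strategies for the problem statement above; each strategy callable selects one block index from the candidate blocks that fit:

```python
def allocate(blocks, processes, strategy):
    """Simulate contiguous allocation. Returns, per process, the index of
    the block it was placed in, or None if no block could hold it."""
    free = list(blocks)  # remaining size of each block
    placement = []
    for size in processes:
        candidates = [i for i, b in enumerate(free) if b >= size]
        if not candidates:
            placement.append(None)
            continue
        i = strategy(candidates, free)
        free[i] -= size       # carve the process out of the chosen block
        placement.append(i)
    return placement

def first_fit(candidates, free):
    return candidates[0]                             # first block that fits

def best_fit(candidates, free):
    return min(candidates, key=lambda i: free[i])    # tightest fit

def worst_fit(candidates, free):
    return max(candidates, key=lambda i: free[i])    # largest block

if __name__ == "__main__":
    blocks, procs = [100, 500, 200, 300, 600], [212, 417, 112, 426]
    for name, strat in (("First Fit", first_fit),
                        ("Best Fit", best_fit),
                        ("Worst Fit", worst_fit)):
        print(name)
        for p, blk in zip(procs, allocate(blocks, procs, strat)):
            where = f"Block {blk + 1}" if blk is not None else "Not Allocated"
            print(f"  Process {p} -> {where}")
```

For this input, Best Fit places all four processes, while First Fit and Worst Fit both fail to place the 426-unit process.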
OUTPUT-
8. Analysis
Advantages

• First Fit: Simple and fast allocation.

• Best Fit: Minimizes memory wastage.

• Worst Fit: Reduces the chance of small unusable fragments.

Disadvantages

• First Fit: Causes fragmentation and suboptimal block utilization.

• Best Fit: Higher overhead due to searching for the smallest block.

• Worst Fit: Tends to create larger fragments.

Comparison Table

Metric                 First Fit   Best Fit   Worst Fit
Speed of Allocation    High        Moderate   Moderate
Memory Utilization     Moderate    High       Low
Fragmentation          High        Low        High

9. Conclusion

The choice of algorithm depends on the specific requirements:

• Use First Fit for faster allocation.

• Use Best Fit for better memory utilization.

• Use Worst Fit when fragmentation is less of a concern.

For the given problem, Best Fit proved to be the most memory-efficient algorithm, effectively utilizing
available memory blocks with minimal fragmentation.
LAB 07: OS LAB (CSE 3003)
Investigate the implementations of the three page replacement algorithms (FIFO, LRU, and Optimal) and prepare your investigation and comparison report for the given problem scenario.

1. Title

Investigation of FIFO, LRU, and Optimal Page Replacement Algorithms

2. Introduction

Page replacement algorithms are essential in virtual memory systems to decide which memory pages to
swap out when a new page needs to be loaded. These algorithms aim to reduce the number of page faults,
improving system performance.

This report implements FIFO (First-In-First-Out), LRU (Least Recently Used), and Optimal Page
Replacement algorithms. A comparison of their efficiency is performed using a defined problem scenario.

3. Objective

• To implement FIFO, LRU, and Optimal Page Replacement algorithms.

• To simulate and compare their performance using a predefined reference string and frame size.

4. Theory

Algorithms Overview

1. FIFO (First-In-First-Out):

o The oldest page in memory is replaced first.

o Simple to implement but may not always be efficient.

2. LRU (Least Recently Used):

o Replaces the page that has not been used for the longest period.

o More complex but generally provides better performance than FIFO.

3. Optimal Page Replacement:

o Replaces the page that will not be used for the longest period in the future.

o Theoretical best performance but requires knowledge of future references.

Problem Scenario
Given a reference string representing page requests and a fixed number of memory frames, simulate each
algorithm and measure the number of page faults.

5. Problem Statement

Simulate the algorithms for the following:

• Reference String: [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]

• Number of Frames: 3

6. Algorithm Implementation
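A minimal sketch of the three policies; each function returns the page-fault count for a given reference string and frame count:

```python
def fifo_faults(refs, frames):
    """FIFO: evict the page that entered memory earliest."""
    memory, faults = [], 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)  # oldest page leaves first
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    """LRU: evict the least recently used page (front of the list)."""
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            memory.remove(page)  # refresh recency by moving to the back
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)    # front = least recently used
        memory.append(page)
    return faults

def optimal_faults(refs, frames):
    """Optimal: evict the page whose next use is farthest in the future."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                future = refs[i + 1:]
                victim = max(memory, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
                memory.remove(victim)
            memory.append(page)
    return faults

if __name__ == "__main__":
    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    for name, fn in (("FIFO", fifo_faults), ("LRU", lru_faults),
                     ("Optimal", optimal_faults)):
        print(f"{name}: {fn(refs, 3)} page faults")
```

For the reference string above with 3 frames, these return 15, 12, and 9 page faults respectively, matching the comparison table in the analysis section.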
OUTPUT:-

8. Analysis

Explanation:

1. FIFO:

o Replaces pages in the order they arrive.

o Leads to more page faults when the order of references changes frequently.

2. LRU:

o Tracks usage history and replaces the least recently used page.

o More adaptive but computationally intensive compared to FIFO.

3. Optimal:

o Replaces the page that will not be used for the longest time.

o Achieves the lowest number of page faults but requires future knowledge.

Comparison Table

Algorithm   Page Faults   Performance
FIFO        15            Low
LRU         12            Medium
Optimal     9             High
9. Conclusion

• FIFO is simple but prone to higher page faults.

• LRU provides better performance by adapting to usage patterns.

• Optimal achieves the best performance but is impractical due to the need for future knowledge.

For practical systems, LRU is often preferred over FIFO, while Optimal serves as a theoretical benchmark for
comparison.
LAB 08: OS LAB (CSE 3003)
Investigate the implementations of the disk scheduling algorithms and prepare your investigation and assessment report for a chosen problem scenario.

Objective:

To investigate and implement various disk scheduling algorithms, assess their performance, and analyze
them in a problem scenario. The aim is to understand how different algorithms handle disk I/O requests to
optimize seek time and throughput.

Theory:

Disk scheduling algorithms manage the order of disk I/O requests to improve system performance. They
aim to minimize the seek time (time taken to move the read/write head to the required track). Common
disk scheduling algorithms include:

1. FCFS (First Come First Serve): Serves requests in the order they arrive.

2. SSTF (Shortest Seek Time First): Selects the request closest to the current head position.

3. SCAN (Elevator Algorithm): Moves the head in one direction, serving requests until the end, then
reverses.

4. C-SCAN (Circular SCAN): Similar to SCAN but only serves requests in one direction and jumps to the
start after reaching the end.

5. LOOK and C-LOOK: Variants of SCAN and C-SCAN, stopping where requests end rather than going to
the edge.

Problem Scenario:

Consider a disk queue with requests for tracks at positions: [82, 170, 43, 140, 24, 16, 190].

The disk head starts at position 50. The disk scheduler must determine the optimal sequence to minimize
total seek time.

Code Implementation:
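A minimal sketch of FCFS and SSTF for the request queue above (head starting at 50); the remaining variants follow the same pattern with a different selection rule:

```python
def fcfs_seek(requests, head):
    """Serve requests in arrival order; return the total head movement."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

def sstf_seek(requests, head):
    """Always serve the pending request closest to the current head position.
    Returns (total head movement, service order)."""
    pending, total, order = list(requests), 0, []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
        order.append(nearest)
    return total, order

if __name__ == "__main__":
    requests, head = [82, 170, 43, 140, 24, 16, 190], 50
    print("FCFS total seek time:", fcfs_seek(requests, head))
    total, order = sstf_seek(requests, head)
    print("SSTF order:", order)
    print("SSTF total seek time:", total)
```

For this queue, FCFS gives a total seek time of 642, while SSTF serves the requests in the order 43, 24, 16, 82, 140, 170, 190 for a total of 208.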
OUTPUT:-

Observations:

1. FCFS: Processes requests in the order they arrive, leading to high seek time.

2. SSTF: Optimizes seek time by selecting the closest request but may lead to starvation.

3. SCAN: Moves in one direction serving all requests, then reverses, reducing starvation compared to
SSTF.

4. C-SCAN: Similar to SCAN but only serves requests in one direction, ensuring fairness.

5. C-LOOK: A variant of C-SCAN, only covering the range of requests, avoiding unnecessary movement
to the disk's edges.

Conclusion:

• SSTF offers the lowest seek time in our scenario but may cause starvation.

• SCAN and C-SCAN ensure fairness and are more suited for dynamic and large-scale systems.

• Choosing the best algorithm depends on the problem requirements, such as minimizing seek time
or ensuring fairness.
Virtual Memory Management
Simulate Virtual Memory using Paging
In this simulation, the memory is divided into fixed-size pages. We will use an array to represent physical
memory, and we will manage the allocation of pages to the memory.

#include <stdio.h>
#include <stdlib.h>

#define FRAME_SIZE 4
#define PAGE_COUNT 10

// Representing virtual memory pages
int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};

// Physical memory (frames) in the computer
int physical_memory[FRAME_SIZE] = {-1, -1, -1, -1};

// Function to print the contents of physical memory
void print_physical_memory() {
    printf("Physical Memory: ");
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] != -1)
            printf("%d ", physical_memory[i]);
        else
            printf("Empty ");
    }
    printf("\n");
}

// Function to simulate loading a page into physical memory
int load_page_into_memory(int page) {
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] == -1) {
            physical_memory[i] = page;
            return 1; // page loaded successfully
        }
    }
    return 0; // memory full
}

// Simulate page access
void access_page(int page) {
    printf("Accessing page %d\n", page);
    if (!load_page_into_memory(page)) {
        printf("Page fault occurred!\n");
    }
    print_physical_memory();
}

int main() {
    for (int i = 0; i < PAGE_COUNT; i++) {
        access_page(virtual_memory[i]);
    }
    return 0;
}
This program simulates virtual memory and page loading. It represents virtual memory as an array and
tries to load pages into a fixed-size physical memory array. If the memory is full, a page fault occurs.
Simulate Demand Paging Concept
Demand paging loads a page into physical memory only when it is required. Pages are loaded when
accessed, and other pages are swapped out as needed.

#include <stdio.h>

#define FRAME_SIZE 3
#define PAGE_COUNT 6

int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5};
int physical_memory[FRAME_SIZE] = {-1, -1, -1};

int page_faults = 0;

void print_physical_memory() {
    printf("Physical Memory: ");
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] != -1)
            printf("%d ", physical_memory[i]);
        else
            printf("Empty ");
    }
    printf("\n");
}

int load_page(int page) {
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] == page) {
            return 0; // Page already in memory
        }
        if (physical_memory[i] == -1) {
            physical_memory[i] = page;
            return 1; // Loaded successfully
        }
    }
    return -1; // No space available
}

void demand_paging() {
    for (int i = 0; i < PAGE_COUNT; i++) {
        printf("\nAccessing page %d\n", virtual_memory[i]);
        if (load_page(virtual_memory[i]) == -1) {
            page_faults++;
            printf("Page fault occurred!\n");
        }
        print_physical_memory();
    }
    printf("\nTotal page faults: %d\n", page_faults);
}

int main() {
    demand_paging();
    return 0;
}
In this simulation, pages are only loaded when accessed. If physical memory is full, a page fault occurs, and
the program tracks the number of page faults during the process.

Simulate Page Replacement Policies (FIFO, LRU)


FIFO (First-In-First-Out) Page Replacement
In FIFO, the oldest page in memory is replaced when a new page needs to be loaded but memory is full.

#include <stdio.h>

#define FRAME_SIZE 3
#define PAGE_COUNT 6

int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5};
int physical_memory[FRAME_SIZE] = {-1, -1, -1};
int page_faults = 0;

void print_physical_memory() {
    printf("Physical Memory: ");
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] != -1)
            printf("%d ", physical_memory[i]);
        else
            printf("Empty ");
    }
    printf("\n");
}

void fifo_page_replacement() {
    int front = 0; // Pointer to the oldest frame

    for (int i = 0; i < PAGE_COUNT; i++) {
        printf("\nAccessing page %d\n", virtual_memory[i]);
        int found = 0;

        for (int j = 0; j < FRAME_SIZE; j++) {
            if (physical_memory[j] == virtual_memory[i]) {
                found = 1;
                break; // Page is already in memory
            }
        }

        if (!found) {
            physical_memory[front] = virtual_memory[i]; // Replace the oldest page
            front = (front + 1) % FRAME_SIZE;
            page_faults++;
            printf("Page fault occurred!\n");
        }

        print_physical_memory();
    }
    printf("\nTotal page faults: %d\n", page_faults);
}

int main() {
    fifo_page_replacement();
    return 0;
}
This program simulates FIFO page replacement. When a page fault occurs, it replaces the oldest page in
physical memory.
LRU (Least Recently Used) Page Replacement
In LRU, the page that has not been used for the longest time is replaced when memory is full.

#include <stdio.h>

#define FRAME_SIZE 3
#define PAGE_COUNT 6

int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5};
int physical_memory[FRAME_SIZE] = {-1, -1, -1};
int page_faults = 0;

void print_physical_memory() {
    printf("Physical Memory: ");
    for (int i = 0; i < FRAME_SIZE; i++) {
        if (physical_memory[i] != -1)
            printf("%d ", physical_memory[i]);
        else
            printf("Empty ");
    }
    printf("\n");
}

void lru_page_replacement() {
    int usage[FRAME_SIZE] = {-1, -1, -1}; // Track last used time

    for (int i = 0; i < PAGE_COUNT; i++) {
        printf("\nAccessing page %d\n", virtual_memory[i]);
        int found = 0;
        int oldest = 0;

        for (int j = 0; j < FRAME_SIZE; j++) {
            if (physical_memory[j] == virtual_memory[i]) {
                found = 1;    // Page is already in memory
                usage[j] = i; // Update usage time
                break;
            }
        }

        if (!found) {
            for (int j = 0; j < FRAME_SIZE; j++) {
                if (usage[j] == -1) {
                    physical_memory[j] = virtual_memory[i];
                    usage[j] = i;
                    found = 1;
                    break;
                } else if (usage[j] < usage[oldest]) {
                    oldest = j; // Find the least recently used page
                }
            }
            if (!found) {
                physical_memory[oldest] = virtual_memory[i];
                usage[oldest] = i;
            }
            page_faults++;
            printf("Page fault occurred!\n");
        }

        print_physical_memory();
    }
    printf("\nTotal page faults: %d\n", page_faults);
}

int main() {
    lru_page_replacement();
    return 0;
}
This program simulates the LRU page replacement algorithm, where the page that has not been accessed
for the longest time is replaced when a page fault occurs.
Track Page Faults During Process Execution
All of the above simulations (FIFO, LRU, and demand paging) already track page faults and print the
number of page faults after the simulation ends. This allows you to monitor how efficiently the page
replacement algorithms are handling memory.
The key part is the page_faults counter in each algorithm, which increments whenever a page fault occurs.

Disk Scheduling Algorithms


First-Come-First-Served (FCFS) Disk Scheduling
In FCFS, requests are processed in the order they are received. The disk head moves to the requested
track sequentially.

#include <stdio.h>
#include <stdlib.h>

#define MAX_REQUESTS 8

// Function to calculate the total seek time
int calculate_seek_time(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;

    for (int i = 0; i < size; i++) {
        seek_time += abs(requests[i] - current_position);
        current_position = requests[i];
    }

    return seek_time;
}

// Function to print the disk scheduling sequence
void print_sequence(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;

    printf("Disk Scheduling Order (FCFS): ");
    for (int i = 0; i < size; i++) {
        printf("%d ", requests[i]);
        seek_time += abs(requests[i] - current_position);
        current_position = requests[i];
    }
    printf("\nTotal Seek Time (FCFS): %d\n", seek_time);
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;

    print_sequence(requests, MAX_REQUESTS, initial_head_position);

    return 0;
}

Shortest Seek Time First (SSTF) Disk Scheduling


SSTF selects the disk request that is closest to the current position of the disk head, reducing seek time at
each step.

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define MAX_REQUESTS 8

// Function to calculate the total seek time using SSTF
int calculate_sstf_seek_time(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;
    int visited[size];
    for (int i = 0; i < size; i++)
        visited[i] = 0;

    for (int i = 0; i < size; i++) {
        int min_distance = INT_MAX;
        int min_index = -1;

        for (int j = 0; j < size; j++) {
            if (!visited[j]) {
                int distance = abs(requests[j] - current_position);
                if (distance < min_distance) {
                    min_distance = distance;
                    min_index = j;
                }
            }
        }

        visited[min_index] = 1;
        seek_time += min_distance;
        current_position = requests[min_index];
    }

    return seek_time;
}

// Function to print the disk scheduling sequence using SSTF
void print_sstf_sequence(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;
    int visited[size];
    for (int i = 0; i < size; i++)
        visited[i] = 0;

    printf("Disk Scheduling Order (SSTF): ");
    for (int i = 0; i < size; i++) {
        int min_distance = INT_MAX;
        int min_index = -1;

        for (int j = 0; j < size; j++) {
            if (!visited[j]) {
                int distance = abs(requests[j] - current_position);
                if (distance < min_distance) {
                    min_distance = distance;
                    min_index = j;
                }
            }
        }

        visited[min_index] = 1;
        printf("%d ", requests[min_index]);
        seek_time += min_distance;
        current_position = requests[min_index];
    }
    printf("\nTotal Seek Time (SSTF): %d\n", seek_time);
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;

    print_sstf_sequence(requests, MAX_REQUESTS, initial_head_position);

    return 0;
}
This program uses SSTF scheduling to process disk requests by selecting the request with the minimum
seek time at each step. It calculates the total seek time for the requests and prints the disk scheduling
order.

SCAN and C-SCAN Disk Scheduling


SCAN moves the disk arm in one direction (either towards the beginning or end of the disk), serving
requests in that direction, and then reverses direction once it reaches the end. C-SCAN (Circular SCAN) is
similar, but once the disk arm reaches the end, it moves directly back to the beginning, without serving
requests in reverse order.
SCAN Disk Scheduling

#include <stdio.h>
#include <stdlib.h>

#define MAX_REQUESTS 8
#define DISK_SIZE 200

// Function to implement SCAN scheduling
int calculate_scan_seek_time(int requests[], int size, int initial_head_position, int direction) {
    int seek_time = 0;
    int current_position = initial_head_position;

    // Sort the requests in ascending order (bubble sort)
    for (int i = 0; i < size - 1; i++) {
        for (int j = 0; j < size - i - 1; j++) {
            if (requests[j] > requests[j + 1]) {
                int temp = requests[j];
                requests[j] = requests[j + 1];
                requests[j + 1] = temp;
            }
        }
    }

    // Direction: 1 for moving towards higher tracks, -1 for lower tracks.
    // Find the first request in the direction of travel.
    int i;
    if (direction == 1) {
        for (i = 0; i < size; i++) {
            if (requests[i] >= current_position)
                break;
        }
    } else {
        for (i = size - 1; i >= 0; i--) {
            if (requests[i] <= current_position)
                break;
        }
    }

    if (direction == 1) {
        // Serve requests towards the end of the disk
        for (int j = i; j < size; j++) {
            seek_time += abs(requests[j] - current_position);
            current_position = requests[j];
        }
        // Move the head to the last track (DISK_SIZE - 1), then reverse
        seek_time += abs((DISK_SIZE - 1) - current_position);
        current_position = DISK_SIZE - 1;
        for (int j = i - 1; j >= 0; j--) {
            seek_time += abs(requests[j] - current_position);
            current_position = requests[j];
        }
    } else {
        // Serve requests towards the start of the disk
        for (int j = i; j >= 0; j--) {
            seek_time += abs(requests[j] - current_position);
            current_position = requests[j];
        }
        // Move the head to track 0, then reverse
        seek_time += current_position;
        current_position = 0;
        for (int j = i + 1; j < size; j++) {
            seek_time += abs(requests[j] - current_position);
            current_position = requests[j];
        }
    }

    return seek_time;
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;
    int direction = 1; // 1 for right, -1 for left

    int seek_time = calculate_scan_seek_time(requests, MAX_REQUESTS, initial_head_position, direction);
    printf("Total Seek Time (SCAN): %d\n", seek_time);

    return 0;
}
C-SCAN Disk Scheduling

#include <stdio.h>
#include <stdlib.h>

#define MAX_REQUESTS 8
#define DISK_SIZE 200

// Function to implement C-SCAN scheduling
int calculate_cscan_seek_time(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;

    // Sort the requests in ascending order (bubble sort)
    for (int i = 0; i < size - 1; i++) {
        for (int j = 0; j < size - i - 1; j++) {
            if (requests[j] > requests[j + 1]) {
                int temp = requests[j];
                requests[j] = requests[j + 1];
                requests[j + 1] = temp;
            }
        }
    }

    // Find the first request at or beyond the head position
    int start = 0;
    while (start < size && requests[start] < current_position)
        start++;

    // Serve requests from the head towards the end of the disk
    for (int i = start; i < size; i++) {
        seek_time += abs(requests[i] - current_position);
        current_position = requests[i];
    }

    // Move to the last track, then jump back to track 0
    // (the return sweep of C-SCAN is conventionally not charged here)
    seek_time += abs((DISK_SIZE - 1) - current_position);
    current_position = 0;

    // Serve only the remaining requests: those below the initial head position
    for (int i = 0; i < start; i++) {
        seek_time += abs(requests[i] - current_position);
        current_position = requests[i];
    }

    return seek_time;
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;

    int seek_time = calculate_cscan_seek_time(requests, MAX_REQUESTS, initial_head_position);
    printf("Total Seek Time (C-SCAN): %d\n", seek_time);

    return 0;
}
