OS Lab
CSE 3003: OPERATING SYSTEM LAB MANUAL
Class No: BL2024250500635
Slot: B21+E14+E22
Title: Study and Comparison of UNIX, Linux, and Windows 10 Operating Systems
Objective:
To study and analyze the components of operating systems—UNIX, Linux, and Windows 10—including their
development, distribution, compatibility, security issues, and graphical user interface (GUI). Additionally, to implement
practical demonstrations of OS concepts in a laboratory setting.
Theory:
An operating system is software that acts as an interface between computer hardware and users, managing system
resources efficiently. Key components include process management, memory management, the file system, device
management, and the user interface.
This report compares UNIX, Linux, and Windows 10 in terms of their features, strengths, and limitations.
1. Development and Distribution
• UNIX:
Developed in 1969 at Bell Labs by Ken Thompson and Dennis Ritchie. Proprietary system with modern
derivatives such as Solaris, AIX, and HP-UX.
• Linux:
Developed by Linus Torvalds in 1991 as an open-source alternative to UNIX. Distributed through
community-driven and organizational distros such as Ubuntu and Red Hat.
• Windows 10:
Released by Microsoft in 2015. Proprietary software pre-installed on most PCs and available via licenses.
2. Compatibility
• UNIX: Best suited for servers and enterprise systems. Supports POSIX standards.
• Linux: Highly compatible with various hardware, from desktops to servers and embedded systems.
• Windows 10: Optimized for desktop and consumer software, with excellent compatibility for legacy applications.
3. Security Issues
• UNIX: Strong multi-user environment with permission-based access but may lack modern updates.
• Linux: Frequent patches, with tools like SELinux enhancing security. Risks involve misconfigurations or
unofficial repositories.
• Windows 10: Includes Windows Defender and BitLocker but faces higher exposure to malware due to
widespread use.
4. Graphical User Interface (GUI)
• UNIX: Traditionally command-line oriented; graphical environments are available on modern derivatives but are less central.
• Linux: Flexible GUIs (GNOME, KDE, Xfce) with extensive customization options.
• Windows 10: Known for its user-friendly GUI featuring the Start Menu, Cortana, and a unified experience
across devices.
Lab Implementation
• Write a C program using the pthread library.
• Create threads to perform concurrent tasks.
Code:
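A minimal sketch of the concurrent-task idea, written with Python's threading module (the module the Final Notes below describe); the worker function, task names, and delays here are illustrative, not taken from the original program.

import threading
import time

def worker(name, delay):
    # Each thread runs this function independently, simulating a concurrent task
    for step in range(3):
        time.sleep(delay)
        print(f"{name}: step {step + 1}")

# Create two threads that run concurrently
t1 = threading.Thread(target=worker, args=("Task-A", 0.5))
t2 = threading.Thread(target=worker, args=("Task-B", 0.7))
t1.start()
t2.start()
t1.join()   # wait for both threads to finish before exiting
t2.join()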
OUTPUT-
Objective: Demonstrate file permissions in Linux using Python's os and stat modules.
Code:
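A minimal sketch using os.chmod() and the stat module, as described in the explanation below; the filename example.txt is an illustrative assumption.

import os
import stat

path = "example.txt"          # hypothetical file used for the demonstration
open(path, "w").close()       # make sure the file exists

# Set permissions to rw------- (owner read/write only)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

# Read back and display the permission bits
mode = os.stat(path).st_mode
print("Permissions:", stat.filemode(mode))   # expected: -rw-------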
OUTPUT-
Explanation:
• The os.chmod() function is used to change the file permissions to rw-------, meaning the owner has read and
write permissions only.
Objective: While we cannot directly configure firewall rules in Python, we can simulate basic security checks like port
scanning using the socket module. Below is an example to check if a port is open on a system.
Code:
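A minimal sketch of the check_port() idea using the socket module; the target host and the port list are illustrative.

import socket

def check_port(host, port, timeout=1.0):
    # Try to open a TCP connection; success means the port is open
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        result = s.connect_ex((host, port))   # 0 means the connection succeeded
    return result == 0

host = "127.0.0.1"                 # hypothetical target host
for port in (80, 22):              # HTTP and SSH, as in the explanation below
    state = "open" if check_port(host, port) else "closed"
    print(f"Port {port} on {host} is {state}")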
OUTPUT-
Explanation:
• The check_port() function attempts to connect to a specified port on the given host. If the connection is
successful, the port is considered open; otherwise, it's closed.
• This simulates checking if specific ports like HTTP (80) and SSH (22) are open or closed.
Final Notes:
• The multi-threading example uses the threading module to create concurrent tasks in Python, mimicking
process management.
• The file permissions experiment demonstrates how to change file permissions and verify them using the os
and stat modules.
• The firewall simulation checks port accessibility on a host to simulate security management, though in a real-world scenario, actual firewall rules would be configured outside Python.
LAB 02: OS LAB (CSE 3003)
Objective:
The objective of this experiment is to simulate the Producer-Consumer problem using Python’s threading and queue
modules. This problem demonstrates synchronization between threads where:
• A producer thread generates data items and places them into a shared buffer.
• A consumer thread removes items from the buffer and processes them.
• A buffer (queue) is used to store the data between the producer and consumer threads.
Theory:
The Producer-Consumer Problem is a classic example of a multi-threading synchronization problem. It involves two
types of threads:
1. Producer: generates data items and adds them to the buffer.
2. Consumer: removes items from the buffer and consumes them.
There is a buffer (commonly a queue) that stores the data. The producer adds items to the buffer, and the consumer
removes items from it. Both threads must be synchronized to prevent issues like:
• Race conditions on the shared buffer.
• The producer adding to a buffer that is already full.
• The consumer removing from a buffer that is empty.
Materials Required:
1. Python 3
2. threading module
3. queue module
Code Implementation:
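A minimal sketch of one possible implementation with the threading and queue modules; the buffer size and item count are illustrative assumptions.

import threading
import queue
import time
import random

buffer = queue.Queue(maxsize=5)   # bounded buffer shared by both threads
NUM_ITEMS = 10                    # illustrative number of items

def producer():
    for i in range(NUM_ITEMS):
        item = f"item-{i}"
        buffer.put(item)          # blocks while the queue is full
        print(f"Produced {item}")
        time.sleep(random.uniform(0.05, 0.2))

def consumer():
    for _ in range(NUM_ITEMS):
        item = buffer.get()       # blocks while the queue is empty
        print(f"Consumed {item}")
        buffer.task_done()
        time.sleep(random.uniform(0.05, 0.3))

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print("All items produced and consumed.")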
OUTPUT-
Observations:
• The producer and consumer threads synchronize with each other using the queue. The producer thread will
wait when the queue is full, and the consumer thread will wait when the queue is empty.
• The producer and consumer perform tasks asynchronously, but the queue ensures that the shared resource
(the buffer) is accessed safely without any data corruption or race conditions.
Conclusion:
The experiment demonstrates the Producer-Consumer problem using Python’s threading and queue modules. By
utilizing these modules, we can efficiently manage thread synchronization while ensuring that resources are shared
safely between the producer and consumer threads. This problem is a good example of inter-thread communication
and synchronization in multi-threaded programming.
LAB 03: OS LAB (CSE 3003)
Objective:
The objective of this experiment is to simulate the Dining Philosophers Problem using Python. This problem
demonstrates the challenges of synchronization and resource sharing in a multi-threaded environment. The goal is to
implement a solution to ensure that multiple philosophers can eat without deadlock and avoid resource contention
when accessing shared resources (forks).
Theory:
The Dining Philosophers Problem is a classic synchronization problem involving five philosophers who sit at a round
table. Each philosopher:
1. Alternates between thinking and eating.
2. When hungry, tries to pick up two forks (one from the left and one from the right).
3. Can eat only after acquiring both forks.
4. After eating, puts the forks down and continues thinking.
Challenges:
• Deadlock: A situation where no philosopher can proceed because they are all waiting for each other to release
a fork.
• Starvation: A philosopher may not be able to eat if the other philosophers are continuously taking the forks.
The solution requires synchronization to avoid deadlock and ensure that all philosophers get a chance to eat.
CODE-
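A minimal sketch of one common solution using threading.Lock objects as forks; deadlock is avoided by always acquiring the lower-numbered fork first (resource ordering). The number of eating rounds and the sleep times are illustrative.

import threading
import time
import random

NUM_PHILOSOPHERS = 5
forks = [threading.Lock() for _ in range(NUM_PHILOSOPHERS)]

def philosopher(i):
    left, right = i, (i + 1) % NUM_PHILOSOPHERS
    # Always pick up the lower-numbered fork first to prevent circular waiting
    first, second = (left, right) if left < right else (right, left)
    for _ in range(3):                      # three think/eat cycles per philosopher (illustrative)
        print(f"Philosopher {i} is thinking")
        time.sleep(random.uniform(0.1, 0.3))
        with forks[first]:
            with forks[second]:
                print(f"Philosopher {i} is eating")
                time.sleep(random.uniform(0.1, 0.3))
        print(f"Philosopher {i} put down the forks")

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(NUM_PHILOSOPHERS)]
for t in threads: t.start()
for t in threads: t.join()
print("Dinner finished.")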
OUTPUT-
Observations:
• The philosophers are able to eat and think independently, but they synchronize their actions when trying to
pick up forks.
• By using the threading.Lock for the forks, we ensure that only one philosopher can use a fork at any given
time, thus avoiding race conditions.
• This simple solution avoids deadlock (and, in practice, starvation) by acquiring the forks in a consistent global order, for example always the lower-numbered fork first; if every philosopher simply grabbed the left fork and then the right fork, circular waiting and deadlock would still be possible.
Conclusion:
In this experiment, we successfully implemented the Dining Philosophers Problem using Python’s threading module.
The implementation demonstrated how threads can be synchronized using locks to avoid resource contention,
deadlock, and starvation. Through this problem, we learned how to handle synchronization issues in multi-threaded
environments effectively.
The solution works efficiently for this scenario, and it provides a foundation for solving similar synchronization
problems in concurrent systems.
LAB 04: OS LAB (CSE 3003)
Objective:
The objective of this lab is to implement and analyze common CPU scheduling algorithms. The goal is to
understand how different scheduling algorithms affect the performance of a system in terms of response
time, throughput, and CPU utilization.
Theory:
CPU scheduling algorithms determine the order in which processes are executed by the CPU. The key goals
of CPU scheduling are to maximize CPU utilization, minimize response time, and ensure fairness. Below are
some common CPU scheduling algorithms:
1. First Come First Serve (FCFS):
o Processes are executed in the order they arrive in the ready queue.
2. Shortest Job First (SJF):
o The process with the shortest burst time (execution time) is selected next.
3. Round Robin (RR):
o Each process gets a fixed time slice (quantum), and the CPU switches between processes
after each time slice.
4. Priority Scheduling:
o Each process is assigned a priority, and the process with the highest priority is selected next.
Problem Scenario:
Given a set of processes with their arrival times and burst times, we will apply the following CPU scheduling
algorithms to determine their respective completion times, turnaround times, and waiting times.
P4 3 1 2
The arrival time is when the process enters the ready queue, and the burst time is the time the process
requires for CPU execution.
Code Implementation:
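A minimal FCFS sketch that computes completion, turnaround, and waiting times; since the full process table is not shown above, the arrival and burst values below are illustrative assumptions rather than the lab's data.

# Hypothetical processes: (name, arrival_time, burst_time) -- illustrative values only
processes = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8), ("P4", 3, 1)]

def fcfs(procs):
    procs = sorted(procs, key=lambda p: p[1])      # serve in order of arrival
    clock = 0
    print(f"{'Proc':<6}{'Completion':<12}{'Turnaround':<12}{'Waiting':<8}")
    for name, arrival, burst in procs:
        clock = max(clock, arrival) + burst        # CPU may sit idle until the process arrives
        turnaround = clock - arrival               # completion time - arrival time
        waiting = turnaround - burst               # turnaround time - burst time
        print(f"{name:<6}{clock:<12}{turnaround:<12}{waiting:<8}")

fcfs(processes)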
OUTPUT:-
Conclusion:
The CPU scheduling algorithms help determine the order in which processes are executed in a system.
From the lab:
• FCFS is simple to implement but can cause long waiting times when short processes arrive behind long ones.
• SJF gives the lowest average waiting time but requires the burst times in advance.
• Round Robin improves response time for interactive processes at the cost of context-switching overhead.
• Priority Scheduling favors important processes but can starve low-priority ones unless aging is applied.
LAB 05: OS LAB (CSE 3003)
1. Title
Investigation of the Banker's Algorithm Implementation for Safe Process Execution Order and Deadlock
Avoidance
2. Introduction
The Banker's Algorithm is a well-known deadlock avoidance algorithm used in operating systems to allocate
resources to processes while ensuring that the system remains in a safe state. A safe state means that there
is a sequence of processes such that each process can complete without causing a deadlock, even if all
processes request their maximum resources at once.
This report aims to investigate and implement the Banker's Algorithm, which ensures that a process
execution order is safe and avoids deadlock in a multi-process system. The algorithm operates by simulating
resource allocation and checking whether a safe sequence of processes exists.
3. Objective
• To implement the Banker's Algorithm for deadlock avoidance.
• To determine whether the system is in a safe state and, if it is, to find a safe sequence of process execution.
4. Theory
The Banker's Algorithm works by checking for a "safe state" by simulating resource allocation to processes.
It involves the following concepts:
• Maximum Demand (Max): The maximum number of resources each process may require.
• Allocation: The number of resources currently allocated to each process.
• Available: The number of resources currently free in the system.
• Need: The resources each process may still request, computed as Need = Max - Allocation.
The algorithm then proceeds in three steps:
1. Initial Request: When a process requests resources, the Banker's Algorithm checks if the request
can be granted immediately.
2. Safe Sequence Check: The system checks whether granting the request results in a safe state by
verifying if there exists a sequence of processes that can complete without causing deadlock.
3. Grant or Deny: If the request results in a safe state, resources are allocated. Otherwise, the request
is denied.
5. Problem Scenario
Consider a system with 3 types of resources: A, B, and C. We have 5 processes in the system. The number of
instances of each resource is given, and each process has a maximum demand and has already been
allocated some resources.
Resources:
• A: 10 units
• B: 5 units
• C: 7 units
Processes:
P2 (9, 0, 2) (3, 2, 2)
P3 (2, 3, 3) (2, 1, 1)
P4 (4, 3, 3) (1, 1, 1)
The system has the following available resources after some allocation:
• A: 3 units
• B: 2 units
• C: 2 units
We need to determine if the system is in a safe state and find a safe sequence of processes.
6. Algorithm Implementation
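A minimal sketch of the safety check of the Banker's Algorithm. The Available vector matches the scenario above; because the process table above is incomplete, the Max and Allocation matrices here are illustrative placeholders consistent with the resource totals, not the scenario's exact figures.

# Available matches the scenario; Max and Allocation below are illustrative placeholders
available  = [3, 2, 2]
max_demand = [[4, 2, 3], [3, 2, 2], [7, 1, 3], [2, 2, 2], [5, 2, 3]]
allocation = [[1, 0, 1], [2, 1, 1], [2, 0, 1], [1, 1, 1], [1, 1, 1]]

def is_safe(available, max_demand, allocation):
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work = available[:]              # resources currently free
    finished = [False] * n
    safe_sequence = []
    while len(safe_sequence) < n:
        progressed = False
        for i in range(n):
            # A process can run to completion if its remaining need fits in 'work'
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # it releases its allocation when done
                finished[i] = True
                safe_sequence.append(f"P{i}")
                progressed = True
        if not progressed:
            return False, []         # no process can proceed: unsafe state
    return True, safe_sequence

safe, sequence = is_safe(available, max_demand, allocation)
print("Safe state:", safe)
print("Safe sequence:", " -> ".join(sequence))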
8. Output
9. Conclusion
The Banker's Algorithm successfully ensures that the system remains in a safe state by evaluating whether a
safe sequence of processes exists. In this scenario, the system is in a safe state, and the safe sequence is
identified. The implementation provides an effective way to avoid deadlocks by simulating resource
allocation and ensuring that each process has sufficient resources to complete its execution.
LAB 06: OS LAB (CSE 3003)
1. Title
Investigation of Dynamic Storage Allocation Algorithms: First Fit, Best Fit, and Worst Fit
2. Introduction
Dynamic storage allocation is a memory management technique used to allocate memory blocks to
processes. The goal is to efficiently utilize memory while minimizing fragmentation. Contiguous allocation
techniques such as First Fit, Best Fit, and Worst Fit are widely used to allocate memory dynamically.
This report investigates and implements these algorithms, analyzes their behavior, and compares their
performance in terms of memory utilization and fragmentation.
3. Objective
• To implement First Fit, Best Fit, and Worst Fit dynamic storage allocation algorithms.
• To compare their behavior in terms of memory utilization and fragmentation.
4. Theory
Algorithms Overview
1. First Fit:
o Allocates the first available memory block that is large enough to accommodate the process.
2. Best Fit:
o Allocates the smallest available memory block that fits the process size.
o Reduces the size of the leftover hole, but the search adds overhead and the small fragments left behind are often unusable.
3. Worst Fit:
o Allocates the largest available memory block, leaving the biggest possible leftover hole for future requests.
Problem Scenario
Consider memory blocks of varying sizes, and processes with specific memory requirements. Each
algorithm attempts to allocate memory to the processes while keeping track of unused spaces.
5. Problem Statement
Given memory blocks of sizes [100, 500, 200, 300, 600] and processes with sizes [212, 417, 112, 426],
simulate the allocation using First Fit, Best Fit, and Worst Fit algorithms.
6. Algorithm Implementation
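A minimal sketch of the three strategies applied to the stated block and process sizes (each block is assumed to hold at most one process).

blocks = [100, 500, 200, 300, 600]   # memory block sizes from the problem statement
processes = [212, 417, 112, 426]     # process sizes from the problem statement

def allocate(blocks, processes, strategy):
    used = [False] * len(blocks)     # each block holds at most one process here
    placement = []
    for size in processes:
        candidates = [i for i in range(len(blocks)) if not used[i] and blocks[i] >= size]
        if not candidates:
            placement.append(None)   # process could not be allocated
            continue
        if strategy == "first":
            chosen = candidates[0]                                 # first large-enough block
        elif strategy == "best":
            chosen = min(candidates, key=lambda i: blocks[i])      # tightest-fitting block
        else:
            chosen = max(candidates, key=lambda i: blocks[i])      # largest block (worst fit)
        used[chosen] = True
        placement.append(chosen + 1)  # report 1-based block numbers
    return placement

for strategy in ("first", "best", "worst"):
    result = allocate(blocks, processes, strategy)
    labels = [f"P{i+1} -> Block {b}" if b else f"P{i+1} -> not allocated"
              for i, b in enumerate(result)]
    print(f"{strategy.capitalize()} Fit:", ", ".join(labels))

With these inputs, Best Fit is the only strategy that places all four processes, which is consistent with the conclusion below.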
OUTPUT-
8. Analysis
Advantages
• First Fit: Fast and simple; it stops searching at the first block that is large enough.
• Best Fit: Tends to use memory efficiently by choosing the tightest-fitting block.
• Worst Fit: Leaves large leftover blocks that may still serve later requests.
Disadvantages
• First Fit: Tends to create many small holes near the beginning of memory.
• Best Fit: Higher overhead due to searching for the smallest block; the fragments it leaves are often too small to reuse.
• Worst Fit: Consumes large blocks quickly and usually gives the poorest memory utilization.
Comparison Table
9. Conclusion
For the given problem, Best Fit proved to be the most memory-efficient algorithm, effectively utilizing
available memory blocks with minimal fragmentation.
LAB 07: OS LAB (CSE 3003)
Investigate the implementations of the three page replacement algorithms (FIFO, LRU, and Optimal), and
prepare an investigation and comparison report based on the problem scenario.
1. Title
Investigation of Page Replacement Algorithms: FIFO, LRU, and Optimal
2. Introduction
Page replacement algorithms are essential in virtual memory systems to decide which memory pages to
swap out when a new page needs to be loaded. These algorithms aim to reduce the number of page faults,
improving system performance.
This report implements FIFO (First-In-First-Out), LRU (Least Recently Used), and Optimal Page
Replacement algorithms. A comparison of their efficiency is performed using a defined problem scenario.
3. Objective
• To implement the FIFO, LRU, and Optimal page replacement algorithms.
• To simulate and compare their performance using a predefined reference string and frame size.
4. Theory
Algorithms Overview
1. FIFO (First-In-First-Out):
o Replaces the page that has been in memory the longest (the oldest loaded page).
2. LRU (Least Recently Used):
o Replaces the page that has not been used for the longest period.
3. Optimal:
o Replaces the page that will not be used for the longest period in the future.
Problem Scenario
Given a reference string representing page requests and a fixed number of memory frames, simulate each
algorithm and measure the number of page faults.
5. Problem Statement
• Number of Frames: 3
6. Algorithm Implementation
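A minimal sketch that counts page faults for all three algorithms with 3 frames; because the reference string is not reproduced above, a commonly cited example string is assumed here for illustration.

from collections import deque

reference = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]   # assumed reference string (illustrative)
FRAMES = 3

def fifo(ref, frames):
    memory, order, faults = set(), deque(), 0
    for page in ref:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(order.popleft())    # evict the oldest loaded page
            memory.add(page)
            order.append(page)
    return faults

def lru(ref, frames):
    memory, faults = [], 0                         # list ordered from least to most recently used
    for page in ref:
        if page in memory:
            memory.remove(page)                    # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)                      # evict the least recently used page
        memory.append(page)
    return faults

def optimal(ref, frames):
    memory, faults = [], 0
    for i, page in enumerate(ref):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
        else:
            # Evict the page whose next use is farthest in the future (or never used again)
            future = {p: (ref[i+1:].index(p) if p in ref[i+1:] else float("inf")) for p in memory}
            memory.remove(max(future, key=future.get))
            memory.append(page)
    return faults

for name, fn in (("FIFO", fifo), ("LRU", lru), ("Optimal", optimal)):
    print(f"{name}: {fn(reference, FRAMES)} page faults")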
OUTPUT:-
1. FIFO:
o Simple to implement but leads to more page faults when the order of references changes frequently.
2. LRU:
o Tracks usage history and replaces the least recently used page.
o More adaptive than FIFO but requires extra bookkeeping to track recency of use.
3. Optimal:
o Replaces the page that will not be used for the longest time.
o Achieves the lowest number of page faults but requires future knowledge.
Comparison Table
• Optimal achieves the best performance but is impractical due to the need for future knowledge.
For practical systems, LRU is often preferred over FIFO, while Optimal serves as a theoretical benchmark for
comparison.
LAB 08: OS LAB (CSE 3003)
Investigate the implementations of the disk scheduling algorithms, and prepare an investigation and
assessment report based on a chosen problem scenario.
Objective:
To investigate and implement various disk scheduling algorithms, assess their performance, and analyze
them in a problem scenario. The aim is to understand how different algorithms handle disk I/O requests to
optimize seek time and throughput.
Theory:
Disk scheduling algorithms manage the order of disk I/O requests to improve system performance. They
aim to minimize the seek time (time taken to move the read/write head to the required track). Common
disk scheduling algorithms include:
1. FCFS (First Come First Serve): Serves requests in the order they arrive.
2. SSTF (Shortest Seek Time First): Selects the request closest to the current head position.
3. SCAN (Elevator Algorithm): Moves the head in one direction, serving requests until the end, then
reverses.
4. C-SCAN (Circular SCAN): Similar to SCAN but only serves requests in one direction and jumps to the
start after reaching the end.
5. LOOK and C-LOOK: Variants of SCAN and C-SCAN, stopping where requests end rather than going to
the edge.
Problem Scenario:
Consider a disk queue with requests for tracks at positions: [82, 170, 43, 140, 24, 16, 190].
The disk head starts at position 50. The disk scheduler must determine the optimal sequence to minimize
total seek time.
Code Implementation:
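A minimal sketch computing total seek time for FCFS and SSTF with the scenario's request queue and head position; the remaining algorithms follow the same pattern.

requests = [82, 170, 43, 140, 24, 16, 190]   # track requests from the problem scenario
head = 50                                    # initial head position

def fcfs(requests, head):
    seek = 0
    for r in requests:                       # serve strictly in arrival order
        seek += abs(r - head)
        head = r
    return seek

def sstf(requests, head):
    pending, seek = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))   # closest pending track
        seek += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return seek

print("FCFS total seek time:", fcfs(requests, head))
print("SSTF total seek time:", sstf(requests, head))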
OUTPUT:-
Observations:
1. FCFS: Processes requests in the order they arrive, leading to high seek time.
2. SSTF: Optimizes seek time by selecting the closest request but may lead to starvation.
3. SCAN: Moves in one direction serving all requests, then reverses, reducing starvation compared to
SSTF.
4. C-SCAN: Similar to SCAN but only serves requests in one direction, ensuring fairness.
5. C-LOOK: A variant of C-SCAN, only covering the range of requests, avoiding unnecessary movement
to the disk's edges.
Conclusion:
• SSTF offers the lowest seek time in our scenario but may cause starvation.
• SCAN and C-SCAN ensure fairness and are more suited for dynamic and large-scale systems.
• Choosing the best algorithm depends on the problem requirements, such as minimizing seek time
or ensuring fairness.
Virtual Memory Management
Simulate Virtual Memory using Paging
In this simulation, memory is divided into fixed-size pages. An array represents the frames of physical
memory, and pages from the virtual address space are allocated to those frames.
#include <stdio.h>
#include <stdlib.h>

#define FRAME_SIZE 4   // number of physical memory frames
#define PAGE_COUNT 10  // number of pages in the simulated virtual address space

/* ... page table setup and page-to-frame allocation logic ... */
FIFO (First-In-First-Out) Page Replacement
#include <stdio.h>

#define FRAME_SIZE 3
#define PAGE_COUNT 6

int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5};
int physical_memory[FRAME_SIZE] = {-1, -1, -1};
int page_faults = 0;

void print_physical_memory() {
    printf("Physical memory: ");
    for (int i = 0; i < FRAME_SIZE; i++) printf("%d ", physical_memory[i]);
    printf("\n");
}

void fifo_page_replacement() {
    int front = 0;  // index of the oldest page currently in memory
    for (int i = 0; i < PAGE_COUNT; i++) {
        int found = 0;
        for (int j = 0; j < FRAME_SIZE; j++)        // check if the page is already loaded
            if (physical_memory[j] == virtual_memory[i]) { found = 1; break; }
        if (!found) {
            physical_memory[front] = virtual_memory[i]; // Replace the front (oldest) page
            front = (front + 1) % FRAME_SIZE;
            page_faults++;
            printf("Page fault occurred!\n");
        }
        print_physical_memory();
    }
    printf("\nTotal page faults: %d\n", page_faults);
}

int main() { fifo_page_replacement(); return 0; }
This program simulates FIFO page replacement. When a page fault occurs, it replaces the oldest page in
physical memory.
LRU (Least Recently Used) Page Replacement
In LRU, the page that has not been used for the longest time is replaced when memory is full.
#include <stdio.h>

#define FRAME_SIZE 3
#define PAGE_COUNT 6

int virtual_memory[PAGE_COUNT] = {0, 1, 2, 3, 4, 5};
int physical_memory[FRAME_SIZE] = {-1, -1, -1};
int page_faults = 0;

/* ... LRU replacement loop: on each reference, record when every frame was last
   used; on a page fault, evict the frame with the oldest last-use time ... */

        print_physical_memory();
    }
    printf("\nTotal page faults: %d\n", page_faults);
}
FCFS (First Come First Serve) Disk Scheduling
#include <stdio.h>
#include <stdlib.h>

#define MAX_REQUESTS 8

// Function to print the disk scheduling sequence and total seek time
void print_sequence(int requests[], int size, int initial_head_position) {
    int seek_time = 0;
    int current_position = initial_head_position;
    printf("Disk Scheduling Order (FCFS): ");
    for (int i = 0; i < size; i++) {
        printf("%d ", requests[i]);
        seek_time += abs(requests[i] - current_position);  // head movement for this request
        current_position = requests[i];
    }
    printf("\nTotal Seek Time (FCFS): %d\n", seek_time);
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;
    print_sequence(requests, MAX_REQUESTS, initial_head_position);
    return 0;
}
SSTF (Shortest Seek Time First) Disk Scheduling
#include <stdio.h>
#include <stdlib.h>
#define MAX_REQUESTS 8

// At each step, serve the pending request closest to the current head position
int sstf_seek_time(int requests[], int size, int current_position) {
    int seek_time = 0;
    /* ... repeatedly pick the unserved request with the minimum distance from
       current_position, add that distance to seek_time, and move the head ... */
    return seek_time;
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;
    return 0;
}
This program uses SSTF scheduling to process disk requests by selecting the request with the minimum
seek time at each step. It calculates the total seek time for the requests and prints the disk scheduling
order.
SCAN (Elevator) Disk Scheduling
#include <stdio.h>
#include <stdlib.h>

#define MAX_REQUESTS 8
#define DISK_SIZE 200

// Direction: 1 for moving towards higher tracks, -1 for lower tracks
int scan_seek_time(int requests[], int size, int current_position, int direction) {
    int seek_time = 0, i;
    // Requests are assumed sorted by track; find the first request on the head's path
    if (direction == 1) {
        for (i = 0; i < size; i++) {
            if (requests[i] >= current_position) break;
        }
    } else {
        for (i = size - 1; i >= 0; i--) {
            if (requests[i] <= current_position) break;
        }
    }
    /* ... sweep from index i in the chosen direction, adding each head movement to
       seek_time, continue to the disk edge (DISK_SIZE - 1 or track 0), then reverse
       and serve the remaining requests on the other side ... */
    return seek_time;
}

int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;
    int direction = 1;  // 1 for right, -1 for left
    return 0;
}
C-SCAN Disk Scheduling
#include <stdio.h>
#include <stdlib.h>
#define MAX_REQUESTS 8
#define DISK_SIZE 200
// Serve requests in one direction only, then jump back to the start of the disk
int cscan_seek_time(int requests[], int size, int current_position) {
    int seek_time = 0, i;
    // Requests are assumed sorted by track number
    // First pass: serve every request at or above the current head position
    for (i = 0; i < size; i++) {
        if (requests[i] >= current_position) {
            for (int j = i; j < size; j++) {
                seek_time += abs(requests[j] - current_position);
                current_position = requests[j];
            }
            break;
        }
    }
    // Move to the disk edge, then jump back to track 0
    seek_time += ((DISK_SIZE - 1) - current_position) + (DISK_SIZE - 1);
    current_position = 0;
    // Second pass: serve the remaining requests below the original head position
    for (int j = 0; j < i; j++) {
        seek_time += abs(requests[j] - current_position);
        current_position = requests[j];
    }
    return seek_time;
}
int main() {
    int requests[MAX_REQUESTS] = {176, 79, 34, 60, 92, 11, 41, 114};
    int initial_head_position = 50;
    return 0;
}