THEORY FILE: OPERATING SYSTEMS (FULL NOTES BY SAHIL)
SUBJECT CODE: UGCA-1923

BACHELOR OF COMPUTER APPLICATIONS

MAINTAINED BY: Sahil Kumar

COLLEGE ROLL NO: 226617
UNIVERSITY ROLL NO: 2200315

DEPARTMENT OF COMPUTER SCIENCE ENGINEERING
BABA BANDA SINGH BAHADUR ENGINEERING COLLEGE, FATEHGARH SAHIB

Program: BCA
Semester: 4th
Course Name: Operating Systems (Theory)

UNIT ➖01
Fundamentals of Operating system :
●​ Introduction to Operating system ➖
An operating system (OS) is a fundamental software component that manages computer hardware and
provides services for computer programs. It serves as an intermediary between the hardware and the user
applications, ensuring efficient and organised use of the computer system resources. The primary functions
of an operating system include:

1.​ File management
2.​ Memory management
3.​ Process management
4.​ Handling input and output
5.​ Controlling peripheral devices like disk drives and printers

The OS also manages data processing, running applications, and handling memory.
Examples of operating systems include:

❖ Windows, macOS, Android, Linux, UNIX.


❖​ Every computer system must have at least one operating system to run other programs.
❖​ The OS is made up of five layers: The kernel, Input/output, Memory management, File management
system, User interface.
❖​ The first operating system used for real work was GM-NAA I/O, produced in 1956 by General
Motors' Research division for its IBM 704.

●​ Functions of an operating system ➖


An operating system (OS) is software that manages and supports all the programs and
applications on a computer or mobile device. It has several functions, including:

1.​ Resource management ➖


An OS controls all computer resources, such as the central processing unit (CPU), memory, disk
drives, and printers.

2.​ User interface ➖


An OS provides a standard way for users to communicate with their computer systems.

3.​ Program execution ➖


An OS coordinates the execution of user programs and provides resources for them.

4.​ File management
An OS manages the storage and retrieval of data to and from external storage devices.
It also keeps track of where data is stored, user access settings, and the state of each file.

5.​ Device management ➖


An OS handles input and output devices.

6.​ Security ➖
An OS has built-in security functions, such as user authentication and access control, that protect the system and its data.

7.​ Performance monitoring ➖


An OS monitors overall system activity to help improve performance. It also records the
time elapsed between a service request and the system's response.

8.​ Special control programs
An OS makes automatic changes to tasks through specific control programs.

9.​ Networking ➖
An OS manages the internet and network connection inside the computer system.

●​ Operating system as a resource manager ➖


One of the key roles of an operating system is to act as a resource manager. In this capacity, the
operating system is responsible for efficiently allocating and managing various system resources
to ensure that multiple processes and applications can run concurrently while sharing the
available resources. Here are some of the essential resources that an operating system
manages:

1.​ Central Processing Unit (CPU): ➖


The OS allocates CPU time to different processes, allowing them to execute their instructions. It
employs scheduling algorithms to determine the order in which processes are granted access to
the CPU, aiming to maximize overall system performance and fairness.

2.​ Memory: ➖
Operating systems manage the computer's memory, ensuring that each process gets the
necessary space to store its code and data. This involves memory allocation, deallocation, and
protection mechanisms to prevent processes from interfering with each other.

3.​ Input/Output Devices: ➖


The OS controls access to input and output devices such as keyboards, mice, printers, and
storage devices. It ensures that multiple processes can use these devices concurrently without
conflicts.

4.​ Storage: ➖
Operating systems manage storage resources, including hard drives, solid-state drives, and
other storage media. This involves organizing data into files, managing file systems, and handling
read and write operations.


5.​ Network Resources:
In a networked environment, the operating system manages network resources, including
network interfaces and communication protocols. It facilitates data transfer between devices and
ensures proper network configuration.

6.​ Time: ➖
The OS maintains system time and provides a clock to synchronize processes. It is crucial for
various tasks such as scheduling, timestamping files, and coordinating events within the system.

7.​ Security and Access Control: ➖


Operating systems implement security measures to control access to resources. This includes
user authentication, authorization, and encryption to protect sensitive data and prevent
unauthorised access.

●​ Structure of operating system (Role of kernel and Shell) ➖


The structure of an operating system is typically organised into two main components: the kernel
and the shell. Each plays a distinct role in the functioning of the operating system.

1.​ Kernel:➖
The kernel is the core component of the operating system. It is responsible for managing the
hardware resources and providing essential services to other parts of the system, including user
applications. The kernel operates in privileged mode, allowing it direct access to the hardware.

❖​ Key functions of the kernel include:

A.​ Process Management: Creating, scheduling, and terminating processes, as well as


managing process communication and synchronisation.

B.​ Memory Management: Allocating and deallocating memory, enforcing memory protection,
and handling virtual memory.

C.​ File System Management: Managing file operations, organising data on storage devices,
and providing a file system interface to user programs.

D.​ Device Drivers: Interfacing with hardware devices through device drivers, which are
modules that enable communication between the kernel and specific hardware
components.

E.​ Interrupt Handling: Managing hardware interrupts and exceptions to ensure proper
system operation.

F.​ Security and Access Control: Enforcing security policies, user authentication, and
access control to protect system resources.


2. Shell:
The shell is the user interface to the operating system. It is a command-line interpreter or
graphical user interface (GUI) that allows users to interact with the system by entering
commands. The shell interprets these commands and executes them by interacting with the
kernel and other system components.


❖​ Key functions of the shell include:

A.​ Command Interpretation: Parsing and interpreting user commands entered through the
command line or GUI.

B.​ Scripting: Allowing users to create scripts—a sequence of commands—that can be


executed as a batch, automating repetitive tasks.

C.​ File Management: Facilitating operations such as copying, moving, deleting, and
renaming files and directories.

D.​ User Interface: Providing a means for users to interact with the system, whether through a
text-based command line or a graphical interface.

In summary, the kernel is the core of the operating system, managing hardware resources and
providing essential services, while the shell is the user interface that allows users to interact with
the system through commands and scripts. Together, they form the foundation of the operating
system's structure and functionality.

●​ Views of operating system

The fundamentals of an operating system (OS) encompass several key concepts and
functionalities that are essential for understanding its role and operation. Here are some
fundamental aspects of operating systems:

1.​ Process Management: ➖


Definition: A process is a program in execution. Process management involves creating,
scheduling, and terminating processes, as well as managing process communication and
synchronisation.

2.​ Memory Management: ➖


Definition: Memory management is the allocation and deallocation of memory for processes. It
includes techniques such as virtual memory, which allows processes to use more memory than is
physically available.

3.​ File System:➖


Definition: The file system organises and manages data stored on storage devices. It includes
file creation, deletion, reading, writing, and organisation of files and directories.

4.​ Device Management: ➖


Definition: Device management involves controlling and coordinating access to peripheral
devices such as printers, disks, and network interfaces. The OS provides device drivers to
facilitate communication between the kernel and specific hardware devices.

5.​ Security and Protection: ➖


Definition: Security mechanisms, including user authentication, access control, and encryption,
protect the system and its resources from unauthorised access and ensure the integrity of data.

6.​ User Interface: ➖


Definition: The user interface allows users to interact with the operating system. It can be a
command-line interface (CLI) where users type commands or a graphical user interface (GUI)
with icons and windows.

7.​ Networking: ➖
Definition: Networking features enable communication between computers. The OS supports
network protocols and provides tools for network configuration and communication.

8.​ Interrupts and Exceptions: ➖


Definition: Interrupts are signals generated by hardware to gain the CPU's attention, and
exceptions are events that occur during program execution. The OS handles interrupts and
exceptions to maintain system stability.

9.​ System Calls and APIs:
Definition: System calls are interfaces that allow user-level processes to request services from
the kernel. Application Programming Interfaces (APIs) provide a set of functions and routines for
software developers to interact with the operating system.

●​ Evolution and types of operating systems ➖


The evolution of operating systems spans several decades, and it has been marked by
significant advancements in technology and changes in computing paradigms. Here is an
overview of the evolution and different types of operating systems:


Here are some types of operating systems:
1.​ Batch operating systems
These systems were popular in the 1950s and 1960s and allowed many users to share a single
computer. They were designed to run a series of programs in order and did not allow user
interaction.

2.​ Distributed operating systems ➖


These systems are designed to work across a network of independent computers. In this setup,
the operating system is decentralised, with each computer in the network responsible for a
different part of the operating system's functions.


3.​ Multitasking operating systems
These systems allow the operating system to run multiple processes or applications
simultaneously, and switch between them rapidly.

Here are some generations of operating systems: ➖


1.​ 1st generation: Batch Processing Systems
2.​ 2nd generation: Multiprogramming Batch Systems
3.​ 3rd generation: Time-Sharing Systems
4.​ 4th generation: Distributed Systems

Here are some popular operating systems: ➖


1.​ Apple macOS
2.​ Microsoft Windows
3.​ Google's Android OS
4.​ Linux Operating System
5.​ Apple iOS

●​ Process & Thread Management: ➖


●​ Program vs. Process ➖
In the context of operating systems, "program" and "process" are related but distinct concepts.
Let's explore the differences between the two:

1.​ Program:
❖​ A program is a set of instructions or code written in a programming language. It is a static
entity, typically stored on disk in the form of an executable file. A program doesn't consume
system resources until it is loaded into memory and executed.

❖​ Programs are passive and don't have an active presence in the system. They become
active when a user or the operating system initiates their execution. A program is
essentially a sequence of instructions that define how a task should be performed.

❖​ Example: An application software like a word processor, a web browser, or a game is a


program. It remains inert until the user decides to run it.

2.​ Process: ➖
❖ A process, on the other hand, is the execution of a program. It represents the active state
of a program in memory, along with the resources (CPU, memory, I/O) allocated to it during
its execution.

❖​ When a program is loaded into memory and is actively being executed, it becomes a
process. Multiple processes can run concurrently in a multitasking environment.

❖​ A process has its own memory space, program counter, registers, and other attributes that
define its current state. Processes can communicate with each other and share data, or
they may operate independently.

❖​ Example: If you open a word processing application, a process is created to execute the
corresponding program. If you open multiple instances of the same application, each
instance is a separate process.
●​ PCB ➖
Process Control Block (PCB) is a data structure that contains information about a specific
process in the system. The PCB is a fundamental concept for process management, and it
serves as a repository of key information needed by the operating system to manage and control
processes effectively. The information stored in a PCB includes:

1.​ Process State: ➖


The current state of the process, such as whether it is ready, running, blocked, or terminated.
The process state helps the operating system understand what the process is currently doing
and how it should be scheduled.

2.​ Program Counter (PC): ➖


The address of the next instruction to be executed by the process. The program counter is crucial
for maintaining the execution flow of the process.

3.​ CPU Registers:
The values of CPU registers associated with the process. These registers hold important
information, such as the contents of general-purpose registers, stack pointers, and status flags.

4.​ CPU Scheduling Information: ➖


Details related to the process's priority, scheduling parameters, and other information needed for
the operating system's scheduler to make decisions about when and for how long the process
should run.

5.​ Memory Management Information: ➖


Information about the memory allocated to the process, including the base and limit registers for
the process's address space. This helps the operating system manage memory protection and
address translation.

6.​ Accounting Information:
Various statistics related to the process, such as the amount of CPU time used, elapsed time,
and other accounting details. This information is useful for performance monitoring and resource
allocation.

7.​ I/O Status Information: ➖


Details about the I/O devices the process is using, including the status of I/O operations. This
information is important for managing input and output operations efficiently.

8.​ Process Identifier (PID): ➖


A unique identifier assigned to each process in the system. The PID allows the operating system
to distinguish and manage multiple processes concurrently.


9.​ Process Control Information:
Flags and information that control the behaviour of the process, such as whether it can be
preempted, whether it's in the foreground or background, etc.

The PCB is typically stored in the kernel space of the operating system and is associated with
each process in the system. When a context switch occurs, which happens when the operating
system switches the CPU from one process to another, the information in the PCB of the
currently running process is saved, and the PCB of the next process to be executed is loaded.
This ensures that the system can efficiently manage and switch between different processes
while preserving their states.
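To make this concrete, a PCB can be pictured as a plain C structure. The sketch below is
illustrative only; the field names and sizes are assumptions made for these notes, not the layout
of any real kernel:

struct pcb {
    int             pid;                  /* process identifier (PID)        */
    int             state;                /* ready, running, blocked, ...    */
    unsigned long   program_counter;      /* address of next instruction     */
    unsigned long   registers[16];        /* saved CPU register values       */
    int             priority;             /* CPU scheduling information      */
    unsigned long   mem_base, mem_limit;  /* memory management information   */
    unsigned long   cpu_time_used;        /* accounting information          */
    int             open_files[16];       /* I/O status information          */
    unsigned int    control_flags;        /* process control information     */
};

On a context switch, the kernel saves the running process's registers and program counter into
its pcb and later restores them from the pcb of the process being dispatched.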

●​ State transition diagram ➖


A state transition diagram describes the logical transition of a system through various states of
operation. It represents states, the transitions that connect them, and the events that trigger
transitions.

In a state transition diagram, circles represent states and arcs represent transitions between
states.
In an operating system, a state transition diagram describes all of the states that an object can
have, the events under which an object changes state, the conditions that must be fulfilled before
the transition will occur, and the activities undertaken during the life of an object.

A process must pass through a minimum of four states to complete: New, Ready, Running, and
Terminated.

However, if a process also performs I/O, a fifth state (Waiting) is required.

A process can transition from ready to running when the scheduler selects it for execution, or
from running to waiting when it requests an input/output operation.
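As a rough illustration, the legal transitions of the five-state model can be written down as a
small C function (the state names are the conventional textbook ones, not tied to any particular
OS):

#include <stdbool.h>

enum state { ST_NEW, ST_READY, ST_RUNNING, ST_WAITING, ST_TERMINATED };

/* Returns true if 'from' -> 'to' is an arc in the classic five-state diagram. */
bool valid_transition(enum state from, enum state to)
{
    switch (from) {
    case ST_NEW:     return to == ST_READY;         /* admitted                  */
    case ST_READY:   return to == ST_RUNNING;       /* dispatched by scheduler   */
    case ST_RUNNING: return to == ST_READY          /* preempted                 */
                         || to == ST_WAITING        /* requests I/O              */
                         || to == ST_TERMINATED;    /* exits                     */
    case ST_WAITING: return to == ST_READY;         /* I/O completes             */
    default:         return false;                  /* no exit from terminated   */
    }
}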

●​ Scheduling Queues ➖
Scheduling queues in an operating system play a crucial role in managing the execution of
processes and determining which process gets access to the CPU at any given time. Depending
on the scheduling algorithm employed, processes are placed in different queues to represent
their current state and priority. Here are some common scheduling queues:

1.​ Job Queue (or Job Pool): ➖


The job queue contains all processes that have entered the system and are waiting to be admitted
into main memory and the ready queue for execution. These processes may have been submitted by
users or batch processing systems.

2.​ Ready Queue: ➖


The ready queue holds processes that are ready to execute but are waiting for the CPU. The
scheduling algorithm determines which process from the ready queue will be selected to run
next.
3.​ Device Queue: ➖
For processes that are waiting for a particular I/O device, there is a device queue associated with
each device. Processes waiting for I/O operations to complete are moved to these queues.

4.​ Priority Queue: ➖


In priority scheduling algorithms, processes are placed in different queues based on their priority
levels. The priority queue may have multiple levels, with higher-priority processes given
precedence over lower-priority ones.

5.​ Multi-Level Queue: ➖


A multi-level queue organises processes into multiple priority levels, each with its own ready
queue. Processes move between queues based on their priority, and each queue may have its
scheduling algorithm.

6.​ Feedback Queue (or Round Robin Queue):
In a feedback queue scheduling algorithm, processes are initially placed in a first-level queue.
Based on their behaviour (CPU burst length), processes may move between different queues
with varying time quantum or priority levels.

7.​ Real-Time Queue: ➖


For real-time operating systems, processes with specific timing requirements are placed in a
real-time queue. These processes are typically associated with strict deadlines and need to be
scheduled accordingly.

●​ Types of schedulers ➖
Schedulers in operating systems are responsible for determining the order in which processes
are executed and managing the allocation of system resources. There are typically three types of
schedulers, each serving a specific purpose:

1.​ Long-Term Scheduler (Job Scheduler): ➖
The long-term scheduler is responsible for selecting processes from the job queue and admitting
them to the ready queue. Its primary goal is to control the degree of multiprogramming, deciding
how many processes should be in the main memory at any given time.

Characteristics:
❖​ Invoked less frequently, usually when a process terminates or a new process arrives.
❖​ Determines which processes are brought into the ready queue from the job pool.
❖​ Focuses on optimising overall system performance.

2.​ Short-Term Scheduler (CPU Scheduler): ➖


The short-term scheduler, also known as the CPU scheduler, selects a process from the ready
queue and allocates the CPU to that process. It determines which process runs next and for how
long.

Characteristics:
❖​ Invoked frequently, potentially on every clock tick or when a process transitions to a
blocked state.
❖​ Decides which process in the ready queue gets access to the CPU.
❖​ Aims to provide fair and efficient CPU utilisation.

3.​ Medium-Term Scheduler: ➖


The medium-term scheduler is responsible for swapping processes in and out of the main
memory and the backing store (usually the disk). It can be considered an extension of the
long-term scheduler and is involved in managing processes in the partially-executed or blocked
state.

Characteristics:
❖​ Invoked less frequently than the short-term scheduler but more frequently than the
long-term scheduler.
❖​ Decides which processes are moved to the backing store to free up main memory.
❖​ Helps manage the system's degree of multiprogramming and prevents excessive demand
for main memory.

These schedulers work together to ensure effective process management, resource allocation,
and system performance. The long-term scheduler determines when new processes are brought
into the system, the medium-term scheduler handles processes in different states, and the
short-term scheduler focuses on the immediate allocation of the CPU.

The scheduling algorithms used by these schedulers can vary, and different operating systems
may employ different strategies based on the system's goals and requirements. Common
scheduling algorithms include First-Come-First-Serve (FCFS), Shortest Job Next (SJN), Round
Robin, Priority Scheduling, and Multilevel Queue Scheduling.

●​ Concept of Thread
A thread is a single sequence of activities that are executed within a process. It is also known as
the thread of execution or the thread of control.

Threads are also called lightweight processes because they have some of the properties of
processes. Each thread belongs to only one process. In an operating system that supports
multithreading, a process can have many threads.
Threads are used to improve the performance of applications. Each thread has its own program
counter, stack, and set of registers.

Threads can be of the same or different types. Multiple threads can run simultaneously and share
resources with each other within a process.
Threads are managed within their parent process by the operating system. They take far fewer
resources to run and far less time to switch contexts than full processes.

Kernel-level threads can be scheduled more efficiently, resulting in better resource utilisation and
reduced overhead. If a kernel-level thread is blocked, the kernel can still schedule another thread
for execution.
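A minimal POSIX-threads sketch of these properties: two threads share the process's global data
while each runs with its own stack and registers. Pthreads is just one common threading API; the
variable names here are invented and error checking is omitted for brevity (compile with
gcc file.c -pthread):

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                 /* one copy, visible to every thread */

void *worker(void *arg)
{
    int id = *(int *)arg;               /* 'id' lives on this thread's own stack */
    shared_counter++;                   /* unsynchronised here; see process
                                           synchronisation later in this unit */
    printf("thread %d running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;

    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);
    return 0;
}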



●​ Benefits, Types of threads in operating system
1.​ Concurrency:
Threads allow multiple tasks to execute concurrently within a single process. This enables
efficient utilisation of CPU resources and enhances system responsiveness.

2.​ Responsiveness: ➖
By using threads, an application can remain responsive to user input even while performing
time-consuming tasks. User interface operations can be executed in one thread while
background tasks run in another.

3.​ Resource Sharing:
Threads within the same process share the same address space and resources, such as file
handles and open sockets. This facilitates easy communication and data sharing between
threads.

Types of Threads in Operating Systems: ➖


1.​ User-Level Threads (ULTs): ➖
❖​ Managed entirely by user-level threads library without kernel support.
❖​ The kernel is unaware of the existence of user-level threads.
❖​ Scheduling, context switching, and synchronisation are handled at the user level.
❖​ Efficient in terms of low overhead but may face challenges with blocking calls.

2.​ Kernel-Level Threads (KLTs): ➖

❖​ Supported and managed by the operating system kernel.
❖​ Kernel schedules and switches between threads.
❖​ Provides better concurrency as the kernel can schedule threads independently.
❖​ More overhead due to kernel involvement.

3.​ Many-to-One Model (M:1): ➖


❖​ Many user-level threads mapped to a single kernel thread.
❖​ The user-level thread library manages thread execution, and the kernel is unaware of
individual threads.
❖​ Efficient for I/O-bound applications but may suffer from poor parallelism for CPU-bound
tasks.

4.​ One-to-One Model (1:1): ➖


❖​ Each user-level thread corresponds to a kernel-level thread.
❖​ Provides better concurrency as the kernel can schedule threads independently.
❖​ Overhead associated with creating and managing kernel threads for each user-level
thread.

5.​ Many-to-Many Model (M:N): ➖


❖​ Combines features of both user-level and kernel-level threads.
❖​ Allows multiple user-level threads to be mapped to a smaller or equal number of kernel
threads.
❖​ Provides a balance between concurrency and resource utilisation.

The choice of thread type and model depends on the specific requirements of the application, the
desired level of concurrency, and the characteristics of the underlying hardware and operating
system.

●​ Process synchronisation
Process synchronisation is a crucial concept in operating systems, especially in
multi-programming and multi-processing environments, where multiple processes may run
concurrently. It involves coordinating the execution of processes to ensure proper order of
execution, prevent data inconsistencies, and avoid conflicts for shared resources. Here are
some key mechanisms for process synchronisation (a short mutex sketch in C follows the list):

1.​ Mutual Exclusion: ➖


❖​ Purpose: Ensures that only one process can access a critical section of code or a shared
resource at a time.
❖​ Mechanisms: Mutex locks, semaphores, and other synchronisation primitives are used to
implement mutual exclusion.

2.​ Semaphore: ➖

❖​ Purpose: A more general synchronisation primitive that can be used for signalling and
mutual exclusion.
❖​ Mechanism: A semaphore is an integer variable that can be incremented or decremented.
Processes can wait for a semaphore to become positive or signal (release) it to increment
its value.

3.​ Mutex Lock: ➖


❖​ Purpose: A synchronisation mechanism used for mutual exclusion.
❖​ Mechanism: A lock that a process can acquire before entering a critical section. If the lock
is already held by another process, the requesting process is blocked until the lock is
released.

4.​ Condition Variables: ➖


❖​ Purpose: Allows processes to synchronise based on the state of shared data.
❖​ Mechanism: Processes can wait for a condition to be true, and another process can signal
the condition when the shared data is modified.
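A minimal pthreads sketch of the mutex-lock mechanism from the list above (the shared variable
and function are invented for illustration):

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
long balance = 0;                      /* shared resource */

void deposit(long amount)
{
    pthread_mutex_lock(&lock);         /* enter the critical section; blocks if
                                          another thread already holds the lock */
    balance += amount;                 /* only one thread executes this at a time */
    pthread_mutex_unlock(&lock);       /* leave the critical section */
}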


●​ CPU Scheduling:
●​ Need of CPU scheduling
CPU scheduling is a critical component of operating systems, and its primary purpose is to
manage and allocate the CPU (Central Processing Unit) resources efficiently among multiple
processes. Several factors necessitate the need for CPU scheduling in an operating system:

1.​ Multiprogramming: ➖
In a multiprogramming environment, multiple processes are loaded into the main memory
simultaneously. CPU scheduling allows the operating system to switch between these processes,
ensuring that each process gets a fair share of CPU time.

2.​ Concurrency:
Modern computer systems are designed to support concurrent execution of multiple processes.
CPU scheduling is required to allow processes to run concurrently and make progress without
waiting for one process to complete before starting another.

3.​ Fairness and Equity: ➖


CPU scheduling ensures fairness and equity in resource allocation. By giving each process a
turn to execute, the system aims to provide a reasonable share of CPU time to all processes,
preventing a single process from monopolising the CPU.

4.​ Maximising CPU Utilisation: ➖


Efficient CPU scheduling aims to maximise the utilisation of the CPU. It reduces the idle time of
the processor, ensuring that there is always a process ready to execute when the CPU becomes
available.

5.​ Response Time: ➖
Users expect prompt responses from interactive applications. CPU scheduling helps in
minimising response time by quickly switching between processes, allowing for smooth and
responsive interactions with the system.

6.​ Throughput: ➖
CPU scheduling influences the system's throughput, which is the number of processes
completed in a given time period. A well-designed scheduling algorithm can enhance system
throughput by optimising the order in which processes are executed.

7.​ Resource Sharing: ➖


CPU scheduling facilitates efficient resource sharing. By allowing multiple processes to run
concurrently, it enables better utilisation of resources, including CPU, memory, and I/O devices.

In summary, CPU scheduling is crucial for efficient and effective utilisation of CPU resources in
modern operating systems. It plays a vital role in managing processes, optimising system
performance, and providing a responsive and equitable computing environment for users.

●​ CPU I/O Burst Cycle ➖


The CPU I/O burst cycle is a fundamental concept in operating systems that describes the
alternating phases a process goes through during its execution. A process typically undergoes a
sequence of CPU bursts and I/O bursts, and this cycle is crucial for understanding the behaviour
of processes in a system. The cycle can be summarised as follows:

1.​ CPU Burst: ➖


Definition: The period during which a process actively uses the CPU for computation.

Characteristics:
❖​ During the CPU burst, the process executes instructions and performs computations.
❖​ The length of the CPU burst varies from process to process and depends on the nature of
the computation being performed.
❖​ After completing the CPU burst, the process typically transitions to an I/O-bound state or
waits for an external event, such as user input or data from an I/O device.

2.​ I/O Burst: ➖


Definition: The period during which a process is waiting for an I/O operation to complete.

Characteristics:
❖​ During the I/O burst, the process is blocked, and the CPU is idle as the process awaits the
completion of an I/O operation (e.g., reading from disk, receiving data from a network).
❖​ The duration of the I/O burst is determined by the speed of the I/O device and the specific
I/O operation being performed.
❖ Once the I/O operation is complete, the process transitions back to the CPU burst phase
and resumes execution.

This CPU I/O burst cycle repeats throughout the lifetime of a process. The process continues to
alternate between CPU bursts and I/O bursts until it completes its execution. The behaviour of
processes and their mix of CPU and I/O operations have implications for system performance
and the effectiveness of CPU scheduling algorithms.
Understanding the CPU I/O burst cycle is crucial for designing efficient scheduling strategies.
Processes that spend most of their time performing I/O operations are classified as I/O-bound,
and those with longer CPU bursts are classified as CPU-bound.
and strategies may be employed based on the characteristics of the processes in the system to
optimise system performance and responsiveness.


●​ Pre-emptive vs. Non-pre-emptive scheduling

Pre-emptive and non-pre-emptive (or preemptive and cooperative) scheduling are two different
approaches to managing the execution of processes in an operating system. These scheduling
strategies determine how the operating system decides when to switch between different tasks or
processes.

1. Pre-emptive Scheduling: ➖


❖​ In pre-emptive scheduling, the operating system can interrupt a currently running process
and force it to give up the CPU, allowing another process to start running.
❖​ The scheduler decides when to preempt a process based on priority, time quantum, or
other factors.
❖​ This approach provides better responsiveness and ensures that no single process can
monopolise the CPU for an extended period.
❖​ Examples of pre-emptive scheduling algorithms include Round Robin, Priority Scheduling,
and Multilevel Queue Scheduling.

2.​ Non-pre-emptive Scheduling:

❖​ Also known as cooperative scheduling, non-pre-emptive scheduling allows a process to


run until it voluntarily relinquishes control of the CPU, such as by completing its execution
or by entering a waiting state.
❖​ The operating system does not forcibly interrupt a running process; instead, the process
itself decides when to release the CPU.
❖​ Non-pre-emptive scheduling can be simpler to implement, but it may lead to potential
issues like poor responsiveness if a long-running task doesn't voluntarily yield the CPU.
❖​ Common examples of non-pre-emptive scheduling algorithms include First Come First
Serve (FCFS) and Shortest Job Next (SJN) or Shortest Job First (SJF).

Comparison:

1.​ Responsiveness:
Pre-emptive scheduling provides better responsiveness since the operating system can interrupt
a process and switch to another one, ensuring that no process monopolises the CPU for too
long.
Non-pre-emptive scheduling relies on processes voluntarily giving up the CPU, which may lead
to slower response times, especially if a process does not yield the CPU.


2.​ Complexity:
Non-pre-emptive scheduling is often simpler to implement since it doesn't require forcibly
interrupting running processes.
Pre-emptive scheduling introduces the complexity of managing context switches and ensuring
fairness among processes.


3.​ Fairness:
Pre-emptive scheduling tends to be fairer as it prevents a single long-running process from
hogging the CPU.
Non-pre-emptive scheduling might lead to unfairness if a process doesn't release the CPU in a
reasonable time.


● Different scheduling criteria, scheduling algorithms (FCFS, SJF, Round-Robin,
Multilevel Queue)

Scheduling criteria and algorithms play a crucial role in managing the execution of processes in
an operating system. Different scheduling algorithms are designed to achieve various goals
based on different criteria. Here are some common scheduling criteria and algorithms:

Scheduling Criteria: ➖
1.​ CPU Utilisation: ➖
➢​ Goal: Keep the CPU as busy as possible.
➢​ Criterion: Maximise CPU utilisation.

2.​ Throughput:
➢​ Goal: Maximise the number of processes completed per unit of time.
➢​ Criterion: Maximise the number of processes finished.

3.​ Turnaround Time: ➖


➢​ Goal: Minimise the total time taken to execute a process.
➢​ Criterion: Minimise the time between the submission of a process and its completion.

4.​ Waiting Time: ➖


➢​ Goal: Minimise the time a process spends waiting in the ready queue.
➢​ Criterion: Minimise the total time a process spends waiting.

5.​ Response Time: ➖


➢​ Goal: Minimise the time it takes for the system to respond to a user's input.

➢​ Criterion: Minimise the time between submitting a request and getting the first response.

6.​ Fairness: ➖
➢​ Goal: Ensure fair allocation of CPU time among competing processes.
➢​ Criterion: Avoid situations where one process dominates the CPU.

Scheduling Algorithms:
1.​ First-Come-First-Serve (FCFS): ➖
➔​ Principle: The first process that arrives is the first to be executed.
➔​ Advantage: Simple to understand and implement.
➔ Disadvantage: Poor turnaround time and waiting time, known as the "convoy effect" (a worked example appears at the end of this section).


2.​ Shortest Job Next (SJN) or Shortest Job First (SJF):
➔​ Principle: Execute the process with the shortest burst time first.
➔​ Advantage: Minimises waiting time and turnaround time.
➔​ Disadvantage: Difficult to predict the burst time accurately.

3.​ Round-Robin (RR): ➖


➔​ Principle: Each process gets a fixed time slice (time quantum) to execute, and then the
CPU is given to the next process in the ready queue.
➔​ Advantage: Fairness, simple implementation.
➔​ Disadvantage: Poor turnaround time for long processes, may result in high
context-switching overhead.

4.​ Multilevel Queue Scheduling: ➖


➔​ Principle: Processes are divided into multiple priority levels, and each queue has its own
scheduling algorithm.
➔​ Advantage: Supports the execution of different types of processes with different
scheduling needs.
➔​ Disadvantage: May lead to priority inversion and may not be suitable for all scenarios.
These algorithms and criteria are used based on the specific requirements and characteristics of
the system. In some cases, a combination of algorithms or a variation of these algorithms (such
as Multilevel Feedback Queue) might be employed to achieve a balance between different
objectives.
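As a worked example of these criteria, the sketch below computes average waiting and turnaround
time under FCFS for three assumed burst times (24, 3 and 3 time units, the classic
convoy-effect case):

#include <stdio.h>

int main(void)
{
    int burst[] = {24, 3, 3};               /* assumed CPU bursts, in arrival order */
    int n = 3, wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;                 /* process i waits for all earlier bursts */
        total_tat  += wait + burst[i];      /* turnaround = waiting + own burst */
        wait       += burst[i];
    }
    /* waits are 0, 24 and 27: average wait = 17, average turnaround = 27 */
    printf("avg wait = %.2f, avg turnaround = %.2f\n",
           total_wait / (double)n, total_tat / (double)n);
    return 0;
}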

UNIT ➖02
Memory Management:

●​ Introduction➖
Memory management in operating systems is a critical aspect that involves handling the
computer's primary memory, also known as RAM (Random Access Memory). It is responsible for
efficient allocation and deallocation of memory to processes, ensuring that the system operates
smoothly and optimally. Memory management plays a crucial role in the overall performance and


stability of an operating system.
Here's an introduction to the key aspects of memory management:

Objectives of Memory Management: ➖

1.​ Relocation:
➔​ Goal: Allow programs to be placed anywhere in the memory.

2.​ Protection:
➔​ Goal: Ensure that one process cannot interfere with the execution of another process or
the operating system.

3.​ Sharing:
➔​ Goal: Enable multiple processes to share a portion of the memory.

4.​ Logical Organisation: ➖


➔​ Goal: Present a logical view of memory to processes, regardless of the physical placement
of data.
5.​ Physical Organisation: ➖
➔​ Goal: Optimise the use of physical memory to accommodate multiple processes.

Key Components of Memory Management:

1. Memory Allocation:


➔​ Dynamic Allocation: Memory is allocated at runtime based on the program's needs.
➔​ Static Allocation: Memory is allocated at compile-time and remains fixed during the
program's execution.

2. Memory Deallocation:


➔​ Explicit Deallocation: The programmer or the program explicitly releases memory when it is
no longer needed.
➔​ Implicit Deallocation: The operating system automatically reclaims memory when a
process terminates.
3. Address Binding:
➔​ Compile Time: Addresses are assigned during the compilation phase.
➔​ Load Time: Addresses are assigned when a program is loaded into memory.
➔​ Execution Time (Run Time): Addresses are determined during the execution of the
program.

4. Swapping:
➔​ Goal: Move a process from main memory to secondary storage (and vice versa) to allow
more processes to fit into memory.
➔​ Swapping Algorithm: Determines which processes to swap in and out.

5. Partitioning:
➔​ Fixed Partitioning: Memory is divided into fixed-sized partitions, and each partition can hold
one process.

➔​ Dynamic Partitioning: Memory is divided into variable-sized partitions to accommodate
processes of different sizes.

6. Fragmentation:
➔​ Internal Fragmentation: Wasted memory within a partition due to the allocation of more
space than needed.
➔​ External Fragmentation: Free memory blocks scattered throughout the system that are too
small to be allocated.

Memory Management Techniques:

1.​ Paging:
➔​ Pages: Fixed-size blocks in both physical and logical memory.
➔​ Page Table: Maps logical to physical addresses.

2.​ Segmentation:
➔​ Segments: Variable-sized blocks representing different parts of a program (code, data,
stack).
➔​ Segment Table: Maps each segment to its physical location.

3.​ Virtual Memory:


➔​ Page Faults: When a required page is not in memory.
➔​ Page Replacement Algorithms: Decide which pages to swap out when a new page is
needed.

●​ Address Binding ➖
Address binding in memory management refers to the process of associating a logical address
(also known as a virtual address) with a physical address in the computer's memory. The primary
goal of address binding is to provide a means for processes to access memory in a controlled
and organised manner. There are different phases of address binding in the context of
program execution:
1.​ Compile-Time Address Binding:

➔​ Static Addresses: Addresses are assigned to program variables and instructions during
the compilation phase.
➔​ Advantage: Fast execution, as the addresses are known beforehand.
➔​ Disadvantage: Lack of flexibility, as the program cannot adapt to changes in memory
availability.

2.​ Load-Time Address Binding: ➖


➔​ Addresses Assigned at Load Time: Addresses are determined and assigned when a
program is loaded into memory.
➔​ Advantage: More flexibility than compile-time binding.
➔ Disadvantage: Still less flexible than other methods, as changes require reloading the
program.

3.​ Execution-Time (Run-Time) Address Binding: ➖


➔​ Dynamic Addresses: Addresses are determined during the execution of the program.
➔​ Advantage: Maximum flexibility, allowing adaptation to changing memory requirements.
➔​ Disadvantage: Overhead due to the need for additional hardware support and runtime
checks.
4.​ Logical Address vs. Physical Address: ➖
➔​ Logical Address (Virtual Address): The address generated by the CPU during program
execution.
➔​ Physical Address: The actual location in the computer's memory hardware.

5.​ Memory Management Unit (MMU): ➖


➔​ Role: Responsible for converting logical addresses to physical addresses during runtime.
➔​ Translation Lookaside Buffer (TLB): A cache that stores recently translated
virtual-to-physical address mappings to speed up the translation process.

Address Binding Methods: ➖


1.​ Static Binding:➖
➔​ Compile-Time and Load-Time: The addresses are fixed before program execution.
➔​ Advantage: Simplicity and speed.
➔​ Disadvantage: Lack of flexibility.

2.​ Dynamic Binding: ➖


➔​ Execution-Time: Addresses are determined during program execution.
➔​ Advantage: Maximum flexibility.
➔​ Disadvantage: Increased overhead and potential performance impact.

Example: ➖
Consider a simple program with a variable x:

int x = 10;

1. Compile-Time Binding: ➖


➔​ The compiler assigns a fixed memory location for x during compilation.

2. Load-Time Binding: ➖


➔​ The loader assigns an actual memory address for x when the program is loaded into
memory.

3. Execution-Time Binding: ➖


➔​ The address of x is determined during program execution by the MMU.

In modern operating systems, dynamic binding and virtual memory techniques are commonly
used to provide flexibility, efficient memory utilisation, and isolation between processes. These
methods involve the use of page tables, segmentation, and demand paging to manage the
mapping between logical and physical addresses dynamically.
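A tiny C program makes the run-time case visible: the address printed for x is a logical
(virtual) address chosen during execution, which the MMU maps to a physical address; on systems
with address-space randomisation it typically differs between runs.

#include <stdio.h>

int main(void)
{
    int x = 10;
    /* &x is a logical (virtual) address; the MMU translates it to a
       physical address at execution time. */
    printf("x lives at virtual address %p\n", (void *)&x);
    return 0;
}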

●​ Relocation, loading, linking, memory sharing and protection ➖


1.​ Relocation ➖
The process of assigning load addresses for a program's data and code, and adjusting the code
and data to reflect the assigned addresses. Relocation also connects symbolic references with
symbolic definitions.


2.​ Loading
The process of loading a program from secondary memory to the main memory for execution.

3.​ Linking ➖
The process of collecting and combining various pieces of code and data into a single file that
can be loaded into memory and executed.

4.​ Memory sharing ➖


An operating-system feature that allows database server threads and processes to share data by
sharing access to pools of memory.

5.​ Protection ➖
A mechanism that controls the access of programs, processes, or users to the resources defined
by a computer system. Protection enforces memory protection by preventing processes from
accessing or modifying memory regions that do not belong to them.

Memory management facilitates data sharing among processes, enabling more efficient resource
utilisation.

●​ Paging and Segmentation
Paging and segmentation are two fundamental memory management techniques used in
operating systems to manage the allocation of memory to processes efficiently. Each technique
offers unique advantages and is suited to different types of systems and applications.
Let's explore both paging and segmentation:

1.​ Paging: ➖
Definition: Paging is a memory management scheme that divides physical memory into fixed-size
blocks called "frames" and logical memory into fixed-size blocks called "pages." A process's
address space is likewise divided into these fixed-size pages.

Key Features: ➖
➔​ Fixed-size Blocks: Both physical memory and logical memory are divided into fixed-size
blocks.

➔ Address Translation: Logical addresses generated by the CPU are divided into page
numbers and page offsets. Page numbers are used to index into a page table, which
translates them into physical frame numbers (a worked sketch follows this subsection).
➔​ Flexible Allocation: Allows processes to be allocated non-contiguous memory locations.
➔​ Simplifies Memory Management: Eliminates external fragmentation by using fixed-size
pages.
➔​ Page Faults: When a required page is not present in memory, a page fault occurs, and the
operating system brings the required page into memory from secondary storage (e.g.,
disk).

Advantages of Paging: ➖
➔​ Simplicity: Paging is simpler to implement compared to segmentation.
➔​ Flexible Allocation: Allows processes to be allocated memory in non-contiguous chunks.
➔​ Solves Fragmentation: Eliminates external fragmentation by using fixed-size pages.

Disadvantages of Paging: ➖
➔​ Internal Fragmentation: Can suffer from internal fragmentation, where the last page of a
process may not be fully utilised.
➔​ Complexity in Page Table Management: Large page tables can be complex to manage,
especially in systems with large address spaces.
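The translation described above is simple arithmetic. A hedged sketch, assuming a hypothetical
4 KB page size and a toy four-entry page table:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                      /* assumed page size (4 KB) */

int main(void)
{
    uint32_t page_table[] = {7, 3, 11, 2};   /* toy table: page number -> frame number */
    uint32_t logical = 8200;                 /* an arbitrary logical address */

    uint32_t page    = logical / PAGE_SIZE;  /* page number = high-order bits */
    uint32_t offset  = logical % PAGE_SIZE;  /* offset = low-order bits */
    uint32_t physical = page_table[page] * PAGE_SIZE + offset;

    /* 8200 = page 2, offset 8; frame 11 gives physical 45064 */
    printf("logical %u -> physical %u\n", logical, physical);
    return 0;
}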

2.​ Segmentation: ➖
Definition: Segmentation is a memory management technique that divides the logical address
space of a process into variable-sized segments, such as code, data, stack, etc. Each segment is
treated as a logical unit and is assigned a base address and length.

Key Features: ➖
➔​ Variable-sized Segments: Allows for a more flexible allocation of memory compared to
paging.
➔​ Logical Organisation: Corresponds to the logical structure of the program, such as code,
data, and stack segments.
➔​ Address Translation: Each segment is assigned a base address and length. Translation of
logical addresses to physical addresses involves adding the base address of the segment
to the logical address.

Advantages of Segmentation: ➖
➔​ Logical Organisation: Reflects the logical structure of programs, making it easier to
manage and understand memory usage.
➔​ Supports Growing Data Structures: Allows data structures to grow dynamically without the
need for contiguous memory allocation.

Disadvantages of Segmentation: ➖
➔​ Fragmentation: Can suffer from external fragmentation, where free memory is fragmented
into small blocks that are unusable.
➔ Complex Address Translation: Requires additional hardware support or software overhead
for address translation, especially in systems with large numbers of segments.

Comparison: ➖
1.​ Flexibility: Segmentation offers more flexibility in memory allocation due to variable-sized
segments, while paging uses fixed-size blocks.
2.​ Fragmentation: Paging eliminates external fragmentation but may suffer from internal
fragmentation. Segmentation can suffer from external fragmentation.
3.​ Address Translation: Paging involves translation of page numbers to frame numbers using
a page table. Segmentation involves translation of segment numbers to base addresses.
4.​ Implementation Complexity: Segmentation can be more complex to implement and
manage due to variable-sized segments and potential fragmentation issues.


●​ Virtual memory: basic concepts of demand paging, page replacement
algorithms

Virtual memory is a crucial concept in modern operating systems, allowing processes to access
more memory than physically available and providing several benefits such as efficient memory
utilisation, protection, and simplifying programming. Two fundamental concepts in virtual memory
management are demand paging and page replacement algorithms.

1.​ Demand Paging: ➖


Definition: Demand paging is a memory management technique where pages are loaded into
memory only when they are needed (on-demand) rather than loading the entire process into
memory at once. When a process attempts to access a page that is not currently in memory, a
page fault occurs, and the operating system brings the required page into memory from
secondary storage (e.g., disk).

Key Features: ➖
A.​ Lazy Loading: Pages are loaded into memory only when they are accessed, reducing initial
memory overhead.
B.​ Efficient Use of Memory: Only the pages needed for execution are loaded, allowing for
efficient memory utilisation.
C.​ Reduced I/O Overhead: Pages are loaded into memory on-demand, reducing the initial I/O
overhead compared to loading the entire process into memory upfront.

2. Page Replacement Algorithms: ➖


Page replacement algorithms are used by the operating system to decide which pages to evict
from memory when the available memory is full and a new page needs to be loaded. The goal of
these algorithms is to minimise page faults and optimise system performance.

1. Optimal Page Replacement:


➔​ Principle: Evicts the page that will not be used for the longest time in the future.
➔​ Advantage: Provides the lowest possible page fault rate (theoretical optimum).
➔ Disadvantage: Requires future knowledge of page access patterns, which is generally not
feasible.

2. FIFO (First-In-First-Out):
➔​ Principle: Evicts the page that was brought into memory earliest.
➔ Advantage: Simple implementation (a short C sketch follows this list).
➔​ Disadvantage: Suffers from the "Belady's Anomaly," where increasing the number of
frames can increase the page fault rate.
3. LRU (Least Recently Used):
➔​ Principle: Evicts the page that has not been accessed for the longest time.
➔​ Advantage: Tends to perform well in practice.
➔​ Disadvantage: Requires maintaining a record of the access history for each page, which
can be costly.
4. LFU (Least Frequently Used):
➔​ Principle: Evicts the page that has been accessed the least frequently.
➔​ Advantage: Suitable for scenarios where repeated accesses to the same pages occur.
➔​ Disadvantage: May suffer from the "frequency anomaly," where a page that was heavily
used in the past but not recently is evicted.

5. Clock (Second Chance):


➔​ Principle: Similar to FIFO but with a "clock hand" that indicates the oldest page that has not
been accessed. Pages are given a second chance before being evicted.
➔​ Advantage: Simple implementation with better performance than FIFO.
➔​ Disadvantage: May not perform optimally in all scenarios.
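A minimal sketch of FIFO replacement, counting page faults for an assumed reference string and
three frames (both are illustrative values, not taken from these notes):

#include <stdio.h>

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};    /* assumed page reference string */
    int nrefs = 8, nframes = 3;
    int frames[3] = {-1, -1, -1};              /* -1 marks an empty frame */
    int next = 0, faults = 0;                  /* 'next' points at the oldest frame */

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {                            /* page fault: evict the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);      /* 7 faults for this string */
    return 0;
}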
Conclusion:
Demand paging and page replacement algorithms are essential components of virtual memory
management in operating systems. Demand paging allows for efficient memory utilisation by
loading pages into memory only when needed, while page replacement algorithms determine
which pages to evict when memory is full.
UNIT ➖03
Input / Output Device Management:

●​ I/O devices and controllers, device drivers; disk storage ➖


I/O Devices and Controllers: ➖
I/O Devices:
Examples include keyboards, mice, monitors, printers, network adapters, storage devices (hard
drives, SSDs), etc.
These devices interact with the computer system to perform input/output operations.

I/O Controllers:
I/O controllers (also called I/O processors or I/O interfaces) act as intermediaries between I/O
devices and the CPU.
They manage the communication between the CPU and the I/O devices, handling data transfer,
status monitoring, and error handling.

Device Drivers: ➖
1.​ Definition: Device drivers are software components that facilitate communication between
the operating system and hardware devices.

2.​ Responsibilities:
➔ Initialising and configuring devices during system startup.
➔​ Handling device-specific commands and requests.
➔​ Managing data transfer between devices and memory.
➔​ Handling interrupts and managing device status.

3.​ Types of Device Drivers: ➖


A.​ Kernel Space Drivers: Run in privileged mode within the operating system kernel.
B.​ User Space Drivers: Run in user space and interact with the kernel through system calls
or device-specific APIs.

Disk Storage: ➖
1.​ Hard Disk Drives (HDDs) and Solid State Drives (SSDs): ➖
HDDs use rotating magnetic disks to store data, while SSDs use flash memory.
Both provide non-volatile storage for operating system files, user data, and applications.

2. Disk Organisation: ➖


Partitioning: Dividing the disk into logical sections called partitions.
File Systems: Structures for organising and accessing data on disk partitions (e.g., FAT, NTFS,
ext4).
Boot Sector: Contains the bootloader and partition table information required to boot the
operating system.
Master Boot Record (MBR) and GUID Partition Table (GPT): Disk partitioning schemes used to
define disk partitions and their properties.

3.​ Disk I/O Operations: ➖


Reading and Writing: Data is transferred between disk storage and memory through read and
write operations.
Caching: Operating systems use disk caches to store frequently accessed data in memory,
improving I/O performance.
Scheduling: Disk scheduling algorithms determine the order in which I/O requests are serviced to
optimise disk access and reduce latency.

4.​ Disk Management: ➖

Disk Formatting: Preparing a disk for use by initialising its file system structures.
Disk Maintenance: Tasks such as disk defragmentation, error checking, and bad sector detection.
RAID (Redundant Array of Independent Disks): Techniques for combining multiple disks into a
single logical unit to improve performance, reliability, or both.

Conclusion: ➖
I/O devices and controllers, device drivers, and disk storage are integral components of operating
systems, facilitating communication between the hardware and software layers and enabling
efficient input/output operations. Understanding these components is essential for designing and
managing robust and high-performance operating systems.


●​ File Management: Basic concepts, file operations, access methods, directory
structures and management, remote file systems;
File management is a core aspect of operating systems, responsible for organising and
manipulating data stored on secondary storage devices such as hard drives and SSDs. Here are
the basic concepts, operations, access methods, directory structures, and management
techniques, as well as remote file systems commonly found in operating systems:

Basic Concepts: ➖

1.​ File:
A collection of related information stored on secondary storage.
Files can represent documents, programs, images, databases, etc.

2.​ File Attributes:➖


Metadata associated with a file, including name, type, size, location, creation/modification
timestamps, permissions, etc.

3.​ File System:
A method for organising and storing files on a storage device, providing methods for file access
and management.

4.​ File Operations: ➖


A.​ Creation: Creating a new file.
B.​ Opening: Opening an existing file for reading, writing, or both.
C.​ Reading: Retrieving data from a file.
D.​ Writing: Storing data in a file.
E.​ Closing: Releasing resources associated with an open file.
F.​ Deletion: Removing a file from the file system.
G.​Renaming: Changing the name of a file.
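These operations map directly onto C's standard I/O library; a short sketch (the file name is
made up, and most error handling is omitted):

#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("notes.txt", "w");        /* creation + opening for writing */
    if (fp == NULL) return 1;
    fputs("operating systems\n", fp);          /* writing */
    fclose(fp);                                /* closing */

    fp = fopen("notes.txt", "r");              /* opening for reading */
    char line[64];
    if (fp && fgets(line, sizeof line, fp))    /* reading */
        printf("read back: %s", line);
    if (fp) fclose(fp);

    rename("notes.txt", "old-notes.txt");      /* renaming */
    remove("old-notes.txt");                   /* deletion */
    return 0;
}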

Access Methods: ➖

1.​ Sequential Access: ➖
A.​ Accessing data in a linear manner, from the beginning to the end of the file.
B.​ Suitable for tasks like reading logs or processing data sequentially.

2.​ Direct Access (Random Access): ➖


A.​ Accessing data at any location within the file directly.
B.​ Achieved through file pointers or byte offsets.
C.​ Suitable for tasks like database operations or random file access.
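
The sketch below shows direct access via a byte offset with lseek(). The file name "records.dat" and the fixed-size record layout are invented for illustration, assuming such a file of records already exists.

```c
/* Sketch: random access by computing a byte offset into a record file. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

struct record { int id; char name[28]; };    /* hypothetical fixed-size record */

int main(void) {
    int fd = open("records.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct record r;
    int k = 5;                                /* jump straight to record #5 */
    lseek(fd, (off_t)k * sizeof r, SEEK_SET); /* no need to read records 0..4 */
    if (read(fd, &r, sizeof r) == sizeof r)
        printf("record %d: id=%d name=%s\n", k, r.id, r.name);

    close(fd);
    return 0;
}
```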

Directory Structures and Management:

1.​ Single-Level Directory: ➖


A simple directory structure where all files are stored in a single directory.
Lacks organisation and scalability.

2.​ Two-Level Directory: ➖


Organises files into user directories and system directories.
Each user has their own directory, providing isolation.

3.​ Tree-Structured Directory: ➖


Organises directories and subdirectories in a hierarchical tree structure.
Provides better organisation and scalability.

4.​ Acyclic-Graph Directory: ➖


Allows directories to have multiple parents, forming a directed acyclic graph.
Supports sharing files across directories.

5.​ General Graph Directory: ➖


Allows directories to form arbitrary graphs, including cycles.
Less common due to complexity and potential for inconsistencies.
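
As a small demonstration of navigating a tree-structured directory, the sketch below lists one level of a directory using the POSIX directory API; the starting path "." is simply the current directory.

```c
/* Sketch: listing the entries of one directory with opendir()/readdir(). */
#include <stdio.h>
#include <dirent.h>

int main(void) {
    DIR *dir = opendir(".");
    if (!dir) { perror("opendir"); return 1; }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)    /* one entry per file/subdirectory */
        printf("%s\n", entry->d_name);

    closedir(dir);
    return 0;
}
```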
Remote File Systems:

1.​ Network File System (NFS): ➖


Allows remote access to files over a network using a client-server model.
Provides transparent access to remote files as if they were local.

2.​ Server Message Block (SMB): ➖


A protocol for accessing shared files, printers, and other resources on a network.
Commonly used in Windows-based environments.

3.​ File Transfer Protocol (FTP): ➖


Allows file transfer between a client and a server over a network.
Supports various operations like upload, download, list, delete, etc.

File Management Functions:

1.​ File Allocation:➖


Allocating storage space for files on disk.
Techniques include contiguous allocation, linked allocation, indexed allocation, etc.
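
The sketch below simulates linked allocation with a FAT-style table, where each entry stores the number of the next block in a file's chain. The block numbers are made up for the example.

```c
/* Illustrative sketch of linked allocation via a FAT-style next-block table. */
#include <stdio.h>

#define END_OF_FILE -1

int main(void) {
    int fat[16];
    for (int i = 0; i < 16; i++) fat[i] = 0;

    /* a file occupying blocks 2 -> 7 -> 5, in that order */
    fat[2] = 7;
    fat[7] = 5;
    fat[5] = END_OF_FILE;

    int block = 2;                   /* the directory entry stores the first block */
    printf("file blocks:");
    while (block != END_OF_FILE) {
        printf(" %d", block);
        block = fat[block];          /* follow the chain to the next block */
    }
    printf("\n");
    return 0;
}
```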

2.​ File Protection: ➖

Enforcing access control to files through permissions (read, write, execute) and ownership.

3.​ File Sharing: ➖

Allowing multiple users or processes to access the same file simultaneously.

4.​ File Backup and Recovery: ➖


Creating backups of files to prevent data loss and restoring files from backups in case of failure.

Conclusion: ➖
File management is a fundamental aspect of operating systems, responsible for organising,
accessing, and manipulating data stored on secondary storage devices. Understanding basic file
concepts, operations, access methods, directory structures, remote file systems, and
management techniques is essential for efficient and secure data handling in operating systems.

●​ File Protection ➖
File protection is important for data security and ensures that sensitive information remains
confidential and secure. Operating systems provide various mechanisms and techniques to
protect files, such as:
1.​ File permissions
2.​ Encryption
3.​ Access control lists
4.​ Auditing
5.​ Physical file security

Here are some other file protection features in operating systems:

1.​ Secure File System (SFS): Uses cryptographic techniques to provide file data security.
2.​ NTFS: The default file system for the Windows operating system family. It offers a flexible
security model that allows administrators to control how users and groups can interact with
folders and files.
3.​ Linux: Built on a Unix-like architecture, which is known for its security features and
robustness. It also has built-in security features like file permissions and user accounts that
help to prevent unauthorised access to system files and resources.
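
A hedged sketch of the Unix permission model mentioned above: stat() reads a file's permission bits and chmod() changes them. The file name "notes.txt" is a placeholder.

```c
/* Sketch: inspecting and tightening Unix file permissions. */
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat st;
    if (stat("notes.txt", &st) != 0) { perror("stat"); return 1; }

    printf("owner can read:  %s\n", (st.st_mode & S_IRUSR) ? "yes" : "no");
    printf("others can read: %s\n", (st.st_mode & S_IROTH) ? "yes" : "no");

    /* restrict the file to owner read/write only (mode 0600) */
    if (chmod("notes.txt", S_IRUSR | S_IWUSR) != 0) { perror("chmod"); return 1; }
    return 0;
}
```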

UNIT ➖04
Advanced Operating systems:


●​ Introduction to Distributed Operating system, Characteristics, architecture, Issues,
Communication & Synchronisation;

A Distributed Operating System (DOS) is an operating system that runs on a network of interconnected computers and coordinates their resources to provide users with a single, integrated computing environment. Here's an introduction to distributed operating systems, covering characteristics, architecture, issues, communication, and synchronisation:

Characteristics of Distributed Operating Systems:

1.​ Resource Sharing: ➖
Distributed operating systems enable sharing of hardware resources such as processors,
memory, and storage across multiple nodes in the network.

2.​ Transparency: ➖
Users perceive the distributed system as a single, cohesive entity, hiding the complexities of the
underlying network and hardware.

3.​ Concurrency: ➖
H
Multiple processes can execute concurrently on different nodes, increasing system throughput
and performance.

4.​ Scalability:➖
SA
Distributed systems can easily scale by adding or removing nodes, allowing for increased
processing power and storage capacity.

5.​ Fault Tolerance: ➖


Distributed systems are designed to tolerate failures, ensuring continued operation even if
individual nodes or components fail.

6.​ Heterogeneity: ➖
Distributed systems can consist of diverse hardware and software platforms, allowing integration
of different technologies.

Architecture of Distributed Operating Systems: ➖


1.​ Client-Server Model: ➖
Common architecture where clients request services from servers over the network.
Servers provide resources or services, and clients consume them.

2.​ Peer-to-Peer Model:
Distributed architecture where all nodes have equal capabilities and can act as both clients and
servers.
Nodes communicate directly with each other to share resources and services.

3.​ Layered Architecture: ➖


Organises system components into layers, with each layer providing specific functionality and
services.
Examples include the OSI model and TCP/IP protocol stack.

Issues in Distributed Operating Systems: ➖


1.​ Communication Overhead: ➖
Communication between nodes introduces overhead and latency, impacting system performance.

2.​ Concurrency Control: ➖


Ensuring consistency and integrity of shared resources accessed concurrently by multiple
processes.

3.​ Fault Tolerance: ➖

Handling node failures and ensuring continuous operation of the distributed system.

4.​ Security: ➖

Protecting data and resources from unauthorised access, interception, and tampering.

5.​ Scalability: ➖
Managing system growth and ensuring that performance scales with the number of nodes.

Communication and Synchronisation: ➖


1.​ Interprocess Communication (IPC): ➖
➔​ Mechanisms for processes running on different nodes to exchange data and synchronise
their actions.
➔​ Examples include message passing, remote procedure calls (RPC), and shared memory.
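
A minimal sketch of message passing between nodes over a UDP socket follows. The address 127.0.0.1 and port 5000 are placeholders; in a real distributed system the message would target another node on the network.

```c
/* Sketch: one node sending a message to a peer via a UDP datagram. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                 /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &peer.sin_addr);

    const char msg[] = "hello from node A";
    sendto(sock, msg, strlen(msg), 0,            /* one message to the peer node */
           (struct sockaddr *)&peer, sizeof peer);

    close(sock);
    return 0;
}
```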

2.​ Distributed Synchronisation: ➖


➔​ Coordinating the activities of distributed processes to ensure consistency and avoid
conflicts.
➔​ Techniques include distributed locks, timestamps, and distributed algorithms for mutual
exclusion and coordination.
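
To illustrate the timestamp technique mentioned above, here is a sketch of Lamport logical clocks. The events are simulated locally for brevity; in a real system the "receive" timestamp would arrive inside a network message.

```c
/* Illustrative sketch of Lamport logical clocks for event ordering. */
#include <stdio.h>

static int clock_a = 0, clock_b = 0;

static int local_event(int *clk) { return ++(*clk); }

static int receive_event(int *clk, int msg_ts) {
    *clk = (*clk > msg_ts ? *clk : msg_ts) + 1;   /* max(local, message) + 1 */
    return *clk;
}

int main(void) {
    local_event(&clock_a);                  /* A: internal event, clock_a = 1 */
    int ts = local_event(&clock_a);         /* A: send message,   clock_a = 2 */
    receive_event(&clock_b, ts);            /* B: receive,        clock_b = 3 */
    local_event(&clock_b);                  /* B: internal event, clock_b = 4 */
    printf("A=%d B=%d\n", clock_a, clock_b);
    return 0;
}
```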

3.​ Distributed File Systems (DFS): ➖


➔​ Providing access to shared files and data across distributed nodes in the network.
➔​ Ensuring data consistency, replication, and fault tolerance.

Conclusion:
Distributed operating systems extend traditional operating systems to support computing
environments spanning multiple nodes in a network. They offer resource sharing, transparency,
concurrency, fault tolerance, and scalability. However, they also present challenges such as
communication overhead, concurrency control, fault tolerance, security, and scalability. Effective
communication and synchronisation mechanisms are essential for coordinating activities and
maintaining consistency in distributed systems. Overall, distributed operating systems play a vital
role in enabling collaborative computing, resource sharing, and efficient utilisation of distributed
resources.


●​ Introduction Multiprocessor Operating system, Architecture, Structure,
Synchronisation & Scheduling;

A Multiprocessor Operating System (MPOS) is an operating system designed to run on systems
with multiple processors (also known as multiprocessor systems or parallel systems). These
systems have more than one central processing unit (CPU) capable of executing multiple tasks
simultaneously. Here's an introduction to multiprocessor operating systems, covering
architecture, structure, synchronisation, and scheduling:

Architecture of Multiprocessor Operating Systems:


1.​ Symmetric Multiprocessing (SMP): ➖
I.​ In SMP systems, all processors have equal access to memory and peripheral devices.
II.​ Each processor executes the same operating system kernel and has equal access to
system resources.
III.​ SMP systems typically use a shared-memory architecture, where all processors access a
common main memory.

2.​ Asymmetric Multiprocessing (AMP): ➖


I.​ In AMP systems, one processor (the master processor) is responsible for running the
operating system kernel and managing system resources.
II.​ Additional processors (slave processors) perform specific tasks assigned by the master
processor.
III.​ AMP systems can have heterogeneous processors with varying capabilities.

Structure of Multiprocessor Operating Systems: ➖


1.​ Kernel:➖
I.​ The kernel of a multiprocessor operating system manages system resources, including
processors, memory, I/O devices, and scheduling.
II.​ It provides services such as process management, memory management, file system
management, and I/O management.

2.​ Schedulers:
I.​ Multiprocessor operating systems use scheduling algorithms to allocate processor time to
processes or threads.
II.​ Schedulers may be centralised or distributed, depending on the system architecture.

3.​ Interprocess Communication (IPC): ➖


I.​ Mechanisms for processes or threads running on different processors to exchange data
and synchronise their actions.
II.​ Examples include message passing, shared memory, and remote procedure calls (RPC).

4.​ Memory Management: ➖


I.​ Managing memory allocation and access across multiple processors.
II.​ Ensuring data consistency and coherence in shared-memory systems.

Synchronisation in Multiprocessor Operating Systems:

1.​ Mutual Exclusion:➖


I.​ Ensuring that only one process or thread accesses a shared resource at a time.
II.​ Techniques include locks, semaphores, and atomic operations.
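
The sketch below shows the lock technique with a POSIX pthread mutex: two threads increment a shared counter, and the lock ensures only one is inside the critical section at a time. Compile with -pthread.

```c
/* Sketch: mutual exclusion on a shared counter using a pthread mutex. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);   /* leave critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}
```

Without the mutex, the two increments would interleave and the final count would usually fall short of 200000.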

2.​ Concurrency Control: ➖

I.​ Managing access to shared data structures to prevent data corruption and ensure consistency.
II.​ Techniques include transactional memory, multiversion concurrency control, and distributed locking protocols.

3.​ Barrier Synchronisation: ➖


I.​ Synchronising multiple processes or threads at predefined synchronisation points.
II.​ Ensuring that all processors reach a specified point before proceeding.
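
A minimal sketch of this idea with a POSIX barrier (available on Linux; compile with -pthread): all threads must finish "phase 1" before any of them starts "phase 2".

```c
/* Sketch: barrier synchronisation across four threads. */
#include <stdio.h>
#include <pthread.h>

#define THREADS 4
static pthread_barrier_t barrier;

static void *phase_worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld: phase 1 done\n", id);
    pthread_barrier_wait(&barrier);        /* wait until all reach this point */
    printf("thread %ld: phase 2 starts\n", id);
    return NULL;
}

int main(void) {
    pthread_t t[THREADS];
    pthread_barrier_init(&barrier, NULL, THREADS);
    for (long i = 0; i < THREADS; i++)
        pthread_create(&t[i], NULL, phase_worker, (void *)i);
    for (int i = 0; i < THREADS; i++)
        pthread_join(t[i], NULL);
    pthread_barrier_destroy(&barrier);
    return 0;
}
```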

Scheduling in Multiprocessor Operating Systems: ➖


1.​ Processor Allocation:➖
I.​ Assigning processes or threads to available processors based on scheduling policies.
II.​ Policies include load balancing, affinity scheduling, and priority-based scheduling.

2.​ Load Balancing: ➖


I.​ Distributing tasks evenly across available processors to maximise system throughput and
resource utilisation.
II.​ Techniques include task migration, dynamic scheduling, and work-stealing algorithms.

3.​ Scheduling Algorithms: ➖


I.​ Algorithms for determining the order in which processes or threads are executed on
processors.
II.​ Examples include round-robin scheduling, priority scheduling, and shortest job first (SJF)
scheduling.
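
As a rough illustration of round-robin scheduling, the simulation below runs three processes with a time quantum of 2 units; the burst times are made-up values for the example.

```c
/* Illustrative simulation of round-robin CPU scheduling (quantum = 2). */
#include <stdio.h>

int main(void) {
    int burst[] = {5, 3, 8};             /* remaining CPU time per process */
    int n = 3, quantum = 2, time = 0, remaining = n;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] <= 0) continue;
            int slice = burst[i] < quantum ? burst[i] : quantum;
            printf("t=%2d: run P%d for %d\n", time, i, slice);
            time += slice;
            burst[i] -= slice;
            if (burst[i] == 0) {
                printf("t=%2d: P%d finished\n", time, i);
                remaining--;
            }
        }
    }
    return 0;
}
```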

Conclusion:
Multiprocessor operating systems provide a scalable and efficient platform for parallel computing
by harnessing the power of multiple processors. They manage system resources, coordinate task
execution, and ensure synchronisation and scheduling across multiple processors. With their
sophisticated architecture, structure, synchronisation mechanisms, and scheduling algorithms,
multiprocessor operating systems enable high-performance computing and support a wide range
of parallel applications and workloads.


●​ Introduction to Real-Time Operating System, Characteristics, Structure &
Scheduling

A Real-Time Operating System (RTOS) is an operating system designed to meet the stringent
timing requirements of real-time systems, where tasks must be completed within specific
deadlines. RTOSes are commonly used in embedded systems, industrial automation, automotive
systems, aerospace applications, and other environments where timing predictability is critical.

Here's an introduction to real-time operating systems, covering characteristics, structure, and scheduling:
Characteristics of Real-Time Operating Systems:

1.​ Determinism: ➖
I.​ RTOSes provide deterministic behaviour, where the timing of task execution and system
response is predictable and consistent.

2.​ Hard vs. Soft Real-Time: ➖


I.​ Hard real-time systems have strict deadlines that must be met; missing a deadline can
lead to system failure.
II.​ Soft real-time systems have less stringent timing requirements, and occasional missed
deadlines may be tolerated.

3.​ Task Prioritization: ➖


I.​ RTOSes support task prioritisation to ensure that high-priority tasks are executed before
lower-priority tasks, guaranteeing timely response to critical events.

4.​ Interrupt Handling: ➖


I.​ RTOSes provide efficient interrupt handling mechanisms to respond to external events
promptly and minimise interrupt latency.

5.​ Minimal Overhead: ➖ RTOSes are designed to have low overhead, minimising
context-switching time and maximising system responsiveness.

Structure of Real-Time Operating Systems:

1.​ Kernel:➖
I.​ The kernel of an RTOS provides core services such as task scheduling, interrupt handling,
memory management, and interprocess communication.
II.​ Kernels in RTOSes are typically small and optimised for fast response times.

2.​ Task Management: ➖


I.​ RTOSes support task creation, scheduling, and termination, allowing developers to define
and manage tasks with specific timing requirements.

3.​ Interrupt Handling: ➖


I.​ RTOSes provide mechanisms for efficient handling of hardware and software interrupts,
ensuring timely response to external events.

4.​ Timers and Clocks: ➖
I.​ RTOSes include timers and clock management facilities for scheduling periodic tasks and
tracking time-sensitive operations.

Scheduling in Real-Time Operating Systems: ➖



1.​ Preemptive Scheduling:
I.​ RTOSes use preemptive scheduling to ensure that higher-priority tasks can interrupt
lower-priority tasks, allowing critical tasks to be executed promptly.

2.​ Priority-Based Scheduling: ➖


I.​ Tasks in RTOSes are assigned priorities based on their criticality and timing requirements.
II.​ Priority-based scheduling algorithms such as Rate-Monotonic Scheduling (RMS) and
Earliest Deadline First (EDF) are commonly used to schedule tasks.
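
A hedged sketch of the EDF decision rule follows: at each scheduling point, the ready task with the soonest absolute deadline is dispatched. The task set is invented for illustration.

```c
/* Sketch: picking the next task under Earliest Deadline First (EDF). */
#include <stdio.h>

struct task { const char *name; int deadline; int ready; };

int main(void) {
    struct task tasks[] = {
        {"sensor_read",  12, 1},    /* hypothetical task set */
        {"control_loop",  8, 1},
        {"logging",      30, 1},
    };
    int n = 3, best = -1;

    for (int i = 0; i < n; i++) {             /* earliest deadline wins */
        if (!tasks[i].ready) continue;
        if (best < 0 || tasks[i].deadline < tasks[best].deadline)
            best = i;
    }
    if (best >= 0)
        printf("dispatch: %s (deadline %d)\n",
               tasks[best].name, tasks[best].deadline);
    return 0;
}
```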

3.​ Fixed-Priority vs. Dynamic-Priority Scheduling: ➖


I.​ In fixed-priority scheduling, task priorities are static and defined at compile time.
II.​ In dynamic-priority scheduling, task priorities can change dynamically at runtime based on
system conditions and workload.

Conclusion: ➖
Real-Time Operating Systems play a crucial role in systems where timing predictability and
responsiveness are critical requirements. They provide deterministic behaviour, task
prioritisation, efficient interrupt handling, and low overhead to meet the timing constraints of
real-time applications. With their specialised structure, scheduling algorithms, and support for
task management, RTOSes enable the development of reliable, high-performance systems
across a wide range of domains.

●​ Case study of Linux operating system

Linux is one of the most widely used operating systems in the world, powering everything from
personal computers and servers to embedded systems and supercomputers. It has a rich history
and a vibrant open-source community driving its development.
Let's delve into a case study of the Linux operating system:

Background: ➖
Linux was created by Linus Torvalds in 1991 as a Unix-like operating system kernel. Initially
developed as a hobby project, it quickly gained popularity due to its open-source nature and the
collaborative efforts of developers worldwide. Today, Linux distributions (distros) like Ubuntu,
Fedora, Debian, and CentOS are used across various industries and platforms.

Case Study: Linux Kernel Development ➖

1. Community-Driven Development:
I.​ Linux development follows a collaborative model where thousands of developers
contribute to the kernel's development.
II.​ The Linux community actively reviews, tests, and submits patches to improve the kernel's
performance, security, and features.
III.​ Torvalds oversees the development process, maintaining the mainline kernel tree and
releasing stable versions.

2. Modular Architecture: ➖
I.​ The Linux kernel follows a modular design, consisting of various subsystems like process
management, memory management, filesystems, networking, and device drivers.
II.​ Each subsystem is responsible for a specific functionality, allowing for easier maintenance,
debugging, and scalability.

3. Continuous Improvement: ➖
I.​ The Linux kernel is constantly evolving, with new features and improvements being added
regularly.
II.​ The development process includes rigorous testing, code review, and performance
optimization to ensure stability and reliability.

4. Wide Hardware Support: ➖


I.​ Linux supports a vast range of hardware architectures, including x86, ARM, MIPS,
PowerPC, and more.
II.​ Device drivers are a critical component of the Linux kernel, providing support for various
hardware devices and peripherals.

5. Security Focus: ➖
I.​ Linux prioritises security, with features like access control, memory protection, and secure
boot mechanisms.
II.​ The kernel development process includes regular security audits and vulnerability patches
to address emerging threats.

6. Adaptability and Customization: ➖


I.​ Linux's open-source nature allows vendors and developers to customise the kernel to meet
specific requirements.
II.​ Various Linux distributions tailor the kernel and user-space components to target different
use cases, from desktop computing to embedded systems.

Conclusion: ➖
The Linux operating system stands as a testament to the power of open-source collaboration and
community-driven development. Its modular architecture, continuous improvement, wide
hardware support, security focus, and adaptability make it a preferred choice for a diverse range
of applications and industries. As Linux continues to evolve, it remains a cornerstone of modern computing, driving innovation and empowering users worldwide.

HAPPY ENDING BY SAHIL RAUNIYAR

