E-Content OS Full Notes Unit - 1 To 4

P.G. DEPARTMENT OF COMPUTER SCIENCE (SHIFT-I)

III - B.Sc (Computer Science)

SUBJECT : OPERATING SYSTEMS
SUBJECT CODE :
CLASS : III B.Sc. [Computer Science]
E-MAIL ID : acharaf@rediffmail.com
ASSISTANT PROFESSOR : A.ASHRAF ALI M.Sc., M.Phil.
OPERATING SYSTEMS

UNIT – I: INTRODUCTION
Definition of OS – Mainframe Systems – Desktop Systems – Multiprocessor Systems –
Distributed Systems – Real-time Systems – Handheld Systems – Operating System
Structure – System Components – Operating System Services – System Calls – System
Programs.
UNIT – II: PROCESS MANAGEMENT
Process Concept – Process Scheduling – Operations on Processes – Cooperating
Processes – Inter-Process Communication. CPU Scheduling: Scheduling Concepts –
Scheduling Criteria – Scheduling Algorithms – Multiprocessor Scheduling – Real-time
Scheduling.
UNIT – III: PROCESS SYNCHRONIZATION
The Critical Section Problem – Semaphores – Critical Regions – Monitors – Deadlock
Characterization – Handling Deadlocks – Deadlock Prevention – Deadlock Avoidance –
Deadlock Detection – Deadlock Recovery.
UNIT – IV: MEMORY MANAGEMENT
Swapping – Contiguous Memory Allocation – Paging – Segmentation. Virtual Memory
– Demand Paging – Page Replacement. File System Interface: File Concept – Access
Methods – Directory Structure. File System Implementation: File System Structure
– Allocation Methods – Free Space Management.
UNIT – V: PROTECTION AND SECURITY
Protection: Goals of Protection – Access Matrix – Implementation of Access matrix –
Security – The Security Problem – User Authentication – System Threats. Case Study:
Linux System
TEXT BOOK:
1. Silberschatz, Galvin and Gagne, Operating System Concepts, 6th Edition, John Wiley &
Sons, Inc., 2004.
REFERENCE BOOKS:
1. Milenkovic M., Operating Systems: Concepts and Design, 2nd Edition, McGraw-
Hill, 1992.
OPERATING SYSTEMS
Unit – I
Introduction:
An operating system (OS) is an intermediary between users and computer hardware. It
provides users an environment in which a user can execute programs conveniently and
efficiently.

In technical terms, it is software which manages hardware. An operating system controls
the allocation of resources and services such as memory, processors, devices and
information.

Definition of OS

An operating system is a program that acts as an interface between the user and the
computer hardware and controls the execution of all kinds of programs.

Evolution of OS:

1. Mainframe Systems

Early mainframe batch systems reduced setup time by batching similar jobs. Automatic
job sequencing automatically transfers control from one job to another; this was the
first rudimentary operating system. A resident monitor manages execution:

• initial control is in the monitor
• control transfers to the job
• when the job completes, control transfers back to the monitor

2. Desktop Systems/Personal Computer Systems

➢ The PC operating system is designed for maximizing user convenience and
responsiveness. Such a system is typically a single-user system.
➢ These systems include PCs running Microsoft Windows and the Apple Macintosh.
The MS-DOS operating system from Microsoft has been superseded by multiple
flavors of Microsoft Windows, and IBM has upgraded MS-DOS to the OS/2
multitasking system.
➢ The Apple Macintosh operating system has been ported to more advanced hardware,
and now includes new features such as virtual memory and multitasking.

3. Multiprocessor Operating Systems

Multiprocessor operating systems are also known as parallel or tightly coupled
operating systems. Such systems have more than one processor in close communication,
sharing the computer bus, the clock, and sometimes memory and peripheral devices.
They execute multiple jobs at the same time and make processing faster.

Multiprocessor systems have three main advantages:

• Increased throughput: By increasing the number of processors, the system
performs more work in less time. The speed-up ratio with N processors is less than
N.
• Economy of scale: Multiprocessor systems can save more money than multiple
single-processor systems, because they can share peripherals, mass storage, and
power supplies.
• Increased reliability: If one processor fails, each of the remaining processors
picks up a share of the work of the failed processor. The failure of one processor
will not halt the system, only slow it down.

The ability to continue providing service proportional to the level of surviving hardware
is called graceful degradation. Systems designed for graceful degradation are called fault
tolerant.

The multiprocessor operating systems are classified into two categories:

1. Symmetric multiprocessing systems
2. Asymmetric multiprocessing systems

• In a symmetric multiprocessing system, each processor runs an identical copy of the
operating system, and these copies communicate with one another as needed.
• In an asymmetric multiprocessing system, one processor, called the master processor,
controls the other processors, called slave processors; this establishes a master-slave
relationship. The master processor schedules the jobs and manages the memory for the
entire system.

4. Distributed Operating Systems

➢ In distributed system, the different machines are connected in a network and each
machine has its own processor and own local memory.
➢ In this system, the operating systems on all the machines work together to manage
the collective network resource.
➢ It can be classified into two categories:

1. Client-Server systems

2. Peer-to-Peer systems

Advantages of distributed systems:

• With resource sharing, a user at one site may be able to use the resources
available at another.
• Speeds up the exchange of data between sites, for example via electronic mail.
• If one site fails in a distributed system, the remaining sites can potentially continue
operating.
• Better service to the customers.
• Reduction of the load on the host computer.
• Reduction of delays in data processing.
5. Real Time Operating Systems

Real time system means that the system is subjected to real time, i.e., response should be
guaranteed within a specified timing constraint or system should meet the specified
deadline. For example: flight control system, real time monitors etc.

Types of real time systems based on timing constraints:

Hard real-time systems

Hard real-time systems guarantee that critical tasks complete on time. In a hard real-time
system, secondary storage is limited or missing, with data stored in ROM. In these systems
virtual memory is almost never found.

Soft real-time systems

Soft real-time systems are less restrictive. A critical real-time task gets priority over other
tasks and retains the priority until it completes. Soft real-time systems have more limited
utility than hard real-time systems. Examples include multimedia, virtual reality, and
advanced scientific projects such as undersea exploration and planetary rovers.

6. Handheld Systems

Handheld systems include Personal Digital Assistants (PDAs), such as Palm Pilots, and
cellular telephones with connectivity to a network such as the Internet. They are usually
of limited size, due to which most handheld devices have a small amount of memory,
include slow processors, and feature small display screens.

• Many handheld devices have between 512 KB and 8 MB of memory. As a result,
the operating system and applications must manage memory efficiently. This
includes returning all allocated memory back to the memory manager once the
memory is no longer being used.
• Currently, many handheld devices do not use virtual memory techniques, thus
forcing program developers to work within the confines of limited physical
memory.
• Processors for most handheld devices often run at a fraction of the speed of a
processor in a PC. Faster processors require more power. To include a faster
processor in a handheld device would require a larger battery that would have to be
replaced more frequently.

Operating-System Structure

Simple Structure

Such operating systems do not have a well-defined structure and are small, simple and
limited systems. The interfaces and levels of functionality are not well separated. MS-DOS
is an example of such an operating system. In MS-DOS, application programs are able to
access the basic I/O routines. These types of operating systems cause the entire system to
crash if one of the user programs fails.

[Figure: layer structure of MS-DOS]

Layered structure:

An OS can be broken into pieces that retain much more control over the system. In this
structure the OS is broken into a number of layers (levels). The bottom layer (layer 0) is
the hardware and the topmost layer (layer N) is the user interface. These layers are designed
so that each layer uses the functions of the lower-level layers only. This simplifies
debugging: if the lower-level layers have already been debugged and an error occurs, the
error must be in the layer currently being debugged. The main disadvantage of this
structure is that at each layer the data needs to be modified and passed on, which adds
overhead to the system. Moreover, careful planning of the layers is necessary, as a layer
can use only lower-level layers. UNIX is an example of this structure.

Micro-kernel:

This structure designs the operating system by removing all non-essential components from
the kernel and implementing them as system and user programs. This results in a smaller
kernel called the micro-kernel.

The advantage of this structure is that new services are added to user space and do not
require the kernel to be modified. It is thus more secure and reliable: if a service fails, the
rest of the operating system remains untouched. Mac OS is an example of this type of OS.

Components of Operating Systems

What are OS Components?


An operating system is a large and complex system that can only be created by
partitioning it into small pieces. Each piece should be a well-defined portion of the
system, with carefully defined inputs, outputs, and functions.

Although Mac, Unix, Linux, Windows, and other OS do not have the same structure,
most of the operating systems share similar OS system components like File, Process,
Memory, I/O device management.

Let's see each of these components in detail.


Following are some of important functions of an operating System.

➢ Memory Management
➢ Processor Management.
➢ Device Management
➢ File Management
➢ Security Management
➢ I/O Device Management
➢ Secondary-Storage Management
➢ Network Management

Memory Management

Memory management refers to the management of primary memory or main memory. Main
memory is a large array of words or bytes where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in main memory. The operating system does the following
activities for memory management.

• Keeps track of primary memory, i.e., what parts of it are in use and by whom, and
what parts are not in use.
• In multiprogramming, OS decides which process will get memory when and how
much.
• Allocates the memory when the process requests it to do so.
• De-allocates the memory when the process no longer needs it or has been
terminated.

Processor Management

In a multiprogramming environment, the OS decides which process gets the processor when
and for how much time. This function is called process scheduling. The operating system
does the following activities for processor management.

• Keeps track of the processor and the status of processes. The program responsible for
this task is known as the traffic controller.
• Allocates the processor (CPU) to a process.
• De-allocates processor when processor is no longer required.
Device Management

OS manages device communication via their respective drivers. Operating System does the
following activities for device management.

• Keeps track of all devices. The program responsible for this task is known as the I/O
controller.
• Decides which process gets the device when and for how much time.
• Allocates the device in the efficient way.
• De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. The operating system does the following
activities for file management.

• Keeps track of information, location, uses, status etc. The collective facilities are
often known as file system.
• Decides who gets the resources.
• Allocates the resources.
• De-allocates the resources.

Security Management

The various processes in an operating system need to be secured from each other's
activities. For that purpose, various mechanisms can be used to ensure that processes
that want to operate on files, memory, the CPU, and other hardware resources have
proper authorization from the operating system.

For example, memory-addressing hardware helps to confirm that a process executes
within its own address space. The timer ensures that no process retains control of the
CPU without eventually renouncing it.

Lastly, no process is allowed to do its own I/O; this protects the integrity of the various
peripheral devices.
I/O Device Management

One of the important uses of an operating system is to hide the variations of
specific hardware devices from the user.

Functions of I/O management in OS:

• It offers a buffer-caching system.
• It provides general device-driver code.
• It provides drivers for particular hardware devices.
• It knows the individualities of a specific device.

Secondary-Storage Management

The most important task of a computer system is to execute programs. These programs,
along with the data they access, must be in main memory during execution.

Main memory is too small to store all data and programs permanently, so the computer
system offers secondary storage to back up main memory. Today, modern computers use
hard drives/SSDs as the primary storage of both programs and data. However, secondary
storage management also works with storage devices such as USB flash drives and
CD/DVD drives.

Programs such as assemblers and compilers are stored on the disk until loaded into
memory, and then use the disk as both a source and a destination for processing.

Functions of Secondary storage management in OS:


Here, are major functions of secondary storage management in OS:

• Storage allocation
• Free space management
• Disk scheduling

Network Management
Network management is the process of administering and managing computer
networks. It includes performance management, fault analysis, provisioning of
networks, and maintaining the quality of service.
A distributed system is a collection of computers/processors that do not share
memory or a clock. In this type of system, all the processors have their own local
memory, and the processors communicate with each other using different
communication lines, such as fiber optics or telephone lines.

The computers in the network are connected through a communication network,
which can be configured in a number of different ways. With the help of network
management, the network can be fully or partially connected, which helps users to
design routing and connection strategies that overcome connection and security
issues.

Functions of network management:

• Distributed systems allow computing resources varying in size and function to be
pooled. They may involve microprocessors, minicomputers, and many
general-purpose computer systems.
• A distributed system also offers the user access to the various resources the
network shares.
• Access to shared resources helps speed up computation and offers data
availability and reliability.

Operating System Services

An Operating System provides services to both the users and to the programs.

➢ It provides programs an environment to execute.
➢ It provides users services to execute programs in a convenient manner.

Following are a few common services provided by operating systems:

➢ Program execution
➢ I/O operations
➢ File system manipulation
➢ Communication
➢ Error detection
➢ Resource allocation
➢ Protection
Program execution

The operating system handles many kinds of activities, from user programs to system
programs such as printer spoolers, name servers, and file servers. Each of these activities is
encapsulated as a process. A process includes the complete execution context (code to
execute, data to manipulate, registers, OS resources in use). Following are the major
activities of an operating system with respect to program management.

• Loads a program into memory.
• Executes the program.
• Handles program’s execution.
• Provides a mechanism for process synchronization.
• Provides a mechanism for process communication.
• Provides a mechanism for deadlock handling.

I/O Operation

The I/O subsystem comprises I/O devices and their corresponding driver software. Drivers
hide the peculiarities of specific hardware devices from the user, as the device driver knows
the peculiarities of the specific device. The operating system manages the communication
between user and device drivers. Following are the major activities of an operating system
with respect to I/O operation.

• I/O operation means read or write operation with any file or any specific I/O device.
• Program may require any I/O device while running.
• Operating system provides the access to the required I/O device when required.

File system manipulation

A file represents a collection of related information. Computer can store files on the disk
(secondary storage), for long term storage purpose. Few examples of storage media are
magnetic tape, magnetic disk and optical disk drives like CD, DVD. Each of these media
has its own properties like speed, capacity, data transfer rate and data access methods.

A file system is normally organized into directories for easy navigation and usage. These
directories may contain files and other directories. Following are the major activities of an
operating system with respect to file management.

• A program needs to read a file or write a file.
• The operating system gives the permission to the program for operation on file.
• Permission varies from read-only, read-write, denied and so on.
• Operating System provides an interface to the user to create/delete files.
• Operating System provides an interface to the user to create/delete directories.
• Operating System provides an interface to create the backup of file system.

Communication

In the case of distributed systems, which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communications
between processes. Multiple processes communicate with one another through
communication lines in the network. The OS handles routing and connection strategies,
and the problems of contention and security. Following are the major activities of an
operating system with respect to communication.

• Two processes often require data to be transferred between them.
• Both processes can be on the same computer or on different computers connected
through a computer network.
• Communication may be implemented by two methods: by shared memory or
by message passing.

Error handling

Errors can occur anytime and anywhere. An error may occur in the CPU, in I/O devices, or
in the memory hardware. Following are the major activities of an operating system with
respect to error handling.

• The OS constantly remains aware of possible errors.
• The OS takes the appropriate action to ensure correct and consistent computing.

Resource Management

In a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job. Following are the major
activities of an operating system with respect to resource management.

• The OS manages all kinds of resources using schedulers.
• CPU scheduling algorithms are used for better utilization of the CPU.
Protection

Considering a computer system with multiple users and concurrent execution of multiple
processes, the various processes must be protected from one another's activities.

Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by the computer system. Following are the major activities of
an operating system with respect to protection.

• The OS ensures that all access to system resources is controlled.
• The OS ensures that external I/O devices are protected from invalid access attempts.
• The OS provides authentication features for each user by means of a password.

System Calls

➢ Programming interface to the services provided by the OS.
➢ Typically written in a high-level language (C or C++).
➢ Mostly accessed by programs via a high-level Application Program Interface (API)
rather than direct system call use. The three most common APIs are the Win32 API for
Windows, the POSIX API for POSIX-based systems (including virtually all versions of
UNIX, Linux, and Mac OS X), and the Java API for the Java virtual machine (JVM).
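To make the API point concrete, here is a minimal C sketch (not part of the original
notes): an ordinary program reaching kernel services through the thin POSIX library
wrappers open(), read() and close(). The path /etc/hostname is only an illustrative
choice; any readable file works.

    /* Reaching system calls through the POSIX API. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];

        /* open() and read() are thin library wrappers around the
         * corresponding kernel system calls. */
        int fd = open("/etc/hostname", O_RDONLY);  /* illustrative path */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }

        close(fd);  /* releases the kernel file descriptor */
        return 0;
    }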

System Programs

At the lowest level is the hardware. Next is the operating system, then the system programs,
and finally the application programs. System programs provide a convenient environment
for program development and execution. Some of them are simply user interfaces to system
calls; others are considerably more complex.

They can be divided into these categories:

• File management. These programs create, delete, copy, rename, print, dump, list, and
generally manipulate files and directories.

• Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are
more complex, providing detailed performance, logging, and debugging information.
Typically, these programs format and print the output to the terminal or other output
devices or files or display it in a window of the GUI. Some systems also support a registry,
which is used to store and retrieve configuration information.

• File modification. Several text editors may be available to create and modify the content
of files stored on disk or other storage devices. There may also be special commands to
search contents of files or perform transformations of the text.

• Programming-language support. Compilers, assemblers, debuggers and interpreters for
common programming languages (such as C, C++, Java, Visual Basic, and Perl) are often
provided to the user with the operating system.

• Program loading and execution. Once a program is assembled or compiled, it must be
loaded into memory to be executed. The system may provide absolute loaders, relocatable
loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level
languages or machine language are needed as well.

• Communications. These programs provide the mechanism for creating virtual
connections among processes, users, and computer systems. They allow users to send
messages to one another’s screens, to browse web pages, to send electronic-mail messages,
to log in remotely, or to transfer files from one machine to another.

In addition to system programs, most operating systems are supplied with programs that
are useful in solving common problems or performing common operations. Such programs
include web browsers, word processors and text formatters, spreadsheets, database systems,
compilers, plotting and statistical-analysis packages, and games. These programs are known
as system utilities or application programs.
OPERATING SYSTEMS
Unit – II
Process Concept

• An operating system executes a variety of programs:
• Batch systems – jobs
• Time-shared systems – user programs or tasks
• The textbook uses the terms job and process almost interchangeably.
• Process – a program in execution; process execution must progress in sequential
fashion.

A process includes:

• program counter
• stack
• data section

[Figure: a process in memory]

Process State

As a process executes, it changes state

new: The process is being created

running: Instructions are being executed

waiting: The process is waiting for some event to occur

ready: The process is waiting to be assigned to a processor

terminated: The process has finished execution
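The five states can be captured directly in code. The following enum is a small
illustrative sketch; the names are ours, not taken from any real kernel.

    /* Hypothetical sketch of the five-state process model. */
    enum proc_state {
        NEW,        /* being created; admitted to READY           */
        READY,      /* waiting to be assigned to a processor      */
        RUNNING,    /* instructions being executed                */
        WAITING,    /* waiting for some event (e.g. I/O) to occur */
        TERMINATED  /* has finished execution                     */
    };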

Process Scheduling

Process scheduling is the activity of the process manager that handles the removal
of the running process from the CPU and the selection of another process on the basis
of a particular strategy.

Process scheduling is an essential part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable
memory at a time, and the loaded processes share the CPU using time multiplexing.

Process Scheduling Queues

The OS maintains all PCBs in process scheduling queues. The OS maintains a
separate queue for each of the process states, and the PCBs of all processes in the same
execution state are placed in the same queue. When the state of a process changes,
its PCB is unlinked from its current queue and moved to its new state queue.

The Operating System maintains the following important process scheduling queues −

• Job queue − This queue keeps all the processes in the system.

• Ready queue − This queue keeps a set of all processes residing in main memory,
ready and waiting to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority,
etc.). The OS scheduler determines how to move processes between the ready and run
queues, where the run queue can have only one entry per processor core on the system.

Two-State Process Model

Two-state process model refers to running and non-running states which are described
below –

1. Running

When a new process is created, it enters the system in the running state.

2. Not Running

Processes that are not running are kept in a queue, waiting for their turn to
execute. Each entry in the queue is a pointer to a particular process. The queue
is implemented using a linked list. The dispatcher works as follows: when
a process is interrupted, it is transferred to the waiting queue. If
the process has completed or aborted, it is discarded. In either
case, the dispatcher then selects a process from the queue to execute.
Schedulers

Schedulers are special system software which handle process scheduling in various
ways. Their main task is to select the jobs to be submitted into the system and to decide
which process to run. Schedulers are of three types –

➢ Long-Term Scheduler

➢ Short-Term Scheduler

➢ Medium-Term Scheduler

Long Term Scheduler

It is also called a job scheduler. A long-term scheduler determines which programs are
admitted to the system for processing. It selects processes from the job pool and loads
them into memory for execution, where they become candidates for CPU scheduling.

The primary objective of the job scheduler is to provide a balanced mix of jobs, such
as I/O bound and processor bound. It also controls the degree of multiprogramming. If
the degree of multiprogramming is stable, then the average rate of process creation
must be equal to the average departure rate of processes leaving the system.

On some systems, the long-term scheduler may be absent or minimal. Time-sharing
operating systems have no long-term scheduler. When a process changes state
from new to ready, the long-term scheduler is used.

Short Term Scheduler

It is also called the CPU scheduler. Its main objective is to increase system performance
in accordance with the chosen set of criteria. It changes a process from the ready state to
the running state: the CPU scheduler selects a process from among the processes that are
ready to execute and allocates the CPU to it.

Short-term schedulers, also known as dispatchers, make the decision of which process
to execute next. Short-term schedulers are faster than long-term schedulers.
Medium Term Scheduler

Medium-term scheduling is a part of swapping. It removes processes from
memory and so reduces the degree of multiprogramming. The medium-term scheduler is
in charge of handling the swapped-out processes.

A running process may become suspended if it makes an I/O request. A suspended
process cannot make any progress towards completion. In this condition, to remove
the process from memory and make space for other processes, the suspended process
is moved to secondary storage. This process is called swapping, and the process is
said to be swapped out or rolled out. Swapping may be necessary to improve the
process mix.

S.N.  Long-Term Scheduler                Short-Term Scheduler               Medium-Term Scheduler

1     It is a job scheduler.             It is a CPU scheduler.             It is a process-swapping
                                                                            scheduler.

2     Speed is lesser than the           Speed is the fastest among         Speed is in between the
      short-term scheduler.              the other two.                     short- and long-term
                                                                            schedulers.

3     It controls the degree of          It provides lesser control         It reduces the degree of
      multiprogramming.                  over the degree of                 multiprogramming.
                                         multiprogramming.

4     It is almost absent or             It is also minimal in              It is a part of time-sharing
      minimal in time-sharing            time-sharing systems.              systems.
      systems.

5     It selects processes from          It selects those processes         It can re-introduce a
      the pool and loads them            which are ready to execute.        process into memory, and
      into memory for execution.                                            execution can be continued.
Operations on Processes

There are many operations that can be performed on processes. Some of these are
process creation, process preemption, process blocking, and process termination. These
are given in detail as follows –

Process Creation

Processes need to be created in the system for different operations. This can be done
by the following events –

▪ User request for process creation

▪ System initialization

▪ Execution of a process creation system call by a running process

▪ Batch job initialization

A process may be created by another process using fork(). The creating process is called
the parent process and the created process is the child process. A child process can have
only one parent but a parent process may have many children. Both the parent and child
processes have the same memory image, open files, and environment strings. However,

they have distinct address spaces. A diagram that demonstrates process creation using
fork() is as follows –
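The fork() behaviour described above can be seen in a few lines of C. This is a
minimal runnable sketch for POSIX systems; the printed messages are only for
illustration.

    /* Minimal process creation with fork(). */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* create a child process */

        if (pid < 0) {
            perror("fork");              /* creation failed */
            return 1;
        } else if (pid == 0) {
            /* child: starts with the same memory image as the parent,
             * but in a distinct address space from this point on */
            printf("child  pid=%d parent=%d\n", getpid(), getppid());
        } else {
            wait(NULL);                  /* parent waits for the child */
            printf("parent pid=%d child=%d\n", getpid(), pid);
        }
        return 0;
    }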

Process Preemption

An interrupt mechanism is used in preemption: it suspends the currently executing
process, and the next process to execute is determined by the short-term scheduler.
Preemption makes sure that all processes get some CPU time for execution.

[Figure: process preemption]

Process Blocking

A process is blocked if it is waiting for some event to occur. This event is often I/O,
since I/O operations are carried out by device controllers and do not require the processor.
After the event is complete, the process again goes to the ready state.

[Figure: process blocking]

Process Termination

After the process has completed the execution of its last instruction, it is terminated.
The resources held by a process are released after it is terminated.

A child process can be terminated by its parent process if its task is no longer needed.
The child process sends its status information to the parent process before it terminates.
Also, when a parent process is terminated, its child processes are terminated as well,
since the child processes cannot run if the parent process is terminated.

Cooperating Processes

➢ An independent process cannot affect or be affected by the execution of another
process.
➢ A cooperating process can affect or be affected by the execution of another process.


Advantages of process cooperation

➢ Information sharing

➢ Computation speed-up

➢ Modularity

➢ Convenience

Interprocess communication

Interprocess communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve
a process letting another process know that some event has occurred, or the transferring
of data from one process to another.

[Figure: interprocess communication]

Synchronization in Interprocess Communication

Synchronization is a necessary part of interprocess communication. It is either provided
by the interprocess control mechanism or handled by the communicating processes.
Some of the methods to provide synchronization are as follows –

• Semaphore

A semaphore is a variable that controls access to a common resource by multiple
processes. The two types of semaphores are binary semaphores and counting
semaphores.

• Mutual Exclusion

Mutual exclusion requires that only one process or thread can enter the critical section at
a time. This is useful for synchronization and also prevents race conditions.
• Barrier

A barrier does not allow individual processes to proceed until all the processes reach
it. Many parallel languages and collective routines impose barriers.

• Spinlock

This is a type of lock. The processes trying to acquire this lock wait in a loop while
checking if the lock is available or not. This is known as busy waiting because the
process is not doing any useful operation even though it is active.

Approaches to Interprocess Communication

The different approaches to implementing interprocess communication are given as
follows –

Pipe

A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-
way data channel between two processes. This uses standard input and output methods.
Pipes are used in all POSIX systems as well as Windows operating systems.
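A minimal sketch of the unidirectional pipe just described, for POSIX systems: the
parent writes, the child reads. Error handling is trimmed for brevity.

    /* Parent-to-child communication through a pipe. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];                 /* fds[0] = read end, fds[1] = write end */
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {          /* child: the reader */
            char buf[64];
            close(fds[1]);          /* close the unused write end */
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
            return 0;
        }

        close(fds[0]);              /* parent: the writer; close the read end */
        const char *msg = "hello through the pipe";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);              /* signals end-of-file to the reader */
        wait(NULL);
        return 0;
    }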

Socket

The socket is the endpoint for sending or receiving data in a network. This is true for
data sent between processes on the same computer or data sent between different
computers on the same network. Most of the operating systems use sockets for
interprocess communication.

File

A file is a data record that may be stored on a disk or acquired on demand by a file
server. Multiple processes can access a file as required. All operating systems use files
for data storage.

Signal - Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals are not
used to transfer data but are used for remote commands between processes.
Shared Memory

Shared memory is memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems, use shared memory.
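One way to illustrate this on Linux (a sketch under that assumption, not the only
mechanism): an anonymous shared mapping created with mmap() stays visible to both
parent and child after fork(). A real program would add semaphore-style
synchronization, which is omitted here.

    /* Shared memory between related processes via mmap(). */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* one shared integer, readable and writable by both processes */
        int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED) { perror("mmap"); return 1; }
        *shared = 0;

        if (fork() == 0) {       /* child writes into the shared region */
            *shared = 42;
            return 0;
        }
        wait(NULL);              /* parent waits, then sees the child's write */
        printf("parent read %d from shared memory\n", *shared);
        munmap(shared, sizeof(int));
        return 0;
    }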

Message Queue

Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored in the queue until their recipient retrieves
them. Message queues are quite useful for interprocess communication and are used by
most operating systems.

[Figure: message queue and shared memory methods of interprocess communication]

CPU Scheduling

Scheduling of processes/work is done to finish the work on time.

Below are the different times associated with a process.

Arrival Time: Time at which the process arrives in the ready queue.

Completion Time: Time at which the process completes its execution.

Burst Time: Time required by a process for CPU execution.

Turn Around Time: Time difference between completion time and arrival time.

Turn Around Time = Completion Time – Arrival Time

Waiting Time (W.T.): Time difference between turn around time and burst time.

Waiting Time = Turn Around Time – Burst Time
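A short worked sketch using these formulas, with three hypothetical processes served
in arrival order (first come, first served):

    /* Completion, turnaround and waiting times for hypothetical processes. */
    #include <stdio.h>

    int main(void)
    {
        int arrival[] = {0, 1, 2};       /* made-up arrival times */
        int burst[]   = {4, 3, 1};       /* made-up burst times   */
        int n = 3, clock = 0;

        for (int i = 0; i < n; i++) {
            if (clock < arrival[i])         /* CPU idles until the process arrives */
                clock = arrival[i];
            clock += burst[i];              /* completion time of process i */
            int tat  = clock - arrival[i];  /* turnaround = completion - arrival */
            int wait = tat - burst[i];      /* waiting = turnaround - burst */
            printf("P%d: completion=%d turnaround=%d waiting=%d\n",
                   i + 1, clock, tat, wait);
        }
        return 0;
    }

For this data the sketch prints turnaround times 4, 6 and 6 and waiting times 0, 3
and 5, exactly as the two formulas dictate.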

Why do we need Scheduling?

In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then
most of the time the CPU remains idle. The task of the operating system is to optimize the
utilization of resources.

If most of the running processes change their state from running to waiting, then there
may always be a possibility of deadlock in the system. Hence, to reduce this overhead,
the OS needs to schedule the jobs so as to get optimal utilization of the CPU and to avoid
the possibility of deadlock.

Scheduling Criteria

• CPU utilization – keep the CPU as busy as possible

• Throughput – # of processes that complete their execution per time unit

• Turnaround time – amount of time to execute a particular process

• Waiting time – amount of time a process has been waiting in the ready queue

• Response time – amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)

• Max CPU utilization

• Max throughput

• Min turnaround time

• Min waiting time

• Min response time


Scheduling Algorithms

There are various algorithms which are used by the Operating System to schedule the
processes on the processor in an efficient way.

The Purpose of a Scheduling algorithm

1. Maximum CPU utilization

2. Fair allocation of CPU

3. Maximum throughput

4. Minimum turnaround time

5. Minimum waiting time

6. Minimum response time

There are the following algorithms which can be used to schedule the jobs.

1. First Come First Serve

It is the simplest algorithm to implement. The process with the earliest arrival time
gets the CPU first: the lesser the arrival time, the sooner the process gets the
CPU. It is a non-preemptive type of scheduling.

2. Round Robin

In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All
the processes get executed in a cyclic way. Each process gets the CPU
for a small amount of time (the time quantum) and then goes back to the ready queue
to wait for its next turn. It is a preemptive type of scheduling.

3. Shortest Job First

The job with the shortest burst time gets the CPU first: the lesser the burst time,
the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
4. Shortest remaining time first

It is the preemptive form of SJF. In this algorithm, the OS schedules the job according
to the remaining time of its execution.

5. Priority based scheduling

In this algorithm, a priority is assigned to each process. The higher the
priority, the sooner the process gets the CPU. If the priorities of two processes are the
same, they are scheduled according to their arrival time.

6. Highest Response Ratio Next

In this scheduling algorithm, the process with the highest response ratio is scheduled
next. This reduces starvation in the system.

Multiple-Processor Scheduling

✓ CPU scheduling is more complex when multiple CPUs are available.
✓ Homogeneous processors within a multiprocessor.
✓ Load sharing.
✓ Asymmetric multiprocessing – only one processor accesses the system data
structures, alleviating the need for data sharing.
OPERATING SYSTEMS
Unit – III
The Critical Section Problem
The critical section is the part of a program which tries to access shared resources. The
resource may be any resource in a computer, such as a memory location, a data structure,
the CPU or any I/O device.

The critical section cannot be executed by more than one process at the same time; the
operating system faces difficulty in allowing and disallowing processes to enter
the critical section.

The critical section problem is about designing a set of protocols which can ensure that
a race condition among the processes will never arise.

In order to synchronize cooperating processes, our main task is to solve the critical
section problem. We need to provide a solution in such a way that the following
conditions are satisfied.

Requirements of Synchronization mechanisms

Primary
1. Mutual Exclusion

Our solution must provide mutual exclusion. By mutual exclusion, we mean
that if one process is executing inside the critical section, then the other processes
must not enter the critical section.
2. Progress

Progress means that if one process does not need to enter the critical section,
then it should not stop other processes from getting into the critical section.

Secondary
1. Bounded Waiting

We should be able to predict the waiting time for every process to get into the
critical section. A process must not wait endlessly to get into the
critical section.

2. Architectural Neutrality

Our mechanism must be architecturally neutral. This means that if our solution
works fine on one architecture, then it should also run on other architectures as
well.
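Before turning to semaphores, it is worth seeing one classic software solution that
meets these requirements for two processes: Peterson's algorithm. The sketch below
is the textbook form; on modern hardware it additionally needs atomic operations or
memory barriers, which are omitted here.

    /* Peterson's two-process solution (textbook form). */
    #include <stdbool.h>

    volatile bool flag[2] = {false, false};  /* flag[i]: process i wants in */
    volatile int  turn    = 0;               /* whose turn it is to defer   */

    void enter_region(int i)                 /* i is 0 or 1 */
    {
        int other = 1 - i;
        flag[i] = true;                      /* announce interest */
        turn = other;                        /* yield priority to the other */
        while (flag[other] && turn == other)
            ;                                /* busy-wait */
    }

    void leave_region(int i)
    {
        flag[i] = false;                     /* leave the critical section */
    }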

Semaphores

Semaphores are integer variables that are used to solve the critical section problem by
means of two atomic operations, wait and signal, used for process synchronization.

The definitions of wait and signal are as follows −

• Wait

The wait operation decrements the value of its argument S if it is positive. If S
is zero or negative, the process waits (no decrement is performed) until S becomes
positive.

    wait(S) {
        while (S <= 0)
            ;        // busy-wait
        S--;
    }

• Signal

The signal operation increments the value of its argument S.

    signal(S) {
        S++;
    }

Types of Semaphores

There are two main types of semaphores i.e. counting semaphores and binary
semaphores. Details about these are given as follows −

• Counting Semaphores

These are integer-valued semaphores with an unrestricted value domain.
These semaphores are used to coordinate resource access, where the
semaphore count is the number of available resources. If resources are added,
the semaphore count is automatically incremented, and if resources are removed,
the count is decremented.

• Binary Semaphores

Binary semaphores are like counting semaphores, but their value is restricted
to 0 and 1. The wait operation succeeds only when the semaphore is 1, and the signal
operation succeeds when the semaphore is 0. It is sometimes easier to implement
binary semaphores than counting semaphores.
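The same wait/signal pattern is available directly as POSIX semaphores, where
sem_wait and sem_post play the roles of wait and signal. A minimal sketch for Linux
(compile with -pthread); initializing the count to 1 gives a binary semaphore:

    /* A binary POSIX semaphore guarding a critical section. */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t mutex;                     /* semaphore guarding the section */

    static void *worker(void *arg)
    {
        sem_wait(&mutex);            /* wait(S): blocks while the count is 0 */
        printf("thread %ld in critical section\n", (long)arg);
        sem_post(&mutex);            /* signal(S): increments the count */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);      /* initial value 1 -> binary semaphore */
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        sem_destroy(&mutex);
        return 0;
    }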
Advantages of Semaphores

Some of the advantages of semaphores are as follows −

• Semaphores allow only one process into the critical section. They follow the
mutual exclusion principle strictly and are much more efficient than some other
methods of synchronization.
• There is no resource wastage because of busy waiting in semaphores as
processor time is not wasted unnecessarily to check if a condition is fulfilled to
allow a process to access the critical section.

• Semaphores are implemented in the machine-independent code of the
microkernel, so they are machine independent.

Disadvantages of Semaphores

Some of the disadvantages of semaphores are as follows −

• Semaphores are complicated, so the wait and signal operations must be
implemented in the correct order to prevent deadlocks.
• Semaphores are impractical for large-scale use, as their use leads to loss of
modularity. This happens because the wait and signal operations prevent the
creation of a structured layout for the system.
• Semaphores may lead to priority inversion, where low-priority processes may
access the critical section first and high-priority processes later.

Critical Regions

• A high-level synchronization construct.
• A shared variable v of type T is declared as:

    v: shared T

• Variable v is accessed only inside the statement

    region v when B do S

  where B is a boolean expression.
• While statement S is being executed, no other process can access variable v.
• Regions referring to the same shared variable exclude each other in time.
• When a process tries to execute the region statement, the boolean expression B
is evaluated. If B is true, statement S is executed. If it is false, the process is
delayed until B becomes true and no other process is in the region associated
with v.

Monitors vs Semaphores

Monitors and semaphores are used for process synchronization and allow processes to
access shared resources with mutual exclusion. However, monitors and semaphores
differ in many ways. Details about both of these are given as follows −

Monitors

Monitors are a synchronization construct that were created to overcome the problems
caused by semaphores such as timing errors.

Monitors are abstract data types and contain shared data variables and procedures. The
shared data variables cannot be directly accessed by a process and procedures are
required to allow a single process to access the shared data variables at a time.

This is demonstrated as follows:

    monitor monitorName
    {
        // shared data variables;

        procedure P1 (...) { ... }
        procedure P2 (...) { ... }
        ...
        procedure Pn (...) { ... }

        initialization code (...) { ... }
    }

Only one process can be active in a monitor at a time. Other processes that need to
access the shared variables in a monitor have to line up in a queue and are only granted
access when the previous process releases the shared variables.
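C has no monitor keyword, but the construct is commonly approximated with a mutex
plus a condition variable. The sketch below is an illustration (the deposit/withdraw
procedures and the items counter are hypothetical), not code from the notes:

    /* A monitor-like module: one lock, one condition, private data. */
    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER; /* monitor lock */
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;  /* wait queue   */
    static int items = 0;                                   /* shared data  */

    void deposit(void)                   /* monitor procedure */
    {
        pthread_mutex_lock(&m);          /* only one thread active inside */
        items++;
        pthread_cond_signal(&cv);        /* wake one waiter, if any */
        pthread_mutex_unlock(&m);
    }

    void withdraw(void)                  /* monitor procedure */
    {
        pthread_mutex_lock(&m);
        while (items == 0)               /* re-check after every wakeup */
            pthread_cond_wait(&cv, &m);  /* releases m while waiting */
        items--;
        pthread_mutex_unlock(&m);
    }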

Deadlock Characterization
A deadlock happens in an operating system when two or more processes need some
resource to complete their execution that is held by another process.

A deadlock occurs if the four Coffman conditions hold true. These conditions are
not mutually exclusive. They are given as follows −

Mutual Exclusion
There should be a resource that can only be held by one process at a time. For example,
a single instance of Resource 1 may be held by Process 1 only.

Hold and Wait

A process can hold multiple resources and still request more resources from other
processes which are holding them. For example, Process 2 holds
Resource 2 and Resource 3 and is requesting Resource 1, which is held by Process
1.
No Preemption

A resource cannot be preempted from a process by force. A process can only release a
resource voluntarily. For example, Process 2 cannot preempt Resource 1 from
Process 1. It will only be released when Process 1 relinquishes it voluntarily after its
execution is complete.

Circular Wait

A process is waiting for the resource held by the second process, which is waiting for
the resource held by the third process and so on, till the last process is waiting for a
resource held by the first process. This forms a circular chain. For example, Process 1
is allocated Resource 2 and is requesting Resource 1. Similarly, Process 2 is allocated
Resource 1 and is requesting Resource 2. This forms a circular wait loop.
Handling Deadlocks

Deadlock prevention, deadlock avoidance, and deadlock detection are the main methods
for handling deadlocks. Details about these are given as follows −

Deadlock Prevention

It is important to prevent a deadlock before it can occur. So, the system checks each
transaction before it is executed to make sure it does not lead to deadlock. If there is
even a slight possibility that a transaction may lead to deadlock, it is never allowed to
execute.

Some deadlock prevention schemes that use timestamps in order to make sure that a
deadlock does not occur are given as follows –

• Wait - Die Scheme

In the wait-die scheme, if a transaction T1 requests a resource that is held
by transaction T2, one of the following two scenarios may occur −

o TS(T1) < TS(T2) - If T1 is older than T2, i.e. T1 came into the system
earlier than T2, then it is allowed to wait for the resource, which will be
free when T2 has completed its execution.
o TS(T1) > TS(T2) - If T1 is younger than T2, i.e. T1 came into the system
after T2, then T1 is killed. It is restarted later with the same timestamp.
• Wound - Wait Scheme

In the wound-wait scheme, if a transaction T1 requests a resource that is
held by transaction T2, one of the following two possibilities may occur −

o TS(T1) < TS(T2) - If T1 is older than T2, i.e. T1 came into the system
earlier than T2, then it is allowed to roll back T2, or "wound" T2. T1
then takes the resource and completes its execution. T2 is later restarted with
the same timestamp.
o TS(T1) > TS(T2) - If T1 is younger than T2, i.e. T1 came into the system
after T2, then it is allowed to wait for the resource, which will be free
when T2 has completed its execution.

Deadlock Avoidance

It is better to avoid a deadlock than to take measures after the deadlock has occurred.
The wait-for graph can be used for deadlock avoidance. This is, however, only useful for
smaller databases, as it can get quite complex in larger databases.

Wait-for graph

The wait-for graph shows the relationship between the resources and transactions. If a
transaction requests a resource, or if it already holds a resource, it is visible as an edge
on the wait-for graph. If the wait-for graph contains a cycle, then there may be a deadlock
in the system; otherwise not.
Deadlock Detection
1. If resources have a single instance:
In this case, for deadlock detection we can run an algorithm to check for a cycle in
the resource allocation graph. The presence of a cycle in the graph is a sufficient
condition for deadlock.

For example, if resources R1 and R2 have single instances and the graph contains the
cycle R1 → P1 → R2 → P2, deadlock is confirmed. (A sketch of cycle detection
follows this list.)

2. If there are multiple instances of resources:
Detection of a cycle is a necessary but not sufficient condition for deadlock
detection; in this case, whether the system is in deadlock varies
according to the situation.
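The single-instance case can be sketched as a depth-first search over a wait-for
graph, where an edge i → j means process i waits for a resource held by process j,
and a back edge signals deadlock. The adjacency-matrix encoding and the tiny example
below are hypothetical:

    /* Cycle detection in a wait-for graph by DFS. */
    #include <stdbool.h>
    #include <stdio.h>

    #define N 3
    static bool edge[N][N];      /* edge[i][j]: process i waits for process j */
    static int  state[N];        /* 0 = unvisited, 1 = on DFS path, 2 = done  */

    static bool dfs(int u)
    {
        state[u] = 1;                        /* u is on the current path */
        for (int v = 0; v < N; v++) {
            if (!edge[u][v]) continue;
            if (state[v] == 1) return true;  /* back edge: cycle -> deadlock */
            if (state[v] == 0 && dfs(v)) return true;
        }
        state[u] = 2;
        return false;
    }

    int main(void)
    {
        edge[0][1] = true;                   /* P0 waits for P1 */
        edge[1][0] = true;                   /* P1 waits for P0: circular wait */
        for (int i = 0; i < N; i++)
            if (state[i] == 0 && dfs(i)) {
                printf("deadlock detected\n");
                return 0;
            }
        printf("no deadlock\n");
        return 0;
    }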

Deadlock Recovery
A traditional operating system such as Windows doesn't deal with deadlock
recovery, as it is a time- and space-consuming process. Real-time operating systems use
deadlock recovery.

Preempt the resource

We can snatch one of the resources from the owner of the resource (a process) and give
it to another process, with the expectation that it will complete its execution and
release the resource sooner. Choosing which resource to snatch is, however,
going to be a bit difficult.

Rollback to a safe state

The system passes through various states to get into the deadlock state. The operating
system can roll back the system to a previous safe state. For this purpose, the OS needs to
implement checkpointing at every state.

The moment we get into deadlock, we roll back all the allocations to get back to the
previous safe state.

For Process

Kill a process
Killing a process can solve our problem, but the bigger concern is deciding which
process to kill. Generally, the operating system kills the process which has done the least
amount of work so far.

Kill all processes

This is not an advisable approach, but it can be used if the problem becomes
very serious. Killing all processes leads to inefficiency in the system, because all the
processes will execute again from the beginning.
OPERATING SYSTEMS
Unit – IV

Swapping

Swapping is a mechanism in which a process can be swapped temporarily out of main
memory (or moved) to secondary storage (disk), making that memory available to other
processes. At some later time, the system swaps the process back from secondary
storage to main memory.

Though performance is usually affected by the swapping process, it helps in running
multiple big processes in parallel, and for that reason swapping is also known
as a technique for memory compaction.

The total time taken by the swapping process includes the time it takes to move the entire
process to the secondary disk and then to copy the process back to memory, as well as
the time the process takes to regain main memory.

Let us assume that the user process is of size 2048KB and that the standard hard disk
where swapping will take place has a data transfer rate of around 1 MB (1024KB) per
second. The actual transfer of the 2048KB process to or from memory will take

2048KB / 1024KB per second
= 2 seconds
= 2000 milliseconds

Now, considering both swap-out and swap-in time, it will take a full 4000 milliseconds
plus other overhead while the process competes to regain main memory.

Memory Allocation

Main memory usually has two partitions −

• Low Memory − Operating system resides in this memory.

• High Memory − User processes are held in high memory.

The operating system uses the following memory allocation mechanisms.

S.N. Memory Allocation & Description

1 Single-partition allocation

In this type of allocation, a relocation-register scheme is used to protect user
processes from each other, and from changing operating-system code and data. The
relocation register contains the value of the smallest physical address, whereas the limit
register contains the range of logical addresses. Each logical address must be less than
the limit register.

2 Multiple-partition allocation

In this type of allocation, main memory is divided into a number of fixed-sized
partitions where each partition should contain only one process. When a partition
is free, a process is selected from the input queue and is loaded into the free
partition. When the process terminates, the partition becomes available for another
process.
Paging

A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory, and it is a section of a
hard disk that's set up to emulate the computer's RAM. The paging technique plays an
important role in implementing virtual memory.

Paging is a memory management technique in which the process address space is broken
into blocks of the same size called pages (the size is a power of 2, between 512 bytes and
8192 bytes). The size of a process is measured in the number of pages.

Similarly, main memory is divided into small fixed-sized blocks of (physical) memory
called frames, and the size of a frame is kept the same as that of a page, to have optimum
utilization of main memory and to avoid external fragmentation.
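The page/frame split can be shown in a few lines of C. The page table below is a
hypothetical toy, and the sketch assumes 4 KB pages, so the low 12 bits of every
address form the offset:

    /* Translating a virtual address with a toy page table (4 KB pages). */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    int main(void)
    {
        /* hypothetical page table: page number -> frame number */
        uint32_t page_table[] = {5, 9, 1, 7};

        uint32_t vaddr  = 0x2ABC;                   /* example address */
        uint32_t page   = vaddr >> PAGE_SHIFT;      /* page number = 2 */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* offset = 0xABC  */
        uint32_t frame  = page_table[page];         /* frame = 1       */
        uint32_t paddr  = (frame << PAGE_SHIFT) | offset;

        printf("virtual 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
               vaddr, page, offset, paddr);
        return 0;
    }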

Advantages and Disadvantages of Paging

Here is a list of advantages and disadvantages of paging −

• Paging reduces external fragmentation, but still suffers from internal
fragmentation.
• Paging is simple to implement and assumed to be an efficient memory management
technique.
• Due to the equal size of pages and frames, swapping becomes very easy.
• The page table requires extra memory space, so paging may not be good for a system
with small RAM.

Segmentation

Segmentation is a memory management technique in which each job is divided into
several segments of different sizes, one for each module, containing pieces that
perform related functions. Each segment is actually a different logical address space
of the program.

When a process is to be executed, its corresponding segments are loaded into non-
contiguous memory, though every segment is loaded into a contiguous block of
available memory.

Segmentation memory management works very similarly to paging, but here segments
are of variable length, whereas in paging pages are of fixed size.

A program segment contains the program's main function, utility functions, data
structures, and so on. The operating system maintains a segment map table for every
process and a list of free memory blocks along with segment numbers, their size and
corresponding memory locations in main memory. For each segment, the table stores
the starting address of the segment and the length of the segment. A reference to a
memory location includes a value that identifies a segment and an offset.
Virtual Memory

A computer can address more memory than the amount physically installed on the
system. This extra memory is actually called virtual memory and it is a section of a
hard disk that's set up to emulate the computer's RAM.

The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of
physical memory by using disk. Second, it allows us to have memory protection,
because each virtual address is translated to a physical address.

The following are situations in which the entire program is not required to be fully
loaded in main memory.

• User-written error handling routines are used only when an error occurs
in the data or computation.

• Certain options and features of a program may be used rarely.

• Many tables are assigned a fixed amount of address space even though
only a small amount of the table is actually used.

• The ability to execute a program that is only partially in memory would
confer many benefits:

• Less I/O would be needed to load or swap each user program
into memory.

• A program would no longer be constrained by the amount of physical
memory that is available.

• Each user program could take less physical memory, so more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.

In modern microprocessors intended for general-purpose use, a memory management
unit, or MMU, is built into the hardware. The MMU's job is to translate virtual
addresses into physical addresses.

Virtual memory is commonly implemented by demand paging. It can also be
implemented in a segmentation system. Demand segmentation can also be used to
provide virtual memory.

Demand Paging

A demand paging system is quite similar to a paging system with swapping, where
processes reside in secondary memory and pages are loaded only on demand, not in
advance. When a context switch occurs, the operating system does not copy any of the
old program’s pages out to the disk or any of the new program’s pages into the main
memory. Instead, it just begins executing the new program after loading the first page
and fetches that program’s pages as they are referenced.

While executing a program, if the program references a page which is not available in
the main memory because it was swapped out a little earlier, the processor treats this
invalid memory reference as a page fault and transfers control from the program to
the operating system, which brings the demanded page back into memory.
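
A rough sketch of servicing such a page fault, using a valid (present) bit per
page-table entry, is shown below. The entry layout and the helpers pick_frame and
load_from_disk are assumptions made for illustration.

    #include <stdint.h>

    /* Hypothetical page-table entry with a valid (present) bit. */
    typedef struct {
        uint32_t frame;  /* frame number, meaningful only if valid == 1 */
        int      valid;  /* 1 if the page is currently in main memory */
    } pte_t;

    /* Assumed helpers: allocate a free frame and read the page in from disk. */
    extern uint32_t pick_frame(void);
    extern void load_from_disk(uint32_t page, uint32_t frame);

    /* Called by the OS when a reference to 'page' faults. */
    void page_fault_handler(pte_t *table, uint32_t page)
    {
        uint32_t frame = pick_frame();   /* may itself trigger page replacement */
        load_from_disk(page, frame);     /* bring the page in on demand */
        table[page].frame = frame;       /* update the page table */
        table[page].valid = 1;           /* the faulting instruction then restarts */
    }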

Advantages

Following are the advantages of Demand Paging −

• Large virtual memory.

• More efficient use of memory.

• There is no limit on degree of multiprogramming.

Disadvantages

• The number of tables and the amount of processor overhead for handling page
interrupts are greater than in the case of simple paged management
techniques.
Page Replacement Algorithms
The page replacement algorithm decides which memory page is to be replaced. The
process of replacement is sometimes called swap out or write to disk. Page replacement
is done when the requested page is not found in the main memory (page fault).

Types of Page Replacement Algorithms

There are various page replacement algorithms. Each algorithm has a different method
by which the pages can be replaced.

1. Optimal Page Replacement algorithm → this algorithm replaces the page
which will not be referenced for the longest time in the future. Although it is not
practically implementable, it can be used as a benchmark; other algorithms are
compared to it in terms of optimality.
2. Least recently used (LRU) page replacement algorithm → this algorithm
replaces the page which has not been referenced for the longest time. It is
just the opposite of the optimal page replacement algorithm: in this, we look at
the past instead of the future.
3. FIFO → in this algorithm, a queue is maintained. The page which was assigned
a frame first will be replaced first. In other words, the oldest page, at the front
of the queue, is replaced on every page fault. A minimal sketch of FIFO follows
this list.
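
As a concrete illustration of FIFO replacement, here is a minimal C sketch that counts
page faults for a reference string. The frame count, reference string, and function
name are arbitrary assumptions made for illustration.

    #include <stdio.h>

    #define FRAMES 3

    /* Count page faults for a reference string under FIFO replacement. */
    int fifo_page_faults(const int *refs, int n)
    {
        int frames[FRAMES];
        int next = 0;              /* index of the oldest frame (next victim) */
        int faults = 0;

        for (int i = 0; i < FRAMES; i++)
            frames[i] = -1;        /* -1 marks an empty frame */

        for (int i = 0; i < n; i++) {
            int hit = 0;
            for (int j = 0; j < FRAMES; j++)
                if (frames[j] == refs[i]) { hit = 1; break; }
            if (!hit) {            /* page fault: replace the oldest page */
                frames[next] = refs[i];
                next = (next + 1) % FRAMES;
                faults++;
            }
        }
        return faults;
    }

    int main(void)
    {
        int refs[] = {7, 0, 1, 2, 0, 3, 0, 4};
        printf("page faults: %d\n",
               fifo_page_faults(refs, (int)(sizeof refs / sizeof refs[0])));
        return 0;
    }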

File System Interface:


For the majority of users, the file system is the most obvious aspect of any operating
system. It provides users with a method for storing and accessing both data and the
programs of the operating system, and all users of the computer system can use it.
The file system consists of two distinct parts:
• a collection of files, that store related data, and

• a directory structure, which organizes and provides information about all the files
in the system.

In this chapter, you will learn about the different file attributes, the concept of a
file and its storage, along with operations on files.
File Concept

Computers store information in storage media such as disks, tape drives, and optical
disks. The operating system provides a logical view of the information stored on the
disk. This logical storage unit is a file.

The information stored in files is non-volatile, meaning it is not lost during power
failures. A file is a named collection of related information that is stored on
physical storage.

Data cannot be written to secondary storage unless it is written to a file. A file, in
general, is a sequence of bits, bytes, lines, or records defined by its owner or
creator. The file has a structure defined by its owner or creator and depends on the
file type.

• Text file – It has a sequence of characters.
• Image file – It has visual information such as photographs, vector art,
and so on.
• Source file – It has subroutines and functions that are compiled later.
• Object file – It has a sequence of bytes, organized into blocks understandable
by the linker.
• Executable file – The binary code that the loader brings into memory for
execution is stored in an executable file.

File Access Mechanisms


File access mechanism refers to the manner in which the records of a file may be
accessed. There are several ways to access files −

• Sequential access
• Direct/Random access
• Indexed sequential access

Sequential access

Sequential access is that in which the records are accessed in some sequence, i.e.,
the information in the file is processed in order, one record after the other. This access
method is the most primitive one. Example: Compilers usually access files in this
fashion.

Direct/Random access

• Random access file organization provides for accessing the records directly.

• Each record has its own address on the file, with the help of which it can be
directly accessed for reading or writing.

• The records need not be in any sequence within the file, and they need not be in
adjacent locations on the storage medium (a short POSIX sketch follows this list).
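
On a POSIX system, direct access to fixed-size records can be sketched with lseek(),
as below. The record size and the function name are assumptions for illustration.

    #include <fcntl.h>
    #include <unistd.h>

    #define RECORD_SIZE 128   /* hypothetical fixed record length in bytes */

    /* Read record number k directly, without scanning the earlier records. */
    ssize_t read_record(int fd, long k, char *buf)
    {
        /* Jump straight to the k-th record's byte offset. */
        if (lseek(fd, (off_t)(k * RECORD_SIZE), SEEK_SET) == (off_t)-1)
            return -1;                        /* seek failed */
        return read(fd, buf, RECORD_SIZE);    /* read exactly one record */
    }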

Indexed sequential access

• This mechanism is built on top of sequential access.
• An index is created for each file, containing pointers to various blocks.
• The index is searched sequentially, and its pointer is used to access the file
directly.

Directory Structure

A directory can be defined as the listing of the related files on the disk. The
directory may store some or all of the file attributes.
To get the benefit of different file systems, a hard disk can be divided into a number
of partitions of different sizes. The partitions are also called volumes or mini
disks.

Each partition must have at least one directory in which all the files of the partition
can be listed. A directory entry is maintained for each file in the directory, and it
stores all the information related to that file.

A directory can be viewed as a file which contains the metadata of a group of files.

Every directory supports a number of common operations on files:

1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files

File System Implementation


A file is a collection of related information. The file system resides on secondary storage
and provides efficient and convenient access to the disk by allowing data to be stored,
located, and retrieved.
The file system is organized in several layers:
• I/O Control level –
Device drivers act as an interface between devices and the OS; they help to transfer
data between the disk and main memory. A driver takes a block number as input and, as
output, issues low-level hardware-specific instructions.
• Basic file system –
It issues generic commands to the device driver to read and write physical blocks on
the disk. It manages the memory buffers and caches. A block in the buffer can hold the
contents of a disk block, and the cache stores frequently used file system metadata.
• File organization Module –
It has information about files, the location of files, and their logical and physical
blocks. Physical block addresses do not match the logical block numbers (numbered from
0 to N), so a translation is needed to locate each block. It also manages free space
and tracks unallocated blocks.
• Logical file system –
It manages metadata information about a file, i.e., all details about a file except the
actual contents of the file. It maintains this via file control blocks. A file control
block (FCB) has information about a file – owner, size, permissions, and the location
of the file contents (a simplified sketch follows this list).
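
The FCB mentioned above might hold fields like the following. This struct is a
simplified assumption for illustration, not the layout used by any particular file
system.

    #include <stdint.h>

    /* Sketch of a file control block (FCB). */
    typedef struct {
        char     owner[32];     /* file owner */
        uint64_t size;          /* file size in bytes */
        uint16_t permissions;   /* access permission bits */
        uint32_t first_block;   /* location of the file contents on disk */
    } fcb_t;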

Allocation Methods

Space Allocation

Files are allocated disk space by the operating system. Operating systems deploy the
following three main ways to allocate disk space to files.

• Contiguous Allocation
• Linked Allocation
• Indexed Allocation
Contiguous Allocation

• Each file occupies a contiguous address space on disk.
• The assigned disk addresses are in linear order.
• Easy to implement.
• External fragmentation is a major issue with this type of allocation technique.

Linked Allocation

• Each file carries a list of links to disk blocks.
• The directory contains a link / pointer to the first block of a file.
• No external fragmentation.
• Effectively used for sequential access files.
• Inefficient in the case of direct access files.

Indexed Allocation

• Provides solutions to the problems of contiguous and linked allocation.
• An index block is created for each file, holding pointers to its disk blocks.
• Each file has its own index block which stores the addresses of the disk space
occupied by the file (a lookup sketch follows this list).
• The directory contains the addresses of the index blocks of files.
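
Under indexed allocation, finding a file's n-th data block reduces to one array access
in that file's index block, as in this minimal sketch (the names and sizes are
illustrative assumptions):

    #include <stdint.h>

    #define POINTERS_PER_INDEX 128   /* hypothetical index-block capacity */

    /* An index block: one pointer per data block of the file. */
    typedef struct {
        uint32_t block[POINTERS_PER_INDEX];
    } index_block_t;

    /* Return the disk address of the n-th logical block of the file. */
    uint32_t nth_block(const index_block_t *ix, uint32_t n)
    {
        return ix->block[n];   /* direct access: no chain of links to follow */
    }

Contrast this with linked allocation, where reaching block n requires following n
pointers, one disk read at a time.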

Free space management

The system keeps track of the free disk blocks for allocating space to files when they
are created. Also, to reuse the space released by deleting files, free space
management becomes crucial. The system maintains a free space list which keeps track
of the disk blocks that are not allocated to any file or directory. The free space list
can be implemented mainly as:

1. Bitmap or Bit vector –
A Bitmap or Bit Vector is a series or collection of bits where each bit
corresponds to a disk block. The bit can take two values, 0 and 1: 0 indicates
that the block is allocated and 1 indicates a free block (a short sketch of
this bookkeeping follows the list).
For instance, a disk of 16 blocks whose only free blocks are blocks 4–6 and
13–14 (numbering from 0) would be represented by the bitmap 0000111000000110.
2. Linked List –
In this approach, the free disk blocks are linked together i.e. a free block
contains a pointer to the next free block. The block number of the very first
disk block is stored at a separate location on disk and is also cached in memory.
3. Grouping –
This approach stores the addresses of free blocks in the first free block. The
first free block stores the addresses of some n free blocks. Of these n
blocks, the first n-1 are actually free, and the last one contains the
addresses of the next n free blocks.
An advantage of this approach is that the addresses of a group of free disk
blocks can be found easily.
4. Counting –
This approach stores the address of the first free disk block and a number n of
free contiguous disk blocks that follow the first block.
Every entry in the list would contain:
1. Address of first free disk block
2. A number n
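
The bit-vector bookkeeping from item 1 can be sketched with a few C helpers. The
sizes and names are assumptions; the 1-means-free convention matches the text above.

    #include <stdint.h>

    #define NUM_BLOCKS 16

    /* One bit per disk block: 1 = free, 0 = allocated. */
    static uint8_t bitmap[(NUM_BLOCKS + 7) / 8];

    int is_free(int b)         { return (bitmap[b / 8] >> (b % 8)) & 1; }
    void mark_allocated(int b) { bitmap[b / 8] &= (uint8_t)~(1u << (b % 8)); }
    void mark_free(int b)      { bitmap[b / 8] |= (uint8_t)(1u << (b % 8)); }

    /* Scan for the first free block; returns -1 if the disk is full. */
    int first_free(void)
    {
        for (int b = 0; b < NUM_BLOCKS; b++)
            if (is_free(b))
                return b;
        return -1;
    }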
