
OPERATING SYSTEMS MODULE 3

DEADLOCKS
DEADLOCK
Define deadlock.
Deadlock is a situation in which a set of processes is blocked because each process holds a resource while waiting for another resource held by some other process.

Real life example:


When two trains approach each other on the same single track, neither train can move once they are face to face.
A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s).

SYSTEM MODEL
A system consists of a finite number of resources (for example: memory, printers, CPUs). These
resources are distributed among a number of processes. A process must request a resource before
using it and must release the resource after using it. A process may request as many resources as it
needs to carry out a given task, but the total number of resources requested must not exceed the
total number of resources available.
Under normal operation, a process may use a resource only in the following sequence:
1. Request
If the request cannot be granted immediately (for example, because the resource is being
used by another process), then the requesting process must wait until it can acquire the resource.
For example: open( ), malloc( ), new( ), and request( )
2. Use
The process operates on the resource. For example: prints to the printer or reads from the file.
3. Release
The process releases the resource, so that the resource becomes available for other processes.
For example: close( ), free( ), delete( ), and release( ).
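The request/use/release sequence above can be sketched with a lock standing in for a single non-sharable resource (a hedged illustration, not from the original notes; the `print_job` name is invented):

```python
import threading

printer = threading.Lock()   # one non-sharable resource instance

def print_job(pages):
    printer.acquire()        # 1. Request: block until the resource is free
    try:
        result = f"printed {pages} pages"   # 2. Use the resource
    finally:
        printer.release()    # 3. Release: make it available to other processes
    return result
```

If another thread calls `print_job` while the lock is held, its `acquire()` blocks, which is exactly the "requesting process must wait" case in step 1.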
A set of processes is deadlocked when every process in the set is waiting for a resource
that is currently allocated to another process in the set. Deadlock may involve different
types of resources.
As shown in the figure below, both processes P1 and P2 need resources to continue execution.
P1 holds resource R2 and requires the additional resource R1; P2 holds R1 and requires the
additional resource R2. Thus, neither process can continue.

Multithreaded programs are good candidates for deadlock because multiple threads compete for
shared resources.

DEADLOCK CHARACTERIZATION
In a deadlock, processes never finish executing, and system resources are tied up, preventing other
jobs from starting.

NECESSARY CONDITIONS

What are the necessary conditions for deadlock? OR What are the characteristics of deadlock?

Four conditions must hold simultaneously for a deadlock to occur:
1. Mutual Exclusion
 Only one process at a time can use a resource.
 At least one resource must be held in a non-sharable mode.
 If any other process requests this resource, then the requesting-process must
wait for the resource to be released.
2. Hold and Wait
A process must be holding at least one resource while simultaneously waiting to acquire additional
resources held by other processes.
3. No Preemption
Once a process is holding a resource ( i.e. once its request has been granted ), then that
resource cannot be taken away from that process until the process voluntarily releases it.
4. Circular Wait
A set of processes {P0, P1, P2, ..., Pn} must exist such that P0 is waiting for a resource held
by P1, P1 is waiting for a resource held by P2, ..., Pn−1 is waiting for a resource held by
Pn, and Pn is waiting for a resource held by P0.


RESOURCE-ALLOCATION-GRAPH (RAG)

The resource-allocation-graph (RAG) is a directed graph that can be used to describe the
deadlock situation.
RAG consists of a
i. Set of vertices (V) and
ii. Set of edges (E).
V is divided into two types of nodes:
1) Process vertices (nodes): P = {P1, P2, ..., Pn}, the set of all active processes
in the system.
2) Resource vertices (nodes): R = {R1, R2, ..., Rm}, the set of all resource
types in the system.
E is divided into two types of edges:
1) Request Edge
 A directed-edge Pi → Rj is called a request edge.
 Pi → Rj indicates that process Pi has requested a resource Rj.
2) Assignment Edge
 A directed-edge Rj → Pi is called an assignment edge.
 Rj → Pi indicates that a resource Rj has been allocated to process Pi.
NOTE:
Suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge Pi → Rj into an assignment edge Rj → Pi does not form a cycle in the
resource-allocation graph.
Pictorially, we represent each process Pi as a circle. We represent each resource-type Rj as a
rectangle.


As shown in the figures below, a RAG can illustrate the following three situations:

Describe a RAG i) with deadlock ii) with a cycle but no deadlock

1) RAG with a deadlock
2) RAG with a cycle and deadlock
3) RAG with a cycle but no deadlock

CONCLUSION:
1) If the graph contains no cycles, then the system is not deadlocked.
2) If the graph contains a cycle and each resource type has only a single instance, then
a deadlock has occurred.
3) If the graph contains a cycle and some resource types have several instances, then
a deadlock is possible but not necessarily present.
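These conclusions reduce deadlock detection in a single-instance system to cycle detection in a directed graph. A minimal DFS-based sketch (the adjacency-dict encoding is an assumption of this sketch; the node names match the P1/P2, R1/R2 situation described earlier):

```python
def has_cycle(graph):
    """Return True if the directed graph {node: [successors]} contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on the current DFS path / finished
    color = {}

    def visit(node):
        color[node] = GRAY
        for succ in graph.get(node, []):
            if color.get(succ, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

# RAG with a deadlock: P1 holds R2 and requests R1; P2 holds R1 and requests R2.
deadlocked_rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
# No cycle: R2 is free, so P2 is not waiting on anything.
acyclic_rag = {"P1": ["R1"], "R1": ["P2"], "P2": []}
```

Per conclusion 2, a cycle in `deadlocked_rag` means deadlock because every resource here has a single instance.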

METHODS FOR HANDLING DEADLOCKS


Briefly explain the methods for handling deadlock.
There are three ways of handling deadlocks:
1. Deadlock prevention or avoidance: Do not allow the system to get into a deadlocked state. To
prevent deadlocks, use a set of methods ensuring that at least one of the necessary conditions for
deadlock cannot hold. To avoid deadlocks, the system must have additional information about all
processes, such as which resources a process will request and use during its lifetime. With this
additional knowledge, the system can decide for each request whether or not the process should wait.
2. Deadlock detection and recovery: Abort a process or preempt some resources when
deadlocks are detected. Deadlock detection is fairly straightforward, but deadlock
recovery requires either aborting processes or preempting resources.

3. Ignore the problem altogether and pretend that deadlocks never occur in the system:
This solution is used by most operating systems.

DEADLOCK-PREVENTION

How can deadlock be prevented? Describe any three of them.


Deadlocks can be eliminated by preventing (i.e., making false) at least one of the four necessary
conditions:
1) Mutual exclusion
2) Hold-and-wait
3) No preemption
4) Circular-wait.
Mutual Exclusion

 This condition must hold for non-sharable resources. For example: A printer cannot be
simultaneously shared by several processes.
 On the other hand, shared resources do not lead to deadlocks. For example: Simultaneous
access can be granted for read-only file.
 A process never waits for accessing a sharable resource.
 In general, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically non-sharable.
Hold and Wait:
To prevent this condition, we must ensure that whenever a process requests a resource, it does
not hold any other resources.
There are several solutions to this problem.
 Each process must be allocated all of its resources before it begins execution.
 A process may request resources only when it holds none. A process may request some
resources and use them; before it can request any additional resources, however, it must
release all the resources it currently holds.
No Preemption
To prevent this condition the resources must be preempted.
There are several solutions to this problem.
 If a process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all resources currently being held are preempted.


 The preempted resources are added to the list of resources for which the process is
waiting.
 The process will be restarted only when it regains the old resources and the new
resources that it is requesting.
 When a process requests resources, we first check whether they are available. If they are,
we allocate them; if not, we check whether they are allocated to some other process that is
itself waiting for additional resources. If so, we preempt the desired resources from that
waiting process.
Circular-Wait
 To ensure that this condition never holds, impose a total ordering of all resource types
by assigning a number to each resource, and require that each process requests resources
in increasing order of enumeration.
 Alternatively, require that whenever a process requests a resource, it has released any
resources with a higher or equal number.
 One big challenge in this scheme is determining the relative ordering of the different
resources.
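The resource-ordering idea can be sketched as follows (a hypothetical illustration; the `RANK` table and helper names are invented). Every caller sorts the resources it needs by the global rank before acquiring, so all processes request in the same increasing order and a circular wait cannot form:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
RANK = {id(lock_a): 1, id(lock_b): 2}   # assumed global numbering of resources

def acquire_in_order(*locks):
    """Acquire locks in increasing global rank, regardless of argument order."""
    ordered = sorted(locks, key=lambda lk: RANK[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

def release_all(locks):
    for lk in reversed(locks):
        lk.release()

# Even if a thread names the resources "out of order"...
held = acquire_in_order(lock_b, lock_a)
# ...they are acquired as [lock_a, lock_b], the global order.
release_all(held)
```

With this discipline, two threads can never hold each other's next lock: whichever thread gets the lowest-ranked lock first will also get the rest.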

DEADLOCK AVOIDANCE
The general idea behind deadlock avoidance is to keep the system from ever entering a deadlocked
state. Avoidance requires additional information about how resources will be requested.
A deadlock-avoidance algorithm:
 Requires more information about each process, and
 Tends to lead to low device utilization.
For example:
1) In simple algorithms, the scheduler only needs to know the maximum number
of units of each resource that a process might potentially use.
2) In more complex algorithms, the scheduler can also take advantage of knowing
exactly which resources may be needed in which order.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is defined
by the number of available and allocated resources and the maximum demand of each
process.


SAFE STATE

A state is safe if the system can allocate all resources requested by all processes, in some order,
without entering a deadlock state.
Formally, a state is safe if there exists a safe sequence of processes <P0, P1, P2, ..., Pn> such that
the requests of each process Pi can be satisfied by the currently available resources plus the
resources held by all Pj with j < i.
If no safe sequence exists, then the system is in an unsafe state, which may lead to
deadlock. All safe states are deadlock-free, but not all unsafe states lead to deadlocks.

Safe, Unsafe and Deadlock state spaces


DEADLOCK AVOIDANCE USING RESOURCE ALLOCATION GRAPH ALGORITHM

If each resource type has only a single instance, then deadlock states can be detected by cycles
in the resource-allocation graph.
In this case, unsafe states can be recognized and avoided by augmenting the resource-allocation
graph with claim edges (denoted by dashed lines).
A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future.
The important steps are as follows:
1. When process Pi requests resource Rj, the claim edge Pi → Rj is converted to a request
edge.
2. Similarly, when resource Rj is released by process Pi, the assignment edge Rj → Pi is
reconverted to the claim edge Pi → Rj.
3. The request for Rj from Pi can be granted only if converting the request edge to an
assignment edge does not form a cycle in the resource-allocation graph.
To apply this algorithm, each process Pi must know all its claims before it starts executing.
Conclusion:
 If no cycle exists, then the allocation will leave the system in a safe state.
 If a cycle is found, the allocation would put the system into an unsafe state and may cause
a deadlock.
For example: Consider a resource allocation graph shown in Figure below

Suppose P2 requests R2. Though R2 is currently free, we cannot allocate it to P2, as this action
would create a cycle in the graph, as shown in the figure below. The cycle indicates that the system
would be in an unsafe state: if P1 later requests R2 and P2 requests R1, a deadlock would occur.


Unsafe State In Resource-Allocation Graph


Problem:
The resource-allocation graph algorithm is not applicable when there are multiple instances for
each resource.
Solution: Use banker's algorithm.
BANKER'S ALGORITHM
This algorithm is applicable to systems with multiple instances of each resource type, although it
is less efficient than the resource-allocation-graph algorithm. When a process starts up, it must
declare the maximum number of resources it may need; this number may not exceed the total
number of resources in the system. When a request is made, the system determines whether granting
the request would leave the system in a safe state.
If the system would remain in a safe state, then the resources are allocated;
else the process must wait until some other process releases enough resources.
Assumptions: Let n = number of processes in the system, and m = number of resource types.
The following data structures are used to implement the banker's algorithm.
Explain the data structures used in Bankers algorithm.
Available [m]
 This vector indicates the number of available resources of each type.
 If Available[j] = k, then k instances of resource type Rj are available.
Max [n][m]
 This matrix indicates the maximum demand of each process for each resource.
 If Max[i,j] = k, then process Pi may request at most k instances of resource type Rj.
Allocation [n][m]
 This matrix indicates the number of resources currently allocated to each process.
 If Allocation[i,j] = k, then Pi is currently allocated k instances of Rj.


Need [n][m]
 This matrix indicates the remaining resource need of each process.
 If Need[i,j] = k, then Pi may need k more instances of resource Rj to complete its task.
 So, Need[i,j] = Max[i,j] - Allocation[i,j]
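The Need computation is a simple element-wise subtraction. A small sketch (the two rows shown are the P0 and P1 rows from worked example 1 later in this module):

```python
def need_matrix(max_demand, allocation):
    # Need[i][j] = Max[i][j] - Allocation[i][j], element-wise per process
    return [[m - a for m, a in zip(max_row, alloc_row)]
            for max_row, alloc_row in zip(max_demand, allocation)]

Max        = [[7, 5, 3], [3, 2, 2]]   # rows for P0, P1
Allocation = [[0, 1, 0], [2, 0, 0]]
```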
Write and explain the Banker's algorithm (explain both the Safety and the Resource-Request algorithm).
As described above, the algorithm applies to systems with multiple instances of each resource type:
each process declares its maximum demand in advance, and a request is granted only if it would
leave the system in a safe state; otherwise the process must wait.
Assumptions: Let n = number of processes in the system, and m = number of resource types.
The following data structures are used to implement the banker's algorithm:
Available [m], Max [n][m], Allocation [n][m], Need [n][m]
The Banker’s algorithm has two parts: 1) Safety Algorithm
2) Resource – Request Algorithm
Safety Algorithm
Step 1:
Let Work and Finish be two vectors of length m and n respectively.
Initialize:
Work = Available
Finish[i] = false for i=1,2,3,…….n
Step 2:
Find an index i such that both
a) Finish[i] = false
b) Needi <= Work
If no such i exists, then go to step 4.
Step 3:
Set:
Work = Work + Allocation(i)
Finish[i] = true
Go to step 2
Step 4:
If Finish[i] = true for all i, then the system is in a safe state; otherwise, it is unsafe.
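The four steps above can be sketched in Python (a hedged illustration, not part of the original notes; the demo values below are the Available, Allocation, and Need data from worked example 1 later in this module):

```python
def is_safe(available, allocation, need):
    """Banker's Safety algorithm: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)              # Step 1: Work = Available
    finish = [False] * n                # Finish[i] = false for all i
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):              # Step 2: find i with Finish[i] = false, Need_i <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):      # Step 3: Work = Work + Allocation_i
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None   # Step 4

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
```

On this snapshot the function reproduces the hand computation, yielding the safe sequence <P1, P3, P4, P0, P2>.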


Resource-Request Algorithm

Explain the Resource-Request algorithm.


This algorithm determines whether a new request is safe, and grants it only if it is. When a
request is made (that does not exceed the currently available resources), pretend it has been granted,
and then check whether the resulting state is safe. If so, grant the request; if not, deny it.
Let Requesti be the request vector of process Pi.
If Requesti [j]=k, then process Pi wants k instances of the resource type Rj.
Step 1:
If Requesti <= Needi then go to step 2
else
Raise an error condition, since the process has exceeded its maximum claim.
Step 2:
If Requesti <= Available then go to step 3
else
Pi must wait, since the resources are not available.
Step 3:
If the system wants to allocate the requested resources to process Pi then modify
the state as follows:
Available = Available – Requesti
Allocationi = Allocationi + Requesti
Needi = Needi – Requesti
Step 4:
If the resulting resource-allocation state is safe, then
i) the transaction is completed and
ii) Pi is allocated its resources.
Step 5:
If the new state is unsafe, then
i) Pi must wait for Requesti and
ii) the old resource-allocation state is restored.
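Steps 1–5 can be sketched as one function (a hedged illustration; `grant_request` and its in-place state mutation are design choices of this sketch, and the `is_state_safe` helper mirrors the Safety algorithm given above):

```python
def is_state_safe(available, allocation, need):
    """Safety check, as in the Safety algorithm described earlier."""
    work, finish = list(available), [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, row in enumerate(need):
            if not finish[i] and all(n <= w for n, w in zip(row, work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                changed = True
    return all(finish)

def grant_request(i, request, available, allocation, need):
    """Return True and commit the allocation if granting Pi's request is safe;
    otherwise leave the state unchanged and return False (Pi must wait)."""
    if any(r > n for r, n in zip(request, need[i])):
        raise ValueError("process exceeded its maximum claim")   # Step 1
    if any(r > a for r, a in zip(request, available)):
        return False                                             # Step 2: resources unavailable
    for j, r in enumerate(request):                              # Step 3: pretend-grant
        available[j] -= r
        allocation[i][j] += r
        need[i][j] -= r
    if is_state_safe(available, allocation, need):               # Step 4: commit
        return True
    for j, r in enumerate(request):                              # Step 5: roll back
        available[j] += r
        allocation[i][j] -= r
        need[i][j] += r
    return False
```

For example, applying P1's request (1, 0, 2) to the snapshot of worked example 1 commits the allocation and leaves Available = (2 3 0), matching the hand computation in that example.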


An Illustrative Example

1. Consider the following snapshot of a system:

Allocation Max Available


A B C A B C A B C
P0 0 1 0 7 5 3 3 3 2
P1 2 0 0 3 2 2
P2 3 0 3 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3

Answer the following questions using Banker's algorithm.


i) What is the content of the matrix need?
ii) Is the system in a safe state?
iii) If a request from process P1 arrives for (1, 0, 2) can the request be granted
immediately?
Solution:
The content of the matrix Need is given by Need = Max - Allocation
So, the content of Need Matrix is:
Need
A B C
P0 7 4 3
P1 1 2 2
P2 6 0 0
P3 0 1 1
P4 4 3 1

Applying the Safety algorithm on the given system:


Step 1: Initialization
Work = Available ie: Work = 3 3 2

Process P0 P1 P2 P3 P4

Finish False False False False False


Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (3 3 2) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (1 2 2) <= (3 3 2) ie: True, So P1 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P1] = (3 3 2) + (2 0 0) = (5 3 2)

Process P0 P1 P2 P3 P4

Finish False True False False False

Step 2: For i=2


Finish[P2] = false and Need[P2] <= Work i.e. (6 0 0) <= (5 3 2) ie: false, So P2 must wait.
Step 2: For i=3
Finish[P3] = false and Need[P3] <= Work i.e. (0 1 1) <= (5 3 2) ie: True, So P3 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P3] = (5 3 2) + (2 1 1) = (7 4 3)

Process P0 P1 P2 P3 P4

Finish False True False True False

Step 2: For i=4


Finish[P4] = false and Need[P4] <= Work i.e. (4 3 1) <= (7 4 3) ie: True, So P4 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P4] = (7 4 3) + (0 0 2) = (7 4 5)

Process P0 P1 P2 P3 P4

Finish False True False True True

Step 2: For i=0


Finish[P0] = false and Need[P0] <= Work i.e. (7 4 3) <= (7 4 5) ie: True, So P0 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P0] = (7 4 5) + (0 1 0) = (7 5 5)

Process P0 P1 P2 P3 P4

Finish True True False True True


Step 2: For i=2


Finish[P2] = false and Need[P2] <= Work i.e. (6 0 0) <= (7 5 5) ie: True, So P2 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P2] = (7 5 5) + (3 0 2) = (10, 5, 7)

Process P0 P1 P2 P3 P4

Finish True True True True True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is currently in a safe state. The safe sequence is <P1, P3, P4, P0, P2>.
ii) Conclusion: Yes, the system is currently in a safe state.

Solution (iii): P1 requests (1 0 2) i.e. Request [P1] = (1 0 2)


To decide whether the request is granted, we use Resource Request algorithm.
Step 1: Request[P1] <= Need[P1] i.e. (1 0 2) <= (1 2 2) ie: true.
Step 2: Request[P1] <=Available (at time t0, when system snapshot is taken)
i.e. (1 0 2) <= (3 3 2) ie: true.
Step 3: Available = Available – Request [P1] = (3 3 2) - (1 0 2) = (2 3 0)
Allocation[P1] = Allocation[P1] + Request[P1] = (2 0 0) + (1 0 2)= (3 0 2)
Need[P1] = Need[P1] – Request[P1] = (1 2 2) - (1 0 2)= (0 2 0)

We arrive at the following new system state:


Allocation Max Available
A B C A B C A B C
P0 0 1 0 7 5 3 2 3 0
P1 3 0 2 3 2 2
P2 3 0 2 9 0 2
P3 2 1 1 2 2 2
P4 0 0 2 4 3 3

The content of the matrix Need is given by Need = Max - Allocation


So, the content of Need Matrix is:


Need
A B C
P0 7 4 3
P1 0 2 0
P2 6 0 0
P3 0 1 1
P4 4 3 1
To determine whether this new system state is safe, we again execute the Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = 2 3 0

Process P0 P1 P2 P3 P4

Finish False False False False False

Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (2 3 0) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (0 2 0) <= (2 3 0) ie: True, So P1 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P1] = (2 3 0) + (3 0 2) = (5 3 2)

Process P0 P1 P2 P3 P4

Finish False True False False False

Step 2: For i= 2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (5 3 2) ie: false So P2 must wait.
Step 2: For i=3
Finish [P3] = false and Need [P3] <= Work i.e. (0 1 1) <= (5 3 2) ie: True, So P3 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P3] = (5 3 2) + (2 1 1) = (7 4 3)

Process P0 P1 P2 P3 P4

Finish False True False True False


Step 2: For i=4


Finish [P4] = false and Need [P4] <= Work i.e. (4 3 1) <= (7 4 3) ie: True, So P4 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P4] = (7 4 3) + (0 0 2) = (7 4 5)

Process P0 P1 P2 P3 P4

Finish False True False True True

Step 2: For i =0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (7 4 5) ie: True, So P0 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P0] = (7 4 5) + (0 1 0) = (7 5 5)

Process P0 P1 P2 P3 P4

Finish True True False True True

Step 2: For i =2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (7 5 5) ie: True, So P2 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P2] = (7 5 5) + (3 0 2) = (10, 5, 7)

Process P0 P1 P2 P3 P4

Finish True True True True True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is currently in a safe state. The safe sequence is <P1, P3, P4, P0, P2>.
Conclusion: Since the system is in a safe state, the request can be granted.


2. Consider the following snapshot of a system:


Allocation Max Available
A B C A B C A B C
P0 0 0 2 0 0 4 1 0 2
P1 1 0 0 2 0 1
P2 1 3 5 1 3 7
P3 6 3 2 8 4 2
P4 1 4 3 1 5 7
Answer the following questions using Banker's algorithm.
i) What is the content of the matrix need?
ii) Is the system in a safe state?
iii) If a request from process P2 arrives for (0, 0, 2) can the request be granted
immediately?
Solution:
i) The content of the matrix Need is given by Need = Max - Allocation
So, the content of Need Matrix is:
Need
A B C
P0 0 0 2
P1 1 0 1
P2 0 0 2
P3 2 1 0
P4 0 1 4

ii) Applying the Safety algorithm on the given system:


Step 1: Initialization
Work = Available ie: Work = (1 0 2)

Process P0 P1 P2 P3 P4

Finish False False False False False

Step 2: For i=0

Need[P0] <= work ie: (0 0 2) <= (1 0 2) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 0 2) + (0 0 2) = (1 0 4)


Set Finish[P0] = True

Step 2: For i=1

Need[P1] <= work ie: (1 0 1) <= (1 0 4) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (1 0 4) + (1 0 0) = (2 0 4)

Set Finish[P1] = True

Step 2: For i=2

Need[P2] <= work ie: (0 0 2) <= (2 0 4) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (2 0 4) + (1 3 5) = (3 3 9)

Set Finish[P2] = True

Step 2: For i=3

Need[P3] <= work ie: (2 1 0) <= (3 3 9) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (3 3 9) + (6 3 2) = (9, 6, 11)

Set Finish[P3] = True

Step 2: For i=4

Need[P4] <= work ie: (0 1 4) <= (9, 6, 11) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (9, 6, 11) + (1 4 3) = (10, 10, 14)

Set Finish[P4] = True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is currently in a safe state. The safe sequence is <P0, P1, P2, P3, P4>.

Solution (iii): P2 requests (0 0 2) i.e. Request [P2] = (0 0 2)


To decide whether the request is granted, we use Resource Request algorithm.
Step 1: Request[P2] <= Need[P2] i.e. (0 0 2) <= (0 0 2) ie: true.
Step 2: Request[P2] <=Available (at time t0, when system snapshot is taken)
i.e. (0 0 2) <= (1 0 2) ie: true.
Step 3: Available = Available – Request [P2] = (1 0 2) - (0 0 2) = (1 0 0)
Allocation[P2] = Allocation[P2] + Request[P2] = (1 3 5) + (0 0 2)= (1 3 7)
Need[P2] = Need[P2] – Request[P2] = (0 0 2) - (0 0 2) = (0 0 0)


We arrive at the following new system state:


Allocation Max Available
A B C A B C A B C
P0 0 0 2 0 0 4 1 0 0
P1 1 0 0 2 0 1
P2 1 3 7 1 3 7
P3 6 3 2 8 4 2
P4 1 4 3 1 5 7

The content of the matrix Need is given by Need = Max - Allocation


So, the content of Need Matrix is:
Need
A B C
P0 0 0 2
P1 1 0 1
P2 0 0 0
P3 2 1 0
P4 0 1 4
To determine whether this new system state is safe, we again execute the Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = (1 0 0)

Process P0 P1 P2 P3 P4

Finish False False False False False

Step 2: For i=0

Need[P0] <= work ie: (0 0 2) <= (1 0 0) ie: False, So P0 must wait.


Step 2: For i=1

Need[P1] <= work ie: (1 0 1) <= (1 0 0) ie: False So P1 must wait.


Step 2: For i=2

Need[P2] <= work ie: (0 0 0) <= (1 0 0) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 0 0) + (1 3 7) = (2 3 7)

Set Finish[P2] = True


Step 2: For i=3

Need[P3] <= work ie: (2 1 0) <= (2 3 7) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 3 7) + (6 3 2) = (8, 6, 9)

Set Finish[P3] = True

Step 2: For i=4

Need[P4] <= work ie: (0 1 4) <= (8, 6, 9) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (8, 6, 9) + (1 4 3) = (9, 10, 12)

Set Finish[P4] = True

Step 2: For i=0


Need[P0] <= work ie: (0 0 2) <= (9, 10, 12) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (9, 10, 12) + (0 0 2) = (9, 10, 14)

Set Finish[P0] = True

Step 2: For i=1


Need[P1] <= work ie: (1 0 1) <= (9, 10, 14) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (9, 10, 14) + (1 0 0) = (10, 10, 14)

Set Finish[P1] = True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is in a safe state. The safe sequence is <P2, P3, P4, P0, P1>.
Conclusion: Since the system is in a safe state, the request can be granted.

3. Consider the following snapshot of a system:


Allocation Max Available
A B C D A B C D A B C D
P0 0 0 1 2 0 0 1 2 1 5 2 0
P1 1 0 0 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6
Answer the following questions using Banker's algorithm.
i) What is the content of the matrix need?
ii) Is the system in a safe state?
iii) If a request from process P1 arrives for (0, 4, 2, 0) can the request be granted?

The content of the matrix Need is given by Need = Max - Allocation


So, the content of Need Matrix is:
Need
A B C D
P0 0 0 0 0
P1 0 7 5 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2
Step 1: Initialization
Work = Available ie: Work = (1 5 2 0)

Process P0 P1 P2 P3 P4

Finish False False False False False

Step 2: For i=0


Need[P0] <= work ie: (0 0 0 0) <= (1 5 2 0) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 5 2 0) + (0 0 1 2) = (1 5 3 2)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (0 7 5 0) <= (1 5 3 2) ie: False, So P1 must wait
Step 2: For i=2
Need[P2] <= work ie: (1 0 0 2) <= (1 5 3 2) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 5 3 2) + (1 3 5 4) = (2 8 8 6)
Set Finish[P2] = True

Step 2: For i=3


Need[P3] <= work ie: (0 0 2 0) <= (2, 8, 8, 6) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2, 8, 8, 6) + (0 6 3 2) = (2, 14, 11, 8)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 6 4 2) <= (2, 14, 11, 8) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (2, 14, 11, 8) + (0 0 1 4) = (2, 14, 12, 12)
Set Finish[P4] = True

Step 2: For i=1


Need[P1] <= work ie: (0 7 5 0) <= (2, 14, 12, 12)ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (2, 14, 12, 12) + (1 0 0 0) = (3, 14, 12, 12)
Set Finish[P1] = True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is currently in a safe state. The safe sequence is <P0, P2, P3, P4, P1>.
Solution (iii): P1 requests (0, 4, 2, 0) i.e. Request [P1] = (0, 4, 2, 0)
To decide whether the request is granted, we use Resource Request algorithm.
Step 1: Request[P1] <= Need[P1] i.e. (0, 4, 2, 0) <= (0, 7, 5, 0) ie: true.
Step 2: Request[P1] <=Available (at time t0, when system snapshot is taken)
i.e. (0, 4, 2, 0) <= (1, 5, 2, 0) ie: true.
Step 3: Available = Available – Request [P1] = (1, 5, 2, 0) - (0, 4, 2, 0) = (1, 1, 0, 0)
Allocation[P1] = Allocation[P1] + Request[P1] = (1 0 0 0) + (0, 4, 2, 0) = (1, 4, 2, 0)
Need[P1] = Need[P1] – Request[P1] = (0, 7, 5, 0) - (0, 4, 2, 0) = (0, 3, 3, 0)

We arrive at the following new system state:


Allocation Max Available
A B C D A B C D A B C D
P0 0 0 1 2 0 0 1 2 1 1 0 0
P1 1 4 2 0 1 7 5 0
P2 1 3 5 4 2 3 5 6
P3 0 6 3 2 0 6 5 2
P4 0 0 1 4 0 6 5 6

The content of the matrix Need is given by Need = Max - Allocation


So, the content of Need Matrix is:
Need
A B C D
P0 0 0 0 0
P1 0 3 3 0
P2 1 0 0 2
P3 0 0 2 0
P4 0 6 4 2


To determine whether this new system state is safe, we again execute the Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = (1, 1, 0, 0)

Process P0 P1 P2 P3 P4

Finish False False False False False

Step 2: For i=0


Need[P0] <= work ie: (0 0 0 0) <= (1 1 0 0) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 1 0 0) + (0 0 1 2) = (1 1 1 2)
Set Finish[P0] = True
Step 2: For i=1
Need[P1] <= work ie: (0 3 3 0) <= (1 1 1 2) ie: False, So P1 must wait
Step 2: For i=2
Need[P2] <= work ie: (1 0 0 2) <= (1 1 1 2) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 1 1 2) + (1 3 5 4) = (2 4 6 6)
Set Finish[P2] = True

Step 2: For i = 3
Need[P3] <= work ie: (0 0 2 0) <= (2 4 6 6) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 4 6 6) + (0 6 3 2) = (2, 10, 9, 8)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 6 4 2) <= (2, 10, 9, 8) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (2, 10, 9, 8) + (0 0 1 4) = (2, 10, 10, 12)
Set Finish[P4] = True

Step 2: For i=1


Need[P1] <= work ie: (0 3 3 0) <= (2, 10, 10, 12) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (2, 10, 10, 12) + (1 4 2 0) = (3, 14, 12, 12)
Set Finish[P1] = True

Step 4: Finish[Pi] = True for 0 <= i<= 4


Hence, the system is currently in a safe state. The safe sequence is <P0, P2, P3, P4, P1>.
Conclusion: Since the system is in a safe state, the request can be granted.


DEADLOCK DETECTION
If a system uses neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a
deadlock may occur. In this environment, the system must provide:
1. An algorithm to examine the system-state to determine whether a deadlock has occurred.
2. An algorithm to recover from the deadlock.
SINGLE INSTANCE OF EACH RESOURCE TYPE
If all the resources have only a single instance, then deadlock detection-algorithm can be defined
using a wait-for-graph. The wait-for-graph is applicable to only a single instance of a resource
type.
A wait-for-graph (WFG) is a variation of the resource-allocation-graph. The wait-for-graph can
be obtained from the resource-allocation-graph by removing the resource nodes and collapsing
the appropriate edges.
An edge from Pi to Pj implies that process Pi is waiting for process Pj to release a resource that Pi
needs. An edge Pi → Pj exists if and only if the corresponding graph contains two edges Pi → Rq
and Rq → Pj.
For example:
For the following resource-allocation-graph, write the wait-for-graph.
Consider the resource-allocation-graph shown in the figure below.

The corresponding wait-for-graph is shown in below Figure:


A deadlock exists in the system if and only if the wait-for-graph contains a cycle. To detect
deadlocks, the system needs to 1) Maintain the wait-for-graph and 2) Periodically execute an
algorithm that searches for a cycle in the graph.
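The cycle search in the wait-for-graph can be sketched with a depth-first search (a sketch; the graph is represented as an adjacency dictionary mapping each process to the processes it waits for):

```python
def has_deadlock(wfg):
    """Return True iff the wait-for-graph contains a cycle (i.e. a deadlock exists)."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current DFS path / finished
    color = {p: WHITE for p in wfg}

    def visit(p):
        color[p] = GREY
        for q in wfg.get(p, []):
            if color.get(q, WHITE) == GREY:          # back edge => cycle => deadlock
                return True
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in list(wfg))
```

For instance, the graph {P1 → P2, P2 → P3, P3 → P1} is deadlocked, while the same graph without the edge P3 → P1 is not.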
SEVERAL INSTANCES OF A RESOURCE TYPE
The wait-for-graph is applicable only when each resource type has a single instance; it cannot be
used when a resource type has multiple instances. The following detection-algorithm can be used
for multiple instances of a resource type.
Assumptions:
Let ‘n’ be the number of processes in the system and ‘m’ be the number of resource types.
Following data structures are used to implement this algorithm.
Available [m]
This vector indicates the no. of available resources of each type.
If Available[j] = k, then k instances of resource type Rj are available.
Allocation [n][m]
This matrix indicates no. of resources currently allocated to each process.
If Allocation[i,j]=k, then Pi is currently allocated k instances of Rj.
Request [n][m]
This matrix indicates the current request of each process.
If Request [i, j] = k, then process Pi is requesting k more instances of resource type Rj.


DETECTION-ALGORITHM USAGE
The detection-algorithm must be executed based on following factors:
1. The frequency of occurrence of a deadlock.
2. The no. of processes affected by the deadlock.
If deadlocks occur frequently, then the detection-algorithm should be executed frequently.
Resources allocated to deadlocked-processes will be idle until the deadlock is broken.
Problem:
Deadlock occurs only when some processes make a request that cannot be granted immediately.
Solution 1:
The deadlock-algorithm must be executed whenever a request for allocation cannot be granted
immediately. In this case, we can identify set of deadlocked-processes and specific process
causing the deadlock.
Solution 2:
The deadlock-algorithm can be executed at periodic intervals, for example once per hour, or
whenever CPU utilization drops below a certain threshold.
RECOVERY FROM DEADLOCK
********Explain different methods to recover from deadlock

The 3 different deadlock recovery methods are:


1. Inform the system-operator, who deals with the deadlocked processes manually.
2. Process Termination
3. Resource Preemption
Process Termination
Two methods to remove deadlocks:
i. Terminate all deadlocked-processes. This method will definitely break the deadlock-cycle.
However, this method incurs great expense. This is because a) Deadlocked-processes
might have computed for a long time. b) Results of these partial computations must be
discarded. c) Probably, the results must be re-computed later.
ii. Terminate one process at a time until the deadlock-cycle is eliminated: This method incurs
large overhead. This is because after each process is aborted, deadlock-algorithm must be
executed to determine if any other process is still deadlocked.
For process termination, following factors need to be considered:
1. Priority of the process
2. How long process has computed, and how much longer to completion


3. Resources the process has used


4. Resources process needs to complete
5. How many processes will need to be terminated
6. Is process interactive or batch?
Resource Preemption
Some resources are taken from one or more deadlocked-processes. These resources are given to
other processes until the deadlock-cycle is broken. Three issues need to be considered:
1. Selecting a victim: Which resources/processes are to be pre-empted (or blocked)? The
order of pre-emption must be determined to minimize cost. The cost factors include a) the
time taken by the deadlocked-process for computation and b) the number of resources used
by the deadlocked-processes.
2. Rollback: If a resource is taken from a process, the process cannot continue its normal
execution. In this case, the process must be rolled-back to break the deadlock. This method
requires the system to keep more information about the state of all running processes
3. Starvation: In a system where victim-selection is based on cost-factors, the same process
may always be picked as a victim. As a result, this process never completes its designated
task. The solution is to ensure that a process is picked as a victim only a (small) finite
number of times.


MEMORY MANAGEMENT
Main Memory
Basic Hardware structure of Main memory: Program must be brought (from disk) into main
memory and placed within a process for it to be run. Main-memory and registers are the only
storage that a CPU can access directly.
 Register access in one CPU clock.
 Main-memory can take many cycles.
 Cache sits between main-memory and CPU registers.
 Protection of memory required to ensure correct operation.
A pair of base- and limit-registers used to define the logical (virtual) address as shown in
below figures.

A base and a limit-register define a logical-address space
Hardware address protection with base and limit-registers

ADDRESS BINDING
Explain the multi-step processing of a user program with a neat block diagram.

Address binding of instructions to memory-addresses can happen at 3 different stages as shown


in below figure.
Compile Time: If the memory-location is known in advance, then absolute code can be generated.
If the starting location changes, then the code must be recompiled.


Load Time: Must generate re-locatable code if memory-location is not known at compile time.
Execution Time: Binding delayed until run-time if the process can be moved during its execution
from one memory-segment to another. So it needs hardware support for address maps (e.g. base
and limit-registers).

Multistep processing of a user-program


LOGICAL VERSUS PHYSICAL ADDRESS SPACE


Logical-address is generated by the CPU (also referred to as virtual-address). Physical-address is
the address seen by the memory-unit. The compile-time and load-time address-binding methods
generate identical logical and physical addresses. The execution-time address binding scheme
results in differing logical and physical addresses.
Differentiate between Logical address and Physical address

1. Logical address: an address generated by the CPU. Physical address: an address seen by the
memory unit.
2. The set of all logical addresses generated by the CPU for a program is called the logical
address space. The set of all physical addresses mapped to the corresponding logical
addresses is called the physical address space.
3. The logical address is generated by the CPU. The physical address is computed by the
Memory Management Unit (MMU).
4. The user can view the logical address of a program. The user can never view the physical
address of a program.

MMU (Memory-Management Unit)


Hardware device that maps virtual-address to physical-address is MMU. The value in the
relocation-register is added to every address generated by a user-process at the time it is sent to
memory. The user-program deals with logical-addresses; it never sees the real physical-addresses.

Dynamic relocation using a relocation-register


DYNAMIC LOADING & LINKING


***Write a short note on dynamic loading and Linking
Dynamic loading can be used to obtain better memory-space utilization. A routine is not loaded
until it is called. This works as follows:
1. Initially, all routines are kept on disk in a re-locatable-load format.
2. Firstly, the main-program is loaded into memory and is executed.
3. When a main-program calls the routine, the main-program first checks to see whether the
routine has been loaded.
4. If routine has been not yet loaded, the loader is called to load desired routine into memory.
5. Finally, control is passed to the newly loaded-routine.
Advantages:
1. An unused routine is never loaded.
2. Useful when large amounts of code are needed to handle infrequently occurring cases.
3. Although the total program-size may be large, the portion that is used (and hence loaded)
may be much smaller.
4. Does not require special support from the OS.
Dynamic Linking: Linking is postponed until execution-time. This feature is usually used with
system libraries, such as language subroutine libraries. A stub is included in the image for each
library-routine reference. The stub is a small piece of code used to locate the appropriate memory-
resident library-routine. When the stub is executed, it checks to see whether the needed routine is
already in memory. If not, the program loads the routine into memory. Stub replaces itself with
the address of the routine, and executes the routine. Thus, the next time that particular code-
segment is reached, the library-routine is executed directly, incurring no cost for dynamic-linking.
All processes that use a language library execute only one copy of the library code.
SHARED LIBRARIES
A library may be replaced by a new version, and all programs that reference the library
will automatically use the new one. Version information is included in both the program and
the library so that programs won't accidentally execute incompatible versions.
SWAPPING
A process must be in memory to be executed. A process can be swapped temporarily out-of-
memory to a backing-store (secondary device) and then brought into memory for continued
execution.
Backing-store is a fast disk which is large enough to accommodate copies of all memory-
images for all users. Roll out/Roll in is a swapping variant used for priority-based
scheduling algorithms.
 Lower-priority process is swapped out so that higher-priority process can be
loaded and executed.
 Once the higher-priority process finishes, the lower-priority process can be
swapped in and continued
Swapping depends upon address-binding:
1) If binding is done at load-time, then process cannot be easily moved to a different
location.
2) If binding is done at execution-time, then a process can be swapped into a
different memory- space, because the physical-addresses are computed during
execution-time.

Swapping of two processes using a disk as a backing-store

Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the
amount of memory swapped.
Disadvantages: Context-switch time is fairly high. If we want to swap a process, we must be sure
that it is completely idle. Two solutions: Never swap a process with pending I/O operation.
Execute I/O operations only into OS buffers.
CONTIGUOUS MEMORY ALLOCATION
Memory is usually divided into 2 partitions:
1. One for the resident OS.
2. One for the user-processes.
Each process is contained in a single contiguous section of memory.


Memory Mapping & Protection


Memory-protection means protecting OS from user-process and protecting user-processes from
one another. Memory-protection is done using
 Relocation-register: contains the value of the smallest physical-address.
 Limit-register: contains the range of logical-addresses.
Each logical-address must be less than the limit-register. The MMU maps the logical-address
dynamically by adding the value in the relocation-register. This mapped-address is sent to
memory as shown in below figure. When the CPU scheduler selects a process for execution,
the dispatcher loads the relocation and limit-registers with the correct values. Because every
address generated by the CPU is checked against these registers, we can protect the OS from the
running-process. The relocation-register scheme provides an effective way to allow the OS size to
change dynamically.
Transient OS code: Code that comes & goes as needed to save memory-space and overhead for
unnecessary swapping.

Hardware support for relocation and limit-registers
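The check performed by this hardware can be sketched in a few lines (the register values below are illustrative assumptions, not values from the text):

```python
def mmu_map(logical, relocation=14000, limit=500):
    """Relocation/limit scheme: every CPU-generated logical address is checked
    against the limit-register, then relocated by the base (relocation) register."""
    if logical >= limit:
        raise MemoryError("trap to OS: addressing error beyond limit register")
    return relocation + logical
```

With these assumed register values, logical address 100 maps to physical address 14100, while logical address 600 causes a trap.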


MEMORY ALLOCATION
Two types of memory partitioning are:
1) Fixed-sized partitioning and
2) Variable-sized partitioning
Fixed-sized Partitioning (Static partition) The memory is divided into fixed-sized partitions
before the process enters main memory. The size of each partition may or may not be same. Each
partition may contain exactly one process. The degree of multiprogramming is bound by the
number of partitions (Limitation on number of processes). When a partition is free, a process is
selected from the input queue and loaded into the free partition. When the process terminates, the
partition becomes available for another process.
Variable-sized Partitioning ( Dynamic partition): Partitions are made as the process enters into
main memory. The OS keeps a table indicating which parts of memory are available and which
parts are occupied. A hole is a block of available memory. Normally, memory contains a set of
holes of various sizes. Initially, all memory is available for user-processes and considered one
large hole. When a process arrives, it is allocated memory from a hole large enough to hold it. If
the chosen hole is too large, only as much memory as is needed is allocated, and the remaining
memory is kept available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes.

1. First Fit
2. Best Fit
3. Worst Fit
Difference between First fit and Best fit algorithms

S. No. First Fit Algorithm Best Fit Algorithm


1 Allocate the first hole in main memory that is Allocate the smallest hole in main
big enough to fit the requested page memory that is big enough to fit the
requested page
2 Searching time for free holes in main Searching time for free holes in main
memory is less memory is more
3 It is faster in operation It is slower in operation
4 Internal fragmentation is more. Internal fragmentation is less


First Fit
 Allocate the first hole that is big enough to fit the requested page.
 Searching can start either at the beginning of the set of holes or at the location where the
previous first-fit search ended.
Best Fit
 Allocate the smallest hole that is big enough to fit the requested page
 We must search the entire list, unless the list is ordered by size.
 This strategy produces the smallest leftover hole. Internal fragmentation is less.
Worst Fit
 Allocate the largest hole and fit the requested page.
 Again, we must search the entire list, unless it is sorted by size.
 This strategy produces the largest leftover hole. Internal fragmentation is more.
First-fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
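The three strategies can be sketched as a single hole-selection routine (a sketch; the free holes are modeled as a simple list of free-partition sizes):

```python
def pick_hole(holes, size, strategy):
    """Return the index of the hole chosen for a request of `size`,
    or None if no hole is big enough (the request must wait)."""
    fitting = [i for i, h in enumerate(holes) if h >= size]
    if not fitting:
        return None
    if strategy == "first":
        return fitting[0]                             # first hole big enough
    if strategy == "best":
        return min(fitting, key=lambda i: holes[i])   # smallest hole big enough
    if strategy == "worst":
        return max(fitting, key=lambda i: holes[i])   # largest hole
```

For holes 100K, 500K, 200K, 300K, 600K and a 212K request, first fit picks the 500K hole, best fit the 300K hole, and worst fit the 600K hole.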
FRAGMENTATION
What is fragmentation?
As processes are loaded into and removed from main memory, the free memory space is broken into
little pieces. Over time, some processes cannot be allocated memory because the free blocks are
too small, and these memory blocks (partitions) remain unused. This problem is called fragmentation.
Two types of memory fragmentation:
1) Internal fragmentation and
2) External fragmentation
Internal Fragmentation
The general approach is to break the physical-memory into fixed-sized blocks and allocate
memory in units based on block size or process size as shown below:


The allocated(partitioned) memory to a process may be slightly larger than the requested-memory.
The difference between requested-memory and allocated-memory is called internal fragmentation
i.e. Unused memory that is internal to a partition.
External Fragmentation
External fragmentation occurs when there is enough total memory-space to satisfy a request but
the available spaces are not contiguous (i.e. storage is fragmented into a large number of small
holes). For example, in the figure below we have 3 free holes of sizes 9K, 5K and 6K. Consider
the next arriving process whose size = 20K. A memory partition cannot be allocated for this
process, even though the total available memory is more than the size of the process, because
the available spaces are not contiguous.

Both the first-fit and best-fit strategies for memory-allocation suffer from external fragmentation.
Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks will be
lost to fragmentation. This property is known as the 50-percent rule.
Given the memory partitions of 200K, 700K, 500K, 300K, 100K, 400K, apply first fit, best fit and
worst fit to place processes of 315K, 427K, 250K and 550K.
Solution: First Fit


Best Fit:

Worst Fit

Total internal fragmentation = 385K + 73K +150K = 568K


Available total free memory holes = 200K + 300K + 100K = 600K. It is not possible to allocate
memory for the process whose size = 550K, since no available memory partition has size >= 550K.
The external fragmentation due to non-availability of contiguous memory = 550K, i.e. the size of
process P4.
Therefore the Best fit algorithm makes an efficient use of memory for the given processes.

2. Given the 5 memory partitions 100KB, 500KB, 200KB, 300KB and 600KB, how each of the
First fit, best fit and worst fit algorithms place processes of 212KB, 417KB, 112KB and 426KB
size. Which algorithm makes an efficient use of the memory?

Internal fragmentation = 288K + 88K + 183K = 559K


There is no external fragmentation, because the total available memory (400K) is < P4 process
size (426K).


Internal fragmentation = 83K + 88K + 88K + 174K = 433K


There is no external fragmentation

Internal fragmentation = 83K + 188K + 388K = 659K


There is no external fragmentation, because the total available memory (300K) is < P4 process
size (426K). That means, there is no free memory hole which can big enough to fit the P4 process.
Therefore the Best fit algorithm makes an efficient use of memory for the given processes


PAGING
What is paging?
Paging is a memory management scheme that permits the physical address space of a process to
be noncontiguous.
It avoids external fragmentation. It also solves the considerable problem of fitting memory chunks
of varying sizes onto the backing store.
Paging Hardware
Explain the concept of simple paging hardware.
Divide physical (main) memory into fixed-sized blocks called frames, and break logical memory
into blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available main-memory
frames from the backing-store. The backing-store is divided into fixed-sized blocks that
are of the same size as the memory-frames.
• Paging still suffers from internal fragmentation.

The page-table contains the base-address (frame number on which page is loaded) of each page in
physical-memory. Address generated by CPU is the logical address divided into 2 parts:
1. Page-number(p) is used as an index to the page-table and
2. Offset(d) is combined with the base-address (frame number) to define the physical-
address. This physical-address is sent to the memory-unit to fetch the required
data/instruction.


Paging model of logical and physical-memory

With supporting paging hardware, explain in detail concept of paging with an example for
32-byte memory with 4 byte pages with a process being 16 bytes. How many bits are
reserved for page number and page offset in the logical address. Suppose the logical address
is 5, calculate the corresponding physical address, after populating memory and page table.
For explanation refer above paging hardware topic
Example:
Given that page size = 4 byte. Therefore frame size = 4 byte.
Process size = 16 byte
Number of pages = 16/4 = 4 Pages
Main memory size = 32 byte
Number of frames in main memory = 32/4 =8
When CPU generates logical address it contains two parts:
Page Number and Offset field.
Number of pages = 4, so we need 2 bits to represent the 4 different pages: 0, 1, 2, 3
Here the number of bits required for the offset field = number of bits needed to address within a page of size 4 = 2 bits
So the number of bits reserved for page no. = 2 bits and number of bits for offset = 2 bits.
Therefore CPU generates logical address of size 4 bits.
Suppose CPU generates a logical address = 5
Logical address (0101)
Page number (2) Offset (2)
01 01


The physical address corresponding to this logical address (5) is obtained by referring the
following memory and page table information.
In the above logical address page no = 01 = Page 1
Now the MMU refers the Page Table and finds the frame number corresponding to page 1. ie:
frame 6.
Since main memory contains 8 frames, we need 3 bits to represent 8 different frames, f0, f1….f7,
in physical address. Offset field = 2 bits
Physical address
Frame number (3) Offset (2)
110 01
That is the logical address 5 (page 1, offset 1) maps to physical address = (6 x 4) +1 = 25
Therefore the physical address corresponding to the logical address 5 = 11001 = 25
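The translation in this example can be checked with a few lines (only the page 1 → frame 6 entry is given above; the other page-table entries are assumed for illustration):

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame (only 1 -> 6 is from the text)

def page_translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split logical address into p and d
    frame = page_table[page]                    # page-table lookup
    return frame * PAGE_SIZE + offset           # physical address = frame * size + d
```

For logical address 5 this gives (6 × 4) + 1 = 25, matching the walkthrough above.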

PROBLEMS WITH SIMPLE PAGING SCHEME


Mention the problems with simple paging

1. The problem with simple paging is that an extra memory reference is required to access the
page table to get the frame number corresponding to the page number.
2. Thus two memory accesses are needed to access a byte in main memory: one for the page-
table entry and one for the byte itself.
3. In the simple paging scheme, memory access is therefore slowed by a factor of 2.


TRANSLATION LOOKASIDE BUFFER


The problems with simple paging scheme can be solved using translation look-aside buffer (TLB)
paging scheme.
What is TLB? With neat diagram explain the concept of TLB.
A translation-Look-aside buffer (TLB) is an associative high speed memory cache that is used to
reduce the time taken to access a user memory location. It is a part of MMU. The TLB contains
only a few of the page-table entries.
Working:

A TLB entry consists of two parts: a) a page number and b) a frame number. When a logical-address
is generated by the CPU, its page-number is presented to the TLB.
 If the page-number is found(TLB hit), its frame-number is immediately available and used
to access memory.
 If page-number is not found in TLB (TLB miss), a memory-reference to page table must
be made. The obtained frame-number can be used to access memory. In addition, we add
this page-number and frame-number to the TLB, so that they will be found quickly on the
next reference.
If the TLB is already full of entries, the OS must select one for replacement. Percentage of times
that a particular page-number is found in the TLB is called hit ratio.
Advantage: Search operation is fast.
Disadvantage: Hardware is expensive.


In the paging scheme with TLB, it takes 20ns to search the TLB and 100ns to access
memory. Find the effective access time and percentage slowdown in memory access time if
i. Hit ratio is 80%

ii. Hit ratio is 98%

Solution:
In the paging scheme with TLB, if the page number is found in the TLB, it takes 20ns to search the
TLB and 100ns to access memory, so a mapped memory access takes 120ns when the page number is
in the TLB.
If we fail to find the page number in the TLB (20ns), we must first access the page table in
memory to get the corresponding frame number (100ns), and then access the desired byte in
memory (100ns).
Thus a total of 220ns is required to access the byte when the page number is not in the TLB
(TLB search + page-table access in main memory + desired-byte access in main memory).
i) Effective memory access time = 0.80 x 120 + 0.20 x 220 = 140ns
Percentage slowdown in memory access time = (140 - 100)/100 = 40%

ii) Effective memory access time = 0.98 x 120 + 0.02 x 220 = 122ns
Percentage slowdown in memory access time = (122 - 100)/100 = 22%
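The computation above generalizes to any hit ratio (a sketch using the timings from this problem as defaults):

```python
def effective_access_time(hit_ratio, tlb=20, mem=100):
    """Effective memory access time with a TLB, in nanoseconds."""
    hit_time = tlb + mem        # TLB hit: TLB search + one memory access
    miss_time = tlb + 2 * mem   # TLB miss: TLB search + page-table access + byte access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time
```

With an 80% hit ratio this yields 140ns, and with a 98% hit ratio 122ns, as computed above.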

MEMORY PROTECTION
Memory-protection is achieved by protection-bits for each frame. The protection-bits are kept in
the page-table. One protection-bit can define a page to be read-write or read-only. Every reference
to memory goes through the page-table to find the correct frame-number. Firstly, the physical-
address is computed. At the same time, the protection-bit is checked to verify that no writes are
being made to a read-only page. An attempt to write to a read-only page causes a hardware-trap to
the OS (or memory-protection violation).

Valid Invalid Bit


This bit is attached to each entry in the page-table as shown below. Valid bit: The page is in the
process’ logical-address space. Invalid bit: The page is not in the process’ logical-address space.
Illegal addresses are trapped by use of valid-invalid bit. The OS sets this bit for each page to allow
or disallow access to the page


Valid(V) or Invalid (I) bit in page table


SHARED PAGES
Advantage of paging: it is possible to share common code. Re-entrant code is non-self-modifying
code: it never changes during execution. Two or more processes can execute the same code at the
same time. Each process has its own copy of registers and data-storage to hold the data for the
process's execution. The data for 2 different processes will be different. Only one copy of the
editor need be kept in physical-memory as shown in below figure. Each user's page-table maps
onto the same physical copy of the editor, but data pages are mapped onto different frames.
Disadvantage: 1) Systems that use inverted page-tables have difficulty implementing shared-
memory

Sharing of code in paging environment


STRUCTURE OF THE PAGE TABLE


The 3 types of page table structures are:
1. Hierarchical Paging
2. Hashed Page-tables
3. Inverted Page-tables
HIERARCHICAL PAGING
***Explain Hierarchical Paging structure with example
Most computers support a large logical-address space (2^32 to 2^64). In these systems, the page-table
itself becomes excessively large, so we divide the page-table into smaller pieces.
Two Level Paging Algorithm: The page-table itself is also paged as shown below:

Two level page table scheme

This is also known as forward-mapped page-table because address translation works from the
outer page-table inwards.

Address translation for a two-level 32-bit paging architecture

Consider the system with a 32-bit logical-address space and a page-size of 4 KB. A logical-
address is divided into 20-bit page-number and 12-bit page-offset. Since the page-table is paged,


the page-number is further divided into a 10-bit page-number and a 10-bit page-offset. Thus, a
logical-address is as follows:
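The 10 + 10 + 12 split of a 32-bit address can be sketched with bit operations:

```python
def split_two_level(addr):
    """Split a 32-bit logical address into (p1, p2, d): a 10-bit outer page-table
    index, a 10-bit inner page-table index, and a 12-bit offset (4 KB pages)."""
    d = addr & 0xFFF             # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: inner page-table index
    p1 = (addr >> 22) & 0x3FF    # top 10 bits: outer page-table index
    return p1, p2, d
```

Translation then proceeds from the outer table inwards: p1 selects an inner page table, p2 selects a frame within it, and d is the offset into that frame.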

HASHED PAGE TABLES


This approach is used for handling address spaces larger than 32 bits. The hash-value is the virtual
page-number. Each entry in the hash-table contains a linked-list of elements that hash to the same
location (to handle collisions). Each element consists of 3 fields: Virtual page-number, Value of
the mapped page-frame and Pointer to the next element in the linked-list.
The algorithm works as follows

Hashed page-table

The virtual page-number is hashed into the hash-table. The virtual page-number is compared with
the first element in the linked-list. If there is a match, the corresponding page-frame (field 2) is
used to form the desired physical-address. If there is no match, subsequent entries in the linked-
list are searched for a matching virtual page-number.
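A hashed page-table with chaining can be sketched using a list of buckets (the table size and mappings below are illustrative assumptions):

```python
TABLE_SIZE = 8
buckets = [[] for _ in range(TABLE_SIZE)]   # each bucket: chain of (vpn, frame) pairs

def insert_mapping(vpn, frame):
    """Record that virtual page `vpn` is loaded in `frame`."""
    buckets[vpn % TABLE_SIZE].append((vpn, frame))

def lookup(vpn):
    """Return the frame for `vpn`, or None if there is no mapping."""
    for v, frame in buckets[vpn % TABLE_SIZE]:   # walk the collision chain
        if v == vpn:
            return frame
    return None
```

Virtual pages 3 and 11 hash to the same bucket (3 mod 8 = 11 mod 8 = 3), so they share a chain but still resolve to different frames.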
INVERTED PAGE TABLES
It has one entry for each real page of memory. Each entry consists of virtual-address of the page
stored in that real memory-location and information about the process that owns the page.
Each virtual-address consists of a triplet:
<process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number>
The algorithm works as follows:


Inverted page table

1. When a memory-reference occurs, part of the virtual-address, consisting of <process-id,


page-number>, is presented to the memory subsystem.
2. The inverted page-table is then searched for a match.
3. If a match is found at entry i, then the physical-address <i, offset> is generated.
4. If no match is found, then an illegal address access has been attempted.
Advantage:
1) Decreases memory needed to store each page-table
Disadvantages:
1. Increases amount of time needed to search table when a page reference occurs.
2. Difficulty implementing shared-memory.
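The inverted-table lookup above can be sketched as a linear search over one entry per physical frame (the table contents are illustrative assumptions):

```python
PAGE_SIZE = 4096
# One entry per physical frame: (process-id, page-number)
inverted_table = [("P1", 0), ("P2", 3), ("P1", 2)]

def ipt_translate(pid, page, offset):
    for frame, entry in enumerate(inverted_table):   # search the whole table
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset        # physical address <frame, offset>
    raise MemoryError("illegal address: no matching entry")
```

The linear search illustrates why this scheme saves table space but increases lookup time.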

SEGMENTATION
Explain segmentation with an example.
Segmentation is a memory-management scheme that supports the user-view of memory. A logical-
address space is a collection of segments of various sizes; each segment has a name and a length.
Addresses specify both the segment-name and the offset within the segment.
Normally, the user-program is compiled, and the compiler automatically constructs segments
reflecting the input program.
For example: The code, Global variables, The heap, from which memory is allocated. The stacks
used by each thread.


Programmers view of a program

Hardware Support
Segment-table maps two-dimensional user-defined addresses into one-dimensional physical-
addresses. In the segment-table, each entry has following 2 fields:
Segment-base contains starting physical-address where the segment resides in memory.
Segment-limit specifies the length of the segment

Segmentation hardware
A logical-address consists of 2 parts:
Segment-number(s) is used as an index to the segment-table. Offset(d) must be between 0 and the
segment-limit.
If offset is not between 0 & segment-limit, then we trap to the OS (logical-addressing attempt
beyond end of segment).
If offset is legal, then it is added to the segment-base to produce the physical-memory address


Consider the following segment table:


Segment Base Length(Limit)
0 219 600
1 2300 14
2 1327 580
3 1952 96

What are the physical addresses for the following logical addresses:
i. (0, 430) ii. (1, 10) iii. (2, 500) iv) 3, 400

Solution
i. (0, 430)
Segment = 0 and offset = 430
Therefore physical address = 219 + 430 = 649
ii. (1, 10)
Segment = 1 and offset = 10
Therefore physical address = 2300 + 10 = 2310
iii. (2, 500)
Segment = 2 and offset = 500
Therefore physical address = 1327 + 500 = 1827
iv. (3, 400)
Segment = 3 and offset = 400, which exceeds the limit value 96. So a reference to byte
400 of segment 3 would result in a trap to the OS, as this segment is only 96 bytes long,
thereby generating a segment-fault error.
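The translations in this problem can be checked mechanically (segment table as given above):

```python
segment_table = [(219, 600), (2300, 14), (1327, 580), (1952, 96)]  # (base, limit)

def seg_translate(seg, offset):
    """Segmentation hardware: trap if offset exceeds the limit, else add the base."""
    base, limit = segment_table[seg]
    if offset >= limit:
        raise MemoryError("segment fault: offset beyond segment limit")
    return base + offset
```

For example, seg_translate(0, 430) yields 649 and seg_translate(3, 400) traps, matching the solution above.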
Consider the following segment table:

Segment Base Length(Limit)


0 330 124
1 876 211
2 111 99
3 498 302

What are the physical addresses for the following logical addresses:
i. (0, 9, 9) ii. (2, 78) iii. (1, 265) iv) (3, 222) v) (0, 111)

Solution
i. (0, 9, 9): This logical address is invalid, since a logical address contains only two
parts: a segment number and an offset.
ii. (2, 78)
Segment = 2 and offset = 78
Therefore physical address = 111 + 78 = 189


iii. (1, 265)


Segment = 1 and offset = 265
The offset 265 exceeds the limit value 211. So a reference to byte 265 of segment 1
would result in a trap to the OS, as segment 1 is only 211 bytes long, thereby
generating a segment-fault error.
iv. (3, 222)
Segment = 3 and offset = 222
Therefore physical address = 498 + 222 = 720
v. (0, 111)
Segment = 0 and offset = 111
Therefore physical address = 330 + 111 = 441

Differentiate between segmentation and Paging

1. A page is of fixed size. A segment is of variable size.
2. Paging may lead to internal fragmentation. Segmentation may lead to external fragmentation.
3. In paging, the CPU divides the user-specified address into a page number and an offset. In
segmentation, the user specifies each address by two quantities: a segment number and an
offset (segment limit).
4. In paging, the hardware decides the page size. In segmentation, the segment size is
specified by the user.
5. Paging involves a page table that contains the base address (frame number) of each page.
Segmentation involves a segment table that contains the segment base and the segment
length (limit).
