OS Deadlocks Explained
DEADLOCK
Define deadlock.
Deadlock is a situation in which a set of processes is blocked because each process is holding a
resource and waiting for another resource held by some other process in the set.
SYSTEM MODEL
A system consists of a finite number of resources (for example: memory, printers, CPUs). These
resources are distributed among a number of processes. A process must request a resource before
using it and release the resource after using it. A process may request as many resources as it
needs to carry out its task, but the total number of resources requested must not exceed the total
number available in the system.
In normal operation, a process performs the following tasks in sequence:
1. Request
If the request cannot be granted immediately (for example, the resource is being used by
another process), then the requesting process must wait until it can acquire the resource.
For example: open( ), malloc( ), new( ), and request( )
2. Use
The process uses the resource. For example: prints to the printer or reads from the file.
3. Release
The process releases the resource. So that, the resource becomes available for other processes.
For example: close( ), free( ), delete( ), and release( ).
A set of processes is deadlocked when every process in the set is waiting for a resource
that is currently allocated to another process in the set. Deadlock may involve different
types of resources.
As shown in the figure below, both processes P1 and P2 need resources to continue execution.
P1 holds resource R2 and requires the additional resource R1; P2 holds R1 and requires the
additional resource R2. Thus, neither process can continue.
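This situation can be reproduced as a small sketch: Python threads stand in for the two processes and Lock objects for R1 and R2 (all names are illustrative); acquire timeouts and barriers are added only so the demonstration terminates instead of hanging forever.

```python
import threading

r1, r2 = threading.Lock(), threading.Lock()
both_held = threading.Barrier(2)      # wait until each thread holds its first lock
attempts_done = threading.Barrier(2)  # keep both locks held during both attempts
results = {}

def p1():
    with r2:                                            # P1 holds R2...
        both_held.wait()
        results["p1_got_r1"] = r1.acquire(timeout=0.2)  # ...and waits for R1
        attempts_done.wait()

def p2():
    with r1:                                            # P2 holds R1...
        both_held.wait()
        results["p2_got_r2"] = r2.acquire(timeout=0.2)  # ...and waits for R2
        attempts_done.wait()

t1, t2 = threading.Thread(target=p1), threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # {'p1_got_r1': False, 'p2_got_r2': False}
```

Neither acquisition can succeed, because each thread keeps holding its first lock until the other's attempt has finished: a circular wait.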
OPERATING SYSTEMS MODULE 3
Multithreaded programs are good candidates for deadlock because multiple threads compete for
shared resources.
DEADLOCK CHARACTERIZATION
In a deadlock, processes never finish executing, and system resources are tied up, preventing other
jobs from starting.
NECESSARY CONDITIONS
A deadlock can arise only if the following four conditions hold simultaneously:
1. Mutual exclusion: at least one resource must be held in a non-sharable mode.
2. Hold and wait: a process holds at least one resource while waiting for resources held by other processes.
3. No preemption: a resource can be released only voluntarily by the process holding it.
4. Circular wait: a set of waiting processes {P0, P1, ..., Pn} exists such that P0 waits for a resource held by P1, P1 waits for P2, ..., and Pn waits for P0.
RESOURCE-ALLOCATION-GRAPH (RAG)
The resource-allocation-graph (RAG) is a directed graph that can be used to describe the
deadlock situation.
RAG consists of a
i. Set of vertices (V) and
ii. Set of edges (E).
V is divided into two types of nodes
1) Process vertices: P = {P1, P2, ..., Pn}, the set of all active processes in the system.
2) Resource vertices: R = {R1, R2, ..., Rm}, the set of all resource types in the system.
E is divided into two types of edges:
1) Request Edge
A directed-edge Pi → Rj is called a request edge.
Pi → Rj indicates that process Pi has requested a resource Rj.
2) Assignment Edge
A directed-edge Rj → Pi is called an assignment edge.
Rj → Pi indicates that a resource Rj has been allocated to process Pi.
NOTE:
Suppose that process Pi requests resource Rj. The request can be granted only if converting the
request edge Pi → Rj to an assignment edge Rj → Pi does not form a cycle in the
resource-allocation graph.
Pictorially, we represent each process Pi as a circle. We represent each resource-type Rj as a
rectangle.
Figures: 1) a resource-allocation graph, 2) a RAG with a cycle and a deadlock, 3) a RAG with a cycle but no deadlock.
CONCLUSION:
1) If the graph contains no cycle, then the system is not deadlocked.
2) If the graph contains a cycle and there is only one instance per resource type, then a
deadlock has occurred.
3) If the graph contains a cycle and there are several instances per resource type, then a
deadlock is possible but not necessarily present.
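The cycle check above can be sketched with a depth-first search over the RAG; the node names and graph layout below are illustrative.

```python
# Detecting a cycle in a resource-allocation graph (RAG) with DFS.
# Nodes are process/resource names; edges are request (P -> R) and
# assignment (R -> P) edges, stored in one adjacency dict.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph.get(n, []):
            if color.get(m, WHITE) == GRAY:            # back edge -> cycle
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R2 and requests R1; P2 holds R1 and requests R2 -> cycle.
deadlocked = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"], "R2": ["P1"]}
print(has_cycle(deadlocked))                # True
print(has_cycle({"P1": ["R1"], "R1": []})) # False
```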
DEADLOCK-PREVENTION
Deadlock can be prevented by ensuring that at least one of the four necessary conditions cannot hold.
Mutual Exclusion:
This condition must hold for non-sharable resources. For example: a printer cannot be
simultaneously shared by several processes.
On the other hand, sharable resources do not lead to deadlocks. For example: simultaneous
access can be granted for a read-only file, so a process never waits to access a sharable resource.
In general, we cannot prevent deadlocks by denying the mutual-exclusion
condition, because some resources are intrinsically non-sharable.
Hold and Wait:
To prevent this condition, we must ensure that whenever a process requests a resource, it does
not hold any other resources.
There are several solutions to this problem.
1. Each process must be allocated all of its resources before it begins execution.
2. A process may request a resource only when it has none allocated to it. The process may
request some resources and use them, but before it can request any additional resources,
it must release all the resources it currently holds.
No Preemption
To prevent this condition, resources must be preemptible; that is, it must be possible to take them away from a process.
There are several solutions to this problem.
If a process is holding some resources and requests another resource that cannot be
immediately allocated to it, then all resources currently being held are preempted.
The preempted resources are added to the list of resources for which the process is
waiting.
The process will be restarted only when it regains the old resources and the new
resources that it is requesting.
When a process requests resources, we check whether they are available. If they are, we
allocate them. If not, we check whether they are allocated to some other process that is itself
waiting for additional resources; if so, we preempt the desired resources from that waiting process.
Circular-Wait
To ensure that this condition never holds, impose a total ordering of all resource types
by assigning a number to each resource, and require that each process requests resources in
an increasing order of enumeration.
Alternatively, require that whenever a process requests a resource, it must first have released
any resources with an equal or higher number.
One big challenge in this scheme is determining the relative ordering of the different
resources.
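One common realization of this ordering is lock ordering: every process sorts the resources it needs by a fixed rank before acquiring them. A sketch, with assumed resource names and ranks:

```python
import threading

# Assumed global ordering of resource types (illustrative names/ranks).
RANK = {"disk": 1, "printer": 2, "scanner": 3}
locks = {name: threading.Lock() for name in RANK}

def acquire_in_order(names):
    """Acquire the named locks in ascending rank, preventing circular wait."""
    ordered = sorted(names, key=lambda n: RANK[n])
    for n in ordered:
        locks[n].acquire()
    return ordered          # the order in which the locks were taken

order = acquire_in_order({"printer", "disk"})
print(order)                # ['disk', 'printer'] regardless of request order
for n in order:
    locks[n].release()
```

Because every process takes locks in the same global order, no cycle of waits can form.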
DEADLOCK AVOIDANCE
The general idea behind deadlock avoidance is to grant only those resource requests that cannot
possibly lead to deadlock. This requires additional information about how resources will be requested.
Deadlock-avoidance algorithm
Requires more information about each process, and
Tends to lead to low device utilization.
For example:
1) In simple algorithms, the scheduler only needs to know the maximum number
of each resource that a process might potentially use.
2) In complex algorithms, the scheduler can also take advantage of knowing exactly
which resources will be needed and in what order.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular-wait condition can never exist. The resource-allocation state is defined
by the number of available and allocated resources and the maximum demands of the
processes.
SAFE STATE
A state is safe if the system can allocate all resources requested by all processes without entering
a deadlock state.
A state is safe if there exists a safe sequence of processes <P1, P2, ..., Pn> such that the requests
of each process Pi can be satisfied by the currently available resources plus the resources held
by all Pj, with j < i.
If a safe sequence does not exist, then the system is in an unsafe state, which may lead to
deadlock. All safe states are deadlock free, but not all unsafe states lead to deadlocks.
If resource categories have only single instances of their resources, then deadlock states can be
detected by cycles in the resource-allocation graphs.
In this case, unsafe states can be recognized and avoided by augmenting the resource-allocation
graph with claim edges (denoted by a dashed line).
A claim edge Pi → Rj indicates that process Pi may request resource Rj at some time in the future.
The important steps are as below:
1. When a process Pi requests a resource Rj, the claim edge Pi → Rj is converted to a request
edge.
2. Similarly, when a resource Rj is released by process Pi, the assignment edge Rj → Pi is
reconverted to a claim edge Pi → Rj.
3. The request for Rj by Pi can be granted only if converting the request edge to an assignment
edge does not form a cycle in the resource-allocation graph.
To apply this algorithm, each process Pi must know all its claims before it starts executing.
Conclusion:
If no cycle exists, then the allocation leaves the system in a safe state.
If a cycle is found, the allocation would put the system into an unsafe state and may lead to a deadlock.
For example: Consider a resource allocation graph shown in Figure below
Suppose P2 requests R2. Though R2 is currently free, we cannot allocate it to P2, because this
action would create a cycle in the graph, as shown in the figure below. The cycle indicates that the
system would be in an unsafe state: if P1 then requests R2 and P2 requests R1, a deadlock will occur.
Need [n][m]
This matrix indicates the remaining resource needs of each process.
If Need[i][j] = k, then Pi may need k more instances of resource Rj to complete its task.
So, Need[i][j] = Max[i][j] - Allocation[i][j]
Write and explain the Banker's algorithm (explain both the Safety and the Resource-Request algorithms).
This algorithm is applicable to systems with multiple instances of each resource type.
When a process enters the system, it must declare the maximum number of instances of each
resource that it may need. This number may not exceed the total number of resources in the
system. When a request is made, the system determines whether granting the request would
leave the system in a safe state.
If the system would remain in a safe state, the resources are allocated;
otherwise, the process must wait until some other process releases enough resources.
Assumptions:
Let n = number of processes in the system.
Let m = number of resource types.
Following data structures are used to implement the banker’s algorithm
Available [m], Max [n][m], Allocation [n][m], Need [n][m]
The Banker’s algorithm has two parts:
1) Safety Algorithm
2) Resource-Request Algorithm
Safety Algorithm
Step 1:
Let Work and Finish be two vectors of length m and n respectively.
Initialize:
Work = Available
Finish[i] = false for i=1,2,3,…….n
Step 2:
Find an index i such that both
a) Finish[i] = false
b) Need[i] <= Work
If no such i exists, then go to step 4.
Step 3:
Set:
Work = Work + Allocation[i]
Finish[i] = true
Go to step 2
Step 4:
If Finish[i] = true for all i, then the system is in a safe state.
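The four steps above can be sketched as follows. The Allocation matrix and Available vector are taken from this module's worked example; Max is reconstructed as Need + Allocation from the Need values shown there. Because this sketch always picks the lowest runnable index, the safe sequence it reports may differ from the one traced in the walkthrough; any valid sequence proves safety.

```python
def is_safe(available, max_need, allocation):
    n, m = len(allocation), len(available)
    # Need[i][j] = Max[i][j] - Allocation[i][j]
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, finish, sequence = list(available), [False] * n, []  # step 1
    while len(sequence) < n:
        for i in range(n):                       # step 2: find a runnable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]  # step 3
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, sequence               # step 4: no such i -> unsafe
    return True, sequence                        # all processes can finish

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
safe, seq = is_safe([3, 3, 2], max_need, allocation)
print(safe, seq)   # True [1, 3, 0, 2, 4]
```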
Resource-Request Algorithm
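A sketch of the resource-request step, under the usual formulation: validate the request against Need and Available, pretend to allocate, and grant only if the resulting state passes the safety test. The example values reuse this module's Banker's example; the second call uses the state after P1's request has been granted.

```python
def safe(available, need, allocation):
    """Safety test: can all processes finish from this state?"""
    work, finish = list(available), [False] * len(need)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progress = True
    return all(finish)

def request(i, req, available, need, allocation):
    if any(r > n for r, n in zip(req, need[i])):
        raise ValueError("process exceeded its maximum claim")
    if any(r > a for r, a in zip(req, available)):
        return False                       # resources not available: must wait
    # Pretend to allocate, then run the safety algorithm on the new state.
    avail2 = [a - r for a, r in zip(available, req)]
    alloc2 = [row[:] for row in allocation]
    need2  = [row[:] for row in need]
    alloc2[i] = [a + r for a, r in zip(alloc2[i], req)]
    need2[i]  = [n - r for n, r in zip(need2[i], req)]
    return safe(avail2, need2, alloc2)     # grant only if still safe

allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(request(1, [1, 0, 2], [3, 3, 2], need, allocation))   # True: grant P1

# After granting P1's request, the state becomes:
allocation2 = [[0,1,0], [3,0,2], [3,0,2], [2,1,1], [0,0,2]]
need2       = [[7,4,3], [0,2,0], [6,0,0], [0,1,1], [4,3,1]]
print(request(0, [0, 2, 0], [2, 3, 0], need2, allocation2)) # False: unsafe
```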
An Illustrative Example
Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (3 3 2) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (1 2 2) <= (3 3 2) ie: True, So P1 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P1] = (3 3 2) + (2 0 0) = (5 3 2)
Need
A B C
P0 7 4 3
P1 0 2 0
P2 6 0 0
P3 0 1 1
P4 4 3 1
To determine whether this new system state is safe, we again execute the Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = 2 3 0
Step 2: For i= 0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (2 3 0) ie: false So P0 must wait.
Step 2: For i=1
Finish [P1] = false and Need [P1] <= Work i.e. (0 2 0) <= (2 3 0) ie: True, So P1 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P1] = (2 3 0) + (3 0 2) = (5 3 2)
Step 2: For i= 2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (5 3 2) ie: false So P2 must wait.
Step 2: For i=3
Finish [P3] = false and Need [P3] <= Work i.e. (0 1 1) <= (5 3 2) ie: True, So P3 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P3] = (5 3 2) + (2 1 1) = (7 4 3)
Step 2: For i =0
Finish [P0] = false and Need [P0] <= Work i.e. (7 4 3) <= (7 4 5) ie: True, So P0 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P0] = (7 4 5) + (0 1 0) = (7 5 5)
Step 2: For i =2
Finish [P2] = false and Need [P2] <= Work i.e. (6 0 0) <= (7 5 5) ie: True, So P2 must be kept in
safe sequence.
Step 3: Work = Work + Allocation [P2] = (7 5 5) + (3 0 2) = (10, 5, 7)
Need[P0] <= work ie: (0 0 2) <= (1 0 2) ie: True, So P0 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P0] = (1 0 2) + (0 0 2) = (1 0 4)
Need[P1] <= work ie: (1 0 1) <= (1 0 4) ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (1 0 4) + (1 0 0) = (2 0 4)
Need[P2] <= work ie: (0 0 2) <= (2 0 4) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (2 0 4) + (1 3 5) = (3 3 9)
Need[P3] <= work ie: (2 1 0) <= (3 3 9) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (3 3 9) + (6 3 2) = (9, 6, 11)
Need[P4] <= work ie: (0 1 4) <= (9, 6, 11) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (9, 6, 11) + (1 4 3) = (10, 10, 14)
Need[P2] <= work ie: (0 0 0) <= (1 0 0) ie: True, So P2 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P2] = (1 0 0) + (1 3 7) = (2 3 7)
Need[P3] <= work ie: (2 1 0) <= (2 3 7) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 3 7) + (6 3 2) = (8, 6, 9)
Need[P4] <= work ie: (0 1 4) <= (8, 6, 9) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (8, 6, 9) + (1 4 3) = (9, 10, 12)
Need[P1] <= work ie: (0 7 5 0) <= (2, 14, 12, 12)ie: True, So P1 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P1] = (2, 14, 12, 12) + (1 0 0 0) = (3, 14, 12, 12)
Set Finish[P1] = True
To determine whether this new system state is safe, we again execute the Safety algorithm.
Step 1: Initialization
Work = Available ie: Work = (1, 1, 0, 0)
Step 2: For i = 3
Need[P3] <= work ie: (0 0 2 0) <= (2 4 6 6) ie: True, So P3 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P3] = (2 4 6 6) + (0 6 3 2) = (2, 10, 9, 8)
Set Finish[P3] = True
Step 2: For i=4
Need[P4] <= work ie: (0 6 4 2) <= (2, 10, 9, 8) ie: True, So P4 must be kept in safe sequence.
Step3: Therefore work = work + Allocation [P4] = (2, 10, 9, 8) + (0 0 1 4) = (2, 10, 10, 12)
Set Finish[P4] = True
DEADLOCK DETECTION
If a system uses neither a deadlock-prevention nor a deadlock-avoidance algorithm, then a
deadlock may occur. In this environment, the system must provide:
1. An algorithm to examine the system-state to determine whether a deadlock has occurred.
2. An algorithm to recover from the deadlock.
SINGLE INSTANCE OF EACH RESOURCE TYPE
If all resources have only a single instance, then a deadlock-detection algorithm can be defined
using a wait-for-graph. The wait-for-graph is applicable only when each resource type has a
single instance.
A wait-for-graph (WFG) is a variation of the resource-allocation-graph. The wait-for-graph can
be obtained from the resource-allocation-graph by removing the resource nodes and collapsing
the appropriate edges.
An edge from Pi to Pj implies that process Pi is waiting for process Pj to release a resource that Pi
needs. An edge Pi → Pj exists if and only if the corresponding graph contains two edges Pi → Rq
and Rq → Pj.
For example:
For the following resource-allocation graph, write the wait-for-graph.
Consider the resource-allocation graph shown in the figure below.
A deadlock exists in the system if and only if the wait-for-graph contains a cycle. To detect
deadlocks, the system needs to 1) Maintain the wait-for-graph and 2) Periodically execute an
algorithm that searches for a cycle in the graph.
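Collapsing a single-instance RAG into a wait-for-graph can be sketched as follows (process and resource names are illustrative):

```python
# Build a wait-for graph from a single-instance RAG. An edge Pi -> Pj is
# added whenever Pi requests a resource Rq that is assigned to Pj.
def wait_for_graph(request_edges, assignment_edges):
    # request_edges: {process: [resources it waits for]}
    # assignment_edges: {resource: process currently holding it}
    wfg = {p: [] for p in request_edges}
    for p, resources in request_edges.items():
        for r in resources:
            holder = assignment_edges.get(r)
            if holder is not None:
                wfg[p].append(holder)
    return wfg

# P1 waits for R1 (held by P2); P2 waits for R2 (held by P1): a cycle.
wfg = wait_for_graph({"P1": ["R1"], "P2": ["R2"]},
                     {"R1": "P2", "R2": "P1"})
print(wfg)   # {'P1': ['P2'], 'P2': ['P1']}
```

A cycle search (such as the DFS shown earlier for the RAG) run on `wfg` would then report the deadlock.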
SEVERAL INSTANCES OF A RESOURCE TYPE
The wait-for-graph is applicable only to resource types with a single instance; it does not work
when a resource type has multiple instances. The following detection algorithm can be used
with multiple instances of a resource type.
Assumptions:
Let n be the number of processes in the system.
Let m be the number of resource types.
Following data structures are used to implement this algorithm.
Available [m]
This vector indicates the no. of available resources of each type.
If Available[j] = k, then k instances of resource type Rj are available.
Allocation [n][m]
This matrix indicates no. of resources currently allocated to each process.
If Allocation[i,j]=k, then Pi is currently allocated k instances of Rj.
Request [n][m]
This matrix indicates the current request of each process.
If Request [i, j] = k, then process Pi is requesting k more instances of resource type Rj.
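Using these structures, the detection algorithm works like the safety algorithm, but compares Request (what each process asks for now) rather than Need (its worst-case claim) against Work. A sketch with illustrative values:

```python
def find_deadlocked(available, allocation, request):
    """Return the indices of processes that cannot finish (the deadlocked set)."""
    n, m = len(allocation), len(available)
    work = list(available)
    # A process holding no resources can be finished immediately.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                # Optimistically assume Pi completes and releases everything.
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = progress = True
    return [i for i in range(n) if not finish[i]]

# Two processes each holding one unit and requesting the other's unit:
print(find_deadlocked([0, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))  # [0, 1]
# With one spare unit of the first resource, P1 can finish, then P0:
print(find_deadlocked([1, 0], [[1, 0], [0, 1]], [[0, 1], [1, 0]]))  # []
```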
DETECTION-ALGORITHM USAGE
How often the detection algorithm is invoked depends on the following factors:
1. The frequency of occurrence of a deadlock.
2. The no. of processes affected by the deadlock.
If deadlocks occur frequently, then the detection-algorithm should be executed frequently.
Resources allocated to deadlocked-processes will be idle until the deadlock is broken.
Problem:
Deadlock occurs only when some processes make a request that cannot be granted immediately.
Solution 1:
The deadlock-algorithm must be executed whenever a request for allocation cannot be granted
immediately. In this case, we can identify set of deadlocked-processes and specific process
causing the deadlock.
Solution 2:
The deadlock-detection algorithm can be executed at periodic intervals, for example once per
hour, or whenever CPU utilization drops below a certain threshold.
RECOVERY FROM DEADLOCK
Explain different methods to recover from deadlock.
MEMORY MANAGEMENT
Main Memory
Basic hardware structure of main memory: A program must be brought (from disk) into main
memory and placed within a process for it to be run. Main memory and registers are the only
storage that a CPU can access directly.
Registers are accessible within one CPU clock cycle.
Main-memory access can take many cycles.
A cache sits between main memory and the CPU registers.
Protection of memory is required to ensure correct operation.
A pair of base and limit registers is used to define the logical (virtual) address space, as shown
in the figures below.
Figure captions: a base and a limit register define a logical-address space; hardware address protection with base and limit registers.
ADDRESS BINDING
Explain the multi-step processing of a user program with a neat block diagram.
Compile Time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load Time: Re-locatable code must be generated if the memory location is not known at compile time.
Execution Time: Binding delayed until run-time if the process can be moved during its execution
from one memory-segment to another. So it needs hardware support for address maps (e.g. base
and limit-registers).
Logical-address space: the set of all logical addresses generated by the CPU for a program.
Physical-address space: the set of all physical addresses mapped to the corresponding logical addresses.
SWAPPING
Swapping is used, for example, with priority-based scheduling algorithms:
A lower-priority process is swapped out so that a higher-priority process can be
loaded and executed.
Once the higher-priority process finishes, the lower-priority process can be
swapped back in and continued.
Swapping depends upon address-binding:
1) If binding is done at load-time, then process cannot be easily moved to a different
location.
2) If binding is done at execution-time, then a process can be swapped into a
different memory- space, because the physical-addresses are computed during
execution-time.
Major part of swap-time is transfer-time; i.e. total transfer-time is directly proportional to the
amount of memory swapped.
Disadvantages: Context-switch time is fairly high. If we want to swap a process, we must be sure
that it is completely idle. Two solutions: never swap a process with pending I/O operations, or
execute I/O operations only into OS buffers.
CONTIGUOUS MEMORY ALLOCATION
Memory is usually divided into 2 partitions:
1. One for the resident OS.
2. One for the user-processes.
Each process is contained in a single contiguous section of memory.
MEMORY ALLOCATION
Two types of memory partitioning are:
1) Fixed-sized partitioning and
2) Variable-sized partitioning
Fixed-sized Partitioning (Static partition) The memory is divided into fixed-sized partitions
before the process enters main memory. The size of each partition may or may not be same. Each
partition may contain exactly one process. The degree of multiprogramming is bound by the
number of partitions (Limitation on number of processes). When a partition is free, a process is
selected from the input queue and loaded into the free partition. When the process terminates, the
partition becomes available for another process.
Variable-sized Partitioning ( Dynamic partition): Partitions are made as the process enters into
main memory. The OS keeps a table indicating which parts of memory are available and which
parts are occupied. A hole is a block of available memory. Normally, memory contains a set of
holes of various sizes. Initially, all memory is available for user-processes and considered one
large hole. When a process arrives, it is allocated memory from a hole large enough to hold it.
If the chosen hole is larger than needed, we allocate only as much memory as is required and
keep the remaining memory available to satisfy future requests.
Three strategies used to select a free hole from the set of available holes.
1. First Fit
2. Best Fit
3. Worst Fit
Difference between First fit and Best fit algorithms
First Fit
Allocate the first hole that is big enough to satisfy the request.
Searching can start either at the beginning of the set of holes or at the location where the
previous first-fit search ended.
Best Fit
Allocate the smallest hole that is big enough to satisfy the request.
We must search the entire list, unless the list is ordered by size.
This strategy produces the smallest leftover hole, so less memory is wasted per allocation.
Worst Fit
Allocate the largest hole that can satisfy the request.
Again, we must search the entire list, unless it is sorted by size.
This strategy produces the largest leftover hole, wasting more memory per allocation.
First fit and best fit are better than worst fit in terms of both speed and storage utilization.
FRAGMENTATION
What is fragmentation?
As processes are loaded into and removed from main memory, the free memory space is broken
into little pieces. After some time, processes cannot be allocated memory because the remaining
free blocks are too small, and those blocks or partitions remain unused. This problem is called
fragmentation.
Two types of memory fragmentation:
1) Internal fragmentation and
2) External fragmentation
Internal Fragmentation
The general approach is to break the physical-memory into fixed-sized blocks and allocate
memory in units based on block size or process size as shown below:
The allocated(partitioned) memory to a process may be slightly larger than the requested-memory.
The difference between requested-memory and allocated-memory is called internal fragmentation
i.e. Unused memory that is internal to a partition.
External Fragmentation
External fragmentation occurs when there is enough total memory space to satisfy a request, but
the available spaces are not contiguous (i.e., storage is fragmented into a large number of small
holes). For example, in the figure below there are 3 free holes of sizes 9K, 5K and 6K. Consider
a newly arriving process of size 20K. A memory partition cannot be allocated for this process,
even though the total available memory exceeds the process size, because the available spaces
are not contiguous.
Both the first-fit and best-fit strategies for memory-allocation suffer from external fragmentation.
Statistical analysis of first-fit reveals that given N allocated blocks, another 0.5 N blocks will be
lost to fragmentation. This property is known as the 50-percent rule.
Given the memory partitions of 200K, 700K, 500K, 300K, 100K, 400K. Apply first fit and
best fit to place 315K, 427K, 250K, 550K processes.
Solution: First Fit
Best Fit:
Worst Fit
2. Given the 5 memory partitions 100KB, 500KB, 200KB, 300KB and 600KB, how each of the
First fit, best fit and worst fit algorithms place processes of 212KB, 417KB, 112KB and 426KB
size. Which algorithm makes an efficient use of the memory?
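The two exercises above can be checked with a sketch of the three strategies, assuming fixed partitions (one process per partition, no hole splitting). The example run uses the first exercise's values:

```python
def place(partitions, processes, strategy):
    """Return, for each process, the partition it is placed in (None = waits)."""
    free = list(partitions)              # remaining partition sizes (None = used)
    placement = []
    for p in processes:
        # Partitions still free and large enough, in address order.
        candidates = [(size, i) for i, size in enumerate(free)
                      if size is not None and size >= p]
        if not candidates:
            placement.append(None)       # process must wait
            continue
        if strategy == "first":
            i = candidates[0][1]         # lowest-indexed fitting partition
        elif strategy == "best":
            i = min(candidates)[1]       # smallest fitting partition
        else:                            # "worst"
            i = max(candidates)[1]       # largest fitting partition
        placement.append(partitions[i])
        free[i] = None
    return placement

parts, procs = [200, 700, 500, 300, 100, 400], [315, 427, 250, 550]
print(place(parts, procs, "first"))  # [700, 500, 300, None] -> 550K must wait
print(place(parts, procs, "best"))   # [400, 500, 300, 700]  -> all placed
```

Under this model, best fit places all four processes while first fit leaves the 550K process waiting, illustrating why best fit can make more efficient use of memory on such workloads.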
PAGING
What is paging?
Paging is a memory management scheme that permits the physical address space of a process to
be noncontiguous.
It avoids external fragmentation. It also solves the considerable problem of fitting memory chunks
of varying sizes onto the backing store.
Paging Hardware
Explain the concept of simple paging hardware.
Divide physical (main) memory into fixed-sized blocks called frames, and divide logical memory
into fixed-sized blocks of the same size called pages.
• When a process is to be executed, its pages are loaded into any available main memory-
frames from the backing-store. The backing-store is divided into fixed-sized blocks that
are of the same size as the memory-frames.
• Paging still suffers from internal fragmentation: the last frame allocated to a process may not be completely full.
The page-table contains the base-address (frame number on which page is loaded) of each page in
physical-memory. Address generated by CPU is the logical address divided into 2 parts:
1. Page-number(p) is used as an index to the page-table and
2. Offset(d) is combined with the base-address (frame number) to define the physical-
address. This physical-address is sent to the memory-unit to fetch the required
data/instruction.
With supporting paging hardware, explain in detail concept of paging with an example for
32-byte memory with 4 byte pages with a process being 16 bytes. How many bits are
reserved for page number and page offset in the logical address. Suppose the logical address
is 5, calculate the corresponding physical address, after populating memory and page table.
For explanation refer above paging hardware topic
Example:
Given that page size = 4 byte. Therefore frame size = 4 byte.
Process size = 16 byte
Number of pages = 16/4 = 4 Pages
Main memory size = 32 byte
Number of frames in main memory = 32/4 =8
When CPU generates logical address it contains two parts:
Page Number and Offset field.
Number of pages = 4, so we need 2 bits to represent 4 different pages; 0,1,2, 3
Here the number of bits required for the offset field = log2(page size) = log2(4) = 2 bits.
So the number of bits reserved for page no. = 2 bits and number of bits for offset = 2 bits.
Therefore CPU generates logical address of size 4 bits.
Suppose CPU generates a logical address = 5
Logical address 5 = 0101 in binary:
Page number (2 bits) = 01, Offset (2 bits) = 01
The physical address corresponding to this logical address (5) is obtained by referring the
following memory and page table information.
In the above logical address page no = 01 = Page 1
Now the MMU refers the Page Table and finds the frame number corresponding to page 1. ie:
frame 6.
Since main memory contains 8 frames, we need 3 bits to represent 8 different frames, f0, f1….f7,
in physical address. Offset field = 2 bits
Physical address:
Frame number (3 bits) = 110, Offset (2 bits) = 01
That is the logical address 5 (page 1, offset 1) maps to physical address = (6 x 4) +1 = 25
Therefore the physical address corresponding to the logical address 5 = 11001 = 25
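The translation above can be sketched as follows; only the page-1 -> frame-6 mapping is given in the text, so the other page-table entries here are assumed for illustration.

```python
PAGE_SIZE = 4
# page -> frame; only page 1 -> frame 6 comes from the text,
# the entries for pages 0, 2 and 3 are assumed values.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)       # split into p and d
    return page_table[page] * PAGE_SIZE + offset    # frame base + offset

print(translate(5))   # page 1, offset 1 -> frame 6 -> 6*4 + 1 = 25
```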
1. The problem with simple paging is that an extra memory reference to the page table is
required to get the frame number corresponding to the page number.
2. Thus two memory accesses are needed to access a byte in main memory: one for the page-table
entry and one for the byte itself.
3. In the simple paging scheme, memory access is therefore slowed by a factor of 2.
TRANSLATION LOOK-ASIDE BUFFER (TLB)
A TLB entry consists of two parts: a) a page number and b) a frame number. The TLB contains
only a few of the page-table entries. When a logical address is generated by the CPU, its
page number is presented to the TLB.
If the page-number is found(TLB hit), its frame-number is immediately available and used
to access memory.
If page-number is not found in TLB (TLB miss), a memory-reference to page table must
be made. The obtained frame-number can be used to access memory. In addition, we add
this page-number and frame-number to the TLB, so that they will be found quickly on the
next reference.
If the TLB is already full of entries, the OS must select one for replacement. Percentage of times
that a particular page-number is found in the TLB is called hit ratio.
Advantage: Search operation is fast.
Disadvantage: Hardware is expensive.
In the paging scheme with TLB, it takes 20ns to search the TLB and 100ns to access
memory. Find the effective access time and percentage slowdown in memory access time if
i. the hit ratio is 80%, and
ii. the hit ratio is 98%.
Solution:
In the paging scheme with TLB, if we find the page in TLB, it takes 20ns to search the TLB and
100ns to access memory, and then a mapped memory access takes 120ns when the page number is
in the TLB.
If we fail to find the page number in the TLB (20 ns), then we must first access the page table in
memory to get the corresponding frame number (100 ns), and then access the desired byte in
memory (100 ns).
Thus a total of 220 ns is required to access the byte if the page number is not in the TLB
(TLB search + page-table access in main memory + desired byte access in main memory).
i) Effective memory access time = 0.80 x 120 + 0.20 x 220 = 140 ns
Percentage slowdown in memory access time = (140 - 100)/100 = 40%
ii) Effective memory access time = 0.98 x 120 + 0.02 x 220 = 122 ns
Percentage slowdown in memory access time = (122 - 100)/100 = 22%
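The computation can be sketched as:

```python
TLB, MEM = 20, 100            # ns: TLB search time, memory access time

def effective_access_time(hit_ratio):
    hit_time = TLB + MEM      # 120 ns: TLB hit, then one memory access
    miss_time = TLB + 2 * MEM # 220 ns: TLB miss, page table + desired byte
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80))   # 140.0 ns -> 40% slowdown over 100 ns
print(effective_access_time(0.98))   # 122.0 ns -> 22% slowdown over 100 ns
```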
MEMORY PROTECTION
Memory-protection is achieved by protection-bits for each frame. The protection-bits are kept in
the page-table. One protection-bit can define a page to be read-write or read-only. Every reference
to memory goes through the page-table to find the correct frame-number. Firstly, the physical-
address is computed. At the same time, the protection-bit is checked to verify that no writes are
being made to a read-only page. An attempt to write to a read-only page causes a hardware-trap to
the OS (or memory-protection violation).
HIERARCHICAL PAGING
This scheme is also known as a forward-mapped page-table, because address translation works
from the outer page-table inwards.
Consider the system with a 32-bit logical-address space and a page-size of 4 KB. A logical-
address is divided into 20-bit page-number and 12-bit page-offset. Since the page-table is paged,
the page-number is further divided into a 10-bit page-number and a 10-bit page-offset. Thus, a
logical-address is as follows:
HASHED PAGE TABLES
The virtual page-number is hashed into the hash-table. The virtual page-number is compared with
the first element in the linked-list. If there is a match, the corresponding page-frame (field 2) is
used to form the desired physical-address. If there is no match, subsequent entries in the linked-
list are searched for a matching virtual page-number.
INVERTED PAGE TABLES
It has one entry for each real page of memory. Each entry consists of virtual-address of the page
stored in that real memory-location and information about the process that owns the page.
Each virtual-address consists of a triplet:
<process-id, page-number, offset>.
Each inverted page-table entry is a pair <process-id, page-number>
The algorithm works as follows:
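The lookup can be sketched as follows, under the structure described above (one table entry per physical frame, searched for a <process-id, page-number> match); the table contents and page size here are illustrative.

```python
# Inverted page table: index = physical frame number,
# entry = (process-id, page-number) stored in that frame. Values assumed.
inverted = [("A", 3), ("B", 0), ("A", 0), ("B", 1)]
PAGE_SIZE = 4096

def translate(pid, page, offset):
    for frame, entry in enumerate(inverted):     # search for a matching pair
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset    # frame base + offset
    raise LookupError("page fault: no matching entry")

print(translate("A", 0, 100))   # frame 2 -> 2*4096 + 100 = 8292
```

The table has one entry per frame rather than one per virtual page, which saves space but makes each lookup a search (in practice accelerated with a hash table).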
SEGMENTATION
Explain segmentation with an example.
Segmentation is a memory-management scheme that supports the user's view of memory. A
logical-address space is a collection of segments of various sizes; each segment has a name and
a length. Addresses specify both the segment name and the offset within the segment.
Normally, the user program is compiled, and the compiler automatically constructs segments
reflecting the input program.
For example: the code; global variables; the heap, from which memory is allocated; and the
stacks used by each thread.
Hardware Support
The segment-table maps two-dimensional user-defined addresses into one-dimensional physical
addresses. In the segment table, each entry has the following 2 fields:
Segment-base contains starting physical-address where the segment resides in memory.
Segment-limit specifies the length of the segment
Segmentation hardware
A logical-address consists of 2 parts:
Segment-number(s) is used as an index to the segment-table. Offset(d) must be between 0 and the
segment-limit.
If offset is not between 0 & segment-limit, then we trap to the OS (logical-addressing attempt
beyond end of segment).
If offset is legal, then it is added to the segment-base to produce the physical-memory address
What are the physical addresses for the following logical addresses:
i. (0, 430) ii. (1, 10) iii. (2, 500) iv. (3, 400)
Solution
i. (0, 430)
Segment = 0 and offset = 430
Therefore physical address = 219 + 430 = 649
ii. (1, 10)
Segment = 1 and offset = 10
Therefore physical address = 2300 + 10 = 2310
iii. (2, 500)
Segment = 2 and offset = 500
Therefore physical address = 1327 + 500 = 1827
iv. (3, 400)
Segment = 3 and offset = 400, which is greater than the segment limit of 96. A reference to
byte 400 of segment 3 therefore results in a trap to the OS, as this segment is only 96 bytes
long, generating a segmentation-fault error.
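The solution can be sketched in code. The text gives the bases of segments 0-2 and the limit of segment 3; the remaining limits and segment 3's base below are assumed values.

```python
# (base, limit) per segment; limits for segments 0-2 and the base of
# segment 3 are assumed, the rest come from the solution above.
segment_table = [(219, 600), (2300, 14), (1327, 580), (1952, 96)]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if not 0 <= offset < limit:
        # offset beyond the segment limit: trap to the OS
        raise MemoryError("segmentation fault")
    return base + offset

print(translate(0, 430))   # 219 + 430 = 649
print(translate(1, 10))    # 2300 + 10 = 2310
print(translate(2, 500))   # 1327 + 500 = 1827
```

Calling `translate(3, 400)` raises, since 400 exceeds segment 3's limit of 96.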
Consider the following segment table:
What are the physical addresses for the following logical addresses:
i. (0, 9, 9) ii. (2, 78) iii. (1, 265) iv) (3, 222) v) (0, 111)
Solution
i. (0, 9, 9): This logical address is invalid, since a logical address must contain only two
parts: a segment number and an offset.
ii. (2, 78)
Segment = 2 and limit = 78
Therefore physical address = 111 + 78 = 189