Unit IV (Memory and I/O Management) - Syllabus
Memory Management
Memory Management Requirements, Memory Partitioning: Fixed
Partitioning, Dynamic Partitioning, Buddy System, Relocation, Paging,
Segmentation.
I/O Management
I/O Devices, Organization of the I/O Function, I/O Buffering, Disk
Scheduling (FIFO, SSTF, SCAN, C-SCAN, LOOK, C-LOOK)
Department of Information Technology, VIIT, Pune-48
Memory Management
● Memory comprises a large array of bytes, each with its own address.
● The programs, along with the information they access, should be in the main
memory during execution.
● The CPU fetches instructions from memory according to the value of the program
counter.
● To achieve a degree of multiprogramming and proper utilization of memory, memory
management is important.
● In most schemes, the kernel occupies some fixed portion of main memory and the rest is
shared by multiple processes
● It is the task carried out by the OS and hardware to accommodate multiple processes in
main memory
● If only a few processes can be kept in main memory, then much of the time all processes
will be waiting for I/O and the CPU will be idle
● Hence, memory needs to be allocated efficiently in order to pack as many processes into
memory as possible
Memory Management
● Main memory is directly accessible to the processor, so moving instructions and
data into and out of the processor is extremely fast.
L1 cache has a smaller capacity than L2 but can be accessed faster. L1 is usually
built into the processor chip, while L2 sits very close to it.
Why Memory Management is Required?
● Allocate and de-allocate memory before and after process execution.
● To keep track of the memory space used by processes.
● To minimize fragmentation issues.
● To ensure proper utilization of main memory.
● To maintain data integrity while executing a process.
Memory Management Requirements
● Relocation
● programmer cannot know where the program will be placed in
memory when it is executed
● a process may be (often) relocated in main memory due to
swapping
● swapping enables the OS to have a larger pool of
ready-to-execute processes
● memory references in code (for both instructions and data)
must be translated to actual physical memory address
Memory Management Requirements
● Protection
● processes should not be able to reference memory locations in
another process without permission
● impossible to check addresses at compile time in programs since
the program could be relocated
● address references must be checked at run time by hardware
Memory Management Requirements
● Sharing
● must allow several processes to access a common portion of
main memory without compromising protection
● cooperating processes may need to share access to the
same data structure
● better to allow each process to access the same copy of
the program rather than have its own separate copy
Memory Management Requirements
● Logical Organization
● users write programs in modules with different characteristics
● instruction modules are execute-only
● data modules are either read-only or read/write
● some modules are private, others are public
● To effectively deal with user programs, the OS and hardware
should support a basic form of module to provide the required
protection and sharing
Memory Management Requirements
● Physical Organization
● secondary memory is the long term store for programs and
data while main memory holds program and data currently in
use
● moving information between these two levels of memory is a
major concern of memory management (OS)
● it is highly inefficient to leave this responsibility to the
application programmer
Logical and Physical Address Space
● Logical Address Space: An address generated by the CPU is known as
a “Logical Address”, also called a virtual address. The logical address
space is the set of all logical addresses generated for a process, and its
size corresponds to the size of the process. A logical address can change,
e.g., when the process is relocated.
● Physical Address Space: An address seen by the memory unit, i.e., the
one loaded into the memory address register.
● A physical address is also known as a real address. The set of all
physical addresses corresponding to these logical addresses is known as
the physical address space.
● The run-time mapping from virtual to physical addresses is done by a
hardware device called the Memory Management Unit (MMU), which
computes the physical address.
● A physical address remains constant.
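The relocation and protection ideas above can be sketched as a tiny MMU model with a base (relocation) register and a limit register. This is a minimal sketch; the register values 14000 and 3000 are assumptions for illustration only:

```python
BASE, LIMIT = 14000, 3000  # assumed register values for one process

def mmu_translate(logical, base=BASE, limit=LIMIT):
    """Check the logical address against the limit register, then relocate it."""
    if not 0 <= logical < limit:
        raise MemoryError("addressing error: trap to the OS")
    return base + logical  # physical address

print(mmu_translate(346))  # 14346
```

Every memory reference is checked at run time by this hardware path, which is why protection cannot be enforced at compile time for relocatable programs.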
Simple Memory Management
● Here we study the simpler case where there is no virtual memory
● An executing process must be loaded entirely in main memory
● Although the following simple memory management techniques
are not used in modern OS, they lay the ground for a proper
discussion of virtual memory
● fixed partitioning
● dynamic partitioning
● simple paging
● simple segmentation
Fixed Partitioning
● Partition main memory into a
set of non-overlapping regions
called partitions
● Partitions can be of equal or
unequal sizes
Disadvantages
1. Internal fragmentation
2. Limit on Process size
3. Limitation on degree of multiprogramming
4. External fragmentation
Fixed Partitioning
● Any process whose size is less than or equal to a partition size can be loaded
into the partition
● If all partitions are occupied, the operating system can swap a process out of a
partition
● A program may be too large to fit in a partition. The programmer must then
design the program with overlays (“The process of transferring a block of
program code or other data into internal memory, replacing what is
already stored”. )
● when the module needed is not present, the user program must load that
module into the program’s partition, overlaying whatever program or data
are there
Fixed Partitioning
● Main memory use is inefficient. Any program, no matter
how small, occupies an entire partition. This is called
internal fragmentation.
● Unequal-size partitions lessen these problems but they still
remain...
● Equal-size partitions were used in IBM’s early OS/MFT
(Multiprogramming with a Fixed number of Tasks)
Placement Algorithm with Partitions
● Equal-size partitions
● If there is an available partition, a process can be loaded into
that partition
● because all partitions are of equal size, it does not matter
which partition is used
● If all partitions are occupied by blocked processes, choose one
process to swap out to make room for the new process
Placement Algorithm with Partitions
● Unequal-size partitions: use of
multiple queues
● assign each process to the
smallest partition within
which it will fit
● A queue for each partition
size is maintained
● tries to minimize internal
fragmentation
● Problem: some queues will
be empty if no processes
within that size range are present
Placement Algorithm with Partitions
● Unequal-size partitions: use of a
single queue
● When it’s time to load a
process into main memory,
the smallest available
partition that will hold the
process is selected
● increases the level of
multiprogramming at the
expense of internal
fragmentation
Dynamic Partitioning
● Partitions are of variable length and number
● Each process is allocated exactly as much memory as it requires
● Eventually holes are formed in main memory. This is called external
fragmentation
● Must use compaction to shift processes so they are contiguous and all free
memory is in one block
● Used in IBM’s OS/MVT (Multiprogramming with a Variable number of Tasks)
Dynamic Partitioning: an example
● A hole of 64K is left after loading 3 processes: not enough room for another
process 4 of size 128k
● Eventually each process is blocked. The OS swaps out process 2 to bring in
process 4
Dynamic Partitioning: an example
● another hole of 96K is created
● The OS swaps out process 1 to bring process 2 back in, and another
hole of 96K is created...
Dynamic Partitioning
Advantages
● Less internal fragmentation
● No limitation on no. of processes
● No limitation on process size
Partition Allocation strategies
Placement Algorithm
● Used to decide which free
block to allocate to a process
● Goal: to reduce usage of
compaction (time consuming)
● Possible algorithms:
● Best-fit: choose smallest
hole
● First-fit: choose first hole
from beginning
● Next-fit: choose first hole
from last placement
Partition Allocation Algorithm – First fit
● According to this strategy, allocate to the process the first hole (free partition) that is big
enough. The search can start from the beginning of the set of holes.
● The search stops as soon as a free hole that is large enough is found.
● For example:
● Process P1 of size 10KB has arrived and then the first hole that is enough to meet the
requirements of size 20KB is chosen and allocated to the process.
Partition Allocation Algorithm – Best fit
● With this strategy, the smallest free partition/ hole that is big enough and meets the requirements
of the process is allocated to the process.
● This strategy searches the entire list of free partitions/holes in order to find a hole whose size is
either greater than or equal to the size of the process.
● For example:
● Process P1 of size 10KB has arrived; the smallest hole that meets the requirement, of size
10 KB, is chosen and allocated to the process.
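Best fit scans the whole hole list and keeps the smallest hole that still fits. A minimal sketch, with assumed hole sizes:

```python
def best_fit(holes, size):
    """Return the index of the smallest hole that still fits `size`, else None."""
    best = None
    for i, hole in enumerate(holes):
        if hole >= size and (best is None or hole < holes[best]):
            best = i
    return best

holes = [20, 10, 30]  # hole sizes in KB, assumed
print(best_fit(holes, 10))  # 1 -> the 10 KB hole (exact fit, zero wastage)
```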
Partition Allocation Algorithm – Worst fit
● With this strategy, the largest free partition/hole that meets the requirements of the process is
allocated to the process.
● It is done so that the portion left over is big enough to be useful. This strategy is just the
opposite of Best Fit.
● This strategy searches the entire list of holes in order to find the largest hole and then
allocates it to the process.
● For example-
● Process P1 of size 10KB has arrived and then the largest hole of size 80 KB is chosen and
allocated to the process.
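Worst fit is the mirror image of best fit: keep the largest hole that can hold the process. A minimal sketch; the hole sizes are assumed except the 80 KB hole from the example:

```python
def worst_fit(holes, size):
    """Return the index of the largest hole that fits `size`, else None."""
    worst = None
    for i, hole in enumerate(holes):
        if hole >= size and (worst is None or hole > holes[worst]):
            worst = i
    return worst

holes = [20, 80, 30]  # hole sizes in KB, assumed
print(worst_fit(holes, 10))  # 1 -> the 80 KB hole, leaving the largest leftover
```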
Partition Allocation Algorithm – Next fit
● Next-fit: choose the first suitable hole found when searching from the last placement
● Step 1. Enter the number of memory blocks.
● Step 2. Enter the size of each memory block.
● Step 3. Enter the number of processes with their sizes.
● Step 4. Search starting from the last allocated memory location.
● Step 5. If the current block is too small, continue checking the next blocks.
● Step 6. Stop.
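The steps above can be sketched as a circular scan that resumes at the last placement. This is one common variant (resuming at the last allocated index and wrapping around); the hole sizes are assumed:

```python
def next_fit(holes, size, last=0):
    """Search circularly from index `last`; return the first fitting hole index or None."""
    n = len(holes)
    for k in range(n):
        i = (last + k) % n
        if holes[i] >= size:
            return i
    return None

holes = [5, 20, 14, 30]  # hole sizes in KB, assumed
print(next_fit(holes, 10, last=2))  # 2 -> the 14 KB hole, since the search starts there
```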
Partition Allocation Algorithm – Best fit
● In the Best fit memory allocation scheme, the operating system searches for the block that can –
● Accommodate the process
● Also leave the minimum memory wastage
The process size is 40. While blocks 1, 2 and 4 can
accommodate the process, block 2 is chosen, as it leaves
the lowest memory wastage.
Partition Allocation Algorithm – Worst fit
● The algorithm searches sequentially, starting from the first memory block, for
the memory block that fulfills the following conditions –
● Can accommodate the process size
● Leaves the largest wasted space (fragmentation) after the process is allocated to given
memory block
Partition Allocation Algorithm – Worst fit
● Step 1: Input the memory blocks with their sizes.
● Step 2: Input the processes with their sizes.
● Step 3: For each process, find the largest block that can be assigned to it.
● Step 4: If such a block exists, allocate it to the process.
● Step 5: If the condition is not fulfilled, leave the process and check
the next process.
● Step 6: Stop.
Placement Algorithm: comments
● Next-fit often leads to allocation of the largest block at the end of
memory
● First-fit favors allocation near the beginning: tends to create less
fragmentation than Next-fit
● Best-fit searches for the smallest block: the fragment left behind is as small
as possible
● main memory quickly forms holes too small to hold any process:
compaction generally needs to be done more often
Buddy System
● Allocates memory from fixed-size segment consisting of
physically-contiguous pages
● Memory allocated using power-of-2 allocator
● Satisfies requests in units sized as power of 2
● Request rounded up to next highest power of 2
● When smaller allocation needed than is available, current chunk split into two
buddies of next-lower power of 2
● Continue until appropriate sized chunk available
● For example, assume 256KB chunk available, kernel requests 21KB
● Split into AL and AR of 128KB each
● One further divided into BL and BR of 64KB
● One further into CL and CR of 32KB each – one used to satisfy request
● Advantage – quickly coalesce unused chunks into larger chunk
● Disadvantage - fragmentation
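The power-of-2 rounding at the heart of the buddy system can be sketched directly; the 21 KB request is the example from the slide:

```python
def buddy_block_size(request_kb):
    """Round a request up to the next power of two: the block size actually allocated."""
    size = 1
    while size < request_kb:
        size *= 2
    return size

# The 256 KB / 21 KB example: split 256 -> 128 -> 64 -> 32, so a 32 KB chunk is used.
size = buddy_block_size(21)
print(size, size - 21)  # 32 11  (11 KB of internal fragmentation)
```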
Buddy System Allocator
Given: Memory size = 1 MB
Requests: Request A → 100K, Request B → 240K, Request C → 64K, Request D → 256K, Release B,
Release A, Request E → 75K, Release C, Release E, Release D
Request A → 100K: the 1 MB block is split into 512K + 512K, one 512K into 256K + 256K, and one
256K into 128K + 128K; A is placed in a 128K block:
| A = 100K (in 128K) | 128K | 256K | 512K |
Internal fragmentation of 28K (128K − 100K)
Buddy System Allocator – second example
Given: total memory = 1024 MB (addresses 0 to 1024 MB)
Sequence: A requests 32 M, B requests 64 M, C requests 60 M, D requests 150 M,
Release B, Release A, E requests 100 M, F requests 100 M
Paging
● Paging permits the physical address space of a process to be non-contiguous.
● It is a fixed-size partitioning scheme. In the Paging technique, the secondary
memory and main memory are divided into equal fixed-size partitions.
● Paging helps to avoid external fragmentation and the need for compaction.
● A process is divided into a number of equal-size chunks called pages;
every page is the same size.
● Main memory is divided into fixed-size chunks called frames;
the frame size must equal the page size.
● The process pages can thus be assigned to the available chunks in main memory
called frames (or page frames)
● Consequence: a process does not need to occupy a contiguous portion of
memory
Logical address used in paging
● The CPU always generates a logical address.
● In order to access the main memory, always a physical address is needed.
● The logical address generated by CPU always consists of two parts:
● Page Number(p)
● Page Offset (d)
● The page number specifies the page of the process from which the CPU
wants to read data; it is also used as an index into the page table.
● The page offset specifies the word on that page that the CPU
wants to read.
● Page Table - The Page table mainly contains the base address of each page in the
Physical memory.
● The base address is then combined with the page offset in order to define the physical
memory address .
● Thus page table mainly provides the corresponding frame number (base address of
the frame) where that page is stored in the main memory.
Logical address used in paging
● The frame number is combined with the page offset and forms the required
physical address.
● So, The physical address consists of two parts:
● Frame Number(f)
● Page offset(d) or displacement
● The frame number indicates the frame where the required page is stored,
and the page offset indicates the word to be read from that page.
● The page size (like the frame size) is defined by the hardware. The size of
a page is typically a power of 2, varying between 512 bytes and 16 MB per
page, depending on the architecture of the computer.
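Because the page size is a power of 2, the logical address splits into page number and offset with a simple division. A minimal sketch; the 4 KB page size and the address 8300 are assumptions for illustration:

```python
PAGE_SIZE = 4096  # assumed 4 KB pages

def split_logical(addr, page_size=PAGE_SIZE):
    """Split a logical address into (page number, page offset)."""
    return divmod(addr, page_size)

print(split_logical(8300))  # (2, 108): page 2, offset 108
```

In hardware this is just a bit split: with a 2^n-byte page, the low n bits are the offset and the remaining high bits are the page number.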
Logical to physical address in paging
● The page number is given as input to the page table, which produces the
frame number as output.
● The frame number and the offset are then combined to generate the
physical address.
For a 32 B memory with 4 B pages
● Logical memory (16 bytes) holds the characters a–p across four pages:
page 0 = a, b, c, d; page 1 = e, f, g, h; page 2 = i, j, k, l; page 3 = m, n, o, p
● Page table (page no. → frame no.): 0 → 5, 1 → 6, 2 → 1, 3 → 2
● Physical memory has 8 frames of 4 bytes each (frames start at bytes
0, 4, 8, 12, 16, 20, 24, 28); frame 5 starts at byte 20 and holds a, b, c, d
● Example: logical address 0, offset 0. The page table shows that page 0
is in frame 5, so:
physical address = frame no. × page size + offset = 5 × 4 + 0 = 20
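The full translation for this example can be checked in Python; the page table values are the ones from the example above:

```python
PAGE_SIZE = 4
page_table = [5, 6, 1, 2]  # page -> frame, from the example

def translate(logical):
    """Translate a logical address via the page table: frame * page size + offset."""
    page, offset = divmod(logical, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))   # 20: page 0 is in frame 5, so 5*4 + 0
print(translate(13))  # 9:  page 3 is in frame 2, so 2*4 + 1
```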
Segmentation
● In Operating Systems, Segmentation is a memory management
technique in which the memory is divided into the variable size
parts.
● When a process gets loaded into main memory, its different segments
can be located anywhere
● The details about each segment are stored in a table called the
segment table. The segment table itself is stored in one (or more) of the
segments.
● Each segment is fully packed with instructions/data: no internal
fragmentation
● Segmentation gives user’s view of the process which paging does
not give
Why Segmentation is required?
● Paging is closer to the operating system than to the user.
● It divides the process into pages regardless of the fact that related
parts of the process (e.g., the code of one function) may need to
be loaded in the same page.
● The operating system does not care about the user's view of the process.
● It may split the same function across different pages, and those pages
may or may not be loaded into memory at the same time.
● This can decrease the efficiency of the system.
● Segmentation instead divides the process into segments.
● Each segment contains related contents: for example, the main
function can be included in one segment and the library functions can
be included in another segment.
Paging vs Segmentation
Sr No. | Paging | Segmentation
1 | Non-contiguous memory allocation | Non-contiguous memory allocation
2 | Divides the program into fixed-size pages | Divides the program into variable-size segments
3 | OS is responsible | Compiler is responsible
4 | Faster than segmentation | Slower than paging
5 | Closer to the OS | Closer to the user
6 | Logical address is divided into page number and page offset | Logical address is divided into segment number and segment offset
7 | Page table maintains the page information | Segment table maintains the segment information
Segmentation
● The segment table contains two pieces of information about each segment:
● Base: the base (starting physical) address of the segment
● Limit: the length of the segment
Segmentation
● In contrast with paging, segmentation is visible to the programmer
● provided as a convenience to logically organize programs (e.g., data
in one segment, code in another segment)
● must be aware of segment size limit
● The OS maintains a segment table for each process. Each entry
contains:
● the starting physical addresses of that segment.
● the length of that segment (for protection)
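Segment-table lookup with the limit check can be sketched as follows. The (base, limit) values here are assumed for illustration, not taken from the slides:

```python
# (base, limit) per segment; values assumed for illustration
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def seg_translate(seg, offset):
    """Translate (segment, offset), trapping if the offset exceeds the segment length."""
    base, limit = segment_table[seg]
    if offset >= limit:  # protection check against the segment length
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 53))  # 4353: base 4300 + offset 53
```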
Simple segmentation and paging comparison
● Segmentation requires more complicated hardware for address translation
● Segmentation suffers from external fragmentation
● Paging yields only a small internal fragmentation
● Segmentation is visible to the programmer whereas paging is transparent
● Segmentation can be viewed as a convenience offered to the programmer to
logically organize a program into segments and to use different kinds of
protection (e.g., execute-only for code but read-write for data)
● for this we need protection bits in segment table entries
I/O Management
● I/O management is another important function of the OS
● I/O requests are managed by device drivers in collaboration with the
device controller.
● I/O devices can be divided into 4 main categories:
● Storage category: disks and tapes are examples of this category
● Transmission category: network cards and modems
● Human-interface category: screens, keyboards, and microphones are
human-interface devices for input/output operations
● Specialized category: other devices such as joysticks
Disk Scheduling Algorithms
● A Process makes the I/O requests to the operating system to access
the disk.
● Disk Scheduling Algorithm manages those requests and decides the
order of the disk access given to the requests.
● Disk Scheduling Algorithms are needed because a process can make
multiple I/O requests and multiple processes run at the same time.
● The requests made by a process may be located at different sectors on
different tracks, which can increase the seek time.
● These algorithms help in minimizing the seek time by ordering the
requests made by the processes.
Disk Scheduling
● Important Terms related to Disk Scheduling Algorithms
● Seek Time - It is the time taken by the disk arm to locate the desired track.
● Rotational Latency - the time taken for the desired sector of the disk to
rotate to the position where the read/write head can access it.
● Transfer Time - It is the time taken to transfer the data requested by the
processes.
● Disk Access Time - Disk Access time is the sum of the Seek Time,
Rotational Latency, and Transfer Time.
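The disk access time formula can be made concrete with a small calculation. The average rotational latency is half a revolution; the 7200 RPM, 9 ms seek, and 0.1 ms transfer figures below are assumed example values:

```python
def avg_rotational_latency_ms(rpm):
    """Half a revolution on average, in milliseconds."""
    return (60_000 / rpm) / 2

def disk_access_time_ms(seek_ms, rpm, transfer_ms):
    # Disk Access Time = Seek Time + Rotational Latency + Transfer Time
    return seek_ms + avg_rotational_latency_ms(rpm) + transfer_ms

print(disk_access_time_ms(9.0, 7200, 0.1))  # 9.0 + ~4.17 + 0.1 ms
```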
Disk Scheduling algorithms
● First Come First Serve (FCFS)
● In this algorithm, requests are served in the order they arrive: whoever comes first is
served first. This is the simplest algorithm. E.g., suppose the requests, in order, are 45, 21,
67, 90, 4, 50, 89, 52, 61, 87, 25, with the head pointer starting at 50.
Seek Time = distance moved by the disk arm = 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62 = 403
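A minimal Python sketch of FCFS: sum the absolute head movement for each request in arrival order:

```python
def fcfs_seek(head, requests):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

print(fcfs_seek(50, [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]))  # 403
```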
Disk Scheduling algorithms
● SSTF Scheduling Algorithm
● Shortest seek time first (SSTF) algorithm selects the disk I/O request which requires the
least disk arm movement from its current position regardless of the direction. It reduces the
total seek time as compared to FCFS.
● Consider the following disk request sequence for a disk with 100 tracks, with the head at 50;
request 45 is the only request queued at the start.
● 45, 21, 67, 90, 4, 89, 52, 61, 87, 25
Seek Time = Distance Moved by the disk arm = 5 + 7 + 9 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 136
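A minimal SSTF sketch: repeatedly service the pending request nearest to the current head position. Per the example, 45 is serviced first (the only request queued at the start), and SSTF then orders the rest:

```python
def sstf_seek(head, requests):
    """Always service the pending request nearest to the current head position."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

# 45 is serviced first (only request queued at start), then SSTF handles the rest
total = abs(50 - 45) + sstf_seek(45, [21, 67, 90, 4, 89, 52, 61, 87, 25])
print(total)  # 136
```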
Disk Scheduling algorithms
● SSTF Scheduling Algorithm – Disadvantages
● It may cause starvation for some requests.
● Switching direction on a frequent basis slows the working of the algorithm.
● It is not the most optimal algorithm.
Disk Scheduling algorithms
● SCAN Scheduling Algorithm
● It is also called the Elevator Algorithm. The disk arm moves in a particular direction
till the end, satisfying all the requests coming in its path, then turns back and moves in the
reverse direction, satisfying the requests coming in its path.
● It works the way an elevator works: the elevator moves in one direction all the way to the last
floor in that direction, and then turns back.
● Consider the following disk request sequence for a disk with 200 tracks, with the head at 54
(initially moving toward cylinder 0):
● 98, 137, 122, 183, 14, 133, 65, 78
Number of cylinders crossed = 40 + 14 + 65 + 13 + 20 + 24 + 11 + 4 + 46 = 237
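A SCAN sketch for this example, with the head initially moving toward cylinder 0 (down), reaching the end, then reversing:

```python
def scan_seek(head, requests, min_cyl=0):
    """SCAN with the head initially moving toward cylinder 0 (as in the example)."""
    total, pos = 0, head
    lower = sorted((r for r in requests if r < head), reverse=True)
    upper = sorted(r for r in requests if r >= head)
    for r in lower + [min_cyl]:   # sweep down, all the way to the first cylinder
        total += pos - r
        pos = r
    for r in upper:               # reverse and sweep up
        total += r - pos
        pos = r
    return total

print(scan_seek(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 237
```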
Disk Scheduling algorithms
● C-SCAN Scheduling Algorithm
● In the C-SCAN algorithm, the disk arm moves in a particular direction servicing requests
until it reaches the last cylinder; it then jumps to the last cylinder at the opposite end
without servicing any requests on the way, and continues moving in the original direction,
servicing the remaining requests.
● Consider the following disk request sequence for a disk with 200 tracks, with the head at 54
(initially moving toward cylinder 0):
● 98, 137, 122, 183, 14, 133, 65, 78
Number of cylinders crossed = 40 + 14 + 199 + 16 + 46 + 4 + 11 + 24 + 20 + 13 = 387
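A C-SCAN sketch for this example: sweep down to cylinder 0, jump to the far end (cylinder 199 on the assumed 200-track disk), then continue downward through the remaining requests:

```python
def cscan_seek(head, requests, min_cyl=0, max_cyl=199):
    """C-SCAN with the head initially moving toward cylinder 0 (as in the example)."""
    total, pos = 0, head
    lower = sorted((r for r in requests if r < head), reverse=True)
    upper = sorted((r for r in requests if r >= head), reverse=True)
    for r in lower + [min_cyl]:     # sweep down to the first cylinder
        total += pos - r
        pos = r
    total += max_cyl - min_cyl      # jump to the far end, servicing nothing on the way
    pos = max_cyl
    for r in upper:                 # continue downward through remaining requests
        total += pos - r
        pos = r
    return total

print(cscan_seek(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 387
```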
Disk Scheduling algorithms
● LOOK Scheduling Algorithm
● In this scheduling algorithm, the disk arm stops moving inwards (or outwards) when
no more requests exist in that direction.
● This algorithm overcomes the overhead of the SCAN algorithm, which forces the disk arm
to move all the way to the end in one direction regardless of whether any requests exist in
that direction.
● Consider the following disk request sequence 98, 137, 122, 183, 14, 133, 65, 78
Number of cylinders crossed = 40 + 51 + 13 + 20 + 24 + 11 + 4 + 46 = 209
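A LOOK sketch for this example: like SCAN, but the downward sweep stops at the lowest pending request (14) instead of continuing to cylinder 0:

```python
def look_seek(head, requests):
    """LOOK with the head initially moving toward lower cylinders (as in the example)."""
    total, pos = 0, head
    lower = sorted((r for r in requests if r < head), reverse=True)
    upper = sorted(r for r in requests if r >= head)
    for r in lower:    # sweep down only as far as the lowest request
        total += pos - r
        pos = r
    for r in upper:    # reverse and sweep up
        total += r - pos
        pos = r
    return total

print(look_seek(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 209
```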
Disk Scheduling algorithms
● C-LOOK Scheduling Algorithm
● The C-LOOK algorithm is similar to the C-SCAN algorithm. The disk arm moves outwards
servicing requests until it reaches the highest request cylinder; it then jumps to the lowest
request cylinder without servicing any requests on the way, and starts moving outwards
again, servicing the remaining requests.
● Consider the requests - 98, 137, 122, 183, 14, 133, 65, 78
Number of cylinders crossed = 11 + 13 + 20 + 24 + 11 + 4 + 46 + 169 = 298
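A C-LOOK sketch for this example: sweep up to the highest request, jump straight to the lowest request, then continue upward through the rest:

```python
def clook_seek(head, requests):
    """C-LOOK with the head moving toward higher cylinders (as in the example)."""
    total, pos = 0, head
    upper = sorted(r for r in requests if r >= head)
    lower = sorted(r for r in requests if r < head)
    for r in upper:                  # sweep up to the highest request
        total += r - pos
        pos = r
    if lower:
        total += pos - lower[0]      # jump straight to the lowest request
        pos = lower[0]
        for r in lower[1:]:          # service the rest moving upward
            total += r - pos
            pos = r
    return total

print(clook_seek(54, [98, 137, 122, 183, 14, 133, 65, 78]))  # 298
```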
End of Unit 4….