Memory Management and Disk Scheduling
Virtual Memory in OS
Virtual Memory is a storage scheme that gives the user the illusion of having a very large main
memory. This is done by treating a part of secondary memory as if it were main memory.
In this scheme, the user can load processes larger than the available main memory, under the
illusion that enough memory is available to hold them.
Instead of loading one big process into the main memory, the Operating System loads
different parts of more than one process into the main memory.
By doing this, the degree of multiprogramming is increased and therefore the CPU
utilization also increases.
In the modern world, virtual memory has become quite common. In this scheme, whenever
some pages need to be loaded into the main memory for execution and the memory is not
available for that many pages, then instead of refusing to load the pages, the OS searches for
the areas of RAM that have been least recently used or not referenced at all and copies them
into secondary memory to make space for the new pages in main memory.
Since this whole procedure happens automatically, it makes the computer appear to have
unlimited RAM.
Demand Paging
Demand Paging is a popular method of virtual memory management. In demand paging, the
pages of a process which are least used get stored in the secondary memory.
A page is copied into the main memory when it is demanded, i.e., when a page fault occurs.
There are various page replacement algorithms which are used to determine the pages which
will be replaced; we will discuss each of them later in detail.
Let us assume two processes, P1 and P2, each containing 4 pages of 1 KB. The main memory
contains 8 frames of 1 KB each. The OS resides in the first two partitions. The third partition
holds the 1st page of P1, and the other frames are likewise shown as filled with different pages
of the processes in the main memory.
The page tables of both processes are 1 KB each and therefore each fits in one frame. The page
tables of both processes contain various information that is also shown in the image.
The CPU contains a register which holds the base address of the page table, which is 5 in the
case of P1 and 7 in the case of P2. This page table base address is added to the page number
of the logical address to locate the corresponding page table entry.
Demand Paging in OS
According to the concept of Virtual Memory, in order to execute a process, only a part of the
process needs to be present in the main memory, which means that only a few of its pages will
be present in the main memory at any time.
However, deciding which pages need to be kept in the main memory and which need to be
kept in the secondary memory is difficult, because we cannot say in advance which page a
process will require at a particular time.
Therefore, to overcome this problem, a concept called Demand Paging is introduced. It
suggests keeping all pages in the secondary memory until they are required. In other words,
do not load any page into the main memory until it is required.
Whenever a page is referred to for the first time, it has to be fetched from the secondary
memory. After that, it may or may not be present in the main memory, depending upon the
page replacement algorithm, which will be covered later in this tutorial.
What is a Page Fault?
If the referred page is not present in the main memory, then there is a miss; this is called a
page miss or page fault.
The CPU has to fetch the missed page from the secondary memory. If the number of page
faults is very high, then the effective access time of the system becomes very high.
What is Thrashing?
If the number of page faults is equal to the number of referred pages, or the number of page
faults is so high that the CPU remains busy just reading pages from the secondary memory,
then the effective access time becomes roughly the time taken by the CPU to read one word
from the secondary memory, which is very high. This situation is called thrashing.
If the page fault rate is p (expressed as a fraction), the time taken in getting a page from the
secondary memory and restarting is S (the page fault service time), and the memory access
time is ma, then the effective access time can be given as:
Effective Access Time = (1 - p) x ma + p x S
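As a quick illustration of the formula, the Python snippet below plugs in assumed values for the page fault rate, the page fault service time and the memory access time; none of these numbers come from the text.

    pf = 0.10          # page fault rate (10%), assumed
    s  = 8_000_000     # page fault service time in nanoseconds (8 ms), assumed
    ma = 200           # main memory access time in nanoseconds, assumed

    effective_access_time = (1 - pf) * ma + pf * s
    print(f"Effective access time = {effective_access_time:.0f} ns")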
Inverted Page Table
The Inverted Page Table is a global page table maintained by the Operating System for all
processes. In an inverted page table, the number of entries is equal to the number of frames
in the main memory. It can be used to overcome the drawbacks of the conventional per-process
page table.
In a conventional page table, space is always reserved for a page regardless of whether it is
present in the main memory or not. This is simply a waste of memory if the page is not
present.
We can avoid this waste by inverting the page table and saving details only for the pages
which are present in the main memory. The frame numbers are the indices, and the
information saved in each entry is the Process ID and the page number.
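A minimal Python sketch of this idea is shown below. The frame contents are illustrative assumptions that loosely follow the earlier P1/P2 example; translation now means searching the table for a matching (process ID, page number) pair.

    # One entry per physical frame: (process id, page number) currently in that frame.
    inverted_page_table = [
        ("OS", 0), ("OS", 1),      # frames 0-1 hold the operating system
        ("P1", 0), ("P2", 2),      # remaining contents are illustrative
        ("P1", 3), ("P1", 1),
        ("P2", 0), ("P2", 1),
    ]

    def frame_of(pid, page):
        # Look up which frame (if any) holds the given page of the given process.
        for frame, entry in enumerate(inverted_page_table):
            if entry == (pid, page):
                return frame
        return None                # not in main memory -> page fault

    print(frame_of("P2", 1))       # prints 7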
Page Replacement Algorithms in Operating Systems (OS)
First In First Out (FIFO) Page Replacement Algorithm
This is the first basic page replacement algorithm. Its behaviour depends on the number of
frames available. Pages are brought into the frames as they are accessed, and the fixed number
of frames fills up with the first pages that arrive; this loading on demand is what Demand
Paging provides. The real problem starts once all the frames are full.
After the frames are filled, the next page in the reference string tries to enter a frame. If that
page is already present in one of the allocated frames, there is no problem.
If the page to be searched is found among the frames, this is known as a Page Hit.
If the page to be searched is not found among the frames, this is known as a Page Fault.
When a Page Fault occurs and all frames are full, the First In First Out Page Replacement
Algorithm comes into the picture.
The First In First Out (FIFO) Page Replacement Algorithm removes the page that has been in
a frame the longest. In other words, the oldest resident page is removed, and the new page that
is waiting to occupy a frame is brought in by First In First Out replacement.
Let us understand the working of the First In First Out Page Replacement Algorithm with the
help of an example.
Example:
Points to Remember
Reference String:
Number of Page Hits = 8
Number of Page Faults = 12
The ratio of Page Hits to Page Faults = 8 : 12 = 2 : 3 ≈ 0.67
The Page Hit Percentage = 8 * 100 / 20 = 40%
The Page Fault Percentage = 100 - Page Hit Percentage = 100 - 40 = 60%
Explanation
First, fill the frames with the initial pages. After the frames are filled, we need to create space
in the frames for each new page to occupy. With the First In First Out Page Replacement
Algorithm we remove the page that is the oldest among the resident pages. By removing the
oldest page, we make room for the new page to occupy the space freed by the First In First
Out Page Replacement Algorithm.
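The sketch below simulates FIFO replacement in Python. The reference string and frame count are illustrative only, since the reference string used in the example above is not reproduced here; the function simply reports hits and faults for whatever string it is given.

    from collections import deque

    def fifo(reference_string, frame_count):
        frames, queue = set(), deque()
        hits = faults = 0
        for page in reference_string:
            if page in frames:
                hits += 1                        # page hit
            else:
                faults += 1                      # page fault
                if len(frames) == frame_count:   # all frames full: evict the oldest page
                    frames.discard(queue.popleft())
                frames.add(page)
                queue.append(page)
        return hits, faults

    print(fifo([6, 1, 1, 2, 0, 3, 4, 6, 0], 3))  # illustrative reference string, 3 frames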
Optimal Page Replacement Algorithm
This is the second basic page replacement algorithm. As with FIFO, the fixed number of
frames is filled on demand as pages are accessed, and the problem starts once all the frames
are full: a reference to a page already in a frame is a Page Hit, and a reference to a page not
present in any frame is a Page Fault.
When a Page Fault occurs and all frames are full, the Optimal Page Replacement Algorithm
comes into the picture.
The Optimal Page Replacement Algorithm works on a certain principle. The principle is:
Replace the page which will not be used for the longest period of time in the future.
This principle means that after all the frames are filled, look at the future pages which are to
occupy the frames, check which of the resident pages are referenced again, and choose the
resident page whose next use lies farthest in the future (or which is never used again).
Example:
Now we need to bring page 0 into a frame by removing one page from the frames.
From the future subsequence 0, 3, 4, 6, 0, 2, 1 we can see that, among the resident pages, 1 is
the one whose next use occurs last.
So page 0 can be placed in a frame by removing page 1 from the frames.
Let us understand the working of the Optimal Page Replacement Algorithm with the help of
an example.
Example:
Points to Remember
Reference String:
Explanation
First, fill the frames with the initial pages. After the frames are filled, we need to create space
in the frames for each new page to occupy.
Here, we first fill the empty frames with the incoming pages. The problem occurs only when
there is no free frame left for an incoming page. We already know that we replace the page
which will not be used for the longest period of time in the future.
A question arises: what if a page that is in a frame never appears again in the reference string?
Here, we can see that page number 5 is not present in the remaining Reference String, but it is
present in a frame. Since page 5 is never referenced again, it is removed when space is
required, and another page can occupy its position.
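A small Python sketch of the Optimal policy is given below; the reference string and frame count are illustrative assumptions. On a fault with full frames it evicts the resident page whose next use lies farthest in the future, treating pages that never appear again as the best victims.

    def optimal(reference_string, frame_count):
        frames, hits, faults = [], 0, 0
        for i, page in enumerate(reference_string):
            if page in frames:
                hits += 1
                continue
            faults += 1
            if len(frames) < frame_count:        # free frame available
                frames.append(page)
                continue
            future = reference_string[i + 1:]
            # Distance to the next use of a resident page; "never used again" wins.
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
        return hits, faults

    print(optimal([6, 1, 1, 2, 0, 3, 4, 6, 0], 3))  # illustrative values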
Least Recently Used (LRU) Page Replacement Algorithm
This is the last basic page replacement algorithm. As with the previous algorithms, the fixed
number of frames is filled on demand as pages are accessed: a reference to a page already in a
frame is a Page Hit, and a reference to a page not present in any frame is a Page Fault.
When a Page Fault occurs and all frames are full, the Least Recently Used (LRU) Page
Replacement Algorithm comes into the picture.
The Least Recently Used (LRU) Page Replacement Algorithm works on a certain principle.
The principle is:
Replace the page which has not been used for the longest period of time in the past, i.e., the
least recently used page.
Example:
6, 1, 1, 2, 0, 3, 4, 6, 0
The pages with page numbers 6, 1 and 2 are occupying the frames.
Now, we need to look back into the past to check which resident page was used least recently
and can therefore be replaced.
Let us understand the working of the Least Recently Used (LRU) Page Replacement
Algorithm with the help of an example.
Example:
Reference String:
Explanation
First, fill the frames with the initial pages. After the frames are filled, we need to create space
in the frames for each new page to occupy.
Here, we first fill the empty frames with the incoming pages. The problem occurs only when
there is no free frame left for an incoming page. We already know that we replace the page
which has not been used for the longest period of time in the past, that is, the page whose most
recent use lies farthest back in the past.
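The following Python sketch simulates LRU using the short reference string shown earlier (6, 1, 1, 2, 0, 3, 4, 6, 0); the frame count of 3 is an assumption. On a fault with full frames it evicts the resident page with the oldest last-use time stamp.

    def lru(reference_string, frame_count):
        frames, last_used = [], {}
        hits = faults = 0
        for time, page in enumerate(reference_string):
            if page in frames:
                hits += 1
            else:
                faults += 1
                if len(frames) == frame_count:
                    # Victim = resident page whose last use is farthest in the past.
                    victim = min(frames, key=lambda p: last_used[p])
                    frames.remove(victim)
                frames.append(page)
            last_used[page] = time               # record the most recent use
        return hits, faults

    print(lru([6, 1, 1, 2, 0, 3, 4, 6, 0], 3))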
Disk Structure in OS
A Hard Disk is a secondary storage device. It is a type of electromechanical device: it stores
and retrieves digital data using magnetic storage, and consists of rigid, rapidly rotating
platters coated with magnetic material.
Disk controller: the electronics that control the drive and transfer data between the disk and
the rest of the system.
Cylinder: the set of tracks located at the same position on every platter surface.
Sector: the smallest addressable subdivision of a track in which data is stored.
Some important parameters used to measure the performance of a hard disk are as follows.
Seek time
● The time required to position the read/write head over the desired track; it includes the
initial startup time and the time required to traverse the tracks to be crossed once the access
arm is up to speed.
Rotational latency
● The time required for the requested sector to come under the read/write head.
● Rotational latency is generally half of the time taken by one full rotation.
Data transfer rate is defined as the amount of data transferred per unit time, for example 30
MB/sec.
Data transfer time is the total time taken to transfer a specific amount of data from the disk; it
depends on the data transfer rate of the disk.
Average Access Time = Seek Time + Rotational Latency + Data Transfer Time
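The snippet below applies the formula with assumed timings (a 7200 RPM drive gives a full rotation time of about 8.33 ms, so the average rotational latency is roughly half of that); the seek and transfer times are illustrative.

    seek_time          = 5.0                     # ms, assumed average seek time
    rotation_time      = 60_000 / 7200           # ms per full rotation at 7200 RPM
    rotational_latency = rotation_time / 2       # on average, half a rotation
    transfer_time      = 1.0                     # ms to transfer the requested data, assumed

    average_access_time = seek_time + rotational_latency + transfer_time
    print(f"Average access time = {average_access_time:.2f} ms")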
The list of the various disk scheduling algorithms is given below. Each algorithm has its own
advantages and disadvantages, and the limitations of each algorithm led to the evolution of
the next one.
FCFS Scheduling
FCFS (First Come First Serve) is the simplest disk scheduling algorithm. It services the I/O
requests in the order in which they arrive. There is no starvation in this algorithm; every
request is serviced.
Disadvantages
● The seek time is not optimized; the head may have to move back and forth across the disk,
resulting in a large total head movement.
Example
Consider the following disk request sequence for a disk with 100 tracks: 45, 21, 67, 90, 4, 50,
89, 52, 61, 87, 25.
The head pointer starts at 50 and is moving in the left direction. Find the number of head
movements in cylinders using FCFS scheduling.
Solution
Number of cylinders moved
= (50-45) + (45-21) + (67-21) + (90-67) + (90-4) + (50-4) + (89-50) + (89-52) + (61-52) + (87-61) + (87-25)
= 5 + 24 + 46 + 23 + 86 + 46 + 39 + 37 + 9 + 26 + 62
= 403
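The calculation above can be reproduced with a few lines of Python: total head movement under FCFS is just the sum of the distances between consecutive serviced requests.

    def fcfs_movement(requests, head):
        total = 0
        for track in requests:
            total += abs(track - head)           # distance to the next request in arrival order
            head = track
        return total

    requests = [45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25]
    print(fcfs_movement(requests, head=50))      # prints 403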
SSTF Scheduling
The Shortest Seek Time First (SSTF) algorithm selects the disk I/O request which requires
the least disk arm movement from the current position of the head, regardless of the
direction. It reduces the total seek time as compared to FCFS.
It allows the head to move to the closest track in the service queue.
Disadvantages
● It may cause starvation of requests that lie far away from the current head position.
● Finding the closest request before every service adds computational overhead.
Example
Consider the following disk request sequence for a disk with 100 tracks
The head pointer starts at 50. Find the number of head movements in cylinders using SSTF
scheduling.
Solution:
Number of cylinders = 5 + 7 + 9 + 6 + 20 + 2 + 1 + 65 + 4 + 17 = 136
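A generic SSTF sketch in Python is shown below. Because the request sequence for this example is not reproduced above, the request list reused here is the one from the FCFS example, for illustration only; the function works for any pending queue.

    def sstf_movement(requests, head):
        pending, total = list(requests), 0
        while pending:
            # Pick the pending request closest to the current head position.
            nearest = min(pending, key=lambda track: abs(track - head))
            total += abs(nearest - head)
            head = nearest
            pending.remove(nearest)
        return total

    print(sstf_movement([45, 21, 67, 90, 4, 50, 89, 52, 61, 87, 25], head=50))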
Scan Algorithm
It is also called the Elevator Algorithm. In this algorithm, the disk arm moves in a particular
direction till the end of the disk, servicing all the requests coming in its path, and then it
turns back and moves in the reverse direction, servicing the requests coming in its path.
It works the way an elevator works: the elevator moves in one direction all the way to the
last floor in that direction and then turns back.
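A minimal Python sketch of SCAN is given below. The request list, head position and disk size are illustrative assumptions (the request sequence of the example that follows is not reproduced in the text); it returns the total head movement when every request is serviced in one full sweep.

    def scan_movement(requests, head, direction="left", last_cylinder=99):
        # Move to the end of the disk in the current direction, then sweep back
        # as far as the farthest request on the other side of the starting position.
        if direction == "left":
            farthest_right = max((t for t in requests if t > head), default=None)
            return head + (farthest_right if farthest_right is not None else 0)
        farthest_left = min((t for t in requests if t < head), default=None)
        return (last_cylinder - head) + (
            (last_cylinder - farthest_left) if farthest_left is not None else 0)

    print(scan_movement([82, 17, 43, 90, 11, 65], head=54, direction="left"))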
Example
Consider the following disk request sequence for a disk with 100 tracks
The head pointer starts at 54 and is moving in the left direction. Find the number of head
movements in cylinders using SCAN scheduling.
C-SCAN algorithm
In the C-SCAN algorithm, the arm of the disk moves in a particular direction servicing
requests until it reaches the last cylinder; it then jumps to the last cylinder at the opposite end
without servicing any requests on the way, and starts moving in the same direction again,
servicing the remaining requests.
Example
Consider the following disk request sequence for a disk with 100 tracks
The head pointer starts at 54 and is moving in the left direction. Find the number of head
movements in cylinders using C-SCAN scheduling.
Look Scheduling
It is similar to the SCAN scheduling algorithm, except that in this algorithm the arm of the
disk stops moving inwards (or outwards) when no more requests exist in that direction. It
overcomes the overhead of the SCAN algorithm, which forces the disk arm to move all the
way to the end of the disk in one direction regardless of whether any request exists in that
direction or not.
Example
Consider the following disk request sequence for a disk with 100 tracks
The head pointer starts at 54 and is moving in the left direction. Find the number of head
movements in cylinders using LOOK scheduling.
C Look Scheduling
The C-LOOK algorithm is similar to the C-SCAN algorithm to some extent. In this algorithm,
the arm of the disk moves outwards servicing requests until it reaches the highest requested
cylinder; it then jumps to the lowest requested cylinder without servicing any requests on the
way, and again starts moving outwards, servicing the remaining requests.
It differs from C-SCAN in that C-SCAN forces the disk arm to move all the way to the last
cylinder regardless of whether any request is to be serviced on that cylinder or not.
Example
Consider the following disk request sequence for a disk with 200 tracks: 98, 137, 122, 183,
14, 133, 65, 78.
The head pointer starts at 54 and is moving in the left direction. Find the number of head
movements in cylinders using C-LOOK scheduling.
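The sketch below applies C-LOOK to the request sequence given above (head at 54, moving left): it services the requests below 54 down to the lowest one, jumps to the highest pending request, and continues moving left through the rest. Whether the jump itself counts as head movement varies between textbooks; this sketch includes it.

    def c_look(requests, head, direction="left"):
        left  = sorted(t for t in requests if t < head)
        right = sorted(t for t in requests if t >= head)
        # Service one side completely, jump to the far end of the other side,
        # then keep moving in the original direction.
        order = left[::-1] + right[::-1] if direction == "left" else right + left
        total, position = 0, head
        for track in order:
            total += abs(track - position)
            position = track
        return order, total

    print(c_look([98, 137, 122, 183, 14, 133, 65, 78], head=54, direction="left"))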
A file can be defined as a data structure which stores a sequence of records. Files are stored
in a file system, which may exist on a disk or in the main memory. Files can be simple (plain
text) or complex (specially formatted).
A collection of files is known as a Directory. The collection of directories at different levels
is known as the File System.
Attributes of the File
1. Name
Every file carries a name by which it is recognized in the file system. One directory cannot
have two files with the same name.
2. Identifier
Along with its name, each file has its own extension which identifies the type of the file. For
example, a text file has the extension .txt and a video file can have the extension .mp4.
3. Type
In a file system, files are classified into different types such as video files, audio files, text
files, executable files, etc.
4. Location
In the file system, there are several locations in which files can be stored. Each file carries
its location as one of its attributes.
5. Size
The size of the file is one of its most important attributes. By the size of the file, we mean the
number of bytes occupied by the file in memory.
6. Protection
The administrator of the computer may want different protections for different files.
Therefore, each file carries its own set of permissions for different groups of users.
7. Time and Date
Every file carries a time stamp which contains the time and date at which the file was last
modified.
Sequential Access
Most operating systems access files sequentially. In other words, most files need to be
accessed sequentially by the operating system.
In sequential access, the OS reads the file word by word. A pointer is maintained which
initially points to the base address of the file. If the user wants to read the first word of the
file, the pointer provides that word to the user and advances by one word. This process
continues till the end of the file.
Modern systems do provide the concepts of direct access and indexed access, but the most
common method is sequential access, because most files, such as text files, audio files and
video files, need to be accessed sequentially.
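A sketch of sequential access in Python is shown below; the file name and the 4-byte "word" size are assumptions. The file object keeps the read pointer, which advances automatically after every read.

    with open("notes.txt", "rb") as f:    # hypothetical file
        while True:
            word = f.read(4)              # read one "word" (4 bytes here, assumed)
            if not word:                  # end of file reached
                break
            print(word)                   # hand each word to the user in order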
Direct Access
Direct access is mostly required in the case of database systems. In most cases, we need
filtered information from the database, and sequential access can be very slow and
inefficient in such cases.
Suppose every block of the storage stores 4 records and we know that the record we need is
stored in the 10th block. In that case, sequential access is not used, because it would traverse
all the preceding blocks in order to access the needed record.
Direct access gives the required result even though the operating system has to perform
some complex tasks, such as determining the desired block number. It is generally used in
database applications.
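The sketch below shows the case described above in Python: with 4 records per block and the needed record in the 10th block, the byte offset of that block is computed and read directly instead of scanning the earlier blocks. The record size and file name are assumptions.

    RECORD_SIZE = 128                          # bytes per record, assumed
    RECORDS_PER_BLOCK = 4
    BLOCK_SIZE = RECORD_SIZE * RECORDS_PER_BLOCK

    with open("records.db", "rb") as f:        # hypothetical data file
        block_number = 9                       # the 10th block, using 0-based numbering
        f.seek(block_number * BLOCK_SIZE)      # jump straight to that block
        block = f.read(BLOCK_SIZE)             # read the 4 records it contains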
Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned to a group of certain
records, and a particular record can be accessed by its index. The index is nothing but the
address of the record in the file.
With indexed access, searching in a large database becomes very quick and easy, but we
need some extra space in memory to store the index values.
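A minimal sketch of indexed access follows; the index contents, record size and file name are assumptions. A lookup costs one probe of the in-memory index plus one direct read at the stored offset.

    RECORD_SIZE = 128                                   # bytes per record, assumed
    index = {"A1001": 0, "A1002": 128, "A1003": 256}    # key -> byte offset (illustrative)

    def read_record(key, path="records.db"):            # hypothetical data file
        with open(path, "rb") as f:
            f.seek(index[key])                          # jump via the index
            return f.read(RECORD_SIZE)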