OS - Unit-3
UNIT-3
Memory Management
Introduction
In a multiprogramming computer, the Operating System resides in a part of memory and the rest is
used by multiple processes.
The task of subdividing the memory among different processes is called Memory Management.
Memory management is a method in the operating system to manage operations between main
memory and disk during process execution.
The main aim of memory management is to achieve efficient utilization of memory.
Logical Address Space: An address generated by the CPU is known as a "Logical Address".
It is also known as a Virtual address.
Logical address space can be defined as the size of the process.
A logical address can be changed.
Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory
address register of the memory) is commonly known as a "Physical Address".
A Physical address is also known as a Real address.
The set of all physical addresses corresponding to these logical addresses is known as Physical
address space.
A physical address is computed by the MMU. The run-time mapping from virtual to physical addresses
is done by a hardware device called the Memory Management Unit (MMU).
The physical address always remains constant.
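The base-plus-limit translation performed by the MMU can be sketched in Python (the base and limit values here are illustrative, not from any particular system):

```python
# Minimal sketch of MMU-style address translation using a relocation
# (base) register and a limit register. Values are illustrative.

def mmu_translate(logical_addr, base, limit):
    """Map a CPU-generated logical address to a physical address."""
    if logical_addr < 0 or logical_addr >= limit:
        # A real MMU would trap to the operating system here.
        raise MemoryError("trap: logical address outside process space")
    return base + logical_addr  # physical address seen by the memory unit

# A process loaded at physical address 14000 with a 3000-byte space:
print(mmu_translate(346, base=14000, limit=3000))   # 14346
```

Note that the logical address never changes, while the physical address depends on where the process happens to be loaded.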
Static Loading: Static Loading is basically loading the entire program into a fixed address. It requires
more memory space.
Advantages:
Fast execution — no delays caused by loading code during runtime.
Predictable memory layout.
Simpler loader logic.
Disadvantages:
Higher memory usage (everything is loaded, even unused functions).
Less flexibility — the program can’t load new features or plugins during execution
Dynamic Loading: In static loading, the entire program and all data of a process must be in physical
memory for the process to execute, so the size of a process is limited to the size of physical memory.
To improve memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded
until it is called. All routines reside on disk in a relocatable load format. One of the advantages of
dynamic loading is that an unused routine is never loaded. Dynamic loading is particularly useful
when a large amount of code is needed to handle infrequently occurring cases.
Advantages:
Saves memory — only what’s needed gets loaded.
Faster startup time.
Supports modular and plugin-based programs.
Enables updates or changes at runtime.
Disadvantages:
Slight performance overhead during execution when new code is loaded.
More complex memory management — must track what’s been loaded and where.
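As a rough analogy in Python, the standard importlib module can defer loading a module until a routine in it is first needed, mirroring the "load on first call" idea above (the module and function names used here are just examples):

```python
# Hedged sketch: on-demand loading via Python's standard importlib.
# The json module is not loaded until a routine in it is first called.
import importlib
import sys

def lazy_call(module_name, func_name, *args):
    """Load module_name only when a routine in it is actually called."""
    if module_name not in sys.modules:          # not yet in memory
        importlib.import_module(module_name)    # load it on demand
    return getattr(sys.modules[module_name], func_name)(*args)

print(lazy_call("json", "dumps", {"a": 1}))     # prints {"a": 1}
```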
To perform a linking task a linker is used. A linker is a program that takes one or more object files
generated by a compiler and combines them into a single executable file.
Static Linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only static
linking, in which system language libraries are treated like any other object module.
Advantages:
Faster execution (no need to resolve symbols at run time).
No dependency on external files once compiled.
Simpler memory management at run time.
Disadvantages:
Larger executable size.
Multiple programs using the same library will each have their own copy — wastes memory.
Updating a library means recompiling every program that uses it.
Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic
linking, a "stub" is included in the executable image for each library routine reference. A stub is a
small piece of code. When the stub is executed, it checks whether the needed routine is already in
memory; if not, the program loads the routine into memory.
Advantages:
Smaller executable size.
Memory-efficient — multiple programs share the same library in RAM.
Easy to update shared libraries without recompiling the entire app.
Disadvantages:
Slightly slower start-up time due to symbol resolution.
If the library changes its interface, programs may break (dependency hell).
Needs careful memory protection to avoid overwriting shared areas.
Swapping
Swapping is a technique in which a process can be temporarily moved out of main memory to a
backing store (disk) and later brought back into memory for continued execution. In the simplest
single-partition approach:
The operating system keeps track of the first and last locations available for the
allocation of the user program.
The operating system is loaded either at the bottom or at the top of memory.
Interrupt vectors are often loaded in low memory, therefore it makes sense to load the
operating system in low memory.
Sharing of data and code does not make much sense in a single-process environment.
The operating system can be protected from user programs with the help of a fence register.
A memory-partition scheme with a fixed number of partitions was introduced to support
multiprogramming. This scheme is based on contiguous allocation:
Each partition is a block of contiguous memory.
Memory is partitioned into a fixed number of partitions.
Each partition is of fixed size.
Example: As shown in the figure, memory is partitioned into five regions; one region is
reserved for the operating system and the remaining four partitions are for user programs.
Memory Allocation
The main memory must accommodate both the operating system and the various user processes. We
need to allocate different parts of the main memory in the most efficient way possible.
The main memory is usually divided into two partitions: one for the resident operating system, and one
for the user processes. We may place the operating system in either low memory or high memory. The
major factor affecting this decision is the location of the interrupt vector. Since the interrupt vector is
often in low memory, programmers usually place the operating system in low memory as well.
There are following two ways to allocate memory for user processes:
1. Contiguous memory allocation
2. Non-contiguous memory allocation
In contiguous memory allocation, memory is divided into two main parts: one for the operating
system and one for user processes. When a program (process) is loaded into memory, it gets a single,
continuous block of memory.
The operating system decides how to allocate memory to different processes that are waiting to be
loaded. This method makes it simple to manage, as each process is placed in one continuous segment.
However, it can lead to problems if there isn't enough contiguous space for a process, even if there is
enough total free memory scattered across the system.
In contiguous memory allocation, a process is given a continuous block of memory. This can be
done in two ways:
1. Fixed-size Partition Scheme: Memory is divided into partitions of fixed sizes, and each
process is allocated one partition.
2. Variable-size Partition Scheme: Memory is divided into partitions of varying sizes, and
processes are allocated a partition based on their size.
1. Fixed-Size Partitioning
Fixed-Size Partitioning is a memory management method where the computer divides its memory
into equal-sized blocks. Each block holds one process (program).
Key Points:
1. Contiguous Allocation: Each process gets one continuous block of memory.
2. Fixed-Size Blocks: Memory is split into partitions of the same size that don’t change.
3. Internal Fragmentation: If a process is smaller than its allocated block, the extra space in
that block is wasted (called internal fragmentation).
Advantages:
Simplicity: Easy to set up and manage.
Easy Tracking: It's simple to keep track of which blocks are free or occupied.
Multiprogramming: Allows multiple programs to run at the same time.
Disadvantages:
1. Can't Fit Larger Processes: If a process needs more memory than a partition, it can't run,
even if there's enough total free memory.
2. Limited Multiprogramming: The number of processes that can run is limited by the number
of partitions.
3. Internal Fragmentation: Wasted space occurs when a process is smaller than its allocated
partition.
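A minimal Python sketch of fixed-size partitioning, assuming a hypothetical 100-unit partition size, shows how internal fragmentation and the "can't fit" case arise:

```python
# Sketch of fixed-size partitioning. Each process occupies one whole
# partition; the unused tail of the partition is internal fragmentation.
# The partition size and process sizes are illustrative.

PARTITION_SIZE = 100  # every partition is the same fixed size

def place(processes):
    """Return (name, allocated size, wasted space or reason) per process."""
    placements = []
    for name, size in processes:
        if size > PARTITION_SIZE:
            placements.append((name, None, "too large to fit"))
        else:
            waste = PARTITION_SIZE - size   # internal fragmentation
            placements.append((name, size, waste))
    return placements

for p in place([("P1", 70), ("P2", 100), ("P3", 130)]):
    print(p)
# P1 wastes 30 units, P2 wastes 0, P3 cannot run at all
```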
2. Variable-Size Partition
The Variable-Size Partition allocates memory based on the specific needs of each process, rather
than using fixed-size blocks.
Key Points:
1. No Fixed Blocks: Memory is not divided into fixed sizes. Each process gets the exact amount
of memory it needs.
2. Efficient Allocation: Processes are allocated memory only when they request it, and they get
exactly what they need.
3. Dynamic Block Sizes: Larger processes get larger blocks, and smaller processes get smaller
blocks.
Advantages:
1. No Internal Fragmentation: Each process gets exactly the memory it needs, so there’s no
wasted space.
2. Flexible Capacity: The number of processes that can run depends on available memory, not a
fixed number of partitions.
3. Accommodation of Large Processes: Large processes can be allocated enough space as long
as there’s enough available memory.
Disadvantages:
1. Complex Implementation: It's harder to manage because memory sizes are dynamic and not
fixed.
2. Tracking Overhead: The system needs to keep track of which memory blocks are used and
how much space is available, which can be complex and costly.
When a program (process) needs memory, the system finds a free space (hole) to store it. There are different
methods to choose which hole to use.
1. First-Fit
The Operating System (OS) scans memory from the beginning and picks the first hole that is big
enough for the process.
The process is placed in that hole, even if there is extra space left.
Pros:
Fast allocation because it doesn’t check all holes.
Simple to implement.
Cons:
May leave many small unused spaces (memory fragmentation).
2. Best-Fit
The OS searches all holes and picks the smallest one that can fit the process.
This method reduces wasted space because it finds the closest fit.
Pros:
Minimizes unused space, making memory usage more efficient.
Cons:
Takes longer because it searches all holes to find the best match.
Can create many tiny unusable holes over time.
3. Worst-Fit
The OS searches all holes and picks the largest one to store the process.
This method leaves a large chunk of space, which can be useful for bigger processes later.
Pros:
Helps keep large free spaces available.
Cons:
Wastes memory if the remaining space after allocation is too small for other processes.
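The three strategies can be compared with a short Python sketch over the same illustrative list of hole sizes:

```python
# Comparison sketch of first-fit, best-fit, and worst-fit hole
# selection. Hole sizes and the request size are illustrative.

def first_fit(holes, size):
    """Return the index of the first hole big enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest adequate hole, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the index of the largest hole, else None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 -> the 500-unit hole (first big enough)
print(best_fit(holes, 212))   # 3 -> the 300-unit hole (closest fit)
print(worst_fit(holes, 212))  # 4 -> the 600-unit hole (largest)
```

Note that best-fit and worst-fit must scan all holes, which is why they are slower than first-fit.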
Fragmentation
When programs (processes) are added and removed from memory, the free space gets divided into
small scattered pieces. Over time, these small free spaces become too tiny to fit new programs, even
though the total free memory is enough. This problem is called Fragmentation, and it causes wasted
memory that cannot be used.
1. External Fragmentation: The total memory space exists to satisfy a request, but it is not contiguous.
This wasted space not allocated to any partition is called external fragmentation. External
fragmentation can be reduced by compaction. The goal is to shuffle the memory contents to place all
free memory together in one large block.
Compaction is possible only if relocation is dynamic, and is done at execution time.
2. Internal Fragmentation: The allocated memory may be slightly larger than requested memory. The
wasted space within a partition is called internal fragmentation. One method to reduce internal
fragmentation is to use partitions of different size.
NOTE:
External fragmentation happens outside a process (because memory is broken into small gaps).
Internal fragmentation happens inside a process (because extra allocated space is unused).
Both lead to memory waste.
Paging in OS
Main memory (RAM) is divided into equal-sized blocks called frames. A program is also
divided into equal-sized parts called pages (the same size as frames). When a program runs, its
pages are loaded into any available frames in memory (not necessarily next to each other).
This helps use memory space efficiently.
Paging is a memory management technique that helps a computer access data faster. Instead of
loading an entire program into memory, the operating system divides it into small fixed-size
parts called "pages." When a program needs a page, the OS quickly brings it from storage to
main memory. This allows the program to use non-contiguous memory, making efficient use
of available space.
If a page is not currently in memory when needed, the OS retrieves it from secondary storage
(like a hard drive). This process helps prevent memory wastage and ensures smooth program
execution.
Since pages and frames are of the same size, they can be easily mapped to each other, allowing for
quick and efficient memory allocation.
NOTE:
The logical address is divided into two parts:
o Page number (p): Identifies which page the data is on.
o Page offset (d): Tells the exact location within that page.
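When the page size is a power of two, this split is a simple bit operation, as the Python sketch below shows (the page size and address are chosen for illustration):

```python
# Splitting a logical address into page number and offset.
# With a power-of-two page size, this is just shifting and masking.

PAGE_SIZE = 1024            # 2**10 bytes, so the offset uses 10 bits
OFFSET_BITS = 10

logical = 5000
p = logical >> OFFSET_BITS          # page number: 5000 // 1024 = 4
d = logical & (PAGE_SIZE - 1)       # offset:      5000 %  1024 = 904
print(p, d)                         # prints: 4 904
```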
This means pages can be stored anywhere in physical memory, not necessarily in order.
Example: a new process arrives that needs four pages (page 0, page 1, page 2, page 3).
These pages need to be stored in four available frames (in the figure, for instance,
Page 3 → Frame 20).
The free-frame list is updated by removing the allocated frames.
A page table is a data structure used by the OS to map logical addresses (used by the program) to
physical addresses (actual locations in RAM).
Example Calculation:
If page size = 4 bytes and the CPU generates logical address 13, then:
page number p = 13 / 4 = 3, and page offset d = 13 mod 4 = 1.
If the page table maps page 3 to frame 2, the physical address is 2 × 4 + 1 = 9.
Frame Table
The OS maintains a frame table to keep track of physical memory usage.
Each frame (a fixed-size block in RAM) can either be free or allocated to a process.
The frame table records which frames are free and which belong to which process.
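A minimal Python sketch of page-table translation, with hypothetical page-table and frame-table contents (the mappings are illustrative, matching the 4-byte page size used above):

```python
# Sketch: a per-process page table plus an OS frame table.
# All table contents below are hypothetical, for illustration only.

PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}                 # page -> frame
frame_table = {0: None, 1: "P1", 2: "P1", 3: None,    # frame -> owner
               5: "P1", 6: "P1"}                      # (None = free)

def translate(logical_addr):
    """Translate a logical address using the page table."""
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number, offset
    frame = page_table[p]                    # page-table lookup
    return frame * PAGE_SIZE + d             # frame base + offset

print(translate(13))   # page 3, offset 1 -> frame 2 -> address 9
```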
Advantages-
The advantages of paging are-
It allows storing parts of a single process in a non-contiguous fashion.
It solves the problem of external fragmentation.
Disadvantages-
The disadvantages of paging are-
It suffers from internal fragmentation.
There is an overhead of maintaining a page table for each process.
The time taken to fetch the instruction increases since now two memory accesses are required.
Segmentation in OS
Segmentation is a way for the OS to manage memory by dividing a program into parts called
segments (like code, data, or stack). These segments are of different sizes and placed separately in
memory.
This helps the OS easily keep track of used and free memory, making memory allocation faster. Since
it's non-contiguous, it avoids wasting space (no internal fragmentation).
The CPU generates a logical address consisting of a segment number (s) and an offset (d).
1. The segment number s is used as an index into the segment table.
2. The segment table entry gives the segment's base (its starting physical address) and limit
(its length).
3. The hardware checks that the offset is less than the limit; if it is not, the system traps to
the OS (addressing error).
4. If it's safe, the system adds the base address + offset to get the physical address in main
memory.
So, segmentation helps the CPU find the exact location of data in memory safely and efficiently.
Example:
o Base = 2200
o Limit = 100
Offset 80 is within the limit → OK
So, Physical Address = 2200 + 80 = 2280
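The base/limit check and translation can be sketched in Python using the numbers from the example above (the segment-table contents are illustrative):

```python
# Sketch of segment-based address translation with a base/limit check.
# The segment table entry below matches the worked example in the text.

segment_table = {2: (2200, 100)}   # segment number -> (base, limit)

def seg_translate(s, offset):
    """Translate (segment, offset) to a physical address."""
    base, limit = segment_table[s]
    if offset >= limit:
        # A real MMU would trap to the OS on an addressing error.
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(seg_translate(2, 80))   # 2200 + 80 = 2280
```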
Advantages of Segmentation
Runs different program parts (like code, data) separately, improving efficiency.
Allows parallel execution, so the system responds faster.
Makes better use of the CPU.
Avoids internal memory wastage.
Uses a segment table to keep track of segments.
Keeps sensitive code (like security) separate from data.
Matches how users see programs (as modules).
Users can choose segment size (unlike paging, where hardware decides it).
Disadvantages of Segmentation
Slower performance, as the system checks and allocates memory for each segment.
Needs more resources to manage than simpler methods.
Can cause external fragmentation (free memory scattered in small parts).
Segment table adds extra work and memory use.
Slower access time (needs two memory lookups).
Unequal segment sizes make swapping harder.
Segmented Paging is a memory management technique that combines both segmentation and
paging to use the best of both worlds.
How It Works:
1. Memory is divided in two steps:
o First into segments (like code, data, stack)
o Then each segment is further divided into pages (fixed-size blocks)
2. Virtual Address Structure:
The virtual address is broken into three parts: va = (s, p, d)
o s = segment number (which segment the data belongs to)
o p = page number (which page inside the segment)
o d = offset (exact location inside the page)
4. Translation Process:
o The CPU first uses s to look into the Segment Table and find the base address of that
segment’s Page Table.
o Then it uses p to look into the Page Table and find the frame number (where the page
is in physical memory).
o Finally, it adds the offset d to get the exact physical address.
Registers Involved:
STR (Segment Table Register): Stores base address of Segment Table
PMT (Page Map Table): Contains Page Tables for each segment
Segment Table: the CPU uses s to access the Segment Table and obtain the base address of
that segment's Page Table.
Page Table: once the base address of the page table is known, p (the page number) is used
to find the frame number (where the page is in physical memory).
Frame: each entry in the page table points to a frame (a block in physical memory).
Adder: finally, the offset d is added to the frame's starting address, and we get the physical
address.
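A minimal Python sketch of this three-step (s, p, d) translation, with hypothetical table contents:

```python
# Sketch of segmented paging translation. Each segment has its own
# page table; all table contents and sizes below are illustrative.

PAGE_SIZE = 256
segment_page_tables = {
    0: {0: 3, 1: 7},    # segment 0's page table: page -> frame
    1: {0: 4},          # segment 1's page table
}

def translate(s, p, d):
    page_table = segment_page_tables[s]   # step 1: segment table lookup
    frame = page_table[p]                 # step 2: page table lookup
    return frame * PAGE_SIZE + d          # step 3: add the offset

print(translate(0, 1, 20))   # frame 7 -> 7*256 + 20 = 1812
```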
File System
Files are stored on disk or other storage and do not disappear when a user logs off.
Files have names and are associated with access permission that permits controlled sharing.
Files can be arranged in simple or more complex structures to reflect the relationships between them.
File Attributes
File attributes are pieces of information about a file that the operating system uses to manage it
properly. They help in organizing, securing, and handling files.
Name: This is the name given to the file (e.g., report.docx). It helps users identify and access
the file easily.
Identifier: A unique number (not shown to users) used by the system to keep track of the file.
Type: Tells the system what kind of file it is, like a text file, video file, or software program.
This helps the system know how to open or run the file.
Location: Shows exactly where the file is stored on the storage device (like a map pointing to
the file's place on the disk).
Size: Tells how big the file is, measured in bytes, kilobytes, etc. This helps in knowing how
much space it uses.
Protection: Controls who can read, write, or execute the file. Helps keep the file safe from
unauthorized access.
Time, Date, and User Info: Shows when the file was created, last changed, and by which
user. Useful for tracking changes and system monitoring.
File Operations
File operations are basic actions we perform on files, like making a new file, reading it, or writing into
it. The operating system helps manage these actions.
1. Create Operation
This is used to make a new file.
2. Write Operation
This is used to add or update data in a file.
The system asks for the file name and the data to write.
It writes the data into the file, and the file’s size may increase.
A pointer (marker) keeps track of where the next data should be written.
3. Read Operation
This is used to read data from a file.
The system looks for the file and finds where the needed data is stored.
A read pointer shows where to start reading.
After reading, the pointer moves ahead so it knows where to continue next time.
This saves time and reduces confusion during repeated read/write.
4. Reposition Operation
Also called file seek.
5. Delete Operation
Used to remove a file.
The system finds the file and deletes its entry from the folder (directory).
Frees up space for other files.
6. Truncate Operation
Deletes only the contents of a file, not the file itself.
The file becomes empty, but its name and settings (like permissions) stay the same.
Helpful when you want to clear data but keep the file.
7. Close Operation
Done when you’re finished using a file.
8. Append Operation
Adds new data to the end of a file.
Useful for logs, records, or when you don’t want to change old data.
9. Rename Operation
Used to change the name of a file.
File Types
File types help the Operating System (OS) understand what kind of data is inside a file and how it
should be used. Every file has a type based on what it contains and how it works.
Operating systems like MS-DOS and UNIX support different types of files. These are mainly:
1. Ordinary (Regular) Files
These contain user information such as text, programs, or data, and are the most common type
of file.
2. Directory Files
These are special files that store information about other files.
A directory is like a folder that contains file names, types, sizes, and locations.
It helps to organize and manage files in the system.
Example: A folder named My Documents that contains files like resume.docx, image.png.
3. Special (Device) Files
These represent hardware devices such as disks, terminals, printers, and tapes.
Example:
/dev/sda – a block special file (disk)
File Structure
A file structure needs to be in a predefined format that the operating system understands.
Each file type has an exclusively defined structure.
UNIX Example:
o Treats files as just sequences of bytes.
o The OS handles packing these bytes into blocks automatically.
I/O Operations:
o Reading/writing always happens one block at a time.
Internal Fragmentation:
o If the last block of a file isn't full, the extra space is wasted.
o Bigger blocks = more possible waste.
When a program needs a file, the operating system finds and accesses it using file access methods.
There are 4 types:
Sequential Access Method
Direct Access Method
Indexed Access Method
Indexed Sequential Access Method
Sequential Access Method
Advantages
Easy to implement.
Fast access to the next record using lexicographic order (alphabetical or sorted order).
Disadvantages
Slow if the needed record is not next to the current one.
Inserting a new record may require shifting a large portion of the file.
Direct Access Method
Advantages
Files can be accessed instantly, reducing average access time.
No need to read all previous blocks — you can jump straight to the required one.
File Directories
A directory may contain multiple files and can also have sub-directories inside the main
directory. Information about files is maintained by directories. In Windows, directories are
called folders.
To read the list of all files in a directory, the directory must first be opened; after it has
been read, it must be closed so that the internal table space can be freed up.
1. Single-Level Directory:
Single-Level Directory is the easiest directory structure. There is only one directory in a single level
directory, and that directory is called a root directory. In a single-level directory, all the files
are present in one directory, which makes it easy to understand. In this structure, the user
cannot create subdirectories under the root directory.
Advantages
The implementation of a single-level directory is very easy.
In a single-level directory, if all the files have a small size, then due to this, the searching of
the files will be easy.
In a single-Level directory, the operations such as searching, creation, deletion, and updating
can be performed.
Disadvantages
If the size of the directory is large in a Single-Level Directory, then searching will be tough.
In a single-level directory, we cannot group similar types of files.
Another disadvantage is the possibility of name collision, because two files cannot have the
same name.
The task of choosing a unique file name is therefore a little bit complex.
2. Two-Level Directory
Two-Level Directory is another type of directory structure. In this, it is possible to create an
individual directory for each of the users. There is one master file directory in the two-level
structure that includes an individual directory for every user. At the second level, there is a
separate directory present for each user. Without permission, no user can enter another user's
directory.
3. Tree-Structured Directory
A tree-structured directory is a way to organize files and folders like a family tree. The root directory
is at the top, and it has branches (subdirectories) that lead to files or more folders. Each user has their
own folder and can't change others' files, but they can read public ones. The system administrator has
full control. You can find files by using absolute paths (starting from the root, e.g., /home/user/docs)
or relative paths (from your current location, e.g., docs/file.txt). This system makes it easier to find
and manage files.
4. Acyclic-Graph Directory
Unlike tree directories, files can be shared between multiple folders in an acyclic-graph
directory.
Files can have different "paths" using links (shortcuts). There are two types:
Hard link: The file is physically stored in one place, but multiple names (links) point
to it.
Symbolic link: A shortcut to the file, but the file exists somewhere else.
Deleting files:
Hard link: The file is deleted only when all links to it are removed.
Symbolic link: The link is removed, but the file remains, leaving a "dangling" link
(broken shortcut).
This system helps share files between different directories without making duplicates.
5. General-Graph Directory
The General-Graph directory is another vital type of directory structure. In this type, cycles
are allowed: a directory can be reached through more than one parent directory, and the links
between directories may even form a cycle.
The main issue in the general-graph directory is calculating the total space or size taken by
the directories and the files.
TYPES:
1. Disk layout and partitioning
2. File system organization
3. File allocation methods
4. Directory structure
The directory structure is how files and folders are organized on a computer. It helps keep
everything in order so users and programs can find files easily.
Single-level directory: All files are in one big list.
Two-level directory: One main folder with subfolders inside it.
Tree-structured directory: A complex system of folders inside other folders, like a tree with
branches.
Directories can also have permissions to control who can access or change the files inside them.
File Allocation Methods help organize how files are stored on a hard disk. Their goal is to:
1. Maximize space usage: Store files in a way that uses the disk space efficiently.
2. Reduce fragmentation: Prevent gaps or wasted space that slow down file access.
These methods make it easier and faster to access files on a disk by determining where and how files
are placed.
From the above figure, we can conclude that index block 1 holds all the pointers of the file,
with -1 marking the unused entries of the index block.
The operating system manages the free space in the hard disk. This is known as free space
management in operating systems.
The OS maintains a free space list to keep track of the free disk space. The free space list
consists of all free disk blocks that are not allocated to any file or directory.
For saving a file in the disk, the operating system searches the free space list for the required
disk space and then allocates that space to the file. When a file is deleted, the space allocated
to it is added to the free space list.
The operating system uses various techniques to manage free space and optimize the use of
storage devices.
1. Bitmap or Bit vector – A Bitmap or Bit Vector is a series (collection) of bits where each bit
corresponds to a disk block. Each bit can take one of two values: 0 indicates that the block is
allocated, and 1 indicates a free block. The disk blocks shown in Figure 1 (where green blocks
are allocated) can be represented by a bitmap of 16 bits as:
0000111000000110.
Advantages –
Simple to understand.
Finding the first free block is efficient. It requires scanning the words (a group of 8 bits) in a
bitmap for a non-zero word. (A 0-valued word has all bits 0). The first free block is then found
by scanning for the first 1 bit in the non-zero word.
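A short Python sketch using the 16-bit bitmap from the text (0 = allocated, 1 = free):

```python
# Sketch: scanning the bitmap from the text for free disk blocks.
# Bit i corresponds to disk block i; 0 = allocated, 1 = free.

bitmap = "0000111000000110"

def first_free_block(bm):
    """Return the index of the first free block, or None if full."""
    for i, bit in enumerate(bm):
        if bit == "1":
            return i
    return None

print(first_free_block(bitmap))                    # 4
free = [i for i, b in enumerate(bitmap) if b == "1"]
print(free)                                        # [4, 5, 6, 13, 14]
```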
2. Linked List – In this approach, the free disk blocks are linked together i.e. a free block contains a
pointer to the next free block. The block number of the very first disk block is stored at a separate
location on disk and is also cached in memory.
In Figure-2, the free space list head points to Block 5 which points to Block 6, the next free
block and so on. The last free block would contain a null pointer indicating the end of free list.
A drawback of this method is the I/O required for free space list traversal.
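A minimal Python sketch of such a linked free list, with illustrative block numbers (each free block is modeled as storing the number of the next free block):

```python
# Sketch of a linked free-space list: each free block holds a pointer
# (block number) to the next free block; None marks the end of the list.
# Block numbers below are illustrative.

next_free = {5: 6, 6: 9, 9: 12, 12: None}
free_list_head = 5     # stored at a known location and cached in memory

def all_free_blocks(head):
    """Traverse the list; each step costs one disk read in practice."""
    blocks, b = [], head
    while b is not None:
        blocks.append(b)
        b = next_free[b]
    return blocks

print(all_free_blocks(free_list_head))   # [5, 6, 9, 12]
```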
3. Grouping – This approach stores the addresses of free blocks in the first free block. The first
free block stores the addresses of some, say n, free blocks. Of these n blocks, the first n-1
are actually free, and the last one contains the addresses of the next n free blocks.
An advantage of this approach is that the addresses of a group of free disk blocks can be found
easily.
4. Counting – This approach stores the address of the first free disk block and a number n of free
contiguous disk blocks that follow the first block. Every entry in the list would contain:
1. Address of first free disk block
2. A number n
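A short Python sketch of the counting representation, with illustrative entries (each entry is a pair: first free block, count of contiguous free blocks that follow):

```python
# Sketch of the counting representation of free space. Each entry is
# (first free block, number of contiguous free blocks). Entries are
# illustrative and match the bitmap example earlier in the text.

counting_list = [(4, 3), (13, 2)]   # blocks 4-6 and 13-14 are free

def expand(entries):
    """Expand (start, n) entries into the full list of free blocks."""
    free = []
    for start, n in entries:
        free.extend(range(start, start + n))
    return free

print(expand(counting_list))   # [4, 5, 6, 13, 14]
```

Runs of contiguous free blocks compress well here: two entries describe five free blocks.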
Advantages:
Efficient use of storage space: Free space management techniques help to optimize the use of
storage space on the hard disk or other secondary storage devices.
Easy to implement: Some techniques, such as linked allocation, are simple to implement and
require less overhead in terms of processing and memory resources.
Faster access to files: Techniques such as contiguous allocation can help to reduce disk
fragmentation and improve access time to files.
Disadvantages:
Fragmentation: Techniques such as linked allocation can lead to fragmentation of disk space,
which can decrease the efficiency of storage devices.
Overhead: Some techniques, such as indexed allocation, require additional overhead in terms
of memory and processing resources to maintain index blocks.
Limited scalability: Some techniques, such as FAT, have limited scalability in terms of the
number of files that can be stored on the disk.
Risk of data loss: In some cases, such as with contiguous allocation, if a file becomes
corrupted or damaged, it may be difficult to recover the data.
Overall, the choice of free space management technique depends on the specific requirements
of the operating system and the storage devices being used. While some techniques may offer
advantages in terms of efficiency and speed, they may also have limitations and drawbacks
that need to be considered.
**********************************************************