Os Unit Iii Notes
Uploaded by sshreyakam365.0

OS UNIT III

Q1. Explain address binding in operating system.

Introduction:

Address binding refers to the process of mapping computer instructions and data to physical memory locations. This essential function of memory management is performed by the operating system (OS) on behalf of applications that require memory access. Address binding ensures that logical addresses, also known as virtual addresses, are correctly mapped to physical addresses in memory. There are three primary types of address binding in an OS:

1. Compile Time Address Binding:

 This occurs during the compilation process.
 The compiler performs address binding in coordination with the OS.
 At this stage, the memory addresses are fixed, meaning the program must always run at the same location in memory.

2. Load Time Address Binding:

 Address binding happens when the program is loaded into memory.
 It is managed by the loader, the part of the OS memory manager that places programs in memory.
 This method allows the program to be placed in a different memory area if needed.
 The logical addresses in the program are not bound to physical addresses until the program is loaded.

3. Execution Time or Dynamic Address Binding:

 This type of binding is done during program execution.
 It is particularly useful for programs and variables that require dynamic memory allocation.
 The OS assigns memory to variables as they are encountered during execution.
 The allocation persists until the program finishes or until the variable is explicitly released.
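The difference between compile-time and load-time binding can be sketched in a few lines. The function name and addresses below are illustrative, not a real loader API: at compile time the program's addresses are relative (starting at 0), and at load time the loader adds the load base so the program can be placed anywhere in memory.

```python
# Sketch of load-time address binding: the compiler emits addresses
# relative to 0, and the loader rebases them to the load address.
# Names and values here are illustrative only.

def bind_at_load_time(relative_addresses, load_base):
    """Rebase a program's relative addresses to absolute ones."""
    return [load_base + addr for addr in relative_addresses]

program = [0, 4, 8, 12]                 # relative addresses from the compiler
print(bind_at_load_time(program, 0x4000))
```

With compile-time binding there would be no rebasing step: the absolute addresses would already be baked into the executable, which is why such a program must always run at the same location.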

Summary:

Address binding is crucial for efficient memory utilization and for enabling
programs to run on different systems and memory configurations. Each type
of binding offers different levels of flexibility and control, with compile-time
binding being the most rigid and execution-time binding offering the most
dynamism. Understanding these types helps in optimizing program
performance and ensuring robust memory management in complex
computing environments.
Q2. What is memory relocation? Explain.

Introduction:

Memory relocation is a crucial aspect of memory management in operating systems, enabling efficient utilization and management of physical memory. It involves adjusting the addresses used in a process to match the physical location in memory where the process is loaded. This adjustment can happen either statically at load time or dynamically at runtime.

Static Relocation occurs when the operating system (OS) adjusts the
addresses in a process at load time to reflect its position in memory. Once a
process starts executing, its location in memory is fixed and cannot be
changed. This method is simple but inflexible, as it does not allow the
process to be moved during execution or to grow dynamically.

Dynamic Relocation, on the other hand, uses hardware support to perform address translation at runtime. The hardware includes a relocation (base) register and a limit register. The relocation register is added to the virtual address to obtain the physical address, while the limit register ensures that the address is within valid bounds. If the address exceeds the limit, an address trap occurs, preventing illegal memory access.
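The base-and-limit translation described above can be modelled in a few lines. This is a minimal sketch, not real MMU code; the exception class and register values are invented for illustration.

```python
# Sketch of dynamic relocation with a relocation (base) register and
# a limit register: the bound is checked first, then the base is
# added. An out-of-range address raises an "address trap".

class AddressTrap(Exception):
    pass

def translate(virtual_addr, base, limit):
    if virtual_addr >= limit:          # limit register: bounds check
        raise AddressTrap(f"illegal access at {virtual_addr:#x}")
    return base + virtual_addr         # relocation register added

print(hex(translate(100, base=0x8000, limit=0x1000)))   # within bounds
```

Relocating a process at runtime then amounts to copying its memory and updating the base register; the process itself never sees its physical location.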

Dynamic relocation offers significant advantages:

1. Flexibility: The OS can move a process during execution, allowing for better memory utilization.
2. Scalability: Processes can grow over time, adapting to changing resource needs.
3. Simplicity: The hardware implementation involves basic operations, making it fast and efficient.

However, dynamic relocation also has drawbacks:

Performance Overhead: The additional operations on every memory reference can slow down the hardware.

Limited Memory Sharing: Sharing memory between processes is not feasible, since each process is confined to a single contiguous region.

Physical Memory Constraints: Processes are still limited by the size of physical memory.

Complexity: Memory management becomes more intricate, complicating the OS design.

Summary:

Relocation enhances transparency, as processes are unaware of their physical memory location. Safety is ensured by checking each memory reference against valid bounds, preventing unauthorized access. Efficiency is achieved through rapid hardware-based address translation, although moving a process when it grows can be slow. Overall, memory relocation is essential for dynamic and efficient memory management in modern operating systems.
Q3. Explain memory sharing and protection in detail.

Introduction:

Memory Sharing:

Memory sharing in operating systems allows multiple processes to access the same physical memory space. This is essential for efficient resource utilization and inter-process communication. Shared memory enables processes to exchange information quickly without the need for data copying, thereby improving performance. Common implementations include shared libraries, where multiple applications use the same code, reducing memory footprint.

Memory Protection:

Memory protection is a critical feature in operating systems designed to prevent processes from interfering with each other’s memory space. This ensures system stability and security by preventing unauthorized access to memory.
1. Purpose:

 Stability: Prevents one process from corrupting another process's memory, enhancing overall system reliability.
 Security: Restricts unauthorized access to sensitive data, preventing potential breaches.

2. Techniques:

 Segmentation: Divides memory into segments, each with specific access rights. For example, the OS kernel segment might be read-only, while user data segments are read-write.
 Paged Virtual Memory: Divides memory into pages, each mapped to physical memory via a page table. This allows for flexible memory management and protection.
 Protection Keys: Assigns keys to memory pages, controlling access permissions such as read, write, or execute.
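A per-page permission check in the spirit of the protection-key idea above can be sketched as follows. The permission table, fault class, and page layout are all invented for illustration; a real MMU performs this check in hardware on every access.

```python
# Sketch of per-page protection: each page carries its allowed access
# modes, and any other access raises a protection fault. The table
# and page roles below are illustrative only.

PERMISSIONS = {
    0: {"read"},                     # kernel page: read-only
    1: {"read", "write"},            # user data page
    2: {"read", "execute"},          # code page
}

class ProtectionFault(Exception):
    pass

def access(page, mode):
    """Check whether `mode` ("read"/"write"/"execute") is allowed on `page`."""
    if mode not in PERMISSIONS.get(page, set()):
        raise ProtectionFault(f"{mode} denied on page {page}")
    return True

access(1, "write")                   # allowed: user data is writable
```

An attempted write to page 0 would raise the fault, which is how the OS learns of (and stops) the illegal access.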

3. Implementation:

Memory Management Unit (MMU): A hardware component that translates virtual addresses to physical addresses and enforces access controls.

Virtual Memory: Provides an abstraction layer, enabling memory virtualization and isolation of processes.

Advantages:

Improved Stability: Prevents processes from interfering with each other, reducing system crashes.

Increased Security: Blocks unauthorized memory access, protecting sensitive data.

Disadvantages:
Memory Fragmentation: Virtual memory can lead to fragmented physical memory, reducing efficiency.

Limitations: Memory protection can sometimes be bypassed by exploiting OS vulnerabilities.

Compatibility Issues: Older software may not be compatible with modern memory protection mechanisms.

In summary, memory sharing and protection are fundamental to the functionality and security of contemporary operating systems. They ensure that multiple processes can efficiently share resources while maintaining robust protection against unauthorized access and accidental interference.

Q4. Explain paging in detail.

Definition:

“Paging is a memory management scheme used in operating systems to divide processes into fixed-size blocks called pages and allocate them to corresponding blocks of main memory, known as frames, to efficiently manage memory usage and facilitate process execution.”

1. Conceptual Overview: Paging is a memory management scheme used in operating systems to efficiently manage memory and facilitate the execution of processes. It involves dividing the process and memory into fixed-size blocks called pages and frames, respectively.

2. Process Division: Each process is divided into smaller, fixed-size units called pages. This division allows the operating system to manage memory more efficiently by loading only the required pages into memory, rather than loading the entire process at once.
3. Memory Division: Main memory (RAM) is divided into fixed-size blocks
called frames. These frames serve as the unit of allocation for storing pages
of processes. The size of each frame is determined by the operating system
and is typically uniform across the system.

4. Storage Mechanism: Pages of a process are stored in frames of the main memory. The operating system maintains a page table for each process, which maps the logical addresses of pages to the physical addresses of frames. This mapping allows the CPU to access pages in memory using logical addresses.
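The page-table lookup just described can be sketched directly: split the logical address into a page number and an offset, look up the frame, and recombine. The page size, table contents, and function name below are assumptions for illustration.

```python
# Sketch of logical-to-physical translation via a page table.
# Assumes a 4 KB page size and a small, hand-filled page table.

PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 3}      # page number -> frame number

def to_physical(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]         # KeyError here would mean a page fault
    return frame * PAGE_SIZE + offset

print(to_physical(4100))             # page 1, offset 4 -> frame 9
```

Note that the offset is carried over unchanged; only the page number is remapped, which is why pages and frames must be the same size.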

5. Page Faults: Pages of a process are brought into main memory only
when they are needed for execution. When a process tries to access a page
that is not currently in memory, a page fault occurs. The operating system
then retrieves the required page from secondary storage (such as a hard
disk) and loads it into a free frame in main memory.
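The demand-paging behaviour in point 5 can be sketched as follows. The frame allocation is deliberately naive and the "load from disk" step is only simulated; names are illustrative.

```python
# Sketch of demand paging: a reference to an unmapped page is a page
# fault, and the page is "loaded" into a free frame. Eviction (what
# happens when free_frames runs out) is deferred to Q6's algorithms.

page_table = {}                      # page -> frame (starts empty)
free_frames = [0, 1, 2]
faults = 0

def reference(page):
    global faults
    if page not in page_table:       # page fault: page not in memory
        faults += 1
        page_table[page] = free_frames.pop()   # load into a free frame
    return page_table[page]

for p in [6, 7, 6, 8]:
    reference(p)
print(faults)                        # 3 faults: first touch of 6, 7, 8
```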

6. Non-Contiguous Allocation: The frames allocated to a process need not be contiguous; a page can be placed in any free frame. This is precisely what allows paging to avoid external fragmentation and improve memory utilization.

7. Equal Frame Sizes: Different operating systems may define different frame sizes, but within a given system the size of each frame must be equal. This uniformity simplifies memory management and ensures consistency in page allocation and addressing.

In summary, paging is a memory management technique that divides processes and main memory into fixed-size blocks (pages and frames, respectively), facilitates efficient storage and retrieval of process pages, and optimizes memory utilization by allowing pages to occupy any free frames while maintaining uniform frame sizes.
Q5. What is segmentation? Explain.

Introduction:

Segmentation is a memory management technique used in operating systems where memory is divided into variable-sized segments, each representing a logical unit such as a code module, data structure, or subroutine. Unlike paging, which divides memory into fixed-size blocks called pages, segmentation allows for more flexibility by accommodating segments of different sizes.

In segmentation, each segment is associated with a segment descriptor, which typically includes information such as the segment's base address and its length (also known as the limit). These descriptors are stored in a data structure called a segment table. When a process references a memory address, the operating system uses the segment table to translate the logical address into a physical address.
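That translation can be sketched with a small segment table. The table contents and function name are invented for illustration; a logical address here is a (segment, offset) pair.

```python
# Sketch of segmentation address translation: look up the segment's
# base and limit, reject offsets past the limit, add the base.
# Segment numbers, bases, and limits below are illustrative.

segment_table = {
    0: (0x1000, 0x400),   # code segment: (base, limit)
    1: (0x5000, 0x200),   # data segment
}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError(f"offset {offset:#x} exceeds limit of segment {segment}")
    return base + offset

print(hex(seg_translate(1, 0x10)))   # data segment, offset 0x10
```

Contrast with paging: here the limit check is against a per-segment length, and segments can be any size, which is where both the flexibility and the external fragmentation come from.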
Segmentation offers several advantages:

1. Reduced Internal Fragmentation: Segments are sized according to the needs of the program, reducing wasted memory due to internal fragmentation compared to fixed-size allocation schemes like paging.

2. Logical Organization: Segments allow for a more natural and logical organization of memory, as different parts of a program (such as code, data, and stack) can be stored in separate segments.

3. Protection and Sharing: Segmentation enables protection and sharing of memory segments among different processes. Each segment can have its own access permissions, allowing the operating system to enforce memory protection.

However, segmentation also has its drawbacks:

1. External Fragmentation: Over time, as segments are allocated and deallocated, gaps of free memory may form between segments, leading to external fragmentation. This fragmentation can make it challenging to allocate contiguous memory blocks for new segments.

2. Complex Memory Management: Managing variable-sized segments requires more complex memory management algorithms compared to fixed-size allocation schemes like paging. This complexity can lead to higher overhead and potentially slower performance.

Overall, segmentation is a powerful memory management technique that offers flexibility and efficient memory utilization, especially for programs with varying memory requirements. However, its effectiveness depends on effective memory management strategies to mitigate fragmentation and optimize memory allocation.
Q6. What is a page replacement algorithm? Explain FIFO.

Introduction:

Page Replacement Algorithm is used when a page fault occurs. Page Fault
means the page referenced by the CPU is not present in the main memory.

When the CPU references a page, if a vacant frame is available in main memory, the page is loaded into that frame. Otherwise, one of the pages already in main memory must be replaced with the page referenced by the CPU.

A Page Replacement Algorithm decides which page will be replaced to make room for the currently referenced page.

Different Page Replacement Algorithms suggest different ways to decide which page is to be replaced. The main objective of these algorithms is to reduce the number of page faults.

First In First Out (FIFO) :

This algorithm works like a queue. Pages are stored in the queue in the order in which they are allocated frames in main memory. The page allocated first stays at the front of the queue, and at replacement time the page at the front of the queue (the oldest) is removed.

Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8,
9, 6, 7, 1, 6, 7, 8, 9, 1

Suppose there are 3 frames in memory.
 6, 7, 8 are allocated to the vacant slots as they are not in memory.
 When 9 comes page fault occurs, it replaces 6 which is the oldest in
memory or front element of the queue.
 Then 6 comes (Page Fault), it replaces 7 which is the oldest page in
memory now.
 Similarly, 7 replaces 8, 1 replaces 9.
 Then 6 comes which is already in memory (Page Hit).
 Then 7 comes (Page Hit).
 Then 8 replaces 6, 9 replaces 7. Then 1 comes (Page Hit).
 Number of Page Faults = 9
 While using the First In First Out algorithm, the number of page faults can sometimes increase when the number of frames is increased. This phenomenon is called Belady's Anomaly.
 Taking the same order of pages with 4 frames, the number of page faults is 10.
 There were 9 page faults with 3 frames and 10 page faults with 4 frames.
 The number of page faults increased even though the number of frames increased.
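The FIFO behaviour above, including Belady's Anomaly on this reference string, can be checked with a short simulation. This is a sketch of the algorithm as described, not production code.

```python
from collections import deque

# FIFO page replacement simulation for the reference string in the
# example above: 9 faults with 3 frames, 10 faults with 4 frames
# (Belady's Anomaly).

def fifo_faults(refs, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page in frames:
            continue                     # page hit
        faults += 1                      # page fault
        if len(frames) == n_frames:      # no free frame: evict the oldest
            frames.remove(queue.popleft())
        frames.add(page)
        queue.append(page)
    return faults

refs = [6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))   # 9 10
```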
Q7. Explain optimal page replacement and least recently used.

Optimal Page Replacement – In this algorithm, the page which would be used after the longest interval is replaced. In other words, the page that is farthest away in the upcoming reference sequence is replaced.

Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8,
9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6

 First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
 Now, 9 comes and replaces 8 as it is the farthest in the upcoming
sequence. 6 and 7 would come earlier than that so not replaced.
 Then, 6 comes which is already present (Page Hit).
 Then 7 comes (Page Hit).
 Then 1 replaces 9 similarly (Page Fault).
 Then 6 comes (Page Hit), 7 comes (Page Hit).
 Then 8 replaces 6 (Page Fault) and 9 replaces 8 (Page Fault).
 Then 1, 7, 9 come respectively which are already present in the
memory.
 Then 6 replaces 9 (Page Fault), it can also replace 7 and 1 as no other
page is present in the upcoming sequence.
 The number of Page Faults = 8
 This is the most optimal algorithm but is impractical because it is
impossible to predict the upcoming page references.

Least Recently Used – This algorithm works on past reference history. The page that was least recently used, i.e. the page whose most recent use lies earliest in the sequence, is replaced.

Example: Consider the Pages referenced by the CPU in the order are 6, 7, 8,
9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6

 First, all the frames are empty. 6, 7, 8 are allocated to the frames (Page Fault).
 Now, 9 comes and replaces 6 which is used the earliest (Page Fault).
 Then, 6 replaces 7, 7 replaces 8, 1 replaces 9 (Page Fault).
 Then 6 comes which is already present (Page Hit).
 Then 7 comes (Page Hit).
 Then 8 replaces 1, 9 replaces 6, 1 replaces 7, and 7 replaces 8 (Page
Fault).
 Then 9 comes (Page Hit).
 Then 6 replaces 1 (Page Fault).
 The number of Page Faults = 12
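Both algorithms can be simulated on the reference string used in the two examples above. These are sketches of the textbook algorithms: Optimal looks ahead in the reference string (which is why it is impractical), LRU keeps frames ordered by recency.

```python
# Optimal and LRU page replacement simulations for the reference
# string above with 3 frames: 8 faults (optimal), 12 faults (LRU).

def optimal_faults(refs, n_frames):
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                     # page hit
        faults += 1
        if len(frames) == n_frames:
            # Evict the page used farthest in the future (or never again).
            future = refs[i + 1:]
            victim = max(frames, key=lambda p: future.index(p)
                         if p in future else len(future) + 1)
            frames.remove(victim)
        frames.append(page)
    return faults

def lru_faults(refs, n_frames):
    frames, faults = [], 0               # ordered least- to most-recent
    for page in refs:
        if page in frames:
            frames.remove(page)          # hit: refresh recency
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)            # evict least recently used
        frames.append(page)
    return faults

refs = [6, 7, 8, 9, 6, 7, 1, 6, 7, 8, 9, 1, 7, 9, 6]
print(optimal_faults(refs, 3), lru_faults(refs, 3))   # 8 12
```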

Q8. Explain static and dynamic binding.

Definition:

Static Binding: In operating systems, static binding refers to linking a program with its libraries or resources at compile time, resulting in predictable behavior and minimal runtime overhead.

Dynamic Binding: Dynamic binding in operating systems involves linking a program with its libraries or resources at runtime, allowing for flexibility, resource sharing, and adaptation to runtime conditions.

Static Binding:

1. Linking: During compilation, all necessary libraries and external dependencies are linked to the executable file.

2. Efficiency: Since binding occurs at compile time, there is minimal overhead during program execution related to locating and loading external resources.
3. Predictability: The behavior of the program is predictable since the
addresses of functions and resources are determined beforehand.

4. Examples: Static linking is commonly used in systems programming and embedded systems where performance and predictability are crucial, for instance in firmware development or when creating standalone executable files.

Dynamic Binding:

1. Linking: Instead of being linked at compile time, libraries and resources are linked dynamically when the program is loaded into memory or during its execution.

2. Flexibility: Dynamic binding enables libraries and resources to be loaded based on runtime conditions and user interactions.

3. Resource Sharing: Dynamic binding facilitates resource sharing among multiple programs, since a library can be loaded into memory once and shared by multiple processes.

4. Examples: Dynamic linking is commonly used in modern operating systems for loading shared libraries (DLLs in Windows, shared objects in Unix-like systems) and for runtime linking in interpreted languages like Python and JavaScript.

In summary, static binding links program resources at compile time, offering efficiency and predictability, while dynamic binding links resources at runtime, providing flexibility and resource sharing capabilities. Each approach has its advantages and is chosen based on the specific requirements and constraints of the operating system and the application being developed.
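As an analogy (not an OS loader), Python's `importlib` illustrates the essence of dynamic binding: the module to load is chosen while the program runs rather than fixed when it is written, much as a shared library is linked when the process executes. The choice of the `json` module here is arbitrary.

```python
import importlib

# Dynamic-binding analogy: the name of the dependency is only a
# string until runtime, when it is resolved and loaded.

module_name = "json"                 # could come from config or user input
mod = importlib.import_module(module_name)   # bound at runtime
print(mod.dumps({"ok": True}))
```

A statically bound equivalent would be a plain `import json` at the top of the file, fixed before the program ever runs; the trade-off mirrors the compile-time vs. runtime linking discussion above.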
