Memory Management

Soma Hazra

1
Background Details
Memory is central to the operation of modern computers.
Memory consists of a large array of words or bytes, each with its
own address.
The CPU fetches instructions from memory according to the value
of the program counter.
These instructions may cause additional loading from and storing
to specific memory addresses.
 (e.g., the instruction-execution cycle)
The memory unit sees only a stream of memory addresses; it
does not know how they are generated by the running program.
So, how a memory address is generated can be ignored; we are
interested only in the sequence of memory addresses generated
by the running program.
2
Address Binding
Usually, a program resides on a disk as a binary executable file.

The program must be brought into memory and placed within a process for it to be
executed.

Input queue – the collection of processes on the disk that are waiting to be brought
into memory for execution.

Most systems allow a user process to reside in any part of the physical
memory.

User programs go through several steps before being executed.

Addresses in the source program are generally symbolic.

A compiler will bind these symbolic addresses to relocatable addresses (e.g. “14
bytes from the beginning of this module”).

The linkage editor or loader will in turn bind these relocatable addresses to
absolute addresses (such as 74014).

3
Address Binding contd…
Address binding of instructions and data to memory
addresses can happen at three different stages.
Compile time: If memory location known a priori, absolute
code can be generated; must recompile code if starting
location changes.
Load time: Must generate relocatable code if memory location
is not known at compile time.
Execution time: Binding delayed until run time if the process
can be moved during its execution from one memory segment
to another.
Need hardware support for address maps (e.g., base and limit
registers).
4
Logical vs. Physical Address Space
Logical address – An address generated by the CPU.
Also referred to as virtual address.
Physical address – an address seen by the memory unit (i.e.,
the one loaded into the memory-address register of the
memory).
The set of all logical addresses generated by a program is a
logical-address space, whereas the set of physical addresses
corresponding to these logical addresses is a physical-address
space.
Logical and physical addresses are the same in the compile-time
and load-time address-binding schemes.
But in the execution-time address-binding scheme, logical
(virtual) and physical addresses differ.
5
Memory Management Unit
The run-time mapping from virtual addresses to physical
addresses is done by a hardware device called the memory
management unit (MMU).
In other words, the MMU is a hardware device that maps
virtual addresses to physical addresses.
In the MMU scheme, the value in the relocation register is added
to every address generated by a user process at the time it is
sent to memory.
The user program deals with logical addresses; it never sees
the real physical addresses.
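A minimal sketch of this mapping (an added illustration, not from the slides): the relocation-register value, assumed to be 14000 for the example, is added to every logical address on its way to memory.

```c
/* Sketch of MMU-style dynamic relocation: every CPU-generated
 * (logical) address is relocated by the relocation register before
 * it reaches memory. The value 14000 is assumed for the example. */
#include <stdio.h>

static unsigned relocation_register = 14000;

/* What the MMU does for each memory access. */
unsigned mmu_map(unsigned logical_address)
{
    return logical_address + relocation_register;
}

int main(void)
{
    /* The user program only ever deals with the logical address 346;
     * the memory unit sees the physical address 14346. */
    printf("logical %u -> physical %u\n", 346u, mmu_map(346u));
    return 0;
}
```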
6
Dynamic relocation using a
relocation register

7
Dynamic Loading
Dynamic loading is used to obtain better memory-space utilization.
With dynamic loading, a routine is not loaded until it is called.
All routines are kept on disk in a relocatable load format. The main
program is loaded into memory and is executed.
Other routines are then loaded whenever they are needed.
Thus, an unused routine is never loaded, which saves memory.
Useful when large amounts of code are needed to handle
infrequently occurring cases (e.g., error routines).
No special support from the operating system is required; dynamic
loading is implemented through program design by the user.
The operating system may help the programmer by providing library
routines to implement dynamic loading.
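As a rough application-level analogy (an added sketch, not the slides' material), POSIX systems expose on-demand loading through dlopen/dlsym; the library name libreport.so and the routine generate_report below are hypothetical, and on many Linux systems the program is linked with -ldl.

```c
/* Minimal sketch of on-demand loading with the POSIX dlopen API.
 * "libreport.so" and "generate_report" are hypothetical names. */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* The routine is not loaded at program start; it is brought in
     * only when this path is reached. */
    void *handle = dlopen("./libreport.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    void (*generate_report)(void) =
        (void (*)(void))dlsym(handle, "generate_report");
    if (generate_report)
        generate_report();   /* call the routine that was just loaded */

    dlclose(handle);
    return 0;
}
```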
8
Dynamic Linking
Similar to dynamic loading, but rather than loading being
postponed, linking is postponed until execution time.
This feature is usually used with system libraries.
With dynamic linking, a stub is included in the image for
each library-routine reference.
This stub is a small piece of code that indicates how to
locate the appropriate memory-resident library routine, or
how to load the library if the routine is not present.
When the stub is executed, it checks whether the needed routine
is already in memory. If not, the stub loads the routine
into memory.
Dynamic linking is particularly useful for library updates,
such as bug fixes.
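The stub behaviour can be sketched roughly as follows (an added illustration; a real dynamic linker does this inside the loader rather than in user code, and libmath_ext.so and fast_sqrt are hypothetical names).

```c
/* Rough sketch of a stub: on first call it loads the library and
 * resolves the real routine, then caches the function pointer so
 * later calls go directly to it. */
#include <dlfcn.h>
#include <stdio.h>

static double (*real_fast_sqrt)(double) = NULL;   /* cached resolution */

double fast_sqrt_stub(double x)
{
    if (real_fast_sqrt == NULL) {
        /* Routine not yet memory-resident: load the library and
         * look up the symbol, as the stub described above does. */
        void *handle = dlopen("libmath_ext.so", RTLD_LAZY);
        if (handle != NULL)
            real_fast_sqrt = (double (*)(double))dlsym(handle, "fast_sqrt");
        if (real_fast_sqrt == NULL) {
            fprintf(stderr, "could not resolve fast_sqrt: %s\n", dlerror());
            return -1.0;
        }
    }
    return real_fast_sqrt(x);   /* subsequent calls skip the loading step */
}

int main(void)
{
    printf("%f\n", fast_sqrt_stub(2.0));  /* fails gracefully if the library is absent */
    return 0;
}
```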

9
Overlays
What happens when a process is larger than the
amount of memory available to it?
Overlays are used to decrease the total amount of
memory needed by a process.
The process keeps in memory only those instructions
and data that are needed at any given time.
When other instructions are needed, they are loaded
into the space previously occupied by instructions
that are no longer needed.
Overlays can be implemented by the user; no special
support is needed from the operating system.
The programming design of an overlay structure is complex.
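A minimal data-overlay sketch (an added illustration; the file names pass1.ovl and pass2.ovl and the region size are assumed): both phases share one fixed region, so only one is resident at a time.

```c
/* Minimal sketch of an overlay: two phases share the same fixed
 * region, and loading the second phase overwrites the first. */
#include <stdio.h>
#include <stdlib.h>

#define OVERLAY_REGION_SIZE (64 * 1024)
static unsigned char overlay_region[OVERLAY_REGION_SIZE];

/* Load one overlay segment from disk into the shared region,
 * replacing whatever was there before. */
static size_t load_overlay(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); exit(EXIT_FAILURE); }
    size_t n = fread(overlay_region, 1, OVERLAY_REGION_SIZE, f);
    fclose(f);
    return n;                    /* bytes now occupying the region */
}

int main(void)
{
    load_overlay("pass1.ovl");   /* phase 1 resident */
    /* ... run phase 1 using overlay_region ... */

    load_overlay("pass2.ovl");   /* phase 2 overwrites phase 1 */
    /* ... run phase 2 using overlay_region ... */
    return 0;
}
```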
10
Swapping
A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for
continued execution.
Backing store – a fast disk large enough to accommodate
copies of all memory images for all users; must provide
direct access to these memory images.
Roll out, Roll in – swapping variant used for priority-based
scheduling algorithms; lower-priority process is swapped
out so higher-priority process can be loaded and executed.
Major part of swap time is transfer time; total transfer time
is directly proportional to the amount of memory swapped.
Modified versions of swapping are found on many systems,
such as UNIX, Linux, and Windows.
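A rough worked example of the transfer-time claim, with all numbers assumed: a 100 MB memory image and a 50 MB/s backing store give about 2 seconds per direction, ignoring latency, so swap time grows linearly with the amount of memory swapped.

```c
/* Swap-time estimate: transfer time = memory swapped / transfer rate.
 * The 100 MB image size and 50 MB/s rate are assumed values. */
#include <stdio.h>

int main(void)
{
    double image_mb = 100.0;      /* memory to swap, in MB */
    double rate_mb_per_s = 50.0;  /* backing-store transfer rate */
    double one_way = image_mb / rate_mb_per_s;
    printf("swap out: %.1f s, swap out + swap in: %.1f s\n",
           one_way, 2.0 * one_way);   /* 2.0 s and 4.0 s */
    return 0;
}
```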

11
Schematic View of Swapping

12
Contiguous Memory Allocation
Main memory usually divided into two partitions:
 Resident operating system, usually placed in low memory with the
interrupt vector.
 User processes are then placed in high memory.

Memory Protection
 Relocation-register scheme used to protect user processes from each
other, and from changing operating-system code and data.
 Relocation register contains value of smallest physical address; limit
register contains range of logical addresses – each logical address
must be less than the limit register.
 The MMU maps the logical address dynamically by adding the value
in the relocation register.
 This mapped address is sent to memory.
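A minimal sketch of the relocation-plus-limit check (an added illustration; the register values and the logical address 346 are assumed, and the hardware trap is simulated with an error message).

```c
/* Sketch of the hardware check: every logical address must be less
 * than the limit register; valid addresses are relocated by adding
 * the relocation register. Values are made up for the example. */
#include <stdio.h>
#include <stdlib.h>

static const unsigned relocation_register = 100040; /* smallest physical address */
static const unsigned limit_register      = 74600;  /* range of logical addresses */

unsigned map_address(unsigned logical)
{
    if (logical >= limit_register) {
        /* In hardware this raises a trap to the operating system
         * (addressing error); here we just report and exit. */
        fprintf(stderr, "trap: addressing error at logical %u\n", logical);
        exit(EXIT_FAILURE);
    }
    return logical + relocation_register;   /* dynamic relocation */
}

int main(void)
{
    printf("logical 346 -> physical %u\n", map_address(346));   /* 100386 */
    return 0;
}
```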

13
Hardware Support for Relocation
and Limit Registers

14
Memory Allocation
Divide memory into several fixed-sized partitions.
Each partition may contain exactly one process.
The OS keeps a table indicating which parts of memory are
available and which are occupied:
1. allocated partitions
2. free partitions (holes)

Initially, all non-OS memory is available for user processes.
 Hole – a block of available memory.
 When a process arrives, it is allocated memory from a hole
large enough to accommodate it.
 Holes of various sizes are scattered throughout memory.

15
Dynamic Storage-Allocation
Problem
When a process arrives, search for a hole big enough for it.
If none available, the process must wait.
When a process terminates, memory is freed, creating a hole.
This new hole may join with other contiguous holes to create a bigger
hole.

How to satisfy a request of size n from a list of free holes?


First-fit: Allocate the first hole that is big enough.
Best-fit: Allocate the smallest hole that is big enough; must search entire
list, unless ordered by size. Produces the smallest leftover hole.
Worst-fit: Allocate the largest hole; must also search entire list. Produces
the largest leftover hole.

Note: First-fit and best-fit are better than worst-fit in terms of both
speed and storage utilization (see the sketch below).

16
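A minimal sketch of first-fit and best-fit over an array of free-hole sizes (an added illustration with made-up hole sizes); each function returns the index of the chosen hole, or -1 if no hole is large enough and the process must wait.

```c
/* First-fit and best-fit over a list of free-hole sizes (in KB).
 * The hole sizes and the 212 KB request are made up for the example. */
#include <stdio.h>

int first_fit(const int holes[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (holes[i] >= request)
            return i;                 /* first hole big enough */
    return -1;                        /* no hole fits: process must wait */
}

int best_fit(const int holes[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++)       /* must search the entire list */
        if (holes[i] >= request &&
            (best == -1 || holes[i] < holes[best]))
            best = i;                 /* smallest hole that is big enough */
    return best;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};
    printf("first-fit for 212 KB -> hole %d\n", first_fit(holes, 5, 212)); /* 1 (500 KB) */
    printf("best-fit  for 212 KB -> hole %d\n", best_fit(holes, 5, 212));  /* 3 (300 KB) */
    return 0;
}
```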
External and Internal
Fragmentation
External Fragmentation - Total memory space exists
to satisfy a request, but it is not contiguous.
With the first-fit allocation rule, given N
allocated blocks, another 0.5N blocks may be lost to
fragmentation (the 50-percent rule).
Internal Fragmentation - The memory allocated may
be slightly larger than the size needed by the process
(e.g., a 300 KB process placed in a 304 KB partition
leaves 4 KB unused).
With internal fragmentation, this difference between the
allocated memory and the process size is memory internal
to a partition that is not being used.

17
Compaction
One can reduce external fragmentation by compaction.
Two general solutions to external fragmentation are available:
1) Shuffle the memory contents to place all free memory together
in one large block, i.e., compaction (sketched at the end of
this slide).
Note: Compaction is possible only if relocation is dynamic, and
is done at execution time.
2) Permit the logical-address space of a process to be non-
contiguous.
 Hence, with this method physical memory can be allocated
to a process wherever it is available.
 Two techniques achieve this solution:
1. Paging
2. Segmentation
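A minimal sketch of compaction on a simulated memory map (an added illustration with made-up block sizes): used blocks are slid toward low addresses, leaving one large free block at the top, which is only legal when relocation is dynamic.

```c
/* Compact a simulated memory map: each entry is a block with a size
 * and a used/free flag. Used blocks are moved down so that all free
 * space coalesces into a single hole. Requires dynamic relocation,
 * since every moved process gets a new base address. */
#include <stdio.h>

#define TOTAL_MEMORY 1000   /* assumed memory size, in KB */

struct block { int base, size, used; };

void compact(struct block blocks[], int n)
{
    int next_base = 0;
    for (int i = 0; i < n; i++) {
        if (blocks[i].used) {
            blocks[i].base = next_base;   /* relocate the process */
            next_base += blocks[i].size;
        }
    }
    /* everything above next_base is now one large hole */
    printf("one free block of %d KB at base %d\n",
           TOTAL_MEMORY - next_base, next_base);
}

int main(void)
{
    struct block blocks[] = {
        {0, 100, 1}, {100, 200, 0}, {300, 300, 1}, {600, 400, 0}
    };
    compact(blocks, 4);   /* prints: one free block of 600 KB at base 400 */
    return 0;
}
```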

18
