Virtual Memory
Hardware and Control Structures
Two characteristics fundamental to memory
management:
1) all memory references are logical addresses that are
dynamically translated into physical addresses at run time
2) a process may be broken up into a number of pieces that
don’t need to be contiguously located in main memory
during execution
If these two characteristics are present, it is not
necessary that all of the pages or segments of a
process be in main memory during execution
Operating system brings into main memory a few pieces of the
program
Resident set - portion of process that is in main memory
An interrupt is generated when an address is needed that is not
in main memory
Operating system places the process
in a Blocked state
Execution of a Process
To bring the piece of process that contains the logical address into
main memory
operating system issues a disk I/O Read request
another process is dispatched to run while the disk I/O takes
place
an interrupt is issued when disk I/O is complete, which
causes the operating system to place the affected process in the
Ready state
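A minimal sketch of this page-fault sequence is shown below; the structure and function names (pcb_t, handle_page_fault, on_disk_read_complete) are illustrative stubs for this example, not a real kernel API.

```c
#include <stdio.h>

/* Illustrative stand-ins for kernel structures and actions. */
typedef struct { int id; const char *state; } pcb_t;

static void handle_page_fault(pcb_t *p, unsigned long addr) {
    p->state = "Blocked";                      /* OS blocks the faulting process      */
    printf("process %d blocked, disk read issued for 0x%lx\n", p->id, addr);
    /* ... dispatcher would now select another Ready process to run ... */
}

static void on_disk_read_complete(pcb_t *p) {
    p->state = "Ready";                        /* interrupt handler marks it Ready    */
    printf("process %d placed back in Ready state\n", p->id);
}

int main(void) {
    pcb_t p = { 7, "Running" };
    handle_page_fault(&p, 0x2a48);             /* reference to a piece not in memory  */
    on_disk_read_complete(&p);                 /* later: disk I/O completion interrupt */
    return 0;
}
```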
Implications
More processes may be maintained in main memory
only load in some of the pieces of each process
with so many processes in main memory, it is very likely a
process will be in the Ready state at any particular time
A process may be larger than all of main memory
Real and Virtual Memory
Real memory
• main memory, the actual RAM
Virtual memory
• memory on disk
• allows for effective multiprogramming and relieves the
user of tight constraints of main memory
Table 8.2  Characteristics of Paging and Segmentation
Thrashing
A state in which the system spends most of its time swapping process pieces rather than executing instructions
To avoid this, the operating system tries to guess, based on recent history, which pieces are least likely to be used in the near future
Principle of Locality
Program and data references within a process tend to cluster
Only a few pieces of a process will be needed over a short
period of time
Therefore it is possible to make intelligent guesses about
which pieces will be needed in the future
Avoids thrashing
For virtual memory to be practical and
effective:
• hardware must support paging and
segmentation
• operating system must include software for
managing the movement of pages and/or
segments between secondary memory and
main memory
Paging
The term virtual memory is usually associated with systems that
employ paging
Each process has its own page table
each page table entry contains the frame number of the
corresponding page in main memory
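As a concrete illustration, here is a minimal sketch of that lookup: the page number indexes the page table, and the frame number from the entry replaces it to form the physical address. The 4-KB page size and the table contents are assumptions made up for the example.

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE   4096u                 /* assumed 4-KB pages (2^12)          */
#define OFFSET_BITS 12

/* Toy page table: index = page number, value = frame number.
 * Real entries would also carry present/modified/control bits. */
static uint32_t page_table[] = { 5, 9, 2, 7 };

static uint32_t translate(uint32_t logical)
{
    uint32_t page   = logical >> OFFSET_BITS;          /* high bits: page number  */
    uint32_t offset = logical & (PAGE_SIZE - 1);       /* low bits: byte offset   */
    uint32_t frame  = page_table[page];                /* PTE gives frame number  */
    return (frame << OFFSET_BITS) | offset;            /* physical address        */
}

int main(void)
{
    uint32_t la = 2 * PAGE_SIZE + 100;                 /* page 2, offset 100      */
    printf("logical 0x%x -> physical 0x%x\n", (unsigned)la, (unsigned)translate(la));
    return 0;
}
```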
Memory Management Formats
Address Translation
•In most systems, there is one page table per process.
•But each process can occupy huge amounts of virtual
memory.
•For example, in the VAX (Virtual Address Extension)
architecture, each process can have up to 2^31 bytes = 2 GB
of virtual memory.
•Using 2^9 = 512-byte pages means that as many as 2^22
page table entries are required per process.
•Clearly, the amount of memory devoted to page tables
alone could be unacceptably high.
•To overcome this problem, most virtual memory
schemes store page tables in virtual memory rather
than real memory.
•This means that page tables are subject to paging just
as other pages are.
•When a process is running, at least a part of its page
table must be in main memory, including the page
table entry of the currently executing page.
•Some processors make use of a two-level scheme to
organize large page tables.
•In this scheme, there is a page directory, in which
each entry points to a page table.
•Thus, if the length of the page directory is X, and if the
maximum length of a page table is Y, then a process
can consist of up to X * Y pages.
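The sketch below illustrates such a two-level lookup. It assumes a 32-bit virtual address split into a 10-bit directory index, a 10-bit page table index, and a 12-bit offset (so X = Y = 1024); the split and all structure names are assumptions chosen for the example.

```c
#include <stdio.h>
#include <stdint.h>

typedef struct { uint32_t frame; }  pte_t;     /* page table entry      */
typedef struct { pte_t *table; }    pde_t;     /* page directory entry  */

static uint32_t translate2(pde_t *dir, uint32_t va)
{
    uint32_t di     = (va >> 22) & 0x3FF;      /* bits 31..22: directory index */
    uint32_t ti     = (va >> 12) & 0x3FF;      /* bits 21..12: table index     */
    uint32_t offset =  va        & 0xFFF;      /* bits 11..0 : byte offset     */

    pte_t *pt = dir[di].table;                 /* first level: locate the page table */
    return (pt[ti].frame << 12) | offset;      /* second level: frame + offset       */
}

int main(void)
{
    static pte_t table0[1024];
    static pde_t dir[1024];
    table0[3].frame = 42;                      /* map virtual page 3 to frame 42 */
    dir[0].table = table0;
    uint32_t va = (3u << 12) | 0x10;           /* page 3, offset 0x10            */
    printf("virtual 0x%x -> physical 0x%x\n", (unsigned)va, (unsigned)translate2(dir, va));
    return 0;
}
```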
Two-Level
Hierarchical Page Table
Address Translation
4-Kbyte (2^12) Pages
Translation Lookaside
Buffer (TLB)
Each virtual memory reference can cause two physical memory accesses:
one to fetch the page table entry
one to fetch the data
To overcome the effect of doubling the memory access time, most virtual memory schemes make use of a special high-speed cache called a translation lookaside buffer (TLB)
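The sketch below shows the idea: the TLB is consulted first, and only a miss falls back to the page table (and then caches the result). The tiny fully associative TLB with round-robin replacement, the sizes, and all names are simplifying assumptions for illustration.

```c
#include <stdio.h>
#include <stdint.h>

#define TLB_SLOTS 4

typedef struct { int valid; uint32_t page, frame; } tlb_entry_t;

static tlb_entry_t tlb[TLB_SLOTS];
static uint32_t page_table[16];                 /* toy single-level page table      */
static int next_victim;                         /* trivial round-robin replacement  */

static uint32_t translate(uint32_t va)
{
    uint32_t page = va >> 12, offset = va & 0xFFF;

    for (int i = 0; i < TLB_SLOTS; i++)         /* TLB hit: no page-table access    */
        if (tlb[i].valid && tlb[i].page == page)
            return (tlb[i].frame << 12) | offset;

    uint32_t frame = page_table[page];          /* TLB miss: extra access for the PTE */
    tlb[next_victim] = (tlb_entry_t){ 1, page, frame };
    next_victim = (next_victim + 1) % TLB_SLOTS;
    return (frame << 12) | offset;
}

int main(void)
{
    page_table[1] = 8;                          /* map page 1 to frame 8            */
    printf("0x%x (miss)\n", (unsigned)translate(0x1234)); /* first reference misses */
    printf("0x%x (hit)\n",  (unsigned)translate(0x1238)); /* same page now hits     */
    return 0;
}
```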
Use of a TLB
TLB Operation
Page Size
The smaller the page size, the less internal fragmentation there is
However, more pages are required per process
more pages per process means larger page tables
for large programs in a heavily multiprogrammed environment, some portion of the page tables of active processes must be kept in virtual memory instead of main memory, which can lead to double page faults
The physical characteristics of most secondary-memory
devices (disks) favor a larger page size for more efficient
block transfer of data
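The trade-off can be made concrete with a small calculation, shown below for an assumed 16-MB process: halving the page size roughly halves the expected internal fragmentation (about half of the last page on average) but doubles the number of page table entries. The process size and page-size range are assumptions for illustration.

```c
#include <stdio.h>

int main(void)
{
    unsigned long process_bytes = 16ul * 1024 * 1024;    /* assumed process size   */

    for (unsigned long page = 512; page <= 16384; page *= 2) {
        unsigned long entries  = process_bytes / page;    /* page table entries     */
        unsigned long avg_frag = page / 2;                 /* expected waste, bytes  */
        printf("page %6lu B: %8lu PTEs, ~%5lu B internal fragmentation\n",
               page, entries, avg_frag);
    }
    return 0;
}
```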
Paging Behavior of a Program
Locality, locality, locality
Example: Page Sizes
Page Size
The design issue of page size is related to the size of physical main memory and program size
Main memory is getting larger and the address space used by applications is also growing
this is most obvious on personal computers, where applications are becoming increasingly complex
Segmentation
Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments
Advantages:
• simplifies handling of growing data structures
• allows programs to be altered and recompiled independently
• lends itself to sharing data among processes
• lends itself to protection
Segmentation
Segment Organization
Each segment table entry contains the starting address of the
corresponding segment in main memory and the length of the
segment
A bit is needed to determine if the segment is already in main
memory
Another bit is needed to determine if the segment has been
modified since it was loaded in main memory
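A minimal sketch of segment-based translation using exactly these entry fields (base address, length, present bit, modified bit) is shown below; the table contents and function names are made up for the example.

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    uint32_t base;        /* starting address of the segment in main memory */
    uint32_t length;      /* length of the segment in bytes                 */
    int present;          /* is the segment currently in main memory?       */
    int modified;         /* has it been written since it was loaded?       */
} seg_entry_t;

static seg_entry_t seg_table[] = {
    { 0x4000, 0x1000, 1, 0 },       /* segment 0 */
    { 0x9000, 0x0200, 1, 0 },       /* segment 1 */
};

static int translate_seg(uint32_t seg, uint32_t offset, uint32_t *phys)
{
    seg_entry_t *e = &seg_table[seg];
    if (!e->present) return -1;             /* segment fault: bring it in from disk  */
    if (offset >= e->length) return -2;     /* protection fault: offset out of range */
    *phys = e->base + offset;               /* base + offset gives physical address  */
    return 0;
}

int main(void)
{
    uint32_t pa;
    if (translate_seg(1, 0x40, &pa) == 0)
        printf("segment 1, offset 0x40 -> physical 0x%x\n", (unsigned)pa);
    return 0;
}
```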
Address Translation
Combined Paging and
Segmentation
In a combined paging/segmentation system, a user's address space is broken up into a number of segments
Each segment is broken up into a number of fixed-size pages which are equal in length to a main memory frame
Segmentation is visible to the programmer
Paging is transparent to the programmer
Address Translation
Combined Segmentation
and Paging
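The sketch below illustrates the combined translation: the segment number selects a per-segment page table, and the remaining bits are translated by paging as usual. The 6-bit page field, 12-bit offset, and all mappings are assumptions chosen to keep the example small.

```c
#include <stdio.h>
#include <stdint.h>

#define OFF_BITS  12
#define PAGE_BITS 6

typedef struct { uint32_t frames[1 << PAGE_BITS]; } seg_page_table_t;

static seg_page_table_t seg0 = { .frames = { [0] = 3, [1] = 11 } };
static seg_page_table_t *seg_table[] = { &seg0 };    /* one page table per segment */

static uint32_t translate_combined(uint32_t va)
{
    uint32_t seg    = va >> (OFF_BITS + PAGE_BITS);              /* segment number */
    uint32_t page   = (va >> OFF_BITS) & ((1 << PAGE_BITS) - 1); /* page in segment */
    uint32_t offset = va & ((1 << OFF_BITS) - 1);                /* byte offset     */

    seg_page_table_t *pt = seg_table[seg];                       /* segment -> page table */
    return (pt->frames[page] << OFF_BITS) | offset;              /* page -> frame         */
}

int main(void)
{
    uint32_t va = (0u << 18) | (1u << 12) | 0x20;    /* segment 0, page 1, offset 0x20 */
    printf("virtual 0x%x -> physical 0x%x\n", (unsigned)va, (unsigned)translate_combined(va));
    return 0;
}
```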