Unit 4 - Memory System
Memory is the hardware component of a computer that stores information temporarily
or permanently. The amount of information that can be stored depends on the number of
bytes present in the memory.
Data Transfer Between Memory and Processor
Data transfer between the memory and the processor takes place using two registers:
1. Memory Address Register (MAR)
2. Memory Data Register (MDR)
The MAR is k bits long and the MDR is n bits long.
The memory unit may therefore contain up to 2^k addressable locations.
During a memory cycle, n bits of data are transferred between the memory and the
processor.
Memory and processor are connected through a processor bus. This bus consists of
i. A k-bit address bus-it carries address
ii. A n-bit Data bus - It carries data.
iii. Control lines – These include lines for coordinating the data transfer, such as
Read/Write and Memory Function Completed (MFC). Other control lines
indicate the number of bytes that are to be transferred.
Writing Data to a Memory Location
To carry out the write operation, the processor
❖ Sets the R/W line to '0'.
❖ Places (loads) the address of the memory location where the data has to be written
into the MAR register.
❖ Places (loads) the given data into MDR register.
Reading Data from a Memory Location
To carry out the read operation, the processor
❖ Sets the R/W line to '1'.
❖ Places the address of the memory location where the required data is stored into the
MAR register.
The memory will then respond to the processor by placing the requested data on the data
lines (into the MDR register) and asserting the MFC signal.
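The transfer described above can be pictured with a small simulation. The following is a minimal Python sketch, not part of the original notes: the MemoryBus class, its parameter names and the sample address are illustrative assumptions only.

```python
# Minimal sketch of the MAR/MDR transfer described above (illustrative only).
class MemoryBus:
    def __init__(self, k_bits=16, n_bits=8):
        self.k_bits = k_bits                      # width of the address bus (MAR)
        self.n_bits = n_bits                      # width of the data bus (MDR)
        self.cells = [0] * (2 ** k_bits)          # up to 2^k addressable locations

    def write(self, mar, mdr):
        """R/W = 0: store the n-bit word in MDR at the address held in MAR."""
        self.cells[mar] = mdr & ((1 << self.n_bits) - 1)
        return True                               # stands in for the MFC signal

    def read(self, mar):
        """R/W = 1: return the n-bit word stored at the address held in MAR."""
        return self.cells[mar], True              # data into MDR, plus MFC


bus = MemoryBus()
mfc = bus.write(mar=0x0042, mdr=0x5A)             # processor loads MAR and MDR, sets R/W = 0
data, mfc = bus.read(mar=0x0042)                  # processor loads MAR, sets R/W = 1
print(hex(data), mfc)                             # 0x5a True
```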
2. Discuss semiconductor RAMs, ROMs, and speed, size and cost.
Semiconductor memory is an electronic device used to store data and also serves as
computer memory. It is referred to as primary memory because the CPU usually accesses
information (code and data) from it.
Types of semiconductor memory
Electronic semiconductor memory technology can be split into two main types or categories
according to the way in which the memory operates:
1. RAM - Random Access Memory
2. ROM - Read Only Memory
Random Access Memory (RAM)
The read and write memory of a computer is called RAM. The RAM is a volatile memory,
means information written to it can be accessed as long as power is on.RAM holds data and
processing instructions temporarily until the CPU needs it. Scratchpad storage in memory
space is used for the temporary storage of data
Architecture
In the RAM architecture, the memory cells can be accessed for information from anywhere
in the computer system.
This communication between the different peripherals and RAM is achieved through data
input and output lines, control lines that specify the direction of transfer, and address
selection lines.
It contains n data input lines, n data output lines, k address lines and control inputs (i.e.,
Read and Write). The n data input lines carry into the memory the information to be stored,
and the n data output lines carry the stored information out of the memory. The k address
lines select the desired word from the 2^k words available in the memory. The two control
inputs determine the operation performed on the memory: the read input transfers binary
information out of the memory, and the write input transfers binary information into the
memory.
The memory unit is specified by the total number of words it holds and the number of bits
per word. The address lines select one word among the available words in the memory
through a k-bit address. An address is an identification number given to each word. When a
k-bit address is applied to the address lines, a particular word is selected from the memory
with the help of an internal decoder. Each memory cell of a typical memory device stores a
single bit of data, and the cells are typically arranged in an array.
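As a worked illustration (the figures are assumed, not taken from the notes): a RAM chip with
k = 10 address lines and n = 8 data lines provides 2^10 = 1024 words of 8 bits each, i.e. a
1024 x 8 memory holding 8192 bits (1 KB), and its internal decoder selects one of the 1024
words from the 10-bit address.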
Types of RAM
1. SRAM: Static Random-Access Memory
2. DRAM: Dynamic Random-Access Memory
3. VRAM: Video Random Access Memory
1. SRAM (Static Random-Access Memory): SRAM is a type of memory that uses multiple
transistors, usually around four to six, in each cell.
2. DRAM (Dynamic Random-Access Memory): Each memory cell in this type of memory pairs
a transistor with a capacitor and requires constant refreshing. Though its operation is more
complicated than SRAM's, the advantage of DRAM lies in the structural simplicity of its
memory cell: only one capacitor and one transistor are needed per bit, which makes it less
costly per bit.
3. VRAM (Video Random Access Memory): This type of memory is also known as Multiport
Dynamic Random Access Memory (MPDRAM) and is used specifically for 3D accelerators and
video adapters. VRAM is called multiport because it has two independent access ports,
which allows the graphics processor and the CPU to access the memory unit simultaneously.
VRAM also holds graphics-specific information such as 3D geometry data and texture maps,
and the VRAM specification of a device determines its resolution and colour depth. Most
systems nowadays use SGRAM (Synchronous Graphics RAM) instead, as it is less costly and
the performance is nearly the same.
ROMs
ROM is a non-volatile, read-only storage unit within electronic systems. It is used to store
information that does not change during its lifespan in the system, commonly referred to as
firmware.
Architecture of ROM
In ROM, the binary data is written only once, during the manufacturing process, and the
data written cannot be erased.
Block structure
1. The unit consists of k input lines and n output lines.
2. The k input lines take the input address from where we want to access the content
of the ROM.
3. Since each input line carries either 0 or 1 (binary form), the k input lines can address
2^k locations in total, and each of these addresses contains n bits of information, which
are given out as the output of the ROM. Such a memory is specified as a 2^k x n ROM.
Internal structure
The internal structure consists of two components: the decoder and OR (logic) gates.
1. The decoder is a combinational circuit used to convert an encoded form, such as
binary, into an understandable form, such as decimal. Within the ROM structure, the
input to the decoder is binary and the output is its decimal (one-of-2^k line)
equivalent.
2. All the OR logic gates take the outputs of the decoder as their inputs, as in the sketch
below.
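To make the decoder-plus-OR-gate structure concrete, here is a minimal Python sketch. It is not from the notes: the 4 x 4 contents, the function names decoder and rom_read, and the stored values are assumptions for illustration only.

```python
# A 4 x 4 ROM (k = 2, n = 4) modelled as a 2-to-4 decoder feeding OR gates
# through a fixed "programming" pattern. Contents are arbitrary example values.
contents = [0b1010, 0b0111, 0b0001, 0b1100]   # word stored at each address

def decoder(address, k=2):
    """k-to-2^k decoder: exactly one output line is 1 for the given address."""
    return [1 if i == address else 0 for i in range(2 ** k)]

def rom_read(address, n=4):
    """OR together the programmed bits on every active decoder line."""
    word = 0
    for line, stored in zip(decoder(address), contents):
        if line:                              # the selected row drives the OR gates
            word |= stored
    return [(word >> (n - 1 - b)) & 1 for b in range(n)]   # n output lines

print(rom_read(1))   # -> [0, 1, 1, 1], the word stored at address 1
```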
Types of ROM
ROM can be classified into the following:
1. MROM (Masked Read Only Memory)
2. PROM (Programmable Read Only Memory)
3. EPROM (Erasable Programmable Read Only Memory)
4. EEPROM (Electrically Erasable Programmable Read Only Memory)
1. MROM (Masked Read Only Memory): MROM is the original type of ROM and is
read-only; therefore, this memory unit cannot be modified.
2. PROM (Programmable Read Only Memory): This type of ROM can be programmed
once after the chip has been manufactured. However, once the chip has been
programmed, the information written is permanent and cannot be erased or
removed.
3. EPROM (Erasable Programmable Read Only Memory): This type of memory, which
was developed in 1971 by Dov Frohman, can be erased and reprogrammed only
when exposed to ultraviolet light; otherwise it cannot be modified and no new data
can be saved. These chips are no longer commonly used in computer systems and
have been replaced by EEPROM chips.
4. EEPROM (Electrically Erasable Programmable Read Only Memory): This type of
memory can be erased and reprogrammed using an electrical charge. EEPROM was
developed by George Perlegos at Intel in 1978. Like EPROM, it retains its data when
the system is powered off, but it can be erased and rewritten without removing the
chip. EEPROM is considered superior to PROM and EPROM and is used for the BIOS
(Basic Input Output System, which handles hardware initialisation during the
booting process) of computers designed after 1994. The use of EEPROM allows the
computer system to update the BIOS without the need to open the computer
system or remove any chips.
Advantages
1. It is non-volatile, which means the data written by the manufacturer is retained and
works as expected when the device is turned on.
2. Being static, ROMs do not need refreshing.
3. In comparison to RAM, the circuit is simpler.
4. Data can be stored permanently.
Disadvantages
1. It cannot be modified, as it is a read-only memory.
2. If any changes are required later, they are not possible.
Speed, Size and Cost
Memory hierarchy
There are various storage devices that allow data to be stored and accessed by the CPU.
Secondary storage devices include hard disk drives, optical disk drives and other devices.
Examples of primary memory are ROM and EPROM. The memory hierarchy of a computer is
organised as follows: the storage devices at higher levels have smaller capacity and are more
expensive, but they offer quicker access than the storage devices at the lower levels.
1. At the top of the memory hierarchy are the processor registers, because access to
the data stored in them is the fastest. So, they are at the top in terms of speed of
access.
2. The next level is the processor cache. It is a relatively small amount of memory that
can be implemented directly on the processor chip and holds copies of instructions
and data stored in a much larger memory that is provided externally. Cache has two
levels: the Level 1 (L1) cache resides on the processor chip, and the Level 2 (L2)
cache sits between the main memory and the processor.
3. The next level is the main memory. It is larger in size than processor cache but
relatively slow in speed. Dynamic memory components like SIMMs, DIMMs, or
RIMMs implement this large memory.
4. The last level is the magnetic disk, which offers a huge amount of storage at low
cost. Disks are significantly slower than the main memory.
Memories form an essential part of every computer. High storage capacity and fast
retrieval of data are important in measuring the performance of these memories. While
designing them, designers face the following three major trade-offs.
Issues to be Considered in its Design
1. If the capacity of a memory is increased, the access time will also be increased.
2. If access time is reduced, cost per bit will be increased.
3. If the capacity of a memory is increased, the cost per bit will be decreased.
Memory management is the process of controlling and coordinating the memory of a
computer, assigning portions known as blocks of memory to the various running programs
and operating-system applications so that they can carry out their operations and enhance
system performance.
The five memory management requirements are as follows:
1. Relocation
2. Protection
3. Sharing
4. Logical Organisation
5. Physical Organisation
1. Relocation: In a multiprogramming environment, the available main memory is
shared among a number of processes. The operating system can swap active
processes in and out of main memory to maximise processor utilisation by keeping a
large pool of ready processes to execute. Once a program has been swapped out to
disk, it would be quite limiting to insist that when it is next swapped back in, it must
be placed in the same region of main memory as before. If that location is occupied,
the process has to be relocated to a different area. Therefore, this requirement
supports the concept of modular programming.
2. Protection: Every process must be protected against unwanted interference by
other processes, whether accidental or intentional. Therefore, programs in other
processes must not be able to reference memory locations within a process, for
reading or writing, without permission. Hence, this requirement supports process
isolation, protection and access control.
3. Sharing: Processes that work together on some task may need to share access to
the same data structure. The memory management system should therefore allow
regulated access to shared areas of memory without compromising the necessary
protection. Hence, this requirement supports protection and access control.
4. Logical Organisation: Almost invariably, main memory in a computer system is
organised as a linear, or one-dimensional, address space consisting of a sequence of
bytes or words. Secondary memory, at its physical level, is similarly organised.
Although this organisation closely mirrors the actual machine hardware, it does not
correspond to the way programs are normally constructed. Most programs are
structured into modules, some of which are unmodifiable and some of which
contain data that may be modified. Therefore, this requirement supports the
concept of modular programming.
5. Physical Organisation: The system memory is organised into two levels: one level is
main memory and the other is secondary memory. Main memory offers faster
access, but it has a higher cost and is a volatile memory with less storage capacity.
Secondary memory is slower and cheaper, but it offers permanent, non-volatile
storage with a huge capacity. Therefore, this requirement supports long-term
storage and automatic allocation and management.
Use of Memory Management
The reasons for using memory management are as follows:
❖ It allows you to track how much memory needs to be allocated to processes and decides
which process should get memory at what time.
❖ It tracks whenever memory gets freed or unallocated and updates the status accordingly.
❖ It allocates space to application routines.
❖ It also makes sure that these applications do not interfere with each other.
❖ It helps protect different processes from each other.
❖ It places the programs in memory so that memory is utilised to its full extent.
In the direct mapping technique, the M-bit address is used to access the main memory and
the N-bit INDEX field is used to access the cache memory. Consider a 512 x 12 cache memory
and a 32K x 12 main memory; then,
Cache memory address = N bits = 9 bits [2^9 = 512]
Main memory address = M bits = 15 bits [2^15 = 32K]
Index field = N bits = 9 bits
Tag field = (M - N) bits = 15 - 9 = 6 bits
If w = 8 words per block,
Word field = log2(w) bits = log2(8) = 3 bits
Block field = (N - log2(w)) bits = 9 - 3 = 6 bits
When a 15-bit address is presented by the processor, the 9 bits in the INDEX field are used
to access a cache line (each cache line contains a tag and its associated data). The tag stored
in this cache line is then compared with the tag field of the given 15-bit address.
❖ If they match, the 3-bit word field is used to identify one of the 8 words in that line.
❖ If they do not match, the complete block of data is read from the main memory and
replaces the previous cache line (block), as in the sketch below.
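The following minimal Python sketch (not part of the notes; the field widths mirror the 32K x 12 / 512 x 12 figures above, and the helper name split_address is assumed) shows how a 15-bit address breaks into TAG, INDEX, BLOCK and WORD fields:

```python
# Splitting a 15-bit direct-mapped address: 6-bit TAG and 9-bit INDEX,
# where the INDEX further divides into a 6-bit BLOCK and a 3-bit WORD field.
M, N, W = 15, 9, 8                      # address bits, index bits, words per block

def split_address(addr):
    index = addr & ((1 << N) - 1)       # low 9 bits select the cache line
    tag = addr >> N                     # remaining 6 bits are stored as the tag
    word = index & (W - 1)              # low 3 bits of the index pick 1 of 8 words
    block = index >> 3                  # upper 6 bits of the index pick the block
    return tag, index, block, word

# Octal address 01003 from the example that follows: tag = 01, index = 003
tag, index, block, word = split_address(0o01003)
print(oct(tag), oct(index), block, word)    # 0o1 0o3 0 3
```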
Example
The above diagram shows that the cache memory stores BLOCK 0 from main memory. If
the CPU requires any word from this block, it can easily be read from the cache memory.
Suppose the CPU requires a word from BLOCK 1 with the address 01003. As the index
address is 003, the cache is accessed with that address. Then the TAG fields of the two
addresses are compared. They do not match, as the required address contains 01 as its tag
while the cache line contains 00 as its tag. Thus, the complete BLOCK 0 in the cache memory
is replaced with BLOCK 1 by accessing it from main memory.
The 12-bit tag field determines which main memory block is placed in the cache memory.
Hence, whenever the processor wants to find any data, it first examines the tag field to
determine whether the block exists in the cache memory.
Lw = s x (number of tag bits + number of bits in a data word) bits,
where s is the set size, i.e. the number of tag-data pairs held in each word of cache. Then,
the size of the cache memory will be (2^N x Lw). Thus, a cache memory that employs
set-associative mapping with set size s accommodates s words of main memory, each with
its own tag, in a single word of cache.
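For instance (a worked figure, not taken from the notes), with a set size of s = 3, 6-bit tags
and 12-bit data words as in the example below, Lw = 3 x (6 + 12) = 54 bits, so the cache
would hold 2^9 x 54 bits in total.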
Example
In the figure, a single index address will point to three different tag data pairs.
Consider a 512 x 12 cache memory and a 32K x 12 main memory, then,
Cache memory address = N bits = 9 bits [2^9 = 512]
Main memory address = M bits = 15 bits [2^15 = 32K]
Index field = N bits = 9 bits
Tag field = (M - N) bits = 15 - 9 = 6 bits
When a 15-bit address is generated by the CPU to access data, the cache memory is
accessed using the 9-bit index address. Suppose a data word at address 01003 is requested
by the CPU; then the cache line with index address 003 is accessed. At this address, there
are three different tag-data pairs. Thus, an associative search is carried out to compare the
tag of the required data word with the three different tags in the current cache line.
1. If a match is obtained, then its corresponding data word is accessed.
2. If a match does not occur and the set is full, then one of the tag-data pairs in the set
is replaced with a new tag-data pair using any one of the following replacement
algorithms (a rough lookup sketch follows the list).
➢ Random replacement algorithm
➢ First-In First-Out replacement algorithm (FIFO)
➢ Least Recently Used replacement algorithm (LRU)
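Below is a minimal Python sketch of such a set-associative lookup with LRU replacement. It is not taken from the notes: the class SetAssociativeCache, its parameters and the dummy fetch_from_memory callback are assumptions for illustration only.

```python
from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=512, set_size=3):
        self.set_size = set_size
        # each set maps tag -> data word; the OrderedDict order tracks recency of use
        self.sets = [OrderedDict() for _ in range(num_sets)]

    def access(self, index, tag, fetch_from_memory):
        s = self.sets[index]
        if tag in s:                          # hit: mark the pair most recently used
            s.move_to_end(tag)
            return s[tag]
        data = fetch_from_memory(index, tag)  # miss: read the word from main memory
        if len(s) >= self.set_size:           # set full: evict the least recently used pair
            s.popitem(last=False)
        s[tag] = data                         # insert the new tag-data pair
        return data

cache = SetAssociativeCache()
# hypothetical backing store: derive a dummy 12-bit word from the address fields
value = cache.access(index=0o003, tag=0o01, fetch_from_memory=lambda i, t: (t << 9) | i)
print(oct(value))   # 0o1003
```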
7. Discuss optical devices.
OR
What are optical devices? Explain their construction and principle of
operation.
Optical disks are external storage devices that store large amounts of data using optical
technology. The popular optical storage technologies are
➢ CD Technology
➢ DVD Technology
➢ Blu-ray Technology
The dark spots (pits) and bright spots (lands) on the disk surface create a binary pattern that
is represented as 0s and 1s. This pattern is then used by the disk drive to read the data stored
on the disk. The computer uses this binary pattern to interpret the data and can then read or
write files to the disk.