Unit 4-Memory System

The document discusses the basic concepts of memory systems, including the roles of RAM and ROM, data transfer between memory and processors, and the architecture of memory. It explains the types of semiconductor memory, including SRAM, DRAM, and various types of ROM, as well as the importance of cache memory in improving CPU performance. Additionally, it covers virtual memory and memory management requirements, emphasizing the translation of virtual addresses to physical addresses and the use of demand paging.

Uploaded by Praphulla Mukhi

1. Explain the basic concepts of a memory system.

Memory is the hardware component of a computer that stores information temporarily or
permanently. The amount of information that can be stored depends on the number of bytes
present in the memory.
Data Transfer Between Memory and Processor
Data transfer between the memory and the processor takes place using two registers:
1. Memory Address Register (MAR)
2. Memory Data Register (MDR)
The MAR is k bits long and the MDR is n bits long.
The memory unit may contain up to 2^k locations.
During a memory cycle, n bits of data are transferred between the memory and the
processor.

Memory and processor are connected through a processor bus. This bus consists of:
i. A k-bit address bus – it carries the address.
ii. An n-bit data bus – it carries the data.
iii. Control lines – these coordinate the data transfer, and include lines such as
Read/Write (R/W) and Memory Function Completed (MFC). Other control lines
provide information about the number of bytes to be transferred.
Writing Data to a Memory Location
To carry out the write operation, the processor
❖ Sets the R/W line to '0'.
❖ Places (loads) the address of the memory location where the data has to be written
into the MAR register.
❖ Places (loads) the given data into the MDR register.
Reading Data from a Memory Location
To carry out the read operation, the processor
❖ Sets the R/W line to '1'.
❖ Places the address of the memory location, where the required data is stored, into
the MAR register.
The memory then responds to the processor by:

• Placing the required data on the data bus.

• Setting the MFC signal to 1, to inform the processor that the required data has been
placed on the bus.
When the processor receives the MFC signal, it moves the data from the data bus into the
MDR register.
Apart from transferring a single data item from/to the memory, a block of data can also be
transferred. In a block transfer, the address placed in the MAR corresponds to the first
address of the block of data.
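The MAR/MDR handshake described above can be sketched in a few lines of Python. This is a minimal illustrative model, not an actual processor interface; the `Memory` class and its method names are invented for this sketch.

```python
class Memory:
    """A toy memory of 2**k word locations, each n bits wide."""
    def __init__(self, k, n):
        self.k, self.n = k, n
        self.words = [0] * (2 ** k)    # 2^k addressable locations

    def read(self, mar):
        """Processor places the address in MAR and sets R/W = 1;
        memory returns the word on the data bus and raises MFC."""
        mdr = self.words[mar]          # data placed on the data bus
        mfc = 1                        # Memory Function Completed
        return mdr, mfc

    def write(self, mar, mdr):
        """Processor places the address in MAR, the data in MDR, sets R/W = 0."""
        self.words[mar] = mdr & ((1 << self.n) - 1)   # keep only n bits

mem = Memory(k=16, n=8)    # 2^16 = 65,536 one-byte locations
mem.write(0x1234, 0xAB)
data, mfc = mem.read(0x1234)
print(data, mfc)           # -> 171 1  (0xAB returned, MFC raised)
```

Note how the k-bit MAR fixes the number of addressable locations (2^k) while the n-bit MDR fixes the width of each transfer, exactly as in the description above.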

2. Discuss semiconductor RAMs and ROMs, and their speed, size and cost.
Semiconductor memory is an electronic device used to store data and is also used as
computer memory. It is referred to as primary memory, as the CPU usually accesses
information (code and data) from it.
Types of semiconductor memory
Electronic semiconductor memory technology can be split into two main types or categories
according to the way in which the memory operates:
1. RAM - Random Access Memory
2. ROM - Read Only Memory
Random Access Memory (RAM)
The read/write memory of a computer is called RAM. RAM is a volatile memory, meaning
that information written to it can be accessed only as long as the power is on. RAM holds
data and processing instructions temporarily until the CPU needs them. Scratchpad storage
in the memory space is used for the temporary storage of data.
Architecture
In RAM, any memory cell can be accessed directly for information, regardless of its position
in the array.
Communication between the different peripherals and the RAM is achieved by data input
and output lines, control lines that specify the direction of transfer, and address selection
lines.
It contains n data input lines, n data output lines, k address lines and control inputs (i.e.,
Read and Write). The n data input lines carry into the memory the information to be stored,
and the n data output lines carry the stored information out of the memory.
The k address lines select the desired word from the 2^k words available in the memory. The
two control inputs determine the operation performed on the memory: the write input
transfers binary information into the memory, and the read input transfers binary
information out of it.
The specification of a memory unit gives the total number of available words and the
number of bits per word. The address lines select one word among the available words in
the memory through a k-bit address. An address is an identification number given to each
word. When a k-bit address is applied to the address lines, a particular word is selected
from the memory with the help of an internal decoder. Each memory cell of a typical
memory device stores a single bit of data, and the cells are typically arranged in an array.
Types of RAM
1. SRAM: Static Random-Access Memory
2. DRAM: Dynamic Random-Access Memory
3. VRAM: Video Random Access Memory
1. SRAM (Static Random-Access Memory): SRAM is a type of memory that uses multiple
transistors, usually around 4 to 6, in each cell, and does not need refreshing.
2. DRAM (Dynamic Random-Access Memory): The memory cells in this type of memory pair
a transistor with a capacitor and require constant refreshing. Although DRAM is slower than
SRAM, its advantage lies in the structural simplicity of its memory cells: each bit requires
only one capacitor and one transistor, which makes it less costly per bit.
3. VRAM (Video Random Access Memory): This type of memory is also known as "Multiport
Dynamic Random Access Memory" (MPDRAM) and is used specifically for 3D accelerators
and video adapters. VRAM is called multiport because it has two independent access ports,
which allow the graphics processor and the CPU to access the memory unit simultaneously.
VRAM also holds graphics-specific information such as 3D geometry data and texture maps,
and the VRAM specifications of a device determine its resolution and colour depth. Most
systems nowadays use SGRAM (Synchronous Graphics RAM) instead, as it is less costly and
its performance is nearly the same.
ROMs
ROM is a non-volatile read only storage unit within electronic systems, which is used to
store information that doesn't change during its lifespan in the system, referred to as
firmware.
Architecture of ROM

In a ROM, the binary data is written only once, during the manufacturing process, and the
data written cannot be erased.
Block structure
1. The unit consists of k input lines and n output lines.
2. The k input lines take the input address from which we want to access the contents
of the ROM.
3. Since each input line is either 0 or 1 (binary form), the k input lines can address
2^k locations in total, and each of these locations contains n bits of information,
which are given out as the output of the ROM. Such a memory is specified as a
2^k × n ROM.
Internal structure
The internal structure consists of two components: the decoder and OR (logic) gates.
1. The decoder is a combinational circuit used to convert an encoded form, such as
binary, into an understandable form, such as decimal. Within the ROM structure,
the input to the decoder is binary and the output is represented in decimal form.
2. All the OR logic gates take the outputs of the decoder as their inputs.
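Functionally, the decoder-plus-OR-gate structure described above makes a 2^k × n ROM behave as a fixed lookup table: the k-bit address selects exactly one row, and that row's n bits appear at the output. A small sketch, using an 8 × 4 ROM (k = 3, n = 4) with hypothetical contents:

```python
# Contents fixed at "manufacture" time: 8 words of 4 bits each.
rom = [0b0000, 0b0011, 0b0101, 0b0110,
       0b1001, 0b1010, 0b1100, 0b1111]

def rom_read(address):
    # the internal decoder selects exactly one of the 2^3 = 8 words
    return rom[address & 0b111]

print(format(rom_read(0b101), '04b'))   # -> 1010
```

Reading is the only supported operation; there is deliberately no write path, matching the read-only nature of ROM.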
Types of ROM
ROM can be classified into the following:
1. MROM (Masked Read Only Memory)
2. PROM (Programmable Read Only Memory)
3. EPROM (Erasable Programmable Read Only Memory)
4. EEPROM (Electrically Erasable Programmable Read Only Memory)
1. MROM (Masked Read Only Memory): MROM is the original type of ROM; it is read-
only, so this memory unit cannot be modified.
2. PROM (Programmable Read Only Memory): This type of ROM can be programmed
once, after the chip has been created. However, once the chip has been
programmed, the information written is permanent and cannot be erased or
removed.
3. EPROM (Erasable Programmable Read Only Memory): This type of memory, which
was developed in 1971 by Dov Frohman, can be erased (and then reprogrammed)
only by exposure to ultraviolet light; otherwise it cannot be modified and no new
data can be saved. These chips are no longer commonly used in computer systems
and have been replaced by EEPROM chips.
4. EEPROM (Electrically Erasable Programmable Read Only Memory): This type of
memory can be erased and reprogrammed using an electrical charge. EEPROM was
developed in 1978 by George Perlegos while he was at Intel. The edge that EEPROM
has is that it retains its data when the system is powered off. EEPROM is considered
superior to PROM and EPROM and is used for the BIOS (Basic Input/Output System,
which handles hardware initialisation during the booting process) of computers
designed after 1994. The use of EEPROM allows the computer system to update the
BIOS without the need to open the computer and remove any chips.
Advantages
1. It is non-volatile: data written by the manufacturer will function as expected
whenever the device is turned on.
2. Due to them being static, they don't need a refreshing time.
3. In comparison to RAM, the circuit is simpler.
4. Data can be stored permanently.
Disadvantages
1. It cannot be modified, as it is a read-only memory.
2. If any changes are required, they cannot be made.
Speed, Size and Cost
Memory hierarchy
There are various storage devices that allow data to be stored and accessed by the CPU.
Secondary storage devices include hard disk drives, optical disk drives and other devices.
Examples of primary memory are ROM and EPROM. The memory hierarchy of a computer is
organised as follows: the storage devices at the higher levels have smaller capacity and are
more expensive, but have quicker access, compared with the storage devices at the lower
levels.
1. At the top of the memory hierarchy are the processor registers, because access to
the data stored in them is the fastest; they are at the top in terms of speed of
access.
2. The next level is the processor cache. It is a relatively small amount of memory that
can be implemented directly on the processor chip and holds copies of instructions
and data stored in a much larger memory that is provided externally. The cache has
two levels: the Level 1 (L1) cache is on the processor chip, and the Level 2 (L2) cache
sits between the main memory and the processor.
3. The next level is the main memory. It is larger in size than processor cache but
relatively slow in speed. Dynamic memory components like SIMMs, DIMMs, or
RIMMs implement this large memory.
4. The last level is the magnetic disk, which offers huge amounts of storage at low
cost. Disks are significantly slower than the main memory.
Memories form an essential part of every computer. High storage capacity and fast
retrieval of data are important measures of the performance of these memories. While
designing memories, designers face the following three major trade-offs.
Issues to be Considered in Memory Design
1. If the capacity of a memory is increased, the access time will also be increased.
2. If access time is reduced, cost per bit will be increased.
3. If the capacity of a memory is increased, the cost per bit will be decreased.

3. Write about cache memory and its performance considerations.


Cache memory is a small, high-speed memory that acts as a buffer between the CPU and
main memory (RAM). It can be accessed by the CPU at much faster speeds than main
memory.
Location of Cache Memory
1. Cache memory lies on the path between the CPU and the main memory.
2. It facilitates the transfer of data between the processor and the main memory at a
speed that matches the speed of the processor.
3. Data is transferred in the form of words between the cache memory and the CPU.
4. Data is transferred in the form of blocks or pages between the cache memory and
the main memory.

Purpose of Cache Memory


1. The fast speed of cache memory makes it extremely useful.
2. It is used to bridge the speed mismatch between the fast CPU and the slower main
memory.
3. It prevents CPU performance from suffering due to the slower speed of the main
memory.
Levels of Cache Memory
There can be various levels of cache memory, which are as follows:
➢ Level 1 (L1) or Registers: These store and provide access to data held directly in
the CPU, for example the instruction register, program counter, accumulator,
address register, etc.
➢ Level 2 (L2) or Cache Memory: This is a fast memory that stores data temporarily
for quick access by the CPU and has a very short access time.
➢ Level 3 (L3) or Main Memory: This is the main memory, where the computer stores
all the current data. It is volatile, which means that it loses its data on power-off.
➢ Level 4 (L4) or Secondary Memory: This is slow in terms of access time, but data
stays permanently in this memory.
Types of Cache Memory
There are two types as follows:
1. Primary Cache: It is always located on the processor chip, and its access time is
comparable to that of the processor.
2. Secondary Cache: This memory is placed between the primary cache and the main
memory; it is also called the Level 2 (L2) cache.
Advantages
➢ It is faster than the main memory.
➢ The access time is quite less in comparison to the main memory.
➢ The speed of accessing data increases; hence, the CPU works faster.
➢ Moreover, the performance of the CPU also improves.
➢ Recently used data is stored in the cache, and therefore outputs are produced
faster.
Disadvantages
➢ It is quite expensive.
➢ The storage capacity is limited.
Performance Consideration
The performance of the cache is measured in terms of the hit ratio. The CPU searches for
the data in the cache whenever it needs to read or write data from main memory. Two
cases may occur:
1. If the CPU finds the data in the cache, a cache hit occurs and it reads the data from
the cache.
2. If it does not find the data in the cache, a cache miss occurs. During a cache miss,
the cache loads the data from main memory, and the data is then read from the
cache.
The hit ratio is defined as the number of hits divided by the sum of hits and misses:
Hit ratio = Number of hits / Number of attempted accesses = hit / (hit + miss)
For high-performance systems, the hit ratio should be ≥ 0.9.
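The hit-ratio definition above is easy to check numerically. The sketch below also computes the average access time t_avg = h·t_cache + (1 − h)·t_main, a standard companion figure that is not stated in the text but follows directly from the hit/miss cases; the timing values are invented for illustration.

```python
def hit_ratio(hits, misses):
    """Hit ratio = hits / (hits + misses), as defined above."""
    return hits / (hits + misses)

def avg_access_time(h, t_cache, t_main):
    """Average access time: hits cost t_cache, misses cost t_main."""
    return h * t_cache + (1 - h) * t_main

h = hit_ratio(hits=950, misses=50)
print(h)                                       # -> 0.95 (meets the >= 0.9 target)
print(round(avg_access_time(h, 1, 100), 2))    # -> 5.95, assuming 1 ns cache, 100 ns RAM
```

Note how strongly the average depends on the hit ratio: with a 100:1 speed gap, even 5% misses dominate the average access time, which is why hit ratios ≥ 0.9 are demanded.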
Design Consideration for Memory Systems
The choice of a RAM chip for a given application depends on several factors like speed,
power dissipation, size of the chip, availability of block transfer feature etc.
1. Bipolar memories are generally used when very fast operation is the primary
requirement. High power dissipation in bipolar circuits makes it difficult to realise
high bit densities.
2. Dynamic MOS memory is the predominant technology used in the main memories of
computers, because its high bit density makes it possible to implement large
memories economically.
3. Static MOS memory chips have higher densities and slightly longer access times
compared to bipolar chips. They have lower densities than dynamic memories but
are easier to use because they do not require refreshing.

4. Explain virtual memory and the memory management requirements.


Virtual Memory
• Virtual memory is a storage allocation scheme that allows secondary memory
to be addressed as if it were part of the main memory, effectively extending
the capacity of the physical main memory so that large programs and data
can be handled.
• This is done by using a portion of secondary memory as main memory. The
technique is implemented using both hardware and software, mapping
virtual addresses onto physical memory addresses.
• All memory references within a process are dynamically translated into
physical addresses at run time.
• Virtual memory is implemented using demand paging or demand
segmentation.
Virtual memory organisation
• Virtual memory techniques move programs and data blocks from secondary
storage into main memory when they are needed.
• The processor issues binary addresses for instructions or data, which are
translated into physical addresses by a combination of hardware and
software components. These binary addresses are called virtual or logical
addresses.
• If a virtual address refers to a program or data segment that is in main
memory, its contents are accessed directly.
• If it refers to a part of a program that is not in main memory, its contents
are moved from secondary storage to main memory before access.
• The Memory Management Unit (MMU) converts each virtual address into its
physical address; if the required data is not in main memory, the operating
system moves the data from the disk to main memory using the DMA
scheme.

Address Translation in Virtual Memory


In a simple method of translating virtual addresses into physical addresses, it is assumed
that all programs and data are divided into units of fixed length called pages.
A virtual-memory address-translation method based on the concept of fixed-length pages is
depicted in the figure below.
➢ Data moves between main memory and virtual (secondary) memory in the form of
pages, and the size of each page is decided by the system.
➢ The virtual address consists of two fields: the virtual page number and the offset.
➢ The offset gives the location of a specific word or byte within a page.
➢ Information about the main-memory location of each page is kept in a page table.
➢ An area in main memory that can hold one page is called a page frame.
➢ The starting address of the page table is stored in a page table base register.
➢ By adding the virtual page number to the contents of this register, the address of
the corresponding entry in the page table is obtained.
➢ If the page is in main memory, this entry contains its starting address.
➢ The MMU uses the page-table data for every read and write operation. Since the
page table may be large and the MMU is on the processor chip (together with the
primary cache), the complete page table cannot be included on the chip.
➢ A small cache called the Translation Lookaside Buffer (TLB) is therefore included in
the MMU for accessing page-table entries.
➢ A TLB entry is divided into two parts, a key and a value. When a key is presented to
the TLB, it is looked up simultaneously in all entries (a typical property of
associative memory), and the corresponding value field is returned if the key is
found (a TLB hit).
➢ Otherwise, if it is not found (a TLB miss), the page table in main memory is used to
map the logical address to a physical address.
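The translation path above — split the virtual address, try the TLB, fall back to the page table — can be sketched as follows. The page size (4 KiB) and the page-table contents are invented for illustration; a real MMU does this in hardware.

```python
PAGE_BITS = 12                      # assume 4 KiB pages -> 12-bit offset
page_table = {0: 5, 1: 9, 2: 3}     # virtual page number -> physical frame
tlb = {}                            # small cache of recent translations

def translate(vaddr):
    vpn = vaddr >> PAGE_BITS                 # virtual page number field
    offset = vaddr & ((1 << PAGE_BITS) - 1)  # offset field (unchanged)
    if vpn in tlb:                           # TLB hit: no page-table access
        frame = tlb[vpn]
    else:                                    # TLB miss: consult the page table
        frame = page_table[vpn]
        tlb[vpn] = frame                     # cache the (key, value) pair
    return (frame << PAGE_BITS) | offset

print(hex(translate(0x1234)))   # vpn = 1 maps to frame 9 -> 0x9234
```

Only the page-number field is translated; the offset passes through untouched, which is exactly why fixed-length pages make translation simple.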
Advantages
➢ Several processes can share system libraries by mapping them into their virtual
address spaces. These libraries are stored as pages in physical memory.
➢ Inter-process communication can be achieved by sharing virtual memory among
several processes.
➢ Process creation can be sped up by sharing pages through the fork() system call.
Disadvantages
➢ Application switching speed is reduced when virtual memory is used.
➢ If virtual memory is used, less hard-disk space is available.
➢ The performance of virtual memory is lower than that of primary memory.
➢ The stability of the system is reduced as the complexity of the software increases.
➢ Virtual memory algorithms are difficult to implement.

Memory Management Requirements

Memory management is the process of controlling and coordinating the memory of a
computer: assigning portions of memory, known as blocks, to the various running programs
and operating-system applications so that they can carry out their operations and enhance
system performance.
The five memory management requirements are as follows:
1. Relocation
2. Protection
3. Sharing
4. Logical Organisation
5. Physical Organisation
1. Relocation: In a multiprogramming environment, the available main memory is
shared among a number of processes. Active processes are swapped in and out of
main memory to maximise processor utilisation by maintaining a large pool of
ready processes to execute. If a program has been swapped out to disk, it would be
quite limiting to insist that, when it is next swapped back in, it must be placed in
the same region of main memory as before; if that location is occupied, the process
has to be relocated to a different area. This requirement therefore implies that
memory references must be translated into actual physical addresses at run time.
2. Protection: Every process must be protected against unwanted interference from
other processes, whether accidental or intentional. Programs in other processes
must not be able to reference memory locations within a process, for reading or
writing, without permission. This requirement therefore supports process isolation,
protection and access control.
3. Sharing: Processes that cooperate on some task may need to share access to the
same data structure. The memory management system should therefore allow
controlled access to shared areas of memory without compromising essential
protection. This requirement supports protection and access control.
4. Logical Organisation: Almost invariably, main memory in a computer system is
organised as a linear, one-dimensional address space consisting of a sequence of
bytes or words. Secondary memory, at its physical level, is similarly organised.
Although this organisation closely mirrors the actual machine hardware, it does not
correspond to the way programs are normally constructed. Most programs are
structured into modules, some of which are unmodifiable and some of which
contain data that may be modified. This requirement therefore supports the
concept of modular programming.
5. Physical Organisation: The system memory is organised into two levels: main
memory and secondary memory. Main memory offers faster access but at a higher
cost; it is volatile and has less storage capacity. Secondary memory is slower and
cheaper, but it offers permanent, non-volatile storage with huge capacity. This
requirement therefore supports long-term storage and automatic allocation and
management.
Use of Memory Management
The reasons for using memory management are as follows:
➢ It controls how much memory is allocated to processes and decides which process
should get memory at what time.
➢ It tracks whenever memory is freed or unallocated and updates the status
accordingly.
➢ It allocates space to application routines.
➢ It makes sure that applications do not interfere with each other.
➢ It helps protect processes from each other.
➢ It places programs in memory so that memory is utilised to its fullest extent.

5. Explain secondary storage.


Secondary storage devices are non-volatile and are used to store huge amounts of data.
Computers usually use input/output channels to access data on secondary devices and
transfer it to an intermediate area in main memory. Data on secondary storage devices can
be accessed in milliseconds.
Classification of Secondary Storage
Secondary storage devices are generally separated into three types:
1. Magnetic storage devices, such as hard disk drives.
2. Optical storage devices, such as CD, DVD and Blu-ray discs.
3. Solid-state storage devices, such as solid-state drives and USB memory sticks.
1. Magnetic devices: Magnetic devices such as hard disk drives use magnetic fields to
magnetise tiny individual sections of a spinning metal disk. Each tiny section represents one
bit. A magnetised section represents a binary '1' and a demagnetised section represents a
binary '0'. These sections are so tiny that disks can contain terabytes (TB) of data.
As the disk spins, a read/write head moves across its surface. To write data, the head
magnetises or demagnetises the section of the disk spinning under it. To read data, the
head notes whether the section is magnetised or not.
Magnetic devices are fairly cheap, high in capacity and durable. However, they are
susceptible to damage if dropped. They are also vulnerable to magnetic fields. A strong
magnet might possibly erase the data the device holds.
2. Optical devices: Optical devices use a laser to scan the surface of a spinning disc made
from metal and plastic. The disc surface is divided into tracks, with each track containing
many flat areas and hollows. The flat areas are known as lands and the hollows as pits.
When the laser shines on the disc surface, lands reflect the light back, whereas pits scatter
the laser beam. A sensor looks for the reflected light. Reflected light (lands) represents a
binary '1' and no reflection (pits) represents a binary '0'.
Optical media also come in different types:
➢ ROM media have data pre-written on them that cannot be overwritten. Music,
films, software and games are often distributed this way.
➢ R media are blank. An optical device writes data to them by shining a laser onto the
disc; the laser burns pits to represent '0's. The media can be written to only once,
but read many times. Copies of data are often made using these media.
➢ RW media work in a similar way to R media, except that the disc can be written to
more than once.
3. Solid-state devices: Solid-state devices use non-volatile (flash) memory to store data
indefinitely. They tend to have much faster access times than other types of device and,
because they have no moving parts, are more durable.
Since this type of memory is expensive, solid-state devices tend to be smaller in capacity
than other types. For example, a solid-state drive that holds 256 GB might cost as much as
a hard disk with several terabytes of capacity.
Solid-state devices require little power, which makes them ideal for portable devices where
battery life is a major consideration. They are also portable due to their small size and
durability.

6. Explain the various mapping techniques associated with cache memories.
Basically, there are three mapping techniques of primary interest:
a) Direct mapping
b) Associative mapping
c) Set-associative mapping
In discussing these three techniques, the cache size is taken to be 128 blocks of 16 words
each, and the main memory is assumed to have 4K blocks, each block likewise holding
16 words.
a) Direct Mapping
Direct mapping is a simple method that maps each main-memory block to exactly one
cache line. If a line is already occupied by a memory block and a new block must be
loaded, the old block is replaced. The memory address is viewed as an index field and
a tag field: the tag is stored in the cache alongside the data, while the index selects
the cache line.
Formula used in direct mapping
i = j modulo m
Where,
i - Cache line number
j - Main memory block number
m - Number of lines in the cache
In direct mapping, line offset is viewed as index bits.
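The formula i = j mod m can be checked directly with the section's own numbers: a cache of m = 128 lines and main-memory block numbers j.

```python
m = 128                          # number of lines in the cache
for j in (0, 1, 127, 128, 255, 4095):
    print(j, '->', j % m)        # cache line i = j mod m
# blocks 0 and 128 both map to line 0, so they evict each other
```

This collision behaviour (many blocks competing for one line) is the main weakness of direct mapping, which the associative schemes below address.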
Addresses can be analysed as follows. If the cache memory contains 2^N words, then an
N-bit address is required to access the cache memory. If the main memory contains 2^M
words, then an M-bit address is required to access the main memory.
A main-memory address therefore contains two fields, a TAG field and an INDEX field:
(M − N) bits form the TAG field and N bits form the INDEX field. The INDEX field is in turn
divided into a BLOCK field and a WORD field. If the direct-mapping organisation uses a
block size of w words, then the BLOCK field contains (N − log₂ w) bits and the WORD field
contains log₂ w bits. All the words in a block share the same TAG field.

In this mapping technique, the M-bit address accesses the main memory and the N-bit
INDEX field accesses the cache memory. Consider a 512 × 12 cache memory and a
32K × 12 main memory. Then:
Cache memory address = N bits = 9 bits [2^9 = 512]
Main memory address = M bits = 15 bits [2^15 = 32K]
Index field = N bits = 9 bits
Tag field = (M − N) bits = 15 − 9 = 6 bits
If w = 8 words,
Word field = log₂ w = log₂ 8 = 3 bits
Block field = (N − log₂ w) bits = 9 − 3 = 6 bits
When a 15-bit address is issued, the 9 bits of the INDEX field are used to access a cache
line (each cache line contains a tag and its associated data). The tag stored in that cache
line is then compared with the tag of the given 15-bit address.
❖ If they match, the 3-bit WORD field is used to select one of the 8 words in that
line.
❖ If they do not match, the complete block of data is read from the main memory
and replaces the previous contents of the cache line (block).
Example

The above diagram shows the cache memory holding BLOCK 0 from main memory. If the
CPU requires any word from that block, it can be read directly from the cache memory.
Suppose the CPU then requires a word from BLOCK 1 at address 01003. Since the index
part of the address is 003, the cache is accessed at that index, and the TAG fields of the
two addresses are compared. They do not match: the required address has 01 as its tag,
while the cache line has 00 as its tag. Thus the complete BLOCK 0 in cache memory is
replaced with BLOCK 1, which is fetched from main memory.
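The field split used in this example can be sketched in Python. With the 512 × 12 cache and 32K × 12 main memory above, the 15-bit address is written as five octal digits: the two leading octal digits (6 bits) are the TAG and the last three (9 bits) are the INDEX.

```python
TAG_BITS, INDEX_BITS = 6, 9          # 15-bit address = 6-bit tag + 9-bit index

def split(addr):
    """Split a 15-bit main-memory address into (tag, index)."""
    index = addr & ((1 << INDEX_BITS) - 1)   # low 9 bits
    tag = addr >> INDEX_BITS                 # high 6 bits
    return tag, index

tag, index = split(0o01003)          # the address requested by the CPU
print(oct(tag), oct(index))          # -> 0o1 0o3  (tag 01, index 003)
```

The mismatch in the example is exactly this comparison: `split` yields tag 01 for the requested address, while the line at index 003 holds tag 00, so the block is replaced.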

b) Associative mapping technique


In this type of mapping, both the contents and the addresses of memory words are stored
in an associative memory. Any line of the cache can hold any block. The word-id bits
specify which word within the block is needed, and the tag comprises all the remaining
address bits. This means that any block can be placed anywhere in the cache memory. It is
considered the fastest and most flexible mapping method. In associative mapping there is
no index field.
Here, the memory address is 16 bits and consists of two major fields: the tag field (12 bits)
and the word field (4 bits).

The 12-bit tag field identifies the main-memory block placed in the cache memory. Hence,
whenever the processor wants to find any data, it first examines the tag fields to
determine whether the block exists in the cache memory.
c) Set Associative Mapping Technique


A set-associative mapping technique is a combination of the associative and direct
mapping techniques.
In this mapping technique, the cache memory is divided into a number of sets. A single set
contains one or more tag-data pairs in one word of cache, so a single index address points
to several tag-data pairs.
If the index address contains N bits, then it can point to 2^N words, and the length of a
cache word, Lw, will be:

Lw = Number of tag-data pairs per set × (Number of tag bits + Number of bits in a data word) bits

The size of the cache memory is then (2^N × Lw). Thus, a cache memory that employs
set-associative mapping contains a number of sets, and each set holds several memory
words (tag-data pairs) in a single word of cache.
Example

In the figure, a single index address points to three different tag-data pairs.
Consider a 512 × 12 cache memory and a 32K × 12 main memory. Then:
Cache memory address = N bits = 9 bits [2^9 = 512]
Main memory address = M bits = 15 bits [2^15 = 32K]
Index field = N bits = 9 bits
Tag field = (M − N) bits = 15 − 9 = 6 bits
When a 15-bit address is generated by the CPU to access data, the cache memory is
accessed using the 9-bit index. Suppose a data word at address 01003 is requested by the
CPU; then the cache line with index address 003 is accessed. At this address there are
three different tag-data pairs, so an associative search is carried out, comparing the tag of
the required data word with the three tags in the current cache line.
1. If a match is obtained, the corresponding data word is accessed.
2. If no match occurs and the set is full, one of the tag-data pairs in the set is
replaced with a new pair using one of the following replacement algorithms:
➢ Random replacement algorithm
➢ First-In First-Out replacement algorithm (FIFO)
➢ Least Recently Used replacement algorithm (LRU)
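The set-associative lookup and the LRU replacement policy (the last of the three listed above) can be sketched as follows. The set count and associativity here are invented for illustration; `OrderedDict` stands in for the hardware's LRU bookkeeping.

```python
from collections import OrderedDict

NUM_SETS, WAYS = 8, 2              # a small 2-way set-associative cache

sets = [OrderedDict() for _ in range(NUM_SETS)]   # per-set tag -> data pairs

def access(block, data=None):
    """Return 'hit' or 'miss'; on a miss, evict the LRU pair if the set is full."""
    s = sets[block % NUM_SETS]     # index field selects the set
    tag = block // NUM_SETS        # remaining bits form the tag
    if tag in s:
        s.move_to_end(tag)         # refresh LRU order on a hit
        return 'hit'
    if len(s) >= WAYS:
        s.popitem(last=False)      # evict the least recently used pair
    s[tag] = data
    return 'miss'

# blocks 3, 11 and 19 all map to set 3; with only 2 ways,
# loading 19 evicts the least recently used of the other two (11)
trace = [access(b) for b in (3, 11, 3, 19, 11)]
print(trace)   # -> ['miss', 'miss', 'hit', 'miss', 'miss']
```

Swapping the eviction line for a random choice or a FIFO queue would give the other two replacement algorithms listed above.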
7. Discuss optical devices.
OR
What are optical devices? Explain their construction and principle of
operation.
Optical disks are external storage devices used to store large amounts of data using optical
technology. The popular optical storage technologies are:
➢ CD technology
➢ DVD technology
➢ Blu-ray technology

❖ The bottom layer of the disk consists of polycarbonate plastic. The thickness of an
optical disk comes mainly from this polycarbonate layer, which is about 1.2 mm
thick.
❖ The disk carries spiral sections known as sectors. These sectors contain tiny
dot-like regions referred to as pits; the regions that remain unaffected by the laser
beam are the lands.
❖ These sectors are covered by a thin layer of aluminium, which is in turn covered by
a protective acrylic.
❖ Finally, the topmost layer is deposited and stamped with a label.
❖ The laser source and the photodetector are placed below the polycarbonate
plastic.
❖ When the laser is focused on the disk, the emitted beam travels through the
plastic, reflects off the aluminium layer, and travels back toward the
photodetector, through which the data is read.
Laser Beam Transition from Pit to Land
❖ When the laser beam leaves its source, it strikes either a pit or a land.
❖ When the beam falls entirely on a pit or on a land, it is smoothly reflected back
and is caught by special circuitry called a photodetector.
❖ A bright spot is seen whenever the photodetector catches the reflected beam.
Because the disc is spinning, the laser may also hit an edge where there is a
transition from pit to land or from land to pit.
❖ In these cases the laser beam is not reflected smoothly; instead it is deflected and
never reaches the detector, which registers a dark spot.
The representation of transitions in the form of binary data is as shown in figure below

These dark and bright spots create a binary pattern represented as 0s and 1s. The disk
drive uses this pattern to read the data stored on the disk, and the computer interprets it
to read files from, or write files to, the disk.
