Unit - 4 Question and Answers
ESSAY QUESTIONS:
1. Explain asynchronous data transfer methods in detail.
In a computer system, two units such as the CPU and an I/O interface need to exchange data, and the
way they transfer it depends on how their clocks (timing signals) work. The transfer can be either
synchronous or asynchronous. In
synchronous transfer, the registers of both units share a common clock signal. This means data
transfer happens in step with the same clock pulses, making it simple to design but requiring both
units to work at the same speed. In asynchronous transfer, the CPU and I/O interface use separate
clocks, and their operations are independent. In that case, the two units are said to be asynchronous to
each other. This approach is widely used in most computer systems.
The main problem with asynchronous I/O is that it is hard to make sure that the CPU and I/O device
are working together at the right moment. Since there is no common clock or fixed time slot for
sending and receiving data, the CPU cannot be sure whether the data present on the data bus is new
(fresh) or old (stale). If the CPU reads too early, it may get wrong data because the I/O device has not
yet placed the correct data on the bus. If it reads too late, the data might have already changed. This
can cause errors in data transfer.
This problem is solved by one of the following mechanisms:
1. Strobe
2. Handshaking
Strobe Control:
In asynchronous data transfer, the two units (like CPU and I/O device) work with different clocks, so
they need a way to coordinate data transfer. To do this, they use control signals to tell each other
when data is ready to be sent or received. One simple method is using a strobe pulse. A strobe pulse
is a short signal sent by one unit to the other to indicate that the data on the data bus is valid and
should be read (or written).
The strobe control method is a simple way of transferring data between two units (like CPU and I/O
device) when they do not share a common clock. In this method, a single control line called the
strobe line is used to indicate the timing of data transfer.
The strobe signal is a short pulse that tells the other unit when to read or write the data. The strobe
can be generated by either:
Source-initiated strobe: The sending unit (source) places data on the data bus and activates the
strobe to inform the receiving unit to read the data.
Destination-initiated strobe: The receiving unit (destination) activates the strobe when it is
ready to accept data, asking the source to place data on the bus.
Source-initiated strobe:
Figure 11-3(a) shows a source-initiated transfer. The data bus carries the binary information from the
source unit (sender) to the destination unit (receiver). Typically, the bus has multiple lines so that an
entire byte or word can be transferred at once. Along with the data bus, there is a single strobe line
that informs the destination unit when a valid data word is available on the bus.
The process works as follows: First, the source places the data on the data bus. After a small delay
to ensure that the data settles to a steady value, the source activates the strobe pulse. This pulse stays
active for enough time so that the destination can read the data safely. Usually, the destination unit
reads the data on the falling edge of the strobe pulse (when it changes from high to low). After this,
the source turns off the strobe signal, which means that the data bus no longer carries valid data. The
source may leave the old data on the bus, but it is ignored because the strobe is no longer active. Only
when a new strobe pulse is sent will the destination treat the data as valid again.
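The steps above can be sketched as a small simulation. The `Bus` class and function names below are illustrative stand-ins, not any real hardware API; the strobe is modeled as a flag the destination checks before latching data.

```python
# Minimal sketch of a source-initiated strobe transfer (illustrative names).
# The shared bus is a mutable cell; the strobe flag marks the data as valid.

class Bus:
    def __init__(self):
        self.data = None      # data lines
        self.strobe = False   # single strobe line

def source_send(bus, word):
    bus.data = word       # 1. place data on the data bus
    bus.strobe = True     # 2. activate the strobe once the data has settled

def destination_read(bus):
    if not bus.strobe:    # data is valid only while the strobe is active
        return None       # stale or absent data is ignored
    return bus.data       # 3. latch the data word

bus = Bus()
source_send(bus, 0b10110010)
received = destination_read(bus)
bus.strobe = False        # 4. source disables the strobe; bus data is now stale
```

After the strobe is disabled, `destination_read` ignores whatever is left on the bus, mirroring the text: old data may remain but is not treated as valid until a new strobe pulse arrives.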
Destination-initiated strobe:
Figure 11-4 shows a data transfer initiated by the destination unit. In destination-initiated strobe
control, the data transfer process is started by the destination unit (receiver). The destination first
activates the strobe pulse to inform the source unit (sender) that it is ready to receive data. The source
then places the required binary data on the data bus and keeps it there for a sufficient amount of time
so that the destination can read it correctly. The destination usually uses the falling edge of the strobe
pulse (when it goes from high to low) to store the data from the bus into its internal register. After
reading the data, the destination disables the strobe pulse, indicating that it no longer needs the data.
The source removes the data from the bus after a fixed time.
Handshaking:
The main problem with the strobe method is that the unit initiating the transfer (source or destination)
cannot be sure whether the other unit has completed its part of the transfer. For example, a source
unit does not know if the destination has actually received the data, and a destination unit does not
know if the source has placed the data on the bus.
The handshaking method solves this problem by using a second control signal to provide a
confirmation (reply) to the initiating unit. In the two-wire handshaking method, one control line goes
in the same direction as the data and is used by the source to indicate that valid data is available on
the bus. The second control line goes in the opposite direction and is used by the destination to
indicate that it is ready to accept the data. This ensures that both units are synchronized and data is
transferred safely without errors.
Source-initiated transfer using handshaking:
Figure 11-5 shows the data transfer procedure when initiated by the source. In the two-wire
handshaking method, there are two control signals: Data Valid, generated by the source unit, and
Data Accepted, generated by the destination unit. The source starts the transfer by placing data on
the bus and enabling the Data Valid signal, which tells the destination that the data is ready. The
destination then reads the data from the bus and activates the Data Accepted signal to confirm it has
received the data. After this, the source disables the Data Valid signal, making the data on the bus
invalid. The destination then disables the Data Accepted signal, and the system returns to its initial
state. The source will not send the next data item until the destination is ready again, ensuring safe
and synchronized data transfer between the units.
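The four events of a source-initiated handshake can be sketched as a simple sequence. The signal names follow the text (Data Valid, Data Accepted); the logging is illustrative, added only to make the event order visible.

```python
# Sketch of the two-wire handshake for one word, as the four ordered events
# described above. Local variables stand in for the control lines.

def handshake_transfer(word, log):
    bus = None

    bus = word                              # source: place data on the bus
    log.append("data_valid=1")              # source: raise Data Valid

    received = bus                          # destination: read the data
    log.append("data_accepted=1")           # destination: raise Data Accepted

    bus = None                              # source: bus data no longer valid
    log.append("data_valid=0")              # source: drop Data Valid

    log.append("data_accepted=0")           # destination: drop Data Accepted
    return received                         # both lines low: ready for next word

log = []
word = handshake_transfer(0x5A, log)
```

The key property is the strict ordering: each unit waits for the other's signal before advancing, so neither can get ahead of its partner.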
Destination-initiated transfer using handshaking:
The destination-initiated transfer using handshaking lines is shown in Fig. 11-6. In destination-
initiated transfer using handshaking, the process starts with the destination unit instead of the source.
The destination sends a signal called Ready for Data to indicate that it is prepared to receive data. The
source waits until it receives this signal before placing data on the bus. After that, the handshaking
procedure is the same as in source-initiated transfer: the source enables Data Valid, the destination
reads the data and confirms it, then both units return to their initial state. Essentially, the only
difference between source-initiated and destination-initiated handshaking is which unit starts the
process; otherwise, the sequence of events is identical, with the Ready for Data signal acting like the
complement of the Data Accepted signal.
2. Discuss the modes of data transfer with examples.
Binary information received from an external device is usually stored in memory so it can be
processed later. Similarly, information sent from the central computer to an external device usually
comes from the memory unit. The CPU plays a role in executing I/O instructions and may
temporarily hold the data, but the main source or destination of the data is memory.
Data transfer between the CPU, memory, and I/O devices can be handled in different ways. Data
transfer to and from peripherals is performed using one of three main modes.
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct memory access (DMA)
Programmed I/O:
In programmed I/O, data transfers are controlled by the instructions written in a computer program.
Each data item is transferred only when a specific instruction in the program is executed. Usually, the
transfer happens between a CPU register and a peripheral device. Additional instructions are needed
to move data between the CPU and memory. Programmed I/O requires the CPU to constantly
monitor the peripheral device. After initiating a data transfer, the CPU must check the interface
continuously to see when the next transfer can be performed. This means the CPU is heavily involved
and may spend a lot of time waiting for the peripheral to be ready.
In programmed I/O, the CPU usually waits in a program loop until the I/O device is ready for data
transfer. This is inefficient because the CPU remains busy doing nothing while waiting. This problem
can be avoided by using an interrupt system. With interrupts, the CPU can continue executing other
programs while the I/O interface monitors the device. When the device is ready for data transfer, the
interface sends an interrupt request to the CPU. The CPU then temporarily pauses its current task,
executes a service program to handle the I/O transfer, and afterwards returns to the original task. This
method allows the CPU to work efficiently without wasting time waiting for I/O devices.
Example of Programmed I/O:
An example of data transfer from an I/O device through an interface into the CPU is shown in the
figure below.
The device transfers bytes of data one at a time as they are available. When a byte of data is
available, the device places it on the I/O bus and enables its data valid line. The interface accepts the
byte into its data register and enables the data accepted line. The interface sets a bit in the status
register that we will refer to as an F or "flag" bit. The device can now disable the data valid line, but
it will not transfer another byte until the data accepted line is disabled by the interface. If the flag is
equal to 1, the CPU reads data from the data register. The flag bit is then cleared to 0 by either the
CPU or the interface, depending on how the interface circuits are designed. Once the flag is cleared,
the interface disables the data accepted line and the device then transfers the next data byte.
The transfer of each byte requires three instructions:
1. Read the status register.
2. Check the status of the flag bit and branch to step 1 if not set or to step 3 if set.
3. Read the data register.
A flowchart of the program that must be written for the CPU is shown in Fig. 11-11
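The three-instruction loop above can be sketched as a polling routine. The `Interface` class, its flag bit, and `device_write` are illustrative stand-ins for the status and data registers described in the text.

```python
# Programmed I/O sketch: the CPU busy-waits on the interface flag bit,
# then reads the data register and clears the flag for the next byte.

class Interface:
    def __init__(self):
        self.status = 0     # bit 0 is the F ("flag") bit
        self.data = 0

    def device_write(self, byte):
        self.data = byte    # device fills the data register
        self.status |= 1    # and sets the flag: data is available

def read_byte(iface):
    while True:
        status = iface.status      # 1. read the status register
        if status & 1:             # 2. branch on the flag bit
            byte = iface.data      # 3. read the data register
            iface.status &= ~1     #    clear the flag for the next byte
            return byte

iface = Interface()
iface.device_write(0x41)
value = read_byte(iface)
```

The `while True` loop is exactly the inefficiency the text describes: until the flag is set, the CPU does nothing but re-read the status register.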
Interrupt-initiated I/O:
This mode of transfer uses the interrupt facility. While the CPU is running a program, it does not
check the flag. However, when the flag is set, the computer is momentarily interrupted from
proceeding with the current program and is informed of the fact that the flag has been set. The CPU
deviates from what it is doing to take care of the input or output transfer. After the transfer is
completed, the computer returns to the previous program to continue what it was doing before the
interrupt.
The CPU responds to the interrupt signal by storing the return address from the program counter into
a memory stack and then control branches to a service routine that processes the required I/O
transfer. The way that the processor chooses the branch address of the service routine varies from one
unit to another. In principle, there are two methods for accomplishing this. One is called vectored
interrupt and the other, nonvectored interrupt. In a nonvectored interrupt, the branch address is
assigned to a fixed location in memory. In a vectored interrupt, the source that interrupts supplies the
branch information to the computer.
Priority Interrupt:
A priority interrupt is a system that establishes a priority over the various sources to determine which
condition is to be serviced first when two or more requests arrive simultaneously. The system may
also determine which conditions are permitted to interrupt the computer while another interrupt is
being serviced. Higher-priority interrupt levels are assigned to requests which, if delayed or
interrupted, could have serious consequences. Devices with high speed transfers such as magnetic
disks are given high priority, and slow devices such as keyboards receive low priority. When two
devices interrupt the computer at the same time, the computer services the device with the higher
priority first.
A polling procedure is used to identify the highest-priority source by software means. In this method
there is one common branch address for all interrupts. The program that takes care of interrupts
begins at the branch address and polls the interrupt sources in sequence. The order in which they are
tested determines the priority of each interrupt. The highest-priority source is tested first, and if its
interrupt signal is on, control branches to a service routine for this source. Otherwise, the next-lower-
priority source is tested and so on.
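The polling procedure above can be sketched as follows; the device names, request flags, and dictionary representation are illustrative, and position in the polling order defines priority.

```python
# Software polling for priority interrupts: sources are tested in a fixed
# order, so the first pending request found is the highest-priority one.

def poll_interrupts(requests, priority_order):
    """Return the highest-priority source with a pending request, else None."""
    for source in priority_order:    # highest-priority source is tested first
        if requests.get(source):
            return source            # branch to this source's service routine
    return None                      # no interrupt pending

requests = {"disk": True, "keyboard": True, "printer": False}
order = ["disk", "printer", "keyboard"]   # fast disk outranks slow keyboard
winner = poll_interrupts(requests, order)
```

Even though the keyboard also has a pending request, the disk is serviced first because it is tested earlier in the polling sequence.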
Direct Memory Access (DMA):
Direct Memory Access (DMA) is a technique used to transfer data between high-speed peripheral
devices, such as magnetic disks, and main memory without involving the CPU for each byte or word.
In this method, the peripheral device takes control of the memory bus and performs the data transfer
directly, bypassing the CPU. This allows the CPU to execute other tasks while the transfer is in
progress, greatly improving system efficiency, especially for large blocks of data. DMA is commonly
used for high-speed devices where programmed I/O would create a performance bottleneck.
When a DMA transfer is initiated, the DMA controller temporarily takes control of the system buses
— the address bus, data bus, and control bus — from the CPU.
To make this possible, the CPU uses two special control signals:
Bus Request (BR): Sent by the DMA controller (A specialized hardware unit that manages
direct data transfers between Memory and I/O Device) to ask the CPU for control of the
buses.
Bus Grant (BG): Sent by the CPU to allow the DMA controller to take control.
When the BR signal is received, the CPU stops its current work and releases the buses by putting
them in a high impedance state (which means the CPU’s bus connections are temporarily
disconnected).
This allows the DMA controller to use the buses safely for data transfer.
The CPU activates the Bus Grant (BG) output to inform the external DMA controller that it can now
take control of the buses to conduct memory transfers without the processor.
When the DMA terminates the transfer, it disables the Bus Request (BR) line. The CPU then disables the
Bus Grant (BG), takes back control of the buses, and returns to its normal operation.
DMA Transfer: The CPU communicates with the DMA through the address and data buses as with
any interface unit. The DMA has its own address, which activates the DS and RS lines. The CPU
initializes the DMA through the data bus. Once the DMA receives the start control command, it can
transfer between the peripheral and the memory. When BG = 0 the RD and WR are input lines
allowing the CPU to communicate with the internal DMA registers. When BG=1, the RD and WR
are output lines from the DMA controller to the random access memory to specify the read or write
operation of data.
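The RD/WR direction switch described above can be sketched as follows. The register names and helper function are illustrative; the point is only that the CPU can program the DMA's internal registers while BG = 0, and is locked out while BG = 1.

```python
# Sketch of bus ownership around the DMA's internal registers:
# BG = 0 -> RD/WR are inputs, the CPU initializes the DMA registers;
# BG = 1 -> RD/WR are outputs driven by the DMA toward memory.

class DMARegisters:
    def __init__(self):
        self.address = 0
        self.word_count = 0

def cpu_write_register(dma, bg, name, value):
    if bg == 0:                      # CPU owns the buses
        setattr(dma, name, value)    # CPU writes an internal DMA register
        return True
    return False                     # while BG = 1 the CPU cannot reach the DMA

dma = DMARegisters()
ok = cpu_write_register(dma, bg=0, name="word_count", value=256)
blocked = cpu_write_register(dma, bg=1, name="address", value=0x100)
```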
The transfer can be made in several ways that are:
1. DMA Burst
2. Cycle Stealing
DMA Burst: In this mode, the DMA controller takes complete control of the system buses — the
address bus, data bus, and control bus — from the CPU and performs a continuous transfer of a block
of data between memory and the I/O device without any interruption.
During this period
The CPU is temporarily disabled from accessing the system buses (it remains idle).
The DMA controller transfers an entire block (burst) of data in one go.
After the transfer is complete, control of the buses is returned to the CPU.
Example
If 1000 bytes need to be transferred from disk to memory, the DMA controller transfers all 1000
bytes continuously in a single burst operation while the CPU waits.
Cycle Stealing: In this mode, the DMA controller transfers one data word at a time between the I/O
device and main memory. After transferring a single word, the DMA controller releases control of
the system buses and returns them to the CPU so that the CPU can continue its normal operation.
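The contrast between the two modes can be sketched with a simple event log showing who owns the buses; the log strings and functions are illustrative.

```python
# Burst mode: the DMA requests the buses once and moves the whole block.
# Cycle stealing: the DMA releases the buses after every single word,
# letting the CPU run in between.

def burst(words):
    log = ["BR"]                          # one bus request for the block
    log += [f"dma:{w}" for w in words]    # entire block moves back to back
    log.append("release")                 # CPU regains the buses at the end
    return log

def cycle_steal(words):
    log = []
    for w in words:                       # one bus request per word
        log += ["BR", f"dma:{w}", "release", "cpu"]
    return log

burst_log = burst([1, 2, 3])
steal_log = cycle_steal([1, 2, 3])
```

In the burst log the CPU never appears between words; in the cycle-stealing log the CPU gets the buses back after every word, at the cost of repeated request/grant overhead.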
DMA Controller:
A Direct Memory Access (DMA) Controller is a hardware device that manages data transfer directly
between I/O devices and main memory, bypassing the CPU to improve system performance.
It acts as an intermediary between the I/O device and memory, taking over the system buses during
data transfer operations.
Functions of the DMA Controller:
Request Handling: Receives DMA requests from I/O devices.
Bus Control: Takes control of the address, data, and control buses from the CPU using Bus Request
(BR) and Bus Grant (BG) signals.
Address Generation: Provides the memory address for each data transfer automatically (the CPU
doesn’t need to specify it for every word).
Data Transfer: Transfers data directly between memory and I/O device without CPU intervention.
Transfer Count Tracking: Keeps track of the number of bytes or words remaining in the transfer.
Interrupt Generation: After the transfer is complete, it sends an interrupt signal to the CPU to
indicate the completion of the operation.
The DMA controller needs the usual circuits of an interface to communicate with the CPU and I/O
device. The DMA controller has three registers:
1. Address Register
2. Word Count Register
3. Control Register
Address Register: Holds the memory address where the next data word will be read or written.
Word Count Register: The Word Count Register (also called the Byte Count Register) holds the total
number of words or bytes to be transferred between the I/O device and main memory; it is
decremented after each word is transferred, and the transfer is complete when it reaches zero.
Control Register: Specifies the mode of transfer, such as read or write.
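A block transfer driven by these registers can be sketched as follows. The function and variable names are illustrative: each step moves one word from a device buffer into "memory", advances the address register, and decrements the word count until it reaches zero.

```python
# Sketch of a register-driven DMA block transfer. The address register
# supplies each memory address; the word count register tracks how many
# words remain; count == 0 signals completion.

def dma_transfer(memory, device_words, address_reg, word_count_reg):
    while word_count_reg > 0:
        memory[address_reg] = device_words.pop(0)  # move one word to memory
        address_reg += 1                           # address register advances
        word_count_reg -= 1                        # one fewer word remaining
    return address_reg, word_count_reg             # count == 0: transfer done

memory = [0] * 8
final_addr, remaining = dma_transfer(memory, [10, 20, 30],
                                     address_reg=2, word_count_reg=3)
```

When `remaining` reaches zero, a real controller would raise an interrupt to tell the CPU the block is complete, as described in the list of DMA controller functions above.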
4. Describe the functions of an Input-Output Processor (IOP).
Input-Output Processor:
The Input-Output Processor (IOP) is a processor, much like a CPU, that handles the details of I/O
operations. It is more capable than a typical DMA controller: the IOP can fetch and execute its own
instructions, which are specifically designed to characterize I/O transfers. In addition to I/O tasks, it
can also perform other processing tasks such as arithmetic, logic, branching, and code translation.
The main memory unit plays a pivotal role, since it is shared by the CPU and the IOP; the IOP
communicates with memory via DMA.
Functions of the IOP
The Input–Output Processor (IOP) is a specialized processor that handles I/O operations and manages
the loading and storing of data in memory, acting as an interface between the system and peripheral
devices. It executes its own I/O-specific instructions and uses DMA to transfer data efficiently.
1. Initiation:
The IOP is triggered by a request from the system or a peripheral device to start an I/O
operation.
2. Instruction Fetch:
It fetches instructions from its own instruction set, which are specifically designed for I/O
transfers.
3. Memory Allocation:
Space is allocated in the main memory to hold the data being transferred.
4. Data Transfer via DMA:
The IOP uses Direct Memory Access (DMA) to transfer data directly between the I/O device
and memory, bypassing the CPU.
5. Buffering:
Data is temporarily buffered between the I/O device and memory to ensure smooth and
efficient processing.
6. Execution of I/O Commands:
Commands such as read, write, or synchronize are executed to control the data transfer
process.
7. Error Handling:
If any errors occur, interrupts are generated and error correction is handled independently by
the IOP.
8. Completion of Transfer:
Once the transfer is complete, the results are stored in memory, and the operation is marked
as complete.
9. Resource Release:
Control of the resources is released, and the CPU resumes its normal processing tasks.
Applications of Input–Output Processors (IOPs)
Input–Output Processors (IOPs) are specialized processors that handle I/O operations
independently of the CPU, improving system efficiency and performance. They have a wide range of
applications: