COA Chapter 5

Chapter 5 of 'Computer Organization and Architecture' discusses interfacing and communication in computer systems, covering I/O fundamentals such as programmed I/O, interrupt-driven I/O, handshaking, and buffering. It explains the role of interrupts, bus protocols, and direct memory access (DMA), as well as RAID architectures for data redundancy and performance. The chapter highlights the advantages and limitations of these techniques in enhancing system performance and reliability.


Computer Organization and Architecture

Chapter 5

Interfacing and communication


Outline
 I/O fundamentals: handshaking, buffering, programmed I/O, interrupt-driven I/O
 Interrupt
 Buses: bus protocols, direct-memory access (DMA)
 RAID architectures
I/O fundamentals: handshaking, buffering, programmed
I/O, interrupt-driven I/O
 The computer is useless without some kind of interface to the outside world.
 There are many different devices which we can connect to the computer system; keyboards
and disk drives are some of the more familiar ones.
 Irrespective of the details of how such devices are connected, all I/O is
governed by three basic strategies:
 Programmed I/O
 Interrupt driven I/O
 Direct Memory Access
Programmed I/O
 It is a method of transferring data between the CPU and a peripheral, such as a network
adapter or an ATA storage device.
 In general, programmed I/O happens when software running on the CPU uses instructions
that access I/O address space to perform data transfers to or from an I/O device.
 The PIO interface is grouped into different modes that correspond to different transfer rates.
 The electrical signaling among the different modes is similar; only the cycle time between
transactions is reduced in order to achieve a higher transfer rate.
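The polling loop at the heart of programmed I/O can be sketched in Python. The `Device` class, its `ready` flag, and the three-cycle latency are all hypothetical stand-ins for a real status register and data register; a real driver would read memory-mapped or port-mapped registers instead:

```python
# Sketch of programmed I/O (polling), under assumed device behavior:
# a one-byte data register plus a "ready" status bit.
class Device:
    def __init__(self, data):
        self._data = list(data)
        self.ready = False
        self._ticks = 0

    def tick(self):
        # The device becomes ready every few cycles (simulated latency).
        self._ticks += 1
        if self._data and self._ticks % 3 == 0:
            self.ready = True

    def read_register(self):
        self.ready = False
        return self._data.pop(0)

def programmed_io_read(device, count):
    """CPU busy-waits on the status bit, then reads one byte at a time."""
    received = []
    while len(received) < count:
        device.tick()              # time passes while the CPU polls
        if device.ready:           # software tests the status register
            received.append(device.read_register())
    return received

print(programmed_io_read(Device(b"HI"), 2))  # [72, 73]
```

The key point the sketch shows is that the CPU does all the work and burns cycles in the `while` loop, which is exactly the cost interrupt-driven I/O and DMA avoid.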
Interrupt driven I/O
 It is a way of controlling input/output activity in which a peripheral or terminal that needs to
make or receive a data transfer sends a signal that raises a program interrupt.
 At a time appropriate to the priority level of the I/O interrupt, relative to the
total interrupt system, the processor enters an interrupt service routine (ISR).
 The function of the routine will depend upon the system of interrupt levels and priorities that
is implemented in the processor.
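The dispatch step can be sketched as follows; the ISR table, `register_isr`, and `raise_interrupt` are hypothetical names chosen for illustration, not a real OS API:

```python
# Sketch of interrupt-driven I/O dispatch: devices raise an interrupt
# number (IRQ) and the "processor" looks up the matching service routine.
isr_table = {}

def register_isr(irq, handler):
    isr_table[irq] = handler

def raise_interrupt(irq, data):
    # The processor suspends its current work, saves state (implicit
    # here), and enters the interrupt service routine for this IRQ level.
    handler = isr_table.get(irq)
    if handler is None:
        raise RuntimeError(f"spurious interrupt {irq}")
    return handler(data)

log = []
register_isr(1, lambda byte: log.append(("keyboard", byte)))

raise_interrupt(1, ord("a"))
print(log)  # [('keyboard', 97)]
```

Unlike the polling sketch above, the CPU here does nothing until a device raises an interrupt, which is what frees it to run other work in between transfers.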
Handshaking
 Handshaking is an I/O control method used to synchronize I/O devices with the microprocessor.
 Because many I/O devices accept or release information at a much slower rate than the
microprocessor, this method makes the microprocessor work with an I/O device at
the I/O device's data transfer rate.
 Handshaking is an automated process of negotiation that dynamically sets parameters of a
communications channel established between two entities before normal communication over
the channel begins.
 It follows the physical establishment of the channel and precedes normal information transfer.
 The handshaking process usually takes place in order to establish rules for communication
when a computer sets about communicating with a foreign device.
 When a computer communicates with another device like a modem, printer, or network server,
it needs to handshake with it to establish a connection.
 Example: suppose that we have a printer connected to a system.
 The printer can print 100 characters/second, but the microprocessor can send much more
information to the printer in the same time.
 Therefore, when the printer has received enough data to print, it places a logic
1 signal on its Busy pin, indicating that it is busy printing.
 The microprocessor now tests the busy bit to decide whether the printer is busy. When the
printer becomes free, it clears the busy bit and the microprocessor again sends
enough data to be printed.
 This process of interrogating the printer is called handshaking.
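The busy-bit interrogation described above can be sketched like this; the 4-character buffer and the one-character-per-cycle drain rate are invented for the example:

```python
# Sketch of a busy-bit handshake with a hypothetical slow printer.
class Printer:
    BUFFER_SIZE = 4

    def __init__(self):
        self.busy = False        # models the "Busy" pin
        self._buffer = []
        self.printed = []

    def send(self, ch):
        self._buffer.append(ch)
        if len(self._buffer) >= self.BUFFER_SIZE:
            self.busy = True     # buffer full: raise Busy

    def work(self):
        # The printer drains one character per cycle and lowers
        # Busy once its buffer is empty.
        if self._buffer:
            self.printed.append(self._buffer.pop(0))
        if not self._buffer:
            self.busy = False

def handshake_print(printer, text):
    for ch in text:
        while printer.busy:      # CPU interrogates the busy bit
            printer.work()       # (meanwhile, the printer keeps working)
        printer.send(ch)
    while printer.busy or printer._buffer:
        printer.work()           # let the printer drain what remains
    return "".join(printer.printed)

print(handshake_print(Printer(), "HELLO"))  # HELLO
```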
Buffering
 In operating systems, buffering is a technique which is used to enhance the performance of
I/O operations of the system.
 Basically, buffering in an operating system is a method of temporarily storing data in a buffer
or cache; this buffered data can then be accessed more quickly than the original
source of the data.
 In a computer system, data is stored on several devices like hard discs, magnetic tapes,
optical discs and network devices.
 When a process needs to read or write data from one of these storage devices,
it has to wait while the device retrieves or stores the data.
 This waiting time could be very high, especially for those devices which are slow or have a
high latency.
 This problem can be addressed by buffering, which provides a temporary storage area
called a buffer.
 The buffer can hold data before it is sent to or retrieved from the storage device.
 When the buffer is full, the data is sent to the storage device in a batch; this
reduces the number of access operations required and hence improves the performance of the
system.
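A minimal sketch of this batching idea, assuming a hypothetical `BufferedWriter` whose flush threshold stands in for a full buffer and whose callback stands in for a slow device access:

```python
# Sketch of write buffering: each device access has a fixed cost, so
# batching writes reduces the number of accesses.
class BufferedWriter:
    def __init__(self, device_write, buffer_size=4):
        self._device_write = device_write   # the slow storage operation
        self._buffer = []
        self._size = buffer_size

    def write(self, byte):
        self._buffer.append(byte)
        if len(self._buffer) == self._size:
            self.flush()                    # one device access per batch

    def flush(self):
        if self._buffer:
            self._device_write(bytes(self._buffer))
            self._buffer = []

accesses = []
w = BufferedWriter(accesses.append, buffer_size=4)
for b in b"ABCDEFGH":
    w.write(b)
w.flush()
print(accesses)   # [b'ABCD', b'EFGH'] - 2 device accesses instead of 8
```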
Advantages of Buffering
 Buffering reduces the number of I/O operations required to access data.
 Buffering reduces the amount of time that processes have to wait for data.
 Buffering improves the performance of I/O operations as it allows data to be read or written
in large blocks instead of 1 byte or 1 character at a time.
 Buffering can improve the overall performance of the system by reducing the number of
system calls and context switches required for I/O operations.
Limitations of Buffering
 Large buffers consume a significant amount of memory, which can degrade system
performance.
 Buffering may cause a delay between the time data is read or written and the time it is
processed by the application.
 Buffering may also impact the real-time system performance and hence, can cause
synchronization issues.
Interrupt
 In system programming, an interrupt is a signal to the processor emitted by hardware or
software indicating an event that needs immediate attention.
 An interrupt is a signal from a device attached to a computer or from a program within the
computer that causes the main program that operates the computer (the operating system) to
stop and figure out what to do next.
 An interrupt alerts the processor to a high-priority condition requiring the interruption of the
current code the processor is executing.
 The processor responds by suspending its current activities, saving its state, and executing a
function called an interrupt handler (or an interrupt service routine, ISR) to deal with the
event.
 This interruption is temporary, and, after the interrupt handler finishes, the processor
resumes normal activities. There are two types of interrupts: hardware interrupts and
software interrupts.
Hardware interrupts
 Hardware interrupts are used by devices to communicate that they require attention from the
operating system.
 Internally, hardware interrupts are implemented using electronic alerting signals that are sent
to the processor from an external device, which is either a part of the computer itself, such as
a disk controller, or an external peripheral.
 For example, pressing a key on the keyboard or moving the mouse triggers hardware
interrupts that cause the processor to read the keystroke or mouse position.
 Unlike the software type (described below), hardware interrupts are asynchronous and can
occur in the middle of instruction execution, requiring additional care in programming.
 The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ).
Software interrupt
 A software interrupt is caused either by an exceptional condition in the processor itself, or by a
special instruction in the instruction set which causes an interrupt when it is executed.
 The former is often called a trap or exception and is used for errors or events occurring
during program executions that are exceptional enough that they cannot be handled within
the program itself.
 For example, if the processor’s arithmetic logic unit is commanded to divide a number by
zero, this impossible demand will cause a divide-by-zero exception, perhaps causing the
computer to abandon the calculation or display an error message.
 Software interrupt instructions function similarly to subroutine calls and are used for a
variety of purposes, such as to request services from low-level system software such as
device drivers.
 For example, computers often use software interrupt instructions to communicate with the
disk controller to request data be read or written to the disk.
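As a loose analogy only (Python exceptions are software constructs, not hardware traps), the divide-by-zero case can be sketched as:

```python
# Analogy to a divide-by-zero trap: the "impossible demand" diverts
# control to a handler, and execution then resumes normally.
def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # The handler runs instead of the normal flow, much as an
        # exception ISR would; here we abandon the calculation.
        return None

print(divide(10, 2))  # 5.0
print(divide(10, 0))  # None
```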
Interrupts can be categorized into these different types:
 Maskable interrupt (IRQ): a hardware interrupt that may be ignored by setting a bit in an
interrupt mask register’s (IMR) bit-mask.
 Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit-mask, so
that it can never be ignored. NMIs are used for the highest-priority tasks such as timers,
especially watchdog timers.
 Inter-processor interrupt (IPI): a special case of interrupt that is generated by one
processor to interrupt another processor in a multiprocessor system.
 Software interrupt: an interrupt generated within a processor by executing an instruction.
Software interrupts are often used to implement system calls because they result in a
subroutine call with a CPU ring level change.
 Spurious interrupt: a hardware interrupt that is unwanted. Spurious interrupts are typically
generated by system conditions such as electrical interference on an interrupt line or
incorrectly designed hardware.
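The maskable/non-maskable distinction can be sketched with a toy bit-mask check; the `should_service` function and the 8-bit IMR value are assumptions made for illustration:

```python
# Sketch of interrupt masking with an interrupt mask register (IMR),
# assuming one bit per IRQ line: a set bit means "ignore this IRQ".
imr = 0b00000100          # mask (ignore) IRQ 2

def should_service(irq, imr, nmi=False):
    # Non-maskable interrupts bypass the mask entirely.
    if nmi:
        return True
    return not (imr >> irq) & 1

print(should_service(1, imr))            # True  (not masked)
print(should_service(2, imr))            # False (masked by IMR bit 2)
print(should_service(2, imr, nmi=True))  # True  (NMI ignores the mask)
```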
Bus protocols
 A bus protocol is the set of rules that governs the behavior of the various devices connected
to the bus: when to place information on the bus, when to assert control signals, and so on.
 In a synchronous bus, all devices derive timing information from a common clock line.
 Equally spaced pulses on this line define equal time intervals. In the simplest form of a
synchronous bus, each of these intervals constitutes a bus cycle during which one data
transfer can take place.
 Asynchronous bus: the handshake process eliminates the need for synchronization of the
sender and receiver clocks, thus simplifying timing design.
 Synchronous bus: clock circuitry must be designed carefully to ensure proper
synchronization, and delays must be kept within strict bounds.
Direct-memory Access (DMA)
 The direct memory access (DMA) I/O technique provides direct access to the memory
while the microprocessor is temporarily disabled.
 A DMA controller temporarily borrows the address bus, data bus, and control bus from the
microprocessor and transfers the data bytes directly between an I/O port and a series of
memory locations.
 The DMA transfer is also used to do high-speed memory-to-memory transfers.
 Two control signals are used to request and acknowledge a DMA transfer in the
microprocessor-based system.
 The HOLD signal is a bus request signal which asks the microprocessor to release control of
the buses after the current bus cycle.
 The HLDA signal is a bus grant signal which indicates that the microprocessor has indeed
released control of its buses by placing the buses at their high-impedance states.
 The HOLD input has a higher priority than the INTR or NMI interrupt inputs.
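The HOLD/HLDA exchange can be sketched as follows; the `CPU` model and the `dma_transfer` function are simplified stand-ins for real bus arbitration, in which the controller drives the address, data, and control buses electrically:

```python
# Sketch of the HOLD/HLDA bus-request protocol, under an assumed CPU
# model that releases its buses when it grants a DMA request.
class CPU:
    def __init__(self):
        self.owns_bus = True

    def hold(self):
        # HOLD asserted: release the buses after the current bus cycle
        # and assert HLDA (bus grant).
        self.owns_bus = False
        return True  # HLDA

    def release_hold(self):
        self.owns_bus = True

def dma_transfer(cpu, memory, port_data, start):
    hlda = cpu.hold()               # controller requests the buses
    assert hlda and not cpu.owns_bus
    # The controller moves bytes port -> memory directly; the CPU
    # executes no transfer instructions at all.
    for i, byte in enumerate(port_data):
        memory[start + i] = byte
    cpu.release_hold()              # HOLD dropped; CPU regains the buses

memory = [0] * 8
dma_transfer(CPU(), memory, b"IO", 2)
print(memory)  # [0, 0, 73, 79, 0, 0, 0, 0]
```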
RAID architectures
 RAID (redundant array of independent disks) is a way of storing the same data
in different places on multiple hard disks or solid-state drives (SSDs) to
protect data in the case of a drive failure.
 There are different RAID levels, however, and not all have the goal of
providing redundancy.
 RAID works by placing data on multiple disks and allowing input/output (I/O)
operations to overlap in a balanced way, improving performance.
 Because using multiple disks increases the mean time between failures, storing
data redundantly also increases fault tolerance.
 RAID arrays appear to the operating system (OS) as a single logical drive.
 RAID employs the techniques of disk mirroring or disk striping. Mirroring copies
identical data onto more than one drive.
 Striping spreads data over multiple disk drives. Each drive's storage space is
divided into units ranging from a sector of 512 bytes up to several megabytes.
 The stripes of all the disks are interleaved and addressed in order. Disk mirroring and disk
striping can also be combined in a RAID array.
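Striping and mirroring can be sketched in a few lines; the 2-byte stripe unit here is deliberately tiny for readability (as noted above, real stripe units range from 512 bytes to several megabytes):

```python
# Sketch of disk striping (RAID 0 style) and mirroring (RAID 1 style).
def stripe(data, n_disks, unit=2):
    """Interleave fixed-size units of data across the disks in order."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), unit):
        disks[(i // unit) % n_disks] += data[i:i + unit]
    return disks

def mirror(data, n_disks=2):
    """Place an identical copy of the data on every drive."""
    return [bytearray(data) for _ in range(n_disks)]

print(stripe(b"ABCDEFGH", 2))  # [bytearray(b'ABEF'), bytearray(b'CDGH')]
print(mirror(b"ABCD"))         # [bytearray(b'ABCD'), bytearray(b'ABCD')]
```

Striping lets reads and writes to consecutive units proceed on different drives in parallel, while mirroring trades capacity for redundancy; real arrays combine both (and add parity in other RAID levels).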
 A RAID controller is a device used to manage hard disk drives in a storage array. It can be
used as a level of abstraction between the OS and the physical disks, presenting groups of
disks as logical units.
 Using a RAID controller can improve performance and help protect data in case of a crash.
 A RAID controller may be hardware- or software-based. In a hardware-based
RAID product, a physical controller manages the entire array.
 The controller can also be designed to support drive formats such as Serial Advanced
Technology Attachment (SATA) and Small Computer System Interface (SCSI).
 A physical RAID controller can also be built into a server's motherboard. With software-
based RAID, the controller uses the resources of the hardware system, such as the central
processor and memory.
 While it performs the same functions as a hardware-based controller, a software-based
RAID controller may not deliver as much of a performance boost and can affect the
performance of other applications on the server.
Advantages of RAID
 Transfer of large sequential files and graphic images is easier.
 Hardware based implementation is more robust.
 Software based implementation is cost-effective.
 Highest performance and Data protection can be achieved.
 Fault tolerance capacity is high.
 They require less power.
Disadvantages of RAID
 In spite of using this technology, backup software is a must.
 Mapping Logic blocks onto physical locations is complex.
 Data chunk size affects the performance of disk array.