Chapter 1

The document outlines the basic structure of computers, detailing components such as the processor, memory, and control unit, and their roles in processing information. It explains concepts like random-access memory (RAM), registers, program execution, and the significance of parallelism in enhancing performance. Additionally, it discusses cache memory and virtual memory, emphasizing their impact on execution speed and memory capacity.


Chapter 1
Basic Structure of Computers

We refer to the arithmetic and logic circuits, in conjunction with the main
control circuits, as the processor.

The memory consists of a large number of semiconductor storage cells,
each capable of storing one bit of information. These cells are rarely read
or written individually. Instead, they are handled in groups of fixed size
called words. The memory is organized so that one word can be stored or
retrieved in one basic operation. The number of bits in each word is
referred to as the word length of the computer, typically 16, 32, or 64 bits.
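The grouping of memory cells into words can be sketched in a few lines. The word length and memory size below are illustrative values, not taken from the text:

```python
# A byte-addressable memory viewed as fixed-size words.
WORD_LENGTH = 32                    # bits per word (illustrative)
BYTES_PER_WORD = WORD_LENGTH // 8

memory = bytearray(64)              # a tiny 64-byte memory
num_words = len(memory) // BYTES_PER_WORD

def read_word(addr):
    """Fetch one word starting at byte address addr (one basic operation)."""
    chunk = memory[addr:addr + BYTES_PER_WORD]
    return int.from_bytes(chunk, byteorder="little")

memory[0:4] = (1234).to_bytes(4, "little")
print(num_words)        # 16 words of 32 bits each
print(read_word(0))     # 1234
```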

A memory in which any location can be accessed in a short and fixed
amount of time after specifying its address is called a random-access
memory (RAM). The time required to access one word is called the
memory access time. This time is independent of the location of the word
being accessed.

When operands are brought into the processor, they are stored in high-
speed storage elements called registers. Each register can store one word
of data. Access times to registers are even shorter than access times to the
cache unit on the processor chip.

The memory, arithmetic and logic, and I/O units store and process
information and perform input and output operations. The operation of
these units must be coordinated in some way. This is the responsibility of
the control unit.

The operation of a computer can be summarized as follows:

-> The computer accepts information in the form of programs and data through an input unit and stores it in the memory.
-> Information stored in the memory is fetched under program control into an arithmetic and logic unit, where it is processed.
-> Processed information leaves the computer through an output unit.
-> All activities in the computer are directed by the control unit.

After operands have been loaded from memory into processor registers,
arithmetic or logic operations can be performed on them.

In addition to the ALU and the control circuitry, the processor contains a
number of registers used for several different purposes. The instruction
register (IR) holds the instruction that is currently being executed. The
program counter (PC) is another specialized register. It contains the
memory address of the next instruction to be fetched and executed. During
the execution of an instruction, the contents of the PC are updated to
correspond to the address of the next instruction to be executed. It is
customary to say that the PC points to the next instruction that is to be
fetched from the memory.

A program must be in the main memory in order for it to be executed. It is
often transferred there from secondary storage through the input unit.
Execution of the program begins when the PC is set to point to the first
instruction of the program. The contents of the PC are transferred to the
memory along with a Read control signal. When the addressed word (in
this case, the first instruction of the program) has been fetched from the
memory it is loaded into register IR. At this point, the instruction is ready to
be interpreted and executed.
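The fetch-execute cycle described above can be sketched as a short simulation. The instruction format and the tiny three-operation "ISA" here are invented purely for illustration:

```python
# A minimal model of the fetch-execute loop: PC selects the next word,
# the fetched word is placed in IR, then decoded and executed.
memory = {
    0: ("LOAD", 5),    # R0 <- 5
    1: ("ADD", 3),     # R0 <- R0 + 3
    2: ("HALT", 0),
}

PC = 0        # program counter: points to the next instruction
R0 = 0        # a single general-purpose register

while True:
    IR = memory[PC]        # fetch: the addressed word is loaded into IR
    PC += 1                # PC is updated to point to the next instruction
    op, operand = IR       # decode
    if op == "LOAD":
        R0 = operand
    elif op == "ADD":
        R0 += operand
    elif op == "HALT":
        break

print(R0)   # 8
```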

Normal execution of a program may be preempted if some device requires
urgent service. For example, a monitoring device in a computer-controlled
industrial process may detect a dangerous condition. In order to respond
immediately, execution of the current program must be suspended. To
cause this, the device raises an interrupt signal, which is a request for
service by the processor. The processor provides the requested service by
executing a program called an interrupt-service routine.
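The mechanism can be mimicked in software. The flag-based check below is a simplified analogy of how a processor tests for pending interrupts between instructions; it is not real hardware behavior:

```python
# The main program is suspended when a device raises an interrupt,
# an interrupt-service routine runs, and normal execution resumes.
interrupt_pending = False
log = []

def interrupt_service_routine():
    log.append("serviced device")

def main_program():
    global interrupt_pending
    for step in range(5):
        if interrupt_pending:          # checked between instructions
            interrupt_service_routine()
            interrupt_pending = False  # acknowledge the request
        log.append(f"step {step}")
        if step == 1:
            interrupt_pending = True   # a device raises an interrupt

main_program()
print(log)
```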

Sign Extension

We often need to represent a value given in a certain number of bits by
using a larger number of bits. For a positive number, this is achieved by
adding 0s to the left. For a negative number in 2’s-complement
representation, the leftmost bit, which indicates the sign of the number, is a
1. A longer number with the same value is obtained by replicating the sign
bit to the left as many times as needed. This operation is called sign
extension.
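Sign extension translates directly into code. The sketch below replicates the sign bit to the left exactly as described:

```python
def sign_extend(value, from_bits, to_bits):
    """Sign-extend a two's-complement bit pattern from from_bits to to_bits
    by replicating the sign bit to the left."""
    sign_bit = 1 << (from_bits - 1)
    if value & sign_bit:                      # negative: replicate 1s
        extension = ((1 << (to_bits - from_bits)) - 1) << from_bits
        return value | extension
    return value                              # positive: leading 0s

# 0b1010 is -6 in 4-bit two's complement; in 8 bits it becomes 0b11111010.
print(bin(sign_extend(0b1010, 4, 8)))   # 0b11111010
print(bin(sign_extend(0b0101, 4, 8)))   # 0b101
```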

When the actual result of an arithmetic operation is outside the
representable range, an arithmetic overflow has occurred.
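For two's-complement addition, overflow can be detected by a sign-bit rule: it occurs when both operands have the same sign but the truncated result's sign differs. A sketch with an illustrative 8-bit word:

```python
def add_with_overflow(a, b, bits=8):
    """Add two two's-complement values of the given width, reporting overflow."""
    mask = (1 << bits) - 1
    sign = 1 << (bits - 1)
    raw = (a + b) & mask                      # truncate to the word length
    overflow = (a & sign) == (b & sign) and (a & sign) != (raw & sign)
    # convert the raw bit pattern back to a signed value
    result = raw - (1 << bits) if raw & sign else raw
    return result, overflow

print(add_with_overflow(100, 50))    # (-106, True): 150 is outside -128..127
print(add_with_overflow(100, -50))   # (50, False)
```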

Floating-point numbers retain a fixed number of significant digits of
precision, and the position of the binary point is indicated by a scale factor.
We conclude that a binary floating-point number can be represented by:
We conclude that a binary floating-point number can be represented by:

-> a sign for the number
-> some significant bits
-> a signed scale factor exponent for an implied base of 2
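These three parts can be extracted from an ordinary Python float using the standard library's `math.frexp`, which returns a significand in [0.5, 1) and a base-2 exponent:

```python
import math

def decompose(x):
    """Split a float into (sign, significant bits, base-2 scale factor)."""
    sign = 0 if math.copysign(1.0, x) > 0 else 1
    significand, exponent = math.frexp(abs(x))   # abs(x) == significand * 2**exponent
    return sign, significand, exponent

print(decompose(-6.5))   # (1, 0.8125, 3): -6.5 = -(0.8125 * 2**3)
```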

Decimal digits can also be encoded directly using binary-coded decimal
(BCD), in which each decimal digit is represented by a four-bit code.
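A minimal BCD encoder, shown here as a sketch that emits the 4-bit code for each decimal digit:

```python
def to_bcd(n):
    """Encode a non-negative integer as a BCD bit string, 4 bits per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(259))   # 0010 0101 1001
```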

The speed with which a computer executes programs is affected by the
design of its instruction set, its hardware and its software, including the
operating system, and the technology in which the hardware is
implemented. Because programs are usually written in a high-level
language, performance is also affected by the compiler that translates
programs into machine language.

Parallelism

Performance can be increased by performing a number of operations in
parallel. Parallelism can be implemented on many different levels.

Instruction-level Parallelism

The simplest way to execute a sequence of instructions in a processor is to
complete all steps of the current instruction before starting the steps of the
next instruction. If we overlap the execution of the steps of successive
instructions, total execution time will be reduced. For example, the next
instruction could be fetched from memory at the same time that an
arithmetic operation is being performed on the register operands of the
current instruction. This form of parallelism is called pipelining.
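A back-of-the-envelope timing model makes the benefit concrete. It assumes, for illustration only, that every instruction takes the same number of equal-length steps:

```python
def sequential_time(instructions, stages):
    """Total steps when each instruction completes before the next starts."""
    return instructions * stages

def pipelined_time(instructions, stages):
    """Total steps with ideal overlap: the first instruction fills the
    pipeline; each later one completes one step after its predecessor."""
    return stages + (instructions - 1)

n, s = 100, 5
print(sequential_time(n, s))   # 500 steps
print(pipelined_time(n, s))    # 104 steps
```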

Multicore Processors

Multiple processing units can be fabricated on a single chip. In technical
literature, the term core is used for each of these processors. The term
processor is then used for the complete chip. Hence, we have the
terminology dual-core, quad-core, and octo-core processors for chips that
have two, four, and eight cores, respectively.

Multiprocessors

Computer systems may contain many processors, each possibly containing
multiple cores. Such systems are called multiprocessors. These systems
either execute a number of different application tasks in parallel, or they
execute subtasks of a single large task in parallel. All processors usually
have access to all of the memory in such systems, and the term shared-
memory multiprocessor is often used to make this clear.

Cache memory makes the main memory appear faster than it really is, and
virtual memory makes it appear larger.

The ratio of execution time without the cache to execution time with the
cache is called the speedup.
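The definition can be worked through with illustrative (not measured) numbers. Assume memory accesses dominate execution time, a cache hit costs 1 time unit, a miss requires a 10-unit main-memory access, and 95% of accesses hit:

```python
hit_time, miss_time, hit_rate = 1.0, 10.0, 0.95

time_without_cache = miss_time     # every access goes to main memory
time_with_cache = hit_rate * hit_time + (1 - hit_rate) * miss_time

speedup = time_without_cache / time_with_cache
print(round(time_with_cache, 2))   # 1.45
print(round(speedup, 2))           # 6.9
```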
