
RAAJ ACADEMY

TRB ASSISTANT PROFESSOR


Computer System Architecture (UNIT-IV)
Digital Computers

A Digital computer can be considered as a digital system that performs various computational tasks.

The first electronic digital computer was developed in the late 1940s and was used primarily for
numerical computations.

By convention, digital computers use the binary number system, which has two digits: 0 and 1. A binary digit is called a bit.

A computer system is subdivided into two functional entities: Hardware and Software.

The hardware consists of all the electronic components and electromechanical devices that comprise the
physical entity of the device.

The software of the computer consists of the instructions and data that the computer manipulates to
perform various data-processing tasks.

o The Central Processing Unit (CPU) contains an arithmetic and logic unit for manipulating data, a
number of registers for storing data, and a control circuit for fetching and executing instructions.
o The memory unit of a digital computer contains storage for instructions and data.
o Random Access Memory (RAM) is used for real-time processing of the data.
o The Input-Output devices accept inputs from the user and display the final results to the
user.
o The Input-Output devices connected to the computer include the keyboard, mouse, terminals,
magnetic disk drives, and other communication devices.

Logic Gates

o The logic gates are the main structural part of a digital system.
o A logic gate is a block of hardware that produces a signal of binary 1 or 0 when its input logic
requirements are satisfied.
o Each gate has a distinct graphic symbol, and its operation can be described by means of algebraic
expressions.
o The seven basic logic gates include: AND, OR, XOR, NOT, NAND, NOR, and XNOR.
o The relationship between the input-output binary variables for each gate can be represented in
tabular form by a truth table.
o Each gate has one or two binary input variables designated by A and B and one binary output
variable designated by x.

AND GATE:

The AND gate is an electronic circuit which gives a high output only if all its inputs are high. The AND
operation is represented by a dot (.) sign.

OR GATE:

The OR gate is an electronic circuit which gives a high output if one or more of its inputs are high. The
operation performed by an OR gate is represented by a plus (+) sign.

NOT GATE:

The NOT gate is an electronic circuit which produces an inverted version of the input at its output. It is
also known as an Inverter.

NAND GATE:

The NOT-AND (NAND) gate is equivalent to an AND gate followed by a NOT gate. The NAND gate
gives a high output if any of its inputs are low. The NAND gate is represented by an AND gate with a
small circle on the output. The small circle represents inversion.

NOR GATE:

The NOT-OR (NOR) gate is equivalent to an OR gate followed by a NOT gate. The NOR gate gives a
low output if any of its inputs are high. The NOR gate is represented by an OR gate with a small circle
on the output. The small circle represents inversion.

Exclusive-OR/ XOR GATE:

The 'Exclusive-OR' gate is a circuit which will give a high output if one of its inputs is high but not both
of them. The XOR operation is represented by an encircled plus sign.

EXCLUSIVE-NOR/Equivalence GATE:

The 'Exclusive-NOR' gate is a circuit that does the inverse operation to the XOR gate. It will give a low
output if one of its inputs is high but not both of them. The small circle represents inversion.
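The behaviour of all seven gates can be sketched with one-line Python functions (an illustrative model only; the function names are ours, and real gates are of course electronic circuits):

```python
# Minimal Python models of the seven basic gates; inputs and outputs are 0 or 1.
def AND(a, b):  return a & b           # high only when both inputs are high
def OR(a, b):   return a | b           # high when at least one input is high
def NOT(a):     return 1 - a           # inverts its single input
def NAND(a, b): return NOT(AND(a, b))  # high unless both inputs are high
def NOR(a, b):  return NOT(OR(a, b))   # low when any input is high
def XOR(a, b):  return a ^ b           # high when exactly one input is high
def XNOR(a, b): return NOT(XOR(a, b))  # high when the two inputs are equal
```

Evaluating each function over the four input pairs (0,0), (0,1), (1,0), (1,1) reproduces the truth tables described above.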

Boolean algebra

Boolean algebra can be considered as an algebra that deals with binary variables and logic operations.
Boolean algebraic variables are designated by letters such as A, B, x, and y. The basic operations
performed are AND, OR, and complement.

Boolean functions are expressed with binary variables, logic operation symbols,
parentheses, and an equal sign. For a given value of the variables, the Boolean function can be either 1 or 0. For
instance, consider the Boolean function:

F = x + y'z

The logic diagram for the Boolean function F = x + y'z can be represented as:

o The Boolean function F = x + y'z is transformed from an algebraic expression into a logic diagram
composed of AND, OR, and inverter gates.

o Inverter at input 'y' generates its complement y'.
o There is an AND gate for the term y'z, and an OR gate is used to combine the two terms (x and
y'z).
o The variables of the function are taken to be the inputs of the circuit, and the variable symbol of
the function is taken as the output of the circuit.

The truth table for the Boolean function F = x + y'z can be represented as:

Examples of Boolean algebra simplifications using logic gates

In this section, we will look at some of the examples of Boolean algebra simplification using Logic gates.

1. F1 = xyz'

2. F2 = x + y'z

3. F3 = xy' + x'z

4. F4 = x'y'z + x'yz + xy'

Truth tables for F1 = xyz', F2 = x + y'z, F3 = xy' + x'z, and F4 = x'y'z + x'yz + xy':

x y z | F1 F2 F3 F4
0 0 0 |  0  0  0  0
0 0 1 |  0  1  1  1
0 1 0 |  0  0  0  0
0 1 1 |  0  0  1  1
1 0 0 |  0  1  1  1
1 0 1 |  0  1  1  1
1 1 0 |  1  1  0  0
1 1 1 |  0  1  0  0

Laws of Boolean algebra

The basic Laws of Boolean Algebra can be stated as follows:

o Commutative Law states that the interchanging of the order of operands in a Boolean equation
does not change its result. For example:
1. OR operator → A + B = B + A
2. AND operator → A * B = B * A
o Associative Law states that the grouping of variables in an AND (or OR) operation of two or more
variables does not affect the result. For example:
A * (B * C) = (A * B) * C
o Distributive Law states that ANDing a variable with a sum distributes over the sum, A (B + C) =
AB + AC, and dually, ORing a variable with a product distributes over the product. For example:
A + BC = (A + B) (A + C).
o Annulment law:
A.0 = 0
A+1=1
o Identity law:
A.1 = A
A+0=A
o Idempotent law:
A+A=A
A.A = A
o Complement law:
A + A' = 1
A.A'= 0
o Double negation law:
((A)')' = A
o Absorption law:
A.(A+B) = A
A + AB = A

De Morgan's Law, also known as De Morgan's theorem, works on the concept of Duality. Duality states
that interchanging the values and operators in a function, i.e., replacing 0 with 1 and 1 with 0, and the
AND operator with the OR operator and the OR operator with the AND operator, yields the dual of the function.

De Morgan stated 2 theorems, which will help us in solving the algebraic problems in digital electronics.
The De Morgan's statements are:

1. "The negation of a conjunction is the disjunction of the negations", which means that the
complement of the product of two variables is equal to the sum of the complements of the individual
variables. For example, (A.B)' = A' + B'.
2. "The negation of a disjunction is the conjunction of the negations", which means that the complement
of the sum of two variables is equal to the product of the complement of each variable. For
example, (A + B)' = A'B'.
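Both theorems can be verified exhaustively over the four possible input combinations (a quick check in Python; the helper name is ours, with & as AND and | as OR):

```python
# Exhaustive check of De Morgan's two theorems for 1-bit variables.
def NOT(a):
    return 1 - a

for A in (0, 1):
    for B in (0, 1):
        assert NOT(A & B) == NOT(A) | NOT(B)   # (A.B)' = A' + B'
        assert NOT(A | B) == NOT(A) & NOT(B)   # (A+B)' = A'.B'
```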

Simplification using Boolean algebra

Let us consider an example of a Boolean function: AB+A (B+C) + B (B+C)


The logic diagram for the Boolean function AB+A (B+C) + B (B+C) can be represented as:

We will simplify this Boolean function on the basis of rules given by Boolean algebra.
AB + A (B+C) + B (B+C)
AB + AB + AC + BB + BC {Distributive law; A (B+C) = AB+AC, B (B+C) = BB+BC}
AB + AB + AC + B + BC {Idempotent law; BB = B}
AB + AC + B + BC {Idempotent law; AB+AB = AB}
AB + AC +B {Absorption law; B+BC = B}
B + AC {Absorption law; AB+B = B}
Hence, the simplified Boolean function will be B + AC.
The logic diagram for Boolean function B + AC can be represented as:
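The simplification can be double-checked by evaluating both expressions for every combination of A, B, and C (a brute-force verification in Python; & is AND, | is OR):

```python
# Verify that AB + A(B+C) + B(B+C) equals the simplified form B + AC
# for all eight combinations of the 1-bit variables A, B, C.
from itertools import product

for A, B, C in product((0, 1), repeat=3):
    original   = (A & B) | (A & (B | C)) | (B & (B | C))
    simplified = B | (A & C)
    assert original == simplified
```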

Map Simplification

The Map method involves a simple, straightforward procedure for simplifying Boolean expressions.

Map simplification may be regarded as a pictorial arrangement of the truth table which allows an easy
interpretation for choosing the minimum number of terms needed to express the function algebraically.
The map method is also known as Karnaugh map or K-map.

Each combination of the variables in a truth table is called a minterm.

There are four minterms in a two-variable map. Therefore, the map consists of four squares, one for
each minterm. The 0's and 1's marked for each row and each column designate the values of variables
x and y, respectively.

Two-variable map:

Representation of functions in the two-variable map:

Three variable map

There are eight minterms in a three-variable map. Therefore, the map consists of eight squares.

Three variable map:

o The map drawn in part (b) of the above image is marked with numbers in each row and each
column to show the relationship between the squares and the three variables.
o Any two adjacent squares in the map differ by only one variable, which is primed in one square
and unprimed in the other. For example, m5 and m7 lie in the two adjacent squares. Variable y is
primed in m5 and unprimed in m7, whereas the other two variables are the same in both the
squares.
o From the postulates of Boolean algebra, it follows that the sum of two minterms in adjacent
squares can be simplified to a single AND term consisting of only two literals. For example,
consider the sum of the two adjacent squares m5 and m7:
m5 + m7 = xy'z + xyz = xz(y' + y) = xz.

Examples of Boolean algebra simplifications using the map method

F(x,y,z) = Σ(3,4,6,7)

F(A, B, C, D) = ∑(0, 2, 5, 7, 8, 10, 13, 15)

F(A, B, C, D) = Σm(0, 1, 3, 5, 7, 8, 9, 11, 13, 15)

F(A, B, C, D) = Σm(1, 3, 4, 6, 8, 9, 11, 13, 15) + Σd(0, 2, 14)

Sum of Products (SOP)

• A canonical sum of products is a Boolean expression that consists entirely of minterms. Consider a
Boolean function F defined on two variables X and Y, whose output is true when at least one of the
inputs is set to true. Each row of the truth table has a corresponding minterm M:

X Y | F | Minterm M
0 0 | 0 | X'Y'
0 1 | 1 | X'Y
1 0 | 1 | XY'
1 1 | 1 | XY

The canonical SOP form is the sum of the minterms of the rows where F = 1: F = X'Y + XY' + XY.

Product of Sums (POS)

• A canonical product of sums is a Boolean expression that consists entirely of maxterms. For the same
Boolean function F on two variables X and Y (output true when at least one of the inputs is set to
true), each row of the truth table has a corresponding maxterm M:

X Y | F | Maxterm M
0 0 | 0 | X+Y
0 1 | 1 | X+Y'
1 0 | 1 | X'+Y
1 1 | 1 | X'+Y'

The canonical POS form is the product of the maxterms of the rows where F = 0: F = X + Y.

Combinational Circuits

A combinational circuit consists of logic gates whose outputs at any time are determined directly from
the present combination of inputs, without regard to previous inputs.

A combinational circuit performs a specific information-processing operation fully specified logically by
a set of Boolean functions.

The basic components of a combinational circuit are: input variables, logic gates, and output variables.

The 'n' input variables come from an external source whereas the 'm' output variables go to an external
destination. In many applications, the source or destination are storage registers.

Design procedure of a Combinational Circuit

The design procedure of a combinational circuit involves the following steps:

1. The problem is stated.
2. The total number of available input variables and required output variables is determined.
3. The input and output variables are assigned letter symbols.

4. The exact truth table that defines the required relationships between inputs and outputs is derived.
5. The simplified Boolean function for each output is obtained.
6. The logic diagram is drawn.

The combinational circuit that performs the addition of two bits is called a half adder and the one that
performs the addition of three bits (two significant bits and a previous carry) is a full adder.

Half - Adder

A Half-adder circuit needs two binary inputs and two binary outputs. The input variables represent the augend
and addend bits, whereas the output variables produce the sum and carry. We can understand the function
of a half-adder by formulating a truth table. The truth table for a half-adder is:

o 'x' and 'y' are the two inputs, and S (Sum) and C (Carry) are the two outputs.
o The Carry output is '0' unless both the inputs are 1.
o 'S' represents the least significant bit of the sum.

The simplified sum of products (SOP) expressions are:

S = x'y+xy', C = xy

The logic diagram for a half-adder circuit can be represented as:
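The two output expressions translate directly into a small Python model (an illustrative sketch; the function name is ours):

```python
def half_adder(x, y):
    """Return (S, C) for two input bits, using S = x'y + xy' and C = xy."""
    s = ((1 - x) & y) | (x & (1 - y))  # x'y + xy', i.e. x XOR y
    c = x & y                          # carry is 1 only when both inputs are 1
    return s, c
```

For example, half_adder(1, 1) returns (0, 1): a sum bit of 0 with a carry of 1.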

Full - Adder

This circuit needs three binary inputs and two binary outputs. The truth table for a full-adder is:

o Two of the input variables, 'x' and 'y', represent the two significant bits to be added.
o The third input variable, 'z', represents the carry from the previous lower significant position.
o The outputs are designated by the symbol 'S' for sum and 'C' for carry.
o The eight rows under the input variables designate all possible combinations of 0's and 1's that
these variables may have.
o The input-output logical relationship of the full-adder circuit may be expressed in two Boolean
functions, one for each output variable.
o Each output Boolean function can be simplified using the map method.

Maps for a full-adder:

The logic diagram for a full-adder circuit can be represented as:
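Simplifying the two maps yields the well-known forms S = x ⊕ y ⊕ z and C = xy + xz + yz, which can be checked against binary arithmetic (a sketch in Python; the function name is ours):

```python
def full_adder(x, y, z):
    """Return (S, C) for two significant bits x, y and an incoming carry z."""
    s = x ^ y ^ z                    # S = x XOR y XOR z
    c = (x & y) | (x & z) | (y & z)  # C = xy + xz + yz
    return s, c

# Read as the 2-bit number (C S), the output equals the arithmetic sum x+y+z.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            s, c = full_adder(x, y, z)
            assert 2 * c + s == x + y + z
```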

Half Subtractor

• The half subtractor is a building block for subtracting two binary numbers. It has two inputs
and two outputs. This circuit is used to subtract two single-bit binary numbers A and B. The 'diff'
(difference) and 'borrow' are the two output states of the half subtractor.

Block diagram

Full Subtractor

• The Half Subtractor can subtract only two bits. To overcome this limitation, the full
subtractor was designed. The full subtractor is used to subtract three 1-bit numbers A, B, and C,
which are the minuend, subtrahend, and borrow-in, respectively. The full subtractor has three inputs
and two outputs, i.e., diff and borrow.

Block diagram

Construction of Full Subtractor Circuit:

Binary Adder

The registers play an important role in performing micro-operations: they hold the data on which the
arithmetic operations are performed. The Binary Adder is a logic circuit
which is used to perform the addition of two binary numbers of any length.

The Binary Adder is formed with the help of the Full-Adder circuit. The Full-Adders are connected in
series, and the output carry of the first Adder will be treated as the input carry of the next Full-Adder.

N-Bit Parallel Adder

The Full Adder sums two single-bit binary numbers together with a carry input. In digital computation, we
often need to add two n-bit binary numbers rather than single bits. For this purpose, we
use an n-bit parallel Adder, obtained by cascading n Full
Adders. The carry output of each Adder is treated as the carry input of the next Adder.

4-bit Binary Adder

o 'A' and 'B' are the augend and addend bits, identified by the subscript numbers. The subscripts
run from right to left, and the lowest-order bit has subscript '0'.
o C0, C1, C2, and C3 are the carry inputs, chained from one Full Adder to the next. C4 is the carry
output produced by the last Full Adder.
o The Cout of each Adder is connected as the Cin of the next Full Adder.
o S0, S1, S2, and S3 are the sum outputs that produce the sum of the augend and addend bits.
o The bits for the input variables 'A' and 'B' are fetched from different source registers. For
example, the bits of input variable 'A' come from register 'R1', and the bits of input variable
'B' come from register 'R2'.
o The result produced by adding both input variables is stored either into a third register or into one
of the source registers.
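The chaining described above can be modelled by threading the carry through a list of full adders (an illustrative sketch; the function names and the LSB-first bit order are our own choices):

```python
def full_adder(x, y, z):
    return x ^ y ^ z, (x & y) | (x & z) | (y & z)

def ripple_adder(a_bits, b_bits, carry_in=0):
    """Add two equal-length bit lists (least significant bit first).
    Returns (sum_bits, carry_out)."""
    carry, sums = carry_in, []
    for x, y in zip(a_bits, b_bits):
        s, carry = full_adder(x, y, carry)  # each carry out feeds the next stage
        sums.append(s)
    return sums, carry

# 0110 (6) + 0011 (3) = 1001 (9); bits are listed LSB first:
result, c4 = ripple_adder([0, 1, 1, 0], [1, 1, 0, 0])
```

Here result is [1, 0, 0, 1] (i.e., 1001 = 9) and the output carry c4 is 0.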

Binary Adder-Subtractor

A Binary Adder-Subtractor is a special type of circuit that can perform both operations, i.e., Addition
and Subtraction. Which operation is performed depends on the value carried by the control
signal. It is one of the most important components of the Arithmetic Logic Unit.

To work with Binary Adder-Subtractor, it is required that we have knowledge of the XOR gate, Full-
Adder, Binary Addition, and subtraction.

For example, we will take two 4-bit binary numbers 'X' and 'Y' with bits:

X0 X1 X2 X3 for X
Y0 Y1 Y2 Y3 for Y

The Binary Adder-Subtractor is a combination of four Full Adders, which is able to perform the addition and
subtraction of 4-bit binary numbers. The control line determines whether the operation being performed
is addition or subtraction, according to the binary value, 0 or 1, held by the control signal K.

In the above diagram, the control line K is connected directly as the input carry C0 of the first Full
Adder. X0, the least significant bit of X, is fed directly into the Full Adder. The result
produced by the XOR operation of Y0 and K is the third input of the first Full Adder.
The sum/difference (S0) and carry (C0) are the two outputs produced by the first Full Adder.

When the value of K is set to 1, Y0⨁K produces the complement of Y0 as the output. So the
operation becomes X + Y' with an initial carry of 1, which is the 2's complement subtraction of Y from X.
It means that when the value of K is 1, the Binary Adder-Subtractor performs subtraction.

In the same way, when the value of K is set to 0, Y0⨁K produces Y0 as the output. So the operation
is X + Y, the binary addition of X and Y. It means that when the value of K is 0, the Binary
Adder-Subtractor performs addition.

The carry/borrow C0 is treated as the carry/borrow input of the second Full Adder. The sum/difference
S0 is the least significant bit of the sum/difference of the numbers X and Y. Just like X0, the bits X1, X2, and
X3 are fed directly to the 2nd, 3rd, and 4th Full Adders as inputs. The outputs of the XOR
operations of Y1, Y2, and Y3 with K are the third inputs of the 2nd, 3rd, and 4th Full Adders. The carries
C1 and C2 are passed as inputs to the next Full Adders, and Cout is the output carry of the sum/difference. To form
the final result, S1, S2, and S3 are combined with S0. In general, n Full Adders are used to design an n-bit
binary Adder-Subtractor.

Example:

We assume that we have two 3-bit numbers, i.e., X = 100 and Y = 011, and feed them to the Full Adders as
inputs.

X0 = 0, X1 = 0, X2 = 1
Y0 = 1, Y1 = 1, Y2 = 0

For K=0:

Y0⨁K=Y0 and Cin=K=0

So, from first Full-Adder

S0 = X0+Y0+Cin

S0= 0+1+0

S0=1

C0=0

Similarly,

S1 = X1+Y1+C0
S1 = 0+1+0
S1=1 and C1=0

Similarly,

S2 = X2+Y2+C1
S2 = 1+0+0
S2=1 and C2=0

Thus,

X = 100 = 4
Y = 011 = 3
Sum = 111 = 7

For K=1

Y0⨁K=Y0' and Cin=k=1

So,

S0 = X0+Y0'+Cin
S0 = 0+0+1
S0=1 and C0=0

Similarly,

S1 = X1+Y1'+C0
S1 = 0+0+0
S1=0 and C1=0

Similarly,

S2 = X2+Y2'+C1
S2 = 1+1+0
S2=0 and C2=1 (the end carry is discarded in 2's complement subtraction)

Thus,

X = 100 = 4
Y = 011 = 3

Difference = 001 = 1
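The whole example can be reproduced by XOR-ing each Y bit with K and using K as the initial carry (an illustrative Python model; the names are ours):

```python
def adder_subtractor(x_bits, y_bits, k):
    """K=0 computes X+Y; K=1 computes X-Y by 2's complement (bits LSB first)."""
    def full_adder(x, y, z):
        return x ^ y ^ z, (x & y) | (x & z) | (y & z)
    carry, out = k, []                          # K is also the initial carry-in
    for x, y in zip(x_bits, y_bits):
        s, carry = full_adder(x, y ^ k, carry)  # Y XOR K complements Y when K=1
        out.append(s)
    return out, carry

X, Y = [0, 0, 1], [1, 1, 0]   # X = 100 (4), Y = 011 (3), LSB first
assert adder_subtractor(X, Y, 0) == ([1, 1, 1], 0)  # 4 + 3 = 7
assert adder_subtractor(X, Y, 1) == ([1, 0, 0], 1)  # 4 - 3 = 1, end carry discarded
```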

Decimal or BCD Adder

The BCD-Adder is used in computers and calculators that perform arithmetic operations directly in
the decimal number system. The BCD-Adder accepts the binary-coded form of decimal numbers. The
Decimal Adder requires a minimum of nine inputs and five outputs.

The following table is used in designing the BCD-Adder.

From the above table, it is clear that if the produced sum is between 0 and 9, the binary and BCD codes
are the same. But for the decimal numbers 10 to 19, the two codes differ: the binary
sum combinations from 10 to 19 give invalid BCD. The following points help the circuit
identify an invalid BCD code.

1. It is obvious from the table that a correction is needed when the 'Binary Sum' has an output carry
K = 1.
2. The other six combinations, from 10 to 15, need correction; in these the bit in the Z8 position is 1.
3. But in the binary sums of 8 and 9, the bit in the Z8 position is also 1. So the second condition alone
fails, and we need to refine it.
4. To distinguish these cases, we specify that the bit in the Z4 or the Z2 position must also be
1 along with the Z8 bit.
5. The condition for a correction and an output carry can therefore be expressed by the Boolean function:

C = K + Z8.Z4 + Z8.Z2

Once the circuit finds an invalid BCD code, it adds the binary number 6 (0110) to the invalid code
to make it valid.

In the above diagram,

1. We take a 4-bit Binary Adder, which takes the addend and augend bits as input along with an input
carry 'Carry in'.
2. The Binary Adder produces five outputs, i.e., Z8, Z4, Z2, Z1, and an output carry K.
3. With the help of the output carry K and the Z8, Z4, Z2, Z1 outputs, a logic circuit is designed to
produce Cout.
4. The Z8, Z4, Z2, and Z1 outputs of the binary adder are passed into the 2nd 4-bit binary adder as the
augend.
5. The addend of the 2nd 4-bit binary adder is arranged so that its 1st and 4th bits are 0 and its
2nd and 3rd bits equal Cout. When the value of Cout is 0, the addend is 0000, which produces
the same result as the 1st 4-bit binary adder. But when the value of Cout is 1, the addend is
0110, i.e., 6, which is added to the augend to get a valid BCD number.

Example: 1001+1000

1. First, add both numbers using a 4-bit binary adder with the input carry set to 0.
2. The binary adder produces the result 0001 with carry output K = 1.
3. Then, find the Cout value to determine whether the produced BCD code is valid, using the
expression Cout = K + Z8.Z4 + Z8.Z2.
K = 1
Z8 = 0
Z4 = 0
Z2 = 0
Cout = 1+0*0+0*0
Cout = 1+0+0
Cout = 1
4. The value of Cout is 1, which shows that the produced BCD code is invalid. Then, add the
output of the 1st 4-bit binary adder to 0110.
= 0001+0110
= 0111
5. The BCD result, including the carry output, is:
BCD = Cout Z8 Z4 Z2 Z1 = 1 0 1 1 1
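The correction logic above can be sketched as a one-digit BCD adder (an illustrative model; the function name and the integer encoding are ours):

```python
def bcd_digit_add(a, b, carry_in=0):
    """Add two BCD digits (0-9); return (carry_out, corrected 4-bit sum)."""
    z = a + b + carry_in               # raw result of the first 4-bit adder
    k = 1 if z > 15 else 0             # output carry K of the first adder
    z &= 0b1111                        # keep the low four bits Z8 Z4 Z2 Z1
    z8, z4, z2 = (z >> 3) & 1, (z >> 2) & 1, (z >> 1) & 1
    cout = k | (z8 & z4) | (z8 & z2)   # C = K + Z8.Z4 + Z8.Z2
    if cout:
        z = (z + 0b0110) & 0b1111      # add 6 to correct an invalid BCD code
    return cout, z
```

For the worked example, bcd_digit_add(9, 8) returns (1, 0b0111), i.e., the BCD result 1 0111 for 17.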

Decoder

The combinational circuit that converts N input lines of binary information into 2^N output lines is known
as a Decoder. The binary information is passed in the form of N input lines. The output lines define a 2^N-
bit code for the binary information. In simple words, the Decoder performs the reverse operation of
the Encoder. For each input combination, exactly one of the 2^N output lines is activated.

There are various types of decoders which are as follows:

2 to 4 line decoder:

In the 2 to 4 line decoder, there are three inputs in total, i.e., A0, A1, and an enable E, and four outputs, i.e., Y0,
Y1, Y2, and Y3. For each combination of inputs, when the enable E is set to 1, exactly one of these four outputs
will be 1. The block diagram and the truth table of the 2 to 4 line decoder are given below.

Block Diagram:


Truth Table:


The logical expressions of the terms Y0, Y1, Y2, and Y3 are as follows:

Y3=E.A1.A0
Y2=E.A1.A0'
Y1=E.A1'.A0
Y0=E.A1'.A0'

Logical circuit of the above expressions is given below:
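The four expressions map directly onto a small Python model (a sketch; the function name is ours):

```python
def decoder_2to4(e, a1, a0):
    """Return [Y0, Y1, Y2, Y3] for enable E and address inputs A1, A0."""
    n1, n0 = 1 - a1, 1 - a0        # complemented inputs A1', A0'
    return [e & n1 & n0,           # Y0 = E.A1'.A0'
            e & n1 & a0,           # Y1 = E.A1'.A0
            e & a1 & n0,           # Y2 = E.A1.A0'
            e & a1 & a0]           # Y3 = E.A1.A0
```

With E = 1 and A1 A0 = 10, only Y2 goes high; with E = 0, every output stays 0.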


3 to 8 line decoder:

The 3 to 8 line decoder is also known as a Binary to Octal Decoder. In a 3 to 8 line decoder, there are eight
outputs in total, i.e., Y0, Y1, Y2, Y3, Y4, Y5, Y6, and Y7, and three inputs, i.e., A0, A1, and A2. This circuit
has an enable input 'E'. Just like the 2 to 4 line decoder, when enable 'E' is set to 1, exactly one of these eight outputs
will be 1. The block diagram and the truth table of the 3 to 8 line decoder are given below.

Block Diagram:


Truth Table:

The logical expressions of the terms Y0, Y1, Y2, Y3, Y4, Y5, Y6, and Y7 are as follows:

Y0=A0'.A1'.A2'
Y1=A0.A1'.A2'
Y2=A0'.A1.A2'
Y3=A0.A1.A2'
Y4=A0'.A1'.A2
Y5=A0.A1'.A2
Y6=A0'.A1.A2
Y7=A0.A1.A2

Logical circuit of the above expressions is given below:

4 to 16 line Decoder

In the 4 to 16 line decoder, there are 16 outputs in total, i.e., Y0, Y1, Y2, ……, Y15, and four inputs, i.e., A0,
A1, A2, and A3. The 4 to 16 line decoder can be constructed using either 2 to 4 decoders or 3 to 8 decoders.
The following formula gives the required number of lower-order decoders.

Required number of lower order decoders = m2/m1

m1 = 8
m2 = 16

Required number of 3 to 8 decoders = 16/8 = 2

Block Diagram:

Truth Table:

The logical expressions of the terms Y0, Y1, Y2, …, Y15 are as follows:

Y0=A0'.A1'.A2'.A3'
Y1=A0'.A1'.A2'.A3
Y2=A0'.A1'.A2.A3'
Y3=A0'.A1'.A2.A3
Y4=A0'.A1.A2'.A3'
Y5=A0'.A1.A2'.A3
Y6=A0'.A1.A2.A3'
Y7=A0'.A1.A2.A3
Y8=A0.A1'.A2'.A3'
Y9=A0.A1'.A2'.A3
Y10=A0.A1'.A2.A3'
Y11=A0.A1'.A2.A3
Y12=A0.A1.A2'.A3'
Y13=A0.A1.A2'.A3
Y14=A0.A1.A2.A3'
Y15=A0.A1.A2.A3

Logical circuit of the above expressions is given below:

Encoders

The combinational circuit that converts 2^N input lines of binary information into N output lines is known as an Encoder.
The binary information is passed in the form of 2^N input lines. The output lines define the N-bit code for
the binary information. In simple words, the Encoder performs the reverse operation of the Decoder. At a
time, only one input line is activated for simplicity. The produced N-bit output code is equivalent to the
activated input line.

There are various types of encoders which are as follows:

4 to 2 line Encoder:

In the 4 to 2 line encoder, there are four inputs in total, i.e., Y0, Y1, Y2, and Y3, and two outputs, i.e., A0 and
A1. One of the four input lines is set to true at a time to get the respective binary code on the output
side. Below are the block diagram and the truth table of the 4 to 2 line encoder.

Block Diagram:

Truth Table:

The logical expressions of the terms A0 and A1 are as follows:

A1=Y3+Y2
A0=Y3+Y1

Logical circuit of the above expressions is given below:

8 to 3 line Encoder:

The 8 to 3 line Encoder is also known as an Octal to Binary Encoder. In the 8 to 3 line encoder, there are
eight inputs in total, i.e., Y0, Y1, Y2, Y3, Y4, Y5, Y6, and Y7, and three outputs, i.e., A0, A1, and A2. One of
the eight input lines is set to true at a time to get the respective binary code on the output side. Below are
the block diagram and the truth table of the 8 to 3 line encoder.

Block Diagram:


Truth Table:

The logical expressions of the terms A0, A1, and A2 are as follows:

A2=Y4+Y5+Y6+Y7
A1=Y2+Y3+Y6+Y7
A0=Y7+Y5+Y3+Y1

Logical circuit of the above expressions is given below:

Decimal to BCD Encoder

The Octal to Binary Encoder is also known as 10 to 4 line Encoder. In 10 to 4 line encoder, there are total
of ten inputs, i.e., Y0, Y1, Y2, Y3, Y4, Y5, Y6, Y7, Y8, and Y9 and four outputs, i.e., A0, A1, A2, and A3. In
10-input lines, one input-line is set to true at a time to get the respective BCD code in the output side. The
block diagram and the truth table of the decimal to BCD encoder are given below.

Block Diagram:

Truth Table:

The logical expressions of the terms A0, A1, A2, and A3 are as follows:

A3 = Y9 + Y8
A2 = Y7 + Y6 + Y5 +Y4
A1 = Y7 + Y6 + Y3 +Y2
A0 = Y9 + Y7 +Y5 +Y3 + Y1

Logical circuit of the above expressions is given below:

Priority Encoder:

4 to 2 line Priority Encoder:

In this priority encoder, there are 4 inputs in total, i.e., Y0, Y1, Y2, and Y3, and two outputs, i.e., A0 and
A1. Y3 has the highest and Y0 the lowest priority. When more than one input is '1' at the same time, the
output will be the (binary) code corresponding to the input with the higher priority. Below is the truth table of the
4 to 2 line priority encoder.

Truth Table:

The logical expressions of the terms A0 and A1 can be found using K-maps as:

A1=Y3+Y2
A0=Y3+Y2'.Y1

Logical circuit of the above expressions is given below:
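The two K-map results can be exercised directly (a sketch in Python; the function name is ours):

```python
def priority_encoder_4to2(y3, y2, y1, y0):
    """Return (A1, A0); Y3 has the highest priority and Y0 the lowest."""
    a1 = y3 | y2                    # A1 = Y3 + Y2
    a0 = y3 | ((1 - y2) & y1)       # A0 = Y3 + Y2'.Y1
    return a1, a0
```

If Y2 and Y1 are both high, the output is (1, 0): the code for the higher-priority input Y2.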

Uses of Encoders:

1. Encoders are very easy to use in all digital systems.
2. Encoders are used to convert a decimal number into a binary number, in order to perform
binary operations such as addition, subtraction, multiplication, etc.

Multiplexer

A multiplexer is a combinational circuit that has 2^n input lines and a single output line. Simply, the
multiplexer is a multi-input, single-output combinational circuit. Binary information is received
from the input lines and directed to the output line. On the basis of the values of the selection lines, one
of these data inputs is connected to the output.

Unlike the encoder and decoder, there are n selection lines and 2^n input lines, so there are 2^n possible
combinations of the selection inputs. A multiplexer is also called a Mux.

There are various types of the multiplexer which are as follows:

2×1 Multiplexer:

In the 2×1 multiplexer, there are only two inputs, i.e., A0 and A1, one selection line, i.e., S0, and a single
output, i.e., Y. On the basis of the value present on the selection line S0, one of these two
inputs is connected to the output. The block diagram and the truth table of the 2×1 multiplexer are
given below.

Block Diagram:

Truth Table:

The logical expression of the term Y is as follows:

Y=S0'.A0+S0.A1

Logical circuit of the above expression is given below:


4×1 Multiplexer:

In the 4×1 multiplexer, there are four inputs in total, i.e., A0, A1, A2, and A3, two selection lines, i.e., S0 and
S1, and a single output, i.e., Y. On the basis of the combination present at the selection
lines S0 and S1, one of these four inputs is connected to the output. The block diagram and the truth table of
the 4×1 multiplexer are given below.

Block Diagram:


Truth Table:


The logical expression of the term Y is as follows:

Y=S1'.S0'.A0+S1'.S0.A1+S1.S0'.A2+S1.S0.A3

Logical circuit of the above expression is given below:
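The selection expression above can be sketched in Python (an illustrative model; the function name is ours):

```python
def mux_4to1(a, s1, s0):
    """Route one of the four inputs a[0..3] to Y according to S1, S0."""
    n1, n0 = 1 - s1, 1 - s0        # complemented selection lines
    return ((n1 & n0 & a[0]) |     # A0 selected when S1 S0 = 00
            (n1 & s0 & a[1]) |     # A1 selected when S1 S0 = 01
            (s1 & n0 & a[2]) |     # A2 selected when S1 S0 = 10
            (s1 & s0 & a[3]))      # A3 selected when S1 S0 = 11
```

For example, mux_4to1([0, 0, 1, 0], 1, 0) returns 1, because S1 S0 = 10 selects A2.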

8 to 1 Multiplexer

In the 8 to 1 multiplexer, there are eight inputs in total, i.e., A0, A1, A2, A3, A4, A5, A6, and A7, three selection
lines, i.e., S0, S1, and S2, and a single output, i.e., Y. On the basis of the combination present
at the selection lines S0, S1, and S2, one of these eight inputs is connected to the output. The block diagram
and the truth table of the 8×1 multiplexer are given below.

Block Diagram:

Truth Table:

The logical expression of the term Y is as follows:

Y=S0'.S1'.S2'.A0+S0.S1'.S2'.A1+S0'.S1.S2'.A2+S0.S1.S2'.A3+S0'.S1'.S2.A4+S0.S1'.S2.A5+S0'.S1.S2.A6+S0.S1.S2.A7

Logical circuit of the above expression is given below:

8 ×1 multiplexer using 4×1 and 2×1 multiplexer

We can implement the 8×1 multiplexer using a lower order multiplexer. To implement the 8×1
multiplexer, we need two 4×1 multiplexers and one 2×1 multiplexer. The 4×1 multiplexer has 2 selection
lines, 4 inputs, and 1 output. The 2×1 multiplexer has only 1 selection line.

For getting 8 data inputs, we need two 4×1 multiplexers. Each 4×1 multiplexer produces one output. So, in
order to get the final output, we need a 2×1 multiplexer. The block diagram of the 8×1 multiplexer using 4×1
and 2×1 multiplexers is given below.
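The composition just described can be sketched in Python; the helper names (mux4, mux2, mux8) are illustrative, and the 4×1 stages are modelled behaviourally rather than gate by gate:

```python
# Sketch: an 8x1 mux built from two 4x1 muxes and one 2x1 mux.
# S1, S0 drive both 4x1 stages; S2 drives the final 2x1 stage.
# (all helper names are illustrative, not from the text)

def mux4(a, s1, s0):
    return a[2 * s1 + s0]          # behavioural 4x1: select one of a[0..3]

def mux2(a0, a1, s0):
    return a1 if s0 else a0        # behavioural 2x1

def mux8(a, s2, s1, s0):
    low  = mux4(a[0:4], s1, s0)    # first 4x1 handles A0..A3
    high = mux4(a[4:8], s1, s0)    # second 4x1 handles A4..A7
    return mux2(low, high, s2)     # the 2x1 picks which half reaches Y

# every select combination routes exactly input A[index] to Y
data = [0, 1, 1, 0, 1, 0, 0, 1]
assert all(mux8(data, (i >> 2) & 1, (i >> 1) & 1, i & 1) == data[i]
           for i in range(8))
```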


16 to 1 Multiplexer

In the 16 to 1 multiplexer, there are a total of 16 inputs, i.e., A0, A1, …, A15, 4 selection lines, i.e., S0, S1,
S2, and S3, and a single output, i.e., Y. On the basis of the combination of inputs that are present at the
selection lines S0, S1, S2, and S3, one of these 16 inputs will be connected to the output. The block diagram
and the truth table of the 16×1 multiplexer are given below.

Block Diagram:

Truth Table:


The logical expression of the term Y is as follows:

Y=A0.S0'.S1'.S2'.S3'+A1.S0'.S1'.S2'.S3+A2.S0'.S1'.S2.S3'+A3.S0'.S1'.S2.S3+A4.S0'.S1.S2'.S3'+A5.S0'.S1.S2'.S3+A6.S0'.S1.S2.S3'+A7.S0'.S1.S2.S3+A8.S0.S1'.S2'.S3'+A9.S0.S1'.S2'.S3+A10.S0.S1'.S2.S3'+A11.S0.S1'.S2.S3+A12.S0.S1.S2'.S3'+A13.S0.S1.S2'.S3+A14.S0.S1.S2.S3'+A15.S0.S1.S2.S3

Logical circuit of the above expression is given below:


16×1 multiplexer using 8×1 and 2×1 multiplexer

We can implement the 16×1 multiplexer using lower order multiplexers. To implement the 16×1
multiplexer, we need two 8×1 multiplexers and one 2×1 multiplexer. The 8×1 multiplexer has 3 selection
lines, 8 inputs, and 1 output. The 2×1 multiplexer has only 1 selection line.

For getting 16 data inputs, we need two 8×1 multiplexers. Each 8×1 multiplexer produces one output. So,
in order to get the final output, we need a 2×1 multiplexer. The block diagram of the 16×1 multiplexer using
8×1 and 2×1 multiplexers is given below.

De-multiplexer

A De-multiplexer is a combinational circuit that has only 1 input line and 2N output lines. Simply, the
de-multiplexer is a single-input and multi-output combinational circuit. The information is received from the
single input line and directed to one of the output lines. On the basis of the values of the selection lines, the input
will be connected to one of these outputs. De-multiplexer is opposite to the multiplexer.

Unlike the encoder and decoder, there are n selection lines and 2n outputs. So, there is a total of 2n possible
combinations of the selection inputs. De-multiplexer is also treated as De-mux.

There are various types of De-multiplexer which are as follows:

1×2 De-multiplexer:

In the 1 to 2 De-multiplexer, there are only two outputs, i.e., Y0 and Y1, 1 selection line, i.e., S0, and a
single input, i.e., A. On the basis of the selection value, the input will be connected to one of the outputs.
The block diagram and the truth table of the 1×2 de-multiplexer are given below.


Block Diagram:

Truth Table:

The logical expression of the term Y is as follows:

Y0=S0'.A
Y1=S0.A

Logical circuit of the above expressions is given below:
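A minimal Python sketch of the two output equations; the function name demux2 is illustrative:

```python
# Behavioural sketch of Y0 = S0'.A and Y1 = S0.A
# (the name demux2 is illustrative, not from the text)

def demux2(a, s0):
    """Route input A to Y0 when S0 = 0, to Y1 when S0 = 1."""
    return (a & (s0 ^ 1), a & s0)

assert demux2(1, 0) == (1, 0)   # S0=0: A appears on Y0
assert demux2(1, 1) == (0, 1)   # S0=1: A appears on Y1
```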

1×4 De-multiplexer:

In the 1 to 4 De-multiplexer, there are a total of four outputs, i.e., Y0, Y1, Y2, and Y3, 2 selection lines, i.e.,
S0 and S1, and a single input, i.e., A. On the basis of the combination of inputs present at the
selection lines S0 and S1, the input will be connected to one of the outputs. The block diagram and the truth
table of the 1×4 de-multiplexer are given below.

Block Diagram:

Truth Table:


The logical expression of the term Y is as follows:

Y0=S1' S0' A
Y1=S1' S0 A
Y2=S1 S0' A
Y3=S1 S0 A

Logical circuit of the above expressions is given below:
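The four output equations can be sketched directly in Python; the function name demux4 and the bitwise encoding are illustrative assumptions:

```python
# Direct sketch of the four 1x4 de-multiplexer output equations
# (the name demux4 is illustrative, not from the text)

def demux4(a, s1, s0):
    n1, n0 = s1 ^ 1, s0 ^ 1                   # complemented select lines
    return (n1 & n0 & a,                      # Y0 = S1'.S0'.A
            n1 & s0 & a,                      # Y1 = S1'.S0.A
            s1 & n0 & a,                      # Y2 = S1.S0'.A
            s1 & s0 & a)                      # Y3 = S1.S0.A

assert demux4(1, 1, 0) == (0, 0, 1, 0)        # S1S0=10 routes A to Y2
assert demux4(0, 1, 0) == (0, 0, 0, 0)        # A=0: every output is 0
```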

1×8 De-multiplexer

In the 1 to 8 De-multiplexer, there are a total of eight outputs, i.e., Y0, Y1, Y2, Y3, Y4, Y5, Y6, and Y7, 3 selection
lines, i.e., S0, S1, and S2, and a single input, i.e., A. On the basis of the combination of inputs present
at the selection lines S0, S1, and S2, the input will be connected to one of these outputs. The block diagram
and the truth table of the 1×8 de-multiplexer are given below.

Block Diagram:

Truth Table:

The logical expression of the term Y is as follows:

Y0=S0'.S1'.S2'.A
Y1=S0.S1'.S2'.A
Y2=S0'.S1.S2'.A
Y3=S0.S1.S2'.A
Y4=S0'.S1'.S2 A
Y5=S0.S1'.S2 A
Y6=S0'.S1.S2 A
Y7=S0.S1.S2.A

Logical circuit of the above expressions is given below:

1×8 De-multiplexer using 1×4 and 1×2 de-multiplexer

We can implement the 1×8 de-multiplexer using lower order de-multiplexers. To implement the 1×8
de-multiplexer, we need two 1×4 de-multiplexers and one 1×2 de-multiplexer. The 1×4 de-multiplexer has 2
selection lines, 4 outputs, and 1 input. The 1×2 de-multiplexer has only 1 selection line.

For getting 8 data outputs, we need two 1×4 de-multiplexers. The 1×2 de-multiplexer produces two outputs.
So, in order to get the final outputs, we have to pass the outputs of the 1×2 de-multiplexer as the inputs of
both 1×4 de-multiplexers. The block diagram of the 1×8 de-multiplexer using 1×4 and 1×2 de-multiplexers is
given below.
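The cascade described above can be sketched in Python; the helper names are illustrative and each stage is modelled behaviourally:

```python
# Sketch: a 1x8 de-mux from one 1x2 stage (driven by S2) feeding two
# 1x4 stages (driven by S1, S0). Helper names are illustrative.

def demux2(a, s):                 # 1x2 stage
    return (a & (s ^ 1), a & s)

def demux4(a, s1, s0):            # behavioural 1x4 stage
    y = [0, 0, 0, 0]
    y[2 * s1 + s0] = a
    return y

def demux8(a, s2, s1, s0):
    low_in, high_in = demux2(a, s2)              # 1x2 splits A by S2
    return demux4(low_in, s1, s0) + demux4(high_in, s1, s0)

outs = demux8(1, 1, 0, 1)
assert outs == [0, 0, 0, 0, 0, 1, 0, 0]          # A lands on Y5 (binary 101)
```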

1 x 16 De-multiplexer

In the 1×16 de-multiplexer, there are a total of 16 outputs, i.e., Y0, Y1, …, Y15, 4 selection lines, i.e., S0, S1, S2,
and S3, and a single input, i.e., A. On the basis of the combination of inputs present at the selection
lines S0, S1, S2, and S3, the input will be connected to one of these outputs. The block diagram and the truth
table of the 1×16 de-multiplexer are given below.

Block Diagram:

Truth Table:

The logical expression of the term Y is as follows:

Y0=A.S0'.S1'.S2'.S3'
Y1=A.S0'.S1'.S2'.S3
Y2=A.S0'.S1'.S2.S3'
Y3=A.S0'.S1'.S2.S3
Y4=A.S0'.S1.S2'.S3'
Y5=A.S0'.S1.S2'.S3

Y6=A.S0'.S1.S2.S3'
Y7=A.S0'.S1.S2.S3
Y8=A.S0.S1'.S2'.S3'
Y9=A.S0.S1'.S2'.S3
Y10=A.S0.S1'.S2.S3'
Y11=A.S0.S1'.S2.S3
Y12=A.S0.S1.S2'.S3'
Y13=A.S0.S1.S2'.S3
Y14=A.S0.S1.S2.S3'
Y15=A.S0.S1.S2.S3

Logical circuit of the above expressions is given below:

1×16 de-multiplexer using 1×8 and 1×2 de-multiplexer

We can implement the 1×16 de-multiplexer using lower order de-multiplexers. To implement the 1×16
de-multiplexer, we need two 1×8 de-multiplexers and one 1×2 de-multiplexer. The 1×8 de-multiplexer has 3
selection lines, 1 input, and 8 outputs. The 1×2 de-multiplexer has only 1 selection line.

For getting 16 data outputs, we need two 1×8 de-multiplexers, each of which produces eight outputs. To
split the single input between them, we need a 1×2 de-multiplexer, which produces two outputs from a
single input. We then pass these two outputs into the two 1×8 de-multiplexers as their inputs. The block diagram of
the 1×16 de-multiplexer using 1×8 and 1×2 de-multiplexers is given below.

Introduction

In our previous sections, we learned about combinational circuits and their working. The combinational
circuits have a set of outputs which depends only on the present combination of inputs. Below is the block
diagram of the synchronous logic circuit.

The sequential circuit is a special type of circuit that has a series of inputs and outputs. The outputs of the
sequential circuits depend on both the combination of present inputs and previous outputs. The previous

output is treated as the present state. So, the sequential circuit contains the combinational circuit and
memory storage elements. However, a sequential circuit does not always need to contain a combinational
circuit; it can contain only the memory element.

Difference between the combinational circuits and sequential circuits are given below:

Combinational Circuits | Sequential Circuits

1) The outputs of the combinational circuit depend only on the present inputs. | The outputs of the sequential circuits depend on both present inputs and present state (previous output).

2) The feedback path is not present in the combinational circuit. | The feedback path is present in the sequential circuits.

3) In combinational circuits, memory elements are not required. | In the sequential circuit, memory elements play an important role and are required.

4) The clock signal is not required for combinational circuits. | The clock signal is required for sequential circuits.

5) The combinational circuit is simple to design. | It is not simple to design a sequential circuit.

Types of Sequential Circuits

Asynchronous sequential circuits

Clock signals are not used by asynchronous sequential circuits. The asynchronous circuit is
operated through pulses, so changes in the input can change the state of the circuit. The internal
state is changed when an input variable is changed. Un-clocked flip flops or time-delay elements are the
memory elements of asynchronous sequential circuits. The asynchronous sequential circuit is similar to a
combinational circuit with feedback.

Synchronous sequential circuits

In synchronous sequential circuits, synchronization of the memory element's state is done by the clock
signal. The output is stored in either flip-flops or latches(memory devices). The synchronization of the
outputs is done with either only negative edges of the clock signal or only positive edges.

Clock Signal and Triggering

Clock signal

A clock signal is a periodic signal in which ON time and OFF time need not be the same. When ON time
and OFF time of the clock signal are the same, a square wave is used to represent the clock signal. Below
is a diagram which represents the clock signal:

Such a clock signal is considered a square wave: the signal stays at each logic level, either high (5V) or
low (0V), for an equal amount of time. It repeats with a certain time period, which is equal to twice the
'ON time' or 'OFF time'.

Types of Triggering

These are two types of triggering in sequential circuits:

Level triggering

The logic High and logic Low are the two levels in the clock signal. In level triggering, when the clock
pulse is at a particular level, only then the circuit is activated. There are the following types of level
triggering:

Positive level triggering

In a positive level triggering, the signal with Logic High occurs. So, in this triggering, the circuit is
operated with such type of clock signal. Below is the diagram of positive level triggering:

Negative level triggering

In negative level triggering, the signal with Logic Low occurs. So, in this triggering, the circuit is operated
with such type of clock signal. Below is the diagram of Negative level triggering:

Edge triggering

In clock signal of edge triggering, two types of transitions occur, i.e., transition either from Logic Low to
Logic High or Logic High to Logic Low.

Based on the transitions of the clock signal, there are the following types of edge triggering:

Positive edge triggering

The transition from Logic Low to Logic High occurs in the clock signal of positive edge triggering. So,
in positive edge triggering, the circuit is operated with such type of clock signal. The diagram of positive
edge triggering is given below.

Negative edge triggering

The transition from Logic High to Logic low occurs in the clock signal of negative edge triggering. So, in
negative edge triggering, the circuit is operated with such type of clock signal. The diagram of negative
edge triggering is given below.

Basics of Flip Flop

A circuit that has two stable states is treated as a flip flop. These stable states are used to store binary data
that can be changed by applying varying inputs. The flip flops are the fundamental building blocks of the
digital system. Flip flops and latches are examples of data storage elements. In the sequential logical
circuit, the flip flop is the basic storage element. The latches and flip flops are the basic storage elements
but different in working. There are the following types of flip flops:

SR Flip Flop

The S-R flip flop is the most common flip flop used in digital systems. In the SR flip flop, when the set
input "S" is true, the output Y will be high, and Y' will be low. Once the outputs are established, the
circuit maintains them until the set or reset input goes high, or the power is shut down.

The S-R flip flop is the simplest and easiest circuit to understand.

Truth Table:

J-K Flip-flop

The JK flip flop is used to remove the drawback of the S-R flip flop, i.e., undefined states. The JK flip
flop is formed by doing modification in the SR flip flop. The S-R flip flop is improved in order to construct
the J-K flip flop. When S and R input is set to true, the SR flip flop gives an inaccurate result. But in the
case of JK flip flop, it gives the correct output.

In the J-K flip flop, if the two inputs are different, the output Y takes the value of J at the next clock edge. If
both inputs are low, no change occurs; if both are high at the clock edge, the output toggles from one
state to the other. The JK Flip Flop is a Set or Reset Flip flop in the digital
system.

Truth Table:

D Flip Flop

D flip flop is a widely used flip flop in digital systems. The D flip flop is mostly used in shift-registers,
counters, and input synchronization.

Truth Table:

T Flip Flop

Just like JK flip-flop, T flip flop is used. Unlike JK flip flop, in T flip flop, there is only single input with
the clock input. The T flip flop is constructed by connecting both of the inputs of JK flip flop together as
a single input.

The T flip flop is also known as Toggle flip-flop. These T flip-flops are able to find the complement of its
state.

Truth Table:

SR Flip Flop

The SR flip flop is a 1-bit memory bistable device having two inputs, i.e., SET and RESET. The SET
input 'S' set the device or produce the output 1, and the RESET input 'R' reset the device or produce the
output 0. The SET and RESET inputs are labeled as S and R, respectively.

The SR flip flop stands for "Set-Reset" flip flop. The reset input is used to get back the flip flop to its
original state from the current state with an output 'Q'. This output depends on the set and reset conditions,
which is either at the logic level "0" or "1".

The NAND gate SR flip flop is a basic flip flop which provides feedback from both of its outputs back to
its opposing input. This circuit is used to store the single data bit in the memory circuit. So, the SR flip
flop has a total of three inputs, i.e., 'S' and 'R', and current output 'Q'. This output 'Q' is related to the
current history or state. The term "flip-flop" relates to the actual operation of the device, as it can be
"flipped" to a logic set state or "flopped" back to the opposing logic reset state.

The NAND Gate SR Flip-Flop

We can implement the set-reset flip flop by connecting two cross-coupled 2-input NAND gates together.
In the SR flip flop circuit, from each output to one of the other NAND gate inputs, feedback is connected.
So, the device has two inputs, i.e., Set 'S' and Reset 'R' with two outputs Q and Q' respectively. Below are
the block diagram and circuit diagram of the S-R flip flop.

Block Diagram:

Circuit Diagram:

The Set State

In the above diagram, when the input R is set to false or 0 and the input S is set to true or 1, the NAND
gate Y has an input 0, which will produce the output Q' 1. The value of Q' is fed to the NAND gate 'X'
as input 'A', and now both the inputs of the NAND gate 'X' are 1(S=A=1), which will produce the output
'Q' 0.

Now, if the input R is changed to 1 with 'S' remaining 1, the inputs of NAND gate 'Y' is R=1 and B=0.
Here, one of the inputs is also 0, so the output of Q' is 1. So, the flip flop circuit is set or latched with Q=0
and Q'=1.

Reset State

The output Q' is 0, and output Q is 1 in the second stable state. It is given by R =1 and S = 0. One of the
inputs of NAND gate 'X' is 0, and its output Q is 1. Output Q is fed to NAND gate Y as input B. So,
both the inputs to NAND gate Y are set to 1, therefore, Q' = 0.

Now, if the input S is changed to 0 with 'R' remaining 1, the output Q' will be 0 and there is no change in
state. So, the reset state of the flip flop circuit has been latched, and the set/reset actions are defined in the
following truth table:

From the above truth table, we can see that when the set 'S' and reset 'R' inputs are set to 1, the outputs Q and
Q' will be either 1 or 0. These outputs depend on the states of the inputs S and R before this input condition
existed. So, when the inputs are 1, the states of the outputs remain unchanged.

The condition in which both the input states are set to 0 is treated as invalid and must be avoided.
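The latching behaviour described above can be sketched in Python by iterating the two cross-coupled NAND equations until the feedback settles; the function names and the fixed number of settling passes are illustrative assumptions:

```python
# Behavioural sketch of the cross-coupled NAND latch described above:
# gate X computes Q = NAND(S, Q') and gate Y computes Q' = NAND(R, Q).
# The settle loop and names are illustrative assumptions.

def nand(a, b):
    return 1 - (a & b)

def sr_latch(s, r, q=0, qn=1):
    """Iterate the feedback a few passes until the outputs settle."""
    for _ in range(4):
        q, qn = nand(s, qn), nand(r, q)
    return q, qn

print(sr_latch(1, 0))   # settles at Q=0, Q'=1, as described in the text
print(sr_latch(0, 1))   # settles at the opposite state, Q=1, Q'=0
print(sr_latch(0, 0))   # invalid condition: both outputs are driven to 1
```

With S=R=1 the latch simply holds whatever state it was in, matching the "no change" behaviour described above.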

JK Flip Flop

The SR Flip Flop or Set-Reset flip flop has lots of advantages. But, it has the following switching
problems:

o When Set 'S' and Reset 'R' inputs are set to 0, this condition is always avoided.

o When the Set or Reset input changes their state while the enable input is 1, the incorrect latching
action occurs.

The JK Flip Flop removes these two drawbacks of SR Flip Flop.

The JK flip flop is one of the most used flip flops in digital circuits. The JK flip flop is a universal flip
flop having two inputs 'J' and 'K'. In SR flip flop, the 'S' and 'R' are the shortened abbreviated letters for
Set and Reset, but J and K are not. The J and K are themselves autonomous letters which are chosen to
distinguish the flip flop design from other types.

The JK flip flop works in the same way as the SR flip flop. The JK flip flop has inputs 'J' and 'K'
instead of 'S' and 'R'. The only difference between the JK flip flop and the SR flip flop is that when both inputs
of the SR flip flop are set to 1, the circuit produces an invalid state at the output, but in the case of the JK flip flop, there
are no invalid states even if both the 'J' and 'K' inputs are set to 1.

The JK Flip Flop is a gated SR flip-flop with the addition of clock input circuitry. The invalid or
illegal output condition that occurs when both of the inputs are set to 1 is prevented by the addition of the
clock input circuit. So, the JK flip-flop has four possible input combinations, i.e., 1, 0, "no change", and
"toggle". The symbol of the JK flip flop is the same as that of the SR Bistable Latch except for the addition of a clock
input.

Block Diagram:

Circuit Diagram:

In SR flip flop, both the inputs 'S' and 'R' are replaced by two inputs J and K. It means the J and K input
equates to S and R, respectively.

The two 2-input AND gates are replaced by two 3-input NAND gates. The third input of each gate is
connected to the outputs at Q and Q'. The cross-coupling of the SR flip-flop permits the previous invalid
condition of (S = "1", R = "1") to be used to produce the "toggle action" as the two inputs are now
interlocked.

If the circuit is "SET", the J input is inhibited by the "0" status of Q' through the lower NAND gate.
If the circuit is "RESET", the K input is inhibited by the "0" status of Q through the upper NAND gate.
Since Q and Q' are always different, we can use them to control the input. When both inputs 'J' and 'K' are
set to 1, the JK toggles the flip flop as per the given truth table.

Truth Table:

When both inputs of the JK flip flop are set to 1 and the clock input is pulsed "High", the circuit will toggle
from the SET state to the RESET state. The JK flip flop works as a T-type toggle flip flop when
both of its inputs are set to 1.
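The truth-table behaviour can be sketched as a small Python class; the class name and the purely behavioural (non-gate-level) model are illustrative assumptions:

```python
# Behavioural sketch of the JK truth table (class name is illustrative):
# J=K=0 holds, J=1/K=0 sets, J=0/K=1 resets, J=K=1 toggles.

class JKFlipFlop:
    def __init__(self):
        self.q = 0                # output Q starts in the reset state

    def clock(self, j, k):
        """Apply one active clock edge with inputs J and K."""
        if j and not k:
            self.q = 1            # set
        elif k and not j:
            self.q = 0            # reset
        elif j and k:
            self.q ^= 1           # toggle: no invalid state remains
        return self.q             # J=K=0 falls through: hold

ff = JKFlipFlop()
assert ff.clock(1, 1) == 1        # toggle 0 -> 1
assert ff.clock(1, 1) == 0        # toggle 1 -> 0
assert ff.clock(1, 0) == 1        # set
assert ff.clock(0, 0) == 1        # hold
assert ff.clock(0, 1) == 0        # reset
```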

The JK flip flop is an improved clocked SR flip flop. But it still suffers from the "race" problem. This
problem occurs when the state of the output Q changes before the clock input's timing pulse has had time
to go "Off". We have to keep the timing pulse period (T) short to avoid this problem.

D Flip Flop

In SR NAND Gate Bistable circuit, the undefined input condition of SET = "0" and RESET = "0" is
forbidden. It is the drawback of the SR flip flop. This state:

1. Overrides the feedback latching action.
2. Forces both outputs to be 1.
3. Loses control to the input which first goes to 1, while the resulting state of the latch is controlled by the
other input, which remains "0".

We need an inverter to prevent this from happening. We connect the inverter between the Set and Reset
inputs for producing another type of flip flop circuit called D flip flop, Delay flip flop, D-type Bistable,
D-type flip flop.

The D flip flop is the most important of the clocked flip flop types. It ensures that the two inputs,
i.e., S and R, are never equal to 1 at the same time. The Delay flip-flop is designed using a gated SR flip-flop
with an inverter connected between the inputs, allowing for a single input D (Data).

This single data input, which is labeled as "D" used in place of the "Set" input and for the complementary
"Reset" input, the inverter is used. Thus, the level-sensitive D-type or D flip flop is constructed from a
level-sensitive SR flip flop.

So, here S=D and R= ~D(complement of D)

Block Diagram

Circuit Diagram

We know that the SR flip-flop requires two inputs, i.e., one to "SET" the output and another to "RESET"
the output. By using an inverter, we can set and reset the outputs with only one input as now the two input
signals complement each other. In SR flip flop, when both the inputs are 0, that state is no longer possible.
It is an ambiguity that is removed by the complement in D-flip flop.

In D flip flop, the single input "D" is referred to as the "Data" input. When the data input is set to 1, the
flip flop would be set, and when it is set to 0, the flip flop would change and become reset. However, this
would be pointless since the output of the flip flop would always change on every pulse applied to this
data input.

The "CLOCK" or "ENABLE" input is used to avoid this by isolating the data input from the flip flop's
latching circuitry. The D input condition is copied to the output Q only when the clock input is set to true.
This forms the basis of another sequential device referred to as the D Flip Flop.

When the clock input is set to 1, the "set" and "reset" inputs of the flip-flop are both set to 1. So it will not
change the state and store the data present on its output before the clock transition occurred. In simple
words, the output is "latched" at either 0 or 1.

Truth Table for the D-type Flip Flop

Symbols ↓ and ↑ indicate the direction of the clock pulse edge. The D-type flip flop treats these symbols as
edge-triggers.
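The edge-triggered behaviour can be sketched in Python; the class name and the explicit previous-clock bookkeeping are illustrative assumptions:

```python
# Sketch of a positive-edge-triggered D flip flop: Q copies D only on a
# 0 -> 1 clock transition, otherwise it holds (names are illustrative).

class DFlipFlop:
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def tick(self, d, clk):
        if self.prev_clk == 0 and clk == 1:   # rising edge detected
            self.q = d                        # latch the data input
        self.prev_clk = clk
        return self.q

ff = DFlipFlop()
assert ff.tick(1, 0) == 0   # no edge: D is ignored
assert ff.tick(1, 1) == 1   # rising edge: Q <- D
assert ff.tick(0, 1) == 1   # clock held high: Q unchanged
assert ff.tick(0, 0) == 1   # falling edge: still unchanged
assert ff.tick(0, 1) == 0   # next rising edge: Q <- 0
```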

T Flip Flop

In the T flip flop, "T" defines the term "Toggle". It is a modified SR Flip Flop in which we provide only a single
input, called the "Toggle" or "Trigger" input, to avoid an intermediate state occurrence. Now, this flip-flop works as a Toggle

switch. The next output state is the complement of the present state output. This process is
known as "Toggling".

We can construct the "T Flip Flop" by making changes in the "JK Flip Flop". The "T Flip Flop" has only
one input, which is constructed by connecting both inputs of the JK flip flop together. This single input is called T. In
simple words, we can construct the "T Flip Flop" by converting a "JK Flip Flop". Sometimes the "T Flip
Flop" is referred to as a single input "JK Flip Flop".

Block diagram of the "T-Flip Flop" is given where T defines the "Toggle input", and CLK defines the
clock signal input.

T Flip Flop Circuit

There are the following two methods which are used to form the "T Flip Flop":

o By connecting the output feedback to the input in the "SR Flip Flop".

o By passing the output of the XOR operation of T and QPREV as the
D input in the "D Flip Flop".

Construction

The "T Flip Flop" is designed by passing the AND gate's output as input to the NOR gate of the "SR Flip
Flop". The inputs of the "AND" gates, the present output state Q, and its complement Q' are sent back to
each AND gate. The toggle input is passed to the AND gates as input. These gates are connected to the
Clock (CLK) signal. In the "T Flip Flop", a pulse train of narrow triggers are passed as the toggle input,
which changes the flip flop's output state. The circuit diagram of the "T Flip Flop" using "SR Flip Flop"
is given below:

The "T Flip Flop" is formed using the "D Flip Flop". In D flip - flop, the output after performing
the XOR operation of the T input with the output "QPREV" is passed as the D input. The logical circuit of
the "T-Flip Flop" using the "D Flip Flop" is given below:

The simplest construction of a T Flip Flop is with a JK Flip Flop. Both the inputs of the "JK Flip Flop" are
connected as a single input T. Below is the logical circuit of the "T Flip Flop" which is formed from the
"JK Flip Flop":

Truth Table of T Flip Flop

The upper NAND gate is enabled, and the lower NAND gate is disabled when the output Q is set to 0. To
make the flip flop enter the "set state (Q=1)", the trigger passes the S input to the flip flop.

The upper NAND gate is disabled, and the lower NAND gate is enabled when the output Q is set to 1.
The trigger passes the R input in the flip flop to make the flip flop in the reset state(Q=0).

Operations of T-Flip Flop

The next state of the T flip flop is similar to the current state when the T input is set to false or 0.

o If toggle input is set to 0 and the present state is also 0, the next state will be 0.
o If toggle input is set to 0 and the present state is 1, the next state will be 1.

The next state of the flip flop is opposite to the current state when the toggle input is set to 1.

o If toggle input is set to 1 and the present state is 0, the next state will be 1.
o If toggle input is set to 1 and the present state is 1, the next state will be 0.

The "T Flip Flop" is toggled when the set and reset inputs are alternately changed by the incoming trigger.
The "T Flip Flop" requires two triggers to complete a full cycle of the output waveform. The frequency
of the output produced by the "T Flip Flop" is half of the input frequency. The "T Flip Flop" thus works as a
"Frequency Divider Circuit."
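The frequency-divider behaviour is easy to see in a short Python sketch (purely behavioural, with T held at 1):

```python
# Sketch of the "frequency divider" behaviour: with T held at 1, Q
# completes one full cycle for every two clock triggers.

q = 0
trace = []
for _ in range(8):            # eight incoming triggers
    q ^= 1                    # T=1: toggle on every trigger
    trace.append(q)

print(trace)                  # [1, 0, 1, 0, 1, 0, 1, 0]
# Q changes once per trigger, so two triggers give one full output
# cycle: the output frequency is half the input frequency.
```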

In "T Flip Flop", the state at an applied trigger pulse is defined only when the previous state is defined. It
is the main drawback of the "T Flip Flop".

The "T flip flop" can be designed from the "JK Flip Flop", "SR Flip Flop", or "D Flip Flop" because the "T
Flip Flop" is not available as an IC. The block diagram of the "T Flip Flop" using the "JK Flip Flop" is given below:

Master-Slave JK Flip Flop

In the "JK Flip Flop", when both the inputs and CLK are set to 1 for a long time, the output Q toggles as long as the
CLK remains 1. Thus, an uncertain or unreliable output is produced. This problem is referred to as the race-around
condition in the JK flip-flop and is avoided by ensuring that CLK is set to 1 only for a very short time.

Explanation

The master-slave flip flop is constructed by combining two JK flip flops. These flip flops are connected
in a series configuration. Of these two flip flops, the 1st flip flop works as the "master", called the master flip
flop, and the 2nd works as the "slave", called the slave flip flop. The master-slave flip flop is designed in such a
way that the output of the "master" flip flop is passed to both the inputs of the "slave" flip flop. The output
of the "slave" flip flop is passed back to the inputs of the master flip flop.

In the "master-slave flip flop", apart from these two flip flops, an inverter or NOT gate is also used. For
passing the inverted clock pulse to the "slave" flip flop, the inverter is connected to the clock pulse. In
simple words, when CP is set to false for the "master", then CP is set to true for the "slave", and when CP is set to true
for the "master", then CP is set to false for the "slave".

Working:

o When the clock pulse is true, the slave flip flop will be in the isolated state, and the system's state
may be affected by the J and K inputs. The "slave" remains isolated until the CP is 1. When the
CP set to 0, the master flip-flop passes the information to the slave flip flop to obtain the output.

o The master flip flop responds before the slave because the master flip flop is positive level
triggered, and the slave flip flop is negative level triggered.
o The output Q'=1 of the master flip flop is passed to the slave flip flop as input K when input
J is set to 0 and K is set to 1. The clock forces the slave flip flop to reset, and then the slave
copies the master flip flop.
o When J=1 and K=0, the output Q=1 is passed to the J input of the slave. The clock's negative
transition sets the slave, which copies the master.
o The master flip flop toggles on the clock's positive transition when the inputs J and K are set to 1. At
that time, the slave flip flop toggles on the clock's negative transition.
o The flip flop will be disabled, and Q remains unchanged, when both the inputs of the JK flip flop are
set to 0.

Timing Diagram of a Master Flip Flop:

o When the clock pulse is set to 1, the output of the master flip flop remains 1 until the clock input
returns to 0.
o When the clock pulse becomes high again, the master's output is 0, which will be set to 1
when the clock becomes 1 again.
o The master flip flop is operational when the clock pulse is 1. The slave's output remains 0 until the
clock is set to 0, because the slave flip flop is not operational.
o The slave flip flop is operational when the clock pulse is 0. The output of the master remains 1
until the clock is set to 1 again.
o Toggling occurs during the entire process because the output changes once per cycle.

Registers

A Register is a collection of flip flops. A flip flop is used to store a single bit of digital data. For storing a large
number of bits, the storage capacity is increased by grouping more than one flip flop. If we want to store
an n-bit word, we have to use an n-bit register containing n flip flops.

The register is used to perform different types of operations. For performing the operations, the CPU uses
these registers. The inputs fed to the system are stored in registers, and the results returned by the
system are stored in registers. The following operations are performed using registers:

Fetch:

It is used:

o To take the instructions given by the users.
o To fetch the instructions stored in the main memory.

Decode:

The decode operation is used to interpret the instructions. In decode, the operation performed on the
instructions is identified by the CPU. In simple words, the decode operation is used to decode the
instructions.

Execute:

The execution operation is used to store the result produced by the CPU into the memory. After storing
this result, it is displayed on the user screen.

Types of Registers

There are various types of registers which are as follows:

MAR or Memory Address Register

The MAR is a special type of register that contains the memory address of the data and instruction. The
main task of the MAR is to access instruction and data from memory in the execution phase. The MAR
stores the address of the memory location where the data is to be read or to be stored by the CPU.

Program Counter

The program counter is also called the instruction address register or instruction pointer. The memory
address of the next instruction, which is going to be executed after completing the execution of the current
instruction, is contained in the program counter. In simple words, the program counter contains the memory
address of the location of the next instruction.

Accumulator Register

The CPU mostly uses an accumulator register. The accumulator register is used to store the system results.
When the CPU produces a result after processing, it is stored in the accumulator
register.

MDR or Memory Data Register

Memory Data Register is a part of the computer's control unit. It contains the data that we want to store in
the computer storage or the data fetched from the computer storage. The MDR works as a buffer that
holds anything the processor is ready to use. The MDR contains a copy of the memory data
for the processor. First, the MDR holds the information, and then it goes to the decoder.

The data which is to be read out or written into the address location is contained in the Memory Data
Register.

The data flows in one direction when it is fetched from memory and placed into the MDR. In a write instruction, the data is placed into the MDR from another CPU register, and the MDR then writes the data into memory. Half of the minimal interface between the computer storage and the microprogram is the memory address register, and the other half is the memory data register.

Index Register

The Index Register is a hardware element that holds a number. This number is added to the address in a computer instruction to create an effective address. In the CPU, the index register is a processor register used to modify operand addresses while a program is running.

Memory Buffer Register

Memory Buffer Register is mostly called MBR. The MBR contains the data and instructions written into or read from memory. In simple words, it is used to store the data/instructions coming from the memory or going to the memory.

Data Register

The data register is used to temporarily store the data. This data transmits to or from a peripheral device.

Shift Register

A group of flip flops used to store multiple bits of data, in which the data is moved from one flip flop to another, is known as a Shift Register. The bits stored in the register are shifted when a clock pulse is applied, within, into, or out of the register. To form an n-bit shift register, we have to connect n flip flops, so the number of flip flops equals the number of bits of the binary number. The flip flops are connected in such a way that the output of one flip flop becomes the input of the next.

A Shift Register can shift the bits either to the left or to the right. A Shift Register that shifts the bits to the left is known as a "Shift left register", and one that shifts the bits to the right is known as a "Shift right register".

The shift register is classified into the following types:

o Serial In Serial Out


o Serial In Parallel Out
o Parallel In Serial Out
o Parallel In Parallel Out
o Bi-directional Shift Register
o Universal Shift Register

Serial IN Serial OUT

In "Serial Input Serial Output", the data is shifted "IN" or "OUT" serially. In SISO, a single bit is shifted
at a time in either right or left direction under clock control.

Initially, all the flip-flops are set in the "reset" condition, i.e., Y3 = Y2 = Y1 = Y0 = 0. If we pass the binary number 1111, the LSB of the number is applied first to the Din input. The serial data input Din is connected directly to the D3 input of flip flop FF-3. The output Y3 is passed to the data input D2 of the next flip flop. This process remains the same for the remaining flip flops. The block diagram of the "Serial IN Serial OUT" register is given below.

Block Diagram:

Operation

When the clock signal is disabled, the outputs are Y3 Y2 Y1 Y0 = 0000. The LSB of the number is passed to the data input Din, i.e., D3. We apply the clock and, this time, the value of D3 is 1. The first flip flop, i.e., FF-3, is set, and the word is stored in the register at the first falling edge of the clock. Now, the stored word is 1000.

The next bit of the binary number, i.e., 1, is passed to the data input D2. The second flip flop, i.e., FF-2, is
set, and the word is stored when the next negative edge of the clock hits. The stored word is changed to
1100.

The next bit of the binary number, i.e., 1, is passed to the data input D1, and the clock is applied. The third
flip flop, i.e., FF-1, is set, and the word is stored when the negative edge of the clock hits again. The stored
word is changed to 1110.

Similarly, the last bit of the binary number, i.e., 1, is passed to the data input D0, and the clock is applied.
The last flip flop, i.e., FF-0, is set, and the word is stored when the clock's negative edge arrives. The
stored word is changed to 1111.
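The shifting described above can be modelled with a few lines of Python (a behavioural sketch, not circuit-level code): on every clock edge the serial input enters the leftmost flip flop Y3, and each stored bit moves one place to the right.

```python
# Behavioural model of a 4-bit SISO register: register = [Y3, Y2, Y1, Y0].
def siso_shift(register, serial_in):
    return [serial_in] + register[:-1]    # serial bit in, all bits move right

reg = [0, 0, 0, 0]                        # all flip flops reset
history = []
for bit in [1, 1, 1, 1]:                  # shift in the word 1111, LSB first
    reg = siso_shift(reg, bit)
    history.append("".join(map(str, reg)))

print(history)   # ['1000', '1100', '1110', '1111']
```

The successive stored words 1000, 1100, 1110, 1111 match the four clock edges described above.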

Truth Table

Waveforms

Serial IN Parallel OUT

In the "Serial IN Parallel OUT" shift register, the data is passed serially to the flip flops, and the outputs are fetched in parallel. The data is passed into the register bit by bit, and the outputs remain disabled until all the data has been shifted in. Once the data is loaded, the outputs are enabled, and each flip flop presents its stored value.

Below is the block diagram of the 4-bit serial-in parallel-out shift register. The circuit has four D flip-flops and contains clear and clock signals to reset the four flip flops. In SIPO, the input of the second flip flop is the output of the first flip flop, and so on. The same clock signal is applied to each flip flop so that the flip flops stay synchronized with each other. The parallel outputs are used for communication.

Block Diagram

Parallel IN Serial OUT

In the "Parallel IN Serial OUT" register, the data is entered in a parallel way, and the outcome comes
serially. A four-bit "Parallel IN Serial OUT" register is designed below. The input of the flip flop is the
output of the previous Flip Flop. The input and outputs are connected through the combinational circuit.
Through this combinational circuit, the binary input B0, B1, B2, B3 are passed. The shift mode and the load
mode are the two modes in which the "PISO" circuit works.

Load mode

The bits B0, B1, B2, and B3 are passed to the corresponding flip flops when the second, fourth, and sixth "AND" gates are active. These gates are active when the shift/load bar line is set to 0. The binary inputs B0, B1, B2, and B3 are loaded into the respective flip-flops at the falling edge of the clock. Thus, parallel loading occurs.

Shift mode

The second, fourth, and sixth gates are inactive when the shift/load bar line is set to 1, so data cannot be loaded in parallel. At this time, the first, third, and fifth gates are activated, and the data is shifted bit by bit from left to right. In this way, the "Parallel IN Serial OUT" operation occurs.

Block Diagram

Parallel IN Parallel OUT

In "Parallel IN Parallel OUT", the inputs and the outputs come in a parallel way in the register. The inputs
A0, A1, A2, and A3, are directly passed to the data inputs D0, D1, D2, and D3 of the respective flip flop. The
bits of the binary input are loaded into the flip flops when the negative clock edge is applied. A single clock pulse is required for loading all the bits. At the output side, the loaded bits appear.

Block Diagram

Bidirectional Shift Register

The binary number after shifting each bit of the number to the left by one position will be equivalent to
the number produced by multiplying the original number by 2. In the same way, the binary number after
shifting each bit of the number to the right by one position will be equivalent to the number produced by
dividing the original number by 2.

For performing the multiplication and division operation using the shift register, it is required that the data
should be moved in both the direction, i.e., left or right in the register. Such registers are called
the "Bidirectional" shift register.
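The multiply-by-2 and divide-by-2 behaviour can be verified directly with Python's shift operators:

```python
# Left shift multiplies by 2; right shift divides by 2 (dropping the remainder).
n = 0b0110                     # decimal 6
print(n << 1)                  # 12, i.e. 6 * 2 (binary 1100)
print(n >> 1)                  # 3,  i.e. 6 // 2 (binary 0011)
```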

Below is the diagram of 4-bit "bidirectional" shift register where DR is the "serial right shift data
input", DL is the "left shift data input", and M is the "mode select input".

Block Diagram

Operations

1) Shift right operation(M=1)

o The first, third, fifth, and seventh AND gates will be enabled, but the second, fourth, sixth, and
eighth AND gates will be disabled.
o The data present on the data input DR is shifted bit by bit from the fourth flip flop to the first flip
flop when the clock pulse is applied. In this way, the shift right operation occurs.

2) Shift left operation(M=0)

o The second, fourth, sixth, and eighth AND gates will be enabled, but the first, third, fifth, and
seventh AND gates will be disabled.
o The data present on the data input DL is shifted bit by bit from the first flip flop to the fourth flip
flop when the clock pulse is applied. In this way, the shift left operation occurs.

Universal Shift Register

A register where the data is shifted in one direction is known as the "uni-directional" shift register. A
register in which the data is shifted in both the direction is known as "bi-directional" shift register.
A "Universal" shift register is a special type of register that can load the data in a parallel way and shift
that data in both directions, i.e., right and left.

The input M, i.e., the mode control input, is set to 1 to perform the parallel loading operation. If this input is set to 0, the serial shifting operation is performed. If we connect the mode control input to ground, the circuit works as a "bi-directional" register. The diagram of the universal shift register is given below. When the input is passed to the serial input, the register performs the "shift left" operation. When the input is passed to the input D, the register performs the "shift right" operation.

Block Diagram

Counters

A special type of sequential circuit used to count pulses is known as a counter; in other words, a counter is a collection of flip flops to which a clock signal is applied.

The counter is one of the widest applications of the flip flop. Based on the clock pulse, the output of the
counter contains a predefined state. The number of the pulse can be counted using the output of the
counter.

Truth Table

There are the following types of counters:

o Asynchronous Counters
o Synchronous Counters

Asynchronous or ripple counters

The Asynchronous counter is also known as the ripple counter. Below is a diagram of the 2-bit Asynchronous counter in which we use two T flip-flops. Apart from the T flip flop, we can also use the JK flip flop by permanently setting both of its inputs to 1. The external clock is passed to the clock input of the first flip flop, i.e., FF-A, and the output of FF-A is passed to the clock input of the next flip flop, i.e., FF-B.

Block Diagram

Signal Diagram

Operation

1. Condition 1: When both the flip flops are in reset condition.


Operation: The outputs of both flip flops, i.e., QA QB, will be 0.
2. Condition 2: When the first negative clock edge passes.
Operation: The first flip flop will toggle, and the output of this flip flop will change from 0 to 1.

The output of this flip flop will be taken by the clock input of the next flip flop. This output will
be taken as a positive edge clock by the second flip flop. This input will not change the second
flip flop's output state because it is the negative edge triggered flip flop.
So, QA = 1 and QB = 0
3. Condition 3: When the second negative clock edge is applied.
Operation: The first flip flop will toggle again, and the output of this flip flop will change from 1
to 0. This output will be taken as a negative edge clock by the second flip flop. This input will
change the second flip flop's output state because it is the negative edge triggered flip flop.
So, QA = 0 and QB = 1.
4. Condition 4: When the third negative clock edge is applied.
Operation: The first flip flop will toggle again, and the output of this flip flop will change from 0
to 1. This output will be taken as a positive edge clock by the second flip flop. This input will not
change the second flip flop's output state because it is the negative edge triggered flip flop.
So, QA = 1 and QB = 1
5. Condition 5: When the fourth negative clock edge is applied.
Operation: The first flip flop will toggle again, and the output of this flip flop will change from 1
to 0. This output will be taken as a negative edge clock by the second flip flop. This input will
change the output state of the second flip flop.
So, QA = 0 and QB = 0
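The five conditions above can be checked with a small behavioural sketch in Python (not circuit-level code): FF-A toggles on every falling edge of the external clock, and FF-B toggles whenever QA falls from 1 to 0.

```python
qa = qb = 0                        # both flip flops reset (Condition 1)
states = []
for _ in range(4):                 # four negative edges of the external clock
    old_qa = qa
    qa ^= 1                        # FF-A toggles on every external clock edge
    if old_qa == 1 and qa == 0:    # a 1 -> 0 transition on QA clocks FF-B
        qb ^= 1
    states.append((qa, qb))

print(states)   # [(1, 0), (0, 1), (1, 1), (0, 0)]
```

The four state pairs reproduce Conditions 2 to 5 exactly, with the counter returning to 00 after four clock edges.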

Synchronous counters

In the Asynchronous counter, each flip flop's output passes to the clock input of the next flip flop, so the flip flops are connected like a chain. The drawback of this arrangement is that it creates a counting delay: the propagation delays of the individual flip flops accumulate during counting. The synchronous counter is designed to remove this drawback.

In the synchronous counter, the same clock pulse is passed to the clock input of all the flip flops, so all the flip flops are triggered simultaneously. Below is the diagram of a 2-bit synchronous counter in which the inputs of the first flip flop, i.e., FF-A, are set to 1. So, the first flip flop will work as a toggle flip flop. The output of the first flip flop is passed to both the inputs of the next JK flip flop.

Logical Diagram

Signal Diagram

Operation

1. Condition 1: When both the flip flops are in reset condition.


Operation: The outputs of both flip flops, i.e., QA QB, will be 0.
So, QA = 0 and QB = 0

2. Condition 2: When the first negative clock edge passes.
Operation: The first flip flop toggles, and its output changes from 0 to 1. Before this edge, the
output of the first flip flop was 0, so both inputs of the second flip flop were set to 0. In this
way, the state of the second flip flop remains the same.
So, QA = 1 and QB = 0
3. Condition 3: When the second negative clock edge passes.
Operation: The first flip flop toggles again, and its output changes from 1 to 0. Before this edge,
the output of the first flip flop was 1, so both inputs of the second flip flop were set to 1. In this
way, the state of the second flip flop changes from 0 to 1.
So, QA = 0 and QB = 1
4. Condition 4: When the third negative clock edge passes.
Operation: The first flip flop toggles from 0 to 1, but at this instant both inputs of the second
flip flop are set to 0. Hence, its output remains the same as before.
So, QA = 1 and QB = 1
5. Condition 5: When the fourth negative clock edge passes.
Operation: The first flip flop toggles from 1 to 0. At this instant, both inputs of the second flip
flop are set to 1. Hence, its output changes from 1 to 0.
So, QA = 0 and QB = 0

Ripple Counter

Ripple counter is a special type of asynchronous counter in which the clock pulse ripples through the circuit. An n-bit ripple counter is formed by combining n flip-flops. An n-bit ripple counter can count 2^n states, and then the counter resets to its initial value.

Features of the Ripple Counter:

o The flip flops are clocked by different clock pulses.
o It is an example of an asynchronous counter.
o The flip flops are used in toggle mode.
o The external clock pulse is applied to only one flip flop. The output of this flip flop is treated as the
clock pulse for the next flip flop.
o In the counting sequence, the flip flop to which the external clock pulse is applied acts as the LSB.

Based on their circuitry design, the counters are classified into the following types:

Up Counter

The up-counter counts the states in ascending order.

Down Counter

The down counter counts the states in descending order.

Up-Down Counter

The up-down counter is a special type of bi-directional counter which counts the states either in the forward direction or the reverse direction. It is also referred to as a reversible counter.

Binary Ripple Counter

A binary counter built from two flip flops is a MOD-4 counter, which counts 2-bit state values, i.e., 2^2 = 4 values. Flip flops having similar conditions for toggling, such as T and JK, are used to construct the ripple counter. Below is a circuit diagram of a binary ripple counter.

In the circuit design of the binary ripple counter, two JK flip flops are used. A high voltage signal is passed to the inputs of both flip flops, which holds both J and K at 1 so that each flip flop is in toggle mode. The JK flip flops are triggered by the negative edge of the clock pulse.

The outputs Q0 and Q1 are the LSB and MSB bits, respectively. The truth table of JK flip flop helps us to
understand the functioning of the counter.

When high voltage is applied to the inputs of the flip flops, the fourth (toggle) condition of the JK flip flop occurs. With J = K = 1, the state of each flip flop toggles at the negative-going edge of its clock pulse. In simple words, the flip flop toggles when the clock pulse transition takes place from 1 to 0.

The state of the output Q0 changes when a negative clock edge reaches the flip flop. Initially, all the flip flops are set to 0. These flip flops change their states when the applied clock goes from 1 to 0. Since both inputs of the JK flip flop are 1, the flip flop toggles and changes its state from 0 to 1. The process remains the same for every clock pulse.

The output of the first flip flop passes to the second flip flop as a clock pulse. From the timing diagram, it is clear that the state of the second flip flop changes when the output Q0 makes a transition from 1 to 0. The outputs Q0 and Q1 are treated as the LSB and MSB. The counter counts the values 00, 01, 10, 11. After counting these values, the counter resets itself and starts counting again from 00. The counter keeps counting as long as clock pulses are applied to the first flip flop.

Ring Counter

A ring counter is a special type of application of the Serial IN Serial OUT Shift register. The only
difference between the shift register and the ring counter is that the last flip flop outcome is taken as the
output in the shift register. But in the ring counter, this outcome is passed to the first flip flop as an input.
All of the remaining things in the ring counter are the same as the shift register.

In the Ring counter

No. of states in Ring counter = No. of flip-flop used

Below is the block diagram of the 4-bit ring counter. Here, we use 4 D flip flops. The same clock pulse is passed to the clock input of all the flip flops, as in a synchronous counter. The Overriding Input (ORI) is used to design this circuit.

The Overriding input is used as clear and pre-set.

The output is 1 when the pre-set is set to 0. The output is 0 when the clear is set to 0. Both PR and CLR take effect at the value 0 because they are active-low signals.

1. PR = 0, Q = 1
2. CLR = 0, Q = 0

These two fixed behaviours are independent of the input D and the clock pulse (CLK).

Working

The ORI input is passed to the PR input of the first flip flop, i.e., FF-0, and it is also passed to the clear input of the remaining three flip flops, i.e., FF-1, FF-2, and FF-3. The pre-set input is set to 0 for the first flip flop, so the output of the first flip flop is 1, while the outputs of the remaining flip flops are 0. The output of the first flip flop is used to form the ring in the ring counter and is referred to as Pre-set 1.

In the above table, the highlighted 1's are pre-set 1.

The Pre-set 1 is generated when:

o the ORI input is set to low; at that time, the Clk is a don't care.
o the ORI input is set to high and a negative clock edge is applied.

A ring forms when the pre-set 1 is shifted to the next flip-flop at each clock pulse.

So, 4-bit counter, 4 states are possible which are as follows:

1. 1 0 0 0
2. 0 1 0 0
3. 0 0 1 0
4. 0 0 0 1
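The four states can be generated with a short Python sketch (a behavioural model, not circuit-level code): the single preset 1 circulates because the last flip flop's output is fed back into the first.

```python
state = [1, 0, 0, 0]                   # Pre-set 1 in the first flip flop
seq = []
for _ in range(4):
    seq.append("".join(map(str, state)))
    state = [state[-1]] + state[:-1]   # feedback: last output becomes first input

print(seq)    # ['1000', '0100', '0010', '0001']
print(state)  # back to [1, 0, 0, 0] after one full ring
```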

Types of Ring Counter

The ring counter is classified into two parts which are as follows:

Straight Ring Counter

The Straight Ring Counter is also referred to as a One-hot Counter. The outcome of the last flip-flop is passed to the first flip-flop as an input. In the ring counter, the ORI input is passed to the PR input of the first flip flop and to the clear input of the remaining flip flops.

Logic Diagram

Truth Table

Signal Diagram

Twisted Ring Counter

The Twisted Ring Counter is also referred to as a switch-tail ring counter. Like the straight ring counter, the outcome of the last flip-flop is passed to the first flip-flop as an input. In the twisted ring counter, the ORI input is passed to all the flip flops as the clear input.

Logic Diagram

Truth Table

Signal Diagram

Johnson Counter

The Johnson counter is similar to the Ring counter. The only difference between the Johnson counter and the ring counter is that in the ring counter, the outcome of the last flip flop is passed to the first flip flop as an input, whereas in the Johnson counter, the inverted outcome Q' of the last flip flop is passed as the input. The remaining working of the Johnson counter is the same as that of a ring counter. The Johnson counter is also referred to as the Creeping counter.

In Johnson counter

1. No. of states in Johnson counter = 2 × No. of flip-flops used
2. Number of used states = 2n
3. Number of unused states = 2^n - 2n

Below is the diagram of the 4-bit Johnson counter. Like Ring counter, four D flip flops are used in the 4-
bit Johnson counter, and the same clock pulse is passed to all the input of the flip flops.

Truth Table

CP   Q1  Q2  Q3  Q4

0    0   0   0   0
1    1   0   0   0
2    1   1   0   0
3    1   1   1   0
4    1   1   1   1
5    0   1   1   1
6    0   0   1   1
7    0   0   0   1

The above table states that

o The counter produces the output 0000 when there is no clock input passed(0).
o The counter produces the output 1000 when the 1st clock pulse is passed to the flip flops.
o The counter produces the output 1100 when the 2nd clock pulse is passed to the flip flops.
o The counter produces the output 1110 when the 3rd clock pulse is passed to the flip flops.
o The counter produces the output 1111 when the 4th clock pulse is passed to the flip flops.
o The counter produces the output 0111 when the 5th clock pulse is passed to the flip flops.
o The counter produces the output 0011 when the 6th clock pulse is passed to the flip flops.
o The counter produces the output 0001 when the 7th clock pulse is passed to the flip flops.
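The sequence above can be generated with a short Python sketch (a behavioural model, not circuit-level code): the complement Q' of the last stage is fed back into the first, giving 2n = 8 distinct states for n = 4 flip flops.

```python
state = [0, 0, 0, 0]                       # Q1 Q2 Q3 Q4, all reset
seq = []
for _ in range(8):
    seq.append("".join(map(str, state)))
    state = [1 - state[-1]] + state[:-1]   # feed back Q' of the last flip flop

print(seq)
# ['0000', '1000', '1100', '1110', '1111', '0111', '0011', '0001']
```

After the eighth pulse the register returns to 0000 and the cycle repeats.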

Timing diagram

Advantages

o The number of flip flops in the Johnson counter is equal to the number of flip flops in the ring
counter, and the Johnson counter counts twice the number of states the ring counter can count.
o The Johnson counter can also be designed by using D or JK flip flop.
o The data is counted in a continuous loop in the Johnson ring counter.

o The circuit of the Johnson counter is self-decoding.

Disadvantages

o The Johnson counter is not able to count the states in a binary sequence.
o In the Johnson counter, the unutilized states are greater than the states being utilized.
o The number of flip flops is equal to one half of the number of timing signals.
o It is possible to design the Johnson counter for any number of timing sequences.

Data Representation in Computer Organization

In computer organization, data refers to the symbols that are used to represent events, people, things and
ideas.

Data Representation

The data can be represented in the following ways:

Data

Data can be anything like a number, a name, notes in a musical composition, or the color in a photograph. Data representation refers to the form in which data is stored, processed, and transmitted. In order to store data in digital format, we can use devices like computers, smartphones, and iPads. Electronic circuitry is used to handle the stored data.

Digitization

Digitization is a process in which we convert information like photos, music, numbers, and text into digital data. Electronic devices are used to manipulate these types of data. The digital revolution has evolved through four phases, starting with big, expensive standalone computers and progressing to today's digital world, where small and inexpensive devices are spreading everywhere.

Binary Digits

The binary digits, or bits, represented by 0 and 1, are used to show digital data. The binary digit is the smallest unit of information in a computer. The main use of the binary digit is that it can store information or data in the form of 0s and 1s. A bit holds a value that can be on/off or true/false: on or true is represented by 1, and off or false is represented by 0. A digital file is simply a collection of data held on a storage medium like a flash drive, CD, hard disk, or DVD.

Representing Numbers

The number can be represented in the following way:

Numeric Data

Numeric data is used to contain numbers, which helps us to perform arithmetic operations. Digital devices use the binary number system to represent numeric data. The binary number system has only two digits, 0 and 1; there can't be any other digit like 2 in the system. If we want to represent the number 2 in binary, we write it as 10.
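Python's built-in formatting makes the point concrete:

```python
# Binary uses only the digits 0 and 1, so decimal 2 is written as 10.
for n in [0, 1, 2, 5, 13]:
    print(n, "->", format(n, "b"))
# 2 -> 10, 5 -> 101, 13 -> 1101
```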

Representing Text

The text can be represented in the following ways:

Character Data

Character data can be formed with the help of symbols, letters, and numerals, but it can't be used in calculations. Using character data, we can record things such as our address, hair colour, and name. Character data normally takes the form of text, with which we can describe many things, like our father's name, mother's name, etc.

Digital Devices

Several types of codes are employed by digital devices to represent character data, including Unicode, ASCII, and other variants. The full form of ASCII is American Standard Code for Information Interchange. It is a character encoding standard used for electronic communication. ASCII code can represent text in telecommunication equipment, computers, and many other devices. The ASCII code needs 7 bits for each character, and every unique 7-bit combination represents a unique character. For the uppercase letter A, the ASCII code is represented as 1000001.

Extended ASCII

Extended ASCII can be described as a superset of ASCII. The ASCII set uses 7 bits to represent every
character, but the Extended ASCII uses 8 bits to represent each character. The extended ASCII contains
7 bits of ASCII characters and 1 bit for additional characters. Using the 7 bits, the ASCII code provides
code for 128 unique symbols or characters, but Extended ASCII provides code for 256 unique symbols or
characters. For the uppercase letter A, the Extended ASCII code is represented as 01000001.

Unicode

Unicode is also known as the universal character encoding standard. Unicode provides a way through
which an individual character can be represented in the form of web pages, text files, and other documents.
Using ASCII, we can only represent the basic English characters, but with the help of Unicode, we can
represent characters from all languages around the World.

ASCII code provides code for 128 characters, while Unicode provides code for roughly 65,000 characters with the help of 16 bits. In order to represent each character, ASCII code only uses 1 byte, while Unicode
supports up to 4 bytes. The Unicode encoding has several different types, but UTF-8 and UTF-16 are the
most commonly used. UTF-8 is a type of variable length coding scheme. It has also become the standard
character encoding, which is used on the web. Many software programs also set UTF-8 as their default
encoding.
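These encodings are easy to inspect in Python: the code of 'A' is 65 (1000001 in 7 bits), and UTF-8 spends one byte on an ASCII character but several bytes on a character outside ASCII.

```python
print(ord("A"))                     # 65
print(format(ord("A"), "07b"))      # 1000001, the 7-bit ASCII code
print(len("A".encode("utf-8")))     # 1 byte for an ASCII character
print(len("€".encode("utf-8")))     # 3 bytes for the euro sign
```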

ASCII Code

ASCII code can be used for numerals like phone numbers and social security numbers. ASCII text
contains plain and unformatted text. This type of file will be saved in a text file format, which contains a
name ending with .txt. These files are labelled differently on different systems: the Windows operating system labels these files as "Text Document", and Apple devices label them as "Plain Text". There is no formatting in ASCII text files. If we want to make documents with styles and formats, we have to embed formatting codes in the text.

Document File Formats

Microsoft Word is used to create formatted text and documents. It uses the DOCX format to do this. If we create a new document using Microsoft Word 2007 or a later version, it always uses DOCX as the default file format. Apple Pages uses the PAGES format to produce documents. Compared to Microsoft Word, it is simpler to create and edit documents using the Pages format. Adobe Acrobat uses the PDF format to create documents. Files saved in the PDF format cannot easily be modified, but we can easily print and share them. If we save a document in PDF format, we cannot change that file into a Microsoft Office file or any other format without specialised software.

HTML is the hypertext markup language. It is used for document designing, which will be displayed in a
web browser. It uses HTML format to design the documents. In HTML, hypertext is a type of text in any
document containing links through which we can go to other places in the document or in other documents
also. The markup language can be called as a computer language. In order to define the element within a
document, this language uses tags.

Representing Bits and Bytes

The bits and bytes can be represented in the following ways:

Bits and Bytes

In the field of digital communication and computers, bits are the most basic unit of information, or the smallest unit of data. "Bit" is short for binary digit, which means it can contain only one value, either 0 or 1. So bits can be represented by 0 or 1, - or +, false or true, off or on, or no or yes. Many technologies are based on bits and bytes, which are extensively useful for describing network access speed and storage capacity. The bit is usually abbreviated as a lowercase b.

In order to execute instructions and store data, bits are grouped into larger units called bytes. A byte is a group of eight bits, and it is usually abbreviated as an uppercase B. Four bytes equal 32 bits (4*8 = 32), and 10 bytes equal 80 bits (10*8 = 80).
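The byte arithmetic above is just multiplication by 8:

```python
def bytes_to_bits(n_bytes):
    return n_bytes * 8          # one byte is eight bits

print(bytes_to_bits(4))   # 32
print(bytes_to_bits(10))  # 80
```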

Uses

Bits are used for data rates, such as movie download speed and internet connection speed, while bytes are used for storage capacity and file sizes. When reading about digital devices, we frequently encounter references like 90 kilobits per second, 1.44 megabytes, 2.8 gigahertz, and 2 terabytes. To quantify digital data, we have options such as Kilo, Mega, Giga, Tera and many more similar terms, which are described as follows:

104 KB: KB is also called a kilobyte or KByte. It is mostly used while referring to the size of small computer files.

56 Kbps: Kbps means kilobits per second (the kilobit is also written Kbit or Kb). 56 Kbps is used to show slow data rates. If our internet speed is 56 Kbps, we face difficulty while connecting more than one device, buffering while streaming videos, slow downloading, and many other internet connectivity problems.

50 Mbps: Mbps means megabits per second (the megabit is also written Mbit or Mb). 50 Mbps is used to show faster data rates. If our internet speed is 50 Mbps, we will experience online activity
without any buffering, such as online gaming, downloading music, streaming HD, web browsing, etc. 50
Mbps or more than that will be known as fast internet speed. With the help of fast speed, we can easily
handle more than one online activity for more than one user at a time without major interruptions in
services.

3.2 MB: MB is also called a megabyte or MByte. It is used when referring to the size of files that contain videos and photos.

100 Gbit: Gbit is also called a gigabit or Gb. It is used to show really fast network speeds.

16 GB: GB is also called a gigabyte or GByte. It is used to show storage capacity.


Data Compression

Digital data is compressed to reduce transmission times and file size. Data compression is the process of reducing the number of bits used to represent data, typically using encoding techniques. Compressed data helps us save storage capacity, reduce costs for storage hardware, and increase file transfer speed.

Compression programs use algorithms and functions to find a way to reduce the data size.
Compression is often referred to as "zipping". The process of reconstructing files is known
as unzipping or extracting. Compressed files typically end in .gz, .tar.gz, .pkg, or .zip.
Compression can be divided into two techniques: Lossless compression and Lossy compression.

Lossless Compression

As the name implies, lossless compression is the process of compressing data without any loss of
information. If we compress data with lossless compression, we can recover exactly the
original data from the compressed data. That means all the information can be completely
restored by lossless compression.

Many applications use lossless compression. For example, lossless compression is used in
the ZIP file format and in the GNU tool gzip. Lossless data compression can also be used as a
component within lossy data compression technologies. It is generally used for discrete data like
word processing files, database records, some images, and video information.

When we compress the original data using lossless compression, we are able to restore all of
the original data.

Lossy Compression

Lossy compression is the process of compressing the data in such a way that the original data cannot
be recovered 100%. This compression is able to provide a high degree of compression, resulting in
smaller compressed files. But in this process, some video frames, sound
waves and original pixels are removed forever.

The greater the compression, the smaller the size of the files. Business data and text, which need a
full restoration, never use lossy compression. Nobody likes to lose information, but there are a lot
of files that are very large, and we don't have enough space to maintain all of the original data, or many
times we don't require all the original data in the first place. For example, videos, photos and audio
recordings capture the beauty of our world. In this case, we use lossy compression.

When we compress the original data using lossy compression, we are only able to restore
some amount of the data; we cannot restore 100% of the original data.
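The lossless round trip described above can be sketched in Python using the standard zlib module (the sample data here is invented for illustration):

```python
import zlib

# Lossless compression: every bit of the original data is recovered.
original = b"AAAAABBBBBCCCCC" * 100      # repetitive data compresses very well
compressed = zlib.compress(original)

restored = zlib.decompress(compressed)
assert restored == original              # 100% of the data is restored
assert len(compressed) < len(original)   # and the compressed form is smaller
```

A lossy codec such as JPEG or MP3 could not pass this exact round-trip check; the loss of some detail is the price paid for its smaller files.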

Representation of Data/Information

Computer does not understand human language. Any data, viz., letters, symbols, pictures, audio, videos,
etc., fed to computer should be converted to machine language first. Computers represent data in the
following three forms −

Number System

We are introduced to the concept of numbers from a very early age. To a computer, everything is a number,
i.e., alphabets, pictures, sounds, etc., are numbers. Number systems are categorized into four types −

 Binary number system consists of only two digits, 0 and 1.
 Octal number system represents values using 8 digits.
 Decimal number system represents values using 10 digits.
 Hexadecimal number system represents values using 16 digits.

Number System

System Base Digits

Binary 2 01

Octal 8 01234567

Decimal 10 0123456789

Hexadecimal 16 0123456789ABCDEF

Bits and Bytes

Bits − A bit is the smallest possible unit of data that a computer can recognize or use. A computer usually
uses bits in groups.

Bytes − A group of eight bits is called a byte. Half a byte is called a nibble.

The following table shows conversion of Bits and Bytes −

Value Equivalent

1 Byte 8 Bits

1024 Bytes 1 Kilobyte

1024 Kilobytes 1 Megabyte

1024 Megabytes 1 Gigabyte

1024 Gigabytes 1 Terabyte

1024 Terabytes 1 Petabyte

1024 Petabytes 1 Exabyte

1024 Exabytes 1 Zettabyte

1024 Zettabytes 1 Yottabyte

1024 Yottabytes 1 Brontobyte

1024 Brontobytes 1 Geopbyte
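The table can be applied mechanically: each step up is a division by 1024. A small Python helper (the function name is our own, and only the first few units are included) illustrates this:

```python
# Each unit in the table is 1024 of the previous one (powers of 2).
UNITS = ["Bytes", "KB", "MB", "GB", "TB", "PB"]

def human_readable(n_bytes):
    """Convert a raw byte count to the largest convenient unit."""
    value, unit = float(n_bytes), UNITS[0]
    for next_unit in UNITS[1:]:
        if value < 1024:
            break
        value /= 1024          # one step up the table
        unit = next_unit
    return f"{value:g} {unit}"

print(human_readable(1024))          # 1 KB
print(human_readable(3 * 1024**2))   # 3 MB
```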

Text Code

Text code is a format commonly used to represent alphabets, punctuation marks and other symbols. The four
most popular text code systems are −

 EBCDIC
 ASCII
 Extended ASCII
 Unicode

EBCDIC

Extended Binary Coded Decimal Interchange Code is an 8-bit code that defines 256 symbols.

ASCII

American Standard Code for Information Interchange is a 7-bit code that specifies character values
from 0 to 127.

ASCII Tabular column

ASCII Code Decimal Value Character

0000 0000 0 Null prompt

0000 0001 1 Start of heading

0000 0010 2 Start of text

0000 0011 3 End of text

0000 0100 4 End of transmit

0000 0101 5 Enquiry

0000 0110 6 Acknowledge

0000 0111 7 Audible bell

0000 1000 8 Backspace

0000 1001 9 Horizontal tab

0000 1010 10 Line Feed

Extended ASCII

Extended American Standard Code for Information Interchange is an 8-bit code that specifies character
values from 128 to 255.


Unicode

Unicode Worldwide Character Standard uses 8 to 32 bits to represent letters, numbers and symbols.


Number System and Base Conversions



Electronic and digital systems may use a variety of different number systems (e.g. Decimal,
Hexadecimal, Octal, Binary), or even Duodecimal, less well known but better named Uncial. All the
bases other than Decimal result from computer usage. (Uncial is named from the Latin word for 1/12,
"uncia", as the base-twelve analogue of Decimal, from the Latin word for 1/10, "decima".)

A number N in base or radix b can be written as:
(N)b = dn-1 dn-2 -- -- -- -- d1 d0 . d-1 d-2 -- -- -- -- d-m
In the above, dn-1 to d0 is the integer part, then follows a radix point, and then d-1 to d-m is the fractional
part.
dn-1 = Most significant bit (MSB)
d-m = Least significant bit (LSB)

1. Decimal to Binary
(10.25)10
Integer part: (10)10 = (1010)2
Fractional part: 0.25 x 2 = 0.50 (generates 0), 0.50 x 2 = 1.00 (generates 1)
Note: Keep multiplying the fractional part by 2 until a fractional part of 0.00 is obtained.
(0.25)10 = (0.01)2
Answer: (10.25)10 = (1010.01)2

2. Binary to Decimal

(1010.01)2
1x2^3 + 0x2^2 + 1x2^1 + 0x2^0 + 0x2^-1 + 1x2^-2 = 8 + 0 + 2 + 0 + 0 + 0.25 = 10.25
(1010.01)2 = (10.25)10

3. Decimal to Octal

(10.25)10
(10)10 = (12)8
Fractional part:
0.25 x 8 = 2.00

Note: Keep multiplying the fractional part by 8 until a fractional part of .00 is obtained.
(.25)10 = (.2)8
Answer: (10.25)10 = (12.2)8

4. Octal to Decimal

(12.2)8
1x8^1 + 2x8^0 + 2x8^-1 = 8 + 2 + 0.25 = 10.25
(12.2)8 = (10.25)10

5. Hexadecimal to Binary
To convert from Hexadecimal to Binary, write the 4-bit binary equivalent of each hexadecimal digit.

(3A)16 = (00111010)2
6. Binary to Hexadecimal
To convert from Binary to Hexadecimal, start grouping the bits in groups of 4 from the right-end and
write the equivalent hexadecimal for the 4-bit binary. Add extra 0’s on the left to adjust the groups.
Example:
1111011011
0011 1101 1011
(001111011011)2 = (3DB)16
7. Binary to Octal
To convert from binary to octal, start grouping the bits in groups of 3 from the right end and write the
equivalent octal for the 3-bit
binary. Add 0’s on the left to adjust the groups.
Example:
111101101

111 101 101
(111101101)2 = (755)8
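The repeated-division procedure behind these conversions can be checked with a short Python sketch (to_base is an illustrative helper of our own; Python's built-in int(string, base) performs the reverse direction):

```python
def to_base(n, base, digits="0123456789ABCDEF"):
    """Convert a non-negative integer to a string in the given base (2, 8 or 16)."""
    if n == 0:
        return "0"
    out = ""
    while n:
        out = digits[n % base] + out   # remainders are read bottom-up
        n //= base
    return out

# The worked examples above, checked in code:
assert to_base(10, 2) == "1010"        # (10)10  = (1010)2
assert to_base(10, 8) == "12"          # (10)10  = (12)8
assert int("111101101", 2) == 0o755    # (111101101)2 = (755)8
assert int("3DB", 16) == int("001111011011", 2)
```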

1’s and 2’s complement of a Binary Number

Given a Binary Number as a string, print its 1's and 2's complements.

1's complement of a binary number is another binary number obtained by toggling all bits in it, i.e.,
transforming each 0 bit to 1 and each 1 bit to 0. In the 1's complement format, the positive numbers remain
unchanged. The negative numbers are obtained by taking the 1's complement of their positive counterparts.
For example, +9 will be represented as 00001001 in eight-bit notation and -9 will be represented as
11110110, which is the 1's complement of 00001001.
Examples:

1's complement of "0111" is "1000"
1's complement of "1100" is "0011"

2's complement of a binary number is 1 added to the 1's complement of the binary number. In the 2's
complement representation of binary numbers, the MSB represents the sign, with a '0' used for the plus sign
and a '1' used for the minus sign. The remaining bits are used for representing magnitude. Positive
magnitudes are represented in the same way as in the case of sign-bit or 1's complement
representation. Negative magnitudes are represented by the 2's complement of their positive counterparts.

Examples:
2's complement of "0111" is "1001"
2's complement of "1100" is "0100"
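Both complements can be computed directly on the bit string; the following Python helpers (names are our own) mirror the examples above:

```python
def ones_complement(bits):
    """Toggle every bit in the string."""
    return "".join("1" if b == "0" else "0" for b in bits)

def twos_complement(bits):
    """1's complement plus one, kept at the same width."""
    width = len(bits)
    value = int(ones_complement(bits), 2) + 1
    return format(value % (1 << width), f"0{width}b")

assert ones_complement("0111") == "1000"
assert twos_complement("0111") == "1001"
assert twos_complement("1100") == "0100"
```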

Fixed Point Representation



Real numbers have a fractional component. This article explains the real number representation method
using fixed points. In digital signal processing (DSP) and gaming applications, where performance is
usually more important than precision, fixed point data encoding is extensively used.
The Binary Point: Fractional values such as 26.5 are represented using the binary point concept. The
decimal point in a decimal numeral system and a binary point are comparable. It serves as a divider
between a number’s integer and fractional parts.
For instance, the weight of the coefficient 6 in the number 26.5 is 10^0, or 1. The weight of the coefficient
5 is 10^-1 (5/10 = 1/2 = 0.5).
2 x 10^1 + 6 x 10^0 + 5 x 10^-1 = 26.5

2 * 10 + 6 * 1 + 0.5 = 26.5
A "binary point" can be created using our binary representation and the same decimal point concept. A
binary point, like in the decimal system, represents the coefficient of 2^0 = 1. The weight of
each digit (or bit) to the left of the binary point is 2^0, 2^1, 2^2, and so forth. The digits (or bits) to the
right of the binary point have weights of 2^-1, 2^-2, 2^-3, and so on.
For illustration, the number (11010.1)2 represents the value:
(11010.1)2
= 1 x 2^4 + 1 x 2^3 + 0 x 2^2 + 1 x 2^1 + 0 x 2^0 + 1 x 2^-1
= 16 + 8 + 2 + 0.5
= 26.5

Shifting Pattern:

When an integer is shifted right by one bit in a binary system, it is comparable to being divided by two.
Since integers have no fractional portion, we cannot represent a digit to the right of a binary point, so
this shifting operation is an integer division.
 A number is always divided by two when the bit pattern of the number is shifted to the right
by one bit.
 A number is multiplied by two when it is moved left one bit.

How to write Fixed Point Number?

Understanding fixed point number representation requires knowledge of the shifting process described
above. Simply by implicitly establishing the binary point to be at a specific place of a numeral, we can
define a fixed point number type to represent a real number in computers (or any hardware, in general).
Then we will just use this implicit standard to express numbers.
Two arguments are all that are required to theoretically create a fixed point type:

1. Width of the number representation.
2. Binary point position within the number.
We use the notation fixed<w,b>, where "w" stands for the total number of bits used (the width of the
number) and "b" stands for the position of the binary point counting from the least significant bit
(counting from 0).

Unsigned representation:

For example, fixed<8,3> signifies an 8-bit fixed-point number, the rightmost 3 bits of which are fractional.
Representation of a real number:
(00010.110)2
= 1 x 2^1 + 1 x 2^-1 + 1 x 2^-2
= 2 + 0.5 + 0.25
= 2.75

Signed representation:

Negative integers in binary number systems must be encoded using signed number representations. In
mathematics, negative numbers are denoted by a minus sign (“-“) before them. In contrast, numbers are
exclusively represented as bit sequences in computer hardware, with no additional symbols.
Signed binary numbers (+ve or -ve) can be represented in one of three ways:
1. Sign-Magnitude form
2. 1’s complement form
3. 2’s complement form
Sign-Magnitude form: In sign-magnitude form, the number’s sign is represented by the MSB (Most
Significant Bit also called as Leftmost Bit), while its magnitude is shown by the remaining bits (In the
case of 8-bit representation Leftmost bit is the sign bit and remaining bits are magnitude bit).
(55)10 = (00110111)2
(−55)10 = (10110111)2
1's complement form: By complementing each bit in a signed binary number, the 1's complement of the
number can be derived. Taking the 1's complement of a positive number gives the corresponding negative
number. Similarly, taking the 1's complement of a negative number gives back the positive number.
(55)10 = (00110111)2

UNIT-IV 10
8
(−55)10 = (11001000)2
2's complement form: By adding one to a signed binary number's 1's complement, the number can be
converted to its 2's complement. Therefore, a positive number's 2's complement yields the corresponding
negative number, and the 2's complement of a negative number yields the positive number.
(55)10 = (00110111)2
(−55)10 = 11001000 + 1 = (11001001)2 (1's complement + 1 = 2's complement)

Fixed Point representation of negative number:

Consider the number -2.5 with fixed<4,1> (width w = 4 bits, binary point at position 1). First, represent
2.5 in binary, then find its 2's complement, and you will get the binary fixed-point representation of -2.5.
(2.5)10 = (0101)2
(−2.5)10: 1010 + 1 (1's complement + 1 = 2's complement)
(−2.5)10 = (1011)2
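The whole fixed<w,b> encode/decode cycle, including the 2's complement step for negative values, can be sketched in Python (the function names are illustrative):

```python
def encode_fixed(x, w, b):
    """Encode real x as a w-bit two's-complement fixed<w,b> bit string."""
    raw = round(x * (1 << b))            # shift the binary point b places left
    return format(raw & ((1 << w) - 1), f"0{w}b")

def decode_fixed(bits, b):
    """Decode a two's-complement fixed<w,b> bit string back to a real number."""
    w = len(bits)
    raw = int(bits, 2)
    if raw >= 1 << (w - 1):              # sign bit set: value is negative
        raw -= 1 << w
    return raw / (1 << b)

assert encode_fixed(2.5, 4, 1) == "0101"    # matches the example above
assert encode_fixed(-2.5, 4, 1) == "1011"
assert decode_fixed("1011", 1) == -2.5
```

Notice that the hardware never needs anything beyond integer arithmetic, which is exactly the advantage listed further below.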

1’s complement representation range:

One bit is essentially used as a sign bit for 1's complement numbers, leaving only 7 bits to
store the actual number in an 8-bit representation.
Therefore, the biggest number is just 127 (anything greater would require 8 bits, making it appear to be a
negative number).
The smallest number is correspondingly -127.
1's complement:
127 = 01111111 : 1's complement is 10000000
-128 would need the pattern 10000000, whose 1's complement is 01111111
We can see that storing -128 in 1's complement is impossible (its 1's complement has the top bit unset, so
it looks like a positive number).
The 1's complement range is -127 to +127.

2’s complement representation range:

Additionally, one bit in 2's complement numbers is effectively used as a sign bit, leaving only 7
bits to store the actual number in an 8-bit integer.
2's complement:
127 = 01111111 : 2's complement is 10000001 (the pattern for -127)
-128 = 10000000 : its 2's complement is 10000000 itself
We can see that we can store -128 in 2's complement.
The 2's complement range is -128 to +127.

Advantages of Fixed Point Representation:

 Integer representation and fixed point numbers are indeed close relatives.
 Because of this, fixed point numbers can also be calculated using all the arithmetic operations
a computer can perform on integers.
 They are just as simple and effective as computer integer arithmetic.
 To conduct real number arithmetic using fixed point format, we can reuse all the hardware
designed for integer arithmetic.

Disadvantages of Fixed Point Representation:

 Loss in range and precision when compared to representations of floating point numbers.
Floating Point Representation – Basics

The objective of this section is to provide a brief introduction to the floating point format.
The following description explains terminology and primary details of IEEE 754 binary floating-point
representation. The discussion confines to single and double precision formats.
Usually, a real number in binary will be represented in the following format,

Im Im-1 … I2 I1 I0 . F1 F2 … Fn-1 Fn
Where each Im and Fn will be either 0 or 1 of the integer and fraction parts respectively.
A finite number can also be represented by four integer components: a sign (s), a base (b), a significand
(m), and an exponent (e). Then the numerical value of the number is evaluated as

(-1)^s x m x b^e, where m < |b|


Depending on base and the number of bits used to encode various components, the IEEE 754 standard
defines five basic formats. Among the five formats, the binary32 and the binary64 formats are single
precision and double precision formats respectively in which the base is 2.

Table – 1 Precision Representation

Precision Base Sign Exponent Significand

Single precision 2 1 8 23+1

Double precision 2 1 11 52+1

Single Precision Format:


As mentioned in Table 1, the single precision format has 23 bits for the significand (the +1 represents the
implied bit, details below), 8 bits for the exponent and 1 bit for the sign.
For example, the rational number 9÷2 can be converted to single precision float format as following,

9(10) ÷ 2(10) = 4.5(10) = 100.1(2)


The result is said to be normalized if it is represented with a leading 1 bit, i.e. 1.001(2) x 2^2. (Similarly,
when the number 0.000000001101(2) x 2^3 is normalized, it appears as 1.101(2) x 2^-6.) Omitting this
implied 1 on the left extreme gives us the mantissa of the float number. A normalized number provides
more accuracy than a corresponding de-normalized number. The implied most significant bit makes the
significand effectively more accurate (23 + 1 = 24 bits); this is known as the hidden-bit representation.
Floating point numbers are to be represented in normalized form.
The subnormal numbers fall into the category of de-normalized numbers. The subnormal representation
slightly reduces the exponent range and can’t be normalized since that would result in an exponent which
doesn’t fit in the field. Subnormal numbers are less accurate, i.e. they have less room for nonzero bits in
the fraction field, than normalized numbers. Indeed, the accuracy drops as the size of the subnormal
number decreases. However, the subnormal representation is useful in filling the gaps of the floating point
scale near zero.
In other words, the above result can be written as (-1)^0 x 1.001(2) x 2^2, which yields the integer
components as s = 0, b = 2, significand (m) = 1.001, mantissa = 001 and e = 2. The corresponding single
precision floating point number can be represented in binary as shown below,

Where the exponent field is supposed to be 2, yet it is encoded as 129 (127 + 2), called the biased
exponent. The exponent field is kept in plain binary format; negative exponents could instead be
represented with an encoding like sign magnitude, 1's complement or 2's complement, but the biased
exponent is used for the representation of negative exponents. The biased exponent has an advantage over
other negative representations in performing bitwise comparison of two floating point numbers for equality.

A bias of (2^(n-1) − 1), where n is the number of bits used in the exponent, is added to the exponent (e)
to get the biased exponent (E). So, the biased exponent (E) of a single precision number can be obtained as

E = e + 127
The range of the exponent in single precision format is -126 to +127. The remaining values (all 0's and
all 1's in the exponent field) are used for special symbols.
Note: When we unpack a floating point number, the exponent obtained is the biased exponent. Subtracting
127 from the biased exponent, we can extract the unbiased exponent.
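Unpacking a single precision number into its sign, biased exponent and mantissa fields can be done with Python's standard struct module; this sketch reuses the 4.5 = 1.001(2) x 2^2 worked example:

```python
import struct

def float_fields(x):
    """Split an IEEE 754 binary32 value into (sign, biased exponent, mantissa)."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))  # raw 32-bit pattern
    sign = bits >> 31
    biased_exp = (bits >> 23) & 0xFF                     # 8-bit exponent field
    mantissa = bits & 0x7FFFFF                           # 23-bit mantissa field
    return sign, biased_exp, mantissa

sign, e, m = float_fields(4.5)           # 4.5 = 1.001(2) x 2^2
assert (sign, e) == (0, 129)             # biased exponent = 127 + 2 = 129
assert m == 0b00100000000000000000000    # mantissa = 001 followed by zeros
```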

Double Precision Format:


As mentioned in Table – 1, the double precision format has 52 bits for the significand (the +1 represents
the implied bit), 11 bits for the exponent and 1 bit for the sign. All other definitions are the same as for
the single precision format, except for the size of the various components.

Precision:
The smallest change that can be represented in floating point representation is called the precision. The
fractional part of a single precision normalized number has exactly 23 bits of resolution (24 bits with the
implied bit). This corresponds to log10(2^23) = 6.924 ≈ 7 decimal digits of accuracy. Similarly, in the
case of double precision numbers the precision is log10(2^52) = 15.654 ≈ 16 decimal digits.

Accuracy:
Accuracy in floating point representation is governed by the number of significand bits, whereas range is
limited by the exponent. Not all real numbers can be exactly represented in floating point format. For any
number x which is not a floating point number, there are two options for a floating point approximation:
the closest floating point number less than x, written x−, and the closest floating point number greater
than x, written x+. A rounding operation is performed on the number of significand bits in the mantissa
field based on the selected mode. The round down mode sets x to x−, the round up mode sets x to x+, the
round towards zero mode sets x to either x− or x+, whichever is between zero and x. The round to nearest
mode sets x to x− or x+, whichever is nearest to x. Usually, round to nearest is the most used mode. The
closeness of the floating point representation to the actual value is called the accuracy.

Special Bit Patterns:


The standard defines a few special floating point bit patterns. Zero can't have a most significant 1 bit,
hence can't be normalized. The hidden bit representation therefore requires a special technique for storing
zero. We will have two different bit patterns, +0 and -0, for the same numerical value zero. For single
precision floating point representation, these patterns are given below,
0 00000000 00000000000000000000000 = +0
1 00000000 00000000000000000000000 = -0
Similarly, the standard represents two different bit patterns for +INF and -INF. The same are given
below,
0 11111111 00000000000000000000000 = +INF
1 11111111 00000000000000000000000 = -INF

All of these special numbers, as well as the other special numbers below, are represented through the use
of a special bit pattern in the exponent field. This slightly reduces the exponent range, but this is quite
acceptable since the range is so large.
An attempt to compute expressions like 0 x INF, 0 ÷ INF, etc. makes no mathematical sense. The standard
calls the result of such expressions Not a Number (NaN). Any subsequent expression with NaN yields
NaN. The representation of NaN has a non-zero significand and all 1s in the exponent field. These are
shown below for single precision format (x is a don't care bit),
x 11111111 m0000000000000000000000
Where m can be 0 or 1. This gives us two different representations of NaN.
0 11111111 00000000000000000000001 _____________ Signaling NaN (SNaN)
0 11111111 10000000000000000000001 _____________Quiet NaN (QNaN)
Usually QNaN and SNaN are used for error handling. QNaN does not raise any exceptions as it
propagates through most operations, whereas SNaN, when consumed by most operations, will raise an
invalid exception.

Error Detection Codes: Parity Bit Method

Error Detection Codes: The binary information is transferred from one location to another through some
communication medium. External noise can change bits from 1 to 0 or 0 to 1. This change in values
changes the meaning of the actual message and is called an error. For efficient data transfer, there should
be error detection and correction codes. An error detection code is a binary code that detects digital errors
during transmission. To detect errors in the received message, we add some extra bits to the actual data.
Without the addition of redundant bits, it is not possible to detect errors in the received message. There
are 3 ways in which we can detect errors in the received message:
1. Parity Bit
2. Checksum
3. Cyclic Redundancy Check (CRC)
We'll be understanding the parity bit method in depth here:
Parity Bit Method: A parity bit is an extra bit included in a binary message to make the total number of 1's
either odd or even. The parity word denotes the number of 1's in a binary string. There are two parity
systems – even and odd parity checks.
1. Even Parity Check: The total number of 1's in the given data bits should be even. So if the total number
of 1's in the data bits is odd, then a single 1 will be appended to make the total number of 1's even; else 0
will be appended (if the total number of 1's is already even). Hence, if any error occurs, the parity check
circuit will detect it at the receiver's end. Let's understand this with an example, see the below diagram:

Even Parity Check (fig – 1.1)

In the above image, as we can see, the data bits are '1011000' and since this is an even parity check,
1 will be appended as the parity bit (highlighted in red) to make the total count of 1's
even in the data sent. So here, our parity bit is 1. If the total count of 1's in the given data bits were
already even, then 0 would have been appended.

2. Odd Parity Check: In an odd parity system, if the total number of 1's in the given binary string (or data
bits) is even, then 1 is appended to make the total count of 1's odd; else 0 is appended. The receiver
knows whether the sender is an odd parity generator or an even parity generator. Suppose the sender is an
odd parity generator; then there must be an odd number of 1's in the received binary string. If an error
occurs in a single bit, that is, either a bit is changed from 1 to 0 or from 0 to 1, the received binary string
will have an even number of 1's, which will indicate an error.
Take reference from fig (1.1): rather than appending 1 as the parity bit, append 0 because the total number
of 1's is already odd.
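Generating the parity bit for either system is a one-line count of 1's; this Python sketch (the function name is our own) reproduces the fig 1.1 example and the table below:

```python
def parity_bit(data_bits, even=True):
    """Return the parity bit that makes the total count of 1's even (or odd)."""
    ones = data_bits.count("1")
    if even:
        return "1" if ones % 2 else "0"   # append 1 only when the count is odd
    return "0" if ones % 2 else "1"       # odd parity: append 1 when count is even

# fig 1.1: data '1011000' has three 1's, so even parity appends a 1
assert parity_bit("1011000", even=True) == "1"
# odd parity for the same data appends 0, since three is already odd
assert parity_bit("1011000", even=False) == "0"
assert parity_bit("000", even=False) == "1"   # table row 000 -> P(Odd) = 1
```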

Some more examples:
Message (XYZ) P(Odd) P(Even)

000 1 0

001 0 1

010 0 1

011 1 0

100 0 1

101 1 0

110 1 0

111 0 1

Figure – Error Detection with Odd Parity Bit
Limitations:
1. The limitation of this method is that only an error in a single bit can be identified, and we also cannot
determine the exact location of the error in the data bits.
2. If the number of 1's in an even parity check increases or decreases (data changed) but remains even,
then the method won't be able to detect the error, as the count of 1's is still even; the same goes for an odd
parity check.
See the below image for more details :

Can’t detect error (Even Parity Check)

In the above example, the data bits have been changed, but as we can see, the total count of 1's remains
even, so the error can't be detected even though the message's meaning has changed. You can visualize
the same for an odd parity check: the number of 1's will remain odd even if the data bits have been
changed, and the odd parity check won't be able to detect the error.
Points to Remember :

 In 1's complement representation of signed numbers, +0 and -0 have two different representations.


 The range of sign-magnitude representation of an 8-bit number in which 1 bit is used as the
sign bit is −(2^7 − 1) to +(2^7 − 1), i.e., -127 to +127.
 A floating point number is said to be normalized if the most significant digit of the mantissa is one.
For example, the 6-bit binary number 001101 is not normalized because of its two leading 0's.
 The Booth algorithm that uses two n-bit numbers for multiplication gives a result in 2n bits.

 The Booth algorithm uses 2's complement representation of numbers and works for both
positive and negative numbers.
 If k bits are used to represent the exponent, then the bias = (2^(k-1) − 1) and the range of the
exponent = −(2^(k-1) − 1) to 2^(k-1).

Register Transfer Language

A digital computer system exhibits an interconnection of digital modules such as registers, decoders,
arithmetic elements, and control logic.

These digital modules are interconnected with some common data and control paths to form a complete
digital system.

Moreover, digital modules are best defined by the registers and the operations that are performed on the
data stored in them.

The operations performed on the data stored in registers are called Micro-operations.

The internal hardware organization of a digital system is best defined by specifying:

o The set of registers and the flow of data between them.


o The sequence of micro-operations performed on the data stored in the registers.
o The control paths that initiate the sequence of micro-operations.

The Register Transfer Language is the symbolic representation of notations used to specify the sequence
of micro-operations.

In a computer system, data transfer takes place between processor registers and memory and between
processor registers and input-output systems. These data transfer can be represented by standard notations
given below:

o Notations R0, R1, R2..., and so on represent processor registers.


o The addresses of memory locations are represented by names such as LOC, PLACE, MEM, etc.
o Input-output registers are represented by names such as DATA IN, DATA OUT and so on.
o The content of register or memory location is denoted by placing square brackets around the name
of the register or memory location.

Register Transfer

The term Register Transfer refers to the availability of hardware logic circuits that can perform a given
micro-operation and transfer the result of the operation to the same or another register.

Most of the standard notations used for specifying operations on various registers are stated below.

o The memory address register is designated by MAR.


o Program Counter PC holds the next instruction's address.
o Instruction Register IR holds the instruction being executed.
o R1 (Processor Register).
o We can also indicate individual bits by placing them in parenthesis. For instance, PC (8-15), R2
(5), etc.
o Data Transfer from one register to another register is represented in symbolic form by means of
replacement operator. For instance, the following statement denotes a transfer of the data of
register R1 into register R2.

1. R2 ← R1

o Typically, we want the transfer to occur only under a predetermined control condition.
This can be shown by the following if-then statement:
If (P=1) then (R2 ← R1); Here P is a control signal generated in the control section.
o It is more convenient to specify a control function (P) by separating the control variables from the
register transfer operation. For instance, the following statement defines the data transfer operation
under a specific control function (P).

1. P: R2 ← R1

The following image shows the block diagram that depicts the transfer of data from R1 to R2.

Here, the letter 'n' indicates the number of bits for the register. The 'n' outputs of the register R1 are
connected to the 'n' inputs of register R2.

The control variable 'P' activates the load input of register R2, which causes the data to be transferred into R2.
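The controlled transfer P: R2 ← R1 can be simulated in a few lines of Python (registers are modeled as a dictionary; this is only an illustration of the semantics, not of the hardware):

```python
# Registers modeled as a dictionary; P: R2 <- R1 fires only when P = 1.
regs = {"R1": 0b1010, "R2": 0}

def transfer(P):
    """Perform the micro-operation P: R2 <- R1."""
    if P == 1:                  # the control function P gates the transfer
        regs["R2"] = regs["R1"]

transfer(P=0)
assert regs["R2"] == 0          # no transfer without the control signal
transfer(P=1)
assert regs["R2"] == regs["R1"] # R2 now holds a copy of R1
```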

Bus and Memory Transfers

A digital system is composed of many registers, and paths must be provided to transfer information from
one register to another. The number of wires connecting all of the registers would be excessive if separate
lines were used between each register and all other registers in the system.

A bus structure, on the other hand, is more efficient for transferring information between registers in a
multi-register configuration system.

A bus consists of a set of common lines, one for each bit of a register, through which binary information is
transferred one at a time. Control signals determine which register is selected by the bus during a particular
register transfer.

The following block diagram shows a bus system for four registers. It is constructed with the help of four
4 x 1 multiplexers, each having four data inputs (0 through 3) and two selection inputs (S1 and S0).

We have used labels to make it more convenient for you to understand the input-output configuration of
a Bus system for four registers. For instance, output 1 of register A is connected to input 0 of MUX1.

The two selection lines S1 and S0 are connected to the selection inputs of all four multiplexers. The
selection lines choose the four bits of one register and transfer them into the four-line common bus.

When both of the select lines are at low logic, i.e. S1S0 = 00, the 0 data inputs of all four multiplexers are
selected and applied to the outputs that forms the bus. This, in turn, causes the bus lines to receive the
content of register A since the outputs of this register are connected to the 0 data inputs of the multiplexers.

Similarly, when S1S0 = 01, register B is selected, and the bus lines will receive the content provided by
register B.

The following function table shows the register that is selected by the bus for each of the four possible
binary values of the Selection lines.
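The function table's behaviour — the select lines S1 S0 choosing which register drives the common bus — can be sketched in Python (the register contents are made-up sample values):

```python
# Four 4-bit registers; insertion order matches select codes 00, 01, 10, 11.
registers = {"A": 0b0001, "B": 0b0010, "C": 0b0100, "D": 0b1000}

def bus(s1, s0):
    """Return the contents placed on the bus for select lines S1 S0."""
    index = s1 * 2 + s0                      # 00->A, 01->B, 10->C, 11->D
    return list(registers.values())[index]

assert bus(0, 0) == registers["A"]   # S1S0 = 00 selects register A
assert bus(0, 1) == registers["B"]   # S1S0 = 01 selects register B
```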

A bus system can also be constructed using three-state gates instead of multiplexers.

A three-state gate is a digital circuit that exhibits three states. Two of the states are signals equivalent to
logic 1 and logic 0, as in a conventional gate. The third state is a high-impedance state.

The most commonly used three-state gate in bus systems is the buffer gate.

The graphical symbol of a three-state buffer gate can be represented as:

The following diagram demonstrates the construction of a bus system with three-state buffers.

o The outputs generated by the four buffers are connected to form a single bus line.
o Only one buffer can be in active state at a given point of time.
o The control inputs to the buffers determine which of the four normal inputs will communicate with
the bus line.
o A 2 * 4 decoder ensures that no more than one control input is active at any given point of time.
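The decoder-plus-buffer arrangement described in the bullets above can be sketched as follows; the 'Z' value standing in for the high-impedance state is a modelling convention, not part of the text.

```python
def decoder_2x4(s1, s0, enable=1):
    """2x4 decoder: at most one of the four outputs is active."""
    if not enable:
        return [0, 0, 0, 0]
    return [1 if i == ((s1 << 1) | s0) else 0 for i in range(4)]

def tristate_buffer(data, control):
    """Pass the input when enabled, else present high impedance ('Z')."""
    return data if control else 'Z'

def bus_line(inputs, s1, s0):
    """One bus line driven by four three-state buffers; the decoder
    guarantees that only one buffer is active at a time."""
    controls = decoder_2x4(s1, s0)
    outputs = [tristate_buffer(d, c) for d, c in zip(inputs, controls)]
    driven = [o for o in outputs if o != 'Z']
    return driven[0] if driven else 'Z'

print(bus_line([1, 0, 1, 0], 1, 0))  # buffer 2 drives the line -> 1
```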

Memory Transfer

Most of the standard notations used for specifying operations on memory transfer are stated below.

o The transfer of information from a memory unit to the user end is called a Read operation.
o The transfer of new information to be stored in the memory is called a Write operation.
o A memory word is designated by the letter M.
o We must specify the address of memory word while writing the memory transfer operations.
o The address register is designated by AR and the data register by DR.
o Thus, a read operation can be stated as:

1. Read: DR ← M [AR]
o The Read statement causes a transfer of information into the data register (DR) from the memory
word (M) selected by the address register (AR).

o And the corresponding write operation can be stated as:

1. Write: M [AR] ← R1
o The Write statement causes a transfer of information from register R1 into the memory word (M)
selected by address register (AR).
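The Read and Write statements above can be mimicked with a small Python model; the memory size, address, and register values are arbitrary assumptions chosen for illustration.

```python
# Hypothetical register and memory names follow the text: M, AR, DR, R1.
M = [0] * 16        # a tiny 16-word memory
AR = 5              # address register
R1 = 0b1010         # source register for the write

# Write: M[AR] <- R1
M[AR] = R1

# Read: DR <- M[AR]
DR = M[AR]

print(DR)  # 10
```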

Arithmetic Micro-operations

In general, the Arithmetic Micro-operations deal with the operations performed on numeric data stored
in the registers.

The basic Arithmetic Micro-operations are classified in the following categories:

1. Addition
2. Subtraction
3. Increment
4. Decrement
5. Shift

Some additional Arithmetic Micro-operations are classified as:

1. Add with carry


2. Subtract with borrow
3. Transfer/Load, etc.

The following table shows the symbolic representation of various Arithmetic Micro-operations.

Symbolic Representation Description

R3 ← R1 + R2 The contents of R1 plus R2 are transferred to R3.

R3 ← R1 - R2 The contents of R1 minus R2 are transferred to R3.

R2 ← R2' Complement the contents of R2 (1's complement)

R2 ← R2' + 1 2's complement the contents of R2 (negate)

R3 ← R1 + R2' + 1 R1 plus the 2's complement of R2 (subtraction)

R1 ← R1 + 1 Increment the contents of R1 by one

R1 ← R1 - 1 Decrement the contents of R1 by one
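The micro-operations in the table can be sketched on 4-bit registers (the register width is an arbitrary assumption) using Python's bitwise operators:

```python
MASK = 0xF  # model 4-bit registers; results wrap modulo 2^4

def add(a, b):       return (a + b) & MASK               # R3 <- R1 + R2
def complement(a):   return (~a) & MASK                  # R2 <- R2' (1's complement)
def negate(a):       return (complement(a) + 1) & MASK   # R2 <- R2' + 1 (2's complement)
def subtract(a, b):  return add(a, negate(b))            # R3 <- R1 + R2' + 1
def increment(a):    return (a + 1) & MASK               # R1 <- R1 + 1
def decrement(a):    return (a - 1) & MASK               # R1 <- R1 - 1

R1, R2 = 0b0111, 0b0011      # 7 and 3
print(add(R1, R2))           # 10
print(subtract(R1, R2))      # 4
print(increment(R1))         # 8
```

Note how subtraction is built from the 2's complement exactly as the table's R3 ← R1 + R2' + 1 entry describes.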

Stored Program Organization

• The simplest way to organize a computer is to have one processor register and an instruction code
with two parts. The first part specifies the operation to be performed and the second specifies an
address. The memory address tells where the operand will be found in memory.

Computer Registers

• Computer registers are high-speed storage units. A register is an element of the computer
processor that can hold any kind of information, including a bit sequence or a single datum.
• A register should be 32 bits in length for a 32-bit instruction computer. The number of registers
depends on the processor design and language rules.
• The instructions in a computer are stored in memory locations and executed one at a time. The
function of the control unit is to fetch an instruction from memory and execute it. The control
does the same for all the instructions in memory, in sequential order.
• A counter is needed to keep track of the next instruction to be executed and to calculate its
address. The figure shows the registers with their memories. The memory addresses are saved in
multiple registers. These requirements clearly establish the need for registers in a computer.

Register   Number    Register Name          Function
Symbol     of Bits

OUTR       8         Output register        It holds an output character.

INPR       8         Input register         It holds an input character.

PC         12        Program Counter        It holds the address of the instruction.

AR         12        Address Register       It holds an address for memory.

DR         16        Data Register          It holds a memory operand.

AC         16        Accumulator            It is a processor register.

IR         16        Instruction Register   It holds an instruction code.

TR         16        Temporary Register     It holds temporary data.


Computer Instructions
• Computer instructions are a set of machine language instructions that a particular processor
understands and executes. A computer performs tasks on the basis of the instruction provided.
• An instruction comprises groups called fields. These fields include:
• The Operation code (Opcode) field which specifies the operation to be performed.
• The Address field which contains the location of the operand, i.e., register or memory location.
• The Mode field which specifies how the operand will be located.
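The three fields can be pulled out of an instruction word with shifts and masks. The 16-bit layout used here (bit 15 = mode, bits 14-12 = opcode, bits 11-0 = address) is the basic-computer layout assumed throughout these notes:

```python
def decode(instruction):
    """Split a 16-bit basic-computer instruction into its fields."""
    i_bit   = (instruction >> 15) & 0x1    # mode field: direct/indirect
    opcode  = (instruction >> 12) & 0x7    # operation code, bits 14-12
    address = instruction & 0xFFF          # address field, bits 11-0
    return i_bit, opcode, address

# 0x2345: I = 0, opcode = 2, address = 0x345 (= 837 decimal)
print(decode(0x2345))  # (0, 2, 837)
```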

Timing and Control

The timing for all registers in the basic computer is controlled by a master clock generator. The clock
pulses are applied to all flip-flops and registers in the system, including the flip-flops and registers in the
control unit. The clock pulses do not change the state of a register unless the register is enabled by a
control signal. The control signals are generated in the control unit and provide control inputs for the
multiplexers in the common bus, control inputs in processor registers, and microoperations for the
accumulator.
There are two major types of control organization:
1. hardwired control and
2. microprogrammed control.

In the hardwired organization, the control logic is implemented with gates, flip-flops, decoders, and other
digital circuits. It has the advantage that it can be optimized to produce a fast mode of operation. In the
microprogrammed organization, the control information is stored in a control memory. The control
memory is programmed to initiate the required sequence of microoperations. A hardwired control, as the
name implies, requires changes in the wiring among the various components if the design has to be
modified or changed.
In the microprogrammed control, any required changes or modifications can be done by updating the
microprogram in control memory.

The block diagram of the control unit is


It consists of two decoders,
1. a sequence counter, and
2. a number of control logic gates.
Instruction Cycle
• A program residing in the memory unit of a computer consists of a sequence of instructions.
These instructions are executed by the processor by going through a cycle for each instruction.
In a basic computer, each instruction cycle consists of the following phases:
1. Fetch instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory.
4. Execute the instruction.
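The four phases above can be sketched as a loop. The two opcodes (LOAD and HALT) and the memory layout are invented purely to make the cycle visible; a real instruction set is far richer.

```python
def run(memory, pc, acc=0):
    """A toy fetch-decode-execute loop with two hypothetical opcodes:
    1 = LOAD addr (AC <- M[addr]), 0 = HALT."""
    while True:
        instruction = memory[pc]            # 1. fetch instruction
        pc += 1
        opcode = instruction >> 12          # 2. decode
        address = instruction & 0xFFF       # 3. effective address (direct mode)
        if opcode == 1:                     # 4. execute
            acc = memory[address]
        elif opcode == 0:
            return acc

memory = [0] * 16
memory[0] = (1 << 12) | 5   # LOAD 5
memory[1] = 0               # HALT
memory[5] = 42
print(run(memory, pc=0))    # 42
```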

Input-Output Configuration

• The input-output terminals send and receive information.
• The amount of information transferred will always have eight bits of an alphanumeric code.
• The information generated through the keyboard is shifted into an input register 'INPR'.
• The information for the printer is stored in the output register 'OUTR'.
• Registers INPR and OUTR communicate with a communication interface serially and with the
AC in parallel.
• The transmitter interface receives information from the keyboard and transmits it to INPR.
• The receiver interface receives information from OUTR and sends it to the printer serially.

Memory-Reference Instructions - STA, LDA and BSA

• The decoded Di for i = 0, 1, 2, 3, 4, 5, and 6 from the operation decoder that belongs to each
instruction is included in the table. The effective address of the instruction is in the address
register AR and was placed there during timing signal T2 when I = 0, or during timing signal T3
when I = 1.
• The execution of the memory-reference instructions starts with timing signal T4. The symbolic
description of each instruction is specified in the table in terms of register transfer notation.
• The actual execution of the instruction in the bus system will require a sequence of
microoperations. This is because data stored in memory cannot be processed directly. The data
must be read from memory to a register where they can be operated on with logic circuits.
• We now explain the operation of each instruction and list the control functions and
microoperations needed for their execution.
A flowchart that summarizes all the microoperations is presented at the end of this section
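The three instructions named in the heading can be sketched in register-transfer terms. The register names (M, AR, AC, PC) follow the text; the micro-operation sequences are the standard ones and are shown here as an illustrative assumption.

```python
def LDA(M, AR):
    DR = M[AR]              # DR <- M[AR]
    return DR               # AC <- DR

def STA(M, AR, AC):
    M[AR] = AC              # M[AR] <- AC

def BSA(M, AR, PC):
    M[AR] = PC              # save the return address at the subroutine entry
    return AR + 1           # PC <- AR + 1: continue at the word after it

M = [0] * 16
M[8] = 99
AC = LDA(M, AR=8)           # load AC from memory
STA(M, AR=3, AC=AC)         # store AC at another address
PC = BSA(M, AR=10, PC=2)    # branch and save return address
print(AC, M[3], M[10], PC)  # 99 99 2 11
```

BSA illustrates why the saved return address lives in memory: a later "indirect jump through AR" can return to the caller.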

What is Machine Language?

Machine language is a low-level language made up of binary numbers or bits that a computer can
understand. It is also known as machine code or object code and is extremely tough to comprehend.
The only language that the computer understands is machine language. All programmes and programming
languages, such as Swift and C++, produce or run programmes in machine language before they are run
on a computer.
Whenever a task, even the smallest process, executes, machine language is sent to the system
processor. Computers are only able to understand binary data, as they are digital devices.
Examples of Machine language

• The text "Hello World" would be written in the machine language:
• Another example of machine language is given below, which will display the letter "A" 1000
times on the screen.

What is Assembly Language?

• Each personal computer has a microprocessor that manages the computer's arithmetical, logical,
and control activities.
• Each family of processors has its own set of instructions for handling various operations such as
getting input from keyboard, displaying information on screen and performing various other
jobs. This set of instructions is called 'machine language instructions'.
• A processor understands only machine language instructions, which are strings of 1's and 0's.
However, machine language is too obscure and complex for use in software development.
• So, the low-level assembly language is designed for a specific family of processors that
represents various instructions in symbolic code and a more understandable form.

Advantages of Assembly Language


• Having an understanding of assembly language makes one aware of −
• How programs interface with OS, processor, and BIOS;
• How data is represented in memory and other external devices;
• How the processor accesses and executes instruction;
• How instructions access and process data;
• How a program accesses external devices.
Other advantages of using assembly language are −
• It requires less memory and execution time;
• It allows hardware-specific complex jobs to be done in an easier way;
• It is suitable for time-critical jobs;
• It is most suitable for writing interrupt service routines and other memory resident programs.

Assembler
• The Assembler is software that converts assembly language code to machine code.
• It takes basic computer commands and converts them into the binary code that the computer's
processor can use to perform its basic operations. These instructions are written in assembler
language, or assembly language.
• An assembler can therefore be regarded as the compiler of assembly language: just as a compiler
converts a high-level language to machine language, an assembler performs the same translation
for assembly language.
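The translation an assembler performs can be sketched in a few lines. The mnemonic-to-opcode table and the 16-bit word layout below are hypothetical, not a real instruction set:

```python
# A minimal one-pass assembler sketch: each line is "MNEMONIC [hex address]".
OPCODES = {'LDA': 0x2, 'STA': 0x3, 'ADD': 0x1, 'HLT': 0x7}

def assemble(lines):
    """Translate mnemonic lines into 16-bit machine words:
    opcode in bits 15-12, address in bits 11-0."""
    words = []
    for line in lines:
        parts = line.split()
        opcode = OPCODES[parts[0]]
        address = int(parts[1], 16) if len(parts) > 1 else 0
        words.append((opcode << 12) | address)
    return words

program = ["LDA 100", "ADD 101", "STA 102", "HLT"]
print([hex(w) for w in assemble(program)])  # ['0x2100', '0x1101', '0x3102', '0x7000']
```

A real assembler would also resolve symbolic labels, typically with a second pass over the source.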

SUBROUTINES

• Subroutines are programs that are used by other routines to accomplish a particular task. A
subroutine can be called from any point within the main body of the micro-program. Frequently,
many micro-programs contain identical sections of code. Microinstructions can be saved by
employing subroutines that use common sections of microcode.

• For example, the sequence of micro-operations needed to generate the effective address of the
operand for instruction is common to all memory reference instructions. This sequence could be a
subroutine that is called from within many other routines to execute the effective address
computation.
• Micro-programs that use subroutines must have a provision for storing the return address during a
subroutine call and restoring the address during a subroutine return. This may be accomplished by
placing the incremented output from the control address register into a subroutine register and
branching to the beginning of the subroutine.
• The subroutine register can then become the source for transferring the address for the return to
the main routine. The best way to structure a register file that stores addresses for subroutines is to
organize the registers in a last-in, first-out (LIFO) stack.
• The instruction that transfers program control to a subroutine is known by different names. The
most common names used are called subroutine, jump to the subroutine, branch to the subroutine,
or branch and save the address.
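The save-and-restore mechanism described above can be sketched with a LIFO stack of return addresses. The class and its method names are invented for illustration; only CAR comes from the text's model.

```python
class Sequencer:
    """Toy micro-program sequencer with subroutine call/return support."""

    def __init__(self):
        self.CAR = 0          # control address register
        self.stack = []       # LIFO stack of return addresses

    def call(self, subroutine_addr):
        self.stack.append(self.CAR + 1)   # save the incremented CAR
        self.CAR = subroutine_addr        # branch to the subroutine

    def ret(self):
        self.CAR = self.stack.pop()       # restore the return address

seq = Sequencer()
seq.CAR = 10
seq.call(40)      # e.g. jump to the effective-address routine at 40
print(seq.CAR)    # 40
seq.ret()
print(seq.CAR)    # 11
```

Because the stack is last-in, first-out, nested subroutine calls unwind in the correct order.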

Addressing Sequencing

• The control memory is used to store the microinstructions in groups. Here each group is used to
specify a routine. The control memory of each computer has the instructions which contain their
micro-programs routine.
• These micro-programs are used to generate the micro-operations that will be used to execute the
instructions. If the address sequencing of control memory is controlled by hardware, that
hardware must be capable of branching from one routine to another, and of sequencing
microinstructions within a routine.
• When we try to execute a single instruction of computer, the control must undergo the following
steps:
• When the power of a computer is turned on, we have to first load an initial address into the CAR
(control address register). This address can be described as the first microinstruction address. With
the help of this address, we are able to activate the instruction fetch routine.
• Then, the control memory will go through the routine, which will be used to find out the effective
address of operand.
• In the next step, a micro-operation will be generated, which will be used to execute the instruction
fetched from memory.

• The bits of the instruction code can be transformed into the address in control memory where the
routine is located. This process is called the mapping process. The control memory requires the
following address-sequencing capabilities:
• On the basis of the status bit conditions, the address sequencing selects the conditional branch or
unconditional branch.
• Addressing sequence is able to increment the CAR (Control address register).
• It provides the facility for subroutine calls and returns.
• A mappings process is provided by the addressing sequence from the instructions bits to a control
memory address.

Design of Control Unit

• The Control Unit is classified into two major categories:


1. Hardwired Control
2. Microprogrammed Control

Hardwired Control

• The Hardwired Control organization involves the control logic to be implemented with gates,
flip-flops, decoders, and other digital circuits.

• A Hard-wired Control consists of two decoders, a sequence counter, and a number of logic gates.
• An instruction fetched from the memory unit is placed in the instruction register (IR).
• The components of the instruction register include the I bit, the operation code, and bits 0 through
11.
• The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.
• The outputs of the decoder are designated by the symbols D0 through D7.
• Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.
• Bits 0 through 11 are applied to the control logic gates.
• The sequence counter (SC) can count in binary from 0 through 15.

Micro-programmed Control

In Microprogrammed Control, the micro-operations are performed by executing a program consisting of


micro-instructions.

• The Control memory address register specifies the address of the micro-instruction.
• The Control memory is assumed to be a ROM, within which all control information is
permanently stored.

• The control register holds the microinstruction fetched from the memory.
• The micro-instruction contains a control word that specifies one or more micro-operations for the
data processor.
• While the micro-operations are being executed, the next address is computed in the next address
generator circuit and then transferred into the control address register to read the next
microinstruction.
• The next address generator is often referred to as a micro-program sequencer, as it determines
the address sequence that is read from control memory.

General Register Organization

• A set of flip-flops forms a register. A register is a unique high-speed storage area in the CPU.
They include combinational circuits that implement data processing. The information is always
defined in a register before processing. The registers speed up the implementation of programs.
• Registers perform two important functions in CPU operation, as follows −
• They provide temporary storage for data. This gives the currently executing program fast access
to data when required.
• They save the status of the CPU and information about the currently executing program.

Stack Organization

• Stack is also known as the Last In First Out (LIFO) list. It is the most important feature in the
CPU. It saves data such that the element stored last is retrieved first. A stack is a memory unit
with an address register.
• This register holds the address for the stack and is known as the Stack Pointer (SP). The stack
pointer always holds the address of the element that is located at the top of the stack.
• It can insert an element into or delete an element from the stack. The insertion operation is
known as push operation and the deletion operation is known as pop operation. In a computer
stack, these operations are simulated by incrementing or decrementing the SP register.

Register Stack
• The stack can be arranged as a set of memory words or registers. Consider a 64-word register
stack arranged as displayed in the figure.
• The stack pointer register includes a binary number, which is the address of the element present
at the top of the stack. The three-element A, B, and C are located in the stack.
• The element C is at the top of the stack and the stack pointer holds the address of C that is 3. The
top element is popped from the stack through reading memory word at address 3 and
decrementing the stack pointer by 1.
• Then, B is at the top of the stack and the SP holds the address of B that is 2. It can insert a new
word, the stack is pushed by incrementing the stack pointer by 1 and inserting a word in that
incremented location.

Instruction Formats

An instruction includes an operation code and the operands that the operation code works on. The
instruction format describes the layout of bits in an instruction. It contains fields including the opcode,
operands, and addressing mode.
The instruction length is generally kept in multiples of the character length, which is 8 bits. When the
instruction length is fixed, a certain number of bits is assigned to the opcode, operands, and addressing
modes.
The allocation of bits in an instruction can be determined by considering the following
elements −
• Number of addressing modes
• Number of operands
• Number of CPU registers
• Number of register sets
• Number of address lines

Addressing Modes

• The operands of the instructions can be located either in the main memory or in the CPU
registers.
• If the operand is placed in the main memory, then the instruction provides the location address in
the operand field. Many methods are followed to specify the operand address.
• The different methods/modes for specifying the operand address in the instructions are known as
addressing modes.

Types of Addressing Modes

There are various types of Addressing Modes which are as follows −


• Implied Mode − In this mode, the operands are specified implicitly in the definition of the
instruction. For example, the instruction "complement accumulator" is an implied-mode
instruction because the operand in the accumulator register is implied in the definition of the
instruction. All register reference instructions that use an accumulator are implied-mode
instructions.
• Immediate Mode − In this mode, the operand is specified in the instruction itself. In other words,
an immediate-mode instruction has an operand field instead of an address field. The operand field
includes the actual operand to be used in conjunction with the operation determined in the
instruction. Immediate-mode instructions are beneficial for initializing registers to a constant
value.
• Register Mode − In this mode, the operands are in registers that reside within the CPU. The specific
register is selected from a register field in the instruction. A k-bit field can specify any one of
2^k registers.
• Register Indirect Mode − In this mode, the instruction defines a register in the CPU whose contents
provide the address of the operand in memory. In other words, the selected register includes the
address of the operand rather than the operand itself.
• A reference to the register is then equivalent to specifying a memory address. The advantage of a
register indirect mode instruction is that the address field of the instruction uses fewer bits to select
a register than would have been required to specify a memory address directly.
• Autoincrement or Autodecrement Mode − This is similar to the register indirect mode
except that the register is incremented or decremented after (or before) its value is used to access
memory. When the address stored in the register defines a table of data in memory, it is necessary
to increment or decrement the register after every access to the table. This can be obtained by using
the increment or decrement instruction.

• Direct Address Mode − In this mode, the effective address is equal to the address part of the
instruction. The operand resides in memory and its address is given directly by the address field
of the instruction. In a branch-type instruction, the address field specifies the actual branch address.
• Indirect Address Mode − In this mode, the address field of the instruction gives the address where
the effective address is stored in memory. Control fetches the instruction from memory and uses
its address part to access memory again to read the effective address.
• Indexed Addressing Mode − In this mode, the content of an index register is added to the address
part of the instruction to obtain the effective address. The index register is a special CPU register
that contains an index value. The address field of the instruction defines the beginning address of
a data array in memory.
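A few of the modes above can be contrasted by computing the effective address for each. The memory contents, register values, and the use of XR as the index register are all invented for illustration:

```python
memory = {100: 25, 25: 7, 300: 9}
registers = {'R1': 25, 'XR': 200}   # XR plays the index register

def effective_address(mode, addr_field=None, reg=None):
    """Return the effective address (EA) for a sample of addressing modes."""
    if mode == 'direct':
        return addr_field                      # EA = address field
    if mode == 'indirect':
        return memory[addr_field]              # EA = M[address field]
    if mode == 'register_indirect':
        return registers[reg]                  # EA = contents of the register
    if mode == 'indexed':
        return addr_field + registers['XR']    # EA = address field + index
    raise ValueError(mode)

print(effective_address('direct', 100))                   # 100
print(effective_address('indirect', 100))                 # 25 (M[100])
print(effective_address('register_indirect', reg='R1'))   # 25
print(effective_address('indexed', 100))                  # 300
```

Note that the same address field (100) yields a different effective address under each mode, which is the whole point of mode bits in the instruction format.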

RISC Processor

• RISC stands for Reduced Instruction Set Computer. In Reduced Instruction Set Computer (RISC)
architecture, the instruction set of the computer is simplified to reduce the execution time. RISC
has a small set of instructions, which generally include register-to-register operations.
• Thus, data is stored in processor registers for computations, and results of the computations are
transferred to the memory using store instructions. All operations are performed within the
registers of the CPU. In RISC, all instructions have simple register addressing and hence use less
number of addressing modes.
• RISC uses relatively a simple instruction format and is easy to decode. Here, the instruction length
can be fixed and aligned on word boundaries. The RISC processors can execute one instruction
per clock cycle.
• This is done using pipelining, which involves overlapping the fetch, decode, and execute phases
of two or three instructions. As RISC takes relatively a large number of registers in the processor
unit, it takes less time to execute its program when compared to CISC.

Features of RISC Processor

• There are various features of RISC Processor that are as follows −


• It has relatively few instructions.
• It has relatively few addressing modes.
• Memory access is limited to load and store instructions.
• All operations are done within the registers of the CPU.
• It has a fixed-length, easily decoded instruction format.
• It uses single-cycle instruction execution.
• Control is hardwired rather than micro-programmed.

CISC Processor

• CISC stands for Complex Instruction Set Computer. It comprises a complex instruction set. It
incorporates a variable-length instruction format. Instructions that require register operands can
take only two bytes.
• The instructions that require two memory addresses can take five bytes to include the complete
instruction code. Thus, CISC has a variable-length encoding of instructions and the execution of

instructions may take a varying number of clock cycles. The CISC processor provides direct
manipulation of operands that are in memory.
• The task of a compiler is to generate a sequence of machine instructions for each high-level
language statement. The task is simplified if there are machine instructions that implement the
statements directly. The essential goal of a CISC architecture is to attempt to support a single
machine instruction for each statement that is written in a high-level language.

• Example − An ADD instruction will use index addressing to specify one operand in memory and
direct addressing to specify the second operand in memory. This instruction would use another
memory location to store the result. Thus, this instruction would use three memory references for
execution.
• Many CISC architectures read their inputs from, and write their outputs to, the memory system
instead of a register file. As CISC architecture supports a large number of addressing modes, more
hardware logic is required to implement them. This reduces the computation speed.
• The CISC architecture attempts to provide a single machine instruction for the statements that are
written in a high-level language.

Features of CISC Processor


There are various features of CISC Processor that are as follows −
• A large number of instructions, typically from 100 to 250.
• Some instructions that perform specialized tasks and are used infrequently.
• A large variety of addressing modes, typically from 5 to 20 different modes.
• It has variable-length instruction formats.
• Its instructions manipulate operands in memory.

Parallel Processing
• Parallel processing can be described as a class of techniques which enables the system to achieve
simultaneous data-processing tasks to increase the computational speed of a computer system.
• A parallel processing system can carry out simultaneous data-processing to achieve faster
execution time. For instance, while an instruction is being processed in the ALU component of the
CPU, the next instruction can be read from memory.
• The primary purpose of parallel processing is to enhance the computer processing capability and
increase its throughput, i.e. the amount of processing that can be accomplished during a given
interval of time.
• A parallel processing system can be achieved by having a multiplicity of functional units that
perform identical or different operations simultaneously. The data can be distributed among
various multiple functional units.

Pipelining
• The term Pipelining refers to a technique of decomposing a sequential process into sub-operations,
with each sub-operation being executed in a dedicated segment that operates concurrently with all
other segments.
• The most important characteristic of a pipeline technique is that several computations can be in
progress in distinct segments at the same time. The overlapping of computation is made possible
by associating a register with each segment in the pipeline. The registers provide isolation between
each segment so that each can operate on distinct data simultaneously.
• The structure of a pipeline organization can be represented simply by including an input register
for each segment followed by a combinational circuit.
• Let us consider an example of combined multiplication and addition operation to get a better
understanding of the pipeline organization.
R1 ← R1 * R2, R4 ← Ci      Multiply and input Ci
R5 ← R3 + R4               Add Ci to product
R1 ← Ai, R2 ← Bi           Input Ai and Bi
R3 ← R1 * R2, R4 ← Ci      Multiply and input Ci
R5 ← R3 + R4               Add Ci to product
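The three-segment Ai * Bi + Ci pipeline can be simulated directly. The simulation order within each tick (segment 3 first, then 2, then 1) models all segment registers being clocked simultaneously; the sample operands are arbitrary.

```python
def pipeline(A, B, C):
    """Simulate Ai * Bi + Ci flowing through three pipeline segments.
    R1..R4 model the segment registers; each R5 result is collected in
    `results`. All segments operate concurrently on different items."""
    n = len(A)
    R1 = R2 = R3 = R4 = None
    results = []
    for tick in range(n + 2):                  # n items + 2 ticks to drain
        # Segment 3: R5 <- R3 + R4 (uses values latched on the previous tick)
        if R3 is not None:
            results.append(R3 + R4)
        # Segment 2: R3 <- R1 * R2, R4 <- Ci
        R3 = R1 * R2 if R1 is not None else None
        R4 = C[tick - 1] if 1 <= tick <= n else None
        # Segment 1: R1 <- Ai, R2 <- Bi
        R1, R2 = (A[tick], B[tick]) if tick < n else (None, None)
    return results

print(pipeline([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # [11, 18, 27]
```

Three items take five ticks here instead of nine (three ticks each sequentially), which is exactly the speedup pipelining is after.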

Arithmetic Pipeline
• Arithmetic Pipelines are mostly used in high-speed computers. They are used to implement
floating-point operations, multiplication of fixed-point numbers, and similar computations
encountered in scientific problems.
• To understand the concepts of arithmetic pipeline in a more convenient way, let us consider an
example of a pipeline unit for floating-point addition and subtraction.
• X = A × 2^a = 0.9504 × 10^3
• Y = B × 2^b = 0.8200 × 10^2

• The combined operation of floating-point addition and subtraction is divided into four segments.
Each segment contains the corresponding suboperation to be performed in the given pipeline.
The suboperations that are shown in the four segments are:
1. Compare the exponents by subtraction.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalize the result.
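The four suboperations can be sketched on (mantissa, exponent) pairs. Decimal mantissas are used to match the worked example in the text; the rounding to five digits is an implementation convenience, not part of the algorithm.

```python
def float_add(x, y):
    """Add two floating-point numbers given as (mantissa, exponent)
    pairs with decimal exponents, e.g. (0.9504, 3) = 0.9504 * 10^3."""
    (ma, ea), (mb, eb) = x, y
    # 1. Compare the exponents by subtraction; keep the larger one.
    diff = ea - eb
    # 2. Align the mantissa of the smaller number.
    if diff > 0:
        mb = mb / (10 ** diff)
    elif diff < 0:
        ma = ma / (10 ** -diff)
        ea = eb
    # 3. Add the mantissas.
    m, e = ma + mb, ea
    # 4. Normalize: the mantissa must satisfy 0.1 <= |m| < 1.
    while abs(m) >= 1:
        m, e = m / 10, e + 1
    return round(m, 5), e

print(float_add((0.9504, 3), (0.8200, 2)))  # (0.10324, 4)
```

In a hardware pipeline each of these four steps runs in its own segment, so four different additions can be in flight at once.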

Instruction Pipeline
• Pipeline processing can occur not only in the data stream but in the instruction stream as well.
• Most of the digital computers with complex instructions require instruction pipeline to carry out
operations like fetch, decode and execute instructions.
In general, the computer needs to process each instruction with the following sequence of steps.
1. Fetch instruction from memory.
2. Decode the instruction.
3. Calculate the effective address.
4. Fetch the operands from memory.
5. Execute the instruction.
6. Store the result in the proper place.
• Each step is executed in a particular segment, and there are times when different segments may
take different times to operate on the incoming information. Moreover, there are times when two
or more segments may require memory access at the same time, causing one segment to wait until
another is finished with the memory.

• The organization of an instruction pipeline will be more efficient if the instruction cycle is divided
into segments of equal duration. One of the most common examples of this type of organization is
a Four-segment instruction pipeline.
• A four-segment instruction pipeline combines two or more different segments and makes it as a
single one. For instance, the decoding of the instruction can be combined with the calculation of
the effective address into one segment.
• The following block diagram shows a typical example of a four-segment instruction pipeline. The
instruction cycle is completed in four segments.

Vector(Array) Processor and its Types

Array processors are also known as multiprocessors or vector processors. They perform computations on
large arrays of data. Thus, they are used to improve the performance of the computer.
Types of Array Processors
• There are basically two types of array processors:
1. Attached Array Processors
2. SIMD Array Processors

Attached Array Processors


• An attached array processor is a processor which is attached to a general purpose computer and
its purpose is to enhance and improve the performance of that computer in numerical
computational tasks. It achieves high performance by means of parallel processing with multiple
functional units.

SIMD Array Processors

• SIMD is the organization of a single computer containing multiple processors operating in parallel.
The processing units are made to operate under the control of a common control unit, thus
providing a single instruction stream and multiple data streams.
• It contains a set of identical processing elements (PE's), each of which is having a local memory
M. Each processor element includes an ALU and registers. The master control unit controls all the
operations of the processor elements. It also decodes the instructions and determines how the
instruction is to be executed.
• The main memory is used for storing the program. The control unit is responsible for fetching the
instructions. Vector instructions are sent to all PE's simultaneously, and the results are returned to memory.
• The best known SIMD array processor is the ILLIAC IV computer developed by the Burroughs Corporation. SIMD processors are highly specialized computers: they are suitable only for numerical problems that can be expressed in vector or matrix form, not for other types of computation.

Peripheral Devices

Peripheral devices are those devices that are linked either internally or externally to a computer. These
devices are commonly used to transfer data.
The most common processes that are carried out in a computer are entering data and displaying processed
data.
Several devices can be used to receive data and display processed data.
The devices used to perform these functions are called peripherals or I/O devices.

Modes of I/O Data Transfer


Data transfer between the central unit and I/O devices can be handled in generally three types of modes
which are given below:
1. Programmed I/O
2. Interrupt Initiated I/O
3. Direct Memory Access
Programmed I/O
• Programmed I/O operations are the result of I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program.
• Usually the program controls data transfer to and from the CPU and peripheral. Transferring data under programmed I/O requires constant monitoring of the peripherals by the CPU.
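The busy-wait character of programmed I/O can be sketched with an entirely hypothetical device interface:

```python
# Sketch (hypothetical device interface, not a real driver API): under
# programmed I/O the CPU sits in a polling loop, repeatedly reading a status
# flag, and moves one data item at a time once the peripheral reports ready.

class FakeDevice:
    """Stand-in peripheral that becomes ready on every third status poll."""
    def __init__(self, data):
        self._data = list(data)
        self._polls = 0

    def status_ready(self):
        self._polls += 1
        return self._polls % 3 == 0

    def read_data(self):
        return self._data.pop(0)

def programmed_io_read(device, count):
    received = []
    for _ in range(count):
        while not device.status_ready():   # busy wait: CPU does nothing else
            pass
        received.append(device.read_data())
    return received

print(programmed_io_read(FakeDevice([10, 20, 30]), 3))   # [10, 20, 30]
```

The inner `while` loop is exactly the wasted processor time that interrupt-initiated I/O, described next, avoids.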
Interrupt Initiated I/O
• In the programmed I/O method, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process because it keeps the processor busy needlessly.
• This problem can be overcome by using interrupt-initiated I/O. Here, when the interface determines that the peripheral is ready for data transfer, it generates an interrupt. After receiving the interrupt signal, the CPU stops the task it is processing, services the I/O transfer, and then returns to its previous task.

Direct Memory Access

• Removing the CPU from the path and letting the peripheral device manage the memory buses
directly would improve the speed of transfer. This technique is known as DMA.
• In this mode, the interface transfers data to and from the memory through the memory bus. A DMA controller manages the transfer of data between peripherals and the memory unit.
• Many hardware systems use DMA such as disk drive controllers, graphic cards, network cards and
sound cards etc. It is also used for intra chip data transfer in multicore processors.
• In DMA, CPU would initiate the transfer, do other operations while the transfer is in progress and
receive an interrupt from the DMA controller when the transfer has been completed.

Types of Interrupts:
• Following are some different types of interrupts:
Hardware Interrupts

• When the signal for the processor comes from an external device or hardware, the interrupt is known as a hardware interrupt.
• For example, when we press a key on the keyboard, the key press generates an interrupt signal for the processor to perform a certain action. Such an interrupt can be of two types:
• Maskable Interrupt
• A hardware interrupt that can be delayed when a higher-priority interrupt occurs at the same time.
• Non Maskable Interrupt
• A hardware interrupt that cannot be delayed and must be processed by the processor immediately.

Software Interrupts

• The interrupt that is caused by any internal system of the computer system is known as
a software interrupt. It can also be of two types:
• Normal Interrupt
• The interrupts that are caused by software instructions are called normal software interrupts.
• Exception
• Unplanned interrupts which are produced during the execution of some program are
called exceptions, such as division by zero.

Interconnection Structures

• The collection of paths connecting the various modules is called the interconnecting structure.
• All the units must be connected
• Different type of connection for different type of unit
• Memory
• Input/Output
• CPU
• Memory Connection
• Receives and sends data
• Receives addresses (of locations)

• Receives control signals
• Read
• Write
• Timing
• I/O Connection
• Similar to memory from computer’s viewpoint
• Output
• Receive data from computer
• Send data to peripheral
• Input
• Receive data from peripheral
• Send data to computer
• Receive control signals from computer
• Send control signals to peripherals
• eg. spin disk
• Receive addresses from computer
• eg. port number to identify peripheral
• Send interrupt signals (control)
• CPU Connection
• Reads instruction and data
• Writes out data (after processing)
• Sends control signals to other units
• Receives (& acts on) interrupts

What is Inter Process Communication?

• In general, Inter Process Communication is a mechanism provided by the operating system (OS). Its main goal is to allow communication between several processes. In short, inter-process communication lets one process inform another process that some event has occurred.
• Let us now look at the general definition of inter-process communication, which states the same idea:

Definition

• "Inter-process communication is used for exchanging useful information between numerous threads in one or more processes (or programs)."
• To understand inter process communication, you can consider the following given diagram that
illustrates the importance of inter-process communication:

Role of Synchronization in Inter Process Communication

• Synchronization is one of the essential parts of inter process communication. Typically, it is provided by the inter-process communication control mechanisms, but sometimes it can also be handled by the communicating processes themselves.

• The following methods are used to provide synchronization:
1. Mutual Exclusion
2. Semaphore
3. Barrier
4. Spinlock
• Mutual Exclusion:-
• It is generally required that only one process thread can enter the critical section at a time. This
also helps in synchronization and creates a stable state to avoid the race condition.
• Semaphore:-
• Semaphore is a type of variable that usually controls the access to the shared resources by several
processes. Semaphore is further divided into two types which are as follows:
1. Binary Semaphore
2. Counting Semaphore
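These primitives can be illustrated with Python's threading module (a spinlock is simply a busy-wait loop on a flag and is not shown here):

```python
# Sketch of the primitives listed above using Python's threading module:
# a Lock for mutual exclusion, a counting Semaphore limiting concurrent
# access to a shared resource, and a Barrier for a thread rendezvous.
import threading

counter = 0
lock = threading.Lock()                # mutual exclusion for the counter
sem = threading.Semaphore(2)           # at most two threads in the resource
barrier = threading.Barrier(4)         # all four threads must arrive here

def worker():
    global counter
    with sem:                          # counting semaphore guards the resource
        with lock:                     # critical section: one thread at a time
            counter += 1
    barrier.wait()                     # no thread proceeds until all arrive

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                         # 4
```

A binary semaphore (`threading.Semaphore(1)`) behaves like the mutual-exclusion lock; a counting semaphore admits several holders at once.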

COMPUTER ARITHMETIC

Introduction:

Data is manipulated by using the arithmetic instructions in digital computers. It is manipulated to produce the results necessary to solve computational problems. Addition, subtraction, multiplication and division are the four basic arithmetic operations; other operations can be derived from these four.

To execute arithmetic operations there is a separate section, called the arithmetic processing unit, in the central processing unit. Arithmetic instructions are generally performed on binary or decimal data. Fixed-point numbers are used to represent integers or fractions, and numbers may be signed or unsigned. Fixed-point addition is the simplest arithmetic operation.

To solve a problem we use a sequence of well-defined steps, collectively called an algorithm.

In order to solve computational problems, arithmetic instructions are used in digital computers to manipulate data. These instructions perform arithmetic calculations and play a major role in processing data in a digital computer. As already stated, with the four basic arithmetic operations (addition, subtraction, multiplication and division) it is possible to derive other arithmetic operations and solve scientific problems by means of numerical analysis methods.

A processor has an arithmetic processor (as a sub-part of it) that executes arithmetic operations. The data type is assumed to reside in processor registers during the execution of an arithmetic instruction. Negative numbers may be in a signed-magnitude or signed-complement representation. There are three ways of representing negative fixed-point binary numbers: signed magnitude, signed 1's complement, or signed 2's complement. Most computers use the signed-magnitude representation for the mantissa.

Addition and Subtraction :


Addition and Subtraction with Signed-Magnitude Data

We designate the magnitudes of the two numbers by A and B. When the signed numbers are added or subtracted, there are eight different conditions to consider, depending on the signs of the numbers and the operation performed. These conditions are listed in the first column of Table 4.1. The other columns in the table show the actual operation to be performed with the magnitudes of the numbers. The last column is needed to prevent a negative zero; in other words, when two equal numbers are subtracted, the result should be +0, not -0.

The algorithms for addition and subtraction are derived from the table and can be stated as follows (the words in parentheses should be used for the subtraction algorithm):

Addition and Subtraction of Signed-Magnitude Numbers


SIGNED MAGNITUDE ADDITION AND SUBTRACTION


Addition: A + B (A: augend, B: addend)
Subtraction: A - B (A: minuend, B: subtrahend)

                              Add           Subtract Magnitudes
Operation                     Magnitudes    When A>B    When A<B    When A=B
(+A) + (+B)                   +(A + B)
(+A) + (- B)                                +(A - B)    - (B - A)   +(A - B)
(- A) + (+B)                                - (A - B)   +(B - A)    +(A - B)
(- A) + (- B)                 - (A + B)
(+A) - (+B)                                 +(A - B)    - (B - A)   +(A - B)
(+A) - (- B)                  +(A + B)
(- A) - (+B)                  - (A + B)
(- A) - (- B)                               - (A - B)   +(B - A)    +(A - B)

Hardware Implementation

[Figure: the B register feeds a complementer controlled by the mode control M; a parallel adder with output carry E forms the sum, which is loaded into the A register; AVF is the add-overflow flip-flop.]
Computer Organization Prof. H. Yoon


Algorithm:

The flowchart is shown in Figure 7.1. The two signs As and Bs are compared by an exclusive-OR gate.

If the output of the gate is 0, the signs are identical; if it is 1, the signs are different.

For an add operation, identical signs dictate that the magnitudes be added. For a
subtract operation, different signs dictate that the magnitudes be added.
The magnitudes are added with a microoperation EA ← A + B, where EA is a register that combines E and A. The carry in E after the addition constitutes an overflow if it is equal to 1. The value of E is transferred into the add-overflow flip-flop AVF.

The two magnitudes are subtracted if the signs are different for an add operation or
identical for a subtract operation. The magnitudes are subtracted by adding A to the 2's
complemented B. No overflow can occur if the numbers are subtracted so AVF is
cleared to 0.

A 1 in E indicates that A >= B and that the number in A is the correct result. If this number is zero, the sign As must be made positive to avoid a negative zero.

A 0 in E indicates that A < B. In this case it is necessary to take the 2's complement of the value in A. The operation can be done with one microoperation A ← A' + 1. However, we assume that the A register has circuits for the complement and increment microoperations, so the 2's complement is obtained from these two microoperations.

In the other paths of the flowchart, the sign of the result is the same as the sign of A, so no change in A is required. However, when A < B, the sign of the result is the complement of the original sign of A. It is then necessary to complement As to obtain the correct sign.

The final result is found in register A and its sign in As. The value in AVF provides an
overflow indication. The final value of E is immaterial.
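A minimal sketch of this algorithm in Python, assuming 8-bit magnitude registers with separate sign bits (0 for plus, 1 for minus); the names are illustrative:

```python
# Sketch of the flowchart above (register width and names assumed): N-bit
# magnitudes with separate sign bits; magnitude subtraction is done, as in
# the hardware, by adding the 2's complement of B and inspecting the end
# carry E.

N = 8                                  # assumed register width
MASK = (1 << N) - 1

def addsub(As, A, Bs, B, subtract):
    """Return (sign, magnitude, AVF) of A +/- B in signed-magnitude form."""
    if subtract:
        Bs ^= 1                        # A - B is A + (-B)
    if As == Bs:                       # identical signs: add magnitudes
        total = A + B
        AVF = (total >> N) & 1         # end carry becomes the overflow bit
        return As, total & MASK, AVF
    # different signs: subtract magnitudes via the 2's complement of B
    total = A + (((~B) & MASK) + 1)
    E = (total >> N) & 1
    A = total & MASK
    if E == 1:                         # A >= B: A already holds the result
        if A == 0:
            As = 0                     # avoid a negative zero
    else:                              # A < B: 2's complement A, flip sign
        A = ((~A & MASK) + 1) & MASK
        As ^= 1
    return As, A, 0                    # no overflow when magnitudes subtract

print(addsub(0, 25, 1, 60, subtract=False))   # (1, 35, 0): +25 + (-60) = -35
print(addsub(0, 12, 0, 12, subtract=True))    # (0, 0, 0):  +12 - 12 = +0, not -0
```

The second call exercises the negative-zero rule from the last column of the table.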

Figure 7.2 shows a block diagram of the hardware for implementing the addition and
subtraction operations.

It consists of registers A and B and sign flip-flops As and Bs.


Subtraction is done by adding A to the 2's complement of B.

The output carry is transferred to flip-flop E, where it can be checked to determine the relative magnitudes of the two numbers.

The add-overflow flip-flop AVF holds the overflow bit when A and B are added.

The A register provides other microoperations that may be needed when we specify
the sequence of steps in the algorithm.

Multiplication Algorithm:
In the beginning, the multiplicand is in B and the multiplier in Q, with their corresponding signs in Bs and Qs respectively. The signs are compared, and both As and Qs are set to the sign of the product, since a double-length product will be stored in registers A and Q. Registers A and E are cleared and the sequence counter SC is set to the number of bits of the multiplier. Since an operand must be stored with its sign, one bit of the word is occupied by the sign and the magnitude consists of n-1 bits.

Now the low-order bit of the multiplier in Qn is tested. If it is 1, the multiplicand in B is added to the present partial product in A; if it is 0, nothing is done. Register EAQ is then shifted once to the right to form the new partial product. The sequence counter is decremented by 1 and its new value checked. If it is not equal to zero, the process is repeated and a new partial product is formed. When SC = 0 the process stops.
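The loop just described can be sketched in Python for unsigned magnitudes (sign handling, done through Bs and Qs in the hardware, is omitted; names are illustrative):

```python
# Sketch of the shift-and-add loop above for unsigned magnitudes: B is the
# multiplicand, Q the multiplier; the double-length product accumulates in
# the pair (A, Q), and E holds the carry out of the adder until the combined
# right shift of E, A and Q.

def multiply(B, Q, n):
    A, E, SC = 0, 0, n
    mask = (1 << n) - 1
    while SC != 0:
        if Q & 1:                      # low-order multiplier bit Qn is 1:
            total = A + B              # add multiplicand to partial product
            E = (total >> n) & 1       # carry out goes into E
            A = total & mask
        # shift E, A, Q one place to the right as a single register (shr EAQ)
        Q = (Q >> 1) | ((A & 1) << (n - 1))
        A = (A >> 1) | (E << (n - 1))
        E = 0
        SC -= 1                        # decrement the sequence counter
    return (A << n) | Q                # double-length product held in (A, Q)

print(multiply(23, 19, 8))             # 437
```

Because the product grows in (A, Q) while the multiplier is shifted out of Q, the same register pair serves both purposes, exactly as in the hardware.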

Booth’s algorithm :
Booth algorithm gives a procedure for multiplying binary integers in signed- 2’s
complement representation.

It operates on the fact that strings of 0's in the multiplier require no addition but just shifting, and a string of 1's in the multiplier from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m.

For example, the binary number 001110 (+14) has a string of 1's from 2^3 to 2^1 (k = 3, m = 1). The number can be represented as 2^(k+1) - 2^m = 2^4 - 2^1 = 16 - 2 = 14. Therefore, the multiplication M x 14, where M is the multiplicand and 14 the multiplier, can be done as M x 2^4 - M x 2^1.
Thus the product can be obtained by shifting the binary multiplicand M four times to the left and subtracting M shifted left once.

As in all multiplication schemes, the Booth algorithm requires examination of the multiplier bits and shifting of the partial product. Prior to the shifting, the multiplicand may be added to the partial product, subtracted from the partial product, or left unchanged, according to the following rules:

1. The multiplicand is subtracted from the partial product upon encountering
the first least significant 1 in a string of 1’s in the multiplier.

2. The multiplicand is added to the partial product upon encountering the first
0 in a string of 0’s in the multiplier.

3. The partial product does not change when multiplier bit is identical to
the previous multiplier bit.

The algorithm works for positive or negative multipliers in 2's complement representation. This is because a negative multiplier ends with a string of 1's, and the last operation will be a subtraction of the appropriate weight.

The two bits of the multiplier in Qn and Qn+1 are inspected.

If the two bits are equal to 10, it means that the first 1 in a string of 1 's has been
encountered. This requires a subtraction of the multiplicand from the partial product in
AC.

If the two bits are equal to 01, it means that the first 0 in a string of 0's has been
encountered. This requires the addition of the multiplicand to the partial product in
AC.

When the two bits are equal, the partial product does not change.
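The three rules can be sketched on ordinary Python integers; this operates on values rather than on the AC/QR registers of the hardware description, and the helper name is illustrative:

```python
# Sketch of Booth's rules: the multiplier bits are scanned together with the
# previous bit (the pair Qn, Qn+1), with an extra Q(-1) = 0 appended;
# the multiplicand is subtracted at the first 1 of a string of 1's and added
# at the first 0 after such a string.

def booth_multiply(M, Q, n):
    """Multiply signed integers M and Q, both representable in n bits."""
    qbits = Q & ((1 << n) - 1)         # n-bit 2's complement pattern of Q
    result, prev = 0, 0                # prev plays the role of Q(-1)
    for i in range(n):
        bit = (qbits >> i) & 1
        if (bit, prev) == (1, 0):      # 10: first 1 of a string -> subtract
            result -= M << i
        elif (bit, prev) == (0, 1):    # 01: first 0 after a string -> add
            result += M << i
        prev = bit                     # 00 or 11: shift only, no arithmetic
    return result

print(booth_multiply(7, 3, 8))         # 21
print(booth_multiply(-9, 14, 8))       # -126
print(booth_multiply(7, -3, 8))        # -21
```

The last call shows the point made above: the negative multiplier -3 (11111101 in 8 bits) ends with a string of 1's, so its final operation is a subtraction and no end correction is needed.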

Division Algorithms
Division of two fixed-point binary numbers in signed magnitude representation is performed with
paper and pencil by a process of successive compare, shift and subtract operations. Binary division
is much simpler than decimal division because here the quotient digits are either 0 or 1 and there is
no need to estimate how many times the dividend or partial remainder fits into the divisor. The
division process is described in Figure

The divisor is compared with the five most significant bits of the dividend. Since the 5-bit number is smaller than B, the process is repeated with the 6-bit number. This 6-bit number is greater than B, so we place a 1 for the quotient bit in the sixth position above the dividend. We then shift the divisor once to the right and subtract it from the dividend.
The difference is known as a partial remainder because the division could have stopped
here to obtain a quotient of 1 and a remainder equal to the partial
remainder. Comparing a partial remainder with the divisor continues the process. If the
partial remainder is greater than or equal to the divisor, the quotient bit is equal to
1. The divisor is then shifted right and subtracted from the partial remainder. If the
partial remainder is smaller than the divisor, the quotient bit is 0 and no subtraction is
needed. The divisor is shifted once to the right in any case. Obviously the result gives
both a quotient and a remainder.
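This compare, shift and subtract process can be sketched for unsigned magnitudes (sign handling is omitted; names are illustrative):

```python
# Sketch of the division process above: at each step the divisor is aligned
# under the dividend; the quotient bit is 1 when the partial remainder is
# greater than or equal to the aligned divisor, in which case the divisor
# is subtracted, and 0 otherwise.

def divide(dividend, divisor, n):
    """Return (quotient, remainder) using n compare/shift/subtract steps."""
    if divisor == 0:
        raise ZeroDivisionError("divide by zero")
    quotient, remainder = 0, dividend
    for i in range(n - 1, -1, -1):     # most significant quotient bit first
        shifted = divisor << i         # divisor aligned under the dividend
        quotient <<= 1
        if remainder >= shifted:       # quotient bit 1: subtract the divisor
            remainder -= shifted
            quotient |= 1
        # quotient bit 0: no subtraction; the divisor shifts right regardless
    return quotient, remainder

print(divide(485, 27, 8))              # (17, 26)
```

As the text says, the result gives both a quotient and a remainder, and no trial estimation is needed because each quotient digit is simply 0 or 1.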

Hardware Implementation for Signed-Magnitude Data

In the hardware implementation for signed-magnitude data in a digital computer, it is convenient to change the process slightly. Instead of shifting the divisor to the right, the dividend, or partial remainder, is shifted to the left, thus leaving the two numbers in the required relative position. Subtraction is achieved by adding A to the 2's complement of B. The end carry gives the information about the relative magnitudes.

The hardware required is identical to that of multiplication. Register EAQ is now shifted to the left with 0 inserted into Qn, and the previous value of E is lost. The example given in Figure 4.10 illustrates the proposed division process. The divisor is stored in the B register and the double-length dividend is stored in registers A and Q. The dividend is shifted to the left and the divisor is subtracted by adding its 2's complement value. The end carry E then gives information about the relative magnitudes of the two numbers.
Hardware Implementation for Signed-Magnitude Data

Algorithm:

Example of Binary Division with Digital Hardware

Floating-point Arithmetic operations :

In many high-level programming languages we have a facility for specifying floating-point
numbers. The most common way is by a real declaration statement. High level programming
languages must have a provision for handling floating-point arithmetic operations. The operations
are generally built in the internal hardware. If no hardware is available, the compiler must be
designed with a package of floating-point software subroutines. Although the hardware method is more expensive, it is much more efficient than the software method. Therefore, floating-point hardware is included in most computers and is omitted only in very small ones.

Basic Considerations :

There are two parts to a floating-point number in a computer: a mantissa m and an exponent e. The two parts represent a number obtained by multiplying m times a radix r raised to the value of e. Thus

m x r^e

The mantissa may be a fraction or an integer. The position of the radix point and the value of the radix r are not included in the registers. For example, assume a fraction representation and a radix of 10. The decimal number 537.25 is represented in a register with m = 53725 and e = 3 and is interpreted to represent the floating-point number

.53725 x 10^3

A floating-point number is said to be normalized if the most significant digit of the mantissa is nonzero, so that the mantissa contains the maximum possible number of significant digits. We cannot normalize a zero because it does not have a nonzero digit; zero is represented in floating point by all 0's in the mantissa and exponent.
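As an illustration of this representation and of normalization (decimal radix, 5-digit fractional mantissa as in the 537.25 example; helper names are hypothetical):

```python
# Sketch of the (mantissa, exponent) representation described above, with
# radix r = 10 and a 5-digit fractional mantissa.

R = 10          # radix
DIGITS = 5      # number of mantissa digits held in the register

def normalize(m, e):
    """Shift the mantissa left until its most significant digit is nonzero."""
    if m == 0:
        return 0, 0                    # zero cannot be normalized
    while m < R ** (DIGITS - 1):       # leading digit is zero: shift left
        m *= R                         # and decrement the exponent
        e -= 1
    return m, e

def value(m, e):
    # the mantissa is a fraction, so the number is (m / r^DIGITS) * r^e
    return m / R ** DIGITS * R ** e

print(normalize(53725, 3))             # (53725, 3): already normalized
print(normalize(537, 5))               # (53700, 3): shifted left twice
print(round(value(53725, 3), 2))       # 537.25
```

Note that (537, 5) and (53700, 3) denote the same value; normalization just fixes a canonical form with the maximum number of significant digits.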

Floating-point representation increases the range of numbers for a given register. Consider a computer with 48-bit words. Since one bit must be reserved for the sign, the range of fixed-point integer numbers will be +-(2^47 - 1), which is approximately +-10^14. The 48 bits can be used to represent a floating-point number with 36 bits for the mantissa and 12 bits for the exponent. Assuming fraction representation for the mantissa and taking the two sign bits into consideration, the range of numbers that can be represented is

+-(1 - 2^-35) x 2^2047

This number is derived from a fraction that contains 35 1's and an exponent of 11 bits (excluding its sign), because 2^11 - 1 = 2047. The largest number that can be accommodated is approximately 10^615. The mantissa can accommodate 35 bits (excluding the sign), and if considered as an integer it can store a number as large as 2^35 - 1. This is approximately equal to 10^10, which is equivalent to a decimal number of 10 digits.

Computers with shorter word lengths use two or more words to represent a floating-point number. An 8-bit microcomputer may use four words to represent one floating-point number: one 8-bit word is reserved for the exponent, and the 24 bits of the other three words are used for the mantissa.

Arithmetic operations with floating-point numbers are more complicated than with fixed-point
numbers. Their execution also takes longer time and requires more complex hardware. Adding or
subtracting two numbers requires first an alignment of the radix point since the exponent parts must
be made equal before adding or subtracting the mantissas. We do this alignment by shifting one
mantissa while its exponent is adjusted until it becomes equal to the other exponent. Consider the
sum of the following floating-point numbers:

.5372400 x 10^2
+ .1580000 x 10^-1

Floating-point multiplication and division do not need an alignment of the mantissas. The product can be formed by multiplying the two mantissas and adding the exponents; division is performed by dividing the mantissas and subtracting the exponents.

The operations done with the mantissas are the same as in fixed-point numbers, so the two can share the same registers and circuits. The operations performed with the exponents are compare and increment (for aligning the mantissas), add and subtract (for multiplication and division), and decrement (to normalize the result). The exponent may be represented in any one of the three representations: signed magnitude, signed 2's complement, or signed 1's complement.

Biased exponents have the advantage that they contain only positive numbers. Now it becomes
simpler to compare their relative magnitude without bothering about their signs. Another advantage
is that the smallest possible biased exponent contains all zeros. The floating-point representation of
zero is then a zero mantissa and the smallest possible exponent.

Register Configuration

The register configuration for floating-point operations is shown in Figure 4.13. As a rule, the same registers and adder used for fixed-point arithmetic are used for processing the mantissas; the difference lies in the way the exponents are handled.

Three registers are used: BR, AC, and QR. Each register is subdivided into two parts. The mantissa part has the same uppercase letter symbols as in fixed-point representation, and the exponent part uses the corresponding lowercase letter symbol.


FLOATING POINT ARITHMETIC OPERATIONS

F = m x r^e
where m: mantissa, r: radix, e: exponent

Registers for Floating Point Arithmetic

[Figure: registers BR (Bs, B, b), AC (As, A1, A, a) and QR (Qs, Q, q); a parallel adder with carry E for the mantissas, and a separate parallel adder and comparator for the exponents.]
Figure 4.13: Registers for Floating Point arithmetic operations

Assume that each floating-point number has a mantissa in signed-magnitude representation and a biased exponent. Thus the AC has a mantissa whose sign is in As and a magnitude that is in A. The diagram shows the most significant bit of A, labeled A1. The bit in this position must be a 1 to normalize the number. Note that the symbol AC represents the entire register, that is, the concatenation of As, A, and a.

In a similar way, register BR is subdivided into Bs, B, and b, and QR into Qs, Q, and q. A parallel adder adds the two mantissas and loads the sum into A and the carry into E. A separate parallel adder can be used for the exponents. The exponents do not have a distinct sign bit because they are biased and are represented as positive quantities. It is assumed that the floating-point numbers are so large that the chance of an exponent overflow is very remote, so exponent overflow will be neglected. The exponents are also connected to a magnitude comparator that provides three binary outputs to indicate their relative magnitude.

The numbers in the mantissa will be taken as fractions, so the binary point is assumed to reside to the left of the magnitude part. Integer representation for floating point causes certain scaling problems during multiplication and division. To avoid these problems, we adopt a fraction representation.

The numbers in the registers should initially be normalized. After each arithmetic operation, the
result will be normalized. Thus all floating-point operands are always normalized.

Addition and Subtraction of Floating Point Numbers
During addition or subtraction, the two floating-point operands are kept in AC and BR. The sum
or difference is formed in the AC. The algorithm can be divided into four consecutive parts:

1. Check for zeros.

2. Align the mantissas.

3. Add or subtract the mantissas

4. Normalize the result

A floating-point number that is 0 cannot be normalized. If such a number is used in a computation, the result may also be zero. Instead of checking for zeros during the normalization process, we check for zeros at the beginning and terminate the process if necessary. The alignment of the mantissas must be carried out prior to their addition or subtraction. After the mantissas are added or subtracted, the result may be un-normalized; the normalization procedure ensures that the result is normalized before it is transferred to memory.

If the magnitudes were subtracted, the result may be zero or may have an underflow. If the mantissa is equal to zero, the entire floating-point number in the AC is cleared to zero. Otherwise, the mantissa must have at least one bit that is equal to 1. The mantissa has an underflow if the most significant bit, in position A1, is 0. In that case the mantissa is shifted left and the exponent decremented. The bit in A1 is checked again and the process is repeated until A1 = 1, at which point the mantissa is normalized and the operation is completed.
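Under simplifying assumptions (decimal radix, 5-digit signed mantissas standing in for the separate sign flip-flops; names are illustrative), the four parts can be sketched as:

```python
# Sketch of the four-part algorithm above for operands (m, e) whose value
# is (m / 10^5) * 10^e; signed Python ints replace the sign flip-flops, and
# int(x / R) truncates toward zero like a right shift of the magnitude.

DIGITS, R = 5, 10
LIMIT = R ** DIGITS                    # mantissa magnitude must stay below this

def fp_add(a, b):
    (ma, ea), (mb, eb) = a, b
    if ma == 0:                        # 1. check for zeros
        return (mb, eb)
    if mb == 0:
        return (ma, ea)
    while ea < eb:                     # 2. align: shift the mantissa with
        ma = int(ma / R); ea += 1      #    the smaller exponent to the right
    while eb < ea:
        mb = int(mb / R); eb += 1
    m, e = ma + mb, ea                 # 3. add (a subtraction if signs differ)
    if abs(m) >= LIMIT:                # mantissa overflow: shift right once
        m = int(m / R); e += 1         # and increment the exponent
    if m == 0:
        return (0, 0)                  # a zero result cannot be normalized
    while abs(m) < LIMIT // R:         # 4. normalize: shift left until the
        m *= R; e -= 1                 #    leading digit is nonzero
    return (m, e)

# .53724 x 10^2 + .15800 x 10^-1  ->  .53739 x 10^2 (alignment truncates)
print(fp_add((53724, 2), (15800, -1)))   # (53739, 2)
print(fp_add((90000, 1), (90000, 1)))    # (18000, 2), i.e. 9 + 9 = 18
```

The second call shows the mantissa-overflow path: the sum 180000 exceeds five digits, so the mantissa is shifted right and the exponent incremented.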

Algorithm for Floating Point Addition and Subtraction

Multiplication:
[Flowchart fragment for floating-point multiplication: Qs ← As ⊕ Bs sets the sign of the product, Q ← 0 and SC ← n - 1 initialize the registers, and EA ← A + B' + 1 performs subtraction by 2's complement.]

MEMORY ORGANIZATION:
Memory Hierarchy, Main Memory, Auxiliary Memory, Associative Memory, Cache Memory, Virtual Memory

Memory Hierarchy :
[Figure: the memory hierarchy in a computer system.]
Main Memory
The main memory is the central storage unit in a computer system.

Primary memory holds only those data and instructions on which the computer is currently working.

It has limited capacity, and data is lost when power is switched off. It is generally made up of semiconductor devices. These memories are not as fast as registers. The data and instructions required to be processed reside in main memory. It is divided into two subcategories: RAM and ROM.

Memory address map of RAM and ROM

The designer of a computer system must calculate the amount of memory required for the particular application and assign it to either RAM or ROM.

The interconnection between memory and processor is then established from knowledge of the size of memory needed and the types of RAM and ROM chips available.

The addressing of memory can be established by means of a table that specifies the memory address assigned to each chip.

The table, called a memory address map, is a pictorial representation of the assigned address space for each chip in the system, shown in Table 9.1. To demonstrate with a particular example, assume that a computer system needs 512 bytes of RAM and 512 bytes of ROM.

The RAM and ROM chips to be used are specified in figure 9.1 and figure 9.2.
Memory address map of RAM and ROM

Figure 9.1: Typical RAM chip

Figure 9.2: Typical ROM chip

The component column specifies whether a RAM or a ROM chip is used.

The hexadecimal address column assigns a range of hexadecimal equivalent addresses for each chip.

The address bus lines are listed in the third column.

Although there are 16 lines in the address bus, the table shows only 10
lines because the other 6 are not used in this example and are assumed
to be zero.

The small x's under the address bus lines designate those lines that must
be connected to the address inputs in each chip.

The RAM chips have 128 bytes and need seven address lines. The ROM
chip has 512 bytes and needs 9 address lines.

The x's are always assigned to the low-order bus lines: lines 1 through 7 for the RAM and lines 1 through 9 for the ROM.

It is now necessary to distinguish between the four RAM chips by assigning to each a different address. For this particular example we choose bus lines 8 and 9 to represent the four distinct binary combinations.

The table clearly shows that the nine low-order bus lines constitute a memory space for RAM equal to 2^9 = 512 bytes.
The distinction between a RAM and ROM address is done with another
bus line. Here we choose line 10 for this purpose.

When line 10 is 0, the CPU selects a RAM, and when this line is equal to 1, it selects the ROM
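The decoding just described can be sketched as follows (hypothetical function name; addresses taken as 10-bit integers):

```python
# Sketch of the address decoding above: line 10 distinguishes RAM from ROM,
# lines 8 and 9 select one of the four 128-byte RAM chips, and the
# low-order lines address the byte within the selected chip.

def decode(address):
    """Map a 10-bit address to (component, chip number, byte offset)."""
    line10 = (address >> 9) & 1        # bus line 10 (the tenth address bit)
    if line10 == 0:                    # RAM: four chips of 128 bytes each
        chip = (address >> 7) & 0b11   # bus lines 8 and 9 select the chip
        offset = address & 0x7F        # lines 1 through 7: byte in the chip
        return ("RAM", chip, offset)
    offset = address & 0x1FF           # lines 1 through 9: byte in the ROM
    return ("ROM", 0, offset)

print(decode(0x000))                   # ('RAM', 0, 0)
print(decode(0x1FF))                   # ('RAM', 3, 127)
print(decode(0x200))                   # ('ROM', 0, 0)
```

This mirrors the address map: hexadecimal 0000-01FF selects the RAM chips and 0200-03FF selects the ROM.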

Memory connections to CPU :


- RAM and ROM chips are connected to a CPU through the data and address buses

- The low-order lines in the address bus select the byte within the chips and other lines
in the address bus select a particular chip through its chip select inputs.

CONNECTION OF MEMORY TO CPU

[Figure: the CPU's address bus drives a decoder on lines 8 and 9 whose four outputs select one of the four 128 x 8 RAM chips through their chip-select inputs; line 10 distinguishes RAM from ROM; lines 1-7 address a byte within each RAM chip; the RD and WR control lines and the data bus are common to all chips.]

Auxiliary Memory :
 Magnetic Tape: Magnetic tapes are used with large computers, such as mainframe computers, where large volumes of data are stored for a long time. In PCs, tapes can also be used in the form of cassettes. The cost of storing data on tape is low. Tapes consist of magnetic material that stores data permanently. The tape can be plastic film 12.5 mm to 25 mm wide and 500 m to 1200 m long, coated with magnetic material. The tape deck is connected to the central processor, and information is fed into or read from the tape through the processor. It is similar to a cassette tape recorder.
Magnetic tape is an information storage medium consisting of a magnetisable coating on a
thin plastic strip. Nearly all recording tape is of this type, whether used for video with a
video cassette recorder, audio storage (reel-to-reel tape, compact audio cassette, digital audio
tape (DAT), digital linear tape (DLT) and other formats including 8-track cartridges) or
general purpose digital data storage using a computer (specialized tape formats, as well as the
above- mentioned compact audio cassette, used with home computers of the 1980s, and DAT,
used for backup in workstation installations of the 1990s).

 Magneto-optical and optical tape storage products have been developed using
many of the same concepts as magnetic storage, but have achieved little
commercial success.

 Magnetic Disk: You might have seen the gramophone record, which is circular like
a disk and coated with magnetic material. Magnetic disks used in computer are
made on the same principle. It rotates with very high speed inside the computer
drive. Data is stored on both surfaces of the disk. Magnetic disks are the most
popular direct-access storage devices. Each disk consists of a number of invisible
concentric circles called tracks. Information is recorded on the tracks of a disk
surface in the form of tiny magnetic spots. The presence of a magnetic spot
represents a one bit and its absence represents a zero bit. The information stored
on a disk can be read many times without affecting the stored data, so the read
operation is non-destructive. But to write new data, the existing data is erased from
the disk and the new data is recorded. For example: the floppy disk.

The primary computer storage device. Like tape, it is magnetically recorded and can be re-
recorded over and over. Disks are rotating platters with a mechanical arm that moves a
read/write head between the outer and inner edges of the platter's surface. Finding a
location can take from as long as one second on a floppy disk to as little as a couple of
milliseconds on a fast hard disk. See hard disk for more details.

The disk surface is divided into concentric tracks (circles within circles). The thinner the tracks,
the more storage. The data bits are recorded as tiny magnetic spots on the tracks. The smaller
the spot, the more bits per inch and the greater the storage.

Sectors
Tracks are further divided into sectors, which hold a block of data that is read or written at one
time; for example, READ SECTOR 782, WRITE SECTOR 5448. In order to update the disk,
one or more sectors are read into the computer, changed and written back to disk. The operating
system figures out how to fit data into these fixed spaces. Modern disks have more sectors in
the outer tracks than the inner ones because the outer radius of the platter is greater than the
inner radius.

Block diagram of Magnetic Disk


Optical Disk: With every new application and software there is greater demand for memory
capacity. It is the necessity to store large volume of data that has led to the development of
optical disk storage medium. Optical disks can be divided into the following categories:

1. Compact Disk/Read Only Memory (CD-ROM)


2. Write Once, Read Many (WORM)
3. Erasable Optical Disk

Associative Memory: Content Addressable Memory (CAM)

The time required to find an item stored in memory can be reduced
considerably if stored data can be identified for access by the content of
the data itself rather than by an address.

A memory unit accessed by content is called an associative memory or
content addressable memory (CAM).

This type of memory is accessed simultaneously and in parallel on the
basis of data content rather than by specific address or location.

The block diagram of an associative memory is shown in figure 9.3.

It consists of a memory array and logic for m words with n bits per word.
The argument register A and key register K each have n bits,
one for each bit of a word. The match register M has m bits,
one for each memory word.
Each word in memory is compared in parallel with the content of the argument
register.

The words that match the bits of the argument register set a
corresponding bit in the match register.

After the matching process, those bits in the match register that have been
set indicate the fact that their corresponding words have been matched.

Reading is accomplished by a sequential access to memory for those
words whose corresponding bits in the match register have been set.

Hardware Organization
The key register provides a mask for choosing a particular field or key
in the argument word. The entire argument is compared with each
memory word if the key register contains all 1's.

Otherwise, only those bits in the argument that have 1's in their
corresponding positions of the key register are compared.

Thus the key provides a mask or identifying piece of information
which specifies how the reference to memory is made.

To illustrate with a numerical example, suppose that the argument
register A and the key register K have the bit configuration shown below.
Only the three leftmost bits of A are compared with memory words
because K has 1's in these positions.
A      101 111100

K      111 000000

Word 1 100 111100  no match

Word 2 101 000001  match

Word 2 matches the unmasked argument field because the three
leftmost bits of the argument and the word are equal.
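The masked comparison can be mimicked in a few lines of Python, using the bit patterns from the example above (the function name is illustrative):

```python
def cam_match(argument, key, words):
    """Return the match-register bits: 1 where a word agrees with the
    argument in every bit position that the key register masks in."""
    # Only bit positions where the key has a 1 take part in the compare:
    return [1 if (word ^ argument) & key == 0 else 0 for word in words]

A = 0b101111100   # argument register
K = 0b111000000   # key register: compare only the three leftmost bits
words = [0b100111100,   # word 1 -> no match (leftmost bits 100 != 101)
         0b101000001]   # word 2 -> match    (leftmost bits 101 == 101)
print(cam_match(A, K, words))   # [0, 1]
```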

Figure 9.4: Associative memory of m words, n cells per word.
The relation between the memory array and external registers in an
associative memory is shown in figure 9.4.

The cells in the array are marked by the letter C with two subscripts.

The first subscript gives the word number and the second specifies the
bit position in the word. Thus cell Cij is the cell for bit j in word i.

A bit Aj in the argument register is compared with all the bits in column
j of the array provided that Kj =1.

This is done for all columns j = 1, 2... n.

If a match occurs between all the unmasked bits of the argument
and the bits in word i, the corresponding bit Mi in the match
register is set to 1.

If one or more unmasked bits of the argument and the word do not match,
Mi is cleared to 0.

Cache Memory:
Cache is a fast, small-capacity memory that holds the information
most likely to be accessed.
The basic operation of the cache is: when the CPU needs to access
memory, the cache is examined first.

If the word is found in the cache, it is read from the fast memory. If the
word addressed by the CPU is not found in the cache, the main memory
is accessed to read the word.

The transformation of data from main memory to cache memory is
referred to as a mapping process.

Associative mapping
Consider the main memory can store 32K words of 12 bits each.
The cache is capable of storing 512 of these words at any given time.
For every word stored in cache, there is a duplicate copy in main
memory. The CPU communicates with both memories.
It first sends a 15-bit address to the cache. If there is a hit, the CPU accepts
the 12-bit data from the cache; if there is a miss, the CPU reads the word
from main memory and the word is then transferred to the cache.

Figure 9.5: Associative mapping cache (all numbers in octal)

The associative memory stores both the address and content (data) of
the memory word. This permits any location in cache to store any word
from main memory.

The figure 9.5 shows three words presently stored in the cache. The address
value of 15 bits is shown as a five-digit octal number and its corresponding
12-bit word is shown as a four-digit octal number.

A CPU address of 15 bits is placed in the argument register and
the associative memory is searched for a matching address.

If the address is found, the corresponding 12-bit data is read and sent
to the CPU. If no match occurs, the main memory is accessed for the word.
The address-data pair is then transferred to the associative cache memory.

If the cache is full, an address-data pair must be displaced to make room
for a pair that is needed and not presently in the cache.

This constitutes a first-in first-out (FIFO) replacement policy.
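A fully associative cache with the FIFO policy described above behaves like the sketch below (addresses and data values are arbitrary; an OrderedDict stands in for the address-data pairs):

```python
from collections import OrderedDict

class AssociativeCache:
    """Fully associative cache with FIFO replacement (a sketch).

    Any main-memory word may occupy any cache slot, so lookup is by
    address content, mirroring the associative search in the text."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # address -> data, insertion order

    def read(self, address, main_memory):
        if address in self.store:           # hit: word read from fast memory
            return self.store[address], True
        data = main_memory[address]         # miss: word read from main memory
        if len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # FIFO: displace the oldest pair
        self.store[address] = data          # transfer the new pair into cache
        return data, False

memory = {0: 111, 1: 222, 2: 333}
cache = AssociativeCache(capacity=2)
print(cache.read(0, memory))   # (111, False) -- miss, loaded into cache
print(cache.read(0, memory))   # (111, True)  -- hit
```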

Direct mapping in the organization of cache memory:
The CPU address of 15 bits is divided into two fields.

The nine least significant bits constitute the index field and the
remaining six bits form the tag field.

The figure 9.6 shows that main memory needs an address that includes both the
tag and the index.

Figure 9.6: Addressing relationships between main and cache memories


The number of bits in the index field is equal to the number of address bits
required to access the cache memory.
The internal organization of the words in the cache memory is as shown in figure
9.7.

Figure 9.7: Direct mapping cache organization

Each word in cache consists of the data word and its associated tag.

When a new word is first brought into the cache, the tag bits are stored alongside
the data bits.

When the CPU generates a memory request the index field is used for the
address to access the cache.

The tag field of the CPU address is compared with the tag in the word read from
the cache.

If the two tags match, there is a hit and the desired data word is in cache.
If there is no match, there is a miss and the required word is
read from main memory. It is then stored in the cache
together with the new tag, replacing the previous value.

The word at address zero is presently stored in the cache (index = 000, tag = 00,
data = 1220).
Suppose that the CPU now wants to access the word at address 02000.

The index address is 000, so it is used to access the cache. The two

tags are then compared. The cache tag is 00 but the address tag is

02, which does not produce a match.

Therefore, the main memory is accessed and the data word 5670 is

transferred to the CPU. The cache word at index address 000 is

then replaced with a tag of 02 and data of 5670.

The disadvantage of direct mapping is that two words with the same
index in their address but with different tag values cannot reside in cache
memory at the same time.
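The index/tag split and replacement behavior of direct mapping can be sketched as follows, reusing the octal example above (class and function names are ours):

```python
def split_address(addr15):
    """Split a 15-bit CPU address into (tag, index): the nine least
    significant bits form the index, the remaining six bits the tag."""
    return addr15 >> 9, addr15 & 0x1FF

class DirectMappedCache:
    def __init__(self, lines=512):
        self.tags = [None] * lines
        self.data = [None] * lines

    def read(self, addr, main_memory):
        tag, index = split_address(addr)
        if self.tags[index] == tag:      # tags match: hit
            return self.data[index], True
        word = main_memory[addr]         # miss: access main memory
        self.tags[index] = tag           # replace the previous tag/data
        self.data[index] = word
        return word, False

mem = {0o00000: 0o1220, 0o02000: 0o5670}
c = DirectMappedCache()
c.read(0o00000, mem)                 # loads tag 00, data 1220 at index 000
word, hit = c.read(0o02000, mem)     # same index 000, tag 02: miss, replaces
print(oct(word), hit)                # 0o5670 False
```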
In set-associative mapping, each word of cache can store two or more
words of memory under the same index address. The comparison logic is
done by an associative search of the tags in the set, similar to an
associative memory search: thus the name "set-associative".

When a miss occurs in a set-associative cache and the set is full, it is
necessary to replace one of the tag-data items with a new value.

The most common replacement algorithms used are: random replacement,
first-in first-out (FIFO), and least recently used (LRU).

Write-through and write-back cache write methods
Write Through
The simplest and most commonly used procedure is to update main
memory with every memory write operation.

The cache memory is updated in parallel if it contains the word at the
specified address. This is called the write-through method.

This method has the advantage that main memory always contains the

same data as the cache. This characteristic is important in systems with

direct memory access transfers.

It ensures that the data residing in main memory are valid at all
times so that an I/O device communicating through DMA would
receive the most recent updated data.

Write-Back (Copy-Back)
The second procedure is called the write-back method.
In this method only the cache location is updated during a write operation.

The location is then marked by a flag so that later when the word is
removed from the cache it is copied into main memory.

The reason for the write-back method is that during the time a word
resides in the cache, it may be updated several times.

However, as long as the word remains in the cache, it does not matter
whether the copy in main memory is out of date, since requests for the
word are filled from the cache.

It is only when the word is displaced from the cache that an accurate copy
need be rewritten into main memory.
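The two policies can be contrasted in a small sketch; a dirty flag plays the role of the mark described above (the class names and the simplified single-table design are illustrative, not from the text):

```python
class WriteThroughCache:
    """Every write updates main memory immediately."""
    def __init__(self, memory):
        self.memory = memory
        self.slot = {}                  # address -> data

    def write(self, addr, data):
        self.slot[addr] = data
        self.memory[addr] = data        # main memory is always consistent

class WriteBackCache:
    """Only the cache is updated; a dirty flag defers the copy-back."""
    def __init__(self, memory):
        self.memory = memory
        self.slot = {}                  # address -> (data, dirty)

    def write(self, addr, data):
        self.slot[addr] = (data, True)  # mark dirty; main memory is stale

    def evict(self, addr):
        data, dirty = self.slot.pop(addr)
        if dirty:                       # copy back only when displaced
            self.memory[addr] = data

mem = {5: 0}
wb = WriteBackCache(mem)
wb.write(5, 99)
print(mem[5])   # 0  -- main memory still holds the stale value
wb.evict(5)
print(mem[5])   # 99 -- updated only at displacement time
```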

Virtual Memory

Virtual memory is used to give programmers the illusion that they have a
very large memory at their disposal, even though the computer actually
has a relatively small main memory.

A virtual memory system provides a mechanism for translating program-
generated addresses into correct main memory locations.

Address space

An address used by a programmer will be called a virtual address, and the set of
such addresses is known as address space.

Memory space
An address in main memory is called a location or physical address. The set of
such locations is called the memory space.

[Figure 9.9: Relation between address space and memory space. Auxiliary memory
(address space of 1024K = 2^20 words) holds Program 1, Data 1,1, Data 1,2,
Program 2, and Data 2,1; main memory provides a memory space of 32K = 2^15 words.]


As an illustration, consider a computer with a main-memory capacity of
32K words (K = 1024). Fifteen bits are needed to specify a physical
address in memory since 32K = 2^15.
Suppose that the computer has available auxiliary memory for storing
2^20 = 1024K words.

Thus auxiliary memory has a capacity for storing information
equivalent to the capacity of 32 main memories.

Denoting the address space by N and the memory space by M, we then
have for this example N = 1024K and M = 32K.

In a multiprogramming computer system, programs and data are
transferred to and from auxiliary memory and main memory based on
demands imposed by the CPU.

Suppose that program 1 is currently being executed in the CPU. Program
1 and a portion of its associated data are moved from auxiliary memory
into main memory as shown in figure 9.9.

Portions of programs and data need not be in contiguous locations in
memory since information is being moved in and out, and empty spaces
may be available in scattered locations in memory.

In our example, the address field of an instruction code will consist of
20 bits but physical memory addresses must be specified with only 15
bits.

Thus the CPU will reference instructions and data with a 20-bit address, but
the information at this address must be taken from physical memory, because
access to auxiliary storage for individual words would be too slow.

Address mapping using pages.


The table implementation of the address mapping is simplified if the
information in the address space and the memory space are each
divided into groups of fixed size.

The physical memory is broken down into groups of equal size called
blocks, which may range from 64 to 4096 words each.

The term page refers to groups of address space of the same size.
Consider a computer with an address space of 8K and a memory space of 4K.

If we split each into groups of 1K words, we obtain eight pages and four
blocks, as shown in figure 9.10.

At any given time, up to four pages of address space may reside in
main memory in any one of the four blocks.

Figure 9.10 Address and Memory

Figure 9.11: Memory table in paged system

The organization of the memory mapping table in a paged system is
shown in figure 9.11.

The memory-page table consists of eight words, one for each page.

The address in the page table denotes the page number and the content
of the word gives the block number where that page is stored in main
memory.

The table shows that pages 1, 2, 5, and 6 are now available in main
memory in blocks 3, 0, 1, and 2, respectively.
A presence bit in each location indicates whether the page has been
transferred from auxiliary memory into main memory.

A 0 in the presence bit indicates that this page is not available in main memory.
The CPU references a word in memory with a virtual address of 13 bits.

The three high-order bits of the virtual address specify a page number
and also an address for the memory-page table.

The content of the word in the memory page table at the page number
address is read out into the memory table buffer register.

If the presence bit is a 1, the block number thus read is transferred to the
two high- order bits of the main memory address register.

The line number from the virtual address is transferred into the 10 low-
order bits of the memory address register.

A read signal to main memory transfers the content of the word to the
main memory buffer register ready to be used by the CPU.

If the presence bit in the word read from the page table is 0, it signifies
that the content of the word referenced by the virtual address does not
reside in main memory.
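The translation steps above can be sketched in Python for the 13-bit example, with pages 1, 2, 5, and 6 resident in blocks 3, 0, 1, and 2 (the function name and the table representation are ours):

```python
def translate(virtual_addr, page_table):
    """Map a 13-bit virtual address to a physical address.

    The three high-order bits select the page-table entry; if its
    presence bit is 1, the block number is placed in the two high-order
    bits of the physical address and the 10-bit line number fills the
    low-order bits. A presence bit of 0 means a page fault."""
    page = virtual_addr >> 10           # three high-order bits: page number
    line = virtual_addr & 0x3FF         # ten low-order bits: line number
    block, present = page_table[page]
    if not present:
        raise LookupError("page fault: page %d not in main memory" % page)
    return (block << 10) | line

# pages 1, 2, 5, 6 reside in blocks 3, 0, 1, 2 (presence bit = 1), as in the text
table = {0: (None, 0), 1: (3, 1), 2: (0, 1), 3: (None, 0),
         4: (None, 0), 5: (1, 1), 6: (2, 1), 7: (None, 0)}
print(translate((1 << 10) | 5, table))   # page 1 -> block 3: 3*1024 + 5 = 3077
```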

Segment
A segment is a set of logically related instructions or data elements
associated with a given name.

Logical address
The address generated by segmented program is called a logical address.

Segmented page mapping

The length of each segment is allowed to grow and contract according to
the needs of the program being executed. Consider the logical address
shown in figure 9.12.

Figure 9.12: Logical to physical address mapping

The logical address is partitioned into three fields.

The segment field specifies a segment number.

The page field specifies the page within the segment and word field
gives specific word within the page.

A page field of k bits can specify up to 2^k pages.

A segment number may be associated with just one page or with as
many as 2^k pages.

Thus the length of a segment would vary according to the number of
pages that are assigned to it.

The mapping of the logical address into a physical address is done by
means of two tables, as shown in figure 9.12.

The segment number of the logical address specifies the address for the
segment table.
The entry in the segment table is a pointer address for a page table base.
The page table base is added to the page number given in the logical address.

The sum produces a pointer address to an entry in the page table.
The concatenation of the block field with the word field produces the
final physical mapped address.

The two mapping tables may be stored in two separate small memories
or in main memory.

In either case, memory reference from the CPU will require three
accesses to memory: one from the segment table, one from the page
table and the third from main memory.

This would slow the system significantly when compared to a
conventional system that requires only one reference to memory.
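The two-table lookup can be sketched as follows; the table contents here are hypothetical, chosen only to show the segment-table, page-table, and concatenation steps described above:

```python
def map_logical(segment, page, word, segment_table, page_tables, word_bits=8):
    """Map a (segment, page, word) logical address to a physical address.

    The segment-table entry is a pointer to a page-table base; the base
    plus the page number selects a page-table entry holding a block
    number, which is concatenated with the word field (word_bits is an
    assumed field width, not taken from the text)."""
    base = segment_table[segment]        # pointer address for a page table
    block = page_tables[base + page]     # entry gives the block number
    return (block << word_bits) | word   # concatenate block and word fields

# hypothetical tables: segment 2's page table starts at entry 4
seg_table = {2: 4}
pg_table = {4: 7, 5: 9}                  # page-table entry -> block number
print(map_logical(2, 1, 0x1A, seg_table, pg_table))  # block 9: 9*256 + 26 = 2330
```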

