Computer Organization & Architecture

Chapter One

Digital Logic and Digital Systems

Computer Architecture is a specification detailing how a set of software and hardware
technology standards interact to form a computer system or platform. In short, computer
architecture refers to how a computer system is designed and what technologies it is compatible
with.

Computer architecture is linked to the art of determining the needs of the user, system, and
technology, and creating a logical design and standards based on those requirements.

It is concerned with the structure and behavior of the computer as seen by the user.

It includes the information formats, the instruction set and techniques for addressing memory.

Computer Organization: INCLUDES ONLY THE HARDWARE PART (COMPONENTS)

It is concerned with the way the hardware components operate and the way they are connected
together to form the computer system.

The various components are assumed to be in place, and the task is to investigate the
organizational structure to verify that the computer parts operate as intended.

Computer Design:

is concerned with the hardware design of the computer.

Once the computer specification is formulated, it is the task of the designer to develop hardware
for the system.

It is concerned with the determination of what hardware should be used and how the parts should
be connected.

This aspect of computer hardware is sometimes referred to as computer implementation.


Digital Logic Circuits

Digital logic is the representation of signals and sequences of a digital circuit through numbers. It
is the basis for digital computing and provides a fundamental understanding of how circuits and
hardware communicate within a computer.

Digital logic is the foundation for digital computers


Digital logic is typically embedded into most electronic devices, including calculators,
computers, video games and watches.

Digital Computers

Digital computers use the binary number system, which has two digits, 0 and 1

A binary digit is called a bit.

Bits are grouped together as bytes and words to form some type of representation within the
computer.

A sequence of instructions for the computer is known as a program.

The block diagram of a digital computer is shown in Fig. 1.1.

The hardware of the computer is usually divided into three major parts.

The Central processing Unit (CPU): contains an arithmetic and logic unit for manipulating
data, a number of registers for storing data, and control circuits for fetching and executing
instructions.
The memory of a computer: contains storage for instructions and data. It is called Random
Access Memory (RAM) because the CPU can access any location in memory at random and retrieve the
binary information within a fixed interval of time.

The input and output processor: contains electronic circuits for communicating with and controlling
the transfer of information between the computer and the outside world.

The input and output devices connected to the computer include keyboards, printers, terminals,
magnetic disk drives, and other communication devices.

Digital logic circuits can be broken down into two subcategories: combinational and sequential.


Combinational logic changes “instantly”- the output of the circuit responds as soon as the input
changes (with some delay, of course, since the propagation of the signal through the circuit
elements takes a little time). Sequential circuits have a clock signal, and changes propagate
through stages of the circuit on edges of the clock.

Typically, a sequential circuit will be built up of blocks of combinational logic separated by
memory elements that are activated by a clock signal.
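To make the distinction concrete, here is a minimal Python sketch (not from the original notes; the names are illustrative): combinational logic is a pure function of its present inputs, while a sequential element only changes its stored state on a clock edge.

# Illustrative sketch: combinational logic versus a clocked sequential element.
def combinational(a, b, c):
    # Combinational: the output depends only on the present inputs.
    return (a & b) | c

class DFlipFlop:
    # Sequential: the stored state changes only on a clock edge.
    def __init__(self):
        self.q = 0                      # current state

    def clock_edge(self, d):
        self.q = d                      # capture the input on the edge
        return self.q

if __name__ == "__main__":
    print(combinational(1, 0, 0))       # 0, available immediately from the inputs
    ff = DFlipFlop()
    print(ff.q)                         # 0 before any clock edge
    ff.clock_edge(1)
    print(ff.q)                         # 1 only after the edge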

Fundamental building blocks of digital logic

 Logic gates
 Flip-flops
 Counters,
 Registers,

Logic gates

They are switches inside a computer

Logic gates are the switches that turn ON or OFF depending on what the user is doing!

They are the building blocks for how computers work.

Logic gates turn ON when a certain condition is true, and OFF when the condition is false

They check whether or not the information they get follows a certain rule

They either spit out the answer true (ON) or false (OFF)

Remember:

True = ON = 1 = High

False = OFF = 0 = Low

Types of Logic Gates!


Major logic gates: NOT, AND, OR

There are also other logic gates like NAND, NOR, XOR and XNOR (derived from the three former gates).

In a circuit schematic, each logic gate is represented by a different picture.

The pictures used to represent logic gates are called logic diagrams.

NOT Gate

NOT is the simplest logic gate.

All it does is take in an input that is either ON or OFF and spit out the opposite.

So for a 1 it will give a 0, and for a 0 it will give a 1.

Another name for a NOT gate is inverter, because it inverts (makes opposite) the input.

The output is written Ā (A with a bar over it); this is also shown as A'.

A    A'
0    1
1    0

AND
The symbol “•” is used for the logical multiplication operator.

Unlike NOT, AND needs two inputs

It only turns on when both inputs are ON

If only one input is on, it spits out OFF


If both inputs are OFF, it spits out OFF

Diagram for AND gate

Input A    Input B    Output A·B
0          0          0
0          1          0
1          0          0
1          1          1

1 = High   0 = Low

OR

The symbol “+” is used for the logical addition operator. It is known as the “OR” operator.

OR also needs two inputs

OR needs one input to be ON for it to spit out ON

It is also ON when both inputs are ON

It is OFF when both inputs are OFF

Diagram for OR gate

Input A    Input B    Output A+B
0          0          0
0          1          1
1          0          1
1          1          1

1 = High   0 = Low
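As a sketch (not part of the original notes), the three basic gates can be modelled as one-line Python functions and their truth tables printed for comparison with the tables above.

# Hypothetical one-line models of the NOT, AND and OR gates operating on bits 0/1.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

if __name__ == "__main__":
    print("A B | A.B A+B")
    for a in (0, 1):
        for b in (0, 1):
            print(f"{a} {b} |  {AND(a, b)}   {OR(a, b)}")
    print("A | A'")
    for a in (0, 1):
        print(f"{a} |  {NOT(a)}")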

NAND gate

This is a NOT-AND gate which is equal to an AND gate followed by a NOT gate.

The output of a NAND gate is true if any of the inputs is false.

The symbol is an AND gate with a small circle on the output.

The small circle represents inversion.

Diagram for NAND gate

Input A    Input B    Output (A·B)'
0          0          1
0          1          1
1          0          1
1          1          0

1 = High   0 = Low

NOR gate

This is a NOT-OR gate which is equal to an OR gate followed by a NOT gate.

The output of a NOR gate is false if any of the inputs is true.

The symbol is an OR gate with a small circle on the output.

The small circle represents inversion.

Diagram for NOR gate


Input A    Input B    Output (A+B)'
0          0          1
0          1          0
1          0          0
1          1          0

1 = High   0 = Low

Exclusive OR/Exclusive NOR (XOR/XNOR)

XOR and XNOR are useful logic functions. Both have two or more inputs

The output of XOR gate is ON if and only if the two inputs are not the same.

The output of XNOR gate is ON if and only if the two inputs are the same.

NB. For n>2 inputs, the output of the XOR is ON for an odd number of ON inputs.

For n>2 inputs, the output of the XNOR is ON for an even number of ON inputs.

Diagram for XOR gate

Input A    Input B    Output
0          0          0
0          1          1
1          0          1
1          1          0

1 = High   0 = Low

Diagram for XNOR gate

Input A    Input B    Output
0          0          1
0          1          0
1          0          0
1          1          1

1 = High   0 = Low
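The odd/even rule for more than two inputs can be checked with a short sketch (Python, illustrative): an n-input XOR is ON for an odd number of ON inputs, and XNOR is its complement.

# Sketch: n-input XOR is 1 for an odd number of 1 inputs; XNOR is the complement.
from functools import reduce
from itertools import product
from operator import xor

def XOR(*bits):
    return reduce(xor, bits)

def XNOR(*bits):
    return 1 - XOR(*bits)

if __name__ == "__main__":
    for a, b, c in product((0, 1), repeat=3):            # 3-input example
        assert XOR(a, b, c) == (a + b + c) % 2           # odd number of 1s
        assert XNOR(a, b, c) == 1 - (a + b + c) % 2      # even number of 1s
    print(XOR(1, 1), XNOR(1, 1))    # 0 1  (equal inputs: XOR off, XNOR on)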
Boolean algebra is an algebraic system of logic introduced by George Boole in 1854.

In the mid-1800s, an algebra that simplified the representation and manipulation of
propositional logic was developed by the English mathematician George Boole (1815-1864).

It became known as Boolean algebra, after its developer.

It deals with binary variables and logic operations operating on those variables.

It is defined with a set of elements, a set of operators, and a number of axioms or postulates.

Postulates of Boolean algebra:

Postulate 1:

A = 0 if and only if A is not equal to 1

A = 1 if and only if A is not equal to 0

Postulate 2:

x + 0 = x

x • 1 = x

Postulate 3: Commutative Law

x + y = y + x

x • y = y • x

Postulate 4: Associative Law

x + (y + z) = (x + y) + z

x • (y • z) = (x • y) • z

Postulate 5: Distributive Law

x • (y + z) = x • y + x • z
x + y • z = (x + y) • ( x + z)

Postulate 6:

x + x' = 1

x • x' = 0

Principle of Duality:

The implication of this principle is that any theorem in Boolean algebra has a dual, obtainable
by interchanging “+” with “•” and “0” with “1”.

Row 1:   1 + 1 = 1      1 + 0 = 0 + 1 = 1      0 + 0 = 0
Row 2:   0 • 0 = 0      0 • 1 = 1 • 0 = 0      1 • 1 = 1

Each entry in Row 2 is the dual of the entry above it in Row 1.

Theorems of Boolean algebra:

Theorem 1 (Idempotent law)

(a) x + x = x

(b) x • x = x

Proof (a):
x + x = (x + x) • 1               (postulate 2b)
      = (x + x) • (x + x')        (postulate 6a)
      = x + x • x'                (postulate 5b)
      = x + 0                     (postulate 6b)
      = x                         (postulate 2a)

Proof (b):
x • x = x • x + 0                 (postulate 2a)
      = x • x + x • x'            (postulate 6b)
      = x • (x + x')              (postulate 5a)
      = x • 1                     (postulate 6a)
      = x                         (postulate 2b)

Theorem 2

(a) x + 1 = 1

(b) x • 0 = 0

Proof (a):
x + 1 = (x + 1) • 1               (postulate 2b)
      = (x + 1) • (x + x')        (postulate 6a)
      = x + 1 • x'                (postulate 5b)
      = x + x' • 1                (postulate 3b)
      = x + x'                    (postulate 2b)
      = 1                         (postulate 6a)

Proof (b): holds by duality.

Theorem 3 (Absorption law)

(a) x + x • y = x

(b) x • (x + y) = x

Proof (a):
x + x • y = x • 1 + x • y         (postulate 2b)
          = x • (1 + y)           (postulate 5a)
          = x • (y + 1)           (postulate 3a)
          = x • 1                 (theorem 2a)
          = x                     (postulate 2b)

Proof (b): holds by duality.

Theorem 4 (Involution law)

(x')' = x

Proof: by the method of perfect induction (checking both values x = 0 and x = 1).


Theorem 5

(a) x • (x' + y) = x • y

(b) x + x' • y = x + y

(c) x + x • y = x

Proof of (c):
x + x • y = x • (1 + y)           (distributive law)
          = x • 1                 (since 1 + y = 1, theorem 2a)
          = x

De Morgan’s Law

(x + y)' = x' • y'

(x • y)' = x' + y'

Complement of a function

The complement of a function F, when expressed in a truth table, is obtained by interchanging
1’s and 0’s in the values of F in the truth table.

When the function is expressed in algebraic form, the complement of the function can be derived
by means of De Morgan’s theorem.

The general form of DeMorgan’s theorem can be expressed as follows:

(x1 + x2 + x3 + … + xn)' = x1'x2'x3'…xn'

(x1x2x3…xn)' = x1' + x2' + x3' + … + xn'

By changing all OR operations to AND operations and all AND operations to OR operations, and
then complementing each individual letter variable, we obtain a simple procedure for deriving
the complement of an algebraic expression.

E.g., F = AB + C'D' + B'D gives F' = (A'+B')(C+D)(B+D').

NB: The complement expression is obtained by interchanging AND and OR operations and
complementing each individual variable.
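The example F = AB + C'D' + B'D with F' = (A'+B')(C+D)(B+D') can be verified exhaustively; the sketch below (Python, illustrative) checks all 16 input combinations.

# Sketch: verify that (A'+B')(C+D)(B+D') is the complement of AB + C'D' + B'D.
from itertools import product

def F(a, b, c, d):
    # F = AB + C'D' + B'D
    return (a & b) | ((1 - c) & (1 - d)) | ((1 - b) & d)

def F_complement(a, b, c, d):
    # F' = (A' + B')(C + D)(B + D')  -- De Morgan applied term by term
    return ((1 - a) | (1 - b)) & (c | d) & (b | (1 - d))

if __name__ == "__main__":
    for bits in product((0, 1), repeat=4):
        assert F_complement(*bits) == 1 - F(*bits)
    print("F' agrees with the complement of F for all 16 input combinations")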

Karnaugh Maps & Logic Simplification 

Karnaugh Maps (K-Maps) are a graphical method of visualizing the 0’s and 1’s of a Boolean
function. The map method is known as the Karnaugh map or K-map.

K-Maps are very useful for performing Boolean minimization.

Karnaugh maps can be easier to use than Boolean equation minimization once you get used to them.

We will work with 2-, 3-, and 4-variable K-maps in this class.

Each combination of the variables in a truth table is called a minterm.

There are 2^n minterms for a function of n variables.

A K-map has a square for each ‘1’ or ‘0’ of a Boolean function.

A one-variable K-map has 2^1 = 2 squares.

A two-variable K-map has 2^2 = 4 squares.

A three-variable K-map has 2^3 = 8 squares.

A four-variable K-map has 2^4 = 16 squares.


The variable names are listed on both sides of the diagonal line in the corner of the map.

The 0’s and the 1’s marked along each row and each column designate the value of the variables.

Each variable under the brackets contains half of the squares in the map, namely those where that
variable appears unprimed.

The minterm represented by a square is determined from the binary assignment of the variables
along the left and top edges of the map.

Here, minterm 5 in the three-variable map is 101, located in the second column. This minterm
represents a value for the binary variables A, B, and C with A and C being unprimed and B being
primed.

Boolean functions can be simplified on the map by two methods:

Sum-of-Products simplification (SOP)

A Boolean function represented by a truth table is plotted into the map by inserting 1's into those
squares where the function is 1.

Boolean functions can then be simplified by identifying adjacent squares in the Karnaugh map
that contain a 1.

A square is considered adjacent to another square if it is next to, above, or below it. In addition,
squares at the extreme ends of the same horizontal row are also considered adjacent. The same
applies to the top and bottom squares of a column.
The objective is to identify adjacent squares containing 1's and group them together.

Below are some examples of simplification with 3-variable Karnaugh maps. We show how to map the
product terms of the unsimplified logic onto the K-map and how to identify groups of
adjacent cells, which leads to a Sum-of-Products simplification of the digital logic.

Above, we place the 1’s in the K-map for each of the product terms, identify a group of two, then
write a p-term (product term) for the sole group as our simplified result.

Mapping the four product terms above yields a group of four covered by Boolean A’

Mapping the four p-terms yields a group of four, which is covered by one variable C.

After mapping the six p-terms above, identify the upper group of four, pick up the lower two
cells as a group of four by sharing the two with two more from the other group. Covering these
two with a group of four gives a simpler result. Since there are two groups, there will be two p-
terms in the Sum-of-Products result A’+B
The two product terms above form one group of two and simplify to BC

Mapping the four p-terms yields a single group of four, which is B

Mapping the four p-terms above yields a group of four. Visualize the group of four by rolling up
the ends of the map to form a cylinder, then the cells are adjacent. We normally mark the group
of four as above left. Out of the variables A, B, C, there is a common variable: C’. C’ is a 0 over
all four cells. Final result is C’.

The six cells above from the unsimplified equation can be organized into two groups of four.
These two groups should give us two p-terms in our simplified result of A’ + C’.
We will simplify the logic using a Karnaugh map.

The Boolean equation for the output has four product terms. Map four 1’s corresponding to the
p-terms. Forming groups of cells, we have three groups of two. There will be three p-terms in the
simplified result, one for each group. See “Toxic Waste Incinerator”, Boolean algebra chapter for
a gate diagram of the result, which is reproduced below.
Below we repeat the Boolean algebra simplification of SOP

Below we repeat the Toxic waste incinerator Karnaugh map solution for comparison to the above
Boolean algebra simplification. This case illustrates why the Karnaugh map is widely used for
logic simplification.
The Karnaugh map method looks easier than the previous page of Boolean algebra.
Example: We will simplify the Boolean function F(A, B, C) = Σ(3, 4, 6, 7).

There are four squares marked with 1’s, one for each minterm that produces 1 for the
function.

These squares belong to minterms 3, 4, 6, and 7.

Two adjacent squares are combined in the third column. This column belongs to both B
and C, so it produces the term BC.

The remaining two squares with 1’s, in the two corners of the second row, are adjacent and
belong to the row of A and the columns of C', so they produce the term AC'.

        BC
A       00   01   11   10
0        0    0    1    0
1        1    0    1    1

The simplified function is F = BC + AC'.
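If the sympy library is available, the same minimisation can be reproduced programmatically; this is only a cross-check sketch, not part of the original example.

# Sketch: cross-check the K-map result F = BC + AC' with sympy's SOPform.
from sympy import symbols
from sympy.logic import SOPform

A, B, C = symbols('A B C')
# Minterms 3, 4, 6, 7 written as (A, B, C) bit patterns.
minterms = [[0, 1, 1], [1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(SOPform([A, B, C], minterms))   # expected (up to term order): (B & C) | (A & ~C)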

Product-of-Sums Simplification (POS)

This approach is similar to the Sum-of-Products simplification, but the groups are formed by
identifying adjacent squares containing 0’s instead of 1’s.

Then, instead of representing the function as a sum of products, the function is represented as a
product of sums.

"Don't Care" Conditions

Sometimes a situation arises in which some input variable combinations are not allowed. For
example, recall that in the BCD code there are six invalid combinations: 1010, 1011, 1100, 1101,
1110, and 1111. Since these disallowed states will never occur in an application involving the
BCD code, they can be treated as "don't care" terms with respect to their effect on the output.
That is, for these "don't care" terms either a 1 or a 0 may be assigned to the output; it really does
not matter, since they will never occur.

The "don't care" terms can be used to advantage on the Karnaugh map. Fig.(5-9) shows that for each
"don't care" term, an X is placed in the cell. When grouping the 1's, the X's can be treated as 1's
to make a larger grouping, or as 0's if they cannot be used to advantage. The larger a group, the
simpler the resulting term will be.

The truth table in Fig.(5-9)(a) describes a logic function that has a 1 output only when the BCD
code for 7, 8, or 9 is present on the inputs. If the "don't cares" are used as 1's, the resulting
expression for the function is A + BCD, as indicated in part (b). If the "don't cares" are not used
as 1's, the resulting expression is AB'C' + A'BCD, so you can see the advantage of using "don't
care" terms to get the simplest expression.
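The BCD 7, 8, 9 detector can be checked the same way, passing the six invalid codes as don't cares (again a sketch, assuming sympy is available).

# Sketch: BCD detector for 7, 8, 9 with the six invalid codes as don't cares.
from sympy import symbols
from sympy.logic import SOPform

A, B, C, D = symbols('A B C D')                               # A is the most significant bit
minterms  = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 1]]        # 7, 8, 9
dontcares = [[1, 0, 1, 0], [1, 0, 1, 1], [1, 1, 0, 0],
             [1, 1, 0, 1], [1, 1, 1, 0], [1, 1, 1, 1]]        # 10..15
print(SOPform([A, B, C, D], minterms, dontcares))             # expected: A | (B & C & D)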

Combinational Logic Circuits

A combinational circuit is a connected arrangement of logic gates with a set of inputs and
outputs.

At any given time, the binary values of the outputs are a function of the binary values of the
inputs.

The design of a combinational circuit starts from a verbal outline of the problem and ends in a
logic circuit diagram.

The procedure involves the following steps:


 The problem is stated.
 The input and output variables are assigned letter symbols.
 The truth table that defines the relationship between inputs and outputs is derived.
 The logic diagram is drawn

Example 1 - Implementing Combinational Logic Circuits

Problem:

Consider the function.

Assign the levels to this function and implement.

Solution:

The levels can be assigned as follows:

Using the levels assigned the circuit can then be drawn, as shown

Example 2 - Implementing Combinational Logic Circuits

Problem:
Consider the function .

Multiply out the brackets, assign the levels to this function and implement.

Solution:

This is the same function as used in the previous example, except this time we will multiply out
the brackets first and then assign the levels and implement the circuit, giving an alternative
implementation for the function.

Multiplying out the brackets for the function X gives .

Using the levels assigned the circuit can then be drawn,

ADDERS

Adders are important in computers and also in other types of digital systems in which numerical
data are processed.
Basic type of adders

The Half-Adder

A half-adder adds two bits and produces a sum and a carry output

Recall the basic rules for binary addition.

0 + 0 = 0

0 + 1 = 1

1 + 0 = 1

1 + 1 = 10

The operations above are performed by a logic circuit called a half-adder. The half-adder accepts two
binary digits on its inputs and produces two binary digits on its outputs: a sum bit and a carry bit.
A half-adder is represented by the logic symbol in Fig.(7-1).

Half-Adder Logic: From the operation of the half-adder as stated in Table 7-1, expressions can be
derived for the sum and the output carry as functions of the inputs. Notice that the output carry
(Cout) is a 1 only when both A and B are 1s; therefore, Cout can be expressed as

Cout = AB

Now observe that the sum output (∑) is a 1 only if the input variables A and B are not equal.
The sum can therefore be expressed as the exclusive-OR of the input variables:

∑ = A ⊕ B

Fig.(7-1) Logic symbol for a half-adder.


Fig.(7-2) Half-adder logic diagram. Table 7-1
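As a quick illustration (a sketch in Python, not part of the original notes), the two equations above translate directly into a function returning the sum and carry bits.

# Sketch of a half-adder: sum = A XOR B, carry-out = A AND B.
def half_adder(a, b):
    return a ^ b, a & b            # (sum, carry_out)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            s, cout = half_adder(a, b)
            print(f"{a} + {b} -> sum = {s}, carry = {cout}")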

The Full-Adder

The second category of adder is the full-adder. The full-adder accepts two input bits and an input
carry and generates a sum output and an output carry.

The basic difference between a full-adder and a half-adder is that the full-adder accepts an input
carry. A logic symbol for a full-adder is shown in Fig.(7-3), and the truth table in Table 7-2
shows the operation of a full-adder.

Fig.(7-3) Logic symbol for a full-adder.


Fig.(7-4) Complete logic circuit for a full-adder.

∑ = A ⊕ B ⊕ Cin

Cout = AB + (A ⊕ B)Cin

Notice in Fig.(7-4) that there are two half-adders, connected as shown in the block diagram of
Fig.(7-5), with their output carries ORed.

Fig.(7-5) Arrangement of two half-adders to form a full-adder.
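The same arrangement, two half-adders with their output carries ORed, can be sketched as follows (illustrative Python; the half-adder function is repeated so the sketch is self-contained).

# Sketch: full-adder built from two half-adders with the carries ORed.
def half_adder(a, b):
    return a ^ b, a & b                      # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)                # first half-adder adds A and B
    s2, c2 = half_adder(s1, cin)             # second half-adder adds the carry-in
    return s2, c1 | c2                       # (sum, carry_out)

if __name__ == "__main__":
    print(full_adder(1, 1, 1))               # (1, 1): 1 + 1 + 1 = 11 in binary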

Example: For each of the three full-adders in Fig.(7-6), determine the outputs for the inputs
shown.
Additional examples of combinational circuits:

Decoders

A binary code of n bits is capable of representing up to 2^n distinct elements of the coded
information.

A decoder is a combinational circuit that converts binary information from the n coded inputs to
a maximum of 2^n unique outputs.

A decoder has n inputs and m outputs, where m ≤ 2^n; such circuits are called n-to-m-line decoders.

Each output represents one of the combinations of the input variables. An enable input controls
the operation of the decoder.

E.g., a 2-to-4 decoder and its truth table:

Enable   A1   A0   D0   D1   D2   D3
  0       X    X    0    0    0    0
  1       0    0    1    0    0    0
  1       0    1    0    1    0    0
  1       1    0    0    0    1    0
  1       1    1    0    0    0    1
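As an illustrative sketch (Python, not from the notes), the 2-to-4 decoder with enable drives exactly one output line high when enabled and all outputs low otherwise.

# Sketch: 2-to-4 line decoder with an enable input.
def decoder_2to4(enable, a1, a0):
    outputs = [0, 0, 0, 0]                  # [D0, D1, D2, D3]
    if enable:
        outputs[(a1 << 1) | a0] = 1         # exactly one output goes high
    return outputs

if __name__ == "__main__":
    print(decoder_2to4(0, 1, 1))   # [0, 0, 0, 0]  (disabled)
    print(decoder_2to4(1, 1, 0))   # [0, 0, 1, 0]  (D2 selected)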

Decoder expansion

 If only small decoders are available and we want bigger ones, then we can build bigger
decoders from smaller ones.

E.g. a 3-to-8 decoder from two 2-to-4 decoders


Encoders

An encoder is a digital circuit that performs the inverse of a decoder

An encoder has 2^n (or fewer) input lines and n output lines

The output lines generate the binary code corresponding to the input value

E.g. 4-to-2 encoder

D3 D2 D1 D0 A1 A0

0 0 0 1 0 0

0 0 1 0 0 1

0 1 0 0 1 0

1 0 0 0 1 1

A0 = D1 + D3 and A1 = D2 + D3
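These two equations can be sketched directly (Python, illustrative; it assumes exactly one input line is active at a time, as in the table).

# Sketch: 4-to-2 encoder from the equations A0 = D1 + D3 and A1 = D2 + D3.
def encoder_4to2(d3, d2, d1, d0):
    a0 = d1 | d3
    a1 = d2 | d3
    return a1, a0                           # (A1, A0)

if __name__ == "__main__":
    print(encoder_4to2(0, 1, 0, 0))   # (1, 0): D2 active -> binary code 10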

Exercise: Draw the logic diagram for an 8-to-3 encoder

Multiplexer

A multiplexer (MUX) is a combinational circuit with 2^n input data lines, n input select lines, and
one output line. The input selection lines determine which input data line is selected for the
output.

A multiplexer acts like a television channel selector. All of the stations are broadcast constantly
to the television's input, but only the channel that has been selected is displayed

Fig 4-to-1 line Multiplexers


Rather than using a truth table to describe the circuit, a function table with 2^n rows is used,
one row for each combination of the selection inputs. For the 4-to-1 line multiplexer above, a full
truth table would need 2^6 = 64 rows, so a function table is used instead.

The MUX is also called a data selector

It is a many-to-one switch, also called a selector

Function table for 4-to- 1 line Multiplexers
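A 4-to-1 multiplexer can be sketched in a few lines (Python, illustrative): the two select lines form an index that picks which data input reaches the single output.

# Sketch: 4-to-1 line multiplexer; S1 S0 select which data input reaches the output.
def mux_4to1(i0, i1, i2, i3, s1, s0):
    return (i0, i1, i2, i3)[(s1 << 1) | s0]

if __name__ == "__main__":
    print(mux_4to1(0, 1, 0, 1, s1=0, s0=1))   # 1: input I1 is selected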

Demultiplexers

The previous section described how multiplexers select one channel from a group of input
channels to be sent to a single output. Demultiplexers take a single input and select one channel
out of a group of output channels to which it will route the input. It's like having multiple printers
connected to a computer. A document can only be printed to one of the printers, so the computer
selects one out of the group of printers to which it will send its output.

The design of a demultiplexer is much like the design of a decoder. The decoder selected one of
many outputs to which it would send a zero. The difference is that the demultiplexer sends data
to that output rather than a zero.

The circuit of a demultiplexer is based on the non-active-low decoder where each output is
connected to an AND gate. An input is added to each of the AND gates that will contain the
demultiplexer's data input. If the data input equals one, then the output of the AND gate that is
selected by the selector inputs will be a one. If the data input equals zero, then the output of the
selected AND gate will be zero. Meanwhile, all of the other AND gates output a zero, i.e., no
data is passed to them. Figure 8-27 presents a demultiplexer circuit with two selector inputs.
S1 S0 Data D0 D1 D2 D3

0 0 0 0 0 0 0
0 0 1 1 0 0 0
0 1 0 0 0 0 0
0 1 1 0 1 0 0
1 0 0 0 0 0 0
1 0 1 0 0 1 0
1 1 0 0 0 0 0
1 1 1 0 0 0 1

Figure 8-28 Truth Table for a 1-Line-to-4-Line Demultiplexer
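Mirroring the truth table above, a 1-to-4 demultiplexer can be sketched as a decoder whose selected output carries the data bit (Python, illustrative).

# Sketch: 1-to-4 demultiplexer; the data bit appears only on the selected output.
def demux_1to4(data, s1, s0):
    outputs = [0, 0, 0, 0]                  # [D0, D1, D2, D3]
    outputs[(s1 << 1) | s0] = data
    return outputs

if __name__ == "__main__":
    print(demux_1to4(1, s1=1, s0=0))   # [0, 0, 1, 0]
    print(demux_1to4(0, s1=1, s0=0))   # [0, 0, 0, 0]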

Sequential Circuits

Sequential circuits consist of a combinational circuit and some memory elements. The memory
elements are used to store information about the past. Therefore, in sequential circuits, the
present inputs and the history of the circuit determine the outputs. The history of the circuit is a
summary of the past inputs (or the effect of past inputs).
The memory elements used in sequential circuits are called flip-flops. Flip-flops can store or
remember a 0 or a 1 (a Boolean value). We use the term bit (binary digit) to refer to a Boolean
value. So, a flip-flop can store one bit of data (information). The value stored in a flip-flop is
called the state of the flip-flop, and is designated by the letter Q. A flip-flop may have one or
more inputs used for changing its state (or the value it stores), and an output which is its state
(Q). A second output, Q', is also usually provided, which is the complement of the state of the
flip-flop.

The following are some of the common flip-flop types used in sequential circuits:

SR (Set-Reset) Flip-Flop

Present Inputs    Next State
S    R            Q(t+1)
0    0            Q(t)       Store
0    1            0          Reset
1    0            1          Set
1    1            ?          Not allowed

Q(t) : Present State

Q(t+1): Next State

JK Flip-Flop

Present Inputs    Next State
J    K            Q(t+1)
0    0            Q(t)       Store
0    1            0          Reset
1    0            1          Set
1    1            Q'(t)      Toggle

Q(t) : Present State

Q(t+1): Next State

T (Toggle) Flip-Flop

Present Input     Next State
T                 Q(t+1)
0                 Q(t)       Store
1                 Q'(t)      Toggle

Q(t) : Present State

Q(t+1): Next State
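The three characteristic tables can be summarised as next-state functions; the sketch below (Python, illustrative, with hypothetical function names) returns Q(t+1) from the present state Q(t) and the inputs.

# Sketch: next-state (characteristic) functions of the SR, JK and T flip-flops.
def sr_next(q, s, r):
    assert not (s == 1 and r == 1), "S = R = 1 is not allowed"
    return s | (q & (1 - r))                # set wins, reset clears, otherwise hold

def jk_next(q, j, k):
    return (j & (1 - q)) | ((1 - k) & q)    # J = K = 1 toggles the state

def t_next(q, t):
    return q ^ t                            # toggle when T = 1, hold when T = 0

if __name__ == "__main__":
    print(jk_next(0, 1, 1), jk_next(1, 1, 1))   # 1 0  (toggle)
    print(t_next(1, 0), t_next(1, 1))           # 1 0  (store, toggle)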

Register

A register is a group of flip-flops with each flip-flop capable of storing one bit of information

An n-bit register has a group of n flip-flops

A register may also have combinational gates that perform certain data-processing tasks

The flip-flops hold the data and the gates control when and how new data is transferred into the
register
The flip-flops have a common clock input

A common clear input is available to reset all the flip-flops asynchronously

Fig 4 bit Registers

The transfer of new data into a register is called loading the register

If all bits are loaded simultaneously with a common clock pulse transition, then the loading is
done in parallel

The load input determines the action to be taken with each clock pulse

If the load input is 1, then the data in the four inputs are transferred at the next positive clock
transition

If the load input is 0, the data inputs are inhibited and the output is fed back to simulate a no
change condition

Two basic types of registers are commonly used:

 Parallel registers and


 Shift registers.
NB: Figure 1.25 illustrates the operation of a parallel register using D flip-flops.

Fig 1.25 8 bit parallel register

2. SHIFT REGISTER:

A shift register accepts and/or transfers information serially.

A shift register is capable of shifting its binary information in one or both directions

The logical configuration is a chain of flip-flops, with the output of one connected to the input
of the next

The serial input determines what goes into the leftmost position during the shift

The serial output is taken from the output of the rightmost flip-flop

Fig 5 bit shift register


A bi-directional shift register can shift in both directions

The most general shift register has all the following capabilities:

An input for clock pulses to synchronize all operations

A shift-right operation and a serial input line associated with the shift-right

A shift-left operation and a serial input line associated with the shift-left
A parallel load operation and n input lines associated with the parallel transfer

n parallel output lines
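A right-shifting register of this kind can be sketched as a list of flip-flop states (Python, illustrative): the serial input enters at the leftmost position on each clock pulse and the serial output is taken from the rightmost flip-flop.

# Sketch: n-bit shift-right register with serial input and serial output.
class ShiftRegister:
    def __init__(self, n=5):
        self.bits = [0] * n                      # flip-flop states, leftmost first

    def shift_right(self, serial_in):
        serial_out = self.bits[-1]               # rightmost flip-flop is the serial output
        self.bits = [serial_in] + self.bits[:-1]
        return serial_out

if __name__ == "__main__":
    sr = ShiftRegister(5)
    for bit in (1, 0, 1, 1, 0):
        sr.shift_right(bit)
    print(sr.bits)   # [0, 1, 1, 0, 1]: the first bit shifted in is now rightmost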

Binary Counters

 A register that goes through a predetermined sequence of states upon the application of
input pulses
 Input pulses can be of regular intervals (like a clock) or occurring at irregular intervals
 A counter that follows the binary number sequence is called a binary counter

The input may occur at uniform intervals of time or randomly

Used to count the number of occurrences of an event and for generating timing signals to control
the sequence of operations

A counter that follows the binary number sequence is a binary counter

An n-bit binary counter is a register of n flip-flops and gates that follow a sequence of states
Consider the sequence 0000, 0001, 0010, 0011, 0100, …

The lsb (least significant bit) is complemented on each count

Every other bit is complemented iff all its lower-order bits are equal to 1

Natural to use either T or JK flip-flops since they both have a complement state

The counter has an enable input

Synchronous counters have a regular pattern with a common clock

The chain of AND gates generates the logic for the flip-flop inputs
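The counting rule above, complement the lsb on every pulse and complement each higher bit only when all lower-order bits are 1, can be sketched directly (Python, illustrative; bits are stored lsb first).

# Sketch: n-bit binary counter using the toggle rule described above.
def count_up(bits):
    # bits[0] is the lsb; a bit toggles iff all lower-order bits are 1.
    carry = 1                          # count-enable: toggle the lsb on every pulse
    next_bits = []
    for b in bits:
        next_bits.append(b ^ carry)    # T flip-flop behaviour
        carry = carry & b              # propagate the toggle only if this bit was 1
    return next_bits

if __name__ == "__main__":
    state = [0, 0, 0, 0]               # 0000
    for _ in range(5):
        state = count_up(state)
    print(state)                       # [1, 0, 1, 0] -> 0101 = 5 (lsb first)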

Register transfer notation

A digital computer is characterized by its registers. The memory unit is merely a collection of
thousands of registers for storing digital information. The processor unit is composed of various
registers that store operands upon which operations are performed. The control unit uses registers
to keep track of various computer sequences, and every input or output device must have at least
one register to store the information transferred to or from the device. An inter-register transfer
operation, a basic operation in digital systems, consists of a transfer of the information stored in
one register into another.

As an illustration of the transfer of information among registers, consider pictorially the transfer
of binary information from a keyboard into a register in the memory unit. The input unit is assumed
to have a keyboard, a control circuit, and an input register. Each time a key is struck, the control
enters into the input register an equivalent eight-bit alphanumeric character code. The information
from the input register is transferred into the eight least significant cells of a processor
register. After every transfer, the input register is cleared to enable the control to insert a new
eight-bit code when the keyboard is struck again. Each eight-bit character transferred to the
processor register is preceded by a shift of the previous character to the next eight cells on its
left. When a transfer of four characters is completed, the processor register is full, and its
contents are transferred into a memory register.

To process discrete quantities of information in binary form, a computer must be provided with
(1) devices that hold the data to be processed and (2) circuit elements that manipulate individual
bits of information. The device most commonly used for holding data is a register. Manipulation
of binary variables is done by means of digital logic circuits.
