Ut2 QB

CHAPTER 4

1) Differentiate between hardwired control and microprogrammed control unit.

2) How does a Hardwired Control Unit differ from a Microprogrammed Control Unit?
A Hardwired Control Unit and a Microprogrammed Control Unit differ mainly in
how they generate control signals to execute instructions in a computer's
CPU.

Hardwired Control Unit uses fixed logic circuits, including gates, flip-flops, and
other combinational logic to generate specific control signals directly. The
control signals are predetermined and hard-coded into the hardware.
Microprogrammed Control Unit stores a set of microinstructions (microcode)
in memory. These microinstructions are executed one by one to generate
control signals, providing a more flexible way of controlling the CPU
operations.

Hardwired Control Unit is difficult to modify because the control signals are
hardcoded. Any change requires redesigning and replacing hardware
components, making it less adaptable. Microprogrammed Control Unit is easy
to modify since the control signals are generated from microinstructions
stored in memory. To modify or add new operations, you can simply update
the microprogram, making it more flexible.

Hardwired Control Unit is more complex in design, especially for complex instruction sets, because the hardware needs to handle every instruction individually with its own logic. Microprogrammed Control Unit is simpler to design and extend, especially for complex CPUs, since new instructions can be added or modified through software-like microprogramming.

In summary, a hardwired control unit is faster and more rigid, while a microprogrammed control unit is slower but more flexible and easier to modify or expand.

3) Explain the Microinstruction format.

Control Field (Control Signals):


This field specifies the control signals that are activated during the execution
of the microinstruction. These control signals directly control various parts of
the CPU (such as the ALU, registers, memory, etc.).
Example control signals:
ALU operation signals: Specifies arithmetic/logic operations (e.g., ADD, SUB,
AND).
Register load signals: Controls which register is loaded with data (e.g., R1 ←
ALU output).
Memory control signals: Specifies memory read/write operations (e.g., MAR
← PC).

Next Address Field (Branch Field or Address Field):


This field specifies the address of the next microinstruction to be executed. It
allows for conditional or unconditional branching within the microprogram.
There are two main methods for determining the next microinstruction:
Unconditional jump: The next address is explicitly defined.
Conditional jump: Based on the result of some condition (e.g., a zero flag or
carry flag), the control unit either moves to a specific microinstruction or
continues sequentially.

Condition Field (Branch Control):


This field is used for branching control and condition testing. It allows
conditional execution of microinstructions based on the result of a previous
operation (e.g., zero flag, carry flag).
The control unit checks the value of the condition flag, and based on that, it
can jump to another address or proceed to the next instruction in sequence.
Example conditions:
Z = 1 (Zero flag is set)
N = 0 (Negative flag is not set)

Micro-operation Field (Data Operation Field):


This field specifies the low-level operations to be performed, such as data
transfer between registers, arithmetic and logic operations, and memory
access.
Example micro-operations:
R1 ← R2 + R3 (Add the values in R2 and R3 and store the result in R1)
MDR ← Memory[MAR] (Load data from memory into the Memory Data
Register)
PC ← PC + 1 (Increment the Program Counter)
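As a concrete illustration of these fields, the sketch below packs an 8-bit control field, a 2-bit condition field, and a 6-bit next-address field into a single 16-bit microinstruction word. The field widths are assumptions chosen for the example, not any real machine's format.

```python
# Hypothetical microinstruction layout: [ control (8) | condition (2) | next address (6) ]
CTRL_BITS, COND_BITS, ADDR_BITS = 8, 2, 6

def pack(ctrl, cond, next_addr):
    """Pack the three fields into one 16-bit microinstruction word."""
    assert ctrl < 2**CTRL_BITS and cond < 2**COND_BITS and next_addr < 2**ADDR_BITS
    return (ctrl << (COND_BITS + ADDR_BITS)) | (cond << ADDR_BITS) | next_addr

def unpack(word):
    """Split a microinstruction word back into (control, condition, next address)."""
    next_addr = word & (2**ADDR_BITS - 1)
    cond = (word >> ADDR_BITS) & (2**COND_BITS - 1)
    ctrl = word >> (COND_BITS + ADDR_BITS)
    return ctrl, cond, next_addr

word = pack(ctrl=0b10100001, cond=0b01, next_addr=0b000101)
print(unpack(word))  # (161, 1, 5)
```

The control unit would decode the control field into individual signal lines; the condition and next-address fields together implement the branching described above.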

4) Explain the concept of a micro-programmed control unit and compare it with a hardwired control unit. Describe the advantages and disadvantages of using a micro-programmed control unit.
A Microprogrammed Control Unit is a type of control unit in a computer's CPU
that generates control signals using a set of predefined instructions called
microinstructions. These microinstructions are stored in a memory known as
control memory or microprogram memory. The collection of microinstructions
for a specific operation is called a microprogram.

The microprogrammed control unit works as follows:

Fetch Microinstructions: The control unit fetches a microinstruction from the control memory.
Decode and Execute: The microinstruction is decoded, and the corresponding
control signals are generated to perform specific operations (e.g., moving
data, arithmetic calculations).
Next Instruction: Based on the current microinstruction, the control unit either
proceeds to the next microinstruction sequentially or jumps to another one
based on certain conditions.
This approach provides flexibility in defining and modifying the control signals
for CPU operations by changing the microinstructions instead of redesigning
hardware.

Comparison:

The Hardwired Control Unit generates control signals using fixed logic
circuits such as combinational logic gates, flip-flops, and decoders.
Control signals are created directly from hardware based on the current
instruction and the status of the CPU.
The control logic is "hardwired" or permanently fixed, meaning the design
is static and cannot be easily modified once implemented.
Main Idea: Instructions are executed by activating specific control signals
through predefined hardware paths.

The Microprogrammed Control Unit uses a sequence of microinstructions stored in a special memory called control memory.
Each instruction is broken down into smaller micro-operations, and a
microprogram (a set of microinstructions) defines how each operation is
carried out.
The control unit reads and decodes these microinstructions to generate
control signals dynamically.
Main Idea: Instructions are executed by fetching and decoding
microinstructions stored in memory, which generate the required control
signals.

Advantages:
1) A micro-programmed control unit is flexible and allows designers to incorporate new and more powerful instructions as VLSI technology increases the available chip area for the CPU.
2) It allows any design errors discovered during the prototyping stage to be removed.

Disadvantages:
1) It requires several clock cycles to execute each instruction, due to the access time of the microprogram memory.
2) It occupies a large portion (typically 55%) of the CPU chip area.

6) What is the state table method to design a hardwired control unit?

In this method, the behaviour of the control unit is represented in the form of a table, known as the state table.
• Each row represents a T-state and each column represents an instruction.
• The intersection of a column and a row indicates which control signal is produced in the corresponding T-state of that instruction.
• The hardware circuitry is designed column by column (i.e. per instruction) to produce the control signals in the different T-states.
Advantage –
• It is the simplest method.
• It is mainly used for processors with small instruction sets (i.e. RISC processors).
Drawback –
• Modern processors have very large instruction sets, so the circuit becomes complicated to design and difficult to debug, and any modification to the state table forces large parts of the circuit to be changed.
• Therefore, this method is not widely used for such processors.
• There is much redundancy in the circuit design: the control signals required for fetching an instruction are common to all N instructions yet are repeated for each one, so the cost of the circuitry may increase.

7) Explain the Microinstruction format and write a microprogram for the instruction
ADD R1, R2

The instruction ADD R1, R2 means "Add the value in register R2 to the value in
register R1 and store the result in R1." This process can be divided into the
following steps in a microprogrammed control unit.

General Steps for the ADD Instruction:

- Fetch the Instruction: Retrieve the instruction from memory.
- Decode the Instruction: Identify that it's an ADD operation and recognize the source (R2) and destination (R1) registers.
- Execute the Instruction:
  - Retrieve the values from R1 and R2.
  - Perform the addition.
  - Store the result back in R1.

Micro-Operations for Each Step:

Instruction Fetch:
MAR ← PC (Move the content of the Program Counter to the Memory Address
Register)
MDR ← Memory[MAR] (Move the content of the memory at the address in MAR
to the Memory Data Register)
IR ← MDR (Load the fetched instruction into the Instruction Register)
PC ← PC + 1 (Increment the Program Counter to point to the next instruction)

Instruction Decode:
Decode the instruction in IR to recognize the ADD R1, R2 operation. The control
unit identifies that the operation is an addition and the registers involved are R1
and R2.

Execute the Instruction:


TEMP ← R2 (Move the content of R2 into a temporary register, TEMP)
R1 ← R1 + TEMP (Add the content of TEMP to the value in R1 and store the
result in R1)

Explanation:
- Steps 1-4 (Fetch): These steps retrieve the instruction from memory and prepare for execution by updating the Program Counter.
- Step 5 (Decode): The control unit decodes the instruction to identify the operation (ADD) and registers (R1 and R2).
- Steps 6-7 (Execute): The value of R2 is fetched into a temporary register and added to the value in R1, storing the result back into R1.
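The micro-operations above can be mimicked with a toy register-transfer simulation. The dictionary of registers and the string encoding of the instruction are illustrative assumptions, not a real ISA.

```python
# Toy register-transfer simulation of the microprogram for ADD R1, R2.
regs = {"PC": 0, "MAR": 0, "MDR": None, "IR": None, "TEMP": 0, "R1": 7, "R2": 5}
memory = {0: "ADD R1 R2"}  # a one-instruction program (hypothetical encoding)

# Fetch
regs["MAR"] = regs["PC"]            # MAR <- PC
regs["MDR"] = memory[regs["MAR"]]   # MDR <- Memory[MAR]
regs["IR"] = regs["MDR"]            # IR  <- MDR
regs["PC"] += 1                     # PC  <- PC + 1

# Decode
op, dst, src = regs["IR"].split()

# Execute
if op == "ADD":
    regs["TEMP"] = regs[src]              # TEMP <- R2
    regs[dst] = regs[dst] + regs["TEMP"]  # R1   <- R1 + TEMP

print(regs["R1"])  # 12
```

With R1 = 7 and R2 = 5 initially, the simulation leaves 12 in R1 and the PC pointing at the next instruction, matching the steps above.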
—---------------------------------------------------------------------------------------------------------
CHAPTER 5

1) Consider a 2-way set associative mapped cache of size 16 KB with block size 256 bytes. The size of the main memory is 128 KB.
Find: 1. Number of bits in the tag  2. Tag directory size
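Assuming the usual set-associative address split (tag | set | block offset), the requested values can be computed directly from the given parameters:

```python
from math import log2

cache_size    = 16 * 1024   # 16 KB
block_size    = 256         # bytes
associativity = 2
main_memory   = 128 * 1024  # 128 KB

address_bits = int(log2(main_memory))     # 17-bit physical address
offset_bits  = int(log2(block_size))      # 8 bits to index within a block
lines        = cache_size // block_size   # 64 cache blocks
sets         = lines // associativity     # 32 sets
set_bits     = int(log2(sets))            # 5 bits to select a set
tag_bits     = address_bits - set_bits - offset_bits  # 17 - 5 - 8 = 4

tag_directory_bits = lines * tag_bits     # one tag per cache block
print(tag_bits, tag_directory_bits)       # 4 256  (256 bits = 32 bytes)
```

So the tag is 4 bits and the tag directory holds 64 × 4 = 256 bits (32 bytes).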

2) Explain the concept of locality of reference


The cache is a small and very fast memory, interposed between the
processor and the main memory. Its purpose is to make the main memory
appear to the processor to be much faster than it actually is. The
effectiveness of this approach is based on a property of computer programs
called locality of reference. Locality of Reference refers to the tendency of a
computer program to access the same set of memory locations repetitively
over a short period of time. This behavior can be exploited to improve the
performance of the system by optimizing memory access patterns, especially
in cache memory management.
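A minimal sketch of both kinds of locality: the outer loop revisits the same addresses repeatedly (temporal locality), while the inner loop walks consecutive elements (spatial locality). The array size and loop counts are arbitrary illustrative values.

```python
data = list(range(1024))

total = 0
for _ in range(100):       # temporal locality: the same addresses are revisited
    for x in data[:16]:    # spatial locality: consecutive addresses are touched
        total += x
print(total)  # 100 * sum(0..15) = 12000
```

A cache exploits exactly this pattern: after the first pass, the 16 elements are resident in fast memory, and the remaining 99 passes hit the cache.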

3) A block set associative cache memory consists of 128 blocks divided into 4-block sets. The main memory consists of 16384 blocks and each block contains 256 eight-bit words.
i) How many bits are required for addressing the main memory?
ii) How many bits are needed to represent the TAG, SET and WORD fields?
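Assuming a word-addressed main memory and the usual tag | set | word split, the answer follows from the given parameters:

```python
from math import log2

cache_blocks    = 128
blocks_per_set  = 4
mm_blocks       = 16384
words_per_block = 256

address_bits = int(log2(mm_blocks * words_per_block))  # 2^22 words -> 22 bits
word_bits    = int(log2(words_per_block))              # 8 bits
sets         = cache_blocks // blocks_per_set          # 32 sets
set_bits     = int(log2(sets))                         # 5 bits
tag_bits     = address_bits - set_bits - word_bits     # 22 - 5 - 8 = 9 bits

print(address_bits, tag_bits, set_bits, word_bits)  # 22 9 5 8
```

So i) 22 bits address the main memory, and ii) the fields are TAG = 9, SET = 5, WORD = 8 bits.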
4) Compare with suitable parameters SRAM with DRAM
5) Consider a direct mapped cache of size 512 KB with block size 1 KB.
There are 7 bits in the tag. Find-
Size of main memory
Tag directory size
6) Consider a direct mapped cache with block size 4 KB. The size of main
memory is 16 GB and there are 10 bits in the tag. Find-
Size of cache memory
Tag directory size
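Under the standard direct-mapped address split (tag | line | block offset), both problems 5 and 6 can be worked out as follows; all sizes are in bytes.

```python
from math import log2

# Problem 5: cache 512 KB, block 1 KB, tag 7 bits -> main memory size?
lines5  = (512 * 1024) // (1 * 1024)               # 512 cache lines
addr5   = 7 + int(log2(lines5)) + int(log2(1024))  # tag + line + offset = 7+9+10 = 26
mm5     = 2 ** addr5                               # 2^26 bytes = 64 MB
tagdir5 = lines5 * 7                               # 512 * 7 = 3584 bits = 448 bytes

# Problem 6: block 4 KB, main memory 16 GB, tag 10 bits -> cache size?
addr6      = int(log2(16 * 1024**3))               # 34-bit address
line_bits6 = addr6 - 10 - int(log2(4 * 1024))      # 34 - 10 - 12 = 12
cache6     = (2 ** line_bits6) * 4 * 1024          # 4096 lines * 4 KB = 16 MB
tagdir6    = (2 ** line_bits6) * 10                # 40960 bits = 5 KB

print(mm5 // 2**20, tagdir5, cache6 // 2**20, tagdir6)  # 64 3584 16 40960
```

So problem 5 gives a 64 MB main memory and a 3584-bit (448-byte) tag directory; problem 6 gives a 16 MB cache and a 40960-bit (5 KB) tag directory.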

—---------------------------------------------------------------------------------------------------------
CHAPTER 6

1) List and explain the various pipeline Hazards.

Structural / Resource Dependency:


This dependency arises due to the resource conflict in the pipeline. A resource
conflict is a situation when more than one instruction tries to access the same
resource in the same cycle. A resource can be a register, memory, or ALU.

Instruction/Cycle   1         2         3         4         5
I1                  IF(Mem)   ID        EX        Mem
I2                            IF(Mem)   ID        EX
I3                                      IF(Mem)   ID        EX
I4                                                IF(Mem)   ID

In the above scenario, in cycle 4, instructions I1 and I4 try to access the same resource (memory), which introduces a resource conflict.
To avoid this problem, we have to keep the instruction waiting until the required resource (memory in our case) becomes available. This wait introduces stalls in the pipeline.

Solution for structural dependency:


To minimize structural dependency stalls in the pipeline, we use a hardware
mechanism called Renaming.
Renaming : According to renaming, we divide the memory into two independent
modules used to store the instruction and data separately called Code
memory(CM) and Data memory(DM) respectively. CM will contain all the
instructions and DM will contain all the operands that are required for the
instructions.

Instruction/Cycle   1        2        3        4        5        6        7
I1                  IF(CM)   ID       EX       DM       WB
I2                           IF(CM)   ID       EX       DM       WB
I3                                    IF(CM)   ID       EX       DM       WB
I4                                             IF(CM)   ID       EX       DM
I5                                                      IF(CM)   ID       EX
I6                                                               IF(CM)   ID
I7                                                                        IF(CM)

Control Dependency

This type of dependency occurs during the transfer-of-control instructions such as BRANCH, CALL, JMP, etc. On many instruction architectures, the processor does not know the target address of these instructions when it needs to insert the next instruction into the pipeline. Due to this, unwanted instructions are fed to the pipeline.
Consider the following sequence of instructions in the program:
100: I1
101: I2 (JMP 250)
102: I3
.
.
250: BI1
Expected output: I1 -> I2 -> BI1
NOTE: Generally, the target address of the JMP instruction is known after ID
stage only.
Instruction/Cycle    1    2    3            4     5     6
I1                   IF   ID   EX           MEM   WB
I2                        IF   ID (PC:250)  EX    Mem   WB
I3 (discarded)                 IF
BI1                                         IF    ID    EX

Output Sequence: I1 -> I2 -> Delay (Stall) -> BI1

As the delay slot performs no operation, this output sequence is equal to the expected output sequence. But this slot introduces a stall in the pipeline.

Solution for control dependency: Branch Prediction is the method through which stalls due to control dependency can be eliminated. Here, a prediction of which branch will be taken is made in the first stage itself; when the prediction is correct, the branch penalty is zero.
Branch penalty : The number of stalls introduced during the branch operations in
the pipelined processor is known as branch penalty.
NOTE : As we see that the target address is available after the ID stage, so the
number of stalls introduced in the pipeline is 1. Suppose, the branch target
address would have been present after the ALU stage, there would have been 2
stalls. Generally, if the target address is present after the kth stage, then there
will be (k – 1) stalls in the pipeline.
Total number of stalls introduced in the pipeline due to branch instructions
= Branch frequency * Branch Penalty
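The relation above can be checked with a quick sketch; the instruction count and branch fraction below are illustrative values, not taken from the problem.

```python
def branch_stall_cycles(total_instructions, branch_fraction, branch_penalty):
    """Total stalls = (number of branch instructions) * (penalty per branch)."""
    return int(total_instructions * branch_fraction * branch_penalty)

# e.g. 1000 instructions, 20% branches, target known after ID (penalty = 1 stall)
print(branch_stall_cycles(1000, 0.20, 1))  # 200
```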

Data Dependency
Data hazards occur when an instruction depends on the result of a previous instruction that has not yet completed, i.e. when there is a conflict in access to an operand location.
E.g. two instructions I1 and I2, where I2 depends on I1:

Consider A = 10
I1: A ← A + 5
I2: B ← A × 2
There are 3 types of data hazards:
a) RAW (Read After Write): occurs when instruction J tries to read data before instruction I writes it.
Eg:
I: R2 ← R1 + R3
J: R4 ← R2 + R3
b) WAR (Write After Read): occurs when instruction J tries to write data before instruction I reads it.
Eg:
I: R2 ← R1 + R3
J: R3 ← R4 + R5
c) WAW (Write After Write): occurs when instruction J tries to write its output before instruction I writes it.
Eg:
I: R2 ← R1 + R3
J: R2 ← R4 + R5
WAR and WAW hazards occur during out-of-order execution of the instructions.
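The three cases can be captured in a small classifier. Encoding each instruction as (destination register, set of source registers) is an assumption made for illustration.

```python
def classify_hazard(i, j):
    """i executes before j; return the data hazards j has on i."""
    i_dst, i_src = i
    j_dst, j_src = j
    hazards = []
    if i_dst in j_src:
        hazards.append("RAW")   # j reads what i writes
    if j_dst in i_src:
        hazards.append("WAR")   # j writes what i reads
    if j_dst == i_dst:
        hazards.append("WAW")   # both write the same register
    return hazards

# I: R2 <- R1 + R3 ; J: R4 <- R2 + R3  -> RAW
print(classify_hazard(("R2", {"R1", "R3"}), ("R4", {"R2", "R3"})))  # ['RAW']
```

The same call with J writing R3 reports WAR, and with J writing R2 reports WAW, matching the examples above.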

6) A program having 10 instructions (without Branch and Call instructions) is executed on non-pipeline and pipeline processors. All instructions are of the same length, there are 4 pipeline stages, and the time required for each stage is 1 nsec. (Assume the four stages are Fetch Instruction, Decode Instruction, Execute Instruction, Write Output.)
i) Calculate the time required to execute the program on a non-pipeline and a pipeline processor.
ii) Show the pipeline processor with a diagram.

Non-Pipeline Processor:
In a non-pipeline processor, each instruction is executed sequentially. Since
each instruction has 4 stages (Fetch, Decode, Execute, Write), and each stage
takes 1 nanosecond (nsec), the total time for one instruction is:
Time for one instruction = 4 × 1 nsec = 4 nsec
Time for 10 instructions = 10 × 4 nsec = 40 nsec
Thus, the total time required on a non-pipelined processor is 40 nsec.

Pipeline Processor:
In a pipeline processor, after the first instruction enters the second stage, the
next instruction can enter the first stage, allowing for overlapping execution. The
pipeline allows each instruction to start every 1 nsec, but the first instruction still
takes 4 nsec to fully complete. The remaining instructions follow one after
another with a 1 nsec gap between them.

Time for first instruction = 4 nsec

After the first instruction, each additional instruction takes 1 nsec to complete. Since there are 9 remaining instructions, they take:
Time for remaining 9 instructions = 9 × 1 nsec = 9 nsec

Thus, the total time required for the pipeline processor is:
Total time = 4 nsec + 9 nsec = 13 nsec

So, the total time required on a pipelined processor is 13 nsec.

At time 1, Instruction 1 starts the Fetch stage.
At time 2, Instruction 1 moves to the Decode stage, and Instruction 2 starts the Fetch stage.
This pattern continues until Instruction 10 completes execution at time 13 nsec.
This overlapping of stages allows the pipeline processor to significantly reduce
the overall execution time compared to the non-pipeline processor.
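The two timings follow the standard formulas n·k·t (non-pipelined) and (k + n − 1)·t (pipelined); a quick check with the values from the problem:

```python
def non_pipelined_time(n, k, t):
    """Every instruction runs all k stages alone: n * k * t."""
    return n * k * t

def pipelined_time(n, k, t):
    """First instruction fills the pipe (k cycles), then one completes per cycle."""
    return (k + (n - 1)) * t

n, k, t = 10, 4, 1  # 10 instructions, 4 stages, 1 nsec per stage
print(non_pipelined_time(n, k, t), pipelined_time(n, k, t))  # 40 13
```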

7) Draw and explain 4 stage instruction pipelining and briefly describe the
hazards associated with it.

Fetch (IF):
The CPU fetches the instruction located at the address given by the PC
(Program Counter) from memory, increments the PC, and stores the instruction
in the Instruction Register (IR).

Decode (ID):
The CPU decodes the fetched instruction to understand the operation and
determine the sources of data (registers, memory, or immediate values). Control
signals are generated to execute the instruction.

Execute (EX):
The decoded instruction is executed. For example, the ALU performs operations
like addition, subtraction, or bitwise logic. If the instruction is a memory
load/store, memory addresses are calculated, or branch conditions are
evaluated.

Write-Back (WB):
The result of the operation is written to the appropriate register or memory
location, completing the instruction’s execution. For example, the result of an
arithmetic operation might be written to a general-purpose register.
Hazards:

Data Hazards:
Occur when an instruction depends on the result of a previous instruction that
has not yet completed.
Example: If I2 needs data from I1 before I1 has completed its execution.

Solutions:
Use techniques like forwarding (bypassing) or pipeline stalls to resolve data
hazards.

Control Hazards:
Occur when the pipeline makes wrong assumptions about the next instruction to
execute, usually due to branches (e.g., if/else conditions or loops).
Example: A branch instruction changes the flow of execution, and the pipeline
has already fetched the next sequential instruction, which may not be correct.

Solutions:
Use branch prediction or pipeline flushing to handle control hazards.

Structural Hazards:
Occur when two or more instructions require the same hardware resource at the
same time.
Example: Both the fetch and write-back stages need access to the memory
simultaneously.

Solutions:
Use resource duplication (e.g., separate caches for instruction and data memory)
to mitigate structural hazards.
