Microprocessor

The document provides a comprehensive overview of the evolution of microprocessors, detailing key models from the Intel 8085 to the Intel Core i9, highlighting their specifications and applications. It categorizes microprocessors into generations based on advancements in architecture, processing power, and functionality, from 4-bit to modern 64-bit processors. Additionally, it discusses the operational modes of the Intel 8086 microprocessor, memory segmentation, addressing modes, and architectural differences between Von Neumann and Harvard architectures.


1. Intel 8085 (1976) –

8-bit microprocessor used in early computers, single +5 V power supply, Clock Speed: 3 MHz, Memory Addressing: 16-bit (can access 64 KB of memory), no built-in multiplication or division instructions. Applications: early calculators, traffic light controllers, simple embedded systems.

2. Intel 8086 (1978) –


First 16-bit processor, processes more data per instruction than the 8085, segmented memory architecture (divides memory into 64 KB segments), Clock Speed: 5-10 MHz, Registers: 16-bit, Memory Addressing: 20-bit (1 MB). Applications: IBM PC and early personal computers.

3. INTEL 8088 (1979) -

A 16-bit processor created as a cheaper version of Intel's 8086, with an 8-bit external data bus. This chip became the most popular in the computer industry when IBM used it for its first PC.

4. INTEL 80186 & 80188 (1982)

16-bit, clock speed was 6 MHz, 80188 was a cheaper version of the 80186. Integrated additional components like: Interrupt Controller, Clock Generator, Local Bus Controller, Counters.

5. Intel 80286 (1982) –


Improved performance over the 8086. Introduced protected mode, allowing multiple applications to run simultaneously without crashing. Clock Speed: 6-25 MHz, Memory Addressing: 24-bit (16 MB). Applications: IBM PC AT (Advanced Technology), early business computers.

6. Intel 80386 (1986) – First 32-bit x86 processor; data bus is 32-bit and address bus is 32-bit; clock speed 16 MHz to 33 MHz. The Intel 80386 became one of the best-selling microprocessors in history.

7. Intel 80486 (1989) – 32-bit, clock speed 16 MHz to 100 MHz, 8 KB of on-chip cache memory was introduced.

8. INTEL PENTIUM (1993) – 32-bit, originally named 80586, clock speed 66 MHz, data bus is 64-bit and address bus is 32-bit, Cache memory: 8 KB for instructions, 8 KB for data.

9. Intel Pentium Series (Pro, II, III, 4) (1993-2000s) – The Pentium series revolutionized computing by introducing superscalar architecture (executing multiple instructions per clock cycle). Greatly improved graphics and multimedia processing. Dual pipelines, better graphics, used in high-end computers and servers, MMX technology for better multimedia performance, high-speed performance. Used for personal computers, gaming, office applications, early internet browsing.

10. INTEL DUAL CORE (2006) :- 32-bit or 64-bit. It has two cores. Both cores have their own internal bus and L1 cache, but share the external bus and L2 cache. It supported SMT (Simultaneous Multi-Threading) technology; applications such as Adobe Photoshop took advantage of SMT.

11. INTEL CORE I3 (2010) :- 64-bit , 2 physical cores , clock speed is from 2.93 GHz to 3.33 GHz, 64 KB of L1 cache per core,
512 KB of L2 cache and 4 MB of L3 cache.

12. INTEL CORE I5 (2009) :- 64-bit , 4 physical cores , clock speed is from 2.40 GHz to 3.60 GHz, It has 64 KB of L1 cache per
core, 256 KB of L2 cache and 8 MB of L3 cache.

13. INTEL CORE I7 (2008) :- 64-bit , clock speed is from 2.66 GHz to 3.33 GHz , It has 64 KB of L1 cache per core, 256 KB of L2
cache and 8 MB of L3 cache.

14. INTEL CORE I9 (2017):- 64-bit , clock speed is from 3.33 GHz to 5.2 GHz, It has 64 KB of L1 cache per core, 1 MB of L2
cache per core and 13.75 MB of L3 cache.
Module 2
Question 1.

Sol :- Microprocessors have evolved through different generations, each marked by advancements in architecture,
processing power, and functionality. These generations are classified based on factors like bit-size, instruction set,
processing speed, and integration of components.

1st Generation (1971-1973) - 4-bit & Early 8-bit Processors :-


Based on PMOS (P-channel Metal-Oxide-Semiconductor) technology. Limited processing power and slow clock speeds. Could process 4-bit data at a time. Used in simple calculators and embedded systems.

Eg Intel 4004 (1971) – First commercial microprocessor, 4-bit, 740 kHz clock speed , Intel 8008 (1972) – First 8-bit
microprocessor.

2nd Generation (1974-1978) - 8-bit Microprocessors :-


Built using NMOS (N-channel Metal-Oxide-Semiconductor) technology, which improved speed. Could process 8-bit data at a time. Better memory addressing and support for external peripherals. Used in early personal computers and industrial applications.

Eg Intel 8080 (1974) – First widely used 8-bit microprocessor. Intel 8085 (1976) – Improved version of 8080, single 5V power
supply, used in early computing devices.

3rd Generation (1978-1985) - 16-bit Microprocessors:-


Processed 16-bit data, allowing faster and more powerful computations. Segmented memory architecture,
meaning it could access more memory efficiently. Introduction of multiprogramming and higher clock
speeds. Used in early IBM PCs and business computers.
Intel 8086 (1978) – First 16-bit microprocessor, foundation of the x86 architecture , Intel 8088 (1979) – Similar to 8086 but
with an 8-bit external data bus, used in the first IBM PC (1981).

Intel 8086 fits into the 3rd generation because:


✔ It was the first 16-bit microprocessor, making it significantly more powerful than 8-bit processors like the 8085.
✔ Introduced segmented memory addressing, allowing it to access 1 MB of memory (much more than previous generations).
✔ Used in early personal computers, setting the stage for future x86 processors.

4th Generation (1985-1995) - 32-bit Microprocessors :-


Transition from 16-bit to 32-bit processing. Pipelining and cache memory introduced for better performance. Better multitasking and support for graphical user interfaces (GUIs). Used in personal computers, workstations, and gaming.

Examples: Intel 80386 (1986) – First 32-bit x86 microprocessor, introduced virtual memory support. Intel 80486 (1989) – Improved performance, built-in floating-point unit (FPU), and better cache.

5th Generation (1993-Present) - 64-bit & Modern Microprocessors :-


Superscalar architecture (executing multiple instructions per cycle). Multi-core processors, where multiple cores share tasks to improve efficiency. Hyper-threading, virtualization, and AI acceleration introduced. Used in modern personal computers, gaming, AI, and cloud computing.

Examples: Intel Pentium Series (1993-2000s) – Revolutionized computing with superscalar processing. Intel Core Series (i3, i5, i7, i9) – Modern processors with multi-core technology and high-speed performance.
Placement of Intel 8086 in 3rd Generation

✔ Intel 8086 (1978) belongs to the 3rd generation because it introduced 16-bit architecture, allowing faster data processing
and better memory management.
✔ It laid the foundation for the x86 architecture, which is still used in modern processors.
✔ The segmented memory model was a key innovation that allowed 1 MB of memory access, a huge jump from the 8-bit
processors of the 2nd generation.

Why is 8086 Important?

➡ The entire modern x86 processor family (Pentium, Core i3/i5/i7/i9) is based on the architecture introduced by the 8086.
➡ It was used in IBM’s first PC, making it a historic milestone in computing.

The generations can be remembered with a simple analogy:

1. Baby (4-bit) – Simple calculators (4004, 8008).
2. School Kid (8-bit) – Early computers (8080, 8085).
3. Teenager (16-bit) – IBM PCs and multitasking (8086, 8088).
4. College Student (32-bit) – Windows, graphics, speed (80386, 80486).
5. Professional (64-bit & Multi-core) – Modern fast processors (Pentium, Core i3/i5/i7/i9).

Q2 :- Microprocessor 8086

The processor operates in both minimum and maximum modes. If a system requires multiple processors, which mode should be used and why? Explain how the control signals change in this mode compared to the other.

• Modes of Operation in 8086 Microprocessor :- The Intel 8086 microprocessor operates in two modes:

1. Minimum Mode (Single Processor System)

2. Maximum Mode (Multiprocessor System)

• When to Use Each Mode?

If a system requires a single processor, then Minimum Mode should be used.

If a system requires multiple processors (like coprocessors or multiprocessor configurations), then Maximum Mode
should be used.

• Minimum Mode (Single Processor Mode)


In this mode, the 8086 processor works alone and controls all system operations by itself.
It is simpler because it does not need any external bus controller.
The processor itself generates all the signals needed to control memory, input/output devices, and interrupts

Working : -

1. The processor fetches instructions from memory.


2. If it needs data from memory or an I/O device, it generates the appropriate control signals (M/IO̅, RD̅, WR̅).
3. The memory or I/O device responds, and data transfer happens.
4. The processor processes the data and executes the required operation.

Examples: Personal Computers (PCs), simple embedded systems (like calculators, traffic light controllers).
• Maximum Mode (Multiprocessor Mode)
In this mode, multiple processors or coprocessors (like Intel 8087 math coprocessor) work together.
The 8086 processor does not control all the operations by itself. Instead, an external Bus Controller (Intel 8288) is
used to generate control signals.

Example :- High-Performance Servers , Multiprocessor Systems

Working :-

1. The 8086 processor works along with other processors or coprocessors.

2. Since multiple processors are involved, Intel 8288 Bus Controller is used to manage communication and signal
generation.

3. The processor executes tasks in parallel with other processors, increasing efficiency.

4. The LOCK̅ signal ensures that important operations are not interrupted by other processors.

If only one processor is used → Minimum Mode should be used because it is simpler and requires fewer components.

If multiple processors or coprocessors are used → Maximum Mode should be used because it allows better coordination
between processors for faster performance.

Question 2 :- The segmented memory model allows access to 1 MB of memory, but individual segments are limited to 64 KB. If a program needs to process a data structure larger than 64 KB, how can it efficiently access memory beyond a single segment? Provide an example demonstrating your approach.

Memory Segmentation in 8086

• The 8086 microprocessor uses a segmented memory architecture.

• It has a 20-bit address bus, allowing it to access 1 MB (2^20 = 1,048,576 bytes) of memory.

• However, each segment is limited to 64 KB (because segment registers are 16-bit).

This means a program cannot directly access memory beyond 64 KB within a single segment.

How to Access Memory Beyond a Single Segment?


To access memory beyond a single 64 KB segment, we use segment overlapping by changing the segment register (CS, DS, SS,
ES) while keeping the offset within 64 KB.

Approach: Changing the Segment Register

1. Load a new segment base address into the segment register.

2. Adjust the offset to point to the correct memory location in the new segment.

3. Use this new segment:offset pair to access memory beyond the previous segment's limit.
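The steps above can be sketched in Python (an illustration of the address arithmetic, not 8086 code — the segment values are chosen for the example):

```python
def physical_address(segment: int, offset: int) -> int:
    """8086 physical address: the 16-bit segment shifted left 4 bits
    (i.e. segment * 16) plus the 16-bit offset, truncated to 20 bits."""
    return ((segment << 4) + offset) & 0xFFFFF

# One segment covers 64 KB: DS = 0x1000 spans 0x10000..0x1FFFF.
assert physical_address(0x1000, 0x0000) == 0x10000
assert physical_address(0x1000, 0xFFFF) == 0x1FFFF

# To reach memory beyond that 64 KB window, load a new segment base:
# DS = 0x2000 spans the next 64 KB, 0x20000..0x2FFFF.
assert physical_address(0x2000, 0x0000) == 0x20000
```

Because the segment base moves in 16-byte steps, consecutive 64 KB windows overlap heavily; a program walks through a large buffer by bumping the segment register while keeping the offset within 16 bits.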

Question 3:- An instruction in 8086 assembly language is given as follows: MOV AX, [BX + SI + 0x20]

(a) Identify the addressing mode used to access memory.
(b) Given the register contents, calculate the memory address accessed.

Ans = MOV AX, [BX + SI + 0x20] This instruction moves data from a memory location into the AX register.

(a) Identify the Addressing Mode

The addressing mode used here is Based Indexed with Displacement.

• MOV AX, [BX + SI + 0x20] → This means AX = Memory[BX + SI + 0x20].


• BX (Base Register) – Holds a base address.
• SI (Source Index Register) – Used for indexed addressing.
• 0x20 (32 in decimal) – This is a displacement (constant offset).

This is a combination of:Base Register (BX) , Index Register (SI) , Displacement (0x20)
This makes it "Base Indexed with Displacement Addressing Mode".

(b) Calculate the Accessed Memory Address

To calculate the effective memory address, we need values for BX, SI, and the displacement.

Assumption:

Let's assume:

• BX = 0x1000 (4096 in decimal)

• SI = 0x0050 (80 in decimal)

• Displacement = 0x20 (32 in decimal)

Effective Address Calculation:

Effective Address=BX+SI+Displacement

Substituting the values:

=0x1000+0x0050+0x20

Converting to decimal:

=4096+80+32

=4208 (0x1070 in hexadecimal)

Thus, the memory address accessed is 0x1070 (4208 in decimal).
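The same calculation can be checked with a short Python sketch, using the assumed register values from above:

```python
# Assumed register values from the worked example (not fixed by the question).
BX, SI, DISP = 0x1000, 0x0050, 0x20

# Based indexed with displacement: EA = BX + SI + displacement
effective_address = BX + SI + DISP
assert effective_address == 0x1070   # 4096 + 80 + 32 = 4208
```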


Question 4:-

The 8086 pipeline enables parallel execution of the instruction fetch and execution phases. However, certain conditions cause pipeline stalls or flushes. Explain two such scenarios and describe how they affect execution time. Provide an example where a branch instruction leads to a pipeline flush.

Ans :- The 8086 microprocessor uses pipelining to enable parallel execution of instruction fetch and execution phases. This
improves efficiency by allowing the CPU to fetch the next instruction while the current instruction is being executed.

However, certain conditions cause pipeline stalls or flushes, reducing execution efficiency.

1.Structural Hazard (Resource Conflict)

• Cause: When two stages of the pipeline need the same resource simultaneously.

• Effect: The pipeline stalls (delays) until the required resource is free.

Example:

• In early processors without separate instruction and data memory, if an instruction fetch and a data read/write need
access to the same memory, the CPU must wait for one to complete before proceeding.

2.Data Hazard (Data Dependency)

• Cause: When an instruction depends on the result of a previous instruction that has not yet completed.

• Effect: The CPU must wait for the previous instruction to finish execution before continuing.

3. Control Hazard (Branch or Jump Hazard)

• Cause: When the CPU fetches the wrong instruction due to a branch or jump.

• Effect: The pipeline is flushed (emptied), and instruction fetching restarts from the correct location.
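A control hazard can be illustrated with a toy two-stage model in Python (a simplified sketch, not the real 8086 prefetch queue): while one instruction executes, the next sequential one is prefetched, and a taken jump discards that prefetched work.

```python
# Toy program: a taken jump at index 2 skips over ADD and SUB.
program = ["MOV", "CMP", "JMP 5", "ADD", "SUB", "NOP"]

pc, flushes, executed = 0, 0, []
while pc < len(program):
    instr = program[pc]
    prefetched = pc + 1          # fetched in parallel, assumed sequential
    executed.append(instr)
    if instr.startswith("JMP"):
        target = int(instr.split()[1])
        if target != prefetched:  # prefetched instruction was wrong
            flushes += 1          # -> pipeline flush, wasted fetch cycle
        pc = target
    else:
        pc = prefetched

assert executed == ["MOV", "CMP", "JMP 5", "NOP"]
assert flushes == 1
```

Each flush costs at least one wasted fetch, which is why branch-heavy code ran slower than straight-line code on pipelined processors.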

Question :- A microprocessor needs to access 120 GB of memory, where each location stores a single bit. Determine the
minimum number of address lines required for this configuration.

To determine the minimum number of address lines required to access 120 GB of memory, follow these steps:

Step 1: Understand Memory Size and Addressing

• The total memory size is given as 120 GB (Gigabytes).

• Each memory location stores a single bit (not a byte).


Step 2: Convert Memory Size to Bits

Since 1 GB = 2^30 bytes and 1 byte = 8 bits:

1 GB = 8 × 2^30 bits

120 GB = 120 × 8 × 2^30 bits

= 960 × 2^30 bits

= 960 × 1,073,741,824 bits

= 1,030,792,151,040 bits

Step 3: Find the Minimum Number of Address Lines

To address N memory locations, the required number of address lines (A) is determined by:

2^A ≥ Total Memory Locations

2^A ≥ 1,030,792,151,040

Taking log base 2 on both sides:

A ≥ log2(1,030,792,151,040) ≈ 39.91

Since the number of address lines must be a whole number, A = 40 address lines are required (2^40 = 1,099,511,627,776 ≥ 1,030,792,151,040).
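The calculation can be verified with a few lines of Python:

```python
import math

total_bits = 120 * 8 * 2**30   # 120 GB of memory, one bit per location
address_lines = math.ceil(math.log2(total_bits))

assert total_bits == 1_030_792_151_040
assert address_lines == 40
assert 2**address_lines >= total_bits          # 40 lines suffice
assert 2**(address_lines - 1) < total_bits     # 39 would not
```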

Question :- Explain the key differences between Von Neumann and Harvard architectures, discussing their benefits and
drawbacks. Provide examples of processors utilizing each type.

Detailed Explanation of Von Neumann and Harvard Architectures

Computer architectures define how a system stores and processes data and instructions. The two major architectures are:

1.Von Neumann Architecture – Uses a single memory for instructions and data.
2.Harvard Architecture – Uses separate memory for instructions and data.

1. Von Neumann Architecture

What is it?

The Von Neumann Architecture was proposed by John von Neumann in 1945. It is the foundation of modern computers. It uses:

• One memory for both program instructions and data.

• One bus (communication path) to transfer both data and instructions.

This means that the CPU must fetch instructions and data one by one, leading to a bottleneck (delay).

Key Features

✔ Single memory for data and instructions → Less hardware needed.


✔ Single bus → Simpler design.
✔ Sequential execution → Instructions and data must be fetched one after another.

Example
• Intel processors (8086, Pentium, Core i3/i5/i7/i9).

• AMD Ryzen processors used in PCs and laptops.

Advantages

Simple Design – Only one memory and one bus make it easy to build.
Cost-Effective – Uses less hardware, reducing cost.
Efficient Memory Use – The same memory is shared between instructions and data, making it flexible.

Disadvantages

Slow Execution – Since instructions and data share the same bus, they cannot be fetched at the same time. This causes the
Von Neumann Bottleneck.
Risk of Overwriting – A faulty program might modify its own instructions and crash the system.

2. Harvard Architecture

What is it?

The Harvard Architecture was developed at Harvard University for early computers. It improves performance by separating:

• Instruction memory → Stores program instructions.

• Data memory → Stores data values.

• Two separate buses → One for instructions, one for data.

Since data and instructions travel separately, they can be fetched at the same time, making execution much faster.

Key Features

✔ Separate memory for instructions and data → No interference.


✔ Two buses → Parallel execution of instruction and data access.
✔ Higher speed and efficiency.

Example

• AVR microcontrollers (used in Arduino).

• DSP (Digital Signal Processors) for high-speed signal processing.

Advantages

Faster Execution – CPU can fetch data and instructions at the same time.
No Bottleneck – No waiting because both data and instructions have their own paths.
More Secure – Instructions cannot modify themselves, reducing errors.

Disadvantages

More Complex Design – Two separate memories and buses require additional hardware.
Higher Cost – More components make it expensive.
Less Flexible – Fixed memory allocation (programs cannot modify instructions easily).
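The bus difference between the two architectures can be made concrete with a toy cycle count in Python (an illustrative model with made-up workload numbers, not timings of any real processor):

```python
def bus_cycles(accesses, shared_bus: bool) -> int:
    """accesses: list of (instruction_fetches, data_accesses) per step.
    A shared bus (Von Neumann) serializes the two kinds of access;
    separate buses (Harvard) let them proceed in the same cycle."""
    if shared_bus:
        return sum(i + d for i, d in accesses)    # one after another
    return sum(max(i, d) for i, d in accesses)    # in parallel

# Hypothetical workload: 3 instruction fetches, 2 of them with a data access.
workload = [(1, 1), (1, 0), (1, 1)]
assert bus_cycles(workload, shared_bus=True) == 5   # Von Neumann bottleneck
assert bus_cycles(workload, shared_bus=False) == 3  # Harvard, no waiting
```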
MODULE 03
Question:- Describe the main features of the 8051 microcontroller and how it differs from a microprocessor, providing relevant
examples.

The 8051 microcontroller is one of the most commonly used 8-bit microcontrollers. It was developed by Intel in 1980 and is
widely used in embedded systems, automation, robotics, and industrial applications.

What is a Microcontroller?

A microcontroller is a small computer on a single chip. It has a CPU (processor), memory (RAM & ROM), and input/output
ports (I/O) built into a single chip. This makes it useful for controlling devices like washing machines, remote controls, and
microwave ovens.

Main Features of the 8051 Microcontroller

1. 8-bit CPU (Central Processing Unit)

• The 8051 microcontroller is an 8-bit processor, meaning it processes 8-bit data at a time.

• Example: If we add two numbers, they should be 8-bit numbers (0 to 255 in decimal).

2 . 16-bit Address Bus

• The 8051 can access up to 64 KB of program memory (ROM) and 64 KB of data memory (RAM).

• It uses a 16-bit address bus to locate memory addresses.

3. 128 Bytes of RAM (Internal Memory)

• Used for storing temporary data while executing programs.

4 . 4 KB of ROM (Program Memory)

• The 8051 comes with 4 KB of built-in ROM, which stores the program permanently.

• Example: A washing machine program stored in ROM will run every time you press "Start."

5 . 32 I/O Pins (Input/Output Ports)

• The 8051 has 4 input/output (I/O) ports (P0, P1, P2, P3), each with 8 pins.
• Used to connect LEDs, sensors, switches, and motors.

• Example: A microcontroller in a traffic light system controls the red, yellow, and green lights.

6 . 2 Timers and 1 Serial Port

• Timers help in generating time delays and counting events.

• Serial Port (UART) allows communication between the microcontroller and a computer or another device.

7 . 5 Interrupt Sources

• Interrupts are signals that tell the microcontroller to pause the current task and handle an urgent task.

• Example: If you press a button to stop a machine, the microcontroller immediately stops the motor using an interrupt.

8 . Harvard Architecture

• The 8051 uses the Harvard architecture, meaning it has separate memory for program (ROM) and data (RAM).

• This makes the 8051 faster than Von Neumann-based processors, where both program and data share the same
memory.

Advantages of the 8051:

Small Size – Everything is inside a single chip.


Low Power Consumption – Ideal for battery-powered devices.
Cost-Effective – Cheaper than microprocessors.
Faster Execution – Uses Harvard architecture for speed.
Reliable – Used in many real-world applications like traffic lights, medical devices, and industrial automation.

Conclusion

• The 8051 microcontroller is a self-sufficient computing unit, making it ideal for automation and control applications.

• A microprocessor, on the other hand, is just a CPU and requires additional components like RAM, ROM, and I/O devices
to function.

• The 8051 is widely used in embedded systems due to its simplicity, low power consumption, and built-in peripherals.
Question :- Provide a detailed explanation of an assembler, a compiler, and an emulator, focusing on their functionality,
importance in programming, and role in system development.

In programming and system development, assemblers, compilers, and emulators play essential roles in translating, executing,
and testing code. Let’s understand each one in detail.

1. Assembler
An assembler is a software tool that converts assembly language code into machine code (binary instructions) that a
computer's CPU can execute.
Functionality
• Takes assembly language code (mnemonics like MOV, ADD, SUB) as input.
• Translates mnemonics into machine code (binary format) that the processor understands.
• Generates an object file (.obj or .o) and sometimes a listing file (.lst) for debugging.
Importance in Programming
• Used in low-level programming where direct control of hardware is needed.
• Efficient and fast execution compared to high-level languages.
• Essential for embedded systems, device drivers, and operating systems.
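The mnemonic-to-machine-code translation can be sketched as a toy assembler in Python (the opcode values here are invented for illustration, not real 8086 encodings):

```python
# Hypothetical one-byte instruction set for demonstration only.
OPCODES = {"MOV": 0x01, "ADD": 0x02, "SUB": 0x03}

def assemble(mnemonics):
    """Translate a list of mnemonics into a bytes object of machine code."""
    return bytes(OPCODES[m] for m in mnemonics)

machine_code = assemble(["MOV", "ADD", "SUB"])
assert machine_code == b"\x01\x02\x03"
```

A real assembler additionally encodes operands, resolves labels to addresses, and emits an object file, but the core job is exactly this lookup-and-emit translation.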

2. Compiler
A compiler is a program that translates high-level programming language (C, Java, Python) into machine code (binary)
so that the computer can execute it.
Functionality
• Reads entire high-level source code.
• Converts it into machine code or an intermediate representation.
• Detects and reports errors in the source code.
• Produces an executable file (.exe, .out) that can run on the target system.
Importance in Programming
• Enables development in high-level languages like C, C++, and Java.
• Improves portability – the same high-level code can be compiled for different processors.
• Optimizes code for better performance and memory usage

3. Emulator
An emulator is a software or hardware tool that mimics the behavior of a different system, allowing software or
hardware designed for one platform to run on another.
Functionality
• Simulates hardware and software environments for testing and debugging.
• Allows programs designed for one architecture (like ARM) to run on another (like x86).
• Useful for embedded systems development, gaming, and OS testing.
Importance in System Development
• Helps test software without real hardware.
• Used in game emulators (e.g., playing PlayStation games on a PC).
• Essential for embedded system testing (e.g., running Android apps on a Windows PC).

Assembler is used to convert assembly language into machine code.


Compiler is used to convert high-level language into machine code.
Emulator is used to simulate another system to run or test software without real hardware.
