Different Types of Computer Architecture

SILIGURI INSTITUTE OF TECHNOLOGY

NAME- SNEHA SARKAR


ROLL NO- 11900123134
DEPARTMENT- CSE
SECTION- C
SEMESTER- Third
SUBMITTED TO- PROFESSOR Mrs. Ankita Sinha
Introduction to Computer Architecture

• Definition:
• Computer architecture refers to the conceptual design and fundamental
operational structure of a computer system.
• It encompasses the hardware and software resources, as well as the logical
aspects of how data is processed and managed.
• Importance:
• Understanding computer architecture is crucial for optimizing performance,
enhancing computing efficiency, and guiding the development of new
technologies.
Basic Computer Architecture

• Components:
• Central Processing Unit (CPU): The brain of the computer, responsible for
executing instructions.
• Memory: Where data and instructions are stored temporarily or
permanently.
• Input/Output (I/O) Devices: Interfaces for the user and external devices to
interact with the computer.
• Bus: Communication pathways that transfer data between components.
• Process Flow:
• Instructions are fetched from memory, decoded by the CPU, executed, and
the results are stored back in memory or sent to an output device.
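To make the process flow concrete, here is a minimal Python sketch of the fetch-decode-execute cycle. The toy instruction set, the memory addresses, and the single accumulator register are assumptions made for illustration; they are not taken from any real processor.

```python
# Toy fetch-decode-execute loop. Opcodes, addresses, and the single
# accumulator register are illustrative assumptions.
memory = {
    0: ("LOAD", 100),    # acc = memory[100]
    1: ("ADD", 101),     # acc = acc + memory[101]
    2: ("STORE", 102),   # memory[102] = acc
    3: ("HALT", None),
    100: 7, 101: 5, 102: 0,
}

pc = 0    # program counter: address of the next instruction
acc = 0   # accumulator register

while True:
    opcode, operand = memory[pc]   # fetch the next instruction
    pc += 1
    if opcode == "LOAD":           # decode, then execute
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]
    elif opcode == "STORE":
        memory[operand] = acc      # write the result back to memory
    elif opcode == "HALT":
        break

print(memory[102])  # 12: the sum was stored back in memory
```

Each pass through the loop is one fetch-decode-execute step; a real CPU adds registers, interrupts, and I/O, but the shape of the cycle is the same.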
Harvard Architecture
• Definition:
• A computer architecture with physically separate storage and signal pathways for
instructions and data.
• Components:
• Separate Memories: Two distinct memory modules, one for instructions and another for
data.
• Separate Buses: Independent buses for transferring instructions and data, allowing
simultaneous access.
• Advantages:
• Increased Speed: Parallel processing of instructions and data enhances computational
speed.
• Efficiency: Minimizes bottlenecks by eliminating contention between instruction and data
fetches (see the cycle-count sketch at the end of this slide).
• Applications:
• Commonly used in embedded systems, digital signal processors (DSPs), and microcontrollers.
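As a rough illustration of why the separate pathways help, the sketch below compares per-instruction cycle counts under a deliberately simplified cost model: one cycle per memory access and one data access per instruction, both of which are assumptions chosen for the example rather than measured figures.

```python
# Simplified cycle-count model contrasting a shared bus with separate
# instruction and data buses. The one-cycle access cost and the single data
# access per instruction are illustrative assumptions.
FETCH_CYCLES = 1   # cost of one instruction fetch
DATA_CYCLES = 1    # cost of one data read or write

def shared_bus_cycles(n_instructions):
    # Von Neumann style: the fetch and the data access take turns on one bus.
    return n_instructions * (FETCH_CYCLES + DATA_CYCLES)

def separate_bus_cycles(n_instructions):
    # Harvard style: the fetch and the data access overlap on separate buses.
    return n_instructions * max(FETCH_CYCLES, DATA_CYCLES)

print(shared_bus_cycles(1000))    # 2000 cycles
print(separate_bus_cycles(1000))  # 1000 cycles
```

Under these assumptions the shared-bus machine needs twice as many cycles for the same work, which is the effect the Von Neumann bottleneck slide returns to.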
Von Neumann Architecture

• Definition:
• A computer architecture where both instructions and data share the same memory
space and data pathways.
• Components:
• Single Memory: Unified memory for storing both program instructions and data.
• Single Bus: A single bus system for fetching instructions and data sequentially.
• Advantages:
• Simplicity: Easier to design and program, making it suitable for general-purpose
computing.
• Flexibility: Can execute self-modifying code, useful for certain types of algorithms.
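The flexibility point can be illustrated with the same kind of toy machine used earlier: because instructions and data share one memory, a running program can overwrite its own instructions. The instruction names and addresses below are invented for the example.

```python
# Toy illustration of self-modifying code in a unified memory: the program
# rewrites the instruction at address 2 before it is reached. Opcodes and
# addresses are illustrative assumptions.
memory = {
    0: ("STORE_INSTR", 2, ("ADD", 10)),  # overwrite the instruction at address 2
    1: ("LOAD", 10),                     # acc = memory[10]
    2: ("NOP", None),                    # placeholder, replaced before it runs
    3: ("HALT", None),
    10: 5,
}

pc, acc = 0, 0
while True:
    instr = memory[pc]
    pc += 1
    op = instr[0]
    if op == "STORE_INSTR":
        memory[instr[1]] = instr[2]      # instructions are just data in memory
    elif op == "LOAD":
        acc = memory[instr[1]]
    elif op == "ADD":
        acc += memory[instr[1]]
    elif op == "HALT":
        break

print(acc)  # 10: the rewritten ADD instruction executed
```

In a strict Harvard machine the same trick is not available, since the program cannot write into its own instruction memory.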
Von Neumann Bottleneck Problem

• Explanation:
• A performance limitation caused by the shared bus architecture, where the CPU must fetch
instructions and data sequentially, leading to delays.
• Causes:
• Shared Bus: The same bus is used for both instruction fetches and data transfers, leading to
competition and delays.
• Memory Access Delays: The CPU often waits for data or instructions to be fetched from
memory, slowing down execution.
• Impact:
• Reduced Throughput: Limits the maximum throughput of a system, affecting performance,
especially in high-speed computing.
• Mitigations:
• Caching: Utilizing small, fast memory caches to store frequently used data and instructions (sketched at the end of this slide).
• Pipelining: Breaking down instruction execution into multiple stages that can be processed
concurrently.
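As a rough sketch of how caching eases the bottleneck, the toy model below counts trips to main memory when a small cache sits in front of it. The cache size, the access pattern, and the FIFO eviction policy are all illustrative assumptions.

```python
# Toy cache in front of main memory: repeated accesses to the same few
# addresses mostly hit the cache instead of crossing the shared bus.
from collections import OrderedDict

CACHE_SIZE = 4
cache = OrderedDict()
main_memory_reads = 0

def read(address):
    global main_memory_reads
    if address in cache:            # cache hit: no trip over the memory bus
        return cache[address]
    main_memory_reads += 1          # cache miss: fetch from main memory
    value = address * 2             # stand-in for whatever is stored there
    cache[address] = value
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)   # evict the oldest entry (FIFO)
    return value

# A loop that keeps touching the same four addresses hits the cache
# on every access after the first pass.
for _ in range(100):
    for addr in (0, 1, 2, 3):
        read(addr)

print(main_memory_reads)  # 4: only the first touch of each address misses
```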
Flynn’s Classification of Computer Architecture
• Overview:
• A classification system proposed by Michael J. Flynn in 1966, categorizing computer
architectures by the number of concurrent instruction streams and data streams.
• Categories:
• Single Instruction, Single Data (SISD):
• A traditional sequential architecture where one instruction operates on one data set at a time.
• Single Instruction, Multiple Data (SIMD):
• A parallel architecture where one instruction operates on multiple data sets simultaneously, often used in
vector processing (a short sketch follows at the end of this slide).
• Multiple Instruction, Single Data (MISD):
• A rare and specialized architecture where multiple instructions operate on a single data set, often for
fault-tolerant systems.
• Multiple Instruction, Multiple Data (MIMD):
• A highly parallel architecture where multiple instructions operate on multiple data sets simultaneously,
used in multiprocessor systems.
• Applications:
• SISD in traditional CPUs, SIMD in GPUs, MISD in redundant systems, and MIMD in
supercomputers.
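The SISD versus SIMD distinction can also be sketched in code. In the NumPy example below, the explicit loop issues one addition per pair of elements (SISD style), while the vectorised expression applies a single operation across the whole array; on most CPU builds NumPy maps such operations onto SIMD instructions, though that depends on the hardware and how the library was built.

```python
# SISD-style loop versus a SIMD-style vectorised operation using NumPy.
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# SISD style: one addition at a time, element by element.
sisd_result = np.empty_like(a)
for i in range(len(a)):
    sisd_result[i] = a[i] + b[i]

# SIMD style: a single vectorised operation over many elements at once.
simd_result = a + b

assert np.array_equal(sisd_result, simd_result)
```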
Conclusion

• Summary:
• Different computer architectures offer various strengths and weaknesses,
making them suitable for specific applications.
• The choice of architecture impacts performance, complexity, and power
efficiency.
• Future Trends:
• Ongoing research into quantum computing, neuromorphic architectures, and
advanced parallel processing techniques.
• Final Thoughts:
• Understanding these architectures helps in selecting the right system for
specific tasks and contributes to innovation in computing technologies.
