Distributed Computing System Models
Distributed computing means splitting work and data across multiple devices instead of
relying on one central system.
This setup improves efficiency, scalability, and fault tolerance. To understand distributed
systems better,
we divide them into three models:
1. Physical model
2. Fundamental model
3. Architectural model
I. Physical Model
The physical model shows the hardware used in a distributed system. It explains how
devices are connected and interact.
1. Nodes
• These are devices like computers or servers that perform tasks and communicate
with other nodes.
• Each node has its own operating system and software to handle tasks like storage,
processing, and communication.
2. Links
• Links are the connections between nodes, like wires (copper, fiber optics) or
wireless signals.
Types of links:
Point-to-point: Connects two nodes directly.
Broadcast: One node sends data to multiple nodes.
Multi-access: Many nodes share a single communication channel.
3. Middleware
• Software that helps nodes communicate and manage resources.
• It handles tasks like fault tolerance, synchronization, and security.
4. Network Topology
• The layout of nodes and links in the system, such as star, mesh, bus, or ring.
• The choice depends on the use case and requirements.
5. Communication Protocols
Rules for data transfer, like TCP/IP, HTTP, or MQTT. These ensure smooth communication
between nodes.
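As a rough illustration of nodes, links, and protocols, the sketch below shows two "nodes" exchanging data over a point-to-point TCP link. It is a minimal sketch: localhost and threads stand in for separate machines on a real network, and the message format is made up for the example.

```python
import socket
import threading

def node_server(listener):
    """One node in its serving role: accept a connection, reply over the link."""
    conn, _ = listener.accept()
    data = conn.recv(1024)            # receive a request from the peer node
    conn.sendall(b"ack:" + data)      # respond over the same point-to-point link
    conn.close()

# Set up the listening node; port 0 lets the OS pick a free port.
listener = socket.socket()
listener.bind(("localhost", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=node_server, args=(listener,), daemon=True).start()

# The other node opens the link and sends data using the TCP protocol.
client = socket.create_connection(("localhost", port))
client.sendall(b"hello")
reply = client.recv(1024).decode()
client.close()
print(reply)  # ack:hello
```

In a real deployment the two endpoints would run on different machines, and the link could be copper, fiber, or wireless; TCP hides those physical details from the application.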
II. Fundamental Model
The fundamental model is a basic framework to understand distributed systems. It
focuses on essential concepts like interaction and behavior.
1. Interaction Model
Explains how nodes communicate, focusing on response time and delays.
2. Remote Procedure Call (RPC)
Allows a program to request a service or function from a different node as if it were local.
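The "as if it were local" idea can be sketched with Python's built-in XML-RPC support. This is a minimal example, not a production RPC framework: the server and client run in one process here, and the `add` function is a hypothetical remote service.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Hypothetical function exposed by a remote node.
def add(a, b):
    return a + b

# The "remote" node registers the function and serves requests.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client calls add() through a proxy, with the same syntax as a
# local call; the arguments and result actually travel over the network.
proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)
server.shutdown()
print(result)  # 5
```

Note that the call only looks local: network delays and failures (the concerns of the interaction model above) can still occur mid-call, which is a key difference from a true local function call.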
III. Architectural Model
The Architectural Model describes the design, structure, and interaction of components
in a distributed computing system. It helps in organizing the system efficiently to meet
scalability, performance, and reliability requirements.
1. Client-Server Model
The Client-Server Model is a centralized approach where the system is divided into two
parts: clients and servers.
Clients: These are devices or applications that request services, like web browsers or
user applications.
Servers: These are systems or applications that provide services, like databases or file
storage systems.
Working Process:
Clients send requests (e.g., to retrieve data).
Servers process these requests and respond with the required data or services.
Protocols Used:
Commonly uses HTTP, TCP/IP, or FTP for communication.
Examples:
Websites (clients: browsers; servers: web hosting systems).
Cloud computing platforms like AWS.
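The request/response cycle above can be sketched with Python's standard-library HTTP server. This is a minimal sketch, not a real web stack: the server body and the `/users` path are invented for the example, and client and server share one process.

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

# The server side: processes each client request and responds with data.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"data for " + self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the example

server = HTTPServer(("localhost", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: sends an HTTP request (like a browser would) and
# receives the server's response.
url = f"http://localhost:{server.server_port}/users"
with urllib.request.urlopen(url) as resp:
    answer = resp.read().decode()
server.shutdown()
print(answer)  # data for /users
```

The centralization is visible in the code: all data lives on the server, and the client can do nothing if the server is down. That single point of failure is exactly what the P2P model below avoids.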
2. Peer-to-Peer (P2P) Model
Unlike the client-server model, the Peer-to-Peer Model is decentralized, where all devices
(peers) are equal and can act as both clients and servers.
Working Process:
Each peer can directly request resources or services from another peer without relying
on a central server.
Peers communicate using protocols defined by the P2P system.
Key Features:
Highly dynamic: Peers can join or leave at any time without affecting the overall system.
Resources like files or processing power are distributed across peers.
Examples:
File-sharing applications like BitTorrent.
Decentralized networks like blockchain.
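The dual client/server role of a peer can be sketched as follows. This is a minimal, single-machine sketch: each `Peer` runs a tiny server for its own resources and can also query another peer directly; the resource names and lookup protocol are invented for the example.

```python
import socket
import threading

class Peer:
    """A peer that serves its own resources and can query other peers."""

    def __init__(self, resources):
        self.resources = resources           # this peer's share of the data
        self.sock = socket.socket()
        self.sock.bind(("localhost", 0))
        self.sock.listen(1)
        self.port = self.sock.getsockname()[1]
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        # Server role: answer lookups from other peers.
        while True:
            conn, _ = self.sock.accept()
            key = conn.recv(1024).decode()
            conn.sendall(self.resources.get(key, "not found").encode())
            conn.close()

    def request(self, peer_port, key):
        # Client role: contact another peer directly; no central server involved.
        conn = socket.create_connection(("localhost", peer_port))
        conn.sendall(key.encode())
        reply = conn.recv(1024).decode()
        conn.close()
        return reply

alice = Peer({"song.mp3": "audio-bytes"})
bob = Peer({"doc.txt": "text-bytes"})
from_bob = alice.request(bob.port, "doc.txt")     # alice acts as client
from_alice = bob.request(alice.port, "song.mp3")  # bob acts as client
print(from_bob, from_alice)  # text-bytes audio-bytes
```

A real P2P system would add peer discovery and handle peers joining or leaving; here the data is already distributed across peers, with no node holding all of it.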
3. Layered Model
The Layered Model organizes the system into layers, where each layer performs a specific
function.
Structure:
Layers communicate only with adjacent layers through well-defined protocols.
A hierarchical structure ensures that changes in one layer do not affect others.
Example Layers:
Application Layer: Provides services to users, like web browsing or email.
Transport Layer: Ensures reliable data transfer between devices (e.g., TCP).
Network Layer: Handles data routing (e.g., IP).
Examples:
The OSI model and TCP/IP stack follow a layered approach.
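The layering idea can be sketched as functions that each add their own header on the way down the stack and strip it on the way up, talking only to the adjacent layer. The header strings ("APP|", "TCP|", "IP|") are invented placeholders for real protocol headers.

```python
# Sending side: each layer wraps the data from the layer above it.
def application_send(message):
    return transport_send("APP|" + message)   # application-level framing

def transport_send(segment):
    return network_send("TCP|" + segment)     # transport header (reliability)

def network_send(packet):
    return "IP|" + packet                     # network header (routing)

# Receiving side: each layer strips its own header and passes the rest up.
def network_receive(frame):
    return frame.removeprefix("IP|")

def transport_receive(packet):
    return packet.removeprefix("TCP|")

def application_receive(segment):
    return segment.removeprefix("APP|")

wire = application_send("hello")
received = application_receive(transport_receive(network_receive(wire)))
print(wire)      # IP|TCP|APP|hello
print(received)  # hello
```

Because each layer only sees its neighbors' interfaces, swapping the implementation of one layer (say, a different routing scheme inside `network_send`) leaves the others untouched, which is the hierarchical independence described above.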
4. Microservices Model
The Microservices Model divides a large application into small, independent services.
Each service performs a single task and can be managed individually.
Working Process:
Services are deployed on different servers or nodes and communicate through
lightweight protocols like REST or gRPC.
Each service is independent, so issues in one service won’t disrupt others.
Examples:
E-commerce platforms: Separate microservices for payment, inventory, and user
authentication.
Applications like Netflix, Amazon, and Uber use microservices to handle their large-scale
operations.
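The microservices idea can be sketched with two independent stdlib HTTP servers standing in for separate services. This is a minimal sketch: the service names (inventory, payment), paths, and responses are invented, and real deployments would put each service on its own node behind a REST or gRPC API.

```python
import threading
import urllib.request
from http.server import HTTPServer, BaseHTTPRequestHandler

def make_service(responses):
    """Start one independent service with its own routes and server."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = responses.get(self.path, "unknown").encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # silence request logging for the example

    server = HTTPServer(("localhost", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Two hypothetical services, each deployed (here: listening) independently.
inventory = make_service({"/stock/42": "in-stock"})
payment = make_service({"/charge/42": "charged"})

def get(port, path):
    with urllib.request.urlopen(f"http://localhost:{port}{path}") as r:
        return r.read().decode()

# A checkout flow composes the services over lightweight HTTP calls.
stock = get(inventory.server_port, "/stock/42")
charge = get(payment.server_port, "/charge/42")
print(stock, charge)  # in-stock charged
```

Because each service has its own server and state, the payment service could crash or be redeployed without taking the inventory service down, which is the independence the model is built around.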