
COMPUTER NETWORK

Unit 1

Computer Network?

A computer network is a system that connects many independent computers to share information (data) and resources. The integration of computers and other devices allows users to communicate more easily. A computer network is a collection of two or more computer systems that are linked together. A network connection can be established using either cable or wireless media. Hardware and software are used to connect the computers and devices in any network.

Goals of Computer Networks

1. Resource Sharing
Allow users to share hardware (printers, storage), software, and data.

2. Reliability
Ensure data is safely transmitted and received, even in case of failures.

3. Scalability
Support adding more users and devices without major changes.

4. Communication
Enable users to communicate via emails, chats, video calls, etc.

5. Cost Efficiency
Reduce cost by sharing expensive resources and using centralized management.

6. Remote Access
Allow users to access data and resources from anywhere.

Components of a Data Communication System


 Sender: The device or user that initiates and sends the message.
 Message: The data or information being transmitted.

 Medium: The physical or wireless path through which the message travels.
 Protocol: A set of rules that govern data communication.
 Receiver: The device or user that receives and interprets the message.

Types of Network Architecture

Computer network architecture refers to the physical and logical design of how computers are organized and tasks are allocated in a network. The two main types of network architecture are peer-to-peer and client-server architecture.

1. Peer-to-Peer (P2P) Architecture

 Definition: In a peer-to-peer network, all computers (called peers) are equal and can act as both clients and servers. Each device can share its own resources directly with
others.

 Features:

o No central server.

o Easy to set up and cost-effective.

o Each peer can access shared files from others.

o Suitable for small networks (like home or small office setups).

 Advantages:

o Simple and inexpensive.

o No need for dedicated server hardware.

 Disadvantages:

o Difficult to manage in large networks.

o Less secure and harder to back up data.


 Example: File sharing between laptops at home using a shared folder.

2. Client-Server Architecture

 Definition: In a client-server network, one or more central servers provide resources and services to multiple client devices.

 Features:

o Centralized control through a server.

o Clients request services; servers respond.

o Common in business and large-scale networks.

 Advantages:

o High security and manageability.

o Easy to back up and update data centrally.

o Scalable for large organizations.

 Disadvantages:

o Requires powerful and costly server hardware.

o If the server fails, clients may lose access.

 Example: A company email server accessed by all employee computers.

Classifications & Types

PAN (Personal Area Network)
 Description: Connects personal devices in a small range (up to 10 meters)
 Pros: Very portable; low cost; easy to set up
 Cons: Very limited range; low speed
 Example: Bluetooth between phone & earbuds

LAN (Local Area Network)
 Description: Connects devices in a home, school, or office
 Pros: High speed; easy to manage; cost-effective
 Cons: Limited to small area; maintenance required
 Example: Office network with shared printer

MAN (Metropolitan Area Network)
 Description: Connects LANs across a city or campus
 Pros: Covers larger area; faster than WAN
 Cons: Expensive setup; difficult to manage
 Example: City-wide cable TV or metro Wi-Fi

WAN (Wide Area Network)
 Description: Connects computers across countries or continents
 Pros: Covers very large area; facilitates global communication
 Cons: High cost; slower speed; less secure
 Example: The Internet

Layered Architecture: Protocol hierarchy (rgpv)

Interfaces and Services in Layered Network Architecture

1. Interface

Definition:

An interface in layered architecture is the boundary or point of interaction between two adjacent layers within the same system (host or device). It is the means by which an upper
layer requests services from the layer directly below it.

Purpose:

 Provides a clear separation between layers.

 Enables modularity: Each layer can be designed, updated, or replaced independently as long as the interface remains consistent.

 Hides the internal implementation of the lower layer (encapsulation).

 Defines how data and control information flow between layers.

Characteristics:

 Defines the syntax and semantics of interactions.

 Includes the format of messages passed between layers.


 Specifies the service primitives (basic operations) used.

2. Services

Definition:

A service is a set of well-defined operations and functionalities that a layer offers to the layer above it. It abstracts the internal complexities and provides a simplified interface to upper
layers.

Examples of services by layers:

 Physical Layer: Bit transmission service.

 Data Link Layer: Reliable frame delivery service.

 Network Layer: Routing and forwarding packets service.

 Transport Layer: Reliable end-to-end data transfer service.

 Application Layer: Various application-specific services like file transfer, email, web browsing.

Connection Oriented services

Connection-oriented services set up a dedicated path between the source and destination before data transfer begins, which ensures that data is delivered in the correct sequence and without errors. A handshake method is used to establish the connection between sender and receiver, creating a dedicated communication channel before transmission starts. Because the connection is kept open until all data is successfully transferred, reliable data delivery is guaranteed. One example is TCP (Transmission Control Protocol), which ensures error-free and in-order delivery of data packets.

Key Characteristics:

 Connection Setup: A connection is established via a handshake before data transfer.

 Reliable Delivery: Data packets arrive in order, without loss or duplication.

 Flow Control & Error Control: Ensures smooth data flow and corrects errors.

 Connection Termination: Connection is formally closed after data transfer.

Pros:

 Guarantees reliable and ordered data delivery.

 Easier to manage flow control and error recovery.

 Suitable for applications requiring high reliability (e.g., file transfers, web browsing).

Cons:

 Connection setup introduces initial delay.

 More overhead due to connection management.

 Uses more network resources as the connection is maintained during the session.

What is Connection-Less Service?

Connectionless services send data without establishing a dedicated connection between the source and destination. Each data packet is treated independently and may take a different path to the destination, so delivery, ordering, and error correction cannot be guaranteed. As a result, the service is quicker but less dependable. One example is UDP (User Datagram Protocol), which is frequently used for streaming, where speed matters more than reliability.

Examples of Connectionless Services

 UDP (User Datagram Protocol) in the TCP/IP suite.

 Postal services (analogous to sending letters without confirmation of receipt).

Key Characteristics:
 No prior connection setup.

 Packets may take different routes.

 Delivery is not guaranteed; packets may arrive out of order, duplicated, or lost.

 No connection termination needed.

Pros:

 Low overhead since no connection setup or termination.

 Faster communication start.

 Efficient for applications that can tolerate some data loss.

Cons:

 No guarantee of packet delivery or order.

 Error recovery must be handled by the application layer (if needed).

 Not suitable for critical data transfer.

What is OSI Model? - Layers of OSI Model

The OSI (Open Systems Interconnection) Model is a set of rules that explains how different computer systems communicate over a network. OSI Model was developed by
the International Organization for Standardization (ISO). The OSI Model consists of 7 layers and each layer has specific functions and responsibilities. This layered approach makes it
easier for different devices and technologies to work together. OSI Model provides a clear structure for data transmission and managing network issues. The OSI Model is widely used as
a reference to understand how network systems function.

ISO-OSI Reference Model: Principle, Model, Descriptions of various layers (online)

comparison with TCP/IP(online)

Principles of the physical layer: transmission media (online); Principles of the physical layer: Media, Bandwidth, Data rate and Modulations (rgpv)

Transmission media is the physical medium through which data is transmitted from one device to another within a network. These media can be wired or wireless. The choice of medium depends on factors like distance, speed, and interference.

Unit 2

Data Link Layer: Need, Services Provided


The Data Link Layer is the second layer of the seven-layer Open System Interconnection (OSI) reference model of computer networking and lies just above the Physical Layer. It is responsible for receiving data bits, usually from the Physical Layer, and grouping these bits into units known as data link frames so that they can be transmitted further. It is also responsible for handling errors that might arise during the transmission of bits.

Sub-Layers of The Data Link Layer

Logical Link Control (LLC)

This sublayer of the data link layer deals with multiplexing, the flow of data among applications and other services, and LLC is responsible for
providing error messages and acknowledgments as well.

Media Access Control (MAC)

The MAC sublayer manages device interaction, is responsible for addressing frames, and controls access to the physical medium. The data link layer receives information in the form of packets from the Network Layer, divides the packets into frames, and sends those frames bit by bit to the underlying physical layer.

Services Provided by Data Link Layer

1. Framing:

o Divides the data stream into manageable frames for transmission.

2. Error Detection and Correction:

o Uses mechanisms like checksums, CRC (Cyclic Redundancy Check) to detect errors.

o May also correct some errors using techniques like Hamming Code.

3. Flow Control:

o Prevents fast sender from overwhelming a slow receiver (e.g., using stop-and-wait, sliding window).

4. Access Control (MAC - Media Access Control):

o Determines which device can transmit when multiple devices share the same medium.

o Important in broadcast networks like Ethernet and Wi-Fi.

5. Physical Addressing:

o Adds the MAC address of source and destination to each frame.

Framing (rgpv, notes)

Flow Control?

Flow Control is a technique used in data communication to manage the rate of data transmission between a sender and a receiver. It ensures
that the sender does not overwhelm the receiver by sending data faster than the receiver can process and store it.

Why is Flow Control needed?

 Prevents data loss when the receiver's buffer is full.

 Ensures smooth and efficient data transfer.

 Helps maintain synchronization between sender and receiver.

Approaches to Flow Control

1. Feedback-based Flow Control:


The sender transmits data and waits for the receiver’s acknowledgment before sending more. The receiver controls the flow by informing the sender about its
ability to process data.

2. Rate-based Flow Control:


The sender limits its transmission rate based on a built-in protocol mechanism, without waiting for feedback from the receiver, to prevent overwhelming it.

Stop-and-Wait Flow Control


The sender breaks data into frames and sends one frame at a time. It waits for an acknowledgment from the receiver before sending the next
frame. This continues until an End of Transmission (EOT) frame is sent. Only one frame is in transit at a time, which can cause inefficiency if
there’s a long propagation delay.

Pros:

 Simple to implement

 Easy error control

Cons:

 This method is slow: only one packet or frame can be sent at a time.

 It is inefficient when the propagation delay is long, since the sender sits idle waiting for each acknowledgment.
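The send-wait-resend cycle described above can be sketched as a small simulation (a toy model with invented frame names and loss probability, not a real protocol implementation):

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=1):
    """Send frames one at a time; retransmit until each one is acknowledged."""
    rng = random.Random(seed)
    received, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() > loss_prob:   # frame and its ACK arrive intact
                received.append(frame)
                break                      # ACK received -> send next frame
            # otherwise: timeout expires, retransmit the same frame
    return received, transmissions

data = ["F0", "F1", "F2", "F3"]
recv, sent = stop_and_wait(data)
print(recv)   # ['F0', 'F1', 'F2', 'F3'] (always delivered in order)
print(sent)   # >= 4: any extra counts are retransmissions
```

Note how the receiver always gets the frames in order, at the cost of at least one full round trip per frame.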

Go-Back-N Flow Control

In Go-Back-N, the sender can send multiple frames (up to a window size) without waiting for an acknowledgment. If an error occurs or a frame
is lost, the receiver discards that frame and all following frames. The sender then goes back and retransmits that frame and all subsequent
ones.

Pros:

 Better efficiency than Stop-and-Wait

 Allows multiple frames in transit (higher throughput)

Cons:

 Wastes bandwidth if one frame is lost (all following frames are retransmitted)

 Receiver must discard out-of-order frames
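The go-back behaviour can be sketched as a simplified simulation (the window size, frame names, and lost set are invented for illustration; a real implementation would pipeline the whole window and use timers):

```python
def go_back_n(frames, window, lost):
    """Toy Go-Back-N: on a lost frame, go back and resend it and all frames
    after it. `lost` holds (frame_index, attempt) pairs that are dropped."""
    base = 0                       # first unacknowledged frame
    attempts = {}                  # frame index -> number of sends so far
    log = []                       # every frame transmission, in order
    while base < len(frames):
        window_end = min(base + window, len(frames))
        failed = None
        for i in range(base, window_end):
            attempts[i] = attempts.get(i, 0) + 1
            log.append(frames[i])
            if (i, attempts[i]) in lost:
                failed = i         # receiver discards this frame, so the
                break              # sender must go back and resend from here
        base = failed if failed is not None else window_end
    return log

# F2 is lost on its first transmission only:
print(go_back_n(["F0", "F1", "F2", "F3"], window=3, lost={(2, 1)}))
# ['F0', 'F1', 'F2', 'F2', 'F3'] -> F2 was transmitted twice
```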

Error Control in Data Link Layer

Error control in the Data Link Layer ensures that data frames are transmitted accurately from sender to receiver. It is not mandatory but serves as an optimization to detect and correct errors such as lost or corrupted frames. When errors occur, the receiver may not get the correct data, and the sender remains unaware. To handle this, protocols use Automatic Repeat reQuest (ARQ) to detect errors and retransmit affected frames, ensuring reliable communication.
Ways of doing Error Control: There are basically two ways of doing error control, as given below:

1. Error Detection: Error detection, as the name suggests, simply means the detection or identification of errors. These errors may occur due to noise or other impairments during transmission from transmitter to receiver in a communication system. It is a class of techniques for detecting garbled, i.e. unclear and distorted, data or messages.

2. Error Correction: Error correction, as the name suggests, simply means the correction or fixing of errors, i.e. the reconstruction of the original, error-free data. Error correction methods are, however, costly and hard to implement.

Single-Bit Error
A single-bit error is a transmission error in which exactly one bit (a single binary digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or corrupted data unit.

Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is affected. Although multiple-bit
errors are relatively rare when compared to single-bit errors, they can still occur, particularly in high-noise or high-interference
digital environments.

Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst error. This error causes a
sequence of consecutive incorrect values.
Error Detection Techniques (copy)

1. Parity Check – Adds a parity bit (even/odd) to detect single-bit errors. (copy)
2. Checksum – Adds a value derived from data bits; receiver recalculates and compares.

3. Cyclic Redundancy Check (CRC) – Divides data by a polynomial; detects burst errors effectively.
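The first of these, the parity check, is small enough to sketch directly (the bit strings below are arbitrary examples):

```python
def add_parity(bits, even=True):
    """Append a parity bit so the total count of 1s is even (or odd)."""
    ones = bits.count("1")
    parity = str(ones % 2) if even else str(1 - ones % 2)
    return bits + parity

def check_parity(frame, even=True):
    """Receiver side: the count of 1s must match the agreed parity."""
    ones = frame.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

frame = add_parity("1011001")   # 4 ones -> even parity bit is 0
print(frame)                    # 10110010
print(check_parity(frame))      # True  -> accepted
print(check_parity("10110011")) # False -> single-bit error detected
```

A parity bit catches any odd number of flipped bits, but an even number of flips cancels out and goes undetected.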

Error Correction Techniques

1. Automatic Repeat Request (ARQ) – Retransmits lost or corrupted frames:

o Stop-and-Wait ARQ

o Go-Back-N ARQ

o Selective Repeat ARQ

2. Forward Error Correction (FEC) – Adds redundant data so the receiver can correct errors without retransmission.

Checksum

Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized
segments and using a 1's complement to calculate the sum of these segments. The calculated sum is then sent along with the data to the
receiver. At the receiver's end, the same process is repeated and if all zeroes are obtained in the sum, it means that the data is correct.

 Sender Side:

 Data is divided into equal-sized blocks (e.g., 16 bits).

 All blocks are added using binary addition.

 The 1’s complement of the sum is calculated — this becomes the checksum.

 The data plus checksum is sent to the receiver.

 Receiver Side:

 All received blocks (including the checksum) are added.

 The 1’s complement of the result is taken.

 If the result is all zeros, there is no error; otherwise, an error is detected.
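The sender and receiver steps above can be sketched as follows (the 16-bit block values are arbitrary examples):

```python
def ones_complement_sum(blocks, bits=16):
    """Add blocks with end-around carry (1's complement addition)."""
    mask = (1 << bits) - 1
    total = 0
    for b in blocks:
        total += b
        # Wrap any carry out of the top bit back into the sum.
        total = (total & mask) + (total >> bits)
    return total

def make_checksum(blocks, bits=16):
    """Sender side: checksum = 1's complement of the sum of all blocks."""
    return ones_complement_sum(blocks, bits) ^ ((1 << bits) - 1)

def verify(blocks, checksum, bits=16):
    """Receiver side: sum of data + checksum, complemented, must be zero."""
    total = ones_complement_sum(blocks + [checksum], bits)
    return (total ^ ((1 << bits) - 1)) == 0

data = [0x4500, 0x003C, 0x1C46]              # example 16-bit blocks
cs = make_checksum(data)
print(verify(data, cs))                       # True  -> no error detected
print(verify([0x4501, 0x003C, 0x1C46], cs))   # False -> error detected
```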

Advantages:

 Simple and fast

 Detects most common transmission errors

❌ Disadvantages:

 May not detect all types of errors (e.g., if bits cancel each other out)

Cyclic Redundancy Check (CRC)

 Unlike the checksum scheme, which is based on addition, CRC is based on binary division.

 In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the end of the data unit so that the
resulting data unit becomes exactly divisible by a second, predetermined binary number.

 At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is
assumed to be correct and is therefore accepted.

 A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.

CRC Working

We are given a dataword of length n and a divisor of length k.

Step 1: Append (k-1) zeros to the original message

Step 2: Perform modulo-2 division

Step 3: The remainder of the division is the CRC

Step 4: Codeword = dataword followed by the CRC (the appended k-1 zeros are replaced by the CRC)


Note:

 The CRC must be k-1 bits

 Length of codeword = n+k-1 bits

Example: Suppose the data to be sent is 1010000 and the divisor, in polynomial form, is x^3 + 1 (binary 1001). The CRC method is applied as discussed below.
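The modulo-2 division for this example can be sketched as follows (bit strings are kept as Python strings so each XOR step stays visible):

```python
def crc_remainder(data_bits, divisor_bits):
    """Compute the CRC by modulo-2 (XOR) long division."""
    k = len(divisor_bits)
    # Step 1: append k-1 zeros to the dataword.
    dividend = list(data_bits + "0" * (k - 1))
    # Step 2: modulo-2 division - XOR the divisor in wherever the
    # leading bit of the current window is 1.
    for i in range(len(data_bits)):
        if dividend[i] == "1":
            for j in range(k):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(divisor_bits[j]))
    # Step 3: the remainder is the last k-1 bits.
    return "".join(dividend[-(k - 1):])

data = "1010000"
divisor = "1001"                  # x^3 + 1
crc = crc_remainder(data, divisor)
codeword = data + crc             # Step 4: dataword + CRC
print(crc)                                  # 011
print(crc_remainder(codeword, divisor))     # 000 -> accepted at the receiver
```

Dividing the received codeword by the same divisor leaves no remainder, so the receiver accepts it; any nonzero remainder would mean the frame was damaged in transit.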

Advantages:

 Detects burst errors effectively

 Very reliable and widely used (Ethernet, USB, etc.)

❌ Disadvantages:

 Can only detect errors, not correct them

 Slightly complex compared to parity or checksum

1. Protocols for Noiseless (Error-Free) Channels

 Designed assuming no errors occur during transmission (no lost, corrupted, or duplicated frames).

 Mainly used for theoretical purposes or as a foundation for more complex protocols.

 Not practical for real-world communication since channels usually have noise.

2. Protocols for Noisy (Error-Causing) Channels

 Designed to handle errors such as lost, corrupted, or duplicated frames.

 Used in real-life applications where communication channels are not perfect.

 Include mechanisms for error detection and correction.

(copy)

1-bit, Go-Back-N, Selective Repeat (rgpv)

Hybrid ARQ (HARQ): Hybrid Automatic Repeat reQuest (HARQ) is a protocol that combines ARQ (Automatic Repeat reQuest) and FEC
(Forward Error Correction) to improve the reliability and efficiency of data transmission, especially over unreliable or noisy communication
channels (e.g., wireless networks, mobile networks).

🔹 Why combine ARQ and FEC?

 ARQ alone: Sends data and waits for an acknowledgment (ACK). If an error is detected (e.g., via a checksum), the receiver asks for a
retransmission.

 FEC alone: Adds redundant bits to allow the receiver to detect and correct errors without asking for a retransmission.
 HARQ: Uses FEC to correct errors when possible, and requests retransmission only if correction fails—reducing retransmissions and
improving performance.

🔹 How HARQ Works

1. Sender encodes data with error-correcting codes (e.g., Turbo, LDPC).

2. Transmits the packet to the receiver.

3. Receiver tries to decode:

o If successful → sends ACK.

o If decoding fails → stores received data and sends NACK (negative acknowledgment).

4. Sender transmits additional redundancy (not necessarily the same packet).

5. Receiver combines original and retransmitted packets to improve chances of successful decoding.

🔹 Types of Hybrid ARQ

Type I HARQ: Combines FEC and ARQ directly. If error correction fails, the whole packet is retransmitted.

Type II HARQ: On failure, additional redundancy bits are sent instead of the full packet. The receiver combines them with the original.

Type III HARQ: Like Type II, but the receiver can decode each retransmission independently, allowing even more flexibility and efficiency.

🔹 Advantages of HARQ

 Reduces retransmissions compared to basic ARQ.

 Increases throughput and efficiency.

 Better suited for wireless and mobile networks, where errors are common.

 Adaptive: can use soft combining techniques like Chase Combining or Incremental Redundancy to improve decoding.

Protocol verification: Finite State Machine Models & Petri net models. (copy)

Address Resolution Protocol (ARP) -

Address Resolution Protocol is a communication protocol used for discovering the physical address associated with a given network address. Typically, ARP is a network-layer-to-data-link-layer mapping process used to discover the MAC address for a given Internet Protocol address. To send data to a destination, having the IP address is necessary but not sufficient; we also need the physical address of the destination machine. ARP is used to get the physical (MAC) address of the destination machine.

How ARP Works:

1. A device wants to send data to another device on the same subnet.

2. It knows the IP address of the target but not the MAC address.

3. It sends a broadcast ARP Request:


"Who has IP address 192.168.1.10? Tell 192.168.1.5"

4. The device with the IP 192.168.1.10 responds with an ARP Reply:


"192.168.1.10 is at MAC address AA:BB:CC:DD:EE:FF"

5. The sender stores this IP–MAC mapping in its ARP cache and proceeds to send the packet.
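The request/reply/cache sequence can be sketched as a toy model (the host table reuses the hypothetical addresses from the example above; the sender's own MAC is also invented):

```python
# Toy ARP exchange on one subnet: hosts keyed by IP, each with a MAC.
hosts = {
    "192.168.1.5":  "11:22:33:44:55:66",   # hypothetical sender
    "192.168.1.10": "AA:BB:CC:DD:EE:FF",   # hypothetical target
}
arp_cache = {}   # the sender's IP -> MAC cache

def arp_request(target_ip):
    """Broadcast: every host sees the request; only the owner replies."""
    if target_ip in arp_cache:
        return arp_cache[target_ip]      # cache hit, no broadcast needed
    for ip, mac in hosts.items():        # simulated broadcast to all hosts
        if ip == target_ip:
            arp_cache[target_ip] = mac   # cache the ARP reply
            return mac
    return None                          # no reply: host not on this subnet

print(arp_request("192.168.1.10"))   # AA:BB:CC:DD:EE:FF
print(arp_cache)                     # mapping is now cached for reuse
```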

Reverse Address Resolution Protocol (RARP) -


Reverse ARP is a networking protocol used by a client machine in a local area network to request its Internet Protocol (IPv4) address from the gateway router's ARP table. The network administrator creates a table in the gateway router that maps MAC addresses to their corresponding IP addresses. When a new machine is set up, or a machine that has no memory to store an IP address needs one for its own use, it sends a RARP broadcast packet containing its own MAC address in both the sender and receiver hardware address fields.

How RARP Works:

1. A diskless machine, which knows only its MAC address, sends a RARP request.

2. A RARP server on the same LAN looks up the MAC in its table.

3. The server responds with the corresponding IP address.

Unit 4
Network Layer

The Network Layer is the 5th layer from the top and the 3rd layer from the bottom of the OSI Model. It is one of the most important layers and plays a key role in data transmission. Its main job is to maintain the quality of the data and transmit it from source to destination. It also handles routing: rather than just forwarding packets, it chooses the best path to transmit the data from the source to its destination. Several important protocols work at this layer.

The Network Layer is responsible for the delivery of packets from the source host to the destination host across multiple interconnected networks. It handles logical addressing, routing,
fragmentation, and error handling.
Functions of Network Layer
Logical Addressing – Assigns unique IP addresses to devices.
Routing – Selects the best path for data to reach its destination.
Packet Forwarding – Moves packets between devices based on IP.
Fragmentation/Reassembly – Splits large packets and reassembles them.
Error Handling – Uses ICMP to report errors and diagnostics.
Traffic Control – Manages traffic by avoiding network congestion through flow control techniques.

Need for the Network Layer


In a computer network, not all devices are in the same local network. Therefore, communication between devices in different networks requires:
🔹 a. Routing Across Multiple Networks:
 Unlike the Data Link Layer, which operates within a single local network (LAN), the Network Layer enables communication between different networks using routers.
🔹 b. Logical Addressing:
 Devices in different networks are identified using logical addresses (e.g., IP addresses), enabling global identification.
🔹 c. Path Selection:
 The Network Layer chooses the optimal path for the packet to travel from source to destination based on factors like distance, speed, and congestion.
🔹 d. Handling Heterogeneity:
 The Network Layer provides a common communication method over various physical and data link protocols.

Services Provided by the Network Layer


1. Logical Addressing
o Provides unique IP addresses to devices for identification across networks.
2. Routing
o Determines the best path for data to travel from source to destination.
3. Packet Forwarding
o Transfers packets from one node to the next until they reach the destination.
4. Fragmentation and Reassembly
o Breaks down large packets into smaller ones and reassembles them at the destination.
5. Error Reporting
o Uses protocols like ICMP to report issues like unreachable destinations or timeouts.
6. Traffic Control
o Helps manage congestion and optimize data flow in the network.
Design Issues in the Network Layer
1. Addressing
o Assigning unique IP addresses to devices.
2. Routing
o Finding the best path for data to reach the destination.
3. Packet Forwarding
o Deciding how packets move from source to destination.
4. Fragmentation
o Breaking large packets into smaller ones to fit the network.
5. Congestion Control
o Avoiding network overload and delays.
6. Error Handling
o Detecting and reporting errors in packet delivery.
7. Security
o Protecting data from attacks or unauthorized access.

Routing Algorithms?

Routing algorithms are methods used by routers to determine the best path (or route) for forwarding data packets from a source to a destination across interconnected networks. In this process, a routing table is created that contains information about the routes data packets follow. Various routing algorithms are used to decide which route an incoming data packet should be transmitted on to reach its destination efficiently.
Properties of a routing algorithm:
1. Correctness – Ensures the algorithm computes valid, loop-free paths.
2. Simplicity – Easy to understand, implement, and manage.
3. Robustness – Continues to work correctly under failures or changes.
4. Stability – Avoids frequent or unnecessary route updates.
5. Efficiency – Finds the most optimal or least-cost path.
6. Convergence – Quickly updates routes after network changes.
7. Scalability – Performs efficiently in large, growing networks.
Adaptive Algorithms
These are algorithms that change their routing decisions whenever the network topology or traffic load changes. Also known as dynamic routing, they make use of dynamic information such as current topology, load, and delay to select routes. Optimization parameters are distance, number of hops, and estimated transit time.

Further, these are classified as follows:


 Isolated: In this method, each node makes its routing decisions using only the information it has, without seeking information from other nodes. Sending nodes do not have information about the status of a particular link.
 Centralized: In this method, a centralized node has complete information about the network and makes all the routing decisions.
Non-Adaptive Algorithms
These are the algorithms that do not change their routing decisions once they have been selected. This is also known as static routing as a route to be taken is computed in advance and
downloaded to routers when a router is booted.

Further, these are classified as follows:


 Flooding: Every incoming packet is sent out on every outgoing line except the one it arrived on. One problem with this is that packets may loop, and as a result a node may receive duplicate packets.
 Random walk: In this method, packets are sent host by host or node by node to one of the current node's neighbors, chosen at random.
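Flooding, together with the duplicate-packet problem it causes, can be sketched as follows (a hypothetical 3-node network; a hop-count limit is one common way to stop packets looping forever):

```python
from collections import deque

def flood(graph, source, max_hops=3):
    """Send the packet on every outgoing link except the one it arrived on,
    with a hop-count limit so copies eventually die out."""
    transmissions = []
    q = deque([(source, None, 0)])        # (node, arrived_from, hops so far)
    while q:
        node, came_from, hops = q.popleft()
        if hops == max_hops:
            continue                      # hop limit reached: drop this copy
        for nbr in graph[node]:
            if nbr != came_from:          # every line except the arrival line
                transmissions.append((node, nbr))
                q.append((nbr, node, hops + 1))
    return transmissions

net = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
tx = flood(net, "A", max_hops=2)
print(tx)   # note B->C and C->B: node B and node C each get a duplicate
```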

Shortest Path Algorithm in Computer Network


Between the sender and the receiver, data packets pass through many routers and subnets. To increase the efficiency of routing the data packets and to reduce traffic, we must find the shortest path.
What is Shortest Path Routing?
It refers to the algorithms that help to find the shortest path between a sender and receiver for routing the data packets through the network in terms of shortest distance, minimum
cost, and minimum time.
 It is mainly for building a graph or subnet containing routers as nodes and edges as communication lines connecting the nodes.
 Hop count is one of the parameters that is used to measure the distance.
 Hop count: It is the number of routers a packet passes through. If the hop count is 6, the path covers 6 routers/nodes and the edges connecting them.
 Another metric is a geographic distance like kilometers.
 We can find the label on the arc as the function of bandwidth, average traffic, distance, communication cost, measured delay, mean queue length, etc.
Common Shortest Path Algorithms
 Dijkstra’s Algorithm
 Bellman Ford’s Algorithm
 Floyd Warshall’s Algorithm

Dijkstra’s Algorithm
Dijkstra’s Algorithm is a greedy algorithm used to find the minimum distance between a node and all other nodes in a given graph. Here we can consider a node as a router and the graph as a network. It uses the edge weight, i.e. the distance between nodes, to find the minimum-distance route.
Algorithm:
1: Mark the source node current distance as 0 and all others as infinity.
2: Set the node with the smallest current distance among the non-visited nodes as the current node.
3: For each neighbor, N, of the current node:
 Calculate the potential new distance by adding the current distance of the current node with the weight of the edge connecting the current node to N.
 If the potential new distance is smaller than the current distance of node N, update N's current distance with the new distance.
4: Mark the current node as visited.
5: If we find any unvisited node, go to step 2 to find the next node which has the smallest current distance and continue this process.
Example:
Consider the graph G:
Graph G. Starting from node 0, we relax the graph one node at a time.
Step 1: The nearest neighbours of 0 are nodes 1 and 2, so we relax them first.
Step 2: Similarly, we relax the remaining nodes, making sure no cycle is formed, and keep track of the visited nodes.
Advantages:
 Finds the most efficient path.
 Works well for networks with static topology.
 Guarantees optimal solution.
Limitations:
 Cannot handle negative edge weights
 Requires complete knowledge of the network (link-state info)

Bellman Ford’s Algorithm


The Bellman-Ford algorithm is a single-source shortest path algorithm that finds the shortest path between a source vertex and every other vertex in a given graph. It works on both weighted and unweighted graphs. It is slower than Dijkstra's algorithm, but it can also handle negative edge weights.
Algorithm
1: Initialize the distance dist[v] of every vertex v as INFINITY in a distance array dist[].
2: Set the distance of the source vertex to 0, i.e., dist[src] = 0.
3: Iteratively relax every edge (u, v): if the distance from the source to u plus the edge weight (dist[u] + weight) is smaller than dist[v], update dist[v]. Repeat this over all edges N-1 times (N = number of vertices).
4: To detect a negative edge cycle, do one more round of edge relaxation and check the following cases:
 A negative cycle exists if, for any edge (u, v), the sum of the distance from the source (dist[u]) and the edge weight is still less than the current distance to the target node (dist[v]).
 If no edge satisfies the first case, the graph contains no negative edge cycle.
Example: Bellman ford detecting negative edge cycle in a graph.
Consider the Graph G:

Graph G
Outcome: The graph contains a negative cycle in the path from node D to node F and then to node E.
Advantages:
 Handles negative edge weights
 Can detect negative cycles
 Simpler to implement than Dijkstra
Limitations:
 Slower than Dijkstra’s algorithm
 Doesn’t work if negative cycles exist (infinite shortest path)
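The relaxation loop and the extra negative-cycle pass can be sketched as follows; the two edge lists are illustrative examples, not taken from the graph in the text:

```python
def bellman_ford(n, edges, src):
    # edges: list of (u, v, weight); vertices numbered 0..n-1
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    # Relax all edges N-1 times
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further improvement means a negative cycle exists
    for u, v, w in edges:
        if dist[u] != INF and dist[u] + w < dist[v]:
            return None  # negative cycle detected
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -3), (2, 3, 4)]
print(bellman_ford(4, edges, 0))   # [0, 4, 1, 5]
# A cycle 0 -> 1 -> 2 -> 0 with total weight -1 triggers detection:
print(bellman_ford(3, [(0, 1, 1), (1, 2, -1), (2, 0, -1)], 0))  # None
```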

HIERARCHICAL ROUTING ALGORITHM


A hierarchical routing algorithm is an approach to routing where networks are structured into layers or hierarchies, allowing efficient and scalable route management. This method is
particularly useful for large networks, as it helps to reduce the size of routing tables, minimize route calculation complexity, and improve the overall performance and scalability of the
network.

Broadcast Routing
Broadcast routing plays an important role in computer networking and telecommunications. It involves transmitting data, messages, or signals from one source to every destination within a network. Unlike unicast routing (one-to-one communication) or multicast routing (one-to-many communication), broadcast routing ensures that information reaches all devices or nodes within the
network.

Broadcasting in computer networks is a type of communication mechanism that allows the message to be received by all the nodes of a network. The term broadcast in general refers
to the transmission of signals from radio or televisions.
Every broadcast signal is stopped at Layer 3, the network layer of the OSI model — or, to be more practical, at the router. A technical example of broadcasting is the Address Resolution Protocol (ARP) request: whenever a host needs to resolve an IP address to its corresponding MAC address, it broadcasts a signal asking "Who has this IP address?". This broadcast is received by every single node in the network domain, and the appropriate node then responds accordingly.
Key Points on Broadcasting
 Data is sent to all the nodes/stations in the network domain.
 A special broadcast address exists for every network and is used to receive broadcast messages.
 Not every device wants to receive the broadcast message.
 It generates the most network traffic because the broadcast message is sent to every node in the network.
 It is less secure. A sensitive message shouldn't be sent to everyone, so this should be kept in mind before broadcasting a message.
 Examples: Address Resolution Protocol (ARP) requests, Dynamic Host Configuration Protocol (DHCP) requests.

Types of Broadcast Routing:


Type Description
Limited Broadcast Sent to all devices within a LAN (using IP 255.255.255.255)
Directed Broadcast Sent to all devices in a specific network (e.g., 192.168.1.255)
Flooding Packet is forwarded to all neighbors, and they forward further
Multicast-based More efficient: only sends to interested group members, not all nodes

Pros (Advantages)
1. Message to All – Delivers data to every node without knowing individual addresses.
2. Simple to Implement – Easy to configure as it requires no complex routing logic.
3. Used for Discovery – Ideal for protocols like ARP or DHCP that need to find other devices.
4. No Need for Target Info – Works without needing destination-specific information.

❌ Cons (Disadvantages)
1. Broadcast Storms – Excessive broadcasts can overwhelm the network.
2. Inefficient – Wastes bandwidth by sending data to all nodes, even if they don’t need it.
3. Redundant Transmissions – Can lead to repeated delivery of the same packet.
4. Not Scalable – Becomes problematic in large or complex networks.
5. Security Risks – Exposes data to all devices, increasing vulnerability.

Multicast Routing.
Multicast is a method of group communication in which the sender sends data to multiple receivers or nodes in the network simultaneously. Multicasting supports one-to-many and many-to-many communication, as it allows one or more senders to send data packets to multiple receivers at once across LANs or WANs. This helps reduce traffic in the network because a single transmission can be received by multiple nodes.
Multicasting can be considered a special case of broadcasting: it works similarly to broadcasting, but the information is sent only to targeted or specific members of the network. The same task could be accomplished by transmitting an individual copy to each user or node in the network, but sending individual copies is inefficient and can increase network latency. To overcome these shortcomings, multicasting allows a single transmission to be shared among multiple users, which reduces the bandwidth consumed.

Unit 3
MAC Sub layer: MAC Addressing
In computer networks, especially in the Data Link Layer (Layer 2) of the OSI model, the MAC (Media Access Control) sublayer plays a crucial role in managing how
devices access the shared medium. A core function of the MAC sublayer is MAC addressing.
A MAC (Media Access Control) address is a unique 48-bit hardware address assigned to a device's Network Interface Card (NIC) during manufacturing. It is also
known as the physical address and is used at the Data Link Layer by the MAC sublayer for local network communication.
MAC Address Format:
A MAC address is a 12-digit hexadecimal number (48-bit binary number), most commonly written in colon-hexadecimal notation.
 12-digit hexadecimal (e.g., 00:1A:2B:3C:4D:5E)
 First 6 digits: OUI (Organizationally Unique Identifier) – identifies the manufacturer
 Last 6 digits: Unique to the device
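A small helper can illustrate this format. The OUI split follows the layout described above; the multicast check uses the I/G bit (the least-significant bit of the first octet), which is standard MAC addressing but not spelled out in the text. The sample addresses are the ones used in this section:

```python
def parse_mac(mac):
    octets = [int(b, 16) for b in mac.split(':')]
    assert len(octets) == 6                # a MAC address has exactly 6 octets
    oui = mac.upper()[:8]                  # first 3 octets: manufacturer (OUI)
    multicast = bool(octets[0] & 0x01)     # I/G bit set -> group (multicast) address
    broadcast = all(o == 0xFF for o in octets)  # FF:FF:FF:FF:FF:FF -> broadcast
    return oui, multicast, broadcast

print(parse_mac('00:1A:2B:3C:4D:5E'))   # ('00:1A:2B', False, False)
print(parse_mac('FF:FF:FF:FF:FF:FF'))   # ('FF:FF:FF', True, True)
```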
Types of MAC Addresses
There are three main types of MAC (Media Access Control) addresses based on how they are used and assigned:
1. Unicast MAC Address
 Definition: Identifies a single unique device on a network.
 Use: For direct communication between two devices.
 Example: A switch uses a unicast MAC address to forward a frame to a specific computer.
2. Multicast MAC Address
 Definition: Used to send data to a group of devices, not just one.
 Use: For services like video conferencing or streaming where multiple receivers are involved.
 Address Pattern: Starts with 01:00:5E
 Example: 01:00:5E:xx:xx:xx
3. Broadcast MAC Address
 Definition: Used to send data to all devices on the local network.
 Address: FF:FF:FF:FF:FF:FF (all bits set to 1)
 Use: Common in protocols like ARP (Address Resolution Protocol)
Why MAC Address is Important
The MAC (Media Access Control) address is essential for enabling accurate and efficient communication between devices on a local area network (LAN).
 Unique ID: Identifies each device on a network.
 Local Communication: Enables data transfer within a LAN.
 Used by Switches: For forwarding data to the correct device.
 Supports ARP: Maps IP to MAC for proper delivery.
 Security: Used in MAC filtering and network access control.
 Built-in: No manual setup—pre-assigned in hardware.

Binary Exponential Back-off (BEB) Algorithm


The Back-off algorithm is a collision resolution mechanism used in random access MAC protocols like CSMA/CD, mainly in Ethernet networks. When two stations
(e.g., A and B) transmit at the same time, a collision occurs. To avoid continuous collisions and deadlock, the back-off algorithm introduces a random delay
before retransmission. Time is divided into discrete slots (Tslot), and each station picks a random number K from a range that increases with each collision,
defined as K = [0, 2ⁿ – 1], where n is the number of collisions. The waiting time is calculated as K × Tslot. The station with the shorter waiting time retransmits
first, while others wait, reducing the chances of repeated collisions. If another collision happens, the range doubles, increasing the back-off time, and this process
continues until a transmission succeeds. This mechanism ensures reliable and fair access to the network channel.
Example
Two stations A and B transmit at the same time → collision occurs (n = 1).
They pick a random K from {0, 1}; waiting time = K × Tslot.
Outcomes:
 Both choose K = 0 → Both wait 0 → Collision
 A = 0, B = 1 → A sends first → A wins
 A = 1, B = 0 → B sends first → B wins
 Both choose K = 1 → Both wait 1 Tslot → Collision
🔢 Probabilities:
 A wins = 1/4
 B wins = 1/4
 Collision = 2/4
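The choice of K after the nth collision can be sketched as below. The cap of 10 on the exponent mirrors classic Ethernet practice and is an added assumption, not something stated in the text:

```python
import random

def backoff_slots(n_collisions, max_exp=10):
    # After the nth collision, pick K uniformly from [0, 2^n - 1].
    # The exponent is capped (as in Ethernet) so the range stops doubling.
    n = min(n_collisions, max_exp)
    return random.randint(0, 2 ** n - 1)

random.seed(1)  # fixed seed so the demo is repeatable
for n in (1, 2, 3):
    k = backoff_slots(n)
    print(f"after collision {n}: K = {k}, wait = {k} x Tslot")
```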

Distributed Random Access Schemes/Contention Schemes


Random Access Schemes are network protocols where multiple devices share a communication channel without a fixed schedule. Each device transmits
whenever it has data, which can lead to collisions if two or more devices send at the same time. To handle collisions, these schemes include methods for
detecting or avoiding them and for retransmitting lost data.
The main variants are ALOHA, Slotted ALOHA, and CSMA. They are simple and decentralized but can be inefficient under heavy traffic due to collisions.
ALOHA is one of the earliest random access protocols used for wireless communication. In ALOHA, a device sends data whenever it has data to transmit, without
checking if the channel is free. If two devices transmit at the same time, their packets collide and are lost. The devices then wait for a random time before
retransmitting.
There are two types of ALOHA:
Pure ALOHA
Slotted ALOHA

CSMA
CSMA reduces collisions by requiring a station to sense the channel before transmitting. If the channel is idle, the station sends data; if busy, it waits. However,
collisions can still occur due to propagation delay—two stations may sense the channel as idle simultaneously and transmit, causing a collision.
How CSMA Works:
 Before a station transmits, it listens (senses) the channel to check if it is free (no other station is transmitting).
 If the channel is idle, the station transmits immediately.
 If the channel is busy, the station waits until the channel becomes free before transmitting
Types of CSMA
1. 1-Persistent CSMA
 How it works:
Node senses the channel; if idle, transmits immediately. If busy, keeps sensing continuously until channel is idle, then transmits right away.
 Pros:
o Minimizes delay before transmission once channel is free.
o Simple and fast to send when the channel is idle.
 Cons:
o High chance of collision if multiple nodes wait and transmit immediately when channel becomes free.
o Can cause congestion due to continuous sensing.
2. Non-Persistent CSMA
 How it works:
Node senses the channel; if busy, waits a random time before sensing again instead of continuously sensing.
 Pros:
o Reduces chance of collisions by randomizing retransmission attempts.
o Less channel congestion compared to 1-persistent.
 Cons:
o Higher average delay due to random waiting.
o Less efficient channel utilization when the channel is free.
3. P-Persistent CSMA
 How it works:
Used in time-slotted systems (like Wi-Fi). If channel is idle, transmit with probability p. Otherwise, wait for next slot and repeat.
 Pros:
o Balances collision probability and transmission delay.
o Efficient for high traffic in slotted channels.
 Cons:
o Requires synchronization of time slots.
o Choice of p affects performance; improper tuning can degrade throughput.
4. O-Persistent CSMA
 How it works:
Nodes have a predetermined priority order. Each node waits for its turn to transmit when the medium is idle.
 Pros:
o Collision-free transmission due to strict order.
o Predictable and fair access based on priority.
 Cons:
o Complex to manage priorities.
o Lower flexibility; nodes with lower priority may suffer long delays.

CSMA/CD?

CSMA/CD is a network protocol used primarily in wired Ethernet networks to regulate how devices respond to data collisions on a shared communication
medium. It improves the basic CSMA mechanism by detecting collisions during transmission and reacting to them efficiently.

Why is CSMA/CD Needed?

In a shared medium like early Ethernet (using hubs or coaxial cables), multiple devices may transmit at the same time, causing data collisions. CSMA/CD
reduces wasted bandwidth by detecting and managing these collisions.

Working of CSMA/CD (Step-by-Step):

1. Carrier Sensing
A device checks (listens to) the medium to see if another device is transmitting.
2. Transmission
If the medium is idle, the device starts transmitting its frame.
3. Collision Detection
While transmitting, the device continues to listen. If it detects a voltage change or interference, a collision has occurred.
4. Jam Signal
The device stops sending its data and instead sends a jam signal to inform all other devices that a collision has occurred.
5. Backoff Algorithm
Each device involved waits for a random time before attempting to retransmit. The waiting time increases exponentially after each collision (using
Binary Exponential Backoff).

Example Scenario:

 Device A and Device B both sense the channel as idle.


 They both start transmitting at the same time.
 A collision occurs.
 Both devices detect the collision.
 They stop, send a jam signal, wait a random time, and then try again.

Advantages of CSMA/CD:

 Better than ALOHA or basic CSMA due to collision detection.


 Works well in networks with light to moderate traffic.
 Efficient use of bandwidth in shared-medium environments.

Disadvantages:

 Not suitable for wireless networks (hard to listen while transmitting).


 Performance degrades with high traffic or large network sizes.
 Delay increases as number of collisions increases.

CSMA/CA

CSMA/CA stands for Carrier Sense Multiple Access with Collision Avoidance. Collision detection relies on the sender listening while it transmits: if it receives only one signal (its own), the data was sent successfully, but if it receives two signals (its own and the one it collided with), a collision has occurred. To distinguish between these two cases, the collision must have a clearly measurable impact on the received signal. This is true in wired networks but not in wireless ones, where a station cannot reliably hear a collision while transmitting, so CSMA/CA is used for wireless networks.

How CSMA/CA Works:

1. Carrier Sensing:
The station listens to the channel to check if it is idle.
2. Wait (Backoff):
If the channel is busy, the station waits for a random time (called backoff time).
3. Request to Send (RTS): (Optional, used in RTS/CTS mechanism)
The station sends an RTS to the access point or receiver.
4. Clear to Send (CTS): (Optional)
If the receiver is ready, it replies with a CTS.
5. Data Transmission:
After getting CTS (or if RTS/CTS is not used and the medium is idle), the station sends data.
6. Acknowledgement (ACK):
The receiver sends an ACK after successfully receiving the data.

CSMA/CA Collision Avoidance

 Interframe Space (IFS): After sensing the medium idle, the station waits a short time (IFS) before transmitting, to avoid collisions due to propagation
delay. IFS varies by station priority.
 Contention Window: Time is divided into slots; the station picks a random slot to wait before sending. If the medium is busy, the timer pauses and
resumes when idle again.
 Acknowledgement: If no ACK is received before timeout, the sender retransmits the data.

Advantages:

 Works well in wireless environments.


 Reduces chances of collision using RTS/CTS and backoff.
 Prevents hidden terminal and exposed terminal problems.

Disadvantages:

 More overhead due to RTS/CTS and ACK.


 Lower efficiency compared to CSMA/CD in low-traffic conditions.
 Doesn’t guarantee collision-free transmission.

Collision Free Protocols:

Almost all collisions can be avoided in CSMA/CD, but they can still occur during the contention period. Collisions during the contention period adversely affect system performance, especially when the cable is long and the packets are short. This problem became serious as fiber-optic networks came into use. Here we discuss some protocols that resolve collisions during the contention period.

 Bit-map Protocol
 Binary Countdown
 Limited Contention Protocols
 The Adaptive Tree Walk Protocol

Bit-map Protocol:

The bit-map protocol is a collision-free protocol. In the bit-map method, each contention period consists of exactly N slots. If a station has a frame to send, it transmits a 1 bit in its corresponding slot. For example, if station 2 has a frame to send, it transmits a 1 bit in slot 2. In general, station j announces that it has a frame to send by inserting a 1 bit into slot j. In this way, every station has complete knowledge of which stations wish to transmit, and there are never any collisions because everyone agrees on who goes next. Protocols like this, in which the desire to transmit is broadcast before the actual transmission, are called Reservation Protocols.

How It Works:
 Time is divided into slots, and each station is assigned a unique bit position (or slot number).
 A control frame called a bit-map is sent where each bit corresponds to one device.
 If a station wants to transmit, it sets its corresponding bit to 1 in the bit-map.
 After the bit-map frame is sent, each station transmits in the order of set bits (from lowest to highest).
Advantages:
 Collision-free: No two stations transmit at the same time
 Fair: Each station gets a chance based on its bit position
❌ Disadvantages:
 Wastes bandwidth if most stations are idle
 Scalability issue: The bit-map grows as the number of devices increases
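A minimal simulation of one contention-plus-transmission round makes the mechanism concrete; the 8-station LAN and which stations have frames queued are made-up:

```python
def bitmap_round(wants_to_send):
    # wants_to_send[i] is True if station i has a frame queued.
    # Contention period: each station announces itself in its own slot.
    bitmap = [1 if w else 0 for w in wants_to_send]
    # Transmission period: stations with a set bit transmit in slot order,
    # so everyone agrees on who goes next and no collision can occur.
    return [i for i, bit in enumerate(bitmap) if bit]

# Hypothetical 8-station LAN where stations 1, 3 and 7 have frames.
order = bitmap_round([False, True, False, True, False, False, False, True])
print(order)   # [1, 3, 7]
```

Note the bandwidth cost: all N contention slots are spent every round even when, as here, only three stations have anything to send.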
Binary countdown
The binary countdown protocol overcomes the bit-map protocol's overhead of one contention bit per station. In binary countdown, binary station addresses are used. A station wanting to use the channel broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be of the same length.
How It Works:
1. Unique Binary Addresses:
Each station is assigned a unique binary address of the same length (say 4 bits).
2. Simultaneous Transmission Attempt:
When multiple stations want to transmit, they simultaneously send their addresses bit-by-bit, starting from the most significant bit (MSB).
3. Bitwise Arbitration:
o At each bit position, all stations transmit their bit.
o If a station transmits a 0 but detects a 1 on the line, it immediately drops out, since another station has a higher priority (bit value 1 > 0).
o Stations continue this process for all bits until only one remains.
4. Winner Transmits:
The station with the highest binary address wins the arbitration and gains the right to transmit its data frame.
5. Repeat Cycle:
After the winning station transmits, the process repeats for the next round of contention.
Example:
Assume 3 stations want to transmit, with binary addresses:
 A = 1001
 B = 1010
 C = 1100
Bit-wise comparison:
MSB → 1 (all continue)
Next → 0 0 1 → A & B drop (they sent 0, heard 1)
Winner → C (1100)
Advantages:
 No collisions
 Deterministic – always one winner
 Efficient for networks with limited and known number of stations
❌ Disadvantages:
 Not fair – higher-address stations always win
 Requires unique binary addresses and synchronization
 Scalability: Bit-length increases with number of stations
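The bitwise arbitration can be simulated directly. The wired-OR behaviour of the bus is modelled with `max` over the contenders' current bits, and the addresses are the ones from the example above:

```python
def binary_countdown(addresses, bits=4):
    # Each station broadcasts its address MSB-first. A station that sends
    # a 0 while the wired-OR of the bus reads 1 drops out of contention.
    contenders = set(addresses)
    for i in range(bits - 1, -1, -1):                 # MSB to LSB
        bus = max((a >> i) & 1 for a in contenders)   # wired-OR of the bus
        contenders = {a for a in contenders if (a >> i) & 1 == bus}
    return contenders.pop()                           # highest address wins

# Stations from the example: A = 1001, B = 1010, C = 1100
winner = binary_countdown([0b1001, 0b1010, 0b1100])
print(format(winner, '04b'))   # 1100 -> station C wins
```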

MLMA Limited Contention Protocols

MLMA (Multi-Level Multi-Access) is a limited-contention protocol that aims to reduce contention in shared communication channels by limiting how many stations can attempt to transmit at the same time.

Key Points:
 These protocols limit the number of stations allowed to compete for the channel simultaneously.
 The channel is divided into multiple logical channels or time slots, reducing collisions.
 Stations are assigned logical groups or priorities and allowed to transmit in a controlled manner.
 Contention is restricted to a smaller subset of stations at a time, improving efficiency.

Adaptive Tree Walk Protocol

A method to resolve collisions by recursively splitting contending stations into smaller groups and testing them one by one until the transmitting
station is found.

 Purpose: Efficiently resolve collisions when multiple stations attempt to transmit simultaneously.

 How it works:

 When a collision occurs, the group of contending stations is split into smaller subsets (like branches of a tree).
 The protocol tests subsets sequentially or adaptively to find which subset has stations ready to transmit.
 This "walking" through the tree continues recursively, reducing the number of contenders until the transmitting station is identified.
 The "adaptive" part means the protocol dynamically adjusts the subdivision based on contention.

 Advantage: Reduces collision overhead and channel idle time by quickly isolating the transmitting station.

Adaptive Tree Walk Protocol fig (1.3)

Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 try to send), conflict
Slot-1 : C* (all nodes under node 1 can try), C sends
Slot-2 : E*, F*, H* (all nodes under node 2 can try), conflict
Slot-3 : E*, F* (all nodes under node 5 can try), conflict
Slot-4 : E* (all nodes under E can try), E sends
Slot-5 : F* (all nodes under F can try), F sends
Slot-6 : H* (all nodes under node 6 can try), H sends.
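The recursive probing can be sketched as follows. Mapping stations A–H to indices 0–7 (an assumed labelling, so C, E, F, H from fig (1.3) become 2, 4, 5, 7) reproduces the seven slots of the example:

```python
def tree_walk(ready, lo, hi, slots):
    # ready: set of station indices with a frame; probe the range [lo, hi)
    group = [s for s in ready if lo <= s < hi]
    slots.append((lo, hi, len(group)))   # one probe consumes one slot
    if len(group) <= 1:
        return                           # idle slot, or a single station sends
    mid = (lo + hi) // 2                 # collision: split the group in two
    tree_walk(ready, lo, mid, slots)
    tree_walk(ready, mid, hi, slots)

slots = []
tree_walk({2, 4, 5, 7}, 0, 8, slots)     # stations C, E, F, H from fig (1.3)
for lo, hi, n in slots:
    print(f"probe [{lo},{hi}): {n} station(s)")
```

Running this yields seven probes whose group sizes (4, 1, 3, 2, 1, 1, 1) match Slot-0 through Slot-6 in the example above.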

IEEE 802 Series Standards

The IEEE 802 series, developed by the Institute of Electrical and Electronics Engineers (IEEE), is a family of standards specifically for LAN
(Local Area Networks) and MAN (Metropolitan Area Networks). These standards operate primarily at the Data Link Layer (Layer 2) and
Physical Layer (Layer 1) of the OSI model.

Key 802 Standards:

 802.1 — Bridges and Network Management


Covers network management and standards for bridging and network architecture.
 802.2 — Logical Link Control (LLC)
Defines the upper part of the Data Link Layer, providing flow and error control.
 802.3 — Ethernet (Wired LAN)
Defines wired Ethernet standards (10 Mbps to multi-Gbps speeds).
 802.4 — Token Bus (obsolete)
Token passing over a bus topology.
 802.5 — Token Ring
Token-passing ring network standard.
 802.11 — Wireless LAN (Wi-Fi)
Standards for wireless local area networking.
 802.15 — Wireless Personal Area Networks (WPAN)
Includes Bluetooth and other short-range wireless standards.
 802.16 — Wireless Metropolitan Area Networks (WiMAX)
Broadband wireless access over metropolitan areas.
 802.17 — Resilient Packet Ring
For optical ring networks.
 802.20 — Mobile Broadband Wireless Access
Mobile broadband networks.
 802.22 — Wireless Regional Area Networks
Wireless networks over large areas using TV white spaces.
Unit 5

Transport Layer:

The Transport Layer is the fourth layer in the OSI (Open Systems Interconnection) model and is responsible for end-to-end communication,
reliability, and flow control between devices in a network. Designing this layer involves addressing several key issues to ensure efficient and
reliable data transmission.

Design Issues,

 Reliability -- Maintaining reliability over an unreliable network.
 Flow Control -- Balancing sender speed and receiver capacity.
 Congestion Control -- Adapting to changing network conditions to maintain performance.
 Segmentation and Reassembly -- Ensuring correct order and completeness of data segments.
 End-to-End Communication -- Handling communication across multiple hops and heterogeneous networks.
 Error Control -- Providing error recovery with minimal overhead.

UDP: Header Format,

User Datagram Protocol (UDP) is a Transport Layer protocol and part of the Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable, connectionless protocol, so there is no need to establish a connection before data transfer. UDP helps establish low-latency and loss-tolerating connections over the network and enables process-to-process communication.
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes to 60 bytes. The first 8 Bytes contain all necessary header information
and the remaining part consists of data. UDP port number fields are each 16 bits long, therefore the range for port numbers is defined from 0 to 65535; port
number 0 is reserved. Port numbers help to distinguish different user requests or processes.

UDP Header

 Source Port: Source Port is a 2 Byte long field used to identify the port number of the source.
 Destination Port: It is a 2 Byte long field, used to identify the port of the destined packet.
 Length: Length is the length of UDP including the header and the data. It is a 16-bits field.
 Checksum: Checksum is 2 Bytes long field. It is the 16-bit one's complement of the one's complement sum of the UDP header, the pseudo-header of
information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
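Packing the four 16-bit fields shows the fixed 8-byte layout. A zero checksum (legal in IPv4, meaning "checksum not computed") keeps the sketch short, and the port numbers and payload are arbitrary:

```python
import struct

def udp_header(src_port, dst_port, payload):
    length = 8 + len(payload)   # Length field covers header (8 bytes) + data
    checksum = 0                # 0 means checksum not computed (allowed in IPv4)
    # '!HHHH' = four big-endian unsigned 16-bit fields:
    # source port, destination port, length, checksum
    return struct.pack('!HHHH', src_port, dst_port, length, checksum)

hdr = udp_header(12345, 53, b'example-dns-query')
print(len(hdr), hdr.hex())   # 8 3039003500190000
```

Reading the hex dump back: `3039` is source port 12345, `0035` is destination port 53, `0019` is length 25 (8-byte header + 17-byte payload), and `0000` is the unset checksum.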
TCP: Connection Management
TCP (Transmission Control Protocol) is a connection-oriented protocol, meaning it establishes a reliable connection between sender and receiver before data
transmission begins. The connection management in TCP involves three main phases:
🔹 1. Connection Establishment (Three-Way Handshake)
This is used to synchronize sequence numbers and establish connection parameters between client and server.
🧱 Steps:
1. SYN → Client sends a SYN (synchronize) packet to the server with an initial sequence number.
2. SYN-ACK → Server responds with SYN-ACK, acknowledging the client's SYN and sending its own SYN.
3. ACK → Client sends an ACK, acknowledging the server's SYN.
✅ Connection is now established, and data transfer can begin.
🔸
2. Data Transfer
 After the connection is established, both sides can send and receive data.
 Data is transmitted in segments, each with a sequence number and acknowledgment.
 TCP ensures:
o Reliable delivery
o Ordered data
o Flow and congestion control
🔹 3. Connection Termination (Four-Way Handshake)
Either the client or server can initiate termination. It uses a four-segment exchange to close the connection gracefully.
🧱 Steps:
1. FIN → One side sends a FIN to signal it wants to close.
2. ACK → The other side acknowledges the FIN.
3. FIN → The second side sends its own FIN.
4. ACK → The first side acknowledges.
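The three handshake segments and their sequence/acknowledgement numbers can be traced with a small sketch. The initial sequence numbers 100 and 300 are arbitrary; note that a SYN consumes one sequence number, which is why each acknowledgement is ISN + 1:

```python
def three_way_handshake(client_isn, server_isn):
    # Each tuple: (sender, flags, seq, ack). ack=None means the ACK flag
    # is not set (only the very first SYN carries no acknowledgement).
    return [
        ('client', 'SYN',     client_isn,     None),            # step 1
        ('server', 'SYN-ACK', server_isn,     client_isn + 1),  # step 2
        ('client', 'ACK',     client_isn + 1, server_isn + 1),  # step 3
    ]

for who, flags, seq, ack in three_way_handshake(100, 300):
    print(f"{who:6} {flags:7} seq={seq} ack={ack}")
```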
What is Flow Control?
Flow control is a technique used to regulate the flow of data between different nodes in a network. It ensures that a sender does not overwhelm a receiver with
too much data too quickly. The goal of flow control is to prevent buffer overflow, which can lead to dropped packets and poor network performance.
Advantages of Flow Control
 Prevents buffer overflow: Flow control prevents buffer overflow by regulating the rate at which data is sent from the sender to the receiver.
 Helps in handling different data rates: Flow control helps in handling different data rates by regulating the flow of data to match the capacity of the
receiving device.
 Efficient use of network resources: Flow control helps in the efficient use of network resources by avoiding packet loss and reducing the need for
retransmissions.
Disadvantages of Flow Control
 May cause delays: Flow control may cause delays in data transmission as it regulates the rate of data flow.
 May not be effective in congested networks: Flow control may not be effective in congested networks where the congestion is caused by multiple
sources.
 May require additional hardware or software: Flow control may require additional hardware or software to implement the flow control mechanism.

What is Congestion Control?


Congestion control is a technique used to prevent congestion in a network. Congestion occurs when too much data is being sent over a network, and the network
becomes overloaded, leading to dropped packets and poor network performance.
Slow Start Phase
 Purpose: To probe the network capacity without overloading it at the beginning of the connection.
 How it works:
o TCP starts with a small Congestion Window (cwnd) (typically 1 MSS – Maximum Segment Size).
o cwnd increases by 1 MSS for every ACK received, which doubles cwnd every round-trip time (exponential growth).
o This continues until it reaches a threshold value called ssthresh (Slow Start Threshold).
✅ Result: Rapid increase in sending rate to quickly utilize available bandwidth.
🚧 2. Congestion Avoidance Phase
 Purpose: To avoid congestion once the network capacity is roughly estimated.
 How it works:
o When cwnd reaches ssthresh, TCP enters Congestion Avoidance.
o In this phase, cwnd increases linearly: by 1 MSS per Round Trip Time (RTT).
o This slower growth reduces the risk of congestion.
✅ Result: Controlled, gradual increase to optimize throughput and stability.
🚨 3. Congestion Detection Phase
 Purpose: To respond to signs of network congestion (typically packet loss).
 How it works:
o Loss detected by timeout:
 Set ssthresh = cwnd / 2,
 Reset cwnd = 1 MSS,
 Re-enter Slow Start.
o Loss detected by 3 duplicate ACKs (fast retransmit):
 Set ssthresh = cwnd / 2,
 cwnd = ssthresh,
 Enter Congestion Avoidance (called Fast Recovery).
✅ Result: Quickly reduces the sending rate to relieve network congestion.
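The three phases can be traced with a simplified cwnd model (values in MSS units, one entry per RTT). The `ssthresh = 8` starting value and the loss round are made-up parameters, and real TCP behaviour is more intricate than this sketch:

```python
def tcp_cwnd_trace(ssthresh=8, rounds=12, loss_at=None):
    # cwnd in MSS units, one trace entry per RTT; loss_at marks a timeout RTT.
    cwnd, trace = 1, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if loss_at is not None and rtt == loss_at:
            ssthresh = max(cwnd // 2, 1)          # ssthresh = cwnd / 2
            cwnd = 1                              # timeout -> back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)        # slow start: exponential growth
        else:
            cwnd += 1                             # congestion avoidance: linear
    return trace

print(tcp_cwnd_trace())            # [1, 2, 4, 8, 9, 10, 11, 12, 13, 14, 15, 16]
print(tcp_cwnd_trace(loss_at=5))   # [1, 2, 4, 8, 9, 10, 1, 2, 4, 5, 6, 7]
```

The first trace shows doubling up to ssthresh followed by +1 per RTT; the second shows the timeout reaction (halved ssthresh, cwnd reset to 1, slow start restarted).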

Advantages of Congestion Control


 Prevents network congestion: Congestion control prevents network congestion by regulating the rate at which data is sent from the sender to the
receiver.
 Efficient use of network resources: Congestion control helps in efficient use of network resources by reducing the number of lost packets and
retransmissions.
 Fair allocation of network resources: Congestion control ensures a fair allocation of network resources by regulating the rate of data flow for all
sources.
Disadvantages of Congestion Control
 May cause delays: Congestion control may cause delays in data transmission as it regulates the rate of data flow.
 May require additional hardware or software: Congestion control may require additional hardware or software to implement the congestion control
mechanism.
 May lead to underutilization of network resources: Congestion control may lead to underutilization of network resources if the congestion is not
severe.

TCP Header Format (Explained Simply)

The TCP header is a structured set of fields that carry control information for reliable data transfer. It has a minimum size of 20 bytes, and can expand (up to 60
bytes) if options are used.

TCP Segment structure -


A TCP segment consists of the data bytes to be sent and a header that TCP adds to the data, as shown:

The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are reserved for options. If there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes.
Header fields:

 Source Port Address -
A 16-bit field that holds the port address of the application that is sending the data segment.

 Destination Port Address -
A 16-bit field that holds the port address of the application in the host that is receiving the data segment.

 Sequence Number -
A 32-bit field that holds the sequence number, i.e., the byte number of the first byte sent in that particular segment. It is used to reassemble the message at the receiving end when segments arrive out of order.

 Acknowledgement Number -
A 32-bit field that holds the acknowledgement number, i.e., the byte number that the receiver expects to receive next. It acknowledges that all previous bytes have been received successfully.
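The relationship between a segment's sequence number and the acknowledgement it triggers can be shown with a tiny illustrative helper (the function name is ours):

```python
# If a segment carries payload bytes numbered seq .. seq + length - 1,
# the receiver's next expected byte -- and hence its acknowledgement
# number -- is seq + length.
def next_ack(seq, payload_len):
    return seq + payload_len
```

For instance, a segment starting at byte 1000 and carrying 500 bytes of data is acknowledged with ACK = 1500.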

 Header Length (HLEN) -
A 4-bit field that gives the length of the TCP header as a number of 4-byte words. If the header is 20 bytes (the minimum TCP header length), this field holds 5 (because 5 x 4 = 20); at the maximum length of 60 bytes, it holds 15 (because 15 x 4 = 60). Hence, the value of this field is always between 5 and 15.
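The words-to-bytes arithmetic can be checked with a small sketch (the function name is ours):

```python
# HLEN counts the header length in 4-byte words; multiplying by 4
# gives the length in bytes. Legal values are 5 through 15.
def header_bytes(hlen_field):
    if not 5 <= hlen_field <= 15:
        raise ValueError("HLEN must be between 5 and 15")
    return hlen_field * 4
```

Here header_bytes(5) gives the minimum 20-byte header and header_bytes(15) the 60-byte maximum.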

 Control flags -
These are six 1-bit flags that control connection establishment, connection termination, connection abortion, flow control, mode of transfer, etc. Their functions are:

o URG: Urgent pointer field is valid
o ACK: Acknowledgement number is valid (used in case of cumulative acknowledgement)
o PSH: Request for push
o RST: Reset the connection
o SYN: Synchronize sequence numbers
o FIN: Terminate the connection
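In the header these flags occupy single bits. Assuming the standard bit layout (FIN in the least significant bit), they can be tested with bit masks; this helper and its name are illustrative:

```python
# Bit masks for the six TCP control flags (standard layout).
FIN = 0x01
SYN = 0x02
RST = 0x04
PSH = 0x08
ACK = 0x10
URG = 0x20

def decode_flags(flags_byte):
    """Return the names of the flags set in a 6-bit flags value."""
    names = [("URG", URG), ("ACK", ACK), ("PSH", PSH),
             ("RST", RST), ("SYN", SYN), ("FIN", FIN)]
    return [name for name, mask in names if flags_byte & mask]
```

For example, the second step of the three-way handshake carries SYN and ACK together, so decode_flags(0x12) returns ["ACK", "SYN"].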
 Window size -
This 16-bit field advertises the size of the receive window of the sender of this segment, in bytes, i.e., how much data it is currently willing to accept (used for flow control).

 Checksum -
This 16-bit field holds the checksum used for error control. Unlike in UDP, the checksum is mandatory in TCP.

 Urgent pointer -
This 16-bit field (valid only if the URG control flag is set) points to urgent data that must reach the receiving process as early as possible. The value of this field is added to the sequence number to get the byte number of the last urgent byte.
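To tie the fields together, the fixed 20-byte part of a header can be unpacked with Python's struct module. The field order and widths follow the layout described above; the helper name and the sample values are illustrative.

```python
import struct

def parse_tcp_header(raw):
    """Parse the fixed 20-byte part of a TCP header (network byte order)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": (offset_flags >> 12) * 4,  # HLEN: top 4 bits, in 4-byte words
        "flags": offset_flags & 0x3F,            # six control bits in the low bits
        "window": window, "checksum": checksum, "urgent_ptr": urg_ptr,
    }

# Build a sample header: HLEN = 5 (20 bytes), flags = 0x12 (SYN + ACK)
sample = struct.pack("!HHIIHHHH", 443, 51000, 1000, 2000,
                     (5 << 12) | 0x12, 65535, 0, 0)
```

Parsing the sample back reports header_len = 20 and flags = 0x12, matching the values packed in.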