Computer Network
Unit 1
Computer Network?
A computer network is a system that connects multiple independent computers so they can share information (data) and resources. Integrating computers and other devices allows
users to communicate more easily. A computer network is therefore a collection of two or more computer systems linked together. A network connection can be established using
either cable or wireless media, and both hardware and software are used to connect the computers and devices in any network.
Goals of a Computer Network:
1. Resource Sharing
Allow users to share hardware (printers, storage), software, and data.
2. Reliability
Ensure data is safely transmitted and received, even in case of failures.
3. Scalability
Support adding more users and devices without major changes.
4. Communication
Enable users to communicate via emails, chats, video calls, etc.
5. Cost Efficiency
Reduce cost by sharing expensive resources and using centralized management.
6. Remote Access
Allow users to access data and resources from anywhere.
Components of Data Communication:
Sender: The device or user that creates and sends the message.
Message: The information (data) to be communicated.
Medium: The physical or wireless path through which the message travels.
Protocol: A set of rules that govern data communication.
Receiver: The device or user that receives and interprets the message.
Computer network architecture refers to the physical and logical design of how computers are organized and how tasks are allocated in a network. The two main types of network
architecture are peer-to-peer and client-server architecture.
1. Peer-to-Peer Architecture
Definition: In a peer-to-peer network, all computers (called peers) are equal and can act as both clients and servers. Each device can share its own resources directly with
others.
Features:
o No central server.
o Each peer manages its own resources and security.
Advantages:
o Low cost and easy to set up; no dedicated server needed.
o Failure of one peer does not bring down the whole network.
Disadvantages:
o Hard to manage and back up as the network grows.
o Less secure, since each peer controls its own access.
2. Client-Server Architecture
Definition: In a client-server network, one or more central servers provide resources and services to multiple client devices.
Features:
o One or more central servers manage resources, security, and data.
Advantages:
o Centralized control, easier backup, and better security.
o Scales well to many clients.
Disadvantages:
o The server is a single point of failure.
o Dedicated server hardware and administration increase cost.
Network Types:
PAN (Personal Area Network): Connects personal devices in a small range (up to 10 meters).
- Pros: very portable, low cost, easy to set up
- Cons: very limited range, low speed
- Example: Bluetooth between phone & earbuds
LAN (Local Area Network): Connects devices in a home, school, or office.
- Pros: high speed, easy to manage, cost-effective
- Cons: limited to a small area, maintenance required
- Example: office network with a shared printer
1. Interface
Definition:
An interface in layered architecture is the boundary or point of interaction between two adjacent layers within the same system (host or device). It is the means by which an upper
layer requests services from the layer directly below it.
Purpose:
Enables modularity: Each layer can be designed, updated, or replaced independently as long as the interface remains consistent.
Characteristics:
Defined only between adjacent layers; specifies the operations available and the format of the data exchanged across the boundary.
2. Services
Definition:
A service is a set of well-defined operations and functionalities that a layer offers to the layer above it. It abstracts the internal complexities and provides a simplified interface to upper
layers.
Application Layer: Various application-specific services like file transfer, email, web browsing.
Connection-oriented services set up a dedicated path between the source and destination before data transfer begins. These services ensure that data is delivered in
the correct sequence and without errors. A handshake is used to establish the connection between sender and receiver: before data transmission starts, a dedicated
communication channel is created between the sender and the recipient. Because the connection is kept open until all data is successfully transferred, delivery is
dependable. One example is TCP (Transmission Control Protocol), which ensures error-free and accurate data packet delivery.
Key Characteristics:
Connection establishment (handshake) before data transfer and connection release afterwards.
In-order, reliable delivery of data.
Flow Control & Error Control: Ensures smooth data flow and corrects errors.
Pros:
Suitable for applications requiring high reliability (e.g., file transfers, web browsing).
Cons:
Uses more network resources as the connection is maintained during the session.
Connectionless services send data without establishing a dedicated connection between the source and destination. Each data packet is treated independently, so packets
may follow different paths to the destination, and there is no guarantee of delivery, ordering, or error correction. As a result, the service is quicker but less dependable.
UDP (User Datagram Protocol) is one example; it is frequently used for streaming, where speed matters more than reliability.
Key Characteristics:
No prior connection setup.
Delivery is not guaranteed; packets may arrive out of order, duplicated, or lost.
Pros:
Fast and low overhead; no connection setup delay.
Cons:
Unreliable: packets may be lost, duplicated, or arrive out of order.
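The connectionless service described above can be sketched with Python's standard socket module: a UDP datagram is sent with no handshake, and each datagram is independent. The loopback address and message below are illustrative choices, not part of any standard.

```python
import socket

# Receiver: bind a UDP socket to an ephemeral port on localhost.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
receiver.settimeout(2)
addr = receiver.getsockname()

# Sender: transmit a datagram with no prior connection setup (no handshake).
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", addr)

data, _ = receiver.recvfrom(1024)        # each datagram arrives (or not) on its own
sender.close()
receiver.close()
```

On loopback the datagram reliably arrives; over a real network it could be lost or reordered, and UDP would neither notice nor retransmit.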
The OSI (Open Systems Interconnection) Model is a set of rules that explains how different computer systems communicate over a network. OSI Model was developed by
the International Organization for Standardization (ISO). The OSI Model consists of 7 layers and each layer has specific functions and responsibilities. This layered approach makes it
easier for different devices and technologies to work together. OSI Model provides a clear structure for data transmission and managing network issues. The OSI Model is widely used as
a reference to understand how network systems function.
Principles of the Physical Layer: transmission media, bandwidth, data rate, and modulation.
Transmission media is the physical medium through which data is transmitted from one device to another within a network. These media can be wired or wireless. The choice of
medium depends on factors like distance, speed, and interference.
Unit 2
The data link layer has two sublayers: LLC (Logical Link Control) and MAC (Media Access Control).
The LLC sublayer deals with multiplexing and the flow of data among applications and other services; it is also responsible for
providing error messages and acknowledgments.
The MAC sublayer manages the device's interaction with the shared medium: it is responsible for addressing frames and controls physical media access. The data link layer
receives information in the form of packets from the network layer, divides the packets into frames, and sends those frames bit-by-bit to the
underlying physical layer.
Functions of the Data Link Layer:
1. Framing:
o Divides the bit stream from the network layer into manageable frames, adding headers and trailers.
2. Error Control:
o Uses mechanisms like checksums and CRC (Cyclic Redundancy Check) to detect errors.
o May also correct some errors using techniques like Hamming Code.
3. Flow Control:
o Prevents a fast sender from overwhelming a slow receiver (e.g., using stop-and-wait, sliding window).
4. Media Access Control:
o Determines which device can transmit when multiple devices share the same medium.
5. Physical Addressing:
o Adds the MAC addresses of the sender and receiver to each frame.
Flow Control?
Flow Control is a technique used in data communication to manage the rate of data transmission between a sender and a receiver. It ensures
that the sender does not overwhelm the receiver by sending data faster than the receiver can process and store it.
Stop-and-Wait: The sender transmits one frame and waits for an acknowledgment before sending the next frame.
Pros:
Simple to implement
Cons:
Very inefficient: the channel sits idle while the sender waits for each acknowledgment
In Go-Back-N, the sender can send multiple frames (up to a window size) without waiting for an acknowledgment. If an error occurs or a frame
is lost, the receiver discards that frame and all following frames. The sender then goes back and retransmits that frame and all subsequent
ones.
Pros:
Cons:
Wastes bandwidth if one frame is lost (all following frames are retransmitted)
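The Go-Back-N behaviour above can be illustrated with a small simulation. This is a sketch, not a real protocol implementation: the window size, frame count, and scripted loss of frame 1 are all made-up values, and timeouts are collapsed into one loop iteration.

```python
# Minimal Go-Back-N simulation: frame 1 is lost once, so the frames
# sent after it inside the window are discarded and must be resent.
WINDOW = 3                   # sender may have up to 3 unacknowledged frames
total = 6                    # frames 0..5 to deliver
lost_once = {1}              # the channel drops frame 1 the first time it is sent

delivered = []               # frames accepted, in order, by the receiver
sent_log = []                # every (re)transmission the sender performs
base = 0                     # sequence number of the oldest unacknowledged frame

while base < total:
    expected = base
    # The sender transmits the whole window without waiting for ACKs.
    for seq in range(base, min(base + WINDOW, total)):
        sent_log.append(seq)
        arrived = seq not in lost_once
        lost_once.discard(seq)           # a lost frame gets through on the retry
        if arrived and seq == expected:  # receiver accepts only in-order frames
            delivered.append(seq)
            expected += 1
        # frames arriving after a loss are out of order and simply discarded
    base = expected                      # cumulative ACK slides the window forward

print(delivered)             # [0, 1, 2, 3, 4, 5]
```

Note the bandwidth waste: frame 2 arrived intact on the first attempt but was discarded and transmitted twice, because Go-Back-N retransmits everything from the lost frame onward.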
Error control in the Data Link Layer ensures that data frames are transmitted accurately from sender to receiver. It is not mandatory but serves as an optimization
to detect and correct errors such as lost or corrupted frames. When errors occur, the receiver may not get the correct data, and the sender remains unaware. To
handle this, protocols use Automatic Repeat Request (ARQ) to detect errors and retransmit affected frames, ensuring reliable communication.
Ways of doing Error Control : There are basically two ways of doing Error control as given below :
1. Error Detection : Error detection, as the name suggests, simply means detection or identification of errors. These errors may occur
due to noise or any other impairments during transmission from transmitter to the receiver, in communication system. It is a class of
techniques for detecting garbled i.e. unclear and distorted data or messages.
2. Error Correction : Error correction, as the name suggests, simply means correction or solving or fixing of errors. It simply means
reconstruction and rehabilitation of original data that is error-free. But error correction method is very costly and very hard.
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single binary digit) of a
transmitted data unit is
altered during transmission, resulting in an incorrect or corrupted data unit.
Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is affected. Although multiple-bit
errors are relatively rare when compared to single-bit errors, they can still occur, particularly in high-noise or high-interference
digital environments.
Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst error. This error causes a
sequence of consecutive incorrect values.
Error Detection Techniques
1. Parity Check – Adds a parity bit (even/odd) to detect single-bit errors.
2. Checksum – Adds a value derived from data bits; receiver recalculates and compares.
3. Cyclic Redundancy Check (CRC) – Divides data by a polynomial; detects burst errors effectively.
Error Correction Techniques
1. Automatic Repeat reQuest (ARQ) – Detects errors and retransmits the affected frames, e.g.:
o Stop-and-Wait ARQ
o Go-Back-N ARQ
2. Forward Error Correction (FEC) – Adds redundant data so the receiver can correct errors without retransmission.
Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process involves dividing the data into equally sized
segments and using a 1's complement to calculate the sum of these segments. The calculated sum is then sent along with the data to the
receiver. At the receiver's end, the same process is repeated and if all zeroes are obtained in the sum, it means that the data is correct.
Sender Side:
The data is divided into k segments of m bits each.
The segments are added using 1's complement arithmetic.
The 1's complement of the sum is calculated — this becomes the checksum, which is sent along with the data.
Receiver Side:
All received segments and the checksum are added using 1's complement arithmetic.
The sum is complemented; if the result is zero, the data is accepted, otherwise it is rejected.
Advantages:
❌ Disadvantages:
May not detect all types of errors (e.g., if bits cancel each other out)
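The sender/receiver procedure above can be sketched in a few lines of Python. The 16-bit word size and the two sample data words are illustrative assumptions (the same scheme underlies the Internet checksum).

```python
def ones_complement_sum(words, bits=16):
    """Add words with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)  # fold the carry back in
    return total

def make_checksum(words):
    # Checksum = 1's complement of the 1's complement sum of the segments.
    return ~ones_complement_sum(words) & 0xFFFF

# Sender: two example 16-bit data segments plus their checksum.
data = [0x4500, 0x003C]
checksum = make_checksum(data)

# Receiver: summing all segments plus the checksum gives all 1s,
# so its complement is zero -> the data is accepted.
assert ones_complement_sum(data + [checksum]) == 0xFFFF
```

If a bit error changes one segment, the receiver's sum is no longer all 1s, but errors that cancel each other out (as noted above) can slip through.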
Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are appended to the end of the data unit so that the
resulting data unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is
assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
CRC Working
Example: Suppose the data to be sent is 1010000 and the divisor, written as a polynomial, is x³ + 1 (binary 1001).
Advantages:
Detects all burst errors up to the degree of the generator polynomial; widely used in practice (e.g., Ethernet uses CRC-32).
❌ Disadvantages:
Can only detect errors, not correct them, and adds computation and redundant bits.
Simplest Protocol (Noiseless Channel)
Designed assuming no errors occur during transmission (no lost, corrupted, or duplicated frames).
Mainly used for theoretical purposes or as a foundation for more complex protocols.
Not practical for real-world communication since channels usually have noise.
Hybrid ARQ (HARQ): Hybrid Automatic Repeat reQuest (HARQ) is a protocol that combines ARQ (Automatic Repeat reQuest) and FEC
(Forward Error Correction) to improve the reliability and efficiency of data transmission, especially over unreliable or noisy communication
channels (e.g., wireless networks, mobile networks).
ARQ alone: Sends data and waits for an acknowledgment (ACK). If an error is detected (e.g., via a checksum), the receiver asks for a
retransmission.
FEC alone: Adds redundant bits to allow the receiver to detect and correct errors without asking for a retransmission.
HARQ: Uses FEC to correct errors when possible, and requests retransmission only if correction fails—reducing retransmissions and
improving performance.
How HARQ Works:
1. The sender encodes the data with FEC redundancy and transmits the packet.
2. The receiver attempts to decode and correct any errors.
3. If decoding succeeds → sends ACK (acknowledgment).
4. If decoding fails → stores the received data and sends NACK (negative acknowledgment), prompting a retransmission.
5. The receiver combines the original and retransmitted packets to improve the chances of successful decoding.
Types of HARQ:
Type I HARQ – Combines FEC and ARQ directly. If error correction fails, the whole packet is retransmitted.
Type II HARQ – On failure, additional redundancy bits are sent instead of the full packet. The receiver combines them with the original.
Type III HARQ – Like Type II, but the receiver can decode each retransmission independently, allowing even more flexibility and efficiency.
🔹 Advantages of HARQ
Better suited for wireless and mobile networks, where errors are common.
Adaptive: can use soft combining techniques like Chase Combining or Incremental Redundancy to improve decoding.
Protocol verification: Finite State Machine Models & Petri net models.
Address Resolution Protocol (ARP) is a communication protocol used for discovering the physical address associated with a given network address.
Typically, ARP is a network-layer-to-data-link-layer mapping process, used to discover the MAC address for a given Internet Protocol address.
To send data to a destination, having its IP address is necessary but not sufficient; we also need the physical address of the destination
machine. ARP is used to obtain the physical (MAC) address of the destination machine.
How ARP Works:
1. A host wants to send a packet to another device on the same LAN.
2. It knows the IP address of the target but not the MAC address.
3. The sender broadcasts an ARP request containing the target's IP address.
4. The host that owns that IP address replies with an ARP reply containing its MAC address.
5. The sender stores this IP–MAC mapping in its ARP cache and proceeds to send the packet.
How RARP (Reverse ARP) Works:
1. A diskless machine, which knows only its MAC address, sends a RARP request.
2. A RARP server on the same LAN looks up the MAC in its table.
3. The server replies with the corresponding IP address, which the machine then uses to configure itself.
Unit 4
Network Layer
The Network Layer is the 5th layer from the top and the 3rd layer from the bottom of the OSI Model. It is one of the most important layers and plays a key role in data transmission.
Its main job is to maintain the quality of the data and move it from its source to its destination. It also handles routing: it chooses the best
path to transmit the data from source to destination, rather than just forwarding packets. Several important protocols work at this layer.
The Network Layer is responsible for the delivery of packets from the source host to the destination host across multiple interconnected networks. It handles logical addressing, routing,
fragmentation, and error handling.
Functions of Network Layer
Logical Addressing – Assigns unique IP addresses to devices.
Routing – Selects the best path for data to reach its destination.
Packet Forwarding – Moves packets between devices based on IP.
Fragmentation/Reassembly – Splits large packets and reassembles them.
Error Handling – Uses ICMP to report errors and diagnostics.
Traffic Control – Manages traffic by avoiding network congestion through flow control techniques.
Routing Algorithms?
Routing algorithms are methods used by routers to determine the best path (or route) for forwarding data packets from a source to a destination across interconnected networks. In
this process, a routing table is created that contains information about the routes data packets follow. Various routing algorithms are used to decide which route
an incoming data packet should take to reach its destination efficiently.
Properties of a Routing Algorithm:
1. Correctness – Ensures the algorithm computes valid, loop-free paths.
2. Simplicity – Easy to understand, implement, and manage.
3. Robustness – Continues to work correctly under failures or changes.
4. Stability – Avoids frequent or unnecessary route updates.
5. Efficiency – Finds the most optimal or least-cost path.
6. Convergence – Quickly updates routes after network changes.
7. Scalability – Performs efficiently in large, growing networks.
Adaptive Algorithms
These algorithms change their routing decisions whenever the network topology or traffic load changes. Also known as dynamic routing, they use dynamic information
such as current topology, load, and delay to select routes. Typical optimization parameters are distance, number of hops, and estimated transit time.
Dijkstra’s Algorithm
Dijkstra’s Algorithm is a greedy algorithm used to find the minimum distance from one node to all other nodes in a given graph. Here a node can be thought of as a router
and the graph as a network. It uses the weight of each edge, i.e., the distance between nodes, to find the minimum-distance route.
Algorithm:
1: Mark the source node current distance as 0 and all others as infinity.
2: Set the node with the smallest current distance among the non-visited nodes as the current node.
3: For each neighbor, N, of the current node:
Calculate the potential new distance by adding the current distance of the current node with the weight of the edge connecting the current node to N.
If the potential new distance is smaller than the current distance of node N, update N's current distance with the new distance.
4: Mark the current node as visited.
5: If any unvisited node remains, go to step 2 to pick the node with the smallest current distance, and continue this process.
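The steps above can be sketched in Python with a priority queue. The 4-router graph below is an illustrative example, not taken from the figure in these notes.

```python
import heapq

def dijkstra(graph, source):
    """Shortest distance from source to every node; graph[node] = {neighbor: weight}."""
    dist = {node: float("inf") for node in graph}   # step 1: all distances infinity...
    dist[source] = 0                                # ...except the source, which is 0
    heap = [(0, source)]                            # (current distance, node)
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)               # step 2: smallest-distance unvisited node
        if node in visited:
            continue
        visited.add(node)                           # step 4: mark current node visited
        for neighbor, weight in graph[node].items():
            new_dist = d + weight                   # step 3: potential new distance
            if new_dist < dist[neighbor]:
                dist[neighbor] = new_dist           # update if the new path is shorter
                heapq.heappush(heap, (new_dist, neighbor))
    return dist

# A small 4-router network as an adjacency map with link costs.
graph = {0: {1: 4, 2: 1}, 1: {3: 1}, 2: {1: 2, 3: 5}, 3: {}}
assert dijkstra(graph, 0) == {0: 0, 1: 3, 2: 1, 3: 4}
```

Note how the direct link 0→1 (cost 4) loses to the path 0→2→1 (cost 3), which is exactly the relaxation step in action.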
Example:
Consider a graph G with source node 0 (figure omitted). Starting at node 0, we first relax its nearest neighbours, nodes 1 and 2. We then repeatedly pick the unvisited node with the smallest current distance and relax its neighbours, taking care not to form a cycle and keeping track of the visited nodes, until every node has been finalized.
Advantages:
Finds the most efficient path.
Works well for networks with static topology.
Guarantees optimal solution.
Limitations:
Cannot handle negative edge weights
Requires complete knowledge of the network (link-state info)
Bellman–Ford Algorithm
The Bellman–Ford algorithm finds the shortest paths from a single source by relaxing every edge repeatedly (V − 1 times, where V is the number of nodes). Unlike Dijkstra's algorithm, it can handle negative edge weights, and one extra relaxation pass detects negative cycles.
Example: Consider a graph G (figure omitted).
Outcome: The graph contains a negative cycle in the path from node D to node F and then to node E.
Advantages:
Handles negative edge weights
Can detect negative cycles
Simpler to implement than Dijkstra
Limitations:
Slower than Dijkstra’s algorithm
Doesn’t work if negative cycles exist (infinite shortest path)
Broadcast Routing
Broadcast routing plays an important role in computer networking and telecommunications. It involves transmitting data, messages, or signals from one source to all destinations within a network.
Unlike unicast routing (one-to-one communication) or multicast routing (one-to-many communication), broadcast routing ensures that information reaches all devices or nodes within the
network.
Broadcasting in computer networks is a communication mechanism that allows a message to be received by all the nodes of a network. (In everyday usage, the term broadcast refers
to the transmission of radio or television signals.)
Every broadcast signal is stopped at the layer-3 network layer of the OSI model, or, more practically, at the router. A concrete example of broadcasting is the Address Resolution
Protocol (ARP) request: whenever a host needs to resolve an IP address to its corresponding MAC address, it broadcasts a signal asking "Who does this IP address belong to?";
this broadcast is received by every node in the network domain, and the appropriate node responds accordingly.
Key Points on Broadcasting
Data is sent to all the nodes/stations in the network domain.
A special broadcast address exists for every network and is used to receive broadcast messages.
Not every device wants to receive broadcast messages.
It generates the most network traffic because the broadcast message is sent to every node in the network.
It is less secure: a sensitive message shouldn't be sent to everyone, which should be kept in mind before broadcasting a message.
Examples : Address Resolution Protocol (ARP) requests, Dynamic Host Configuration Protocol (DHCP) requests.
Pros (Advantages)
1. Message to All – Delivers data to every node without knowing individual addresses.
2. Simple to Implement – Easy to configure as it requires no complex routing logic.
3. Used for Discovery – Ideal for protocols like ARP or DHCP that need to find other devices.
4. No Need for Target Info – Works without needing destination-specific information.
❌ Cons (Disadvantages)
1. Broadcast Storms – Excessive broadcasts can overwhelm the network.
2. Inefficient – Wastes bandwidth by sending data to all nodes, even if they don’t need it.
3. Redundant Transmissions – Can lead to repeated delivery of the same packet.
4. Not Scalable – Becomes problematic in large or complex networks.
5. Security Risks – Exposes data to all devices, increasing vulnerability.
Multicast Routing.
Multicast is a method of group communication in which the sender transmits data to multiple receivers or nodes in the network simultaneously. Multicasting supports
one-to-many and many-to-many communication, allowing one or more senders to deliver data packets to multiple receivers at once across LANs or WANs.
This reduces network load, because a single transmission can be received by multiple nodes.
Multicasting can be considered a special case of broadcasting: it works similarly, but the information is sent only to the targeted or
specific members of the network. This could be accomplished by transmitting an individual copy to each user or node, but sending
individual copies is inefficient and can increase network latency. To overcome these shortcomings, multicasting allows a single transmission
to be split up among multiple users, which reduces the bandwidth consumed.
Unit 3
MAC Sub layer: MAC Addressing
In computer networks, especially in the Data Link Layer (Layer 2) of the OSI model, the MAC (Media Access Control) sublayer plays a crucial role in managing how
devices access the shared medium. A core function of the MAC sublayer is MAC addressing.
A MAC (Media Access Control) address is a unique 48-bit hardware address assigned to a device's Network Interface Card (NIC) during manufacturing. It is also
known as the physical address and is used at the Data Link Layer by the MAC sublayer for local network communication.
MAC Address Format:
A MAC address is a 12-digit hexadecimal number (a 48-bit binary number), most often written in colon-hexadecimal notation.
12-digit hexadecimal (e.g., 00:1A:2B:3C:4D:5E)
First 6 digits: OUI (Organizationally Unique Identifier) – identifies the manufacturer
Last 6 digits: Unique to the device
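The format and the address types discussed in this section can be checked programmatically. The helper below is an illustrative sketch (the function name is mine); it uses the standard rules: the first three octets are the OUI, the all-ones address is broadcast, and an address whose first octet has its least significant bit set is multicast.

```python
def classify_mac(mac: str):
    """Split a MAC address into its OUI and classify it as
    unicast, multicast, or broadcast."""
    octets = [int(part, 16) for part in mac.split(":")]
    oui = mac.upper()[:8]                   # first 3 octets: the manufacturer's OUI
    if all(o == 0xFF for o in octets):
        kind = "broadcast"                  # FF:FF:FF:FF:FF:FF, all bits set to 1
    elif octets[0] & 1:                     # I/G bit: LSB of the first octet is 1
        kind = "multicast"
    else:
        kind = "unicast"
    return oui, kind

assert classify_mac("00:1A:2B:3C:4D:5E") == ("00:1A:2B", "unicast")
assert classify_mac("01:00:5E:00:00:01")[1] == "multicast"
assert classify_mac("FF:FF:FF:FF:FF:FF")[1] == "broadcast"
```

Note that the 01:00:5E prefix used for IP multicast has that least-significant bit set in its first octet, which is what marks it as a group address.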
Types of MAC Addresses
There are three main types of MAC (Media Access Control) addresses based on how they are used and assigned:
1. Unicast MAC Address
Definition: Identifies a single unique device on a network.
Use: For direct communication between two devices.
Example: A switch uses a unicast MAC address to forward a frame to a specific computer.
2. Multicast MAC Address
Definition: Used to send data to a group of devices, not just one.
Use: For services like video conferencing or streaming where multiple receivers are involved.
Address Pattern: Starts with 01:00:5E
Example: 01:00:5E:xx:xx:xx
3. Broadcast MAC Address
Definition: Used to send data to all devices on the local network.
Address: FF:FF:FF:FF:FF:FF (all bits set to 1)
Use: Common in protocols like ARP (Address Resolution Protocol)
Why MAC Address is Important
The MAC (Media Access Control) address is essential for enabling accurate and efficient communication between devices on a local area network (LAN).
Unique ID: Identifies each device on a network.
Local Communication: Enables data transfer within a LAN.
Used by Switches: For forwarding data to the correct device.
Supports ARP: Maps IP to MAC for proper delivery.
Security: Used in MAC filtering and network access control.
Built-in: No manual setup—pre-assigned in hardware.
CSMA
CSMA reduces collisions by requiring a station to sense the channel before transmitting. If the channel is idle, the station sends data; if busy, it waits. However,
collisions can still occur due to propagation delay—two stations may sense the channel as idle simultaneously and transmit, causing a collision.
How CSMA Works:
Before a station transmits, it listens (senses) the channel to check if it is free (no other station is transmitting).
If the channel is idle, the station transmits immediately.
If the channel is busy, the station waits until the channel becomes free before transmitting
Types of CSMA
1. 1-Persistent CSMA
How it works:
Node senses the channel; if idle, transmits immediately. If busy, keeps sensing continuously until channel is idle, then transmits right away.
Pros:
o Minimizes delay before transmission once channel is free.
o Simple and fast to send when the channel is idle.
Cons:
o High chance of collision if multiple nodes wait and transmit immediately when channel becomes free.
o Can cause congestion due to continuous sensing.
2. Non-Persistent CSMA
How it works:
Node senses the channel; if busy, waits a random time before sensing again instead of continuously sensing.
Pros:
o Reduces chance of collisions by randomizing retransmission attempts.
o Less channel congestion compared to 1-persistent.
Cons:
o Higher average delay due to random waiting.
o Less efficient channel utilization when the channel is free.
3. P-Persistent CSMA
How it works:
Used in time-slotted systems (like Wi-Fi). If channel is idle, transmit with probability p. Otherwise, wait for next slot and repeat.
Pros:
o Balances collision probability and transmission delay.
o Efficient for high traffic in slotted channels.
Cons:
o Requires synchronization of time slots.
o Choice of p affects performance; improper tuning can degrade throughput.
4. O-Persistent CSMA
How it works:
Nodes have a predetermined priority order. Each node waits for its turn to transmit when the medium is idle.
Pros:
o Collision-free transmission due to strict order.
o Predictable and fair access based on priority.
Cons:
o Complex to manage priorities.
o Lower flexibility; nodes with lower priority may suffer long delays.
CSMA/CD?
CSMA/CD is a network protocol used primarily in wired Ethernet networks to regulate how devices respond to data collisions on a shared communication
medium. It improves the basic CSMA mechanism by detecting collisions during transmission and reacting to them efficiently.
In a shared medium like early Ethernet (using hubs or coaxial cables), multiple devices may transmit at the same time, causing data collisions. CSMA/CD
reduces wasted bandwidth by detecting and managing these collisions.
1. Carrier Sensing
A device checks (listens to) the medium to see if another device is transmitting.
2. Transmission
If the medium is idle, the device starts transmitting its frame.
3. Collision Detection
While transmitting, the device continues to listen. If it detects a voltage change or interference, a collision has occurred.
4. Jam Signal
The device stops sending its data and instead sends a jam signal to inform all other devices that a collision has occurred.
5. Backoff Algorithm
Each device involved waits for a random time before attempting to retransmit. The waiting time increases exponentially after each collision (using
Binary Exponential Backoff).
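The backoff step can be sketched as follows. The slot time of 51.2 µs is the classic 10 Mbps Ethernet value; the helper name and the cap of 10 doublings (as in standard Ethernet) are noted assumptions of this sketch.

```python
import random

def backoff_wait(collision_count: int, slot_time_us: float = 51.2) -> float:
    """Binary exponential backoff: wait a random number of slot times
    drawn from [0, 2^k - 1], where k = min(collisions, 10)."""
    k = min(collision_count, 10)           # window growth is capped at 2^10 slots
    slots = random.randint(0, 2 ** k - 1)  # each contender picks independently
    return slots * slot_time_us            # waiting time in microseconds

random.seed(1)                             # deterministic for the demo
waits = [backoff_wait(n) for n in range(1, 5)]

# After each successive collision the possible waiting window doubles:
# 1st collision: 0-1 slots, 2nd: 0-3, 3rd: 0-7, 4th: 0-15, ...
assert all(0 <= w <= (2 ** min(n, 10) - 1) * 51.2
           for n, w in zip(range(1, 5), waits))
```

Doubling the window after every collision makes it increasingly unlikely that two stations pick the same slot again, which is what breaks repeated collisions.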
Example Scenario:
Two PCs on a shared hub both sense an idle cable and transmit at the same time; each detects the collision, sends a jam signal, backs off for a random time, and then retransmits.
Advantages of CSMA/CD:
Simple and efficient at low to moderate network load; formed the basis of classic (half-duplex) Ethernet.
Disadvantages:
Performance degrades under heavy load as collisions increase, and it cannot be used in wireless networks, where collisions are hard to detect.
CSMA/CA
Carrier sense multiple access with collision avoidance. Detecting a collision requires the sender to examine the signal it receives: if there is just one
signal (its own), the data was sent successfully, but if there are two signals (its own and the one it collided with), a collision has occurred.
To distinguish between these two cases, the collision must have a significant impact on the received signal. This is not the case in wireless networks, so CSMA/CA,
which avoids collisions instead of detecting them, is used there.
1. Carrier Sensing:
The station listens to the channel to check if it is idle.
2. Wait (Backoff):
If the channel is busy, the station waits for a random time (called backoff time).
3. Request to Send (RTS): (Optional, used in RTS/CTS mechanism)
The station sends an RTS to the access point or receiver.
4. Clear to Send (CTS): (Optional)
If the receiver is ready, it replies with a CTS.
5. Data Transmission:
After getting CTS (or if RTS/CTS is not used and the medium is idle), the station sends data.
6. Acknowledgement (ACK):
The receiver sends an ACK after successfully receiving the data.
Interframe Space (IFS): After sensing the medium idle, the station waits a short time (IFS) before transmitting, to avoid collisions due to propagation
delay. IFS varies by station priority.
Contention Window: Time is divided into slots; the station picks a random slot to wait before sending. If the medium is busy, the timer pauses and
resumes when idle again.
Acknowledgement: If no ACK is received before timeout, the sender retransmits the data.
Advantages:
Avoids most collisions before they happen, making it well suited to wireless networks.
Disadvantages:
Waiting times (IFS, backoff) and control frames (RTS/CTS, ACK) add overhead and delay.
Collision-Free Protocols
Even with CSMA/CD, collisions can still occur during the contention period. Collisions during the contention period adversely
affect system performance; this is especially true when the cable is long and packets are short, a problem that became serious as fiber-optic networks came
into use. Here we shall discuss some protocols that resolve contention without collisions.
Bit-map Protocol
Binary Countdown
Limited Contention Protocols
The Adaptive Tree Walk Protocol
Bit-map Protocol:
The bit-map protocol is a collision-free protocol. In this method, each contention period consists of exactly N slots, one per station. If station j has a frame to send,
it transmits a 1 bit in slot j; for example, station 2 announces that it has a queued frame by inserting a 1 bit into slot 2. In this way, every station gains
complete knowledge of which stations wish to transmit. There will never be any collisions because everyone agrees on who goes next. Protocols like this, in which
the desire to transmit is broadcast before the actual transmission, are called reservation protocols.
How It Works:
Time is divided into slots, and each station is assigned a unique bit position (or slot number).
A control frame called a bit-map is sent where each bit corresponds to one device.
If a station wants to transmit, it sets its corresponding bit to 1 in the bit-map.
After the bit-map frame is sent, each station transmits in the order of set bits (from lowest to highest).
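The reservation round described above reduces to a few lines of code. This is a toy sketch: it models only the bit-map and the resulting transmission order, not the actual slot timing.

```python
def bitmap_round(wants_to_send):
    """wants_to_send[i] is True if station i has a frame queued.
    Returns the bit-map and the collision-free transmission order."""
    bitmap = [1 if w else 0 for w in wants_to_send]     # each station marks its own slot
    order = [i for i, bit in enumerate(bitmap) if bit]  # lowest-numbered station first
    return bitmap, order

# Stations 1, 3, and 4 have frames queued; 0 and 2 are idle.
bitmap, order = bitmap_round([False, True, False, True, True])
assert bitmap == [0, 1, 0, 1, 1]
assert order == [1, 3, 4]    # transmissions follow bit position order, no collisions
```

The cost is also visible here: the two idle stations still consumed contention slots, which is the bandwidth waste noted in the disadvantages below.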
Advantages:
Collision-free: No two stations transmit at the same time
Fair: Each station gets a chance based on its bit position
❌ Disadvantages:
Wastes bandwidth if most stations are idle
Scalability issue: The bit-map grows as the number of devices increases
Binary countdown
The binary countdown protocol is used to overcome the bit-map protocol's overhead of one contention bit per station. In binary countdown, binary station addresses are used: a station
wanting to use the channel broadcasts its address as a binary bit string, starting with the high-order bit. All addresses are assumed to be the same length.
How It Works:
1. Unique Binary Addresses:
Each station is assigned a unique binary address of the same length (say 4 bits).
2. Simultaneous Transmission Attempt:
When multiple stations want to transmit, they simultaneously send their addresses bit-by-bit, starting from the most significant bit (MSB).
3. Bitwise Arbitration:
o At each bit position, all stations transmit their bit.
o If a station transmits a 0 but detects a 1 on the line, it immediately drops out, since another station has a higher priority (bit value 1 > 0).
o Stations continue this process for all bits until only one remains.
4. Winner Transmits:
The station with the highest binary address wins the arbitration and gains the right to transmit its data frame.
5. Repeat Cycle:
After the winning station transmits, the process repeats for the next round of contention.
Example:
Assume 3 stations want to transmit, with binary addresses:
A = 1001
B = 1010
C = 1100
Bit-wise comparison:
MSB → 1 (all continue)
Next → 0 0 1 → A & B drop (they sent 0, heard 1)
Winner → C (1100)
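The arbitration can be sketched as follows, assuming every station sees the wired-OR of all bits on the channel (the addresses reuse the example above):

```python
# Sketch of binary-countdown arbitration. Stations transmit their addresses
# MSB-first; a station that sends 0 but hears 1 drops out immediately.

def binary_countdown(addresses, width=4):
    contenders = set(addresses)
    for pos in range(width - 1, -1, -1):               # MSB first
        bus = max((a >> pos) & 1 for a in contenders)  # wired-OR of the bus
        if bus == 1:
            # Stations that sent 0 but heard 1 drop out of this round.
            contenders = {a for a in contenders if (a >> pos) & 1 == 1}
    (winner,) = contenders                             # exactly one remains
    return winner

# A = 1001, B = 1010, C = 1100 (the example above)
print(bin(binary_countdown([0b1001, 0b1010, 0b1100])))  # → 0b1100, C wins
```

Note that the winner is always the station with the highest address, which is exactly the fairness problem listed below.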
Advantages:
No collisions
Deterministic – always one winner
Efficient for networks with limited and known number of stations
❌ Disadvantages:
Not fair – higher-address stations always win
Requires unique binary addresses and synchronization
Scalability: Bit-length increases with number of stations
MLMA
MLMA stands for Multiple Logical Message Access: a family of protocols that aim to reduce contention on shared communication channels by limiting how many stations can attempt to transmit at the same time.
Key Points:
These protocols limit the number of stations allowed to compete for the channel simultaneously.
The channel is divided into multiple logical channels or time slots, reducing collisions.
Stations are assigned logical groups or priorities and allowed to transmit in a controlled manner.
Contention is restricted to a smaller subset of stations at a time, improving efficiency.
Adaptive Tree Walk Protocol
A method to resolve collisions by recursively splitting the contending stations into smaller groups and testing them one by one until the transmitting station is found.
Purpose: Efficiently resolve collisions when multiple stations attempt to transmit simultaneously.
How it works:
When a collision occurs, the group of contending stations is split into smaller subsets (like branches of a tree).
The protocol tests subsets sequentially or adaptively to find which subset has stations ready to transmit.
This "walking" through the tree continues recursively, reducing the number of contenders until the transmitting station is identified.
The "adaptive" part means the protocol dynamically adjusts the subdivision based on contention.
Advantage: Reduces collision overhead and channel idle time by quickly isolating the transmitting station.
Example: the stations are the leaves of a binary tree (root = node 0; nodes 1 and 2 below it; nodes 3–6 below those, with leaf stations A–H). Stations C, E, F, and H have frames to send (marked *):
Slot-0 : C*, E*, F*, H* (all ready nodes under node 0 may try), conflict
Slot-1 : C* (all ready nodes under node 1 may try), C sends
Slot-2 : E*, F*, H* (all ready nodes under node 2 may try), conflict
Slot-3 : E*, F* (all ready nodes under node 5 may try), conflict
Slot-4 : E* (node E alone may try), E sends
Slot-5 : F* (node F alone may try), F sends
Slot-6 : H* (all ready nodes under node 6 may try), H sends
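The slot sequence above can be reproduced with a small recursive sketch (station names and the ready set follow the example; splitting the group in half stands in for descending one level of the tree):

```python
# Sketch of the adaptive tree walk: recursively probe subsets of stations
# until each slot contains at most one ready station.

def tree_walk(group, ready, slots):
    slots.append(sorted(set(group) & ready))   # stations probed in this slot
    active = [s for s in group if s in ready]
    if len(active) > 1:                        # collision: split group in half
        mid = len(group) // 2
        tree_walk(group[:mid], ready, slots)
        tree_walk(group[mid:], ready, slots)

slots = []
tree_walk(list("ABCDEFGH"), set("CEFH"), slots)
for i, s in enumerate(slots):
    print(f"Slot-{i}: {s}")   # reproduces Slot-0 … Slot-6 above
```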
The IEEE 802 series, developed by the Institute of Electrical and Electronics Engineers (IEEE), is a family of standards specifically for LAN
(Local Area Networks) and MAN (Metropolitan Area Networks). These standards operate primarily at the Data Link Layer (Layer 2) and
Physical Layer (Layer 1) of the OSI model.
Transport Layer:
The Transport Layer is the fourth layer in the OSI (Open Systems Interconnection) model and is responsible for end-to-end communication,
reliability, and flow control between devices in a network. Designing this layer involves addressing several key issues to ensure efficient and
reliable data transmission.
Design issues include addressing, connection establishment and release, flow control and buffering, multiplexing, and crash recovery.
User Datagram Protocol (UDP) is a Transport Layer protocol and part of the Internet Protocol suite (the UDP/IP suite). Unlike TCP, it is an unreliable, connectionless protocol, so there is no need to establish a connection before data transfer. UDP provides low-latency, loss-tolerating communication over the network and enables process-to-process communication.
UDP Header
The UDP header is a fixed, simple 8-byte header, while the TCP header varies from 20 to 60 bytes. The first 8 bytes contain all necessary header information and the remaining part is data. The UDP port-number fields are each 16 bits long, so port numbers range from 0 to 65535; port number 0 is reserved. Port numbers help distinguish different user requests or processes.
Source Port: a 2-byte field that identifies the port number of the source.
Destination Port: a 2-byte field that identifies the port of the destination.
Length: a 16-bit field giving the length of the UDP datagram, including both header and data.
Checksum: Checksum is 2 Bytes long field. It is the 16-bit one's complement of the one's complement sum of the UDP header, the pseudo-header of
information from the IP header, and the data, padded with zero octets at the end (if necessary) to make a multiple of two octets.
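As a sketch, the fixed 8-byte header can be assembled with Python's struct module (the ports and payload are made-up example values; a checksum of 0 means "not computed", which IPv4 permits):

```python
import struct

# Build the 8-byte UDP header: source port, destination port, length, checksum,
# each a 16-bit field in network (big-endian) byte order.
src_port, dst_port = 12345, 53
payload = b"example"
length = 8 + len(payload)      # header (8 bytes) + data
checksum = 0                   # 0 = checksum not computed (allowed over IPv4)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header), length)     # → 8 15
```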
TCP: Connection Management
TCP (Transmission Control Protocol) is a connection-oriented protocol, meaning it establishes a reliable connection between sender and receiver before data
transmission begins. The connection management in TCP involves three main phases:
🔹 1. Connection Establishment (Three-Way Handshake)
This is used to synchronize sequence numbers and establish connection parameters between client and server.
🧱 Steps:
1. SYN → Client sends a SYN (synchronize) packet to the server with an initial sequence number.
2. SYN-ACK → Server responds with SYN-ACK, acknowledging the client's SYN and sending its own SYN.
3. ACK → Client sends an ACK, acknowledging the server's SYN.
✅ Connection is now established, and data transfer can begin.
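A small loopback example: the operating system performs the three-way handshake inside connect()/accept(), so no handshake code appears explicitly (host, port, and message are arbitrary example values):

```python
import socket
import threading

port_holder, ready = [], threading.Event()

def server():
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 0))               # any free local port
        srv.listen(1)
        port_holder.append(srv.getsockname()[1])
        ready.set()
        conn, _ = srv.accept()                   # SYN in, SYN-ACK out, ACK completes it
        with conn:
            conn.sendall(b"hello")

t = threading.Thread(target=server)
t.start()
ready.wait()
with socket.socket() as cli:
    cli.connect(("127.0.0.1", port_holder[0]))   # three-way handshake happens here
    data = cli.recv(5)
t.join()
print(data)  # → b'hello'
```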
🔸
2. Data Transfer
After the connection is established, both sides can send and receive data.
Data is transmitted in segments, each with a sequence number and acknowledgment.
TCP ensures:
o Reliable delivery
o Ordered data
o Flow and congestion control
🔹 3. Connection Termination (Four-Way Handshake)
Either the client or server can initiate termination. It uses a four-segment exchange to close the connection gracefully.
🧱 Steps:
1. FIN → One side sends a FIN to signal it wants to close.
2. ACK → The other side acknowledges the FIN.
3. FIN → The second side sends its own FIN.
4. ACK → The first side acknowledges.
What is Flow Control? (https://www.scaler.com/topics/computer-network/tcp-flow-control/)
Flow control is a technique used to regulate the flow of data between different nodes in a network. It ensures that a sender does not overwhelm a receiver with
too much data too quickly. The goal of flow control is to prevent buffer overflow, which can lead to dropped packets and poor network performance.
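A toy sketch of window-limited sending (the fixed segment size, static window, and "instant ACK" are simplifications for illustration, not real TCP behaviour):

```python
# Sketch of receiver-advertised flow control: the sender never has more
# unacknowledged bytes in flight than the receiver's window allows.

def send_with_window(data, window, chunk=4):
    in_flight, sent, i = 0, [], 0
    while i < len(data):
        if in_flight + chunk <= window:
            sent.append(data[i:i + chunk])   # transmit one segment
            in_flight += chunk
            i += chunk
        else:
            in_flight = 0                    # ACK arrives; window slides forward
    return sent

print(send_with_window(b"abcdefghij", window=8))  # → [b'abcd', b'efgh', b'ij']
```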
Advantages of Flow Control
Prevents buffer overflow: Flow control prevents buffer overflow by regulating the rate at which data is sent from the sender to the receiver.
Helps in handling different data rates: Flow control helps in handling different data rates by regulating the flow of data to match the capacity of the
receiving device.
Efficient use of network resources: Flow control helps in the efficient use of network resources by avoiding packet loss and reducing the need for
retransmissions.
Disadvantages of Flow Control
May cause delays: Flow control may cause delays in data transmission as it regulates the rate of data flow.
May not be effective in congested networks: Flow control may not be effective in congested networks where the congestion is caused by multiple
sources.
May require additional hardware or software: Flow control may require additional hardware or software to implement the flow control mechanism.
The TCP header is a structured set of fields that carry control information for reliable data transfer. It ranges from 20 to 60 bytes: 20 bytes of mandatory fields, plus up to 40 bytes of options. If no options are used, the header is exactly 20 bytes.
Header fields:
Sequence Number -
A 32-bit field that holds the sequence number, i.e., the byte number of the first byte sent in that particular segment. It is used to reassemble the message at the receiving end when segments arrive out of order.
Acknowledgement Number -
A 32-bit field that holds the acknowledgement number, i.e., the byte number that the receiver expects to receive next. It acknowledges that all previous bytes were received successfully.
Control flags -
These are six 1-bit control bits that govern connection establishment, connection termination, connection abortion, flow control, mode of transfer, etc. Their function is:
URG: the urgent pointer field is valid
ACK: the acknowledgement number field is valid
PSH: push the data to the receiving application without waiting to fill a buffer
RST: reset (abort) the connection
SYN: synchronize sequence numbers to establish a connection
FIN: the sender has finished sending data
Checksum -
This field holds the checksum for error control. It is mandatory in TCP as opposed to UDP.
Urgent pointer -
This field (valid only if the URG control flag is set) points to urgent data that must reach the receiving process as early as possible. Its value is added to the sequence number to obtain the byte number of the last urgent byte.
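As a sketch, the fixed 20-byte header layout described above can be packed and re-parsed with struct (all field values here are made-up examples):

```python
import struct

# Pack the 20-byte fixed TCP header in network byte order:
# src port, dst port, sequence, acknowledgement, data-offset+flags,
# window, checksum, urgent pointer.
raw = struct.pack("!HHIIHHHH",
                  4321, 80,           # source / destination port
                  1000, 2000,         # sequence / acknowledgement number
                  (5 << 12) | 0x18,   # data offset = 5 words; flags = ACK|PSH
                  65535, 0, 0)        # window, checksum, urgent pointer

src, dst, seq, ack, off_flags, win, chk, urg = struct.unpack("!HHIIHHHH", raw)
header_len = (off_flags >> 12) * 4    # data offset is in 32-bit words
flags = off_flags & 0x3F              # low 6 bits: URG ACK PSH RST SYN FIN
print(header_len, bin(flags))         # → 20 0b11000
```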