UNIT 2 Error Detection and Correction

This document covers the link layer of networking, focusing on error detection and correction techniques, multiple access protocols, and data transmission methods. It discusses various types of errors, redundancy, coding techniques, and protocols like Aloha, CSMA, and TDMA for managing data transmission in shared channels. Additionally, it explains the importance of ensuring data integrity and the mechanisms used to achieve this in network communications.


Unit 2

Introduction to the Link Layer:

Error-Detection and -Correction Techniques, Multiple Access Links and Protocols, Switched Local Area Networks, Link Virtualization: A Network as a Link Layer, Data Center Networking, Retrospective: A Day in the Life of a Web Page Request.
Note

Data can be corrupted during transmission. Some applications require that errors be detected and corrected.
10-1 INTRODUCTION

Let us first discuss some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:

Types of Errors
Redundancy
Detection Versus Correction
Forward Error Correction Versus Retransmission
Coding
Note

In a single-bit error, only 1 bit in the data unit has changed.

Figure 10.1 Single-bit error

Note

A burst error means that 2 or more bits in the data unit have changed.

Figure Burst error of length 8

Note

To detect or correct errors, we need to send extra (redundant) bits with the data.

Figure The structure of encoder and decoder
Figure XORing of two single bits or two words
10-2 BLOCK CODING

In block coding, we divide our message into blocks, each of k bits, called datawords. We add r redundant bits to each block to make the length n = k + r. The resulting n-bit blocks are called codewords.

Topics discussed in this section:

Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

Figure Datawords and codewords in block coding

Figure 10.6 Process of error detection in block coding
Note

The Hamming distance between two words is the number of positions in which the corresponding bits differ.

Example 10.4

Let us find the Hamming distance between two pairs of words.

1. The Hamming distance d(000, 011) is 2 because 000 ⊕ 011 = 011 (two 1s).

2. The Hamming distance d(10101, 11110) is 3 because 10101 ⊕ 11110 = 01011 (three 1s).
Note

The minimum Hamming distance is the smallest Hamming distance between all possible pairs in a set of words.

Example 10.5

Find the minimum Hamming distance of the coding scheme in Table 10.1.

Solution
We first find all Hamming distances. The dmin in this case is 2.

Example 10.6

Find the minimum Hamming distance of the coding scheme in Table 10.2.

Solution
We first find all the Hamming distances. The dmin in this case is 3.
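The pairwise search in Examples 10.5 and 10.6 can be sketched as follows, assuming the codewords of the textbook's Table 10.1 ({000, 011, 101, 110}) and Table 10.2 ({00000, 01011, 10101, 11110}):

```python
from itertools import combinations

def d_min(codewords):
    """Smallest Hamming distance over all pairs of codewords in a block code."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))

print(d_min(["000", "011", "101", "110"]))          # 2 (Example 10.5)
print(d_min(["00000", "01011", "10101", "11110"]))  # 3 (Example 10.6)
```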
Note

To guarantee the detection of up to s errors in all cases, the minimum Hamming distance in a block code must be dmin = s + 1.

Note

A simple parity-check code is a single-bit error-detecting code in which n = k + 1 with dmin = 2.

Note

A simple parity-check code can detect an odd number of errors.
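A minimal even-parity encoder and checker illustrates both notes above (function names are ours); an even number of bit errors passes undetected:

```python
def add_even_parity(dataword: str) -> str:
    """Append one parity bit so the codeword has an even number of 1s (n = k + 1)."""
    return dataword + str(dataword.count("1") % 2)

def parity_ok(codeword: str) -> bool:
    """A received codeword is accepted if its number of 1s is even."""
    return codeword.count("1") % 2 == 0

cw = add_even_parity("1011")    # "10111"
assert parity_ok(cw)            # no error: accepted
assert not parity_ok("10110")   # single-bit error (odd number of errors): caught
assert parity_ok("11110")       # double-bit error: slips through undetected
```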
10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one extra property. In a cyclic code, if a codeword is cyclically shifted (rotated), the result is another codeword.

Topics discussed in this section:

Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes

Figure 10.15 Division in the CRC encoder

Figure 10.16 Division in the CRC decoder for two cases
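The binary division pictured in the encoder and decoder figures can be sketched as mod-2 (XOR) long division; the dataword 1001 with generator 1011 (x³ + x + 1) follows the classic worked example:

```python
def mod2_div(bits: str, generator: str) -> str:
    """Mod-2 (XOR) long division; returns the r-bit remainder."""
    r = len(generator) - 1
    b = list(bits)
    for i in range(len(bits) - r):
        if b[i] == "1":                       # divide only where the leading bit is 1
            for j, g in enumerate(generator):
                b[i + j] = str(int(b[i + j]) ^ int(g))
    return "".join(b[-r:])

data, gen = "1001", "1011"
# encoder: append r zeros, divide, append the remainder (the CRC)
codeword = data + mod2_div(data + "0" * (len(gen) - 1), gen)
print(codeword)                          # 1001110
# decoder: divide the received word; syndrome 000 means "accept"
assert mod2_div(codeword, gen) == "000"
assert mod2_div("1000110", gen) != "000" # single-bit error: syndrome nonzero, caught
```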
Note

The divisor in a cyclic code is normally called the generator polynomial or simply the generator.

Note

In a cyclic code,
If s(x) ≠ 0, one or more bits is corrupted.
If s(x) = 0, either
a. No bit is corrupted, or
b. Some bits are corrupted, but the decoder failed to detect them.

Note

In a cyclic code, those errors e(x) that are divisible by g(x) are not caught.

Note

If the generator has more than one term and the coefficient of x^0 is 1, all single-bit errors can be caught.

Note

If a generator cannot divide x^t + 1 (t between 0 and n – 1), then all isolated double errors can be detected.
10-5 CHECKSUM

The last error detection method we discuss here is called the checksum. The checksum is used in the Internet by several protocols, although not at the data link layer. However, we briefly discuss it here to complete our discussion on error checking.

Topics discussed in this section:

Idea
One's Complement
Internet Checksum

Note

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one's complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

Note

Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one's complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the new checksum is 0, the message is accepted; otherwise, it is rejected.
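The sender and receiver steps above can be sketched directly; the 16-bit words here are a made-up example message:

```python
def internet_checksum(words):
    """One's-complement sum of 16-bit words, then complemented."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back around
    return ~total & 0xFFFF

data = [0x4500, 0x0073, 0x0000, 0x4000]   # hypothetical 16-bit message words
cks = internet_checksum(data)
# receiver side: the complemented sum over data + checksum must be 0 to accept
assert internet_checksum(data + [cks]) == 0
```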
Multiple Access Protocols in Computer Networks

• The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided into two sublayers: data link control and multiple access resolution/protocol.

• The upper sublayer is responsible for flow control and error control in the data link layer, and hence it is termed logical link control. The lower sublayer is used to handle and reduce collisions from multiple access on a channel, and hence it is termed media access control or multiple access resolution.
What is a multiple access protocol?

• When a sender and receiver have a dedicated link on which to transmit data packets, data link control alone is enough to handle the channel.

• Suppose there is no dedicated path to communicate or transfer the data between two devices. In that case, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk.

• Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk on the channel.

A. Random Access Protocol

• In this protocol, all stations have equal priority to send data over the channel. In random access protocols, no station depends on another station, and no station controls another.

• Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict.

• Due to a collision, data frames may be lost or changed, and hence are not received correctly at the receiver end.

• Following are the different random-access methods for broadcasting frames on the channel:

1. Aloha
2. CSMA
3. CSMA/CD
4. CSMA/CA
ALOHA Random Access Protocol

ALOHA was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.

Aloha Rules
• Any station can transmit data on the channel at any time.
• It does not require any carrier sensing.
• Collisions may occur and data frames may be lost when multiple stations transmit at once.
• Aloha relies on acknowledgement of frames; there is no collision detection.
• It requires retransmission of data after a random amount of time.
Pure Aloha

In pure Aloha, whenever a station has data available, it transmits on the channel without checking whether the channel is idle or busy, so collisions may occur and data frames can be lost. After transmitting a frame, the station waits for the receiver's acknowledgment. If no acknowledgment arrives within the specified time, the station assumes the frame has been lost or destroyed and waits for a random amount of time, called the backoff time (Tb), before retransmitting. It retransmits the frame until the data is successfully delivered to the receiver.
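The cost of transmitting blindly can be quantified with the standard pure-Aloha throughput result, S = G·e^(−2G) (a textbook formula, not stated on the slide itself); the backoff rule shown is one common assumed choice:

```python
import math
import random

def pure_aloha_throughput(G):
    """Standard result S = G * e^(-2G); peaks near 0.184 at G = 0.5."""
    return G * math.exp(-2 * G)

def backoff_time(k, t_fr):
    """One common (assumed) backoff rule: wait R * Tfr with R in [0, 2^k - 1]."""
    return random.randint(0, 2 ** k - 1) * t_fr

print(round(pure_aloha_throughput(0.5), 3))  # 0.184
```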
Pure Aloha
Slotted Aloha

Slotted Aloha was designed to improve the efficiency of pure Aloha, because pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame may be sent in each slot. If a station misses the beginning of a slot, it must wait until the beginning of the next slot. However, a collision is still possible when two or more stations try to send a frame at the beginning of the same time slot.
Slotted Aloha
CSMA (Carrier Sense Multiple Access)

CSMA is a media access protocol that senses the traffic on a channel (idle or busy) before transmitting data. If the channel is idle, the station can send its data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chance of collision on the transmission medium.
CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps sensing the channel continuously and transmits the frame unconditionally as soon as the channel becomes idle.

Non-Persistent: In this access mode of CSMA, each node must sense the channel before transmitting; if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random time (rather than sensing continuously) and, when the channel is then found to be idle, transmits its frame.
P-Persistent: This is a combination of the 1-persistent and non-persistent modes. In p-persistent mode, each node senses the channel, and if the channel is idle, it sends a frame with probability p. With probability q = 1 – p, it defers, waits for the next time slot, and repeats the process.

O-Persistent: In the o-persistent method, a transmission order (superiority) is defined among the stations before transmission on the shared channel. If the channel is found idle, each station waits for its assigned turn to transmit its data.

Figures: 1-persistent, p-persistent, and non-persistent CSMA
Comparison

| Parameter | 1-persistent CSMA | p-persistent CSMA | Non-persistent CSMA |
| --- | --- | --- | --- |
| Carrier sense | Sends with probability 1 when the channel is idle | Sends with probability p when the channel is idle | Sends when the channel is idle |
| Waiting | Continuously senses the channel (carrier) | Waits for the next time slot | Waits a random amount of time before checking the carrier again |
| Chance of collision | Highest | Lower than 1-persistent and non-persistent | Lower than 1-persistent but higher than p-persistent |
| Utilization | Above ALOHA, as frames are only sent when the channel is idle | Depends on the probability p | Above 1-persistent, as not all stations constantly check the channel at the same time |
| Delay at low load | Low, as frames are sent as soon as the channel becomes idle | Large when p is small, as a station will not always send when the channel is idle | Small, as a station sends whenever the channel is found idle, but longer than 1-persistent because of the random wait when busy |
| Delay at high load | High, due to collisions | Large when the probability p of sending is small | Longer than 1-persistent, as the channel is rechecked at random times |
CSMA/CD

CSMA/CD (carrier sense multiple access with collision detection) is a network protocol for transmitting data frames that works at the medium access control layer; it is used on wired media. A station first senses the shared channel before transmitting and, if the channel is idle, transmits a frame while monitoring whether the transmission is successful. If the frame is received successfully, the station can send the next frame. If a collision is detected, the station sends a jam signal on the shared channel to terminate the transmission, then waits a random time before attempting to send the frame again.

For a station to detect a collision, its frame must still be in transmission when the collision signal returns:

Transmission Time >= 2 × Propagation Delay
Frame Length / Bandwidth >= 2 × Propagation Delay
Frame Length >= 2 × Propagation Delay × Bandwidth
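The last inequality can be checked with concrete (assumed) numbers; with a 10 Mbps link and 25.6 µs maximum propagation delay it reproduces the classic Ethernet minimum frame of 512 bits (64 bytes):

```python
bandwidth = 10_000_000     # 10 Mbps (assumed example link)
prop_delay = 25.6e-6       # 25.6 us maximum end-to-end propagation delay (assumed)

# Frame Length >= 2 * Propagation Delay * Bandwidth
min_frame_bits = 2 * prop_delay * bandwidth
print(min_frame_bits)      # 512.0 bits, i.e. 64 bytes
```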
CSMA/CA

CSMA/CA (carrier sense multiple access with collision avoidance) is a network protocol for transmitting data frames that works at the medium access control layer; it is used on wireless media, where a sender cannot detect collisions directly while transmitting. After a data frame is sent on the channel, the sender uses acknowledgments to check whether the channel was clear. If the station receives only a single (its own) acknowledgment signal, the data frame has been successfully delivered to the receiver. But if it receives two signals (its own and one from a station whose frame collided with it), a collision has occurred on the shared channel; the sender thus detects the collision through the acknowledgment signals it receives.
B. Controlled Access Protocols

Controlled access is a method of reducing data-frame collisions on a shared channel. In the controlled access method, the stations consult one another, and a station may send a data frame only once it has been approved by the other stations. In other words, a single station cannot send data frames unless it is authorized by the other stations. There are three types of controlled access: Reservation, Polling, and Token Passing.
C. Channelization Protocols

Channelization protocols allow the total usable bandwidth of a shared channel to be shared across multiple stations based on time, frequency, or codes, so that all stations can access the channel to send their data frames.

Following are the various methods of accessing the channel based on time, frequency, and codes:
• FDMA (Frequency Division Multiple Access)
• TDMA (Time Division Multiple Access)
• CDMA (Code Division Multiple Access)
FDMA

Frequency division multiple access (FDMA) divides the available bandwidth into equal frequency bands so that multiple users can send data simultaneously, each through a different subchannel. Each station is reserved a particular band to prevent crosstalk between the channels and interference between stations.
TDMA

Time Division Multiple Access (TDMA) is a channel access method that allows the same frequency bandwidth to be shared across multiple stations. To avoid collisions on the shared channel, it divides the channel into time slots and allocates a slot to each station for transmitting its data frames. However, TDMA has a synchronization overhead: each station's time slot must be delimited by adding synchronization bits to each slot.
CDMA

• Code division multiple access (CDMA) is a channel access method in which all stations can send data over the same channel simultaneously. Each station may transmit data frames using the full bandwidth of the shared channel at all times; no division of the channel into time slots or frequency bands is required.

• If multiple stations send data on the channel simultaneously, their data frames are separated by unique code sequences: each station has a different code for transmitting on the shared channel. By analogy, imagine a room in which many people are speaking at once; two people can still understand each other if they share a common language. Similarly, stations can communicate simultaneously over the network, each pair using its own code.
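The "code language" idea can be made concrete with orthogonal chip sequences. This sketch uses 4-chip Walsh codes for an assumed four-station example; correlating the channel signal with a station's code recovers only that station's bit:

```python
# Four orthogonal chip sequences (Walsh codes) -- an assumed 4-station example.
codes = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}

# Stations A and B transmit bits 1 and 0 (0 is encoded as -1); C and D are silent.
channel = [ca * (+1) + cb * (-1) for ca, cb in zip(codes["A"], codes["B"])]

def decode(code, signal):
    """Inner product with a station's code isolates that station's bit."""
    return sum(c * s for c, s in zip(code, signal)) / len(code)

print(decode(codes["A"], channel))  # 1.0  -> A sent bit 1
print(decode(codes["B"], channel))  # -1.0 -> B sent bit 0
print(decode(codes["C"], channel))  # 0.0  -> C sent nothing
```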
LAN Switching

LAN stands for local area network. A LAN is a computer network that covers a relatively small area, such as a building or a campus up to a few kilometers in size. LANs are generally used to connect personal computers and workstations in company offices to share common resources, such as printers, and to exchange information.

LAN switching is a technology that increases the efficiency of local area networks and addresses their bandwidth problems. Examples of LAN switching technologies are as follows:
Wired LAN: Ethernet, hub, switch
Wireless LAN: Wi-Fi

Advantages of LAN Switching:

• It increases network scalability, meaning the network can expand as demand grows.
• Each network user can experience improved bandwidth performance.
• A LAN is easy to set up compared with other switching techniques.
Disadvantages of LAN Switching:

• The cost of setting up a LAN is quite high.
• Privacy violation is another disadvantage, as a LAN administrator can inspect the personal files of every user on the network.
• Since each administrator has the power to inspect other users' data, security is a major issue.
• LANs face many problems, mainly related to hardware faults and system failures, so maintenance cost is significant.
• Since all the computers are connected to the network, a virus infecting one computer may spread to all the computers on that network.
Applications of LAN Switching:

• A LAN can be used to connect printers, desktops, file servers, and storage arrays.
• LAN switches direct traffic between endpoints in a local area network.
Error Control

Detection and correction of errors:
• Lost frames
• Damaged frames

Automatic repeat request (ARQ):
• Error detection
• Positive acknowledgment
• Retransmission after timeout
• Negative acknowledgement and retransmission

Automatic Repeat Request (ARQ) variants:
• Stop and wait
• Go back N
• Selective reject (selective retransmission)

Sliding Window Protocols:
• Go Back N sliding window protocol
• Selective Repeat sliding window protocol
Flow Control

• Ensuring the sending entity does not overwhelm the receiving entity
• Preventing buffer overflow
• Transmission time: time taken to emit all bits into the medium
• Propagation time: time for a bit to traverse the link
Model of Frame Transmission
Stop and Wait

• Source transmits a frame
• Destination receives the frame and replies with an acknowledgement (ACK)
• Source waits for the ACK before sending the next frame
• Destination can stop the flow by not sending an ACK
• Works well for a few large frames
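The cost of waiting for each ACK can be quantified with the standard stop-and-wait utilization formula U = 1 / (1 + 2a), where a is the ratio of propagation time to transmission time (the formula is a textbook result; the link numbers below are assumed):

```python
frame_bits = 1000
rate_bps = 1_000_000              # 1 Mbps link (assumed)
t_trans = frame_bits / rate_bps   # 1 ms to emit the whole frame
t_prop = 0.5e-3                   # 0.5 ms one-way propagation delay (assumed)

a = t_prop / t_trans              # a = 0.5
U = 1 / (1 + 2 * a)               # sender busy t_trans out of t_trans + 2*t_prop
print(U)                          # 0.5 -> the link sits idle half the time
```

As the propagation delay grows relative to the frame time (long or fast links), a grows and utilization collapses, which motivates the sliding-window protocols below.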
Fragmentation

A large block of data may be split into small frames:
• Limited buffer size
• Errors are detected sooner (when the whole frame is received)
• On error, only the smaller frame needs retransmission
• Prevents one station occupying the medium for long periods
With many small frames, stop and wait becomes inadequate.
Stop and Wait Link Utilization
Sliding Window Flow Control

• Allows multiple frames to be in transit
• Receiver has a buffer W frames long
• Transmitter can send up to W frames without an ACK
• Each frame is numbered
• ACK includes the number of the next frame expected
• Sequence numbers are bounded by the size of the field (k bits)
• Frames are numbered modulo 2^k
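The wraparound of sequence numbers is easy to see with a small k (a 3-bit field is an assumed example):

```python
k = 3                       # 3-bit sequence-number field
modulus = 2 ** k            # numbers run 0..7, then wrap
seq = [n % modulus for n in range(10)]
print(seq)                  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]
```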
Sliding Window Diagram
Example Sliding Window
Sliding Window Enhancements

• Receiver can acknowledge frames without permitting further transmission (Receive Not Ready)
• Must send a normal acknowledgement to resume
• If the link is duplex, use piggybacking:
• If there is no data to send, use an acknowledgement frame
• If there is data but no new acknowledgement to send, resend the last acknowledgement number, or use an ACK-valid flag (as in TCP)
Stop and Wait - Diagram
Stop and Wait - Pros and Cons

Simple
Inefficient
Go Back N

• Based on the sliding window
• If no error, ACK as usual with the next frame expected
• Use the window to control the number of outstanding frames
• If an error occurs, reply with a rejection:
• Receiver discards that frame and all subsequent frames until the erroneous frame is received correctly
• Transmitter must go back and retransmit that frame and all subsequent frames
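The discard-and-go-back behavior can be sketched with a toy receiver (a simplified model; function and variable names are ours, and timers/window bookkeeping are omitted):

```python
def go_back_n_receive(frames, expected=0):
    """Toy Go-Back-N receiver: accept only in-order frames, discard the rest,
    and reply with a cumulative ACK (the next sequence number expected)."""
    delivered, acks = [], []
    for seq, data in frames:
        if seq == expected:
            delivered.append(data)
            expected += 1
        acks.append(expected)   # repeated ACK signals the sender to go back
    return delivered, acks

# Frame 1 is lost in transit; frames 2 and 3 arrive but are discarded,
# so the transmitter goes back and resends frames 1, 2, 3.
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b"), (2, "c"), (3, "d")]
delivered, acks = go_back_n_receive(arrivals)
print(delivered)  # ['a', 'b', 'c', 'd']
print(acks)       # [1, 1, 1, 2, 3, 4]
```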
Go Back N - Diagram
Selective Repeat - Diagram
Link Virtualization: A Network as a Link Layer

Introduction to Link Virtualization

• Definition: Link virtualization is a technique that abstracts the physical link layer, allowing multiple logical networks (virtual networks) to operate over a shared physical infrastructure.

• Purpose: The goal of link virtualization is to enhance the efficiency, scalability, and flexibility of network resources by decoupling the logical view of the network from the physical resources.
Key Components of Link Virtualization

• Virtual Links: Logical connections that exist between virtual nodes (virtual machines, containers, etc.). Virtual links are mapped to the physical links of the underlying infrastructure.

• Virtual Nodes: Logical entities in a virtual network. These could be virtual machines, containers, or even applications that interact over virtual links.

• Physical Infrastructure: The actual hardware and physical links that support the virtual network, including switches, routers, and physical cabling.
Advantages of Link Virtualization

• Resource Efficiency: Multiple virtual networks can share the same physical infrastructure, leading to better utilization of resources.

• Scalability: Virtual networks can be easily scaled by adjusting the logical connections, without the need for changes in the physical infrastructure.

• Isolation: Link virtualization allows for the creation of isolated virtual networks on the same physical infrastructure, which enhances security and ensures that issues in one virtual network do not affect others.

• Flexibility: It enables dynamic reconfiguration of networks to meet changing demands without the need for physical intervention.
Applications of Link Virtualization

• Cloud Computing: Link virtualization is crucial in cloud environments where multiple tenants share the same physical infrastructure. It allows for the creation of isolated virtual networks for different users.

• Data Centers: In modern data centers, link virtualization is used to manage large-scale network environments efficiently, enabling rapid deployment of virtual networks and services.

• Telecommunication Networks: Telecom operators use link virtualization to offer virtual private networks (VPNs) to customers, providing secure and isolated network connections over shared infrastructure.
Introduction to Data Center Networking

• Data Center Networking: Refers to the architecture, design, and management of the communication infrastructure within a data center. It involves the use of network devices such as switches, routers, firewalls, and load balancers to ensure that data can be transferred efficiently within the data center and to external networks.

• Importance: Data centers are the backbone of modern web services, hosting everything from websites to cloud applications. Effective networking in data centers is crucial for ensuring high availability, low latency, and secure access to data and services.
A Day in the Life of a Web Page Request

Overview: Understanding how a web page request is processed within a data center provides insight into the complexity of data center networking. This retrospective view illustrates how different network components interact to deliver content to end users.
The Journey of a Web Page Request

1. User Request Initiation:
• The process begins when a user enters a URL in their browser and hits enter.
• This triggers a DNS (Domain Name System) lookup to translate the human-readable domain name into the IP address of the web server.

2. Arrival at the Data Center:
• Once the IP address is resolved, the request is routed over the internet and arrives at the data center's edge router.
• The edge router forwards the request to a load balancer, which distributes incoming requests across multiple servers.
3. Load Balancer and Server Selection:
• The load balancer determines which server within the data center should handle the request based on factors such as server load, latency, and availability.
• The chosen server is typically part of a server farm or cluster designed to handle large volumes of requests.

4. Request Processing:
• The selected server processes the request by retrieving the required web page from storage (e.g., a database or a file system).
• This step may involve querying a database, fetching static content, or processing application logic if the request is dynamic.
5. Data Retrieval and Networking:
• The server may need to communicate with other servers or storage systems within the data center to gather all the necessary components of the web page.
• Data center networks typically employ high-speed, low-latency interconnects such as leaf-spine architectures to ensure fast data retrieval.

6. Content Delivery:
• Once the server has compiled the web page, the content is sent back to the load balancer.
• The load balancer then forwards the response through the edge router, back to the user's device via the internet.
