Bit stuffing
(a) The original data.
(b) The data as they appear on the line.
(c) The data as they are stored in receiver’s memory after de-stuffing.
• Each frame begins and ends with a special bit pattern 01111110.
• Whenever the sender's data link layer encounters five consecutive 1s in the data, it stuffs a 0 bit into the outgoing stream.
• Whenever the receiver sees five consecutive 1s followed by a 0, it de-stuffs the 0 bit (see the sketch below).
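A minimal Python sketch of these two rules (assuming the flag pattern has already been stripped, so only the five-consecutive-1s rule is handled):

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:        # five consecutive 1s seen
            out.append("0") # stuff a 0 so the flag pattern cannot appear in the data
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Receiver side: drop the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1          # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "011011111111111111110010"
assert bit_destuff(bit_stuff(data)) == data
```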
Flow Control Techniques
1. Stop-and-wait Flow Control
2. Sliding Window Flow Control
Stop-and-Wait Flow Control
• After transmitting a frame, the sender waits for an acknowledgement from the receiver.
• The acknowledgement indicates the receiver's willingness to accept another frame.
• The sender must wait until it receives the acknowledgement before sending the next frame.
• The receiver can thus stop the flow of data simply by withholding acknowledgements.
Since the remainder is zero, the received bit string is accepted. To get the data bit sequence, we remove the last n − 1 bits of the received bit string. Hence, the data bit sequence is 11001100.
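The divisor used in the original example is not shown here, so the sketch below assumes a hypothetical 4-bit generator 1001 purely to illustrate the receiver-side procedure: divide, accept if the remainder is zero, then strip the last n − 1 bits to recover the data.

```python
def crc_remainder(bitstring: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = list(bitstring)
    glen = len(generator)
    for i in range(len(bits) - glen + 1):
        if bits[i] == "1":
            for j in range(glen):
                bits[i + j] = "0" if bits[i + j] == generator[j] else "1"
    return "".join(bits[-(glen - 1):])

def crc_accept(received: str, generator: str) -> bool:
    """The receiver accepts the bit string only if the remainder is all zeros."""
    return set(crc_remainder(received, generator)) == {"0"}

generator = "1001"                         # hypothetical divisor, so n - 1 = 3 CRC bits
data = "11001100"
codeword = data + crc_remainder(data + "0" * (len(generator) - 1), generator)
assert crc_accept(codeword, generator)     # remainder zero -> accept
print(codeword[:-(len(generator) - 1)])    # strip the last n - 1 bits -> 11001100
```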
Error Correction
• Hamming code is an error-correcting code.
• Hamming codes are linear block codes.
• Parity bits are used here; they are inserted in between the data bits.
• The most commonly used is a 7-bit Hamming code.
• Structure of a 7-bit Hamming code (D → Data bits, P → Parity bits): P1 P2 D1 P4 D2 D3 D4
• Parity bits are in positions 2^m, where m = 0, 1, 2, … (i.e., positions 1, 2, 4, …).
• Computing the values of the parity bits:
• To find P1, select the positions whose first bit (from the LSB) is 1, i.e., positions 1, 3, 5, 7.
• To find P2, select the positions whose second bit (from the LSB) is 1, i.e., positions 2, 3, 6, 7.
• To find P4, select the positions whose third bit (from the LSB) is 1, i.e., positions 4, 5, 6, 7.
• Parity can be even or odd.
For even parity, the number of 1s in the selected positions (excluding the parity bit) has to be even; if it is, the parity bit becomes 0, otherwise it becomes 1.
For odd parity, the number of 1s in the selected positions (excluding the parity bit) has to be odd; if it is, the parity bit becomes 0, otherwise it becomes 1.
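A compact sketch of the 7-bit Hamming code just described, using even parity and the layout P1 P2 D1 P4 D2 D3 D4 implied by the 2^m rule; the data word 1011 and the flipped bit are only illustrative values.

```python
def hamming74_encode(data: str) -> str:
    """Encode 4 data bits into a 7-bit codeword (even parity).
    Positions (1-based): 1=P1, 2=P2, 3=D1, 4=P4, 5=D2, 6=D3, 7=D4."""
    d1, d2, d3, d4 = (int(b) for b in data)
    p1 = (d1 + d2 + d4) % 2   # covers positions 1, 3, 5, 7
    p2 = (d1 + d3 + d4) % 2   # covers positions 2, 3, 6, 7
    p4 = (d2 + d3 + d4) % 2   # covers positions 4, 5, 6, 7
    return f"{p1}{p2}{d1}{p4}{d2}{d3}{d4}"

def hamming74_correct(code: str) -> str:
    """Recompute the three parity checks; their weighted sum points at the bad bit."""
    b = [int(x) for x in code]
    s1 = (b[0] + b[2] + b[4] + b[6]) % 2   # check over positions 1, 3, 5, 7
    s2 = (b[1] + b[2] + b[5] + b[6]) % 2   # check over positions 2, 3, 6, 7
    s4 = (b[3] + b[4] + b[5] + b[6]) % 2   # check over positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s4             # 0 means no single-bit error
    if pos:
        b[pos - 1] ^= 1                    # flip the erroneous bit
    return "".join(map(str, b))

code = hamming74_encode("1011")            # -> 0110011
corrupted = code[:2] + ("0" if code[2] == "1" else "1") + code[3:]
assert hamming74_correct(corrupted) == code
```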
ERROR CONTROL
• When a data-frame is transmitted, there is a probability that it is lost in transit or received corrupted.
• In both cases, the receiver does not receive the correct data-frame and the sender does not know anything about the loss.
• In such cases, both sender and receiver are equipped with protocols that help them detect transit errors such as loss of a data-frame.
• Hence, either the sender retransmits the data-frame or the receiver requests the sender to resend the previous data-frame.
• Requirements for error control mechanism:
• Error detection: The sender and receiver, either both or any, must ascertain that there is some error in the transit.
• Positive ACK: When the receiver receives a correct frame, it should acknowledge it.
• Negative ACK: When the receiver receives a damaged frame or a duplicate frame, it sends a NACK back to the sender and the
sender must retransmit the correct frame.
• Retransmission: The sender maintains a clock and sets a timeout period. If an acknowledgement of a previously transmitted data-frame does not arrive before the timeout, the sender retransmits the frame, assuming that the frame or its acknowledgement was lost in transit.
Stop-and-Wait ARQ
• The sender maintains a timeout counter.
• When a frame is sent, the sender starts the timeout
counter.
• If acknowledgement of frame comes in time, the
sender transmits the next frame in queue.
• If acknowledgement does not come in time, the
sender assumes that either the frame or its
acknowledgement is lost in transit. Sender
retransmits the frame and starts the timeout
counter.
• If a negative acknowledgement is received, the sender retransmits the frame (a toy model of this behavior follows).
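A toy model of the behavior above; the loss probability and retry limit are arbitrary illustrative values, and loss of a frame or of its ACK is simulated with a single random draw.

```python
import random

def stop_and_wait_send(frames, loss_prob=0.3, max_attempts=10):
    """Send one frame at a time; retransmit on timeout until the ACK arrives."""
    for seq, _frame in enumerate(frames):
        for attempt in range(1, max_attempts + 1):
            print(f"send frame {seq} (attempt {attempt})")
            if random.random() >= loss_prob:      # frame and ACK both got through
                print(f"ACK {seq} received, sending next frame")
                break
            print(f"timeout for frame {seq}, retransmitting")
        else:
            raise RuntimeError(f"frame {seq} not acknowledged after {max_attempts} attempts")

stop_and_wait_send(["F0", "F1", "F2"])
```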
Go-Back-N ARQ
• In Go-Back-N ARQ method, both sender and receiver
maintain a window.
• The sending-window size enables the sender to send
multiple frames without receiving the
acknowledgement of the previous ones.
• The receiving-window enables the receiver to receive
multiple frames and acknowledge them. The receiver
keeps track of incoming frame’s sequence number.
• When the sender sends all the frames in window, it
checks up to what sequence number it has received
positive acknowledgement.
• If all frames are positively acknowledged, the sender
sends next set of frames.
• If the sender receives a NACK for a frame, or does not receive an ACK for it, it goes back and retransmits that frame together with all frames sent after it, i.e., every frame for which it has not yet received a positive ACK (see the sketch below).
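A toy sender-side sketch of this behavior; the `acks` dictionary stands in for the per-frame acknowledgements a real receiver would send and is purely illustrative.

```python
def go_back_n_send(num_frames: int, window: int, acks: dict) -> list:
    """Slide the window on cumulative ACKs; on a missing ACK, resend that frame
    and every frame sent after it."""
    base, next_seq, log = 0, 0, []
    while base < num_frames:
        while next_seq < base + window and next_seq < num_frames:
            log.append(f"send {next_seq}")            # fill the sending window
            next_seq += 1
        while base < next_seq and acks.get(base, False):
            base += 1                                  # cumulative acknowledgement
        if base < next_seq:                            # frame `base` was not ACKed
            log.append(f"timeout: go back to {base}")
            acks[base] = True                          # assume the retransmission succeeds
            next_seq = base                            # resend it and everything after it
    return log

print(go_back_n_send(6, window=3, acks={0: True, 1: True, 2: False, 3: True, 4: True, 5: True}))
```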
Selective Repeat ARQ
• In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers out-of-order frames in memory and sends a NACK only for the frame that is missing or damaged.
• The sender, in this case, retransmits only the frame for which the NACK is received (a receiver-side sketch follows).
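A receiver-side sketch of this idea: out-of-order frames are buffered, a NACK is sent only for the missing sequence number, and in-order delivery resumes once the gap is filled. The arrival order used here is just an illustrative example.

```python
def selective_repeat_receive(arrivals):
    """Buffer out-of-order frames, NACK only the missing one, deliver in order."""
    buffer, expected, delivered, feedback = {}, 0, [], []
    for seq in arrivals:
        buffer[seq] = f"F{seq}"
        feedback.append(f"ACK {seq}" if seq == expected else f"NACK {expected}")
        while expected in buffer:            # deliver any in-order run we now have
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered, feedback

print(selective_repeat_receive([0, 2, 3, 1, 4]))
```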
High-Level Data Link Control (HDLC)
• HDLC (High-Level Data Link Control) is a bit-oriented protocol.
• Used for communication over the point-to-point and multipoint
links.
• Implements the mechanism of ARQ(Automatic Repeat Request).
• Full-duplex communication is possible.
• Widely used protocol and offers reliability, efficiency, and a high level
of flexibility.
Three types of stations in HDLC
• Primary Station: This station mainly looks after data management. In
the case of the communication between the primary and secondary
station, it is the responsibility of the primary station to connect and
disconnect the data link. The frames issued by the primary station are
commonly known as commands.
• Secondary Station: The secondary station operates under the control
of the primary station. The frames issued by the secondary stations
are commonly known as responses.
• Combined Station: The combined station acts as both Primary
stations as well as Secondary stations. The combined station issues
both commands as well as responses.
Transfer Modes in HDLC
• The HDLC protocol offers the following transfer modes, which can be used in different configurations:
1. Normal Response Mode (NRM)
2. Asynchronous Response Mode (ARM)
3. Asynchronous Balanced Mode (ABM)
Normal Response Mode(NRM)
• In this mode, the configuration of
the station is unbalanced.
• There is one primary station and
multiple secondary stations where
the primary station can send the
commands and the secondary
station can only respond.
• This mode is used for both point-
to-point as well as multiple-point
links.
Asynchronous Response Mode (ARM)
• Asynchronous Response Mode (ARM) is an unbalanced configuration
in which secondary terminals may transmit without permission
from the primary terminal.
• However, there is still a distinguished primary terminal which retains
responsibility for line initialization, error recovery, and logical
disconnect.
Asynchronous Balanced Mode (ABM)
• In this mode, the configuration of the stations is balanced.
• The link is point-to-point, and each station can function both as a primary and as a secondary.
• Asynchronous Balanced Mode (ABM) is the commonly used mode today.
HDLC Frames
• There are three types of frames defined in the HDLC:
• Information Frames(I-frames): These frames are used to transport the user
data and the control information that is related to the user data. If the first
bit of the control field is 0 then it is identified as I-frame.
• Supervisory Frames (S-frames): These frames are used only to transport control information. If the first two bits of the control field are 1 and 0, then the frame is identified as an S-frame.
• Unnumbered Frames (U-frames): These frames are mainly reserved for system management and are used for exchanging control information between the communicating devices. If the first two bits of the control field are 1 and 1, then the frame is identified as a U-frame.
HDLC Frame Structure
1. Flag Field
This field of the HDLC frame is an 8-bit sequence with the bit pattern 01111110. It identifies the beginning and end of the frame and serves as a synchronization pattern for the receiver.
2. Address Field
It is the second field of the HDLC frame and it contains the address of the secondary station. This field can be one byte or several bytes long, depending on the needs of the network. If the frame is sent by the primary station, this field contains the address(es) of the secondary station(s); if the frame is sent by a secondary station, it contains the address of the primary station.
3. Control Field
This is the third field of the HDLC frame and it is a 1 or 2-byte segment of
the frame and is mainly used for flow control and error control. Bits
interpretation in this field mainly depends upon the type of the frame.
4. Information Field
This field of the HDLC frame contains the user's data from the network
layer or the management information. The length of this field varies from
one network to another.
5. FCS Field
FCS stands for Frame Check Sequence; it is the error-detection field of the HDLC protocol and contains a 16-bit CRC.
Point-to-Point Protocol
• PPP protocol is a byte-oriented protocol.
• The PPP protocol is mainly used to establish a direct connection between
two nodes.
• The PPP protocol mainly provides connections over multiple links.
• This protocol defines how two devices can authenticate with each other.
• PPP protocol also defines the format of the frames that are to be
exchanged between the devices.
• This protocol also defines how the data of the network layer are
encapsulated in the data link frame.
• The PPP protocol defines how the two devices can negotiate the
establishment of the link and then can exchange the data.
PPP Frame Format
1. Flag
The PPP frame mainly starts and ends with a 1-byte flag field that has the bit pattern: 01111110. It is important
to note that this pattern is the same as the flag pattern used in HDLC. But there is a difference too and that is PPP
is a byte-oriented protocol whereas the HDLC is a bit-oriented protocol.
2. Address
The value of this field in the PPP protocol is constant and set to 11111111, which is a broadcast address. The two parties can negotiate to omit this byte.
3. Control
The value of this field is also a constant: 11000000. PPP does not provide flow control, and error control is limited to error detection. The two parties can negotiate to omit this byte as well.
4. Protocol
This field defines what is being carried in the data field. It can either be user information or other information. By
default, this field is 2 bytes long.
5. Payload field
This field carries the data from the network layer. The maximum length of this field is 1500 bytes. This can also be
negotiated between the endpoints of communication.
6. FCS
It is simply a 2-byte or 4-byte standard CRC(Cyclic redundancy check).
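A hedged sketch that packs the fields above into a byte string. The protocol value 0x0021 and the CRC-16 parameters are assumptions made only for illustration (the FCS is negotiable in real PPP), and the control octet is written as 0x03, i.e. the 11000000 pattern above with the bit order reversed.

```python
def build_ppp_frame(payload: bytes, protocol: int = 0x0021) -> bytes:
    """Flag + Address + Control + Protocol + Payload + FCS + Flag."""
    def crc16(data: bytes, poly: int = 0x8408) -> int:
        # illustrative reflected CRC-16; real PPP negotiates its FCS options
        crc = 0xFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        return crc ^ 0xFFFF

    flag = b"\x7e"                                     # 01111110
    address, control = b"\xff", b"\x03"                # broadcast address, control octet
    body = address + control + protocol.to_bytes(2, "big") + payload
    fcs = crc16(body).to_bytes(2, "little")            # 2-byte FCS over the body
    return flag + body + fcs + flag

print(build_ppp_frame(b"hello").hex())
```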
Transition Phases in the PPP Protocol
Dead
In this phase, the link is not being used. No active carrier is there at the physical layer and the line is simply quiet.
Establish
If one of the nodes starts the communication, the connection goes into the Establish phase. In this phase, options are negotiated between the two parties. If the negotiation succeeds, the system goes into the Authenticate phase (if authentication is required) or otherwise directly into the Network phase. Several packets are exchanged here.
Authenticate
This is an optional phase. During the establishment phase, the two nodes may decide to use authentication rather than skip this phase. If they proceed with authentication, they exchange several authentication packets. If the result is successful, the connection goes into the Network phase; otherwise it goes into the Terminate phase.
Network
In this phase, the negotiation of the network-layer protocols takes place. PPP specifies that the two nodes establish a network-layer agreement before network-layer data can be exchanged, because PPP supports multiple protocols at the network layer. If a node runs multiple network-layer protocols simultaneously, the receiving node needs to know which protocol will receive the data.
Open
In this phase the transfer of the data takes place. Whenever a connection reaches this phase, then the exchange of
data packets can be started. The connection remains in this phase until one of the endpoints in the communication
terminates the connection.
Terminate
In this phase, the connection is terminated. Several packets are exchanged between the two ends for housekeeping, and then the link is closed.
Components of PPP/ PPP stack
Basically, PPP is a layered protocol. There are three components of the PPP protocol and
these are as follows:
•Link Control Protocol
•Authentication Protocol
•Network Control Protocol
Comparison of HDLC and PPP (continued):
• Dynamic addressing: HDLC does not offer dynamic addressing, whereas PPP uses dynamic addressing.
• Compatibility with other protocols: HDLC cannot be operated with non-Cisco devices, whereas PPP is interoperable with non-Cisco devices as well.
Medium Access Control
Functions of MAC Layer
• It provides an abstraction of the physical layer to the LLC and upper layers of the
OSI network.
• It is responsible for encapsulating frames so that they are suitable for
transmission via the physical medium.
• It resolves the addressing of the source station as well as the destination station,
or groups of destination stations.
• It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
• It also performs collision resolution and initiates retransmission in case of
collisions.
• It generates the frame check sequences and thus contributes to protection
against transmission errors.
Channel Allocation Problem
• When there is more than one user who desires access to a shared
network channel, an algorithm is deployed for channel allocation
among the competing users.
• The network channel may be a single cable or optical fiber connecting
multiple nodes, or a portion of the wireless spectrum.
• Channel allocation algorithms allocate channels and bandwidth to the users, who may be base stations, access points, or terminal equipment.
Channel Allocation Schemes
• Channel Allocation may be done using two schemes −
1. Static Channel Allocation
2. Dynamic Channel Allocation
Static Channel Allocation
• In a static channel allocation scheme, a fixed portion of the frequency
channel is allotted to each user.
• For N competing users, the bandwidth is divided into N channels using
frequency division multiplexing (FDM), and each portion is assigned to one
user.
• This scheme is also referred to as fixed channel allocation or fixed channel
assignment.
• In this allocation scheme, there is no interference between the users since
each user is assigned a fixed channel.
• However, it is not suitable in the case of a large number of users with
variable bandwidth requirements.
Dynamic Channel Allocation
• In a dynamic channel allocation scheme, frequency bands are not permanently assigned to the
users.
• Instead, channels are allotted to users dynamically as needed, from a central pool.
• The allocation is done considering a number of parameters so that transmission interference is
minimized.
• This allocation scheme optimizes bandwidth usage and results in faster transmissions.
• Dynamic channel allocation is further divided into centralized and distributed allocation.
• Possible assumptions include:
1. Station Model: Assumes that each of N stations independently produces frames. Once the
frame is generated at the station, the station does nothing until the frame has been
successfully transmitted.
2. Single Channel Assumption: In this allocation, all stations are equivalent and can send and
receive on that channel.
3. Collision Assumption: If two frames overlap time-wise, then that’s a collision. Any collision is
an error, and both frames must be retransmitted. Collisions are the only possible error.
4. Time can be either continuous or divided into discrete slots.
5. Stations can sense whether the channel is busy before trying to use it.
ALOHA
• ALOHA is a multiple-access protocol for the transmission of data via a
shared network channel.
• It operates in the medium access control sublayer (MAC sublayer).
• In ALOHA, each node or station transmits a frame without trying to
detect whether the transmission channel is idle or busy.
• If the channel is idle, then the frames will be successfully transmitted.
• If two frames attempt to occupy the channel simultaneously, the
collision of frames will occur and the frames will be discarded.
• These stations may choose to retransmit the corrupted frames
repeatedly until successful transmission occurs.
Pure ALOHA
• In pure ALOHA, the time of transmission is
continuous.
• Time is not slotted and stations can transmit
whenever they want.
• There is a high possibility of collision and the
colliding frames will be destroyed.
• If frames collide and get destroyed, then the
sender waits for a random amount of time and
resends the frame.
Vulnerable time for Pure ALOHA
• The vulnerable time is in which there is a possibility of
collision.
• We assume that the stations send fixed-length frames
with each frame taking Tfr Sec to send.
• The following figure shows the vulnerable time for
station A.
• Station A sends a frame at time t.
• Now imagine station B has already sent a frame between
(t - Tfr) and t. This leads to a collision between the frames
from station A and station B. The end of B's frame
collides with the beginning of A's frame.
• On the other hand, suppose that station C sends a frame between t and (t + Tfr). Here, there is a collision between the frames from station A and station C: the beginning of C's frame collides with the end of A's frame.
• A frame is therefore vulnerable to collision during the interval from (t − Tfr) to (t + Tfr), so the vulnerable time for pure ALOHA is 2 × Tfr.
Slotted ALOHA
• Slotted ALOHA reduces the number of
collisions and doubles the capacity of pure
ALOHA.
• The shared channel is divided into a number
of discrete time intervals called slots.
• A station can transmit only at the beginning
of each slot.
• However, there can still be collisions if more
than one station tries to transmit at the
beginning of the same time slot.
Vulnerable time for Slotted ALOHA
Since a station may start transmitting only at the beginning of a slot, two frames can collide only if they are sent in the same slot; the vulnerable time is therefore reduced to one frame time, Tfr.
Efficiency of ALOHA
For an offered load of G frames per frame time, the throughput is S = G × e^(−2G) for pure ALOHA (maximum S ≈ 0.184 at G = 0.5) and S = G × e^(−G) for slotted ALOHA (maximum S ≈ 0.368 at G = 1).
Performance of Pure and Slotted ALOHA
Pure ALOHA vs Slotted ALOHA
Q2. A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What
is the throughput if the system (all stations together) produces:
a. 1000 frames per second
b. 500 frames per second
c. 250 frames per second
The frame transmission time is 200 bits / 200 kbps = 1 ms.
c. If the system creates 250 frames per second, this is 1/4 frame per millisecond, so the load is G = 1/4. For pure ALOHA, S = G × e^(−2G) = 0.152 (15.2 percent). The throughput is 250 × 0.152 ≈ 38 frames: only 38 frames out of 250 will probably survive.
For comparison, if the same channel used slotted ALOHA, S = G × e^(−G) = 0.195 (19.5 percent), giving 250 × 0.195 ≈ 49 frames: only 49 frames out of 250 will probably survive.
(Parts a and b follow the same steps with G = 1 and G = 1/2; see the sketch below.)
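A short script that reproduces the example's arithmetic for all three offered loads, using the same S = G × e^(−2G) (or G × e^(−G)) formula and the same "offered rate × S" throughput calculation as above.

```python
import math

def aloha_throughput(frames_per_sec, frame_bits, bandwidth_bps, slotted=False):
    """Return (S, surviving frames per second) for pure or slotted ALOHA."""
    frame_time = frame_bits / bandwidth_bps            # 200 bits / 200 kbps = 1 ms
    G = frames_per_sec * frame_time                    # offered load per frame time
    S = G * math.exp(-G if slotted else -2 * G)        # throughput fraction
    return S, S * frames_per_sec                       # same calculation as the example

for rate in (1000, 500, 250):
    S, survivors = aloha_throughput(rate, frame_bits=200, bandwidth_bps=200_000)
    print(f"pure ALOHA, {rate} frames/s: S = {S:.3f}, about {survivors:.0f} frames survive")
```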
Carrier Sense Multiple Access (CSMA) Protocol
• Carrier Sense: A station can sense the channel to see if anyone is using it. If the channel is being
used, then the station will not attempt to use the channel.
• CSMA works on the principle of "Listen before Talking" or "Sense before Transmit".
• Types:
(a) 1-Persistent CSMA
(b) Non-persistent CSMA
(c) p- Persistent CSMA
(d) CSMA/CD
1-Persistent CSMA
• When a station needs to send data, it first listens to the channel.
• If the channel is busy, the station waits till the channel becomes free.
• When the channel becomes free, a station can transmit a frame.
• A collision occurs when two stations detect an idle channel at the same time and simultaneously send frames.
• If a collision occurs, the station waits a random amount of time and starts all over again.
• It is called 1-persistent because the station transmits with a probability of 1 when it finds the channel idle.
Advantage
• Due to the carrier-sense property, 1-persistent CSMA gives better performance than the ALOHA systems.
Drawbacks
• Propagation delay: It is possible that just after a station begins transmitting, another station becomes ready to send and senses the channel. If the first station's signal has not yet reached the second station, the second station will sense an idle channel and begin sending its own data. This will lead to a collision.
• Assume that station 2 and station 3 are waiting for station 1 to finish its transmission. Immediately after station 1 finishes transmitting, both station 2 and station 3 begin transmitting at the same time, again leading to a collision.
Non-persistent CSMA
• A station senses the channel when it wants to send data.
• If the channel is idle, the station begins sending the data.
• However, if the channel is busy, the station does not continually sense the
channel like 1-persistent CSMA. Instead, it waits a random period of time
and then checks the channel again.
• Disadvantage
This leads to longer delays than 1-persistent CSMA.
• Advantage
This algorithm leads to better channel utilization.
p-persistent CSMA
• It is used for slotted channels.
• When a station becomes ready to send, it senses the channel.
• If the channel is idle, the station transmits in that slot with probability p and defers to the next slot with probability q = 1 − p.
• If it defers, it waits for the next slot; if that slot is also idle, it again transmits with probability p or defers with probability q.
• This process is repeated until either the frame has been transmitted or another station has started transmitting (see the sketch below).
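A minimal sketch of the per-slot decision above; `channel_idle` is a hypothetical callable standing in for carrier sensing, and `max_slots` merely bounds the toy simulation.

```python
import random

def p_persistent_attempt(p: float, channel_idle, max_slots: int = 50) -> bool:
    """In each slot: if the channel is idle, transmit with probability p,
    otherwise defer (probability q = 1 - p) and try again in the next slot."""
    for _ in range(max_slots):
        if not channel_idle():
            continue                    # channel busy: keep sensing
        if random.random() < p:
            return True                 # transmitted in this slot
        # deferred: wait for the next slot and repeat the check
    return False                        # gave up within the simulated horizon

sent = p_persistent_attempt(p=0.3, channel_idle=lambda: random.random() < 0.8)
print("transmitted" if sent else "still deferring")
```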
CSMA/CD (Carrier Sense Multiple Access with
Collison Detection)
• Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a
network protocol for carrier transmission that operates in the Medium
Access Control (MAC) layer.
• It senses or listens to whether the shared channel for transmission is busy
or not, and defers transmissions until the channel is free.
• The collision detection technology detects collisions by sensing
transmissions from other stations.
• On detection of a collision, the station stops transmitting, sends a jam
signal, and then waits for a random time interval before retransmission.
• The maximum permissible attempts to transmit a packet after detecting
collision for a node is 15.
• If, after incrementing, the count exceeds 15, the packet is discarded due to excessive collisions. Otherwise, the node prepares to retransmit the data packet over the channel.
• To do so, it calculates a back-off time and waits for that duration (a back-off sketch follows). Once the wait is over, the node again checks whether the channel is free and, if so, resumes the transmission.
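The back-off computation is not spelled out above; a common choice is truncated binary exponential back-off, sketched here under the assumption of the classic 10 Mbps slot time of 51.2 µs.

```python
import random

def csma_cd_backoff(collision_count: int, slot_time_us: float = 51.2) -> float:
    """After the n-th collision, wait r slot times, r drawn from 0 .. 2**min(n, 10) - 1."""
    k = min(collision_count, 10)
    r = random.randint(0, 2 ** k - 1)
    return r * slot_time_us

MAX_ATTEMPTS = 15                       # beyond this the frame is discarded (see above)
for n in range(1, 4):
    print(f"collision {n}: back off {csma_cd_backoff(n):.1f} microseconds")
```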
CSMA/CD Frame Format
1. Preamble: It is a seven-byte (56-bit) field that provides bit synchronization. It consists of alternating 0s and 1s, and its purpose is to provide an alert and a timing pulse.
2. Start Frame Delimiter (SFD): It is a one-byte field with the unique pattern 10101011. It marks the beginning of the frame.
3. Destination Address (DA): It is six-byte field that contains physical address of packet’s destination.
4. Source Address (SA): It is also a six-byte field and contains the physical address of source or last device to
forward the packet (most recent router to receiver).
5. Length: This two-byte field specifies the length or number of bytes in data field.
6. Data: It can be of 46 to 1500 bytes, depending upon the type of frame and the length of the information
field.
7. Frame Check Sequence (FCS): This four-byte field contains CRC for error detection.
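A small sketch that slices a frame according to the field lengths listed above (the preamble and SFD are assumed to have been stripped already by the hardware); the sample frame is a dummy built only to exercise the parser.

```python
def parse_frame(frame: bytes) -> dict:
    """DA(6) + SA(6) + Length(2) + Data(46..1500) + FCS(4)."""
    da, sa = frame[0:6], frame[6:12]
    length = int.from_bytes(frame[12:14], "big")
    data, fcs = frame[14:-4], frame[-4:]
    return {"destination": da.hex(":"), "source": sa.hex(":"),
            "length": length, "data_bytes": len(data), "fcs": fcs.hex()}

# dummy frame: zeroed addresses, length 46, 46 bytes of padding, 4-byte placeholder FCS
frame = bytes(6) + bytes(6) + (46).to_bytes(2, "big") + bytes(46) + bytes(4)
print(parse_frame(frame))
```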
CSMA/ CA (Carrier Sense Multiple Access with
Collison Avoidance)
• CSMA/CA stands for Carrier Sense Multiple
Access with Collision Avoidance.
• It is a network protocol that tries to avoid collisions rather than allowing them to occur; it does not deal with the recovery of packets after a collision.
• In CSMA/CA, whenever a station wants to send a data frame
to a channel, it checks whether it is in use.
• If the shared channel is busy, the station waits until the
channel enters idle mode.
• Hence, we can say that it reduces the chances of collisions
and makes better use of the medium to send data packets
more efficiently.
LAN vs VLAN
• LAN stands for Local Area Network; VLAN stands for Virtual Local Area Network.
• In a LAN, a network packet is advertised to each and every device; in a VLAN, the packet is delivered only to a specific broadcast domain.