Module 3
The data link layer
The data link layer is responsible for moving frames from one
hop (node) to the next.
❑ Framing.
❑ Physical addressing.
❑ Flow control.
❑ Error control.
❑ Access control.
High-Level Data Link Control (HDLC)
HDLC is characterized by:
- Station types
- Configurations
- Response modes
- Frame formats
Three types of stations:
1) Primary station
- Responsible for the operation of the link.
- Frames issued by the primary station are called commands.
2) Secondary station
- Operates under the control of the primary station.
- Frames issued by a secondary are called responses.
- The primary maintains a separate logical link with each secondary.
3) Combined station
- Combines the features of both primary and secondary.
- May issue both commands and responses.
HDLC Configuration
HDLC Modes
NRM – Normal Response Mode –
Standard primary–secondary relationship.
A secondary device must get permission from the
primary before transmitting.
ARM – Asynchronous Response Mode –
A secondary may initiate transmission without
permission from the primary whenever the channel
is idle.
All transmissions are still made through the primary.
ABM – Asynchronous Balanced Mode –
All stations are equal, and any station can initiate
transmission.
HDLC Frame Types
• I-Frame – Information frames
• S-Frame – Supervisory frames
• U-Frame – Unnumbered frames
HDLC Flag Field
o Flag field. The flag field of an HDLC frame is an 8-bit
sequence with the bit pattern 01111110 that identifies
both the beginning and the end of a frame and serves as
a synchronization pattern for the receiver.
o Control field. The control field is a 1- or 2-byte segment
of the frame used for flow and error control. The
interpretation of bits in this field depends on the frame
type. We discuss this field later and describe its format
for each frame type.
o Information field. The information field contains the
user's data from the network layer or management
information. Its length can vary from one network to
another.
o FCS field. The frame check sequence (FCS) is the
HDLC error detection field. It can contain either a 2- or
4-byte ITU-T CRC.
Bit stuffing and unstuffing
Bit stuffing is the process of adding one extra 0 whenever five consecutive
1s follow a 0 in the data, so that the receiver does not mistake the pattern
01111110 for a flag.
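As a quick illustration, here is a minimal Python sketch of bit stuffing and unstuffing over a string of '0'/'1' characters; the function names and the string representation are chosen only for this example.

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s so the data can
    never contain the flag pattern 01111110."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')     # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits):
    """Receiver removes the 0 that follows any five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                # drop the stuffed 0
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
    return ''.join(out)

print(bit_stuff('0111111'))      # 01111101
print(bit_unstuff('01111101'))   # 0111111
```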
HDLC Address Field
Address field. The second field of an HDLC frame contains the address of the
secondary station. If a primary station created the frame, it contains a "to"
address. If a secondary creates the frame, it contains a "from" address. An
address field can be 1 byte or several bytes long, depending on the needs of
the network. One byte can identify up to 128 stations (1 bit is used for another
purpose). Larger networks require multiple-byte address fields. If the address
field is only 1 byte, the last bit is always a 1. If the address is more than 1 byte,
all bytes but the last one will end with 0; only the last will end with 1. Ending
each intermediate byte with 0 indicates to the receiver that there are more
address bytes to come.
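The ending-bit convention above can be sketched as follows. This is only an illustration of the rule (0 in every byte except the last, 1 in the final byte), not a full HDLC address encoder, and the helper name is made up.

```python
def hdlc_address_bytes(station):
    """Illustrative sketch: 7 address bits per byte; every byte except the
    last ends with 0, and the last byte ends with 1."""
    groups = []
    while True:
        groups.append(station & 0x7F)   # take the low 7 address bits
        station >>= 7
        if station == 0:
            break
    groups.reverse()
    out = [g << 1 for g in groups]      # ending bit 0 = more address bytes follow
    out[-1] |= 1                        # last byte ends with 1
    return bytes(out)

print(hdlc_address_bytes(5).hex())     # '0b'   -> one byte, ends with 1
print(hdlc_address_bytes(300).hex())   # '0459' -> only the last byte ends with 1
```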
HDLC Control Field
Poll/Final
HDLC Information Field
HDLC FCS Field
Use of P/F Field
U-Frame Control Field
Polling Example
Selecting Example
Peer-to-Peer Example
Stop and Wait
Sliding Window
Sender Sliding Window
Receiver Sliding Window
Sliding Window Example
Stop and Wait ARQ
Damaged Frame
Lost Frame
Lost ACK
Sliding Window ARQ: Go-Back-N
Damaged Frame
Lost Frame
Lost ACK
Selective Reject
Multiple-access protocols
RANDOM ACCESS
In random access or contention methods, no station is
superior to another station and none is assigned control
over another. No station permits, or does not permit,
another station to send. At each instance, a station that
has data to send uses a procedure defined by the
protocol to decide whether or not to send.
Frames in a pure ALOHA network
Procedure for pure ALOHA protocol
Vulnerable time for pure ALOHA protocol
The throughput for pure ALOHA is
S = G × e^(−2G).
The maximum throughput is
Smax = 0.184 when G = 1/2.
Frames in a slotted ALOHA network
Slotted ALOHA
Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is no rule that
defines when a station can send. A station may send soon after another station has
started or soon before another station has finished. Slotted ALOHA was invented to
improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to
send only at the beginning of a time slot. Figure 12.6 shows an example of frame
collisions in slotted ALOHA.
The throughput for slotted ALOHA is
S = G × e^(−G).
The maximum throughput is
Smax = 0.368 when G = 1.
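The two throughput formulas can be checked directly; here is a small Python sketch evaluating S at the quoted maxima.

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G), the pure ALOHA throughput given above."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G), the slotted ALOHA throughput given above."""
    return G * math.exp(-G)

# The quoted maxima:
print(round(pure_aloha_throughput(0.5), 3))     # 0.184 at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 at G = 1
```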
Vulnerable time for slotted ALOHA protocol
Carrier Sense Multiple Access (CSMA)
To minimize the chance of collision and, therefore, increase the performance,
the CSMA method was developed. The chance of collision can be reduced if a
station senses the medium before trying to use it. Carrier sense multiple
access (CSMA) requires that each station first listen to the medium (or check
the state of the medium) before sending. In other words, CSMA is based on the
principle "sense before transmit" or "listen before talk.“ CSMA can reduce the
possibility of collision, but it cannot eliminate it. The reason for this is shown in
Figure 12.8, a space and time model of a CSMA network. Stations are
connected to a shared channel (usually a dedicated medium). The possibility of
collision still exists because of propagation delay; when a station
sends a frame, it still takes time (although very short) for the first bit to reach
every station and for every station to sense it. In other words, a station may
sense the medium and find it idle, only because the first bit sent by another
station has not yet been received. At time tI' station B senses the medium and
finds it idle, so it sends a frame. At time t2 (t2> tI)' station C senses the medium
and finds it idle because, at this time, the first bits from station B have not
reached station C. Station C also sends a frame. The two signals collide and
both frames are destroyed.
Space/time model of the collision in CSMA
Vulnerable time in CSMA
Vulnerable Time
The vulnerable time for CSMA is the propagation time Tp. This is the time
needed for a signal to propagate from one end of the medium to the other.
When a station sends a frame, and any other station tries to send a frame
during this time, a collision will result. But if the first bit of the frame reaches the
end of the medium, every station will already have heard the bit and will refrain
from sending. Figure 12.9 shows the worst case. The leftmost station A sends a
frame at time t1, which reaches the rightmost station D at time t1 + Tp. The gray
area shows the vulnerable area in time and space.
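For a rough feel of Tp, a small sketch; the distance and propagation speed below are assumed example values, not taken from the text.

```python
def propagation_time(distance_m, speed_mps=2e8):
    """Tp = distance / propagation speed, i.e. the CSMA vulnerable time.
    The 2 x 10^8 m/s propagation speed is an assumed typical value."""
    return distance_m / speed_mps

# Assumed example: a 2 km medium gives Tp = 10 microseconds.
print(propagation_time(2000))   # 1e-05 seconds
```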
Persistence Methods
What should a station do if the channel is busy? What should a station do if the
channel is idle? Three methods have been devised to answer these questions:
the 1-persistent method, the nonpersistent method, and the p-persistent
method. The figure shows the behavior of the three persistence methods when a
station finds a channel busy.
Behavior of the nonpersistent method -
Nonpersistent: In this scheme, the broadcast channel is not
monitored continuously. The sender senses it at random
time intervals and transmits whenever the carrier is idle. This
decreases the probability of collisions. But it is not efficient
in a low-load situation, where the number of collisions is
small anyway.
Behavior of the 1-persistent method -
1-Persistent: In this scheme, transmission proceeds
immediately if the carrier is idle. However, if the carrier is
busy, the sender continues to sense the carrier until it
becomes idle. The main problem here is that if more than
one transmitter is ready to send, a collision is
guaranteed.
Behavior of the p-persistent method -
p-Persistent: Even if a sender finds the carrier to be idle, it uses a
probabilistic rule to decide whether to transmit or not. Put
simply, "toss a coin to decide." If the carrier is idle, transmission
takes place with probability p and the sender waits with probability
1 − p. This scheme is a good trade-off between the nonpersistent and
1-persistent schemes. For low-load situations p can be high (p = 1
behaves like 1-persistent); for high-load situations p should be lower.
Clearly, the value of p plays an important role in determining the
performance of this protocol. Also, the same p is likely to provide
different performance at different loads.
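A minimal sketch of the p-persistent decision for a single idle slot; the function name and the printed strings are illustrative only, and the channel itself is not modelled.

```python
import random

def p_persistent_decision(p):
    """One idle slot of the p-persistent rule described above: transmit with
    probability p, defer to the next slot with probability 1 - p."""
    return "transmit" if random.random() < p else "defer to next slot"

# With a small p most stations defer, so several waiting stations are
# unlikely to all transmit in the same slot.
print([p_persistent_decision(0.1) for _ in range(5)])
```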
Flow diagram for three persistence methods
Collision of the first bit in CSMA/CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
The CSMA method does not specify the procedure following a collision. Carrier
sense multiple access with collision detection (CSMA/CD) augments the
algorithm to handle the collision. In this method, a station monitors the medium
after it sends a frame to see if the transmission was successful. If so, the station
is finished. If, however, there is a collision, the frame is sent again. To better
understand CSMA/CD, let us look at the first bits transmitted by the two stations
involved in the collision. Although each station continues to send bits in the
frame until it detects the collision, we show what happens as the first bits
collide. In Figure 12.12, stations A and C are involved in the collision.
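A toy sketch of this send, monitor, and retransmit loop follows; the collision probability is a made-up stand-in for a real shared medium, not part of the protocol.

```python
import random

def csma_cd_send(p_collision=0.3, max_attempts=16):
    """Toy model of the behaviour described above: after sensing an idle channel
    the station transmits and monitors the medium; if a collision is detected,
    the frame is aborted and sent again."""
    for attempt in range(1, max_attempts + 1):
        collided = random.random() < p_collision   # collision detected while sending?
        if not collided:
            return attempt                         # frame delivered on this attempt
        # collision: abort this transmission and retransmit
    return None                                    # give up after max_attempts

print("delivered on attempt", csma_cd_send())
```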
Collision and abortion in CSMA/CD
Flow diagram for the CSMA/CD
Energy level during transmission, idleness, or collision
Carrier Sense Multiple Access with
Collision Avoidance (CSMA/CA)
The basic idea behind CSMA/CD is that a station needs to be able to receive
while transmitting to detect a collision. When there is no collision, the station
receives one signal: its own signal. When there is a collision, the station
receives two signals: its own signal and the signal transmitted by a second
station. To distinguish between these two cases, the received signals in these
two cases must be significantly different. In other words, the signal from the
second station needs to add a significant amount of energy to the one created
by the first station.
Timing in CSMA/CA
In CSMA/CA, the IFS can also be used to
define the priority of a station or a frame.
In CSMA/CA, if the station finds the
channel busy, it does not restart the timer of
the contention window;
it stops the timer and restarts it when the
channel becomes idle.
Flow diagram for CSMA/CA
CONTROLLED ACCESS
In controlled access, the stations consult one another to find which
station has the right to send. A station cannot send unless it has been
authorized by other stations. We discuss three popular controlled-
access methods.
Reservation
Polling
Token Passing
Reservation access method
In the reservation method, a station needs to make a reservation before
sending data. Time is divided into intervals. In each interval, a reservation
frame precedes the data frames sent in that interval. If there are N stations in
the system, there are exactly N reservation minislots in the reservation frame.
Each minislot belongs to a station. When a station needs to send a data frame,
it makes a reservation in its own minislot. The stations that have made
reservations can send their data frames after the reservation frame.
The figure shows a situation with five stations and a five-minislot reservation
frame. In the first interval, only stations 1, 3, and 4 have made reservations. In
the second interval, only station 1 has made a reservation.
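A small sketch of the minislot idea, reproducing the two intervals of the figure; the function name and the list representation are illustrative only.

```python
def reservation_frame(requests, n_stations=5):
    """Each of the N stations owns one minislot and sets it to 1 to reserve
    the interval. requests is the set of (1-based) station numbers that
    want to send."""
    return [1 if s in requests else 0 for s in range(1, n_stations + 1)]

# First interval of the figure: stations 1, 3 and 4 have made reservations.
print(reservation_frame({1, 3, 4}))   # [1, 0, 1, 1, 0]
# Second interval: only station 1 has made a reservation.
print(reservation_frame({1}))         # [1, 0, 0, 0, 0]
```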
Select and poll functions in polling access method
Polling
Polling works with topologies in which one device is designated as a primary
station and the other devices are secondary stations. All data exchanges must
be made through the primary device even when the ultimate destination is a
secondary device. The primary device controls the link; the secondary devices
follow its instructions. It is up to the primary device to determine which device is
allowed to use the channel at a given time. The primary device, therefore, is
always the initiator of a session.
Poll
The poll function is used by the primary device to solicit transmissions from the
secondary devices. When the primary is ready to receive data, it must ask (poll) each
device in turn if it has anything to send. When the first secondary is approached, it
responds either with a NAK frame if it has nothing to send or with data (in the form of
a data frame) if it does. If the response is negative (a NAK frame), then the primary
polls the next secondary in the same manner until it finds one with data to send. When
the response is positive (a data frame), the primary reads the frame and returns an
acknowledgment (ACK frame), verifying its receipt.
Select
The select function is used whenever the primary device has something to send.
Remember that the primary controls the link. If the primary is neither sending nor
receiving data, it knows the link is available.
If it has something to send, the primary device sends it. What it does not know,
however, is whether the target device is prepared to receive. So the primary must alert
the secondary to the upcoming transmission and wait for an acknowledgment of the
secondary's ready status. Before sending data, the primary creates and transmits a
select (SEL) frame, one field of which includes the address of the intended secondary.
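A toy sketch of one select/poll cycle as described above; all station names and frames are made up, and the print statements stand in for real frames on the link.

```python
def run_poll_cycle(primary_queue, secondaries):
    """Toy sketch of the select and poll functions described above.
    primary_queue: list of (destination, frame) pairs the primary wants to send.
    secondaries: dict mapping a secondary's name to its pending frame (or None)."""
    # Select: the primary alerts a secondary, waits for its ready status, then sends.
    for dest, frame in primary_queue:
        print(f"SEL -> {dest}, wait for ACK, then send {frame!r}")
    # Poll: the primary asks each secondary in turn whether it has anything to send.
    for station, frame in secondaries.items():
        if frame is None:
            print(f"poll {station}: NAK (nothing to send)")
        else:
            print(f"poll {station}: data {frame!r} received, primary returns ACK")

run_poll_cycle([("B", "config update")], {"A": None, "B": "status report", "C": None})
```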
Logical ring and physical topology in token-passing access method
Token Passing
In the token-passing method, the stations in a network are organized in a logical
ring. In other words, for each station, there is a predecessor and a successor. The
predecessor is the station which is logically before the station in the ring; the
successor is the station which is after the station in the ring. The current station is
the one that is accessing the channel now. The right to this access has been passed
from the predecessor to the current station. The right will be passed to the
successor when the current station has no more data to send.
But how is the right to access the channel passed from one station to another? In
this method, a special packet called a token circulates through the ring. The
possession of the token gives the station the right to access the channel and send
its data. When a station has some data to send, it waits until it receives the token
from its predecessor. It then holds the token and sends its data. When the station
has no more data to send, it releases the token, passing it to the next logical station
in the ring. The station cannot send data until it receives the token again in the next
round. In this process, when a station receives the token and has no data to send, it
just passes the token to the next station.
Token management is needed for this access method. Stations must be limited in
the time they can have possession of the token. The token must be monitored to
ensure it has not been lost or destroyed. For example, if a station that is holding the
token fails, the token will disappear from the network. Another function of token
management is to assign priorities to the stations and to the types of data being
transmitted. Finally, token management is needed to make low-priority stations
release the token to high-priority stations.
A token ring is formed by nodes connected in a ring, as shown in the
diagram below. The principle used in the token ring network is that a
token circulates in the ring, and whichever node grabs the token has the
right to transmit data.
Whenever a station wants to transmit a frame, it inverts a single bit of the
3-byte token, which instantaneously changes it into a normal data packet.
Because there is only one token, there can be at most one transmission at
a time.
Since the token rotates in the ring, it is guaranteed that every node gets
the token within some specified time. So there is an upper bound on the
waiting time to grab the token, and starvation is avoided.
There is also an upper limit of 250 on the number of nodes in the network.
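A minimal sketch of one circulation of the token around the logical ring; the data structures and names are illustrative only, not a real token ring implementation.

```python
from collections import deque

def token_ring_round(stations):
    """Toy model of one token circulation: stations maps a station name to a
    queue of frames, and a station may transmit only while it holds the token."""
    for name, queue in stations.items():        # token visits stations in ring order
        if queue:
            frame = queue.popleft()             # token holder sends one frame
            print(f"{name} holds the token and sends {frame!r}")
        else:
            print(f"{name} has nothing to send and passes the token on")

token_ring_round({"A": deque(["frame-1"]), "B": deque(), "C": deque(["frame-2"])})
```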
Error Detection Code
Single-bit error
In a single-bit error, only 1 bit in the data unit has changed.
Burst error of length 8
A burst error means that 2 or more bits in the data unit have changed.
The structure of encoder and decoder
To detect or correct errors, we need to send extra (redundant) bits with data.
Hamming Code -
D7 D6 D5 P4 D3 P2 P1
Parity bit P1 is associated with data bits D3, D5 and D7.
Parity bit P2 is associated with data bits D3, D6 and D7.
Parity bit P4 is associated with data bits D5, D6 and D7.
Either even parity or odd parity is used for generating the
code word.
Example - A four-bit data word 1010 is to be encoded
using an even-parity Hamming code. What is the binary value
after encoding?
1 0 1 P4 0 P2 P1
Parity bit P1 (D3, D5 and D7): 0 1 1, so P1 = 0
Parity bit P2 (D3, D6 and D7): 0 0 1, so P2 = 1
Parity bit P4 (D5, D6 and D7): 1 0 1, so P4 = 0
Hence the encoded binary value is 1010010
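The worked example can be reproduced with a short even-parity encoder for the D7 D6 D5 P4 D3 P2 P1 layout used above; this is a sketch for this layout only, not a general Hamming implementation.

```python
def hamming74_encode(d7, d6, d5, d3):
    """Even-parity Hamming code: each parity bit makes its group of data
    bits (as listed above) have even parity."""
    p1 = d3 ^ d5 ^ d7   # P1 covers D3, D5, D7
    p2 = d3 ^ d6 ^ d7   # P2 covers D3, D6, D7
    p4 = d5 ^ d6 ^ d7   # P4 covers D5, D6, D7
    return f"{d7}{d6}{d5}{p4}{d3}{p2}{p1}"

# Worked example: data word 1010 (D7=1, D6=0, D5=1, D3=0)
print(hamming74_encode(1, 0, 1, 0))   # 1010010
```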
Cyclic Redundancy Check
CRC encoder and decoder
Sender Side (generation of encoded data from the data and the generator
polynomial, or key):
1. The binary data is first augmented by adding k − 1 zeros to the end of the data
(where the key is k bits long).
2. Use modulo-2 binary division to divide the binary data by the key and store the
remainder of the division.
3. Append the remainder to the end of the data to form the encoded data and send
it.
Receiver Side (check whether errors were introduced in transmission):
Perform modulo-2 division again; if the remainder is 0, there are no errors.
Here we focus only on finding the remainder (the check word) and the code word.
Modulo-2 Division:
The process of modulo-2 binary division is the same as the familiar division process
we use for decimal numbers, except that instead of subtraction we use XOR.
• In each step, a copy of the divisor (the key) is XORed with the leftmost k bits of the
dividend (the augmented data).
• The result of the XOR operation (the remainder) is k − 1 bits, which is used for the
next step after 1 extra bit is pulled down to make it k bits long.
• When there are no bits left to pull down, we have the result: the (k − 1)-bit
remainder, which is appended to the data at the sender side.
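A minimal sketch of the sender and receiver steps above using modulo-2 (XOR) division over bit strings; the dataword 1001 and the key 1011 are assumed example values, not taken from the text.

```python
def mod2_div(dividend, divisor):
    """Modulo-2 (XOR) division over bit strings; returns the remainder,
    which is len(divisor) - 1 bits long."""
    k = len(divisor)
    rem = list(dividend[:k])
    for i in range(k, len(dividend) + 1):
        if rem[0] == '1':                      # leading 1: XOR with the divisor
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        rem.pop(0)                             # drop the leading bit
        if i < len(dividend):
            rem.append(dividend[i])            # pull the next bit down
    return ''.join(rem)

def crc_encode(data, key):
    """Sender: append k-1 zeros, divide, append the remainder to the data."""
    remainder = mod2_div(data + '0' * (len(key) - 1), key)
    return data + remainder

def crc_check(codeword, key):
    """Receiver: a remainder of all 0s means no error was detected."""
    return set(mod2_div(codeword, key)) == {'0'}

codeword = crc_encode('1001', '1011')
print(codeword)                      # 1001110
print(crc_check(codeword, '1011'))   # True
```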
In the encoder, the dataword has k bits (4 here); the codeword has n bits (7
here). The size of the dataword is augmented by adding n − k (3 here) 0s to the
right-hand side of the word. The n-bit result is fed into the generator. The
generator uses a divisor of size n − k + 1 (4 here), predefined and agreed upon.
The generator divides the augmented dataword by the divisor (modulo-2
division). The quotient of the division is discarded; the remainder (r2 r1 r0) is
appended to the dataword to create the codeword.
The decoder receives the possibly corrupted codeword. A copy of all n bits is
fed to the checker, which is a replica of the generator. The remainder produced
by the checker is a syndrome of n − k (3 here) bits, which is fed to the decision
logic analyzer. The analyzer has a simple function. If the syndrome bits are all
0s, the 4 leftmost bits of the codeword are accepted as the dataword
(interpreted as no error); otherwise, the 4 bits are discarded (error).
Division in CRC encoder
Encoder
Let us take a closer look at the encoder. The encoder takes the
dataword and augments it with n - k number of 0s. It then divides
the augmented dataword by the divisor, as shown in Figure .
The process of modulo-2 binary division is the same as the
familiar division process we use for decimal numbers. However, in
this case addition and subtraction are the same. We use the XOR
operation to do both.
As in decimal division, the process is done step by step. In each
step, a copy of the divisor is XORed with the 4 bits of the dividend.
The result of the XOR operation (remainder) is 3 bits (in this
case), which is used for the next step after 1 extra bit is pulled
down to make it 4 bits long. There is one important point we need
to remember in this type of division. If the leftmost bit of the
dividend (or the part used in each step) is 0, the step cannot use
the regular divisor; we need to use an all-0s divisor. When there
are no bits left to pull down, we have a result. The 3-bit remainder
forms the check bits (r2, r1, and r0). They are appended to the
dataword to create the codeword.
Decoder
The codeword can change during transmission. The decoder does
the same division process as the encoder. The remainder of the
division is the syndrome. If the syndrome is all 0s, there is no
error; the dataword is separated from the received codeword and
accepted. Otherwise, everything is discarded. The figure shows two
cases: the left-hand part shows the value of the syndrome when no
error has occurred; the syndrome is 000. The right-hand part of
the figure shows the case in which there is a single error.
The syndrome is not all 0s (it is 011).
Division in the CRC decoder for two cases
CHECKSUM
At the source, the message is first divided into m-bit units. The
generator then creates an extra m-bit unit called the checksum, which
is sent with the message. At the destination, the checker creates a new
checksum from the combination of the message and sent checksum. If
the new checksum is all 0s, the message is accepted; otherwise, the
message is discarded. Note that in the real implementation, the
checksum unit is not necessarily added at the end of the message; it
can be inserted in the middle of the message.
Suppose the message is a list of five 4-bit numbers that we want
to send to a destination. In addition to sending these numbers, we
send the sum of the numbers. For example, if the set of
numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0, 6, 36), where
36 is the sum of the original numbers.
The receiver adds the five numbers and compares the result with
the sum. If the two are the same, the receiver assumes no error,
accepts the five numbers, and discards the sum. Otherwise,
there is an error somewhere and the message is not accepted.
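The example can be reproduced in a few lines of Python; this is a sketch of the simple sum-based scheme described above, with made-up function names.

```python
def make_message(numbers):
    """Sender side: append the sum of the numbers to the message."""
    return numbers + [sum(numbers)]

def accept(message):
    """Receiver side: re-add the numbers and compare with the sent sum."""
    *numbers, sent_sum = message
    return sum(numbers) == sent_sum

msg = make_message([7, 11, 12, 0, 6])
print(msg)          # [7, 11, 12, 0, 6, 36]
print(accept(msg))  # True
```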