Connection-oriented service is a technique used to transport data at the session layer. The data streams or packets are delivered to the receiver in the same order in which they were sent by the sender. It is a data-transfer method between two devices or computers on different networks, designed after the model of the telephone system. Whenever a network implements this service, it transfers data or messages from the sender (source) to the receiver (destination) in the correct order.
This connection service is generally provided by protocols of both the network layer (where different packets belonging to the same message may follow different paths) and the transport layer (which delivers the packets of a message to the user independently of the paths they followed).
Operations :
There is a sequence of operations that needs to be followed by users. These operations are given below :
1. Establishing a Connection –
A session connection must be established before any data is transported, giving a direct connection between the two sessions.
2. Transferring Data or Messages –
Once the session connection is established, the message or data is transferred.
3. Releasing the Connection –
After the data has been transferred, the connection is released.
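The three operations above map directly onto a connection-oriented transport such as TCP. A minimal sketch using Python's standard socket library, with a local echo thread standing in for the receiver (the function name and port handling are illustrative):

```python
import socket
import threading

def connection_oriented_exchange(message=b"hello"):
    ready = threading.Event()
    port_box = []

    def receiver():
        # Receiver side: accept one connection, echo the data back.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", 0))            # pick any free port
            port_box.append(srv.getsockname()[1])
            srv.listen(1)
            ready.set()
            conn, _ = srv.accept()
            with conn:
                conn.sendall(conn.recv(1024))

    t = threading.Thread(target=receiver)
    t.start()
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", port_box[0]))     # 1. establish the connection
        s.sendall(message)                        # 2. transfer the data
        reply = s.recv(1024)
    t.join()                                      # 3. connection released on exit
    return reply
```

TCP guarantees the in-order delivery described above, which is why it is the canonical example of a connection-oriented service.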
Different Ways :
There are two ways in which connection-oriented services can be implemented. These ways are given below :
1. Circuit-Switched Connection –
Circuit-switched networks are generally known as connection-oriented networks. In this connection, a dedicated route is established between sender and receiver, and the whole message is sent through it. A dedicated physical route, path, or circuit is established among all communicating nodes, and after that the data stream or message is transferred.
2. Virtual Circuit-Switched Connection –
Virtual circuit-switched connection, or virtual circuit switching, is also known as connection-oriented switching. In this connection, a preplanned route or path is established before any data or messages are transferred. The message is transferred over this network in such a way that it seems to the user that there is a dedicated route or path from the source (sender) to the destination (receiver).
Advantages :
It supports quality of service in an easy way.
This connection is more reliable than connectionless service.
Long messages can be divided into smaller messages so that they fit inside packets.
Problems or issues related to duplicate data packets are made less severe.
Disadvantages :
In this connection, cost is fixed no matter how traffic is.
It is necessary to have resource allocation before communication.
If any route or path failures or network congestions arise, there is no alternative way available
to continue communication.
Unit 4
Network layer design
In real-world scenarios, networks under the same administration are often geographically scattered. There may be a requirement to connect two different networks of the same kind or of different kinds. Routing between two networks is called internetworking.
Networks can be considered different based on various parameters such as protocol, topology, Layer-2 network, and addressing scheme.
In internetworking, routers know each other's addresses and the addresses beyond them. They can be statically configured to reach different networks, or they can learn routes by using an internetworking routing protocol.
Routing protocols which are used within an organization or administration are called Interior Gateway Protocols (IGP). RIP and OSPF are examples of IGPs. Routing between different organizations or administrations uses an Exterior Gateway Protocol (EGP), and there is only one EGP in use, i.e., the Border Gateway Protocol (BGP).
Tunneling
If there are two geographically separated networks that want to communicate with each other, they may deploy a dedicated line between them, or they have to pass their data through intermediate networks.
Tunneling is a mechanism by which two or more networks of the same kind communicate with each other while bypassing the complexities of the intermediate networks. Tunneling is configured at both ends.
When data enters one end of the tunnel, it is tagged. The tagged data is then routed inside the intermediate or transit network to reach the other end of the tunnel. When data exits the tunnel, its tag is removed and the data is delivered to the other part of the network.
Both ends appear as if they are directly connected, and the tagging lets the data travel through the transit network without any modification.
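The tag-on-entry, strip-on-exit behaviour can be sketched in miniature. The 3-byte header value below is purely hypothetical; real tunnels use protocol-defined encapsulation headers such as GRE or IPsec:

```python
TUNNEL_HEADER = b"TUN"  # hypothetical tag marking tunneled traffic

def encapsulate(packet: bytes) -> bytes:
    # Tunnel entry point: tag the original packet before it crosses
    # the transit network; the payload itself is left untouched.
    return TUNNEL_HEADER + packet

def decapsulate(tagged: bytes) -> bytes:
    # Tunnel exit point: strip the tag and deliver the inner packet
    # unchanged to the other part of the network.
    if not tagged.startswith(TUNNEL_HEADER):
        raise ValueError("not a tunneled packet")
    return tagged[len(TUNNEL_HEADER):]
```

Because only the tag is added and removed at the endpoints, the inner packet survives the transit network byte-for-byte, which is what makes the two ends appear directly connected.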
Packet Fragmentation
Most Ethernet segments have their maximum transmission unit (MTU) fixed at 1500 bytes. A data packet can be longer or shorter than this depending on the application. Devices in the transit path also have hardware and software capabilities that determine how much data the device can handle and what size of packet it can process.
If the data packet size is less than or equal to the size of packet the transit network can handle, it is processed normally. If the packet is larger, it is broken into smaller pieces and then forwarded. This is called packet fragmentation. Each fragment contains the same destination and source addresses and is routed through the transit path easily. At the receiving end the fragments are reassembled.
If a packet with DF (don’t fragment) bit set to 1 comes to a router which can not handle the packet because of
its length, the packet is dropped.
When a packet received by a router has its MF (more fragments) bit set to 1, the router knows that it is a fragmented packet and that parts of the original packet are on the way.
If a packet is fragmented into pieces that are too small, the overhead increases. If the fragments are too large, an intermediate router may not be able to process them and they might get dropped.
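A simplified sketch of fragmentation and reassembly. Offsets here are plain byte offsets and the MF flag is a bare integer; real IPv4 encodes the offset in 8-byte units and copies the IP header into every fragment:

```python
def fragment(packet: bytes, mtu: int):
    # Split a payload into MTU-sized fragments. Each fragment carries
    # (offset, MF flag, data); MF=1 means more fragments are on the way.
    fragments = []
    for offset in range(0, len(packet), mtu):
        data = packet[offset:offset + mtu]
        mf = 1 if offset + mtu < len(packet) else 0
        fragments.append((offset, mf, data))
    return fragments

def reassemble(fragments):
    # The receiver orders fragments by offset and concatenates the data.
    return b"".join(data for _, _, data in sorted(fragments))
```

A 4000-byte packet over a 1500-byte MTU yields three fragments, with MF set on all but the last; reassembly works even if the fragments arrive out of order, since the offsets restore the original order.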
Every computer in a network has an IP address by which it can be uniquely identified and addressed. An IP
address is Layer-3 (Network Layer) logical address. This address may change every time a computer
restarts. A computer can have one IP at one instance of time and another IP at some different time.
Address Resolution Protocol(ARP)
While communicating, a host needs Layer-2 (MAC) address of the destination machine which belongs to the
same broadcast domain or network. A MAC address is physically burnt into the Network Interface Card (NIC)
of a machine and it never changes.
On the other hand, an IP address in the public domain rarely changes. If the NIC is replaced because of some fault, the MAC address also changes. Thus, for Layer-2 communication to take place, a mapping between the two addresses is required.
To learn the MAC address of a remote host on its broadcast domain, a computer wishing to initiate communication sends out an ARP broadcast message asking, “Who has this IP address?” Because it is a broadcast, all hosts on the network segment (broadcast domain) receive this packet and process it. The ARP packet contains the IP address of the destination host that the sending host wishes to talk to. When a host receives an ARP packet destined for it, it replies with its own MAC address.
Once the host gets the destination MAC address, it can communicate with the remote host using a Layer-2 link protocol. This MAC-to-IP mapping is saved in the ARP caches of both the sending and receiving hosts. The next time they need to communicate, they can refer directly to their respective ARP caches.
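The broadcast-and-cache behaviour can be sketched with toy host objects. The class and method names are illustrative, not a real ARP implementation:

```python
class ArpNode:
    # Toy host that answers ARP requests and caches IP-to-MAC mappings.
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.cache = {}  # IP address -> MAC address

    def handle_broadcast(self, target_ip, sender):
        # Every host on the broadcast domain processes the request,
        # but only the owner of target_ip replies with its MAC.
        if target_ip == self.ip:
            sender.cache[self.ip] = self.mac    # reply fills the asker's cache
            self.cache[sender.ip] = sender.mac  # responder caches the asker too

def arp_resolve(asker, hosts, target_ip):
    # Use the ARP cache if possible; otherwise broadcast "Who has this IP?"
    if target_ip not in asker.cache:
        for host in hosts:
            host.handle_broadcast(target_ip, asker)
    return asker.cache.get(target_ip)
```

Note that after one resolution both caches are populated, so a follow-up in either direction needs no broadcast at all, exactly as described above.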
Reverse ARP (RARP) is a mechanism by which a host that knows a MAC address can learn the corresponding IP address needed to communicate.
Internet Control Message Protocol (ICMP)
ICMP is a network diagnostic and error-reporting protocol. ICMP belongs to the IP protocol suite and uses IP as its carrier protocol. After an ICMP packet is constructed, it is encapsulated in an IP packet. Because IP itself is a best-effort, non-reliable protocol, so is ICMP.
Any feedback about the network is sent back to the originating host. If some error occurs in the network, it is reported by means of ICMP. ICMP defines dozens of diagnostic and error-reporting messages.
ICMP-echo and ICMP-echo-reply are the most commonly used ICMP messages to
check the reachability of end-to-end hosts. When a host receives an ICMP-echo
request, it is bound to send back an ICMP-echo-reply. If there is any problem in the
transit network, the ICMP will report that problem.
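An ICMP echo request is small enough to construct by hand. The sketch below builds the 8-byte header (type 8 = echo request, code 0) and fills in the standard internet checksum; a received message verifies if the checksum computed over the whole message comes out as zero:

```python
import struct

def internet_checksum(data: bytes) -> int:
    # One's-complement sum over 16-bit words, folded to 16 bits (RFC 1071).
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo_request(identifier: int, seq: int, payload: bytes) -> bytes:
    # Build the header with checksum 0, compute the checksum over the
    # whole message, then rebuild with the checksum field filled in.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, seq)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, seq) + payload
```

Sending such a packet requires a raw socket (and usually root privileges), which is why it is omitted here; the echo reply uses type 0 with the same identifier, sequence number, and payload.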
Internet Protocol Version 4 (IPv4)
IPv4 is a 32-bit addressing scheme used as the TCP/IP host-addressing mechanism. IP addressing enables every host on the TCP/IP network to be uniquely identified.
IPv4 provides hierarchical addressing scheme which enables it to divide the network
into sub-networks, each with well-defined number of hosts. IP addresses are divided
into many categories:
Class A - it uses first octet for network addresses and last three octets for host
addressing
Class B - it uses first two octets for network addresses and last two for host
addressing
Class C - it uses first three octets for network addresses and last one for host
addressing
Class D - it provides flat IP addressing scheme in contrast to hierarchical structure
for above three.
Class E - it is reserved for experimental use.
IPv4 also has well-defined address spaces to be used as private addresses (not
routable on internet), and public addresses (provided by ISPs and are routable on
internet).
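The class of a given address follows directly from its first octet, because the classes partition the leading-bit patterns. A small helper illustrating classful addressing (which modern networks have largely replaced with classless CIDR addressing):

```python
def ipv4_class(address: str) -> str:
    # Classify by the first octet's leading bits (classful addressing).
    first = int(address.split(".")[0])
    if first < 128:
        return "A"   # 0xxxxxxx -> 0.0.0.0 - 127.255.255.255
    if first < 192:
        return "B"   # 10xxxxxx -> 128.0.0.0 - 191.255.255.255
    if first < 224:
        return "C"   # 110xxxxx -> 192.0.0.0 - 223.255.255.255
    if first < 240:
        return "D"   # 1110xxxx -> multicast
    return "E"       # 1111xxxx -> experimental
```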
Though IP is not a reliable protocol, it provides a ‘best-effort delivery’ mechanism.
Internet Protocol Version 6 (IPv6)
Exhaustion of IPv4 addresses gave birth to the next-generation Internet Protocol, version 6. IPv6 addresses its nodes with 128-bit-wide addresses, providing plenty of address space for the future, enough for the entire planet and beyond.
IPv6 has introduced Anycast addressing but has removed the concept of broadcasting.
IPv6 enables devices to self-acquire an IPv6 address and communicate within their subnet. This auto-configuration removes the dependency on Dynamic Host Configuration Protocol (DHCP) servers. This way, even if the DHCP server on a subnet is down, the hosts can still communicate with each other.
IPv6 provides new feature of IPv6 mobility. Mobile IPv6 equipped machines can roam
around without the need of changing their IP addresses.
IPv6 is still in a transition phase and is expected to replace IPv4 completely in the coming years. At present, only a few networks run on IPv6. There are some transition mechanisms available that allow IPv6-enabled networks to communicate easily over and alongside IPv4. These are:
Dual stack implementation
Tunneling
NAT-PT
Unit 2
Data Link Layer
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The communication channels that connect adjacent nodes are known as links, and in order to move a datagram from source to destination, the datagram must be moved across each individual link.
o The main responsibility of the Data Link Layer is to transfer the datagram across
an individual link.
o The Data link layer protocol defines the format of the packet exchanged across
the nodes as well as the actions such as Error detection, retransmission, flow
control, and random access.
o The Data Link Layer protocols are Ethernet, token ring, FDDI and PPP.
o An important characteristic of a Data Link Layer is that datagram can be handled
by different link layer protocols on different links in a path. For example, the
datagram is handled by Ethernet on the first link, PPP on the second link.
o Following services are provided by the Data Link Layer:
Framing & Link access: Data Link Layer protocols encapsulate each network-layer datagram within a link-layer frame before transmission across the link. A frame consists of a data field, in which the network-layer datagram is inserted, and a number of header fields. The protocol specifies the structure of the frame as well as a channel access protocol by which the frame is to be transmitted over the link.
o Reliable delivery: Data Link Layer provides a reliable delivery service, i.e., it transmits the network-layer datagram without any error. A reliable delivery service is accomplished with retransmissions and acknowledgements. The data link layer mainly provides the reliable delivery service over links with higher error rates, so that errors can be corrected locally, at the link where they occur, rather than forcing the data to be retransmitted end to end.
o Flow control: A receiving node can receive the frames at a faster rate than it can
process the frame. Without flow control, the receiver's buffer can overflow, and
frames can get lost. To overcome this problem, the data link layer uses the flow
control to prevent the sending node on one side of the link from overwhelming
the receiving node on another side of the link.
o Error detection: Errors can be introduced by signal attenuation and noise. Data
Link Layer protocol provides a mechanism to detect one or more errors. This is
achieved by adding error detection bits in the frame and then receiving node can
perform an error check.
o Error correction: Error correction is similar to the Error detection, except that
receiving node not only detect the errors but also determine where the errors
have occurred in the frame.
o Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit
the data at the same time. In a Half-Duplex mode, only one node can transmit
the data at the same time
What is Flow Control in Data Link Layer ?
Flow control is a set of procedures that restrict the amount of data a sender should send before it
waits for some acknowledgment from the receiver.
Flow Control is an essential function of the data link layer.
It determines the amount of data that a sender can send.
It makes the sender wait until an acknowledgment is received from the receiver’s end.
The two methods of flow control are stop-and-wait and sliding window.
Stop-and-wait Protocol
Stop-and-wait protocol works under the assumption that the communication channel
is noiseless and transmissions are error-free.
Working :
The sender sends data to the receiver.
The sender stops and waits for the acknowledgment.
The receiver receives the data and processes it.
The receiver sends an acknowledgment for the above data to the sender.
The sender sends data to the receiver after receiving the acknowledgment of previously
sent data.
The process is unidirectional and continues until the sender sends the End of
Transmission (EoT) frame.
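The working steps above can be replayed as a tiny simulation over the assumed noiseless channel (the function and event names are illustrative):

```python
def stop_and_wait(frames):
    # Stop-and-wait over a noiseless channel: send one frame, stop,
    # wait for its acknowledgment, then send the next frame.
    log, received = [], []
    for frame in frames:
        log.append(("send", frame))   # sender transmits and stops
        received.append(frame)        # receiver accepts and processes it
        log.append(("ack", frame))    # acknowledgment releases the sender
    log.append(("send", "EoT"))       # End of Transmission frame
    return log, received
```

The strict send/ack alternation in the log is what limits throughput: only one frame is ever in flight, which motivates the sliding window protocol below.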
Sliding Window Protocol
The sliding window protocol is the flow control protocol for noisy channels that allows the
sender to send multiple frames even before acknowledgments are received. It is called a Sliding
window because the sender slides its window upon receiving the acknowledgments for the sent
frames.
Working :
The sender and receiver each have a “window” of frames. A window is a buffer that can hold multiple frames. In this scheme, the size of the window on the receiver side is always 1.
Each frame is sequentially numbered from 0 to n - 1, where n is the window size at the
sender side.
The sender sends as many frames as would fit in a window.
After receiving the desired number of frames, the receiver sends an acknowledgment.
The acknowledgment (ACK) includes the number of the next expected frame.
For example, suppose the window size is 2:
1. The sender sends frames 0 and 1 from the first window.
2. The receiver after receiving the sent frames, sends an acknowledgment for frame 2 (as
frame 2 is the next expected frame).
3. The sender then sends frames 2 and 3. Since frame 2 is lost on the way, the receiver sends
back a “NAK” signal (a non-acknowledgment) to inform the sender that frame 2 has been
lost. So, the sender retransmits frame 2.
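The example above can be replayed with a small simulation. The loss model and NAK handling are deliberately simplified: each listed frame is lost exactly once and is retransmitted immediately on the NAK:

```python
def sliding_window_send(num_frames, window=2, lost=frozenset()):
    # Send frames window by window; a lost frame triggers a NAK
    # and an immediate retransmission of just that frame.
    events, delivered = [], []
    pending_loss = set(lost)
    base = 0
    while base < num_frames:
        for f in range(base, min(base + window, num_frames)):
            events.append(("send", f))
            if f in pending_loss:
                pending_loss.discard(f)    # lost in transit this one time
                events.append(("nak", f))  # receiver reports the gap
                events.append(("send", f)) # sender retransmits frame f
            delivered.append(f)
        base += window
        # ACK carries the number of the next expected frame.
        events.append(("ack", min(base, num_frames)))
    return events, delivered
```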
What is Error Control in the Data Link Layer ?
Error Control is a combination of both error detection and error correction. It ensures that the
data received at the receiver end is the same as the one sent by the sender.
Error detection is the process by which the receiver informs the sender about any erroneous
frame (damaged or lost) sent during transmission.
Error correction refers to the retransmission of those frames by the sender.
Purpose of Error Control
Error control is a vital function of the data link layer that detects errors in transmitted frames and retransmits the erroneous frames. Error detection and correction deal with data frames damaged or lost in transit as well as acknowledgment frames lost during transmission. The method used in noisy channels to control these errors is ARQ, or Automatic Repeat reQuest.
Categories of Error Control
Stop-and-wait ARQ
In stop-and-wait ARQ, after a frame is sent, the sender maintains a timeout counter.
If the acknowledgment of the frame arrives in time, the sender transmits the next frame in the queue.
Otherwise, the sender retransmits the frame and restarts the timeout counter.
If the receiver sends a negative acknowledgment, the sender also retransmits the frame.
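The retransmission logic can be sketched as follows. Here `ack_outcomes` is a hypothetical channel model mapping (frame, attempt) to 'ack', 'nak', or 'timeout'; any outcome not listed is a clean acknowledgment:

```python
def stop_and_wait_arq(frames, ack_outcomes, max_retries=5):
    # Stop-and-wait ARQ sketch: retransmit on a NAK or an expired
    # timeout counter; move to the next frame only after an ACK.
    log = []
    for frame in frames:
        for attempt in range(max_retries):
            log.append(("send", frame, attempt))
            outcome = ack_outcomes.get((frame, attempt), "ack")
            if outcome == "ack":
                break                 # next frame in the queue
            # 'timeout' or 'nak': restart the timer and retransmit
        else:
            raise RuntimeError("frame %r never acknowledged" % frame)
    return log
```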
Sliding Window ARQ
To deal with the retransmission of lost or damaged frames, a few changes are made to the sliding
window mechanism used in flow control.
Go-Back-N ARQ :
In Go-Back-N ARQ, if the sent frames are suspected to be lost or damaged, all the frames from the lost frame to the last frame transmitted are retransmitted.
Working of Go-Back-N ARQ Protocol
Given below are the steps to clearly explain how the Go Back N ARQ algorithm works.
1. Data packets are divided into multiple frames. Each frame contains information about the
destination address, the error control mechanism it follows, etc. These multiple frames are
numbered so that they can be distinguished from each other.
2. The integer 'N' in Go Back 'N' ARQ tells us the size of the window i.e. the number of
frames that are sent at once from sender to receiver. Suppose the window size 'N' is equal to 4.
Then, 4 frames (frame 0, frame 1, frame 2, and frame 3) will be sent first from sender to receiver.
3. The receiver sends the acknowledgment for frame 0. Then the sliding window moves by one and frame 4 is sent.
4. The receiver sends the acknowledgment for frame 1. Then the sliding window moves by one and frame 5 is sent.
5. The sender waits for the acknowledgment for some fixed amount of time. If the sender does
not get the acknowledgment for a frame in the time, it considers the frame to be corrupted. Then
the sliding window moves to the starting of the corrupted frame and all the frames in the window
are retransmitted.
For example, if the sender does not receive the acknowledgment for frame 2, it retransmits all the frames in the window, i.e., frames [2, 3, 4, 5].
Characteristics of Go-Back-N ARQ
Given below are the characteristics of the Go-Back-N ARQ protocol.
1. The size of the sender window in Go Back N ARQ is equal to N.
2. The size of the receiver window in Go Back N ARQ is equal to 1.
3. When the acknowledgment for one frame is not received by the sender or the frames
received by the receiver are out of order, then the whole window starting from the
corrupted frame is retransmitted.
Go-Back-N ARQ follows the principle of pipelining i.e. a frame can be sent by the sender
before receiving the acknowledgment of the previously sent frame.
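The go-back behaviour can be sketched with a window-at-a-time simulation. The loss model is simplified (each listed frame times out once, then succeeds on retransmission), and the names are illustrative:

```python
def go_back_n(num_frames, n=4, lose_once=frozenset()):
    # Go-Back-N sketch: on a missing ACK the window slides back to the
    # corrupted frame and everything from it onward is retransmitted.
    events, delivered = [], []
    losses = set(lose_once)
    base = 0
    while base < num_frames:
        window = list(range(base, min(base + n, num_frames)))
        for f in window:
            events.append(("send", f))
        # First frame in the window whose ACK went missing, if any.
        bad = next((f for f in window if f in losses), None)
        if bad is None:
            delivered.extend(window)
            base = window[-1] + 1      # slide past the whole window
        else:
            delivered.extend(range(base, bad))
            losses.discard(bad)        # succeeds on retransmission
            events.append(("timeout", bad))
            base = bad                 # go back to the corrupted frame
    return events, delivered
```

Note that frame 3 is sent twice when frame 2 times out, even though frame 3 itself arrived intact; retransmitting good frames along with the bad one is the cost of keeping the receiver window at size 1.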
TOKEN OPERATION
Token ring is defined as a communication protocol in a local area network in which all
the stations present in the network are connected through a ring topology.
Token is a small size frame of 3 bytes.
For the avoidance of network congestion, token rings allow only one device to be active
at a time.
Listen Mode, Transmit Mode, and by-pass mode are the three modes of operation of the
token ring.
Token's main work is to circulate in the network and pass through the stations in the
network and allow them to transfer the packet.
Shielded twisted-pair cable is recommended by IEEE 802.5 for the token ring.
Starting and ending delimiter, access control, frame control, source address and
destination address, and checksum are the fields of the IEEE 802.5 token ring format.
Token ring protocol sets one station as an active monitor for the maintenance of the
network.
There are several problems associated with token rings, such as token loss, station failure, and interface issues.
How Does A Token Ring Work?
1. A frame or packet reaches the next station according to the sequence of the ring.
2. The current node determines whether the frame contains a message addressed to it. If yes, the node removes the message from the frame. If not, and the frame carries no data, it is an empty frame (an empty frame is called the token frame).
3. Only the station that holds the token frame has access to transfer data. If it has data, it inserts that data into the token frame; otherwise it releases the token for the next station, which picks up the token frame for further transmission.
The above steps are repeated by all the stations present in the token ring network. The token is 3 bytes in size and has start and end delimiters that define the beginning and the end of the frame, as well as an access control byte. The maximum possible size of the data portion is 4,500 bytes.
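The circulation rule, that only the token holder may transmit, can be sketched with a toy ring. The data structures and names are illustrative, not the IEEE 802.5 frame format:

```python
def token_ring(stations, rounds=1):
    # Toy token circulation: the token visits stations in ring order.
    # `stations` maps station name -> queued message (or None).
    order = list(stations)
    transmissions = []
    for _ in range(rounds):
        for name in order:            # token passes station to station
            msg = stations[name]
            if msg is not None:       # the token holder inserts its data
                transmissions.append((name, msg))
                stations[name] = None # then releases the token onward
    return transmissions
```

At every instant exactly one station holds the token, so the transmissions come out strictly one at a time in ring order, which is how the protocol avoids collisions without any carrier sensing.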