Understanding Computer Networks
Different links can transmit data at different rates, with the transmission rate of a link measured in bits/second.
When one end system has data to send to another end system, the sending end
system segments the data and adds header bytes to each segment. The resulting
packages of information are known as packets; they are sent through the network to
the receiving end system, where they are reassembled into the original data.
A packet switch takes a packet arriving on one of its incoming communication
links and forwards that packet on one of its outgoing communication links. The two
most prominent types in today’s Internet are routers and link-layer switches. Both
types of switches forward packets toward their ultimate destinations.
The sequence of communication links and packet switches traversed by a packet
from the sending end system to the receiving end system is known as a route or
path.
End systems access the Internet through Internet Service Providers
(ISPs),including residential ISPs such as local cable or telephone companies;
corporate ISPs; university ISPs; and ISPs that provide WiFi access in airports, hotels,
coffee shops, and other public places.
End systems, packet switches, and other pieces of the Internet run protocols that
control the sending and receiving of information within the Internet. The
Transmission Control Protocol (TCP) and the Internet Protocol (IP) are two of the
most important protocols in the Internet.
The IP protocol specifies the format of the packets that are sent and received
among routers and end systems. The Internet’s principal protocols are collectively
known as TCP/IP.
A Services Description
The Internet can also be described as an infrastructure that provides services to
applications. These applications include electronic mail, Web surfing, social
networks, instant messaging, video streaming, distributed games, peer-to-peer (P2P)
file sharing, television over the Internet, remote login, and much more.
The applications are said to be distributed applications, since they involve
multiple end systems that exchange data with each other. Importantly, Internet
applications run on end systems—they do not run in the packet switches in the
network core.
End systems attached to the Internet provide an Application Programming
Interface (API) that specifies how a program running on one end system asks the
Internet infrastructure to deliver data to a specific destination program running on
another end system.
This Internet API is a set of rules that the sending program must follow so that
the Internet can deliver the data to the destination program.
What Is a Protocol?
A protocol defines the format and the order of messages exchanged between two or
more communicating entities, as well as the actions taken on the transmission
and/or receipt of a message or other event.
Fig: A human protocol and a computer network protocol
Fig: Access Networks
End systems are also referred to as hosts because they run application programs
such as a Web browser program, a Web server program, an e-mail client program, or
an e-mail server program.
Hosts are sometimes further divided into two categories: clients and servers.
Access Networks
The network that physically connects an end system to the first router (also known
as the “edge router”) on a path from the end system to any other distant end system.
i) DSL (digital subscriber line): Today, the two most prevalent types of broadband
residential access are digital subscriber line (DSL) and cable.
DSL is a wireline transmission technology that transmits data over existing copper
telephone lines much faster than traditional dial-up access.
A residence typically obtains DSL Internet access from the same local telephone
company (telco) that provides its wired local phone access. When DSL is used, a
customer’s telco is also its ISP.
As shown in Figure, each customer’s DSL modem uses the existing telephone
line to exchange data with a digital subscriber line access multiplexer (DSLAM)
located in the telco’s local central office (CO).
The home’s DSL modem takes digital data and translates it to high frequency
tones for transmission over telephone wires to the CO; the analog signals from many
such houses are translated back into digital format at the DSLAM.
The residential telephone line carries both data and traditional telephone signals
simultaneously.
On the customer side, a splitter separates the data and telephone signals arriving
to the home and forwards the data signal to the DSL modem.
On the telco side, in the CO, the DSLAM separates the data and phone signals
and sends the data into the Internet. Hundreds or even thousands of households
connect to a single DSLAM.
ii) Cable: While DSL makes use of the telco’s existing local telephone infrastructure,
cable Internet access makes use of the cable television company’s existing cable
television infrastructure.
A residence obtains cable Internet access from the same company that provides its
cable television.
As shown in the figure, fiber optics connects the cable head end to neighborhood-
level junctions, from which traditional coaxial cable is then used to reach individual
houses and apartments.
Because both fiber and coaxial cable are employed in this system, it is often
referred to as hybrid fiber coax (HFC).
Cable internet access requires special modems, called cable modems. The cable
modem is typically an external device and connects to the home PC through an
Ethernet port.
At the cable head end, the cable modem termination system (CMTS) serves a
similar function as the DSL network’s DSLAM—turning the analog signal sent from
the cable modems in many downstream homes back into digital format.
Cable modems divide the HFC network into two channels, a downstream and an
upstream channel.
One important characteristic of cable Internet access is that it is a shared
broadcast medium. In particular, every packet sent by the head end travels
downstream on every link to every home and every packet sent by a home travels on
the upstream channel to the head end.
iii) FTTH (Fiber To The Home): FTTH includes fiber-optic access solutions designed
for residential deployments. In FTTH networks, fibers are directly connected to
individual homes or buildings.
The FTTH can provide an optical fiber path from the CO directly to the home.
There are two competing optical-distribution network architectures that perform
this splitting: active optical networks (AONs) and passive optical networks (PONs).
Figure shows FTTH using the PON distribution architecture. Each home has an
optical network terminator (ONT), which is connected by dedicated optical fiber to a
neighborhood splitter.
The splitter combines a number of homes (typically less than 100) onto a single,
shared optical fiber, which connects to an optical line terminator (OLT) in the telco’s
CO.
The OLT, providing conversion between optical and electrical signals, connects to
the Internet via a telco router.
In the home, users connect a home router (typically a wireless router) to the ONT
and access the Internet via this home router.
In the PON architecture, all packets sent from OLT to the splitter are replicated at
the splitter (similar to a cable head end).
FTTH can potentially provide Internet access rates in the gigabits per second
range.
iv) Dial-Up, and Satellite: Dial-up access over traditional phone lines is based on the
same model as DSL—a home modem connects over a phone line to a modem in the
ISP. Compared with DSL and other broadband access networks, dial-up access is
excruciatingly slow at 56 kbps.
Satellite link can be used to connect a residence to the Internet at speeds of more
than 1 Mbps; StarBand and HughesNet are two such satellite access providers.
Fig: Ethernet Internet access
WiFi: Increasingly, however, people are accessing the Internet wirelessly from
laptops, smart phones, tablets, and other devices.
In a wireless LAN setting, wireless users transmit/receive packets to/from an
access point that is connected into the enterprise’s network (most likely including
wired Ethernet), which in turn is connected to the wired Internet.
A wireless LAN user must typically be within a few tens of meters of the access
point.
Wireless LAN access based on IEEE 802.11 technology, more colloquially known
as WiFi, is now just about everywhere—universities, business offices, cafes, airports,
homes, and even in airplanes.
802.11 today provides a shared transmission rate of up to 54 Mbps.
A router will typically have many incident links, since its job is to switch an
incoming packet onto an outgoing link.
In this example, the source has three packets, each consisting of L bits, to send
to the destination.
The source has transmitted some of packet 1, and the front of packet 1 has
already arrived at the router. Because the router employs store-and-forward
transmission, at this instant of time, the router cannot transmit the bits it has
received; instead it must first buffer (i.e., “store”) the packet’s bits.
Only after the router has received all of the packet’s bits can it begin to transmit
(i.e., “forward”) the packet onto the outbound link.
Let’s now consider the general case of sending one packet from source to
destination over a path consisting of N links, each of rate R. Applying the same logic
as above, we see that the end-to-end delay is d_end-to-end = N(L/R).
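The store-and-forward delay over N links can be sketched numerically; the link count, packet length, and link rate below are illustrative values, not taken from the text.

```python
def end_to_end_delay(N, L, R):
    """Store-and-forward delay over N links: each link must receive
    the full L-bit packet before forwarding it, so d = N * L / R."""
    return N * L / R

# Illustrative numbers: 3 links of 2 Mbps carrying an 8,000-bit packet.
print(end_to_end_delay(3, 8_000, 2_000_000))  # 0.012 seconds (12 ms)
```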
Queuing Delays and Packet Loss: Each packet switch has multiple links attached
to it. For each attached link, the packet switch has an output buffer (also called an
output queue), which stores packets that the router is about to send into that link.
The output buffers play a key role in packet switching. If an arriving packet needs
to be transmitted onto a link but finds the link busy with the transmission of another
packet, the arriving packet must wait in the output buffer.
Thus, in addition to the store-and-forward delays, packets suffer output buffer
queuing delays. These delays are variable and depend on the level of congestion in
the network.
Because the amount of buffer space is finite, an arriving packet may find that the
buffer is completely full with other packets waiting for transmission. In this case,
packet loss will occur: either the arriving packet or one of the already-queued
packets will be dropped.
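The drop-tail behavior just described, where an arriving packet that finds the output buffer full is discarded, can be sketched as follows; the buffer capacity and packet names are hypothetical.

```python
from collections import deque

def enqueue_packets(arrivals, capacity):
    """Place arriving packets in a finite output buffer; while the link is
    busy (no departures modeled here), arrivals that find it full are lost."""
    queue, dropped = deque(), 0
    for pkt in arrivals:
        if len(queue) < capacity:
            queue.append(pkt)
        else:
            dropped += 1  # packet loss: buffer completely full
    return list(queue), dropped

queued, lost = enqueue_packets(["p1", "p2", "p3", "p4", "p5"], capacity=3)
print(queued, lost)  # ['p1', 'p2', 'p3'] 2
```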
Forwarding Tables and Routing Protocols: In the Internet, every end system has
an address called an IP address. When a source end system wants to send a packet
to a destination end system, the source includes the destination’s IP address in the
packet’s header.
When a packet arrives at a router in the network, the router examines a portion of
the packet’s destination address and forwards the packet to an adjacent router.
More specifically, each router has a forwarding table that maps destination
addresses (or portions of the destination addresses) to that router’s outbound links.
When a packet arrives at a router, the router examines the address and searches
its forwarding table, using this destination address, to find the appropriate outbound
link. The router then directs the packet to this outbound link.
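A forwarding table that maps destination-address portions to outbound links can be sketched as a longest-prefix lookup; the bit-string prefixes and link names below are hypothetical, and real routers use much faster lookup structures than this linear scan.

```python
def lookup(table, dest):
    """Search the forwarding table for the longest prefix matching the
    destination address and return that entry's outbound link."""
    best_prefix, best_link = "", "default-link"
    for prefix, link in table.items():
        if dest.startswith(prefix) and len(prefix) > len(best_prefix):
            best_prefix, best_link = prefix, link
    return best_link

# Hypothetical table: address portions (bit strings) -> outbound links.
table = {"11001000": "link-1", "110010000001": "link-2"}
print(lookup(table, "1100100000011010"))  # link-2 (the longer prefix wins)
```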
The Internet has a number of special routing protocols that are used to
automatically set the forwarding tables. A routing protocol may, for example,
determine the shortest path from each router to each destination and use the
shortest path results to configure the forwarding tables in the routers.
Circuit Switching
In circuit-switched networks, the resources needed along a path (buffers, link
transmission rate) to provide for communication between the end systems are
reserved for the duration of the communication session between the end systems.
In packet-switched networks, these resources are not reserved; a session’s
messages use the resources on demand, and as a consequence, may have to wait
(that is, queue) for access to a communication link.
Traditional telephone networks are examples of circuit-switched networks.
A circuit-switched network is made of a set of switches connected by physical
links, in which each link is divided into n channels.
Each link is divided into n (n is 3 in the figure) channels by using FDM or TDM.
Figure illustrates a circuit-switched network. In this network, the four circuit switches
are interconnected by four links. Each of these links has four circuits, so that each
link can support four simultaneous connections.
The hosts are each directly connected to one of the switches. When two hosts
want to communicate, the network establishes a dedicated end-to-end connection
between the two hosts.
Fig: A simple circuit-switched network consisting of four switches and four links
For example: When end system A needs to communicate with end system B, system
A needs to request a connection to B that must be accepted by all switches as well
as by B itself. This is called the setup phase; after the dedicated path made of
connected circuits (channels) is established, the data-transfer phase can take place.
After all data have been transferred, the circuits are torn down.
In circuit switching, the resources need to be reserved during the setup phase; the
resources remain dedicated for the entire duration of data transfer until the
teardown phase.
With FDM, the frequency spectrum of a link is divided up among the connections
established across the link.
The link dedicates a frequency band to each connection for the duration of the
connection. The width of the band is called, not surprisingly, the bandwidth.
FM radio stations also use FDM to share the frequency spectrum between 88
MHz and 108 MHz, with each station being allocated a specific frequency band.
For a TDM link, time is divided into frames of fixed duration, and each frame is
divided into a fixed number of time slots. When the network establishes a
connection across a link, the network dedicates one time slot in every frame to this
connection.
Figure shows FDM and TDM for a specific network link supporting up to four circuits.
For FDM, the frequency domain is segmented into four bands, each of bandwidth 4
kHz. For TDM, the time domain is segmented into frames, with four time slots in
each frame.
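The per-circuit rate under TDM is the link rate divided by the number of slots per frame; the link rate, slot count, file size, and setup time below are assumed for illustration, not taken from the text.

```python
# Under TDM, each circuit gets one slot per frame, so its rate is the
# link rate divided by the slots per frame. All numbers are illustrative.
link_rate = 1_536_000        # link transmission rate, bits/sec (assumed)
slots_per_frame = 24         # time slots in each frame (assumed)
circuit_rate = link_rate / slots_per_frame  # rate seen by one circuit

file_bits = 640_000          # file to send over the circuit (assumed)
setup_time = 0.5             # circuit setup phase, seconds (assumed)
total_time = setup_time + file_bits / circuit_rate

print(circuit_rate)  # 64000.0 bits/sec per circuit
print(total_time)    # 10.5 seconds
```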
Types of Delay: Along the end-to-end route between source and destination, a packet is
sent from the upstream node through router A to router B. Our goal is to characterize
the nodal delay at router A.
i) Processing Delay: The time required to examine the packet’s header and
determine where to direct the packet is part of the processing delay.
The processing delay can also include other factors, such as the time needed to
check for bit-level errors in the packet that occurred in transmitting the packet’s bits
from the upstream node to router A.
Processing delays in high-speed routers are typically on the order of
microseconds or less. After this nodal processing, the router directs the packet to
the queue that precedes the link to router B.
ii) Queuing Delay: At the queue, the packet experiences a queuing delay as it waits
to be transmitted onto the link.
The length of the queuing delay of a specific packet will depend on the number of
earlier-arriving packets that are queued and waiting for transmission onto the link.
If the queue is empty and no other packet is currently being transmitted, then our
packet’s queuing delay will be zero.
On the other hand, if the traffic is heavy and many other packets are also waiting to
be transmitted, the queuing delay will be long.
Queuing delays can be on the order of microseconds to milliseconds in practice.
iii) Transmission Delay: This is the amount of time required to push (that is, transmit)
all of the packet’s bits into the link.
Denote the length of the packet by L bits, and denote the transmission rate of the
link from router A to router B by R bits/sec.
For example, for a 10 Mbps Ethernet link, the rate is R = 10 Mbps; for a 100 Mbps
Ethernet link, the rate is R = 100 Mbps.
The transmission delay is L/R. Transmission delays are typically on the order of
microseconds to milliseconds in practice.
iv) Propagation Delay: Once a bit is pushed into the link, it needs to propagate to
router B. The time required to propagate from the beginning of the link to router B is
the propagation delay.
The bit propagates at the propagation speed of the link. The propagation speed
depends on the physical medium of the link (that is, fiber optics, twisted-pair copper
wire, and so on)
The propagation delay is the distance between two routers divided by the
propagation speed. That is, the propagation delay is d/s, where d is the distance
between router A and router B and s is the propagation speed of the link.
Propagation delays are on the order of milliseconds.
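The four components above combine into the total nodal delay, d_nodal = d_proc + d_queue + d_trans + d_prop. A numeric sketch with assumed values:

```python
def nodal_delay(d_proc, d_queue, L, R, dist, s):
    """Total delay at one router: processing + queuing +
    transmission (L/R) + propagation (distance/speed)."""
    d_trans = L / R      # time to push all L bits onto the link
    d_prop = dist / s    # time for a bit to travel to the next router
    return d_proc + d_queue + d_trans + d_prop

# Assumed values: 8,000-bit packet, 10 Mbps link, 2,500 km of fiber
# at 2.5e8 m/s, 2 microseconds of processing, empty queue.
print(nodal_delay(2e-6, 0.0, 8_000, 10_000_000, 2_500_000, 2.5e8))
```

Here propagation (10 ms) dominates transmission (0.8 ms), which is typical for long links with fast routers.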
As the traffic intensity approaches 1, the average queuing delay increases rapidly.
A small percentage increase in the intensity will result in a much larger
percentage-wise increase in delay.
Packet Loss: We have assumed that the queue is capable of holding an infinite
number of packets. In reality a queue preceding a link has finite capacity, although
the queuing capacity greatly depends on the router design and cost. Because the
queue capacity is finite, packet delays do not really approach infinity as the traffic
intensity approaches 1. Instead, a packet can arrive to find a full queue.
With no place to store such a packet, a router will drop that packet; that is, the
packet will be lost.
End-to-End Delay: Nodal delay is the delay at a single router. Let’s now consider
the total delay from source to destination. Suppose there are N - 1 routers between
the source host and the destination host, so a packet crosses N links. If queuing
delay is negligible and the delays at each node are identical, the nodal delays
accumulate to give an end-to-end delay of d_end-to-end = N(d_proc + d_trans + d_prop).
Example: Figure (a) shows two end systems, a server and a client, connected by two
communication links and a router. Consider the throughput for a file transfer from
the server to the client.
Let Rs denote the rate of the link between the server and the router; and Rc denote
the rate of the link between the router and the client.
For this simple two-link network, the throughput is min{Rc, Rs}, that is, it is the
transmission rate of the bottleneck link. Having determined the throughput, we can
now approximate the time it takes to transfer a large file of F bits from server to
client as F/min{Rs, Rc}.
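The bottleneck-throughput result above can be sketched directly; the file size and link rates below are assumed for illustration.

```python
def file_transfer_time(F, Rs, Rc):
    """Time to move F bits through a server link of rate Rs and a client
    link of rate Rc: the bottleneck link sets the throughput min(Rs, Rc)."""
    return F / min(Rs, Rc)

# Assumed numbers: 32 Mbit file, Rs = 2 Mbps, Rc = 1 Mbps.
print(file_transfer_time(32_000_000, 2_000_000, 1_000_000))  # 32.0 seconds
```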
5. Reference Models
Layered Architecture or Protocol Layering
A layered architecture allows us to discuss a well-defined, specific part of a large and
complex system. In layered architecture of network model, one whole network
process is divided into small tasks. Each small task is then assigned to a particular
layer which works dedicatedly to process the task only. Every layer does only specific
work.
Each layer provides services to the layer above it and uses the services of the
layer below it.
In data communication and networking, a protocol defines the rules that both the
sender and receiver and all intermediate devices need to follow to be able to
communicate effectively. When communication is simple, we may need only one
simple protocol; when the communication is complex, we may need to divide the
task between different layers, in which case we need a protocol at each layer, or
protocol layering.
Scenarios
Let us develop two simple scenarios to better understand the need for protocol
layering.
First Scenario: In the first scenario, communication is so simple that it can occur in
only one layer. Assume Maria and Ann are neighbors with a lot of common ideas.
Communication between Maria and Ann takes place in one layer.
Second Scenario: In the second scenario, we assume that Ann is offered a higher-
level position in her company, but needs to move to another branch located in a city
very far from Maria. The two friends still want to continue their communication and
exchange ideas by using protocol layering.
Fig: multiple protocols layering
Two models have been devised to define computer network operations: the TCP/IP
protocol suite and the OSI model. The protocol layering is used in both models.
Reference Models
i) OSI model
The OSI model is based on a proposal developed by the International Organization
for Standardization (ISO). The model is called the ISO OSI (Open Systems
Interconnection) model, and it allows different systems to communicate.
An open system is a set of protocols that allows any two different systems to
communicate regardless of their underlying architecture. The purpose of the OSI
model is to show how to facilitate communication between different systems
without requiring changes to the logic of the underlying hardware and software.
The OSI model is a layered framework for the design of network systems that allows
communication between all types of computer systems.
It consists of seven layers: 1. Physical Layer, 2. Data Link Layer, 3. Network Layer,
4. Transport Layer, 5. Session Layer, 6. Presentation Layer, 7. Application Layer.
Fig: The interaction between layers in the OSI model
i) Physical Layer: The physical layer is responsible for movement of individual bits
from one node to the next.
The physical layer required to carry a bit stream over a physical medium.
It deals with the mechanical and electrical specifications of the interface and
transmission medium.
ii) Data Link Layer: The data link layer is responsible for moving frames from one
node to the next.
Frame: A frame is a series of bits that forms a unit of data.
iii) Network Layer: The network layer is responsible for the source-to-destination
delivery of a packet, possibly across multiple networks (links).
The network layer is responsible for the delivery of individual packets from the
source host to the destination host.
Responsibilities of the network layer
Logical addressing: Addressing system to help to differentiate the source and
destination systems. The network layer adds a header to the packet coming from the
upper layer that includes the logical addresses of the sender and receiver.
Routing: When independent networks or links are connected to create internetworks
(network of networks) or a large network, the connecting devices (called routers or
switches) route or switch the packets to their final destination.
v) Session Layer: The session layer is responsible for dialog control and
synchronization. The session layer is the network dialog controller. It establishes,
maintains, and synchronizes the interaction among communicating systems
Responsibilities of the session layer:
Dialog control: The session layer allows two systems to enter into a dialog. It allows
the communication between two processes to take place in either half-duplex (one
way at a time) or full-duplex (two ways at a time) mode.
Synchronization: The session layer allows a process to add checkpoints, or
synchronization points, to a stream of data.
For example, if a system is sending a file of 2000 pages, it is advisable to insert
checkpoints after every 100 pages to ensure that each 100-page unit is received and
acknowledged independently. In this case, if a crash happens during the
transmission of page 523, the only pages that need to be resent after system
recovery are pages 501 to 523. Pages previous to 501 need not be resent.
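The 2000-page example above can be sketched directly; the helper name is hypothetical.

```python
def pages_to_resend(crash_page, checkpoint_every):
    """Only pages after the last acknowledged checkpoint must be resent."""
    last_checkpoint = (crash_page - 1) // checkpoint_every * checkpoint_every
    return list(range(last_checkpoint + 1, crash_page + 1))

# Crash during page 523 with checkpoints every 100 pages:
resend = pages_to_resend(523, 100)
print(resend[0], resend[-1], len(resend))  # 501 523 23
```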
vii) Application Layer: The application layer is responsible for providing services to
the user.
The application layer enables the user, whether human or software, to access the
network. It provides user interfaces and support for services such as electronic mail,
remote file access and transfer, shared database management, and other types of
distributed information services.
Services provided by the application layer:
Network virtual terminal: A network virtual terminal is a software version of a
physical terminal, and it allows a user to log on to a remote host.
File transfer, access, and management: This application allows a user to access
files in a remote host (to make changes or read data), to retrieve files from a remote
computer for use in the local computer, and to manage or control files in a remote
computer locally.
Mail services: This application provides the basis for e-mail forwarding and storage.
Directory services: This application provides distributed database sources and
access for global information about various objects and services.
ii) TCP/IP protocol suite
Physical Layer: We can say that the physical layer is responsible for carrying
individual bits in a frame across the link. The physical layer is the lowest level in the
TCP/IP protocol suite. There is a hidden layer, the transmission media, under the
physical layer. Two devices are connected by a transmission medium (cable or
air).The transmission medium does not carry bits; it carries electrical or optical
signals.
Data-Link Layer: The data-link layer is responsible for taking the datagram and
moving it across the link. The link can be a wired LAN with a link-layer switch, a
wireless LAN, a wired WAN, or a wireless WAN. We can also have different protocols
used with any link type.
TCP/IP does not define any specific protocol for the data-link layer; it supports all
the standard and proprietary protocols, including HDLC and PPP.
Network Layer: The main protocol in the network layer is the Internet Protocol (IP),
which defines the format of the packet, called a datagram at this layer. IP also
defines the format and the structure of addresses used in this layer.
The network layer also has some auxiliary protocols that help IP in its delivery and
routing tasks. The Internet Control Message Protocol (ICMP) helps IP to report
some problems when routing a packet. The Internet Group Management Protocol
(IGMP) is another protocol that helps IP in multicasting. The Dynamic Host
Configuration Protocol (DHCP) helps IP to get the network-layer address for a host.
The Address Resolution Protocol (ARP) is a protocol that helps IP to find the link-
layer address of a host or a router when its network-layer address is given.
Transport Layer: The logical connection at the transport layer is also end-to-end.
The transport layer at the source host gets the message from the application layer,
encapsulates it in a transport layer packet called a segment or a user datagram.
The main protocol, Transmission Control Protocol (TCP), is a connection-oriented
protocol that first establishes a logical connection between transport layers at two
hosts before transferring data. It creates a logical pipe between two TCPs for
transferring a stream of bytes.
The other common protocol, User Datagram Protocol (UDP), is a connectionless
protocol that transmits user datagrams without first creating a logical connection. In
UDP, each user datagram is an independent entity without being related to the
previous or the next one.
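UDP's connectionless delivery can be sketched with Python's standard socket API over the loopback interface; the port is chosen by the OS and the payload is arbitrary.

```python
import socket

# Receiver: a UDP socket bound to an ephemeral loopback port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]

# Sender: no connection establishment -- each datagram stands alone.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)
print(data)  # b'hello'
tx.close()
rx.close()
```

TCP, by contrast, would require a `connect`/`accept` handshake before any data moves, which is the connection setup the text describes.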
Application Layer: The logical connection between the two application layers is
end to-end. The two application layers exchange messages between each other as
though there were a bridge between the two layers.
The Hypertext Transfer Protocol (HTTP) is a vehicle for accessing the World Wide
Web (WWW). The Simple Mail Transfer Protocol (SMTP) is the main protocol used
in electronic mail (e-mail) service. The File Transfer Protocol (FTP) is used for
transferring files from one host to another. The Terminal Network (TELNET) and
Secure Shell (SSH) are used for accessing a site remotely. The Simple Network
Management Protocol (SNMP) is used by an administrator to manage the Internet at
global and local levels. The Domain Name System (DNS) is used by other protocols
to find the network-layer address of a computer.
6. Transmission Media
Introduction: Transmission media are actually located below the physical layer and
are directly controlled by the physical layer. We could say that transmission media
belong to layer zero. Figure shows the position of transmission media in relation to
the physical layer.
Transmission media can be divided into two broad categories: guided and unguided.
Guided media include twisted-pair cable, coaxial cable, and fiber-optic cable.
Unguided medium is free space.
Twisted-Pair Cable: A twisted pair consists of two insulated copper conductors
twisted together. One of the wires is used to carry signals to the receiver, and the other is used only as
a ground reference. The receiver uses the difference between the two. In addition to
the signal sent by the sender on one of the wires, interference (noise) and crosstalk
may affect both wires and create unwanted signals.
Unshielded Versus Shielded Twisted-Pair Cable: The most common twisted-pair
cable used in communications is referred to as unshielded twisted-pair (UTP).
IBM has also produced a version of twisted-pair cable for its use, called shielded
twisted-pair (STP). STP cable has a metal foil or braided mesh covering that encases
each pair of insulated conductors. Although metal casing improves the quality of
cable by preventing the penetration of noise or crosstalk, it is bulkier and more
expensive.
Applications: Twisted-pair cables are used in telephone lines to provide voice and
data channels.
Local-area networks also use twisted-pair cables.
Coaxial Cable: Coaxial cable (or coax) carries signals of higher frequency
ranges than those in twisted pair cable, in part because the two media are
constructed quite differently. Instead of having two wires, coax has a central core
conductor of solid or stranded wire (usually copper) enclosed in an insulating sheath,
which is, in turn, encased in an outer conductor of metal foil, braid, or a combination
of the two.
The outer metallic wrapping serves both as a shield against noise and as the second
conductor, which completes the circuit. This outer conductor is also enclosed in an
insulating sheath, and the whole cable is protected by a plastic cover.
Applications Coaxial cable was widely used in analog telephone networks where a
single coaxial network could carry 10,000 voice signals.
Later it was used in digital telephone networks where a single coaxial cable could
carry digital data up to 600 Mbps.
However, coaxial cable in telephone networks has largely been replaced today
with fiber optic cable.
Cable TV networks also use coaxial cables. Later, however, cable TV providers
replaced most of the media with fiber-optic cable.
Another common application of coaxial cable is in traditional Ethernet LANs.
Because of its high bandwidth, and consequently high data rate, coaxial cable was
chosen for digital transmission in early Ethernet LANs.
Fig: Bending of light ray
Optical fibers use reflection to guide light through a channel. A glass or plastic core
is surrounded by a cladding of less dense glass or plastic. The difference in density
of the two materials must be such that a beam of light moving through the core is
reflected off the cladding instead of being refracted into it.
Cable Composition: Figure shows the composition of a typical fiber-optic cable. The
outer jacket is made of Teflon. Inside the jacket are Kevlar strands to strengthen the
cable. Below the Kevlar is another plastic coating to cushion the fiber. The fiber is at
the center of the cable, and it consists of cladding and core.
Unguided signals can travel from the source to the destination in several ways:
ground propagation, sky propagation, and line-of-sight propagation, as shown in
the figure.
In ground propagation, radio waves travel through the lowest portion of the
atmosphere, hugging the earth. These low-frequency signals emanate in all
directions from the transmitting antenna and follow the curvature of the planet.
In sky propagation, higher-frequency radio waves radiate upward into the ionosphere
(the layer of atmosphere where particles exist as ions) where they are reflected back
to earth. This type of transmission allows for greater distances with lower output
power.
In line-of-sight propagation, very high-frequency signals are transmitted in straight
lines directly from antenna to antenna.
Radio Waves: The electromagnetic waves ranging in frequencies between 3
kHz and 1 GHz are normally called radio waves.
Radio waves are omnidirectional. When an antenna transmits radio waves, they
are propagated in all directions. A sending antenna sends waves that can be
received by any receiving antenna.
Radio waves, particularly those waves that propagate in the sky mode, can travel
long distances. This makes radio waves a good candidate for long-distance
broadcasting such as AM radio.
Radio waves, particularly those of low and medium frequencies, can penetrate walls. This is an advantage because, for example, an AM radio can receive signals inside a building.
Omnidirectional Antenna Radio waves use omnidirectional antennas that send out
signals in all directions. Figure shows an omnidirectional antenna.
Fig: Omnidirectional antenna
Applications: Microwaves, due to their unidirectional properties, are very useful when unicast (one-to-one) communication is needed between the sender and the receiver.
They are used in cellular phones, satellite networks, and wireless LANs.
Infrared: Infrared waves, with frequencies from 300 GHz to 400 THz, can be used for short-range communication. Infrared waves, having high frequencies, cannot penetrate walls.
This advantageous characteristic prevents interference between one system and
another; a short-range communication system in one room cannot be affected by
another system in the next room. When we use our infrared remote control, we do
not interfere with the use of the remote by our neighbors.
Infrared signals can be used for short-range communication in a closed area using line-of-sight propagation.
7. Example Networks
i) Internet
A Brief History: A network is a group of connected communicating devices such as computers and printers. The Internet is a collaboration of hundreds of thousands of interconnected networks.
Private individuals as well as various organizations such as government agencies,
schools, research facilities, corporations, and libraries in more than 100 countries
use the Internet. Millions of people are users. This extraordinary communication system came into being only in 1969.
By 1969, ARPANET was a reality. Four nodes, at four universities, were connected via the IMPs to form a network. Software called the Network Control Protocol (NCP) provided communication between the hosts.
In 1972, Vint Cerf and Bob Kahn, both of whom were part of the core ARPANET
group, collaborated on what they called the Internetting Project.
Cerf and Kahn's landmark 1973 paper outlined the protocols to achieve end-to-end
delivery of packets. This paper on Transmission Control Protocol (TCP) included
concepts such as encapsulation, the datagram, and the functions of a gateway.
After that, TCP was split into two protocols: the Transmission Control Protocol (TCP) and the Internetworking Protocol (IP).
The Internet Today: Today most end users who want Internet connection use the
services of Internet service providers (ISPs). There are international service providers,
national service providers, regional service providers, and local service providers.
802.11 networks are made up of clients, such as laptops and mobile phones, and
infrastructure called APs (access points) that is installed in buildings. Access points
are sometimes called base stations. The access points connect to the wired
network, and all communication between clients goes through an access point.
UNIT-2
The Data Link Layer, Access Networks, and LANs
Fig: Frame
Header: The header consists of control information whose role is to guide the whole frame to its correct destination. The frame header includes a source and destination address field, a physical link control field, a flow control field, a congestion control field, etc.
Trailer: The data-link layer also adds a trailer at the end of each frame. The trailer is responsible for ensuring that frames are received intact or undamaged.
(c) This type of service provides additional reliability because the source machine retransmits a frame if it does not receive an acknowledgment for it within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.
Frame Size
Frames can be of fixed or variable size. In fixed-size framing, there is no need for
defining the boundaries of the frames; the size itself can be used as a delimiter. In
variable-size framing, we need a way to define the end of one frame and the
beginning of the next.
Two approaches were used for this purpose: a character-oriented approach and a
bit-oriented approach.
Character-oriented framing was popular when only text was exchanged by the data-
link layers. The flag could be selected to be any character not used for text
communication.
Now we send other types of information such as graphs, audio, and video; any
character used for the flag could also be part of the information.
To solve this problem, a byte-stuffing strategy was added to character-oriented framing. Byte stuffing is the process of adding one extra byte whenever there is a flag or escape character in the text. The data section is stuffed with an extra byte, usually called the escape character (ESC), which has a predefined bit pattern.
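As a rough illustration, the byte-stuffing rule can be sketched as follows; the flag and escape byte values (0x7E and 0x7D) are assumed here, since the text leaves the exact patterns open:

```python
FLAG = 0x7E  # assumed flag byte; any value not needed for data works
ESC = 0x7D   # assumed escape byte

def byte_stuff(data: bytes) -> bytes:
    """Sender: insert an extra ESC byte before any flag or escape byte."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)  # the stuffed byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver: the byte following an ESC is taken as literal data."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        out.append(next(it) if b == ESC else b)
    return bytes(out)
```

With these definitions, byte_unstuff(byte_stuff(data)) returns the data unchanged even when the payload contains the flag or escape values.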
If the flag pattern appears in the data, we need to somehow inform the receiver that this is not the end of the frame. We do this by stuffing a single bit to prevent the pattern from looking like a flag. The strategy is called bit stuffing. In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added.
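The bit-stuffing rule (insert a 0 after every five consecutive 1s) can be sketched as:

```python
def bit_stuff(bits: str) -> str:
    """Sender: after every run of five consecutive 1s, stuff an extra 0."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        if b == '1':
            ones += 1
            if ones == 5:
                out.append('0')  # stuffed bit
                ones = 0
        else:
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: the bit following five consecutive 1s is removed."""
    out, ones, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        if bits[i] == '1':
            ones += 1
            if ones == 5:
                i += 1  # skip the stuffed 0
                ones = 0
        else:
            ones = 0
        i += 1
    return ''.join(out)
```

For example, bit_stuff("0111111") gives "01111101": the stuffed 0 breaks up the run of 1s so it can never be mistaken for a flag.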
Fig: Bit stuffing and unstuffing
Flow and Error Control: One of the responsibilities of the data-link control
sublayer is flow and error control at the data-link layer.
Flow Control: Whenever an entity produces items and another entity consumes them,
there should be a balance between production and consumption rates. If the items
are produced faster than they can be consumed, the consumer can be overwhelmed
and may need to discard some items. We need to prevent losing the data items at
the consumer site.
Buffers
Flow control can be implemented in several ways, one of the solutions is normally to
use two buffers; one at the sending data-link layer and the other at the receiving data
-link layer.
A buffer is a set of memory locations that can hold packets at the sender and
receiver. The flow control communication can occur by sending signals from the
consumer to the producer. When the buffer of the receiving data-link layer is full, it
informs the sending data-link layer to stop pushing frames.
Error Control: We need to implement error control at the data-link layer to prevent the receiving node from delivering corrupted packets to its network layer. Error control at
the data-link layer is normally very simple and implemented using one of the
following two methods.
In both methods, a CRC is added to the frame header by the sender and checked by
the receiver.
We divide our message into blocks, each of d bits, called datawords. We add r redundant bits to each block to make the length n = d + r. The resulting n-bit blocks are called codewords. The question is how the extra r bits are chosen or calculated.
Fig: One-bit even parity
Receiver operation is also simple with a single parity bit. The receiver need only
count the number of 1s in the received d + 1 bits. If an odd number of 1-valued bits are found with an even parity scheme, the receiver knows that at least one bit error
has occurred.
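A minimal sketch of the single-parity-bit scheme described above:

```python
def even_parity_bit(data_bits: str) -> str:
    """Sender: choose the parity bit so the total number of 1s is even."""
    return str(data_bits.count('1') % 2)

def receiver_detects_error(codeword: str) -> bool:
    """Receiver: an odd number of 1s means at least one bit was flipped."""
    return codeword.count('1') % 2 == 1
```

Appending even_parity_bit(d) to the dataword d always yields a codeword with an even number of 1s, so flipping any single bit makes the receiver's count come out odd.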
Figure shows a possible structure of an encoder (at the sender) and a decoder (at
the receiver).
Suppose now that a single bit error occurs in the original d bits of information. With
this two-dimensional parity scheme, the parity of both the column and the row
containing the flipped bit will be in error. The receiver can thus not only detect the
fact that a single bit error has occurred, but can use the column and row indices of
the column and row with parity errors to actually identify the bit that was corrupted
and correct that error.
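A sketch of the two-dimensional check: given a bit matrix in which every row and every column (including the parity row and column) should have even parity, a single flipped bit is located by the unique failing row and failing column:

```python
def locate_single_error(rows):
    """rows: equal-length bit strings, parity row/column included.
    Returns (row, col) of a single flipped bit, or None if all checks pass."""
    bad_rows = [i for i, r in enumerate(rows) if r.count('1') % 2]
    bad_cols = [j for j in range(len(rows[0]))
                if sum(r[j] == '1' for r in rows) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None  # no error, or more than a single-bit error
```

For example, the matrix ["101", "110", "011"] passes every check; flipping the bit at row 0, column 1 makes exactly one row and one column fail, which pinpoints the corrupted bit.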
Fig: Two-dimensional even parity
The cyclic redundancy check (CRC), which is used in networks such as LANs and
WANs.
CRC encoder
In the encoder, the dataword has k bits (4 here); the codeword has n bits (7 here).
The size of the dataword is increased by adding n − k (3 here) 0s to the right-hand
side of the word.
The n-bit result is fed to the generator. The generator uses a predefined divisor of size n − k + 1 (4 here).
The generator divides the increased dataword by the divisor (modulo-2 division).
The quotient of the division is discarded; the remainder (r2r1r0) is appended to the
dataword to create the codeword.
CRC decoder
The decoder receives the codeword. A copy of all n bits is fed to the checker,
which is a copy of the generator.
The decision logic is simple. If the syndrome bits are all 0s, the 4 leftmost bits of the codeword are accepted as the dataword (interpreted as no error); otherwise, the 4 bits are discarded (an error has been detected).
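The modulo-2 division performed by the generator and the checker can be sketched as follows, using the 4-bit dataword and 4-bit divisor sizes from the text:

```python
def mod2_div_remainder(bits: str, divisor: str) -> str:
    """Binary (XOR) long division; returns the len(divisor)-1 bit remainder."""
    b = list(bits)
    for i in range(len(b) - len(divisor) + 1):
        if b[i] == '1':
            for j, d in enumerate(divisor):
                b[i + j] = '0' if b[i + j] == d else '1'  # XOR step
    return ''.join(b[-(len(divisor) - 1):])

def crc_codeword(dataword: str, divisor: str) -> str:
    """Encoder: append n-k zeros, divide, replace the zeros by the remainder."""
    padded = dataword + '0' * (len(divisor) - 1)
    return dataword + mod2_div_remainder(padded, divisor)

def crc_syndrome_ok(codeword: str, divisor: str) -> bool:
    """Checker: an all-zero remainder (syndrome) means accept the dataword."""
    return set(mod2_div_remainder(codeword, divisor)) == {'0'}
```

With dataword 1001 and divisor 1011 the remainder is 110, so the codeword is 1001110; flipping any single bit of that codeword makes the syndrome nonzero.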
Example:
Encoder
Decoder:
Polynomials: A better way to understand cyclic codes and how they can be
analyzed is to represent them as polynomials. A pattern of 0s and 1s can be
represented as a polynomial with coefficients of 0 and 1. The power of each term
shows the position of the bit; the coefficient shows the value of the bit.
Figure shows a binary pattern and its polynomial representation. In Figure a we show
how to translate a binary pattern into a polynomial; in Figure b we show how the
polynomial can be shortened by removing all terms with zero coefficients and
replacing x1 by x and x0 by 1.
Fig: A polynomial to represent a binary word
Degree of a Polynomial The degree of a polynomial is the highest power in the
polynomial. For example, the degree of the polynomial x6 + x + 1 is 6.
Cyclic Code Encoder Using Polynomials: The dataword 1001 is represented as x3 + 1. The divisor 1011 is represented as x3 + x + 1. To find the augmented dataword, we have left-shifted the dataword 3 bits (multiplying by x3). The result is x6 + x3.
iii) Checksum
Checksum is an error-detecting technique that can be applied to a message of any
length.
In the checksum error-detection scheme, the data is divided into k segments, each of m bits.
In the source, the segments are added using 1’s complement arithmetic to get
the sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the destination, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise, it is discarded.
Fig: Checksum
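The sender and receiver steps listed above can be sketched with 1's-complement (end-around carry) addition; the 4-bit segment size below is just an illustrative choice:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments, wrapping any carry back into the sum."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)  # end-around carry
    return total

def make_checksum(segments, m):
    """Sender: the checksum is the complement of the 1's-complement sum."""
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

def accept(segments, checksum, m):
    """Receiver: the complemented total over data + checksum must be zero."""
    s = ones_complement_sum(segments + [checksum], m)
    return (~s) & ((1 << m) - 1) == 0
```

For the 4-bit segments 7, 11, 12, 0, 6 the 1's-complement sum is 6, so the checksum is 9; at the receiver the complemented total over all six values is 0, and the data is accepted.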
Example:
iv) Forward Error Correction
A number of methods are used for error detection and retransmission. However, retransmission of corrupted and lost packets is not useful for real-time multimedia transmission because it creates an unacceptable delay in reproducing the stream: we would need to wait until the lost or corrupted packet is resent. We need to correct the error or reproduce the packet immediately.
Several schemes have been designed and used in these cases that are collectively
referred to as forward error correction (FEC) techniques.
Hamming code example: The key to the Hamming Code is the use of extra parity
bits to allow the identification of a single error. Create the code word as follows:
a) Mark all bit positions that are powers of two as parity bits. (Positions 1, 2, 4, 8, 16,
32, 64, etc.)
b) All other bit positions are for the data to be encoded. (Positions 3, 5, 6, 7, 9, 10, 11,
12, 13, 14, 15, 17, etc.)
c) Each parity bit calculates the parity for some of the bits in the code word. The
position of the parity bit determines the sequence of bits that it alternately checks
and skips.
Parity bit 1 covers all the bit positions whose binary representation includes
a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc.).
Parity bit 2 covers all the bit positions whose binary representation includes
a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
Parity bit 4 covers all the bit positions whose binary representation includes
a 1 in the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
Parity bit 8 covers all the bit positions whose binary representation includes
a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
d) Set a parity bit to 1 if the total number of ones in the positions it checks is odd.
Set a parity bit to 0 if the total number of ones in the positions it checks is even.
Example:
Data word: 1001, we check for even parity.
P1 is calculated using bit positions 1, 3, 5, and 7. To find the parity bit
P1, we check for even parity. Since the total number of 1’s in all the bit
positions corresponding to P1 is even, the parity bit’s value = 0.
P2 is calculated using bit positions 2, 3, 6, and 7. Since the total
number of 1’s in all the bit positions corresponding to P2 is even, the
parity bit’s value = 0.
P4 is calculated using bit positions 4, 5, 6, and 7. Since the total
number of 1’s in all the bit positions corresponding to P4 is odd, the
parity bit’s value = 1.
Thus, the data transferred is:
Error detection and correction: Suppose in the above example the 3rd bit is changed
from 1 to 0 during data transmission, then it gives new parity values in the binary
number:
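The (7,4) Hamming procedure worked through above can be sketched as follows; positions 1, 2, and 4 carry the parity bits, and the failing checks, added together, give the position of a single flipped bit:

```python
def hamming_encode(data: str) -> str:
    """Encode 4 data bits d1..d4 into positions 3, 5, 6, 7 (even parity)."""
    d1, d2, d3, d4 = (int(b) for b in data)
    p1 = (d1 + d2 + d4) % 2  # checks positions 1, 3, 5, 7
    p2 = (d1 + d3 + d4) % 2  # checks positions 2, 3, 6, 7
    p4 = (d2 + d3 + d4) % 2  # checks positions 4, 5, 6, 7
    return f"{p1}{p2}{d1}{p4}{d2}{d3}{d4}"

def hamming_correct(codeword: str) -> str:
    """Recompute each check; the sum of failing parity positions locates the error."""
    bits = [int(b) for b in codeword]  # bits[0] is position 1
    syndrome = 0
    for p in (1, 2, 4):
        covered = [pos for pos in range(1, 8) if pos & p]
        if sum(bits[pos - 1] for pos in covered) % 2:
            syndrome += p
    if syndrome:
        bits[syndrome - 1] ^= 1  # flip the erroneous bit back
    return ''.join(map(str, bits))
```

For the dataword 1001 this gives P1 = 0, P2 = 0, P4 = 1, matching the values computed above. Flipping position 3 makes checks 1 and 2 fail, so the syndrome is 1 + 2 = 3 and the receiver corrects position 3.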
Simplest Protocol: Our first protocol, which we call the Simplest, is one that has
no flow or error control. It is a unidirectional protocol in which data frames are
traveling in only one direction-from the sender to receiver.
We assume that the receiver can immediately handle any frame it receives, with negligible processing time. The data link layer of the receiver immediately removes the header from the frame and hands the data packet to its network layer, which can also accept the packet immediately.
Design:
Fig: The design of the simplest protocol with no flow or error control
Flow diagram:
Stop-and-Wait: The receiver does not have enough storage space, especially if it
is receiving data from many sources. This may result in either the discarding of
frames or denial of service.
To prevent the receiver from becoming overwhelmed with frames, we somehow
need to tell the sender to slow down. There must be feedback from the receiver to
the sender.
The protocol we discuss now is called the Stop-and-Wait Protocol because the
sender sends one frame, stops until it receives confirmation from the receiver, and
then sends the next frame.
We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel in the other direction. We add flow control to the protocol.
Design:
Flow diagram:
Sliding Window Protocols (Noisy Channels): Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its predecessor, noiseless channels are nonexistent. We either have to ignore the errors or add error control to our protocols. We discuss three protocols in this section that use error control.
To detect and correct corrupted frames, we need to add redundancy bits to our data
frame. When the frame arrives at the receiver site, it is checked and if it is corrupted,
it is silently discarded.
The corrupted and lost frames need to be resent in this protocol. The sender keeps a copy of the sent frame. At the same time, it starts a timer.
there is no ACK for the sent frame, the frame is resent, the copy is held, and the timer
is restarted.
Sequence Numbers: the protocol specifies that frames need to be numbered. This is
done by using sequence numbers. A field is added to the data frame to hold the
sequence number of that frame.
Acknowledgment Numbers Since the sequence numbers must be suitable for both
data frames and ACK frames, we use this convention: The acknowledgment
numbers always announce the sequence number of the next frame expected by the
receiver. For example, if frame 0 has arrived safe and sound, the receiver sends an
ACK frame with acknowledgment 1.
Design:
Sliding Window: In this protocol, the sliding window is an abstract concept that
defines the range of sequence numbers that is the concern of the sender and
receiver. In other words, the sender and receiver need to deal with only part of the
possible sequence numbers.
For the send window, the maximum size of the window is 2^m − 1. The receive window makes sure that the correct data frames are received and that the correct acknowledgments are sent. The size of the receive window is always 1.
Design:
Selective Repeat Automatic Repeat Request: Go-Back-N ARQ simplifies the
process at the receiver site. The receiver keeps track of only one variable, and there
is no need to buffer out-of-order frames; they are simply discarded. However, this
protocol is very inefficient for a noisy link. In a noisy link a frame has a higher
probability of damage, which means the resending of multiple frames.
For noisy links, there is another mechanism that does not resend N frames when just
one frame is damaged; only the damaged frame is resent. This mechanism is called
Selective Repeat ARQ.
Windows: The Selective Repeat Protocol also uses two windows: a send window and
a receive window. The receive window is the same size as the send window.
Design:
Flow diagram:
4. Multiple Access Links and Protocols
When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.
The problem of controlling the access to the medium is similar to the rules of
speaking in an assembly.
Many protocols have been devised to handle access to a shared link. All of these
protocols belong to a sublayer in the data-link layer called media access control
(MAC). We categorize them into three groups, as shown in Figure.
In a random-access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there
is an access conflict—collision—and the frames will be either destroyed or modified.
ALOHA
ALOHA, the earliest random access method, was developed at the University of
Hawaii in early 1970. The medium is shared between the stations. When a station
sends data, another station may attempt to do so at the same time. The data from
the two stations collide and become distorted.
Pure ALOHA
The original ALOHA protocol is called pure ALOHA. This is a simple but well-
designed protocol.
The idea is that each station sends a frame whenever it has a frame to send
(multiple access). However, since there is only one channel to share, there is the
possibility of collision between frames from different stations. Figure shows an
example of frame collisions in pure ALOHA.
There are four stations (unrealistic assumption) that contend with one another for
access to the shared channel. The figure shows that each station sends two frames;
there are a total of eight frames on the shared medium. Some of these frames
collide because multiple frames are in contention for the shared channel. Figure
shows that only two frames survive: one frame from station 1 and one frame from
station 3.
Pure ALOHA has a method to prevent congesting the channel with retransmitted
frames. After a maximum number of retransmission attempts Kmax, a station must
give up and try later.
Slotted ALOHA
A station may send soon after another station has started or just before another
station has finished. Slotted ALOHA was invented to improve the efficiency of pure
ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the station
to send only at the beginning of the time slot. Figure shows an example of frame
collisions in slotted ALOHA.
Fig: Frames in a slotted ALOHA network
Because a station is allowed to send only at the beginning of the synchronized time
slot, if a station misses this moment, it must wait until the beginning of the next time
slot. This means that the station which started at the beginning of this slot has
already finished sending its frame.
CSMA
To minimize the chance of collision and, therefore, increase the performance, the
CSMA method was developed. The chance of collision can be reduced if a station
senses the medium before trying to use it.
Carrier sense multiple access (CSMA) requires that each station first listen to the
medium (or check the state of the medium) before sending. In other words, CSMA is
based on the principle “sense before transmit”.
CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for
this is shown in Figure, a space and time model of a CSMA network. Stations are
connected to a shared channel.
At time t1, station B senses the medium and finds it idle, so it sends a frame. At time
t2 (t2 >t1), station C senses the medium and finds it idle because, at this time, the
first bits from station B have not reached station C. Station C also sends a frame.
The two signals collide and both frames are destroyed.
Fig: 1-Persistent
Fig: Nonpersistent
P-Persistent: The p-persistent method is used if the channel has time slots with a
slot duration equal to or greater than the maximum propagation time. It reduces the
chance of collision and improves efficiency. In this method, after the station finds
the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 − p, the station waits for the beginning of the next time slot
and checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the backoff
procedure.
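A minimal sketch of the p-persistent decision loop described in the steps above; channel_idle and rng are stand-ins for sensing the line at the start of a slot and drawing a random number:

```python
import random

def p_persistent_send(channel_idle, p, rng=random.random):
    """Run once the station has found the line idle.
    Returns 'send' when the station transmits, 'backoff' on a busy line."""
    while True:
        if rng() < p:
            return "send"       # step 1: transmit with probability p
        # step 2: with probability q = 1 - p, wait for the next time slot
        if not channel_idle():  # step 2b: line busy -> act as if collided
            return "backoff"
        # step 2a: line still idle -> repeat from step 1
```

The loop mirrors the numbered steps: each pass through it is one time slot, and the station only leaves the loop by transmitting or by invoking the backoff procedure.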
Fig: p-Persistent
CSMA/CD
The CSMA method does not specify the procedure following a collision. Carrier
sense multiple access with collision detection (CSMA/CD) augments the algorithm
to handle the collision.
In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If so, the station is finished. If, however, there is a
collision, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two
stations involved in the collision. Although each station continues to send bits in the
frame until it detects the collision, we show what happens as the first bits collide. In
Figure, stations A and C are involved in the collision.
Procedure: Now let us look at the flow diagram for CSMA/CD in Figure.
A short jamming signal is sent to make sure that all other stations become aware of the collision.
CSMA/CA
Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for
wireless networks. Collisions are avoided through the use of CSMA/CA’s three
strategies: the interframe space, the contention window, and acknowledgments, as
shown in Figure.
Inter Frame Space (IFS). First, collisions are avoided by deferring transmission even
if the channel is found idle. When an idle channel is found, the station does not send
immediately. It waits for a period of time called the interframe space or IFS.
Contention Window. The contention window is an amount of time divided into slots.
A station that is ready to send chooses a random number of slots as its wait time.
The number of slots in the window changes according to the binary exponential
backoff strategy.
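A sketch of how the contention-window wait might be drawn; the doubling per attempt is the binary exponential backoff named above, while the cap cw_max is an assumed illustrative value:

```python
import random

def contention_wait_slots(attempt, cw_max=1023):
    """Pick a random number of slots from a window that doubles per attempt."""
    window = min(2 ** attempt, cw_max)  # window grows 1, 2, 4, 8, ... up to cw_max
    return random.randint(0, window)
```

On each unsuccessful attempt the station calls this again with attempt + 1, so repeated collisions spread the contending stations over an ever larger range of slots.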
Acknowledgment. With all these precautions, there still may be a collision resulting
in destroyed data. In addition, the data may be corrupted during the transmission.
The positive acknowledgment and the time-out timer can help guarantee that the
receiver has received the frame.
ii) Controlled Access: In controlled access, the stations consult one another to
find which station has the right to send. A station cannot send unless it has been
authorized by other stations. We discuss three controlled-access methods.
Reservation:
In the reservation method, a station needs to make a reservation before sending
data. Time is divided into intervals.
In each interval, a reservation frame precedes the data frames sent in that
interval. If there are N stations in the system, there are exactly N reservation
minislots in the reservation frame. Each minislot belongs to a station. When a station
needs to send a data frame, it makes a reservation in its own minislot.
The stations that have made reservations can send their data frames after the
reservation frame. Figure shows a situation with five stations and a five-minislot
reservation frame.
In the first interval, only stations 1, 3, and 4 have made reservations. In the
second interval, only station 1 has made a reservation.
Polling
Polling works with topologies in which one device is designated as a primary
station and the other devices are secondary stations.
All data exchanges must be made through the primary device even when the
ultimate destination is a secondary device.
The primary device controls the link; the secondary devices follow its instructions.
It is up to the primary device to determine which device is allowed to use the channel
at a given time.
This method uses poll and select functions to prevent collisions.
Select: The select function is used whenever the primary device has something to
send. Remember that the primary controls the link. If the primary is neither sending
nor receiving data, it knows the link is available.
The primary must alert the secondary to the upcoming transmission and wait for
an acknowledgment of the secondary’s ready status.
Before sending data, the primary creates and transmits a select (SEL) frame, one field of which includes the address of the intended secondary.
Poll: The poll function is used by the primary device to request transmissions from
the secondary devices.
When the primary is ready to receive data, it must ask (poll) each device in turn if
it has anything to send.
When the first secondary is approached, it responds either with a NAK frame if it
has nothing to send or with data (in the form of a data frame) if it does.
If the response is negative (a NAK frame), then the primary polls the next
secondary in the same manner until it finds one with data to send.
When the response is positive (a data frame), the primary reads the frame and
returns an acknowledgment (ACK frame), verifying its receipt.
Token Passing
In the token-passing method, the stations in a network are organized in a logical
ring. For each station, there is a predecessor and a successor.
The predecessor is the station which is logically before the station in the ring; the
successor is the station which is after the station in the ring.
The current station is the one that is accessing the channel now. The right to this
access has been passed from the predecessor to the current station.
The right will be passed to the successor when the current station has no more
data to send.
But how is the right to access the channel passed from one station to another? In
this method, a special packet called a token circulates through the ring.
Token management is needed for this access method. Stations must be limited in
the time they can have possession of the token. The token must be monitored to
ensure it has not been lost or destroyed.
Another function of token management is to assign priorities to the stations and to
the types of data being transmitted. And finally, token management is needed to
make low-priority stations release the token to high-priority stations.
FDMA
In frequency-division multiple access (FDMA), the available bandwidth is divided
into frequency bands.
Each station is allocated a band to send its data. In other words, each band is
reserved for a specific station, and it belongs to the station all the time.
Each station also uses a bandpass filter to confine the transmitter frequencies.
To prevent interference between stations, the allocated bands are separated from one another by small guard bands. Figure shows the idea of FDMA.
TDMA
In time-division multiple access (TDMA), the stations share the bandwidth of the
channel in time. Each station is allocated a time slot during which it can send data.
Each station transmits its data in its assigned time slot. Figure shows the idea
behind TDMA.
CDMA
In code-division multiple access (CDMA), one channel carries all transmissions simultaneously; each station is assigned a unique code. Let us assume we have four stations, 1, 2, 3, and 4, connected to the same channel.
The data from station 1 are d1, from station 2 are d2, and so on. The code assigned
to the first station is c1, to the second is c2, and so on.
We assume that the assigned codes have two properties.
1. If we multiply each code by another, we get 0.
2. If we multiply each code by itself, we get 4 (the number of stations).
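The two code properties can be checked directly with 4-chip Walsh codes (an assumed but conventional choice of orthogonal codes), and they are exactly what lets each receiver recover its own station's bit from the shared channel:

```python
# 4-chip Walsh codes for the four stations (assumed assignment)
c1 = [+1, +1, +1, +1]
c2 = [+1, -1, +1, -1]
c3 = [+1, +1, -1, -1]
c4 = [+1, -1, -1, +1]
codes = [c1, c2, c3, c4]

def inner(a, b):
    """'Multiplying' two codes: element-wise product, then sum."""
    return sum(x * y for x, y in zip(a, b))

def channel_signal(data):
    """Each station sends its data bit (+1 or -1) times its code;
    the channel carries the element-wise sum of all four signals."""
    return [sum(d * c[k] for d, c in zip(data, codes)) for k in range(4)]

def decode(signal, code):
    """Receiver: inner product with the station's code, divided by 4."""
    return inner(signal, code) // 4
```

Here inner(c1, c2) is 0 (property 1) and inner(c3, c3) is 4 (property 2); if the four stations send +1, −1, +1, −1 simultaneously, every receiver recovers its own station's bit.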
5. Switched Local Area Networks
Figure shows a switched local network connecting three departments, two servers
and a router with four switches. Because these switches operate at the link layer,
they switch link-layer frames.
Instead of using IP addresses, we will soon see that they use link-layer addresses to
forward link-layer frames through the network of switches.
Link-Layer Addressing
In a connectionless internetwork such as the Internet we cannot make a datagram
reach its destination using only IP addresses. The reason is that each datagram in
the Internet, from the same source host to the same destination host, may take a
different path.
The source and destination IP addresses define the two ends but cannot define
which links the datagram should pass through.
So, we need another addressing mechanism in a connectionless internetwork: the
link-layer addresses of the two nodes.
A link-layer address is sometimes called a link address, sometimes a physical
address, and sometimes a MAC address.
When a datagram passes from the network layer to the data-link layer, the datagram
will be encapsulated in a frame and two data-link addresses are added to the frame
header. These two addresses are changed every time the frame moves from one link
to another.
Address Resolution Protocol (ARP): the IP address of the next node is not helpful
in moving a frame through a link; we need the link-layer address of the next node.
This is the time when the Address Resolution Protocol (ARP) becomes helpful.
The ARP protocol is one of the auxiliary protocols defined in the network layer. It belongs to the network layer; it maps an IP (logical) address to a link-layer (physical) address.
ARP accepts an IP address from the IP protocol, maps the address to the
corresponding link-layer address, and passes it to the data-link layer.
Fig: Position of ARP in TCP/IP protocol suite
Every host or router on the network receives and processes the ARP request packet,
but only the intended recipient recognizes its IP address and sends back an ARP
response packet. The response packet contains the recipient’s IP and link-layer
addresses. The packet is unicast directly to the node that sent the request packet.
ARP Packet Format: Figure shows the format of an ARP packet. The hardware type
field defines the type of the link-layer protocol; the protocol type field defines the
network-layer protocol.
The source hardware and source protocol addresses are variable-length fields
defining the link-layer and network-layer addresses of the sender.
The destination hardware address and destination protocol address fields define the
receiver link-layer and network-layer addresses. An ARP packet is encapsulated
directly into a data-link frame.
Fig: ARP packet
Preamble. This field contains 7 bytes (56 bits) of alternating 0s and 1s that alert the
receiving system to the coming frame and enable it to synchronize its clock if it is out of synchronization. The pattern provides only an alert and a timing pulse.
Destination address (DA). This field is six bytes (48 bits) and contains the link-layer
address of the destination station or stations to receive the packet.
Source address (SA). This field is also six bytes and contains the link-layer address
of the sender of the packet.
Type. This field defines the upper-layer protocol whose packet is encapsulated in the
frame. This protocol can be IP, ARP, OSPF, and so on.
Data. This field carries data encapsulated from the upper-layer protocols. It is a
minimum of 46 and a maximum of 1500 bytes.
Cyclic redundancy check (CRC) (4 bytes). the purpose of the CRC field is to allow the
receiving adapter, adapter B, to detect bit errors in the frame.
Ethernet Technologies:
1. Standard Ethernet (10 Mbps)
2. Fast Ethernet (100 Mbps)
3. Gigabit Ethernet (1 Gbps)
4. Ten-Gigabit Ethernet (10 Gbps)
Standard Ethernet: A standard Ethernet network can transmit data at a rate of up to 10
megabits per second (10 Mbps). The Institute of Electrical and Electronics Engineers
developed an Ethernet standard known as IEEE Standard 802.3. This standard
defines rules for configuring an Ethernet network and specifies how the elements in
an Ethernet network interact with one another.
Fig: Connecting two VLAN switches with two VLANs: (a) two cables (b) trunked
Fig: A data center network with a hierarchical topology
Load Balancing
A cloud data center, such as a Google or Microsoft data center, provides many
applications concurrently, such as search, email, and video applications.
To support requests from external clients, each application is associated with a
publicly visible IP address to which clients send their requests and from which they
receive responses.
Inside the data center, the external requests are first directed to a load balancer
whose job it is to distribute requests to the hosts, balancing the load across the
hosts as a function of their current load.
A large data center will often have several load balancers, each one devoted to a
set of specific cloud applications.
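The dispatching behaviour described above can be sketched as a tiny least-loaded balancer. The class and host names are illustrative assumptions, not a description of any real data-center implementation:

```python
class LoadBalancer:
    """Minimal sketch: dispatch each request to the least-loaded host."""
    def __init__(self, hosts):
        self.load = {h: 0 for h in hosts}   # outstanding requests per host

    def dispatch(self):
        host = min(self.load, key=self.load.get)  # pick least-loaded host
        self.load[host] += 1
        return host

    def finish(self, host):
        self.load[host] -= 1                # request completed

lb = LoadBalancer(["host-1", "host-2", "host-3"])
assignments = [lb.dispatch() for _ in range(6)]
print(assignments)  # each host receives two of the six requests
```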
3. The broadcast Ethernet frame containing the DHCP request is the first
frame sent by Bob’s laptop to the Ethernet switch. The switch broadcasts
the incoming frame on all outgoing ports, including the port connected to
the router.
4. The router receives the broadcast Ethernet frame containing the DHCP
request on its interface with MAC address 00:22:6B:45:1F:1B. The IP
datagram is extracted from the Ethernet frame, the UDP segment from the
datagram, and the DHCP request message from the UDP segment. The DHCP
server now has the DHCP request message.
6. The Ethernet frame containing the DHCP ACK is sent (unicast) by the router
to the switch. Because the switch is self-learning and previously received an
Ethernet frame (containing the DHCP request) from Bob’s laptop, it forwards
the frame only on the output port leading to Bob’s laptop.
7. Bob’s laptop receives the Ethernet frame containing the DHCP ACK,
extracts the IP datagram from the Ethernet frame. At this point, Bob’s
laptop has initialized its networking components and is ready to begin
processing the Web page fetch.
11.The gateway router receives the frame containing the ARP query message
on the interface and finds that the target IP address of 68.85.2.1 in the ARP
message matches the IP address of its interface. The gateway router thus
prepares an ARP reply, indicating that its MAC address of 00:22:6B:45:1F:1B
corresponds to IP address 68.85.2.1.
12.Bob’s laptop receives the frame containing the ARP reply message and
extracts the MAC address of the gateway router (00:22:6B:45:1F:1B) from
the ARP reply message.
18.The HTTP server at www.google.com reads the HTTP GET message from
the TCP socket, creates an HTTP response message, places the requested
Web page content in the body of the HTTP response message, and sends
the message into the TCP socket.
19.The datagram containing the HTTP reply message is forwarded through the
Google network and Bob’s network, and arrives at Bob’s laptop. Bob’s Web
browser program reads the HTTP response from the socket, extracts the HTML
for the Web page from the body of the HTTP response, and finally displays the
Web page!
The network layer, or layer 3 of the OSI (Open Systems Interconnection) model, is concerned
with the delivery of data packets from the source to the destination across multiple hops or
links. It is the lowest layer that is concerned with end-to-end transmission. The designers of
this layer need to address certain issues. These issues encompass the services provided to
the upper layers as well as the internal design of the layer.
The design issues can be elaborated under four heads:
Store-and-Forward Packet Switching
Services to Transport Layer
Providing Connection-Oriented Service
Providing Connectionless Service
The transport layer needs to be shielded from the type, number, and topology of the
available routers.
The network addresses made available to the transport layer should use a uniform
numbering pattern, even across LAN and WAN connections.
Based on the connections, two types of services are provided:
Connectionless – The routing and insertion of packets into the subnet is done individually.
No added setup is required.
Connection-Oriented – The subnet must offer reliable service and all the packets must be
transmitted over a single route.
When a data packet leaves its origin, it can take one of many different paths. The router
computes the best path (least-cost path) on which to send the data.
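Least-cost path computation is commonly done with Dijkstra's algorithm; a minimal sketch over a link-cost map (the graph and node names below are made up for illustration):

```python
import heapq

def least_cost_path(graph, src, dst):
    """Dijkstra's algorithm over {node: {neighbour: link cost}}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]                      # priority queue of (cost, node)
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])      # walk predecessors back to src
    return list(reversed(path)), dist[dst]

net = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(least_cost_path(net, "A", "D"))  # (['A', 'B', 'C', 'D'], 3)
```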
What is Flooding:
Flooding is a static routing technique, based on the following principle:
“When a packet reaches the router, it is transferred to all the outgoing links, except the
link through which it reached the router.”
Flooding is used in routing protocols such as OSPF (Open Shortest Path First), peer-to-peer
file transfers, systems such as Usenet, bridging, etc. Let us have a look at an example for a
better understanding. Assume there is a network with 6 routers connected through
transmission lines, as shown in the figure ahead.
Types of Flooding:
Flooding may be of three types:
Uncontrolled flooding − Here, each router unconditionally transmits the incoming data
packets to all its neighbours.
Controlled flooding − Routers use some methods to control the transmission of packets
to the neighbouring nodes. The two popular algorithms for controlled flooding are
Sequence Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF).
Selective flooding − Here, the routers transmit the incoming packets only along
those paths which are heading approximately in the right direction, instead of
along every available path.
Characteristics of Flooding:
Following are some features of flooding:
Every possible route between the source and the destination for transmission is tried
in flooding.
There always exists a minimum of one route which is the shortest.
Any node that is connected, whether directly or indirectly, is explored.
Flooding does not require any information related to the network, such as the costs of
various paths, load conditions, topology, etc. This is why it is non-adaptive.
Advantages of Flooding:
It is very simple to set up and implement, since a router may know only its neighbours.
It is extremely robust. Even if a large number of routers malfunction, the
packets find a way to reach the destination.
All nodes which are directly or indirectly connected are visited, so there is no
chance of any node being left out. This is a main criterion in the case of broadcast
messages.
The shortest path is always chosen by flooding.
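Controlled flooding with duplicate suppression (SNCF-style) can be simulated in a few lines; each node forwards a copy of the packet only the first time it sees it. The adjacency map below is a made-up 6-router example:

```python
def flood(adjacency, source):
    """Simulate flooding with duplicate suppression: every node forwards
    the packet once to all neighbours. Returns the set of nodes reached."""
    seen = {source}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in adjacency[node]:
                if neigh not in seen:     # drop copies already seen
                    seen.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return seen

net = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4], 6: []}
print(sorted(flood(net, 1)))  # [1, 2, 3, 4, 5] — node 6 is not connected
```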
Let's understand a few key points about the distance vector routing protocol:
Network Information:
Every node in the network should have information about its neighbouring nodes. Each node
in the network is designed to share information with all the nodes in the network.
Routing Pattern:
In DVR, the data shared by a node is transmitted only to the nodes that are linked directly
to it.
Data sharing:
The nodes share information with their neighbouring nodes from time to time as the
network topology changes.
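One exchange of a distance-vector routing table is a Bellman-Ford relaxation step; a minimal sketch, with made-up router names and a table format of `dest -> (cost, next hop)`:

```python
INF = float("inf")

def dv_update(table, neighbour, neighbour_table, link_cost):
    """Adopt any route through `neighbour` that is cheaper than the
    current route. Returns True if the table changed."""
    changed = False
    for dest, (cost, _) in neighbour_table.items():
        if link_cost + cost < table.get(dest, (INF, None))[0]:
            table[dest] = (link_cost + cost, neighbour)
            changed = True
    return changed

# Hypothetical tables for routers A and B, connected by a cost-2 link.
a = {"A": (0, "-"), "B": (2, "B")}
b = {"B": (0, "-"), "C": (3, "C")}
dv_update(a, "B", b, link_cost=2)
print(a["C"])  # A now reaches C via B at cost 5
```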
Step 1:
Each router shares its routing table with every neighbour in this distance vector routing
network. As A shares its routing table with neighbours B and C, neighbours B and C
share their routing tables with A.
Routing table A:
Let us see the full routing table for router 1A, which has 17 entries, as shown below –
Explanation:
Step 1 − For example, the best path from 1A to 5C is via region 2, but hierarchical
routing sends all traffic to region 5 via region 3, as that is better for most of the other
destinations in region 5.
Step 2 − Consider a subnet of 720 routers. If no hierarchy is used, each router will
have 720 entries in its routing table.
Step 3 − Now if the subnet is partitioned into 24 regions of 30 routers each, then
each router will require 30 local entries and 23 remote entries, for a total of 53
entries.
Example:
If the same subnet of 720 routers is partitioned into 8 clusters, each containing 9 regions,
and each region containing 10 routers, what will be the total number of table entries in
each router?
Solution:
10 local entries + 8 remote regions + 7 clusters = 25 entries.
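The counting argument behind both worked examples can be checked with a small helper (the function name is ours):

```python
def table_entries(routers_per_region, regions_per_cluster, clusters=1):
    """Routing-table entries per router under hierarchical routing:
    one entry per router in the local region, one per other region in
    the cluster, and one per other cluster."""
    return (routers_per_region
            + (regions_per_cluster - 1)
            + (clusters - 1))

print(table_entries(30, 24))      # two-level example: 30 + 23 = 53
print(table_entries(10, 9, 8))    # three-level example: 10 + 8 + 7 = 25
```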
Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
Congestion-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
Bursty traffic is converted to uniform traffic by the leaky bucket.
In practice the bucket is a finite queue that outputs at a finite rate.
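The steps above can be simulated directly: a finite queue that fills in bursts and drains at a constant rate. The tick-based model and parameter values are illustrative assumptions:

```python
from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """Per tick: queue arriving packets (drop on overflow), then leak at
    most `rate` packets. Returns (packets sent per tick, drop count)."""
    queue, sent, dropped = deque(), [], 0
    for burst in arrivals:
        for _ in range(burst):
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1               # finite bucket overflowed
        out = min(rate, len(queue))        # constant-rate leak
        for _ in range(out):
            queue.popleft()
        sent.append(out)
    return sent, dropped

# A burst of 5 packets leaves as a smooth 1-packet-per-tick stream.
print(leaky_bucket([5, 0, 0, 0], capacity=3, rate=1))  # ([1, 1, 1, 0], 2)
```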
Advantages of IPv4
It is easy to attach multiple devices across a large network without NAT.
It is a communication model that provides quality of service as well as economical
data transfer.
IP version 6
IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the problem of IPv4
address exhaustion. IPv6 is a 128-bit address with an address space of 2^128, which is far bigger
than that of IPv4. IPv6 uses a hexadecimal format with groups separated by colons (:).
Components in Address format
IPv6 has new options to allow for additional functionalities.
4. Allowance for extension
IPv6 is designed to allow the extension of the protocol if required by new technologies or
applications.
5. Support for resource allocation
In IPv6, the type-of-service field has been removed, but two new fields, traffic class and flow
label, have been added to enable the source to request special handling of the packet. This
mechanism can be used to support traffic such as real-time audio and video.
6. Support for more security
The encryption and authentication options in IPv6 provide confidentiality and integrity of the
packet.
Unicast
Multicast
Anycast
Addressing methods
1. Unicast Address
A Unicast Address identifies a single network interface. A packet sent to a unicast address is
delivered to the interface identified by that address.
3. Anycast Address
An Anycast Address is assigned to a group of interfaces. Any packet sent to an anycast address
will be delivered to only one member interface (usually the nearest host).
Note: Broadcast is not defined in IPv6.
Extension Headers
In IPv6, the Fixed Header contains only the information which is necessary, avoiding
information which is either not required or is rarely used. All such information is put
between the Fixed Header and the Upper Layer Header in the form of Extension Headers. Each
Extension Header is identified by a distinct value.
When Extension Headers are used, the IPv6 Fixed Header’s Next Header field points to the first
Extension Header. If there is one more Extension Header, then the first Extension Header’s
Next Header field points to the second one, and so on. The last Extension Header’s Next
Header field points to the Upper Layer Header. Thus, all the headers point to the next one
in a linked-list manner.
If the Next Header field contains the value 59, it indicates that there are no headers after this
header, not even an Upper Layer Header.
The following Extension Headers must be supported as per RFC 2460:
Extension Headers are arranged one after another in a linked-list manner, as depicted in the
following diagram:
IP addresses
All the computers of the world on the Internet communicate with each other through
underground or underwater cables or wirelessly. If I want to download a file from the internet,
load a web page, or do literally anything related to the internet, my computer must have an
address so that other computers can find and locate mine in order to deliver that particular
file or webpage that I am requesting. In technical terms, that address is called an IP Address or
Internet Protocol Address.
Let us understand it with another example: if someone wants to send you a letter, then
he or she must have your home address. Similarly, your computer needs an address so that
other computers on the internet can communicate with it without the confusion of
delivering information to someone else’s computer. That is why each computer in this
world has a unique IP Address. In other words, an IP address is a unique address that is
used to identify computers or nodes on the internet. This address is just a string of numbers
written in a certain format. It is generally expressed as a set of numbers, for example
192.155.12.1. Here each number in the set is in the range 0 to 255, so a full IP
address ranges from 0.0.0.0 to 255.255.255.255. These IP addresses are assigned by
IANA (the Internet Assigned Numbers Authority).
Working of IP addresses
Devices use a set of rules (protocols) to send information. Using these protocols, we can easily
send and receive data or files between the connected devices. There are several steps behind the
scenes. Let us look at them:
Your device directly requests your Internet Service Provider, which then grants your
device access to the web.
An IP Address is assigned to your device from the available range.
Your internet activity goes through your service provider, and they route it back to you
using your IP address.
Your IP address can change. For example, turning your router on or off can change your
IP Address.
When you are away from your home location, your home IP address doesn’t accompany
you. It changes as you change the network of your device.
Types of IP Address
IP Address is of two types:
1. IPv4:
Internet Protocol version 4. It consists of 4 numbers separated by dots. Each number can
be from 0 to 255 in decimal. But computers do not understand decimal numbers; they
instead convert them to binary numbers, which use only 0 and 1. Therefore, in binary, this (0-
255) range can be written as (00000000 – 11111111). Each number can be
represented by a group of 8 binary digits, so a whole IPv4 binary address can be
represented by 32 bits of binary digits. In IPv4, a unique sequence of bits is assigned to a
computer, so a total of 2^32 devices, approximately 4,294,967,296, can be assigned with
IPv4.
IPv4 can be written as:
189.123.123.90
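The decimal-to-binary conversion described above can be checked in a couple of lines (the helper name is ours):

```python
def ipv4_to_binary(addr: str) -> str:
    """Render a dotted-quad IPv4 address as four 8-bit binary groups."""
    octets = [int(part) for part in addr.split(".")]
    assert all(0 <= o <= 255 for o in octets), "each octet is one byte"
    return ".".join(f"{o:08b}" for o in octets)

print(ipv4_to_binary("189.123.123.90"))
# 10111101.01111011.01111011.01011010
```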
2. IPv6:
But there is a problem with the IPv4 address. With IPv4, we can connect only about 4
billion devices uniquely, and apparently there are many more devices in the
world to be connected to the internet. So, gradually we are making our way to the IPv6 Address,
which is a 128-bit IP address. In human-friendly form, IPv6 is written as a group of 8
hexadecimal numbers separated by colons (:). In computer-friendly form, it can be
written as 128 bits of 0s and 1s. Since a unique sequence of binary digits is given to
computers, smartphones, and other devices to be connected to the internet, via IPv6 a
total of 2^128 devices can be assigned unique addresses, which is more than
enough for upcoming generations.
IPv6 can be written as:
2011:0bd9:75c5:0000:0000:6b3e:0170:8394
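Python's standard ipaddress module can show the same address in both its full and its zero-compressed notation:

```python
import ipaddress

addr = ipaddress.IPv6Address("2011:0bd9:75c5:0000:0000:6b3e:0170:8394")
print(addr.exploded)    # full form, with all leading zeros
print(addr.compressed)  # 2011:bd9:75c5::6b3e:170:8394
```

The compressed form drops leading zeros in each group and replaces the longest run of zero groups with `::`.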
Classification of IP Address
An IP address is classified into the following types:
1. Public IP Address:
This address is available publicly, and it is assigned by your network provider to your router,
which further divides it among your devices. Public IP Addresses are of two types:
Dynamic IP Address: This address is assigned to your device from the
available range each time you connect. Since the IP Address keeps changing every time you connect to
the internet, it is called a Dynamic IP Address.
Static IP Address: A static address never changes; it serves as a permanent internet
address. These are used by DNS servers. What are DNS servers? They are
computers that help you to open a website on your computer. A Static IP Address
provides information such as which continent, country, and city the device is located in,
and which Internet Service Provider provides the internet connection to that
particular device. Once we know the ISP, we can trace the location of the device
connected to the internet. Static IP Addresses provide less security than Dynamic IP
Addresses because they are easier to track.
2. Private IP Address:
This is an internal address of your device which is not routed to the internet; no exchange
of data takes place directly between a private address and the internet.
3. Shared IP addresses:
Many websites use shared IP addresses. Where the traffic is not huge and is very much
controllable, they decide to rent the address to other similar websites to make it cost-friendly. Several
companies and email-sending servers use the same IP address (within a single mail server) to
cut down the cost and save for the time the server is idle.
4. Dedicated IP addresses:
A dedicated IP Address is an address used by a single company or an individual, which gives
them certain benefits, such as using a private Secure Sockets Layer (SSL) certificate, which is not
possible with a shared IP address. It allows access to the website, or login via File Transfer Protocol
(FTP), by IP address instead of its domain name. It increases the performance of the website
when the traffic is high. It also protects against a shared IP address being black-listed due to
spam.
The value of any segment (byte) is between 0 and 255 (both included).
There are no zeroes preceding the value in any segment (054 is wrong, 54 is correct).
Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are:
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for
multicast and experimental purposes respectively. The order of bits in the first octet
determines the class of an IP address.
An IPv4 address is divided into two parts:
Network ID
Host ID
The class of an IP address is used to determine the bits used for network ID and host ID and the
number of total networks and hosts possible in that particular class. Each ISP or network
administrator assigns an IP address to each device that is connected to its network.
Class A:
IP addresses belonging to class A are assigned to networks that contain a large number of
hosts.
2^7 − 2 = 126 network IDs (two addresses are subtracted because 0.0.0.0 and 127.x.y.z are
special addresses.)
2^24 − 2 = 16,777,214 host IDs
IP addresses belonging to class A range from 1.x.x.x – 126.x.x.x
Class B:
IP addresses belonging to class B are assigned to networks that range from medium-sized
to large-sized.
Class C:
IP addresses belonging to class C are assigned to small-sized networks.
Class D:
IP addresses belonging to class D are reserved for multicasting. The higher-order bits of the first
octet of IP addresses belonging to class D are always set to 1110. The remaining bits are for
the address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from
224.0.0.0 – 239.255.255.255.
Class E:
IP addresses belonging to class E are reserved for experimental and research purposes. IP
addresses of class E range from 240.0.0.0 – 255.255.255.254. This class doesn’t have any
subnet mask. The higher-order bits of the first octet of class E are always set to 1111.
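The class rules above reduce to a check on the first octet, which can be sketched as (the helper name and the "special" label are ours):

```python
def ip_class(addr: str) -> str:
    """Classify an IPv4 address by its first octet (classful addressing)."""
    first = int(addr.split(".")[0])
    if first == 0 or first == 127:
        return "special"       # 0.x.x.x and 127.x.x.x are reserved/loopback
    if first <= 126:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(ip_class("10.1.2.3"), ip_class("172.16.0.1"), ip_class("192.168.1.1"),
      ip_class("224.0.0.5"), ip_class("250.0.0.1"))  # A B C D E
```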
OSPF
https://www.scaler.com/topics/ospf-protocol/
https://www.javatpoint.com/ospf-protocol
BGP
https://www.scaler.com/topics/computer-network/bgp-border-gateway-protocol/
UNIT-4
UNIT- IV: TRANSPORT LAYER
UDP – Segment header, Remote procedure call, Real-time transport protocols; TCP – service
model, Protocol, Segment header, Connection establishment, Connection release, Sliding
window, Timer management, Congestion control.
UDP Header –
The UDP header is a simple, fixed 8-byte header, while the TCP header may vary from 20 bytes to
60 bytes. The first 8 bytes contain all the necessary header information and the remaining part
consists of data. UDP port number fields are each 16 bits long, therefore the range for port
numbers is defined from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish
different user requests or processes.
Source Port: Source Port is a 2 Byte long field used to identify the port number of the
source.
Destination Port: It is a 2 Byte long field, used to identify the port of the destined packet.
Length: Length is the length of UDP including the header and the data. It is a 16-bits
field.
Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
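The checksum computation described above can be sketched as follows. Note that real UDP also sums a pseudo-header taken from the IP header; this sketch checksums only the bytes it is given:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's complement of the one's-complement sum of `data`."""
    if len(data) % 2:
        data += b"\x00"                    # pad to a multiple of 2 octets
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

msg = b"\x45\x00\x00\x1c"                  # arbitrary sample bytes
csum = internet_checksum(msg)
# Receiver check: the checksum over data plus checksum comes out to zero.
print(hex(internet_checksum(msg + csum.to_bytes(2, "big"))))  # 0x0
```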
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error control or
flow control is provided by UDP; hence UDP depends on IP and ICMP for error reporting.
Also, UDP provides port numbers so that it can differentiate between users’ requests.
Applications of UDP:
Used for simple request-response communication when the size of data is less and
hence there is lesser concern about flow and error control.
It is a suitable protocol for multicasting as UDP supports packet switching.
UDP is used for some routing update protocols like RIP (Routing Information
Protocol).
Normally used for real-time applications which cannot tolerate uneven delays
between sections of a received message.
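The connectionless request-response pattern listed above is easy to see with a loopback UDP echo exchange using Python's socket module (the port is chosen by the OS; this is a sketch, not a production server):

```python
import socket
import threading

# Server: bind a datagram socket and echo one datagram back.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))            # OS picks a free port

def echo_once():
    data, addr = server.recvfrom(2048)   # block for a single datagram
    server.sendto(data, addr)            # echo it back to the sender

threading.Thread(target=echo_once, daemon=True).start()

# Client: no connection setup — just send one datagram and read the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"ping", server.getsockname())
reply, _ = client.recvfrom(2048)
print(reply)  # b'ping'
```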
Advantages of UDP:
1. Speed: UDP is faster than TCP because it does not have the overhead of establishing a
connection and ensuring reliable data delivery.
2. Lower latency: Since there is no connection establishment, there is lower latency and faster
response time.
3. Simplicity: UDP has a simpler protocol design than TCP, making it easier to implement
and manage.
Disadvantages of UDP:
1. No reliability: UDP does not guarantee delivery of packets or order of delivery, which can
lead to missing or duplicate data.
2. No congestion control: UDP does not have congestion control, which means that it can
send packets at a rate that can cause network congestion.
3. No flow control: UDP does not have flow control, which means that it can overwhelm the
receiver with packets that it cannot handle.
Remote Procedure Call (RPC):
A remote procedure call is an interprocess communication technique that is used for client-
server-based applications. It is also known as a subroutine call or a function call.
A client has a request message that the RPC translates and sends to the server. This request may
be a procedure or a function call to a remote server. When the server receives the request, it
sends the required response back to the client. The client is blocked while the server is
processing the call and resumes execution only after the server is finished.
Port numbers below 1024 are reserved for standard services that can usually only be
started by privileged users (e.g., root in UNIX systems).
They are called well-known ports.
Secure Shell (SSH) is a cryptographic network protocol for operating network services
securely over an unsecured network.
IMAP (Internet Message Access Protocol) is a standard email protocol that stores email
messages on a mail server, but allows the end user to view and manipulate the
messages as though they were stored locally on the end user's computing device(s).
The Real Time Streaming Protocol (RTSP) is a network control protocol designed for
use in entertainment and communications systems to control streaming media
servers. The protocol is used for establishing and controlling media sessions between
end points.
The Internet Printing Protocol (IPP) is a specialized Internet protocol for
communication between client devices (computers, mobile phones, tablets, etc.) and
printers.
All TCP connections are full duplex and point-to-point.
Full duplex means that traffic can go in both directions at the same time.
Point-to-point means that each connection has exactly two end points.
TCP-Protocol:
TCP (Transmission Control Protocol) is one of the main protocols of the Internet protocol suite.
It lies between the application and network layers and provides reliable delivery
services. It is a connection-oriented protocol for communication that helps in the
exchange of messages between different devices over a network. The Internet Protocol (IP),
which establishes the technique for sending data packets between computers, works with TCP.
Working of TCP:
To make sure that each message reaches its target location intact, the TCP/IP model breaks
down the data into small bundles and afterward reassembles the bundles into the original
message on the opposite end. Sending the information in little bundles of information makes
it simpler to maintain efficiency as opposed to sending everything in one go.
After a particular message is broken down into bundles, these bundles may travel along
multiple routes if one route is jammed but the destination remains the same.
For example, in TCP, the connection is established by using three-way handshaking. The
client sends the segment with its sequence number. The server, in return, sends its segment
with its own sequence number as well as the acknowledgement sequence, which is one more
than the client sequence number. When the client receives the acknowledgment of its
segment, then it sends the acknowledgment to the server. In this way, the connection is
established between the client and the server.
Features of TCP/IP
Some of the most prominent features of the Transmission Control Protocol are:
1. Segment Numbering System
TCP keeps track of the segments being transmitted or received by assigning numbers to each
and every single one of them.
A specific Byte Number is assigned to data bytes that are to be transferred while segments are
assigned sequence numbers.
Acknowledgment Numbers are assigned to received segments.
2. Connection Oriented
It means sender and receiver are connected to each other till the completion of the process.
The order of the data is maintained i.e., order remains same before and after transmission.
3. Full Duplex
In TCP, data can be transmitted from receiver to sender or vice versa at the same time.
This increases the efficiency of data flow between sender and receiver.
4. Flow Control
Flow control limits the rate at which a sender transfers data. This is done to ensure reliable
delivery.
The receiver continually hints to the sender on how much data can be received (using a
sliding window)
5. Error Control
TCP implements an error control mechanism for reliable data transfer
Error control is byte-oriented
Segments are checked for error detection
Error Control includes – Corrupted Segment & Lost Segment Management, Out-of-order
segments, Duplicate segments, etc.
6. Congestion Control
TCP takes into account the level of congestion in the network
Congestion level is determined by the amount of data sent by a sender
Advantages:
It is a reliable protocol.
It provides an error-checking mechanism as well as one for recovery.
It gives flow control.
It makes sure that the data reaches the proper destination in the exact order that it was
sent.
Open Protocol, not owned by any organization or individual.
Disadvantages:
TCP is made for Wide Area Networks; thus, its size can become an issue for small
networks with low resources.
TCP runs several layers so it can slow down the speed of the network.
It is not generic in nature, meaning it cannot represent any protocol stack other than
the TCP/IP suite. E.g., it cannot work with a Bluetooth connection.
There have been no modifications since its development around 30 years ago.
The header of a TCP segment can range from 20 to 60 bytes. Up to 40 bytes are for options.
If there are no options, a header is 20 bytes; otherwise it can be at most 60 bytes.
Header fields:
Source Port Address –
A 16-bit field that holds the port address of the application that is sending the data segment.
Sequence Number –
A 32-bit field that holds the sequence number, i.e., the byte number of the first byte that is
sent in that particular segment. It is used to reassemble the message at the receiving end of
the segments that are received out of order.
Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e., the byte number that the receiver
expects to receive next. It is an acknowledgement for the previous bytes being received
successfully.
Control flags –
These are 6 1-bit control bits that control connection establishment, connection termination,
connection abortion, flow control, mode of transfer etc. Their function is:
URG: Urgent pointer is valid
ACK: Acknowledgement number is valid (used in case of cumulative
acknowledgement)
PSH: Request for push
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: Terminate the connection
Window size –
This field tells the window size of the sending TCP in bytes.
Checksum –
This field holds the checksum for error control. It is mandatory in TCP as opposed to UDP.
Urgent pointer –
This field (valid only if the URG control flag is set) is used to point to data that is urgently
required that needs to reach the receiving process at the earliest. The value of this field is
added to the sequence number to get the byte number of the last urgent byte.
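The header fields above can be decoded from raw bytes with Python's struct module; a sketch covering the fixed 20-byte part (options ignored; the sample segment is made up):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Decode the fixed 20-byte part of a TCP header."""
    (src, dst, seq, ack,
     off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH",
                                                          segment[:20])
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": (off_flags >> 12) * 4,   # data offset, in bytes
        "SYN": bool(off_flags & 0x02),
        "ACK": bool(off_flags & 0x10),
        "FIN": bool(off_flags & 0x01),
        "window": window, "checksum": checksum, "urgent": urgent,
    }

# A made-up SYN segment: ports 50000 -> 80, seq 521, 20-byte header.
raw = struct.pack("!HHIIHHHH", 50000, 80, 521, 0, (5 << 12) | 0x02,
                  10000, 0, 0)
hdr = parse_tcp_header(raw)
print(hdr["SYN"], hdr["header_len"], hdr["seq"])  # True 20 521
```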
2. TCP is a full-duplex protocol so both sender and receiver require a window for
receiving messages from one another.
Sequence number (Seq=2000):
contains the random initial sequence number generated at the receiver side.
Syn flag (Syn=1):
request the sender to synchronize its sequence number with the above-provided sequence
number.
Maximum segment size (MSS=500 B):
receiver tells its maximum segment size, so that sender sends datagram which won’t require
any fragmentation. MSS field is present inside Option field in TCP header.
Since MSS receiver < MSS sender, both parties agree for minimum MSS i.e., 500 B to avoid
fragmentation of packets at both ends.
Window size (window=10000 B):
receiver tells about his buffer capacity in which he has to store messages from the sender.
Acknowledgement Number (Ack no.=522):
Since sequence number 521 was received by the receiver, it requests the next
sequence number with Ack no.=522, which is the next byte expected by the receiver,
since the SYN flag consumes one sequence number.
ACK flag (ACk=1):
tells that the acknowledgement number field contains the next sequence expected by the
receiver.
3. Sender makes the final reply for connection establishment in the following way:
Sequence number (Seq=522):
since sequence number = 521 in 1st step and SYN flag consumes one sequence number
hence, the next sequence number will be 522.
Acknowledgement Number (Ack no.=2001):
since the sender is acknowledging SYN=1 packet from the receiver with sequence number
2000 so, the next sequence number expected is 2001.
ACK flag (ACK=1):
tells that the acknowledgement number field contains the next sequence expected by the
sender.
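The sequence and acknowledgement numbers in the three steps above can be reproduced with a small sketch (the dictionary format is ours; a SYN consumes one sequence number, so each side acknowledges ISN + 1):

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Segments of TCP's three-way handshake, given both initial
    sequence numbers (ISNs)."""
    return [
        {"flags": "SYN",     "seq": client_isn},
        {"flags": "SYN+ACK", "seq": server_isn, "ack": client_isn + 1},
        {"flags": "ACK",     "seq": client_isn + 1, "ack": server_isn + 1},
    ]

# The worked example: client ISN 521, server ISN 2000.
for seg in three_way_handshake(client_isn=521, server_isn=2000):
    print(seg)
# The SYN+ACK carries ack 522 and the final ACK carries ack 2001,
# matching the numbers in the steps above.
```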
TCP supports two types of connection releases like most connection-oriented transport
protocols:
1) Graceful connection release –
In the Graceful connection release, the connection is open until both parties have closed their
sides of the connection.
When a TCP entity sends an RST segment, the sequence number should be 0 if the segment
does not belong to any existing connection; otherwise it should contain the current value of the
sequence number for the connection, and the acknowledgment number should be set to the
next expected in-sequence number on this connection.
TCP Timers:
Various types of TCP timers are used for making sure that excessive delay in the transmission
of data is not encountered when communication begins. Most of these timers are delicate and
handle issues that are not found immediately at the first analysis of the transmission of data.
The sections below explain how these timers ensure proper data transfer from one end to
the other.
1) Time Out Timer or Retransmission Timer:
A timeout timer starts when the sender transmits a segment to the receiver.
If the ACK arrives before the timer expires, nothing is lost.
Otherwise, the segment is considered lost; it must be retransmitted and the timer restarted.
To see how the retransmission timeout interval is calculated, we need to look at the various RTTs.
Measured RTT (RTTm)
The time a segment needs to reach the destination and be acknowledged, even if the acknowledgement is piggybacked on another segment, is the measured round-trip time (RTTm).
Smoothed RTT (RTTs)
The weighted average of RTTm values is the smoothed RTT (RTTs). RTTm fluctuates so much from one measurement to the next that the RTO cannot be based on a single measurement.
Deviated RTT (RTTd)
Most implementations do not use RTTs alone. To find the RTO (Retransmission Timeout), the RTT deviation (RTTd) must also be calculated.
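A sketch of how RTTm, RTTs and RTTd combine into an RTO, using the smoothing constants standardized for TCP (RFC 6298): alpha = 1/8 for the average and beta = 1/4 for the deviation, with RTO = RTTs + 4·RTTd. The example numbers are invented.

```python
# Fold one measured RTT into the smoothed state and derive the RTO.
ALPHA = 1 / 8   # weight of the new measurement in the smoothed RTT
BETA = 1 / 4    # weight of the new deviation in the deviated RTT

def update_rto(rtts, rttd, rttm):
    """Update smoothed RTT (rtts) and deviated RTT (rttd) with a new
    measurement (rttm); return the new state and the timeout."""
    rttd = (1 - BETA) * rttd + BETA * abs(rttm - rtts)   # deviated RTT
    rtts = (1 - ALPHA) * rtts + ALPHA * rttm             # smoothed RTT
    rto = rtts + 4 * rttd                                # retransmission timeout
    return rtts, rttd, rto

# Example: smoothed RTT 100 ms, deviation 20 ms, new measurement 140 ms.
rtts, rttd, rto = update_rto(100.0, 20.0, 140.0)
```

Here the deviation grows because the measurement (140 ms) is far from the running average, and the RTO stretches accordingly, which is exactly why a single RTTm measurement is not enough.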
4) Persistent Timer:
The persistent timer deals with one of TCP's deadlock situations: the zero-window-size deadlock.
Even if the other end closes its receive window, the persistent timer keeps window-size information flowing.
Whenever the sender receives an ACK advertising a zero window, it starts the persistent timer.
When the persistent timer goes off, the sender transmits a special segment to the receiver.
This segment, commonly known as the probe segment, carries only 1 byte of new data.
Its sequence number is never acknowledged and is ignored when computing the sequence numbers for the rest of the data.
When the receiver responds to the probe segment, the response carries an updated window size.
If the updated window size is non-zero, data can be transmitted again.
If it is still zero, the persistent timer is set again, and the process repeats until a non-zero window size arrives.
Congestion control:
Before looking at TCP congestion control, let us first define congestion. Congestion is an important factor in a packet-switched network. It is the state in which message traffic becomes so heavy that network response time slows down, leading to packet loss. Congestion therefore has to be controlled; it cannot be avoided entirely.
TCP congestion control refers to the mechanisms that prevent congestion or remove it after it occurs. When congestion arises, TCP handles it by reducing the size of the sender's window. The sender's window size is determined by the following two factors:
Receiver window size
Congestion window size
The sender's usable window is the minimum of these two values.
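A sketch of how the two factors combine. The congestion window here grows by slow start (doubling every round trip), which is standard TCP behaviour, but the specific MSS and window values are invented for illustration.

```python
# The sender may never outrun either the receiver (rwnd) or the
# network (cwnd): its usable window is the minimum of the two.

def usable_window(rwnd, cwnd):
    return min(rwnd, cwnd)

# Slow start: cwnd begins at one MSS and doubles every RTT, until the
# receiver's advertised window caps the usable window.
MSS = 1000
rwnd = 8000
cwnd = MSS
windows = []
for _ in range(5):
    windows.append(usable_window(rwnd, cwnd))
    cwnd *= 2   # exponential growth phase of slow start
```

The usable window climbs 1000, 2000, 4000 and then flattens at 8000: however fast cwnd grows, the receiver window bounds it.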
DNS Name Space:
The DNS name space is the hierarchical structure of the domain names used to identify resources on the internet. It is a distributed database that organizes domain names into a tree, allowing efficient and scalable name resolution.
1) Generic Domains:
It defines registered hosts according to their generic behaviour.
Each node in the tree defines a domain name, which is an index into the DNS database.
It uses three-character labels that describe the organization type (e.g., com, edu, org).
UNIT-5
2) Country Domain:
The format of a country domain is the same as that of a generic domain, but it uses two-character country abbreviations (e.g., us for the United States) in place of the three-character organizational abbreviations.
3) Inverse Domain:
The inverse domain is used to map an address to a name. For example, when a server that serves only authorized clients receives a request, it can check whether the client is on the authorized list by sending a query to the DNS server asking it to map the client's address to a name.
Working of DNS:
DNS is a client/server network communication protocol. DNS clients send requests to the server, while DNS servers send responses to the clients.
A request carrying a name to be converted into an IP address is a forward DNS lookup, while a request carrying an IP address to be converted into a name is a reverse DNS lookup.
DNS implements a distributed database to store the names of all the hosts available on the internet.
If a client such as a web browser sends a request containing a hostname, a piece of software called a DNS resolver sends a request to a DNS server to obtain the hostname's IP address. If that DNS server does not hold the IP address for the hostname, it forwards the request to another DNS server. Once the IP address reaches the resolver, the client completes its request over the Internet Protocol.
There are different types of name servers involved in the DNS resolution process:
Root name servers: These servers are at the top of the DNS hierarchy. They are
responsible for providing information about the authoritative name servers for each top-
level domain (TLD), such as .com, .org, .net, etc.
Top-level domain (TLD) name servers: Each TLD has its own set of name servers. They
maintain information about the authoritative name servers for the second-level domains
within their TLD. For example, the .com TLD name servers have information about the
authoritative name servers for domains ending in .com.
Authoritative name servers: These servers hold the DNS records for specific domain
names. They are responsible for providing the IP address associated with a domain
name. Each domain has its own authoritative name servers, which are specified in the
domain's DNS settings.
When a DNS resolver needs to resolve a domain name, it starts by querying the root name
servers to find the TLD name servers for the domain's extension. Then, it contacts the
appropriate TLD name servers to obtain the authoritative name servers for the specific domain.
Finally, it queries the authoritative name servers to retrieve the IP address associated with the
domain.
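The three-step walk (root → TLD → authoritative) can be modelled with a toy in-memory hierarchy. The tables below are invented for illustration; a real resolver speaks the DNS wire protocol, caches answers at every step, and handles many more record types.

```python
# Toy model of iterative DNS resolution. Each table stands in for a
# tier of name servers; the names and the IP address are made up.

ROOT = {"com": "tld-com-server"}                              # root tier
TLD = {"tld-com-server": {"example.com": "ns.example.com"}}   # TLD tier
AUTHORITATIVE = {"ns.example.com": {"example.com": "93.184.216.34"}}

def resolve(name):
    tld_label = name.rsplit(".", 1)[-1]      # e.g. "com"
    tld_server = ROOT[tld_label]             # 1) ask a root server
    auth_server = TLD[tld_server][name]      # 2) ask the TLD server
    return AUTHORITATIVE[auth_server][name]  # 3) ask the authoritative server

ip = resolve("example.com")
```

Each lookup returns a referral to the next tier rather than the final answer, which is exactly the iterative pattern described above.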
Overall, DNS name servers play a crucial role in the functioning of the DNS by providing the
necessary information to translate domain names into IP addresses, enabling seamless
communication on the internet.
Electronic mail:
Electronic mail is one of the most well-known network services. Electronic mail is a computer-
based service that allows users to communicate with one another by exchanging messages.
Email information is transmitted via email servers and uses a variety of TCP/IP protocols. For
example, the simple mail transfer protocol (SMTP) is a protocol that is used to send messages.
Similarly, IMAP or POP is used to retrieve messages from a mail server.
Second Scenario:
In this case, the sender and recipient of an e-mail are essentially users on two different machines
over the internet. User-Agents and Message Transfer Agents (MTA) are required in this
scenario.
Take, for example, two user agents (Ninja1 and Ninja2), as illustrated in the diagram. When
Ninja1 sends an e-mail to Ninja2, the user agent (UA) and message transfer agents (MTAs)
programmes prepare the e-mail for transmission over the internet. Following that, this e-mail
gets stored in Ninja2's inbox.
Third Scenario:
The sender is connected to the system by a point-to-point WAN, which can be a dial-up modem
or a cable modem in this case. On the other hand, the receiver is directly attached to the system,
as it was in the second scenario.
The sender also needs a user agent (UA) to prepare the message in this situation. After preparing the message, the sender delivers it over the LAN or WAN via a pair of MTAs.
Fourth Scenario:
In this scenario, the recipient is linked to the mail server via WAN or LAN. When the message
arrives, the recipient must retrieve it, which needs additional client/server agents. This scenario
requires two user agents (UAs), two pairs of message transfer agents (MTAs), and a couple of
message access agents (MAAs).
Services provided by E-mail system:
Composition – Composition refers to the process of creating messages and replies. Any kind of text editor can be used for composition.
Transfer – Transfer is the procedure of sending mail from the sender to the recipient.
Reporting – Reporting is the confirmation of mail delivery. It helps users check whether their mail was delivered, lost, or rejected.
Displaying – Displaying means presenting the mail in a form the user can understand.
Disposition – Disposition concerns what the recipient does after receiving the mail: save it, delete it before reading, or delete it after reading.
User agent:
A user agent is the program (a mail reader) with which the user composes, sends, and browses messages.
Email format:
E-mail is represented as the transmission of messages on the Internet. It is one of the most
commonly used features over communications networks containing text, files, images, or other
attachments.
Format of E-mail:
An e-mail consists of three parts that are as follows:
1. Envelope
2. Header
3. Body
E-mail Envelope:
In modern e-mail systems, a distinction is made between the e-mail and its contents. The envelope encapsulates the message and carries the destination address, priority, security level, etc. The message transfer agents use this envelope for routing.
Message:
The actual message inside the envelope is made of two parts
1. Header
2. Body
The header carries the control information while the Body contains the message contents. The
envelope and messages are shown in the figure below –
Message Formats:
Let us understand the RFC 822 message format in an email.
Messages consist of a primitive envelope, a number of header fields, a blank line, and the message body. Each header field logically consists of a single line of ASCII text containing the field name, a colon, and a value. RFC 822 is an old standard. Usually, the user agent builds a message and passes it to the message transfer agent, which uses some of the header fields to construct the envelope.
The following table shows the principal header fields related to message transport.
The Cc – field:
Just like the physical carbon copy, CC (carbon copy) is an easy way to send copies of an email
to other people.
The Bcc:
BCC stands for “blind carbon copy.” Just like CC, BCC is a way of sending copies of an email
to other people. The difference between the two is that, while you can see a list of recipients
when CC is used, that’s not the case with BCC. It’s called blind carbon copy because the other
recipients won’t be able to see that someone else has been sent a copy of the email.
Received field:
A line containing the Received field is added by each message transfer agent along the way. The line carries the agent's identity and the date and time at which it received the message, along with other information that can be used to find bugs in the routing system.
RFC 822 allows users to invent new headers for their private use, provided these headers start with the string X- (for example, X-Event-of-the-Week).
Body:
The body of a message contains text that is the actual content/message that needs to be sent,
such as “Employees who are eligible for the new health care program should contact their
supervisors by next Friday if they want to switch.” The message body also may include
signatures or automatically generated text that is inserted by the sender’s email system.
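The header/blank-line/body layout described above can be seen by building a message with Python's standard email library. The addresses and subject below are made up for the example; the library adds a few MIME headers of its own on top of the RFC 822 fields discussed here.

```python
# Build a minimal internet message: header fields, a blank line, body.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "hr@example.com"           # originator
msg["To"] = "staff@example.com"          # primary recipients
msg["Cc"] = "managers@example.com"       # carbon-copy recipients
msg["Subject"] = "Health care program"
msg.set_content("Contact your supervisor by Friday to switch plans.\n")

# as_string() serializes the headers, then a blank line, then the body.
raw = msg.as_string()
```

In the serialized output every header is one "Name: value" line, and all headers precede the body, exactly as the RFC 822 format prescribes.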
History:
The Web began as a project created by Tim Berners-Lee in 1989 so that researchers at CERN could work together effectively. An organization named the World Wide Web Consortium (W3C) was later formed to guide the further development of the web. It is directed by Tim Berners-Lee, often called the father of the web.
System Architecture:
From the user’s point of view, the web consists of a vast, worldwide connection of documents
or web pages. Each page may contain links to other pages anywhere in the world. The pages
can be retrieved and viewed by using browsers, of which Internet Explorer, Netscape Navigator, Google Chrome, etc. are popular ones. The browser fetches the requested page, interprets the text and formatting commands on it, and displays the page, properly formatted, on the screen.
The basic model of how the web works is shown in the figure below. Here the browser is displaying a web page on the client machine. When the user clicks on a line of text that is linked to a page on the abd.com server, the browser follows the hyperlink by sending a message to the abd.com server asking it for the page.
Working of WWW:
The World Wide Web is based on several different technologies: Web browsers, Hypertext
Markup Language (HTML) and Hypertext Transfer Protocol (HTTP).
A Web browser is used to access web pages. Web browsers can be defined as programs which
display text, data, pictures, animation and video on the Internet. Hyperlinked resources on the
World Wide Web can be accessed using software interfaces provided by Web browsers.
Initially, Web browsers were used only for surfing the Web but now they have become more
universal. Web browsers can be used for several tasks including conducting searches, mailing,
transferring files, and much more. Some of the commonly used browsers are Internet Explorer,
Opera Mini, and Google Chrome.
Features of WWW:
Hypertext Information System
Cross-Platform
Distributed
Open Standards and Open Source
Uses Web Browsers to provide a single interface for many services
Dynamic, Interactive and Evolving.
“Web 2.0”
HTTP:
HTTP stands for Hypertext Transfer Protocol; it was invented by Tim Berners-Lee. Hypertext is text specially coded using a standard coding language called HyperText Markup Language (HTML). HTTP/2, the successor version of HTTP, was published in May 2015. HTTP/3, the latest version, was published in 2022.
The protocol used to transfer hypertext between two computers is known as HyperText Transfer
Protocol.
HTTP provides a standard between a web browser and a web server to establish
communication. It is a set of rules for transferring data from one computer to another. Data
such as text, images, and other multimedia files are shared on the World Wide Web. Whenever
a web user opens their web browser, the user indirectly uses HTTP. It is an application protocol
that is used for distributed, collaborative, hypermedia information systems.
Working of HTTP:
Whenever we want to open a website, we first open a web browser and type the URL of that website (e.g., www.facebook.com). This URL is sent to the Domain Name System (DNS). DNS first checks its records for this URL and then returns the corresponding IP address to the web browser. Now the browser is able to send requests to the actual server.
After the server sends the data to the client, the connection is closed. If we want something else from the server, we have to re-establish the connection between the client and the server.
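What actually travels over that connection is plain text: a request line plus headers, then a status line, headers, a blank line, and the body in the response. The response below is canned rather than fetched, so the example stays self-contained with no network access; the host name is just a placeholder.

```python
# The request a browser would send for the front page of a site.
request = (
    "GET / HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"   # ask the server to close after replying
    "\r\n"                    # blank line ends the header section
)

# A canned response, as a server might send it back.
canned_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

def parse_status(response):
    """Pull the numeric status code out of the status line."""
    status_line = response.split("\r\n", 1)[0]   # "HTTP/1.1 200 OK"
    return int(status_line.split()[1])

code = parse_status(canned_response)
```

The "Connection: close" header is what makes the server drop the connection after replying, which is why a further request needs a fresh connection.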
HTTP Request:
An HTTP request is the message a browser sends to a server with the information it needs to load a website.
There is some common information that is generally present in all HTTP requests. These are
mentioned below.
HTTP Version
URL
HTTP Method
HTTP Request Headers
HTTP Body
HTTP Method:
HTTP methods are simply HTTP verbs. Although many HTTP methods exist, the most common are HTTP GET and HTTP POST. GET retrieves information, such as a web page, from the server, while POST submits data to it.
HTTP Response:
An HTTP response is the server's answer to an HTTP request. An HTTP response contains several parts, some of which are listed below.
HTTP Status Code
HTTP Headers
HTTP Body
Advantages of HTTP:
Memory usage and CPU usage are low because of fewer simultaneous connections.
Errors can be reported without closing the connection.
HTTP allows pipelining of requests and responses.
Disadvantages of HTTP:
HTTP requires high power to establish communication and transfer data.
HTTP is not optimized for cellular phones, and it is too verbose.
HTTP does not encrypt the data it exchanges, so it is less secure.
FTP:
FTP stands for File transfer protocol.
FTP is a standard internet protocol provided by TCP/IP used for transmitting the files
from one host to another.
It is mainly used for transferring the web page files from their creator to the computer
that acts as a server for other computers on the internet.
It is also used for downloading files to a computer from other servers.
Objectives of FTP:
It provides the sharing of files.
It is used to encourage the use of remote computers.
It transfers the data more reliably and efficiently.
Why FTP:
Although transferring files from one system to another is conceptually simple and straightforward, it can sometimes cause problems. For example, two systems may use different file conventions, different ways of representing text and data, or different directory structures. FTP overcomes these problems by establishing two connections between the hosts: one for data transfer and one for control.
Mechanism of FTP:
The above figure shows the basic model of the FTP. The FTP client has three components: the
user interface, control process, and data transfer process. The server has two components: the
server control process and the server data transfer process.
Control Connection:
The control connection uses very simple rules for communication. Through control connection,
we can transfer a line of command or line of response at a time. The control connection is made
between the control processes. The control connection remains connected during the entire
interactive FTP session.
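Because the control connection carries one command or reply line at a time, each reply starts with a three-digit code followed by text, which makes parsing trivial. The sample line below is typical of real servers but invented here.

```python
# Parse one reply line from the FTP control connection into its
# three-digit code and human-readable text.

def parse_reply(line):
    """Split an FTP control-connection reply into (code, text)."""
    code, _, text = line.partition(" ")
    return int(code), text

code, text = parse_reply("230 Login successful.")
```

Codes in the 2xx range indicate success (230 means the user is logged in), which lets the client decide its next command from the code alone.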
Data Connection:
The Data Connection uses very complex rules as data types may vary. The data connection is
made between data transfer processes. The data connection opens when a command comes for
transferring the files and closes when the file is transferred.
Characteristics of FTP:
FTP uses TCP as a transport layer protocol.
Its lightweight cousin, TFTP (Trivial File Transfer Protocol), is good for simple file transfers, such as during boot time:
Errors in the transmission (lost packets, checksum errors) must be handled by the TFTP server.
TFTP uses only one connection, through well-known port 69.
TFTP uses a simple lock-step protocol (each data packet must be acknowledged before the next is sent), so its throughput is limited.
Advantages of FTP:
Speed is one of the advantages of FTP (File Transfer Protocol).
File sharing is another: files can be shared between two machines over the network.
FTP transfers files efficiently.
Disadvantages of FTP:
File size is limited; in many implementations, only files up to 2 GB can be transferred.
Multiple receivers are not supported by FTP.
FTP does not encrypt the data it transfers, which is one of its biggest drawbacks.
FTP relies on login IDs and passwords for security, but these are sent in plain text and can be captured by attackers.