CS3591 Computer Networks Lecture Notes
COURSE OBJECTIVES:
To understand the concept of layering in networks.
To know the functions of protocols of each layer of TCP/IP protocol suite.
To visualize the end-to-end flow of information.
To learn the functions of network layer and the various routing protocols
To familiarize the functions and protocols of the Transport layer
UNIT IV ROUTING 7
Routing and protocols: Unicast routing - Distance Vector Routing - RIP - Link State Routing –
OSPF – Path-vector routing - BGP - Multicast Routing: DVMRP – PIM.
TOTAL:45 PERIODS
TEXT BOOKS
1. James F. Kurose, Keith W. Ross, Computer Networking, A Top-Down Approach Featuring
the Internet, Eighth Edition, Pearson Education, 2021.
2. Behrouz A. Forouzan, Data Communications and Networking with TCP/IP Protocol Suite,
Sixth Edition, TMH, 2022.
COMPUTER NETWORKS
CS3591
SEMESTER - V
ANNA UNIVERSITY REGULATION 2021
PREPARED BY
K. JAYANTH, M.E
ASSISTANT PROFESSOR
GOJAN SCHOOL OF BUSINESS AND TECHNOLOGY
Course Outcomes
CO1: Explain the basic layers and its functions in computer networks. (K2)
INTRODUCTION TO NETWORKS
A network is a set of devices (often referred to as nodes) connected by communication links.
A node can be a computer, printer, or any other device capable of sending or receiving data
generated by other nodes on the network.
When we communicate, we are sharing information. This sharing can be local or remote.
CHARACTERISTICS OF A NETWORK
The effectiveness of a network depends on three characteristics.
TRANSMISSION MODES
The way in which data is transmitted from one device to another device is known as
transmission mode.
The transmission mode is also known as the communication mode.
Each communication channel has a direction associated with it, and transmission media
provide the direction. Therefore, the transmission mode is also known as a directional mode.
The transmission mode is defined in the physical layer.
Types of Transmission mode
The Transmission mode is divided into three categories:
Simplex Mode
Half-duplex Mode
Full-duplex mode (Duplex Mode)
SIMPLEX MODE
In Simplex mode, the communication is unidirectional, i.e., data flows in only one direction.
A device can only send the data but cannot receive it, or it can only receive the data but cannot
send it.
This transmission mode is not very popular, as most communication requires a two-way
exchange of data. The simplex mode is used in business applications, such as one-way sales
announcements, that do not require any corresponding reply.
The radio station is a simplex channel as it transmits the signal to the listeners but never allows them
to transmit back.
Keyboard and Monitor are the examples of the simplex mode as a keyboard can only accept the
data from the user and monitor can only be used to display the data on the screen.
The main advantage of the simplex mode is that the full capacity of the communication channel can
be utilized during transmission.
HALF-DUPLEX MODE
In a Half-duplex channel, direction can be reversed, i.e., the station can transmit and receive the data
as well.
Messages flow in both the directions, but not at the same time.
The entire bandwidth of the communication channel is utilized in one direction at a time.
In half-duplex mode, it is possible to perform the error detection, and if any error occurs, then the
receiver requests the sender to retransmit the data.
A Walkie-talkie is an example of the Half-duplex mode.
In Walkie-talkie, one party speaks, and another party listens. After a pause, the other speaks and first
party listens. Speaking simultaneously will create the distorted sound which cannot be understood.
FULL-DUPLEX MODE
In Full-duplex mode, the communication is bidirectional, i.e., data flows in both directions.
Both the stations can send and receive the message simultaneously.
Full-duplex mode has two simplex channels. One channel has traffic moving in one direction, and
another channel has traffic flowing in the opposite direction.
The Full-duplex mode is the fastest mode of communication between devices.
The most common example of the full-duplex mode is a Telephone network. When two
people are communicating with each other by a telephone line, both can talk and listen at
the same time.
Send/Receive:
Simplex - A device can only send the data but cannot receive it, or it can only receive the data but cannot send it.
Half-duplex - Both devices can send and receive the data, but only one at a time.
Full-duplex - Both devices can send and receive the data simultaneously.
Example:
Simplex - Radio; keyboard and monitor.
Half-duplex - Walkie-talkie.
Full-duplex - Telephone network.
LINE CONFIGURATION
Line configuration refers to the way two or more communication devices attach to a link. A link is a
communications pathway that transfers data from one device to another. There are two possible line
configurations:
i. Point-to-Point: Provides a dedicated communication link between two devices. It is simple to
establish. The most common example of a point-to-point connection is a computer connected by a
telephone line. We can connect the two devices by means of a pair of wires or using a microwave or
satellite link.
ii. Multipoint: It is also called Multidrop configuration. In this connection, two or more devices share a
single link. There are two kinds of Multipoint connections.
Spatial Sharing: If several devices can share the link simultaneously, it is called Spatially shared
line configuration
Temporal (Time) Sharing: If users must take turns using the link, then it’s called Temporally
shared or Time Shared Line Configuration
NETWORK TYPES
A computer network is a group of computers linked to each other that enables the computer to
communicate with another computer and share their resources, data, and applications.
Computer networks can be categorized by their size.
A computer network is mainly of three types:
1. Local Area Network (LAN)
2. Wide Area Network (WAN)
3. Metropolitan Area Network (MAN)
LOCAL AREA NETWORK (LAN)
Local Area Network is a group of computers connected to each other in a small area such as a building
or office.
LAN is used for connecting two or more personal computers through a communication medium such
as twisted pair or coaxial cable.
It is less costly as it is built with inexpensive hardware such as hubs, network adapters, and
Ethernet cables.
The data is transferred at an extremely fast rate in a Local Area Network.
LAN can be connected using a common cable or a Switch.
WIDE AREA NETWORK (WAN)
A Wide Area Network is not limited to a single location, but spans a large geographical area
through telephone lines, fiber optic cables or satellite links.
The internet is one of the biggest WAN in the world.
A Wide Area Network is widely used in the field of Business, government, and education.
WAN can be either a point-to-point WAN or Switched WAN.
Point-to-point WAN
Switched WAN
METROPOLITAN AREA NETWORK (MAN)
A metropolitan area network is a network that covers a larger geographic area by interconnecting
different LANs to form a larger network.
It generally covers towns and cities (up to about 50 km).
In MAN, various LANs are connected to each other through a telephone exchange line.
Communication medium used for MAN are optical fibers, cables etc.
It has a higher range than Local Area Network (LAN). It is adequate for distributed computing
applications.
INTERNETWORK
An internetwork is formed when two or more individual networks are connected by devices such as
routers so that they work as a single larger network.
Types of Internetworks
Extranet
An extranet is used for information sharing. Access to the extranet is restricted to only those users
who have login credentials. An extranet is the lowest level of internetworking. It can be categorized
as MAN, WAN or other computer networks. An extranet cannot consist of a single LAN; it must have
at least one connection to an external network.
Intranet
An intranet belongs to an organization and is only accessible by the organization's employees or
members. The main aim of the intranet is to share information and resources among the
organization's employees. An intranet provides the facility to work in groups and for teleconferences.
PROTOCOL LAYERING
In networking, a protocol defines the rules that both the sender and receiver and all intermediate devices
need to follow to be able to communicate effectively.
A protocol provides a communication service that the process uses to exchange messages.
When communication is simple, we may need only one simple protocol.
When the communication is complex, we may need to divide the task between different layers, in which
case we need a protocol at each layer, or protocol layering.
An advantage of protocol layering is that it allows us to separate the services from the implementation.
A layer needs to be able to receive a set of services from the lower layer and to give the services to the
upper layer.
Any modification in one layer will not affect the other layers.
Basic Elements of Layered Architecture
Service: It is a set of actions that a layer provides to the higher layer.
Protocol: It defines a set of rules that a layer uses to exchange the information with peer entity. These
rules mainly concern about both the contents and order of the messages used.
Interface: It is a way through which the message is transferred from one layer to another layer.
Features of Protocol Layering
1. It decomposes the problem of building a network into more manageable components.
2. It provides a more modular design.
Principles of Protocol Layering
1. The first principle dictates that if we want bidirectional communication, we need to make each layer so
that it is able to perform two opposite tasks, one in each direction.
2. The second principle that we need to follow in protocol layering is that the two objects under each layer
at both sites should be identical.
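A toy sketch of these two principles follows; the layer names and header tags here are invented for illustration, not part of any standard. Each layer performs two opposite tasks (add a header on send, strip it on receive), and the peer layers at the two sites handle identical objects.

```python
# Toy two-layer stack: each layer prepends its own header on the way
# down and removes it on the way up.

def send(message):
    pdu = "L2|" + message       # upper layer adds its header
    frame = "L1|" + pdu         # lower layer adds its header
    return frame                # what actually crosses the wire

def receive(frame):
    # Lower layer: verify and strip its header (opposite of send).
    assert frame.startswith("L1|")
    pdu = frame[len("L1|"):]
    # Upper layer: verify and strip its header.
    assert pdu.startswith("L2|")
    return pdu[len("L2|"):]

wire = send("Hello")
print(wire)                     # L1|L2|Hello
print(receive(wire))            # Hello
```

Note how the object passed between the two L2 peers (`L2|Hello`) is identical at both sites, which is exactly the second principle.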
Protocol Graph
The set of protocols that make up a network system is called a protocol graph.
The nodes of the graph correspond to protocols, and the edges represent a dependence relation.
For example, the figure below illustrates a protocol graph in which RRP (Request/Reply
Protocol) and MSP (Message Stream Protocol) implement two different types of process-to-process
channels, and both depend on HHP (Host-to-Host Protocol), which provides a host-to-host connectivity
service.
OSI MODEL
OSI stands for Open System Interconnection.
It is a reference model that describes how information from a software application in one computer moves
through a physical medium to the software application in another computer.
OSI consists of seven layers, and each layer performs a particular network function.
OSI model was developed by the International Organization for Standardization (ISO) in 1984, and it is
now considered as an architectural model for the inter- computer communications.
OSI model divides the whole task into seven smaller and manageable tasks. Each layer is assigned a
particular task.
Each layer is self-contained, so that task assigned to each layer can be performed independently.
1. PHYSICAL LAYER
The physical layer coordinates the functions required to transmit a bit stream over a physical medium.
The physical layer is concerned with the following functions:
Physical characteristics of interfaces and media - The physical layer defines the characteristics of
the interface between the devices and the transmission medium.
Representation of bits - To transmit the stream of bits, the bits must be encoded into signals. The
physical layer defines the type of encoding.
Signals: It determines the type of the signal used for transmitting the information.
Data Rate or Transmission rate - The number of bits sent each second –is also defined by the
physical layer.
Synchronization of bits - The sender and receiver must be synchronized at the bit level. Their clocks
must be synchronized.
Line Configuration - In a point-to-point configuration, two devices are connected together through
a dedicated link. In a multipoint configuration, a link is shared between several devices.
Physical Topology - The physical topology defines how devices are connected to make a network.
Devices can be connected using a mesh, bus, star or ring topology.
Transmission Mode - The physical layer also defines the direction of transmission between two
devices: simplex, half-duplex or full-duplex.
2. DATA LINK LAYER
It is responsible for transmitting frames from one node to the next node. The other responsibilities
of this layer are
Framing - Divides the stream of bits received into data units called frames.
Physical addressing – If frames are to be distributed to different systems on the network, data link
layer adds a header to the frame to define the sender and receiver.
Flow control - If the rate at which the data are absorbed by the receiver is less than the rate
produced by the sender, the data link layer imposes a flow control mechanism.
Error control- Used for detecting and retransmitting damaged or lost frames and to prevent
duplication of frames. This is achieved through a trailer added at the end of the frame.
Medium Access control -Used to determine which device has control over the link at any given
time.
3. NETWORK LAYER
This layer is responsible for the delivery of packets from source to destination.
It determines the best path to move data from source to the destination based on the network
conditions, the priority of service, and other factors.
The other responsibilities of this layer are
Logical addressing - If a packet passes the network boundary, we need another addressing system
for source and destination called logical address. This addressing is used to identify the device on the
internet.
Routing – Routing is the major component of the network layer, and it determines the best optimal
path out of the multiple paths from source to the destination.
4. TRANSPORT LAYER
It is responsible for process-to-process delivery, that is, source-to-destination (end-to-end)
delivery of the entire message. It also ensures that the message arrives in order.
The other responsibilities of this layer are
Port addressing / Service Point addressing - The header includes an address called port address /
service point address. This layer gets the entire message to the correct process on that computer.
Segmentation and reassembly - The message is divided into segments and each segment is assigned
a sequence number. These numbers are arranged correctly on the arrival side by this layer.
Connection control - This can either be connectionless or connection oriented.
o Connectionless transport treats each segment as an individual packet and delivers it to the
destination.
o Connection-oriented transport makes a connection with the destination before delivery. After
the delivery, the connection is terminated.
Flow control - The transport layer is also responsible for flow control, but it is performed
end-to-end rather than across a single link.
Error Control - Error control is performed end-to-end rather than across the single link.
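Segmentation and reassembly can be sketched as follows; the 4-character segment size and the sample message are arbitrary choices for illustration. Each segment is tagged with a sequence number, so even if segments arrive out of order the receiver can restore the original message.

```python
def segment(message, size):
    # Split the message into fixed-size segments, each tagged
    # with a sequence number.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(segments):
    # Sort by sequence number to restore the order, then rejoin.
    return "".join(data for _seq, data in sorted(segments))

segs = segment("HELLO TRANSPORT LAYER", 4)
segs.reverse()                   # simulate out-of-order arrival
print(reassemble(segs))          # HELLO TRANSPORT LAYER
```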
5. SESSION LAYER
This layer establishes, manages and terminates connections between applications. The other
responsibilities of this layer are
Dialog control - The session layer acts as a dialog controller that creates a dialog between two processes;
that is, it allows the communication between two processes, which can be either half-duplex
or full-duplex.
Synchronization- Session layer adds some checkpoints when transmitting the data in a sequence. If
some error occurs in the middle of the transmission of data, then the transmission will take place
again from the checkpoint. This process is known as Synchronization and recovery.
6. PRESENTATION LAYER
It is concerned with the syntax and semantics of information exchanged between two systems.
The other responsibilities of this layer are
Translation – Different computers use different encoding systems; this layer is responsible for
interoperability between these different encoding methods. It changes the message into some
common format.
Encryption and Decryption - The sender transforms the original information into another
form and sends the resulting message over the network; the receiver transforms it back.
Compression and Expansion-Compression reduces the number of bits contained in the information
particularly in text, audio and video.
7. APPLICATION LAYER
This layer enables the user to access the network. It handles issues such as network transparency,
resource allocation, etc. It allows the user to log on to a remote host.
The other responsibilities of this layer are
FTAM (File Transfer, Access, Management) - Allows user to access files in a remote host.
Mail services - Provides email forwarding and storage.
Directory services - Provides database sources to access information about various sources and
objects.
APPLICATION LAYER
The application layer incorporates the functions of the top three OSI layers. The application layer is the
topmost layer in the TCP/IP model.
It is responsible for handling high-level protocols, issues of representation. This layer allows the user
to interact with the application.
When one application layer protocol wants to communicate with another application layer, it
forwards its data to the transport layer.
Protocols such as FTP, HTTP, SMTP, POP3, etc., running in the application layer provide services to
other programs running on top of the application layer.
TRANSPORT LAYER
The transport layer is responsible for the reliability, flow control, and correction of data
which is being sent over the network.
The two protocols used in the transport layer are User Datagram protocol and
Transmission control protocol.
UDP – UDP provides connectionless service and end-to-end delivery of transmission. It is
an unreliable protocol: it detects errors but does not correct them or report them to the sender.
TCP – TCP provides a full transport layer services to applications. TCP is a reliable protocol
as it detects the error and retransmits the damaged frames.
INTERNET LAYER
The internet layer is the second layer of the TCP/IP model. An internet layer is also known as the
network layer.
The main responsibility of the internet layer is to send the packets from any network, and they arrive
at the destination irrespective of the route they take.
The internet layer handles the transfer of information across multiple networks through routers and
gateways.
IP protocol is used in this layer, and it is the most significant part of the entire TCP/IP suite.
SOCKETS
A socket is one endpoint of a two-way communication link between two programs running on the
network. The socket mechanism provides a means of inter-process communication (IPC) by establishing
named contact points between which the communication take place.
Just as a pipe is created using the ‘pipe’ system call, a socket is created using the ‘socket’ system call.
The socket provides a bidirectional FIFO communication facility over the network. A socket connecting
to the network is created at each end of the communication. Each socket has a specific address. This
address is composed of an IP address and a port number.
Sockets are generally employed in client-server applications. The server creates a socket, attaches it to a
network port address then waits for the client to contact it. The client creates a socket and then attempts to
connect to the server socket. When the connection is established, transfer of data takes place.
Types of Sockets: There are two types of Sockets: the datagram socket and the stream socket.
1. Datagram Socket: This is a connectionless type of socket used for sending and receiving
packets. It is similar to a mailbox. The letters (data) posted into the box are collected and delivered
(transmitted) to a letterbox (receiving socket).
2. Stream Socket: In computer operating systems, a stream socket is a type of interprocess communications
socket or network socket which provides a connection-oriented, sequenced, and unique flow of data
without record boundaries with well-defined mechanisms for creating and destroying connections and for
detecting errors. It is similar to phone. A connection is established between the phones (two ends) and a
conversation (transfer of data) takes place.
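The server/client sequence described above can be sketched with Python's standard socket module. The echo service, the message, and the use of a single request/reply are invented for illustration; the port is chosen by the operating system.

```python
import socket
import threading

# Server side: create a stream (TCP) socket, attach it to a network
# port address, then wait for the client to contact it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]         # the address the client must use

def serve_one():
    conn, _addr = srv.accept()      # blocks until a client connects
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)   # reply over the same connection

t = threading.Thread(target=serve_one)
t.start()

# Client side: create a socket and attempt to connect to the server
# socket; once the connection is established, transfer of data takes place.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", port))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

t.join()
srv.close()
print(reply)                        # b'echo: hello'
```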
HTTP
The Hypertext Transfer Protocol (HTTP) is used to define how client-server programs can
be written to retrieve web pages from the Web.
It is a protocol used to access the data on the World Wide Web (WWW).
The HTTP protocol can be used to transfer the data in the form of plain text, hypertext, audio,
video, and so on.
HTTP is a stateless request/response protocol that governs client/server communication.
An HTTP client sends a request; an HTTP server returns a response.
The server uses the port number 80; the client uses a temporary port number. HTTP uses the services
of TCP, a connection-oriented and reliable protocol. HTTP is a text-oriented protocol. Pages contain
embedded URLs known as links.
When hypertext is clicked, browser opens a new connection, retrieves file from the server and
displays the file.
Each HTTP message has the general form
START_LINE <CRLF> MESSAGE_HEADER <CRLF>
<CRLF> MESSAGE_BODY <CRLF>
where <CRLF> stands for carriage-return-line-feed.
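As a sketch, the general form above can be assembled by hand; the host name and path are made-up examples, and the body is empty as is usual for a GET request.

```python
CRLF = "\r\n"   # carriage-return-line-feed

# START_LINE: method, URL path, and HTTP version.
start_line = "GET /index.html HTTP/1.1"

# MESSAGE_HEADER: one "Name: value" pair per line.
headers = ["Host: www.example.com", "Connection: close"]

# A bare CRLF separates the headers from the (here empty) MESSAGE_BODY.
request = start_line + CRLF + CRLF.join(headers) + CRLF + CRLF

print(request)
```

Printed with its CRLFs made visible, the request is `GET /index.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n`.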
Features of HTTP
Connectionless protocol:
HTTP is a connectionless protocol. The HTTP client initiates a request and waits for a response from the
server. When the server receives the request, it processes the request and sends back the
response to the HTTP client, after which the client disconnects. The connection
between client and server exists only during the current request and response.
Media independent:
HTTP is media independent: any type of data can be sent as long as both the client and server
know how to handle the data content. Both the client and server are required to specify the
content type in the MIME-type header.
Stateless:
HTTP is a stateless protocol as both the client and server know each other only during the current request.
Due to this nature of the protocol, both the client and server do not retain the information between various
requests of the web pages.
Request Message: The request message is sent by the client that consists of a request line, headers, and
sometimes a body.
Response Message: The response message is sent by the server to the client that
consists of a status line, headers, and sometimes a body.
HTTP REQUEST MESSAGE
Request Header
Each request header line sends additional information from the client to the server.
Each header line has a header name, a colon, a space, and a header value. The value field defines the
values associated with each header name.
Headers defined for request message include
Body
HTTP RESPONSE MESSAGE
Status Line
The Status line contains three fields - HTTP version, Status code, Status phrase
The first field defines the version of HTTP protocol, currently 1.1.
The status code field defines the status of the request. It classifies the HTTP result.
It consists of three digits.
1xx–Informational,2xx– Success,3xx–Redirection, 4xx–Client error,5xx–Server error
The Status phrase field gives brief description about status code in text form.
Some of the Status codes are
Body
The body contains the document to be sent from the server to the client.
The body is present unless the response is an error message.
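The three-digit status-code classes can be applied in a small status-line parser. This is an illustrative sketch, not a complete HTTP parser; the sample status lines are typical examples.

```python
CLASSES = {
    "1": "Informational",
    "2": "Success",
    "3": "Redirection",
    "4": "Client error",
    "5": "Server error",
}

def parse_status_line(line):
    # A status line looks like: HTTP/1.1 404 Not Found
    version, code, phrase = line.split(" ", 2)
    return version, code, CLASSES[code[0]], phrase

print(parse_status_line("HTTP/1.1 404 Not Found"))
# ('HTTP/1.1', '404', 'Client error', 'Not Found')
print(parse_status_line("HTTP/1.1 200 OK"))
# ('HTTP/1.1', '200', 'Success', 'OK')
```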
HTTP CONNECTIONS
HTTP Clients and Servers exchange multiple messages over the same TCP connection. If
some of the objects are located on the same server, we have two choices: to retrieve each
object using a new TCP connection or to make a TCP connection and retrieve them all.
The first method is referred to as a non-persistent connection, the second as a
persistent connection.
HTTP 1.0 uses non-persistent connections and HTTP 1.1 uses persistent
connections.
NON-PERSISTENT CONNECTIONS
In a non-persistent connection, one TCP connection is made for each
request/response.
Only one object can be sent over a single TCP connection
The client opens a TCP connection and sends a request.
The server sends the response and closes the connection.
The client reads the data until it encounters an end-of-file marker.
It then closes the connection.
PERSISTENT CONNECTIONS
In a persistent connection, the server leaves the TCP connection open after sending a response, so
multiple requests and responses can be sent over the same connection.
The server closes the connection only at the client's request or when a timeout is reached.
Persistent connections reduce the connection-setup overhead when several objects are retrieved
from the same server.
FTP (FILE TRANSFER PROTOCOL)
FTP stands for File Transfer Protocol. It is the standard mechanism provided by TCP/IP for copying
a file from one host to another.
FTP OBJECTIVES
It provides the sharing of files.
It is used to encourage the use of remote computers.
It transfers the data more reliably and efficiently.
FTP MECHANISM
FTP CONNECTIONS
There are two types of connections in FTP -
Control Connection and Data Connection.
The two connections in FTP have different lifetimes.
The control connection remains connected during the entire interactive FTP session.
The data connection is opened and then closed for each file transfer activity. When a user starts an
FTP session, the control connection opens.
While the control connection is open, the data connection can be opened and closed multiple times if
several files are transferred.
FTP uses two well-known TCP ports:
Port 21 is used for the control connection
Port 20 is used for the data connection.
Control Connection:
The control connection uses very simple rules for communication.
Through control connection, we can transfer a line of command or line of response at a time.
The control connection is made between the control processes.
The control connection remains connected during the entire interactive FTP session.
Data Connection:
The Data Connection uses very complex rules as data types may vary.
The data connection is made between data transfer processes.
The data connection opens when a command comes for transferring the files and closes when the file is
transferred.
FTP COMMUNICATION
FTP Communication is achieved through commands and responses. FTP Commands are sent from the
client to the server
FTP responses are sent from the server to the client.
FTP Commands are in the form of ASCII uppercase, which may or may not be followed by an argument.
Some of the most common commands are
Every FTP command generates at least one response.
A response has two parts: a three-digit number followed by text.
The numeric part defines the code; the text part gives any needed parameters or explanation.
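The code/text split can be sketched as follows; the sample response strings are typical examples, not taken from a live session.

```python
def parse_ftp_response(response):
    # A response is a three-digit code, a space, then explanatory text.
    code, _sep, text = response.partition(" ")
    return int(code), text

print(parse_ftp_response("331 User name okay, need password"))
# (331, 'User name okay, need password')
print(parse_ftp_response("226 Transfer complete"))
# (226, 'Transfer complete')
```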
FTP SECURITY
FTP requires a password; the password is sent in plaintext which is unencrypted. This
means it can be intercepted and used by an attacker.
The data transfer connection also transfers data in plaintext, which is insecure.
To be secure, one can add a Secure Socket Layer between the FTP application layer and
the TCP layer.
In this case FTP is called SSL-FTP.
EMAIL (SMTP, MIME, IMAP, POP3)
One of the most popular Internet services is electronic mail (E-mail). Email is one of
the oldest network applications.
The three main components of an Email are
1. User Agent (UA)
2. Message Transfer Agent (MTA) – SMTP
3. Message Access Agent (MAA) - IMAP, POP
When the sender and the receiver of an e-mail are on the same system, we need only two User
Agents and no Message Transfer Agent
When the sender and the receiver of an e-mail are on different systems, we need two UAs, two pairs
of MTAs (client and server), and two MAAs (client and server).
WORKING OF EMAIL
When Alice needs to send a message to Bob, she runs a UA program to prepare the message and
send it to her mail server.
The mail server at her site uses a queue (spool) to store messages waiting to be sent. The message,
however, needs to be sent through the Internet from Alice’s site to Bob’s site using an MTA.
Here two message transfer agents are needed: one client and one server.
The server needs to run all the time because it does not know when a client will ask for a connection.
The client can be triggered by the system when there is a message in the queue to be sent.
The user agent at the Bob site allows Bob to read the received message.
Bob later uses an MAA client to retrieve the message from an MAA server running on the
second server.
USER AGENT (UA)
The first component of an electronic mail system is the user agent (UA).
It provides service to the user to make the process of sending and receiving a message
easier.
A user agent is a software package that composes, reads, replies to, and forwards
messages. It also handles local mailboxes on the user computers.
Command driven
Command driven user agents belong to the early days of electronic mail.
A command-driven user agent normally accepts a one-character command from the keyboard to perform
its task.
Some examples of command driven user agents are mail, pine, and elm.
GUI-based
Modern user agents are GUI-based.
They allow the user to interact with the software by using both the keyboard and the mouse.
They have graphical components such as icons, menu bars, and windows that make the services easy to
access.
Some examples of GUI-based user agents are Eudora and Outlook.
MESSAGE TRANSFER AGENT (MTA)
The actual mail transfer is done through message transfer agents (MTA).
To send mail, a system must have the client MTA, and to receive mail, a system must have a
server MTA.
The formal protocol that defines the MTA client and server in the Internet is called Simple Mail
Transfer Protocol (SMTP).
SMTP is the standard protocol for transferring mail between hosts in the TCP/IP protocol suite.
SMTP is not concerned with the format or content of messages themselves.
SMTP uses information written on the envelope of the mail (message header), but does not look at
the contents (message body) of the envelope.
SMTP Responses
Responses are sent from the server to the client.
A response is a three-digit code that may be followed by additional textual information.
SMTP OPERATIONS
Basic SMTP operation occurs in three phases:
1. Connection Setup
2. Mail Transfer
3. Connection Termination
Connection Setup
An SMTP sender will attempt to set up a TCP connection with a target host when it has one or more
mail messages to deliver to that host.
The sequence is quite simple:
1. The sender opens a TCP connection with the receiver.
2. Once the connection is established, the receiver identifies itself with "Service Ready".
3. The sender identifies itself with the HELO command.
4. The receiver accepts the sender's identification with "OK".
5. If the mail service on the destination is unavailable, the destination host returns a "Service Not Available"
reply in step 2, and the process is terminated.
Mail Transfer
Once a connection has been established, the SMTP sender may send one or more messages to
the SMTP receiver.
There are three logical phases to the transfer of a message:
1. A MAIL command identifies the originator of the message.
2. One or more RCPT commands identify the recipients for this message.
3. A DATA command transfers the message text.
Connection Termination
The SMTP sender closes the connection in two steps.
First, the sender sends a QUIT command and waits for a reply.
The second step is to initiate a TCP close operation for the TCP connection.
The receiver initiates its TCP close after sending its reply to the QUIT command.
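The three phases can be traced with canned replies. The verbs are real SMTP commands, but the server responses here are hard-coded strings standing in for a live session.

```python
# Canned server replies, keyed by the verb of the client's command.
REPLIES = {
    "CONNECT": "220 mail.example.com Service Ready",         # connection setup
    "HELO":    "250 OK",
    "MAIL":    "250 OK",                                      # mail transfer
    "RCPT":    "250 OK",
    "DATA":    "354 Start mail input; end with <CRLF>.<CRLF>",
    "QUIT":    "221 Service closing transmission channel",    # termination
}

def smtp_reply(command):
    verb = command.split(" ", 1)[0]
    return REPLIES[verb]

session = ["CONNECT", "HELO alice.example.org",
           "MAIL FROM:<alice@example.org>", "RCPT TO:<bob@example.com>",
           "DATA", "QUIT"]
for cmd in session:
    print(cmd, "->", smtp_reply(cmd))
```

The transcript shows the phases in order: the 220 greeting on setup, the MAIL/RCPT/DATA exchange, and the 221 reply to QUIT before the TCP close.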
LIMITATIONS OF SMTP
SMTP cannot transmit executable files or other binary objects.
SMTP cannot transmit text data that includes national language characters, as these are represented
by 8-bit codes with values of 128 decimal or higher, and SMTP is limited to 7-bit ASCII.
SMTP servers may reject mail messages over a certain size.
SMTP gateways that translate between ASCII and the character code EBCDIC do not use a
consistent set of mappings, resulting in translation problems.
Some SMTP implementations do not adhere completely to the SMTP standards defined.
Common problems include the following:
1. Deletion, addition, or reordering of carriage return and linefeed.
2. Truncating or wrapping lines longer than 76 characters.
3. Removal of trailing white space (tab and space characters).
4. Padding of lines in a message to the same length.
5. Conversion of tab characters into multiple space characters.
SMTP provides a basic email service, while MIME adds multimedia capability to SMTP.
MIME is an extension to SMTP and is used to overcome the problems and limitations of SMTP.
The email system was designed to send messages only in ASCII format.
Languages such as French, Chinese, etc., are not supported.
Image, audio and video files cannot be sent.
MIME adds the following features to the email service:
Be able to send multiple attachments with a single message;
Unlimited message length;
Use of character sets other than ASCII code;
Use of rich text (layouts, fonts, colors, etc)
Binary attachments (executables, images, audio or video files, etc.), which may be divided if needed.
MIME is a protocol that converts non-ASCII data to 7-bit NVT (Network Virtual Terminal) ASCII and vice-
versa.
MIME HEADERS
Using headers, MIME describes the type of message content and the encoding used.
Headers defined in MIME are:
MIME-Version - current MIME version, i.e., 1.0
Content-Type - message type (text/html, image/jpeg, application/pdf)
Content-Transfer-Encoding - message encoding scheme (e.g., base64).
Content-Id - unique identifier for the message.
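These headers can be seen with Python's standard email package; giving a text part a non-ASCII charset makes the library choose a 7-bit-safe transfer encoding, illustrating why MIME encoding exists. This is only a sketch of the header mechanism, not part of the syllabus text.

```python
from email.mime.text import MIMEText

# A text part whose charset is utf-8; the library adds the MIME
# headers and selects base64 as the transfer encoding
msg = MIMEText("café résumé", "plain", "utf-8")

version = msg["MIME-Version"]                # "1.0"
ctype = msg["Content-Type"]                  # text/plain; charset="utf-8"
encoding = msg["Content-Transfer-Encoding"]  # "base64"
```

The non-ASCII body is carried as base64, which is plain 7-bit ASCII on the wire, exactly the conversion MIME performs for SMTP.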
MTA (MESSAGE TRANSFER AGENT)
MTA is a mail daemon (such as sendmail) active on hosts having a mailbox, used to send an email.
Mail passes through a sequence of gateways before it reaches the recipient mail server. Each gateway
stores and forwards the mail using Simple mail transfer protocol (SMTP).
SMTP defines communication between MTAs over TCP on port 25.
In an SMTP session, the sending MTA is the client and the receiving MTA is the server. In each exchange:
The client posts a command (HELO, MAIL, RCPT, DATA, QUIT, VRFY, etc.).
The server responds with a code (250, 550, 354, 221, 251, etc.) and an explanation.
The client is identified using the HELO command and verified by the server.
The client forwards the message to the server, if the server is willing to accept it.
The message is terminated by a line with only a single period (.) in it.
Eventually the client terminates the connection.
IMAP4
The latest version is IMAP4. IMAP4 is more powerful and more complex. IMAP4 provides the
following extra functions:
A user can check the e-mail header prior to downloading.
A user can search the contents of the e-mail for a specific string of characters prior to downloading.
A user can partially download e-mail. This is especially useful if bandwidth is limited and the e-
mail contains multimedia with high bandwidth requirements.
A user can create, delete, or rename mailboxes on the mail server.
A user can create a hierarchy of mailboxes in a folder for e-mail storage.
ADVANTAGES OF IMAP
With IMAP, the primary storage is on the server, not on the local machine. Email being put
away for storage can be foldered on the local disk, or can be kept in folders on the server.
POP3
The POP3 client is installed on the recipient computer and the POP3 server on the mail server.
The client opens a connection to the server using TCP on port 110.
The client sends a username and password to access the mailbox and to retrieve messages.
POP3 Commands
POP commands are generally abbreviated into codes of three or four letters
The following describes some of the POP commands:
1. UIDL - It is used to get a unique identifier for each message
2. STAT - It is used to display the number of messages currently in the mailbox
3. LIST - It is used to get a summary of messages
4. RETR - It is used to retrieve a message from the mailbox
5. DELE - It is used to delete a message
6. RSET - It is used to reset the session to its initial state
7. QUIT - It is used to log off the session
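For example, the server's reply to STAT reports the message count and total mailbox size. A small parser for that reply (a sketch for illustration, not part of any standard library) might look like:

```python
def parse_stat(reply):
    """Parse a POP3 STAT reply such as '+OK 2 320' into
    (message_count, mailbox_size_in_bytes)."""
    status, count, size = reply.split()
    if status != "+OK":
        raise ValueError("server reported an error: " + reply)
    return int(count), int(size)

count, size = parse_stat("+OK 2 320")  # 2 messages, 320 bytes in total
```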
DIFFERENCE BETWEEN POP AND IMAP
4. POP: All the messages have to be downloaded. IMAP: Allows selective transfer of messages to the client.
5. POP: Only one mailbox can be created on the server. IMAP: Multiple mailboxes can be created on the server.
6. POP: Not suitable for accessing non-mail data. IMAP: Suitable for accessing non-mail data, i.e., attachments.
7. POP: Commands are generally abbreviated into codes of three or four letters, e.g., STAT. IMAP: Commands are not abbreviated; they are full, e.g., STATUS.
8. POP: Requires minimum use of server resources. IMAP: Clients are totally dependent on the server.
9. POP: Mails once downloaded cannot be accessed from some other location. IMAP: Allows mails to be accessed from multiple locations.
10. POP: The e-mails are downloaded automatically. IMAP: Users can view the headings and sender of e-mails and then decide whether to download.
11. POP: Requires less internet usage time. IMAP: Requires more internet usage time.
IMAP is more powerful and more complex than POP. User can check the e-mail header prior to
downloading.
User can search e-mail for a specific string of characters prior to downloading. User can download
partially, very useful in case of limited bandwidth.
User can create, delete, or rename mailboxes on the mail server.
DNS (DOMAIN NAME SYSTEM)
Domain Name System was designed in 1984. DNS is used for name-to-address mapping.
The DNS provides the protocol which allows clients and servers to communicate with each other.
Eg: Host name like www.yahoo.com is translated into numerical IP addresses like
207.174.77.131
Domain Name System (DNS) is a distributed database used by TCP/IP applications to map between
hostnames and IP addresses and to provide electronic mail routing information.
Each site maintains its own database of information and runs a server program that other systems
across the Internet can query.
WORKING OF DNS
The following six steps shows the working of a DNS. It maps the host name to an IP address:
1. The user passes the host name to the file transfer client.
2. The file transfer client passes the host name to the DNS client.
3. Each computer, after being booted, knows the address of one DNS server. The DNS client
sends a message to a DNS server with a query that gives the file transfer server name using the
known IP address of the DNS server.
4. The DNS server responds with the IP address of the desired file transfer server.
5. The DNS client passes the IP address to the file transfer client.
6. The file transfer client now uses the received IP address to access the file transfer server.
NAME SPACE
To be unambiguous, the names assigned to machines must be carefully selected from a name space
with complete control over the binding between the names and IP addresses.
The names must be unique because the addresses are unique.
A name space that maps each address to a unique name can be organized in two ways:
flat (or) hierarchical.
Each node in the tree has a label, which is a string with a maximum of 63 characters. The root label
is a null string (empty string). DNS requires that children of a node (nodes that branch from the
same node) have different labels, which guarantees the uniqueness of the domain names.
Domain Name
Each node in the tree has a label called as domain name.
A full domain name is a sequence of labels separated by dots (.)
The domain names are always read from the node up to the root.
The last label is the label of the root (null).
This means that a full domain name always ends in a null label, which means the last character is a
dot because the null string is nothing.
If a label is terminated by a null string, it is called a fully qualified domain name (FQDN).
If a label is not terminated by a null string, it is called a partially qualified domain
name (PQDN).
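As a sketch, the FQDN/PQDN distinction and the 63-character label rule can be checked with two small helper functions (illustrative only):

```python
def classify_name(name):
    """An FQDN ends with the null (root) label, shown as a trailing
    dot; anything else is a PQDN that a resolver must complete."""
    return "FQDN" if name.endswith(".") else "PQDN"

def labels(name):
    """Split a domain name into its labels (root label dropped).
    DNS limits each label to 63 characters."""
    parts = [p for p in name.rstrip(".").split(".") if p]
    if any(len(p) > 63 for p in parts):
        raise ValueError("label longer than 63 characters")
    return parts
```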
Domain
A domain is a subtree of the domain name space.
The name of the domain is the domain name of the node at the top of the sub-tree.
A domain may itself be divided into domains.
ROOT SERVER
A root server is a server whose zone consists of the whole tree.
A root server usually does not store any information about domains but delegates its authority to
other servers, keeping references to those servers.
Currently there are 13 root servers, each covering the whole domain name space.
The servers are distributed all around the world.
Country Domains
The country domains section follows the same format as the generic domains but uses two-character
country abbreviations (e.g., in for India, us for the United States) in place of the
three-character organizational abbreviation at the first level.
Second-level labels can be organizational, or they can be more specific national designations.
The United States, for example, uses state abbreviations as a subdivision of the country domain us
(e.g., ca.us for California).
Inverse Domains
Mapping an address to a name is called Inverse domain.
The client can send an IP address to a server to be mapped to a domain name; this is called a
PTR (Pointer) query.
To answer queries of this kind, DNS uses the inverse domain
DNS RESOLUTION
Mapping a name to an address or an address to a name is called name address resolution. DNS is
designed as a client server application.
A host that needs to map an address to a name or a name to an address calls a DNS client named a
Resolver.
The Resolver accesses the closest DNS server with a mapping request.
If the server has the information, it satisfies the resolver; otherwise, it either refers the resolver to
other servers or asks other servers to provide the information.
After the resolver receives the mapping, it interprets the response to see if it is a real resolution or an
error and finally delivers the result to the process that requested it.
A resolution can be either recursive or iterative.
Recursive Resolution
The application program on the source host calls the DNS resolver (client) to find the IP address of the
destination host. The resolver, which does not know this address, sends the query to the local DNS server of
the source (Event 1)
The local server sends the query to a root DNS server (Event 2)
The Root server sends the query to the top-level-DNS server (Event 3)
The top-level DNS server knows only the IP address of the local DNS server at the destination. So, it
forwards the query to the local server, which knows the IP address of the destination host (Event 4)
The IP address of the destination host is now sent back to the top-level DNS server (Event 5) then back to
the root server (Event 6), then back to the source DNS server, which may cache it for the future queries
(Event 7), and finally back to the source host (Event 8).
Iterative Resolution
In iterative resolution, each server that does not know the mapping, sends the IP address of the next
server back to the one that requested it.
The iterative resolution takes place between two local servers.
The original resolver gets the final answer from the destination local server.
The messages shown by Events 2, 4, and 6 contain the same query.
However, the message shown by Event 3 contains the IP address of the top-level domain server.
The message shown by Event 5 contains the IP address of the destination local DNS server
The message shown by Event 7 contains the IP address of the destination.
When the Source local DNS server receives the IP address of the destination, it sends it to the resolver
(Event 8).
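The referral chain of iterative resolution can be simulated with a toy table of servers; the server names and the final address here are invented for illustration and do not correspond to real DNS infrastructure:

```python
# Toy referral table: each server either knows the final answer or
# refers the resolver to another server.
SERVERS = {
    "root":       {"refer": "top-level"},
    "top-level":  {"refer": "dest-local"},
    "dest-local": {"answer": "207.174.77.131"},
}

def iterative_resolve(start, table):
    """Follow referrals: a server that lacks the mapping returns the
    address of the next server, and the resolver queries that one."""
    server, visited = start, [start]
    while "answer" not in table[server]:
        server = table[server]["refer"]
        visited.append(server)
    return table[server]["answer"], visited

addr, visited = iterative_resolve("root", SERVERS)
```

Note the key property of the iterative mode: the resolver itself contacts each server in turn, rather than the servers recursing on its behalf.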
DNS CACHING
Each time a server receives a query for a name that is not in its domain, it needs to search its database
for a server IP address.
DNS handles this with a mechanism called caching.
When a server asks for a mapping from another server and receives the response, it stores this
information in its cache memory before sending it to the client.
If the same or another client asks for the same mapping, it can check its cache memory and resolve
the problem.
However, to inform the client that the response is coming from the cache memory and not from an
authoritative source, the server marks the response as unauthoritative.
Caching speeds up resolution. Reduction of this search time would increase efficiency, but it can also
be problematic.
If a server caches a mapping for a long time, it may send an outdated mapping to the client.
To counter this, two techniques are used.
First, the authoritative server always adds information to the mapping called time to live (TTL). It
defines the time in seconds that the receiving server can cache the information. After that time, the
mapping is invalid and any query must be sent again to the authoritative server.
Second, DNS requires that each server keep a TTL counter for each mapping it caches.
The cache memory must be searched periodically and those mappings with an expired TTL
must be purged.
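The TTL mechanism above can be sketched with a small cache class; the clock is injectable so the expiry logic can be exercised without waiting (an illustrative design, not a real resolver):

```python
import time

class DnsCache:
    """Cache name-to-address mappings with a TTL; an expired entry
    is purged on lookup and the query must go out again."""
    def __init__(self, now=time.monotonic):
        self._now = now
        self._entries = {}

    def store(self, name, address, ttl):
        self._entries[name] = (address, self._now() + ttl)

    def lookup(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None
        address, expires = entry
        if self._now() >= expires:
            del self._entries[name]   # TTL expired: purge the mapping
            return None
        return address

clock = [0.0]
cache = DnsCache(now=lambda: clock[0])
cache.store("www.yahoo.com", "207.174.77.131", ttl=300)
hit = cache.lookup("www.yahoo.com")   # within TTL: served from cache
clock[0] = 301.0
miss = cache.lookup("www.yahoo.com")  # past TTL: entry purged
```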
DNS MESSAGES
DNS has two types of messages: query and response. Both types have the same format.
The query message consists of a header and question section.
The response message consists of a header, question section, answer section,
authoritative section, and additional section.
Header
Both query and response messages have the same header format with some fields set to
zero for the query messages.
The header fields are as follows:
The identification field is used by the client to match the response with the query.
The flag field defines whether the message is a query or response. It also includes status of error.
The next four fields in the header define the number of each record type in the message.
Question Section
The question section consists of one or more question records. It is present in both query and
response messages.
Answer Section
The answer section consists of one or more resource records. It is present only in response messages.
Authoritative Section
The authoritative section gives information (domain name) about one or more authoritative servers
for the query.
Additional Information Section
The additional information section provides additional information that may help the resolver.
DNS CONNECTIONS
DNS can use either UDP or TCP.
In both cases the well-known port used by the server is port 53.
UDP is used when the size of the response message is less than 512 bytes, because many UDP
implementations limit the packet size to 512 bytes.
If the size of the response message is more than 512 bytes, a TCP connection is used.
DNS REGISTRARS
New domains are added to DNS through a registrar. A fee is charged.
A registrar first verifies that the requested domain name is unique and then enters it into the DNS database.
Today, there are many registrars; their names and addresses can be found at
http://www.internic.net
To register, the organization needs to give the name of its server and the IP address of the server.
For example, a new commercial organization named wonderful with a server named ws and IP address
200.200.200.5, needs to give the following information to one of the registrars:
Domain name: ws.wonderful.com
IP address: 200.200.200.5
SNMP (SIMPLE NETWORK MANAGEMENT PROTOCOL)
The Simple Network Management Protocol (SNMP) is a framework for managing devices in
an internet using the TCP/IP protocol suite.
SNMP is an application layer protocol that monitors and manages routers, distributed over a
network.
It provides a set of operations for monitoring and managing the internet.
SNMP uses the services of UDP on two well-known ports: 161 (agent) and 162 (manager).
SNMP uses the concept of manager and agent.
SNMP MANAGER
A manager is a host that runs the SNMP client program
The manager has access to the values in the database kept by the agent.
A manager checks the agent by requesting the information that reflects the behavior of the agent.
A manager also forces the agent to perform a certain function by resetting values in the agent
database.
For example, a router can store in appropriate variables the number of packets received and forwarded.
The manager can fetch and compare the values of these two variables to see if the router is congested or
not.
SNMP AGENT
The agent is a router that runs the SNMP server program.
The agent is used to keep the information in a database while the manager is used to access the values in
the database.
For example, a router can store the appropriate variables such as a number of packets received and
forwarded while the manager can compare these variables to determine whether the router is congested or
not.
Agents can also contribute to the management process.
A server program on the agent checks the environment, if something goes wrong, the agent sends a
warning message to the manager.
SNMP MANAGEMENT COMPONENTS
Management of the internet is achieved through simple interaction between a manager and agent.
Management is achieved through the use of two protocols:
Structure of Management Information (SMI)
Management Information Base (MIB).
Structure of Management Information (SMI)
To use SNMP, we need rules for naming objects.
SMI is a protocol that defines these rules.
SMI is a guideline for SNMP
It emphasizes three attributes to handle an object: name, data type, and encoding method.
Its functions are:
To name objects.
To define the type of data that can be stored in an object.
To show how to encode data for transmission over the network.
Name
SMI requires that each managed object (such as a router, a variable in a router, a value, etc.) have a
unique name. To name objects globally,
SMI uses an object identifier, which is a hierarchical identifier based on a tree structure.
The tree structure starts with an unnamed root. Each object can be defined using a sequence of
integers separated by dots.
The tree structure can also define an object using a sequence of textual names separated by dots.
Type of data
The second attribute of an object is the type of data stored in it.
To define the data type, SMI uses Abstract Syntax Notation One (ASN.1)
definitions.
SMI has two broad categories of data types: simple and structured.
The simple data types are atomic data types. Some of them are taken directly from ASN.1; some are
added by SMI.
SMI defines two structured data types: sequence and sequence of.
Sequence - A sequence data type is a combination of simple data types, not necessarily of the same
type.
Sequence of - A sequence of data type is a combination of simple data types all of the same type or a
combination of sequence data types all of the same type.
Encoding data
SMI uses another standard, Basic Encoding Rules (BER), to encode data to be transmitted over the
network.
BER specifies that each piece of data be encoded in triplet format (TLV): tag, length, value
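As a sketch of the TLV idea, a small non-negative INTEGER can be encoded by hand (BER tag 0x02); multi-byte length fields and negative values are left out for brevity:

```python
def ber_encode_small_int(value):
    """Encode a small non-negative integer as a BER TLV triplet:
    tag 0x02 (INTEGER), a one-byte length, big-endian value bytes.
    An extra leading 0x00 is kept when the top bit is set so the
    value is not misread as negative."""
    if value < 0:
        raise ValueError("sketch handles non-negative values only")
    body = value.to_bytes(max(1, (value.bit_length() + 8) // 8), "big")
    return bytes([0x02, len(body)]) + body
```

For example, the integer 5 encodes as tag 02, length 01, value 05.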
GetRequest
The GetRequest PDU is sent from the manager (client) to the agent (server) to retrieve the value of a
variable or a set of variables.
GetNextRequest
The GetNextRequest PDU is sent from the manager to the agent to retrieve the value of a variable.
GetBulkRequest
The GetBulkRequest PDU is sent from the manager to the agent to retrieve a large amount of data. It can
be used instead of multiple GetRequest and GetNextRequest PDUs.
SetRequest
The SetRequest PDU is sent from the manager to the agent to set (store) a value in a variable.
Response
The Response PDU is sent from an agent to a manager in response to GetRequest or GetNextRequest. It
contains the value(s) of the variable(s) requested by the manager.
Trap
The Trap PDU is sent from the agent to the manager to report an event. For example, if the agent is
rebooted, it informs the manager and reports the time of rebooting.
InformRequest
The InformRequest PDU is sent from one manager to another remote manager to get the value of some
variables from agents under the control of the remote manager. The remote manager responds with a
Response PDU.
Report
The Report PDU is designed to report some types of errors between managers.
UNIT II – TRANSPORT LAYER
CO2: Understand the basics of how data flows from one node to another. (K2)
Course Objective
Important Topics
INTRODUCTION
The transport layer is the fourth layer of the OSI model and is the core of the Internet model.
It responds to service requests from the session layer and issues service requests to the
network Layer.
The transport layer provides transparent transfer of data between hosts.
It provides end-to-end control and information transfer with the quality of service needed
by the application program.
It is the first true end-to-end layer, implemented in all End Systems (ES).
Process-to-Process Communication
The Transport Layer is responsible for delivering data to the appropriate application process
on the host computers.
This involves multiplexing of data from different application processes, i.e. forming data
packets, and adding source and destination port numbers in the header of each Transport Layer
data packet.
Together with the source and destination IP address, the port numbers constitute a network
socket, i.e. an identification address of the process-to-process communication.
Addressing: Port Numbers
Ports are the essential ways to address multiple entities in the same location.
Using port addressing it is possible to use more than one network-based application at the
same time.
Three types of Port numbers are used:
Well-known ports - These are permanent port numbers. They range between 0 and 1023. These
port numbers are used by server processes.
Registered ports - The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA;
they can only be registered with IANA to prevent duplication.
Ephemeral ports (Dynamic ports) - These are temporary port numbers. They range between
49,152 and 65,535. These port numbers are used by client processes.
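The three ranges above can be sketched as a small classifier (illustrative only):

```python
def port_class(port):
    """Classify a transport-layer port number by the IANA ranges:
    well-known (0-1023), registered (1024-49151),
    ephemeral/dynamic (49152-65535)."""
    if not 0 <= port <= 65535:
        raise ValueError("port out of range")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "ephemeral"
```

For example, DNS's server port 53 is well-known, while a client typically sends from an ephemeral port.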
Encapsulation and Decapsulation
To send a message from one process to another, the transport-layer protocol encapsulates
and decapsulates messages.
Encapsulation happens at the sender site. The transport layer receives the data and adds the
transport-layer header.
Decapsulation happens at the receiver site. When the message arrives at the destination
transport layer, the header is dropped and the transport layer delivers the message to the process
running at the application layer.
Multiplexing and Demultiplexing
Whenever an entity accepts items from more than one source, this is referred to as
multiplexing (many to one).
Whenever an entity delivers items to more than one destination, this is referred to as
demultiplexing (one to many).
The transport layer at the source performs multiplexing.
The transport layer at the destination performs demultiplexing.
Flow Control
Flow Control is the process of managing the rate of data transmission between two nodes to
prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the
receiving node is not overwhelmed with data from transmitting node.
Error Control
Error control at the transport layer is responsible for
1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.
Error Control involves Error Detection and Error Correction
Congestion Control
Congestion in a network may occur if the load on the network (the number of packets sent to
the network) is greater than the capacity of the network (the number of packets a network can
handle).
Congestion control refers to the mechanisms and techniques that control the congestion and
keep the load below the capacity.
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
Each protocol provides a different type of service and should be used appropriately.
UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity and
efficiency in applications where error control can be provided by the application-layer process.
TCP - TCP is a reliable connection-oriented protocol that can be used in any application where
reliability is important.
SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP and
TCP in an effort to create a better protocol for multimedia communication.
USER DATAGRAM PROTOCOL (UDP)
User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
UDP adds process-to-process communication to best-effort service provided by IP.
UDP is a very simple protocol using a minimum of overhead.
UDP is a simple demultiplexer, which allows multiple processes on each host to
communicate.
UDP does not provide flow control, reliable or ordered delivery.
UDP can be used to send small messages where reliability is not expected.
Sending a small message using UDP takes much less interaction between the sender and
receiver.
UDP allows processes to indirectly identify each other using an abstract locator called a port or
mailbox.
UDP PORTS
Processes (server/client) are identified by an abstract locator known as port.
Server accepts message at well-known port.
Some well-known UDP ports are 7–Echo, 53–DNS, 111–RPC, 161–SNMP, etc.
The <port, host> pair is used as the key for demultiplexing.
Ports are implemented as a message queue.
When a message arrives, UDP appends it to the end of the queue.
When the queue is full, the message is discarded.
When a message is read, it is removed from the queue.
When an application process wants to receive a message, one is removed from the front of
the queue.
If the queue is empty, the process blocks until a message becomes available.
UDP DATAGRAM (PACKET) FORMAT
UDP packets are known as user datagrams.
These user datagrams have a fixed-size header of 8 bytes, made of four fields, each of 2 bytes
(16 bits).
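The four 16-bit fields (source port, destination port, total length, checksum) can be packed as a sketch with Python's struct module; the real checksum is computed over a pseudo-header, so 0 is used here as a placeholder (IPv4 allows "no checksum"):

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the fixed 8-byte UDP header: four 16-bit fields in
    network byte order (source port, destination port, total
    length, checksum). Length counts header plus data."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

# Ephemeral client port to the DNS server port, 5-byte payload
hdr = udp_header(49152, 53, b"query")
```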
APPLICATIONS OF UDP
UDP is used for management processes such as SNMP.
UDP is used for route updating protocols such as RIP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is embedded in
the UDP software
UDP is suitable for a process with internal flow and error control mechanisms such as Trivial
File Transfer Protocol (TFTP).
UDP is suitable for a process that requires simple request-response communication with little
concern for flow and error control.
UDP is normally used for interactive real-time applications that cannot tolerate uneven delay
between sections of a received message.
TCP SERVICES
Process-to-Process Communication
TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP is a stream-oriented protocol.
TCP allows the sending process to deliver data as a stream of bytes and allows the receiving
process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet.
The sending process produces (writes to) the stream and the receiving process consumes
(reads from) it.
Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same time.
Each TCP endpoint then has its own sending and receiving buffer, and segments move
in both directions.
Multiplexing and DE multiplexing
TCP performs multiplexing at the sender and DE multiplexing at the receiver.
Connection-Oriented Service
TCP is a connection-oriented protocol.
A connection needs to be established for each pair of processes.
When a process at site A wants to send to and receive data from another
process at site B, the following three phases occur:
1. The two TCPs establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
Reliable Service
TCP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
TCP SEGMENT
A packet in TCP is called a segment.
Data unit exchanged between TCP peers are called segments.
A TCP segment encapsulates the data received from the application layer.
The TCP segment is encapsulated in an IP datagram, which in turn is encapsulated in a frame
at the data-link layer.
TCP is a byte-oriented protocol, which means that the sender writes bytes into a TCP
connection and the receiver reads bytes out of the TCP connection.
TCP does not, itself, transmit individual bytes over the Internet.
TCP on the source host buffers enough bytes from the sending process to fill a
reasonably sized packet and then sends this packet to its peer on the destination host.
TCP on the destination host then empties the contents of the packet into a receive buffer,
and the receiving process reads from this buffer at its leisure.
TCP connection supports byte streams flowing in both directions.
The packets exchanged between TCP peers are called segments, since each one carries a
segment of the byte stream.
TCP PACKET FORMAT
Each TCP segment contains the header plus the data.
The segment consists of a header of 20 to 60 bytes, followed by data from the application
program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options.
1. Client sends a SYN segment to the server containing its initial sequence number (Flags
= SYN, SequenceNum = x)
2. Server responds with a segment that acknowledges client’s segment and specifies its
initial sequence number (Flags = SYN + ACK, ACK = x + 1, SequenceNum = y).
3. Finally, client responds with a segment that acknowledges server’s sequence number
(Flags = ACK, ACK = y + 1).
The reason that each side acknowledges a sequence number that is one larger than the one
sent is that the Acknowledgment field actually identifies the “next sequence number expected.”
A timer is scheduled for each of the first two segments, and if the expected response is not
received, the segment is retransmitted.
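The three-segment exchange above can be sketched as a small simulation; x and y stand for the two sides' initial sequence numbers, and the concrete values are arbitrary examples:

```python
def three_way_handshake(x, y):
    """Return the three segments of TCP connection establishment.
    Each ACK carries the next sequence number expected, hence +1."""
    syn    = {"flags": "SYN",     "seq": x}
    synack = {"flags": "SYN+ACK", "seq": y, "ack": x + 1}
    ack    = {"flags": "ACK",     "ack": y + 1}
    return [syn, synack, ack]

segments = three_way_handshake(x=100, y=300)
```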
Data Transfer
After connection is established, bidirectional data transfer can take place.
The client and server can send data and acknowledgments in both directions.
The data traveling in the same direction as an acknowledgment are carried on the same
segment.
The acknowledgment is piggybacked with the data.
Connection Termination
Connection termination or teardown can be done in two ways:
Three-way Close and Half-Close
Send Buffer
The sending TCP maintains a send buffer that contains three categories of data:
(1) acknowledged data
(2) unacknowledged data
(3) data to be transmitted.
The send buffer maintains three pointers,
(1) LastByteAcked, (2) LastByteSent, and (3) LastByteWritten, such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written and only a sent byte can be
acknowledged.
Bytes to the left of LastByteAcked are not kept, as they have already been acknowledged.
Receive Buffer
Receiving TCP maintains receive buffer to hold data even if it arrives out-of-order.
Receive buffer maintains three pointers namely
(1) LastByteRead, (2) NextByteExpected, and (3) LastByteRcvd such that:
LastByteRead ≤ NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1
Bytes to the left of LastByteRead are not buffered, since they have already been read by the application.
TCP TRANSMISSION
TCP has the mechanisms to trigger the transmission of a segment.
They are
o Maximum Segment Size (MSS) - Silly Window Syndrome
o Timeout - Nagle’s Algorithm
Slow Start
Slow start is used to increase Congestion Window exponentially from a cold start.
Source TCP initializes Congestion Window to one packet.
TCP doubles the number of packets sent every RTT on successful transmission.
When ACK arrives for first packet TCP adds 1 packet to Congestion Window and sends
two packets.
When two ACKs arrive, TCP increments Congestion Window by 2 packets and sends four
packets and so on.
Instead of sending entire permissible packets at once (bursty traffic), packets are sent in a
phased manner, i.e., slow start.
Initially TCP has no idea about congestion, so it increases the
Congestion Window rapidly until there is a timeout. On timeout:
Congestion Threshold = Congestion Window / 2
Congestion Window = 1
Slow start is repeated until the Congestion Window reaches the Congestion Threshold, and
thereafter the window grows by 1 packet per RTT.
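This growth pattern can be simulated as a sketch; the threshold and RTT count below are arbitrary example values:

```python
def slow_start_trace(threshold, rtts):
    """Congestion window (in packets) per RTT: the window doubles
    each RTT while below the threshold (slow start), then grows by
    one packet per RTT (additive increase)."""
    cwnd, trace = 1, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < threshold else cwnd + 1
    return trace
```

With a threshold of 8 packets, the window doubles for the first four RTTs and then grows linearly.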
The congestion window trace therefore rises exponentially up to the threshold and linearly after it.
Average queue length is measured over a time interval that includes the
last busy cycle, the last idle cycle, and the current busy cycle.
The average queue length is calculated by dividing the area under the queue-length curve by the time interval.
Red - Random Early Detection
The second mechanism of congestion avoidance is called Random Early Detection
(RED).
Each router is programmed to monitor its own queue length, and when it detects that there is
congestion, it notifies the source to adjust its congestion window.
RED differs from the DECbit scheme in two ways:
a. In DECbit, explicit notification about congestion is sent to the source, whereas RED implicitly
notifies the source by dropping a few packets.
b. DECbit may lead to a tail drop policy, whereas RED drops packets based on a drop probability
in a random manner. Each arriving packet is dropped with some drop probability whenever the
queue length exceeds some drop level. This idea is called early random drop.
Computation of average queue length using RED
RED has two queue length thresholds that trigger certain activity: MinThreshold and
MaxThreshold
When a packet arrives at the gateway, RED compares AvgLen with these two thresholds
according to the following rules.
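As a sketch of these rules, the fragment below computes the weighted moving average of the queue length and applies the MinThreshold/MaxThreshold decision. The threshold values, the weight, and MAX_P are illustrative assumptions, not values from any particular router.

```python
import random

MIN_THRESHOLD, MAX_THRESHOLD = 5.0, 15.0   # assumed queue-length thresholds
MAX_P, WEIGHT = 0.1, 0.002                 # assumed max drop probability, EWMA weight

def red_avg(avg_len, sample):
    """Exponential weighted moving average of the instantaneous queue length."""
    return (1 - WEIGHT) * avg_len + WEIGHT * sample

def red_decision(avg_len):
    """RED rules: below MinThreshold -> enqueue; at or above MaxThreshold ->
    drop; in between -> drop with a probability that rises linearly."""
    if avg_len < MIN_THRESHOLD:
        return "enqueue"
    if avg_len >= MAX_THRESHOLD:
        return "drop"
    p = MAX_P * (avg_len - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return "drop" if random.random() < p else "enqueue"
```

The small EWMA weight makes AvgLen track long-term congestion rather than momentary bursts, which is the point of averaging over busy and idle cycles.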
SCTP SERVICES
Stream Control Transmission Protocol (SCTP) is a reliable, message-oriented transport layer
protocol.
SCTP has mixed features of TCP and UDP.
SCTP maintains the message boundaries and detects the lost data, duplicate data as well as
out-of-order data.
SCTP provides the Congestion control as well as Flow control.
SCTP is especially designed for internet applications as well as multimedia communication.
Process-to-Process Communication
SCTP provides process-to-process communication.
Multiple Streams
SCTP allows multistream service in each connection, which is called association in SCTP
terminology.
If one of the streams is blocked, the other streams can still deliver their data.
Multihoming
An SCTP association supports multihoming service.
The sending and receiving host can define multiple IP addresses in each end for an
association.
In this fault-tolerant approach, when one path fails, another interface can be used for data
delivery without interruption.
Full-Duplex Communication
SCTP offers full-duplex service, where data can flow in both directions at the same time.
Each SCTP then has a sending and receiving buffer and packets are sent in both directions.
Connection-Oriented Service
SCTP is a connection-oriented protocol.
In SCTP, a connection is called an association.
If a client wants to send messages to and receive messages from a server, the steps are:
Step1: The two SCTPs establish the connection with each other.
Step2: Once the connection is established, the data gets exchanged in both the directions.
Step3: Finally, the association is terminated.
Reliable Service
SCTP is a reliable transport protocol.
It uses an acknowledgment mechanism to check the safe and sound arrival of data.
SCTP PACKET FORMAT
An SCTP packet has a mandatory general header and a set of blocks called chunks.
General Header
The general header (packet header) defines the end points of each association to which
the packet belongs
It guarantees that the packet belongs to a particular association
It also preserves the integrity of the contents of the packet including the header itself.
There are four fields in the general header.
Source port
This field identifies the sending port.
Destination port
This field identifies the receiving port that hosts use to route the packet to the appropriate
endpoint/application.
Verification tag
A 32-bit random value created during initialization to distinguish stale packets from a previous
connection.
Checksum
The next field is a checksum. The size of the checksum is 32 bits. SCTP uses CRC-32
Checksum.
Chunks
Control information or user data are carried in chunks.
Chunks have a common layout.
The first three fields are common to all chunks; the information field depends on the type of
chunk.
The type field can define up to 256 types of chunks. Only a few have been defined so far;
the rest are reserved for future use.
The flag field defines special flags that a particular chunk may need.
The length field defines the total size of the chunk, in bytes, including the type, flag, and
length fields.
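Assuming the common chunk layout just described (1-byte type, 1-byte flags, 2-byte length covering the whole chunk), a minimal parser might look like this; the function and field names are our own:

```python
import struct

def parse_chunk(data):
    """Parse the common SCTP chunk header: 1-byte type, 1-byte flags,
    2-byte length (big-endian, includes these three fields). Sketch only."""
    ctype, flags, length = struct.unpack("!BBH", data[:4])
    value = data[4:length]   # information field; meaning depends on the type
    return {"type": ctype, "flags": flags, "length": length, "value": value}

# A hypothetical DATA chunk (type 0) with flags 0x03 and 4 bytes of payload
chunk = struct.pack("!BBH", 0, 0x03, 8) + b"abcd"
print(parse_chunk(chunk))
```

Because the length field counts the type, flag, and length fields too, a chunk carrying 4 data bytes has length 8.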
Types of Chunks
An SCTP association may send many packets, a packet may contain several chunks, and
chunks may belong to different streams.
SCTP defines two types of chunks - Control chunks and Data chunks.
A control chunk controls and maintains the association.
A data chunk carries user data.
SCTP ASSOCIATION
SCTP is a connection-oriented protocol.
A connection in SCTP is called an association to emphasize multihoming.
SCTP Associations consists of three phases:
Association Establishment
Data Transfer
Association Termination
Association Establishment
Association establishment in SCTP requires a four-way handshake.
In this procedure, a client process wants to establish an association with a server process
using SCTP as the transport-layer protocol.
The SCTP server needs to be prepared to receive any association (passive open).
Association establishment, however, is initiated by the client (active open).
The client sends the first packet, which contains an INIT chunk.
The server sends the second packet, which contains an INIT ACK chunk. The INIT ACK
also sends a cookie that defines the state of the server at this moment.
The client sends the third packet, which includes a COOKIE ECHO chunk. This is a very
simple chunk that echoes, without change, the cookie sent by the server. SCTP allows the
inclusion of data chunks in this packet.
The server sends the fourth packet, which includes the COOKIE ACK chunk that
acknowledges the receipt of the COOKIE ECHO chunk. SCTP allows the inclusion of data
chunks with this packet.
Data Transfer
The whole purpose of an association is to transfer data between two ends.
After the association is established, bidirectional data transfer can take place.
The client and the server can both send data.
SCTP supports piggybacking.
When the site receives a data chunk, it stores it at the end of the buffer (queue) and
subtracts the size of the chunk from winSize.
The TSN number of the chunk is stored in the cumTSN variable.
When the process reads a chunk, it removes it from the queue and adds the size of the
removed chunk to winSize (recycling).
When the receiver decides to send a SACK, it checks the value of lastAck; if it is less than
cumTSN, it sends a SACK with a cumulative TSN number equal to cumTSN.
It also includes the value of winSize as the advertised window size.
Sender Site
The sender has one buffer (queue) and three variables: curTSN, rwnd, and inTransit.
We assume each chunk is 100 bytes long. The buffer holds the chunks produced by the
process that either have been sent or are ready to be sent.
The first variable, curTSN, refers to the next chunk to be sent.
All chunks in the queue with a TSN less than this value have been sent, but not
acknowledged; they are outstanding.
The second variable, rwnd, holds the last value advertised by the receiver (in bytes).
The third variable, inTransit, holds the number of bytes in transit, bytes sent but not yet
acknowledged.
The following figure shows the queue and variables at the sender site.
A chunk pointed to by curTSN can be sent if the size of the data is less than or equal to the
quantity rwnd - inTransit.
After sending the chunk, the value of curTSN is incremented by 1 and now points to the
next chunk to be sent.
The value of inTransit is incremented by the size of the data in the transmitted chunk.
When a SACK is received, the chunks with a TSN less than or equal to the cumulative TSN
in the SACK are removed from the queue and discarded. The sender does not have to worry
about them anymore.
The value of inTransit is reduced by the total size of the discarded chunks.
The value of rwnd is updated with the value of the advertised window in the SACK.
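The sending rule above (a chunk may go out only if its size fits within rwnd minus the bytes in transit) can be sketched as follows; the dictionary-based state and the function names are illustrative assumptions:

```python
def can_send(chunk_size, rwnd, in_transit):
    """SCTP sender rule from the text: a chunk may be sent only if its
    data size is at most rwnd - inTransit."""
    return chunk_size <= rwnd - in_transit

def send_chunk(state, chunk_size):
    """Update curTSN and inTransit after transmitting one chunk.
    state = {"curTSN": int, "rwnd": int, "inTransit": int}. Sketch only."""
    if not can_send(chunk_size, state["rwnd"], state["inTransit"]):
        return False
    state["curTSN"] += 1              # points at the next chunk to send
    state["inTransit"] += chunk_size  # these bytes are now outstanding
    return True

# With rwnd 1000 and 200 bytes outstanding, a 100-byte chunk may be sent
state = {"curTSN": 101, "rwnd": 1000, "inTransit": 200}
print(send_chunk(state, 100), state)
```

A received SACK would then shrink inTransit by the size of the acknowledged chunks and overwrite rwnd with the newly advertised window.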
SCTP ERROR CONTROL
SCTP is a reliable transport layer protocol.
It uses a SACK chunk to report the state of the receiver buffer to the sender.
Each implementation uses a different set of entities and timers for the receiver and sender
sites.
Receiver Site
The receiver stores all chunks that have arrived in its queue, including the out-of-order
ones. However, it leaves spaces for any missing chunks.
It discards duplicate messages, but keeps track of them for reports to the sender.
The following figure shows a typical design for the receiver site and the state of the
receiving queue at a particular point in time.
The available window size is 1000 bytes.
The last acknowledgment sent was for data chunk 20.
Chunks 21 to 23 have been received in order.
The first out-of-order block contains chunks 26 to 28.
The second out-of-order block contains chunks 31 to 34.
A variable holds the value of cumTSN.
An array of variables keeps track of the beginning and the end of each block that is out of
order.
An array of variables holds the duplicate chunks received.
There is no need for storing duplicate chunks in the queue and they will be discarded.
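A small sketch of how this receiver state maps to a SACK report (the cumulative TSN plus the out-of-order blocks); the function name and the set-based representation are assumptions:

```python
def sack_report(last_cum_tsn, received_tsns):
    """Given the last cumulative TSN and the set of TSNs present in the
    receive queue, compute the new cumulative TSN (highest in-order
    value) and the out-of-order blocks, as a SACK would report them."""
    cum_tsn = last_cum_tsn
    while cum_tsn + 1 in received_tsns:   # advance over in-order chunks
        cum_tsn += 1
    blocks, start, end = [], None, None
    for tsn in sorted(t for t in received_tsns if t > cum_tsn):
        if start is None:
            start = end = tsn
        elif tsn == end + 1:              # extends the current block
            end = tsn
        else:                             # gap: close the block, start a new one
            blocks.append((start, end))
            start = end = tsn
    if start is not None:
        blocks.append((start, end))
    return cum_tsn, blocks

# Matches the example above: 21-23 in order, blocks 26-28 and 31-34
print(sack_report(20, {21, 22, 23, 26, 27, 28, 31, 32, 33, 34}))
```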
Sender Site
At the sender site, there are two buffers (queues): a sending queue and a retransmission
queue.
Three variables are used - rwnd, inTransit, and curTSN - as described in the previous
section.
The following figure shows a typical design.
SWITCHING
The technique of transferring the information from one computer network to another network is
known as switching.
Switching in a computer network is achieved by using switches.
A switch is a small hardware device which is used to join multiple computers together
with one local area network (LAN).
Switches are devices capable of creating temporary connections between two or more
devices linked to the switch.
Switches are used to forward the packets based on MAC addresses.
A switch transfers data only to the device that has been addressed. It verifies
the destination address to route the packet appropriately.
It operates in full-duplex mode.
It does not broadcast the message, which conserves bandwidth.
Advantages of Switching:
Switch increases the bandwidth of the network.
It reduces the workload on individual PCs as it sends the information to only that device which
has been addressed.
It increases the overall performance of the network by reducing the traffic on the network.
There will be fewer frame collisions, as a switch creates a separate collision domain for each connection.
Disadvantages of Switching:
A Switch is more expensive than network bridges.
Network connectivity issues cannot be easily diagnosed through a switch.
Proper designing and configuration of the switch are required to handle multicast packets.
Types of Switching Techniques
PACKET SWITCHING
Packet switching is a switching technique in which the message is divided into smaller
pieces that are sent individually.
The message splits into smaller pieces known as packets and packets are given a unique number
to identify their order at the receiving end.
Every packet contains some information in its headers such as source address, destination
address and sequence number.
Packets travel across the network, each taking the shortest available path.
All the packets are reassembled at the receiving end in correct order.
If any packet is missing or corrupted, a message is sent asking the sender to resend it.
If all packets arrive in the correct order, an acknowledgment message is sent.
Advantages of Packet Switching:
Cost-effective
Reliable
Efficient
A switch treats each packet independently. In this example, all four packets (or
datagrams) belong to the same message, but they may travel different paths to reach their destination.
Routing Table
In this type of network, each switch (or packet switch) has a routing table which is based on the
destination address. The routing tables are dynamic and are updated periodically. The
destination addresses and the corresponding forwarding output ports are recorded in the tables.
In binary notation, an IPv4 address is displayed as 32 bits. To make the address more readable, one
or more spaces are usually inserted between bytes (8 bits).
In dotted-decimal notation, IPv4 addresses are usually written in decimal form with a decimal
point (dot) separating the bytes. Each number in the dotted-decimal notation is between 0 and
255.
In hexadecimal notation, each hexadecimal digit is equivalent to four bits. This means that a
32-bit address has 8 hexadecimal digits. This notation is often used in network programming.
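The three notations can be related with a short Python sketch (the helper name is our own):

```python
def notations(dotted):
    """Convert a dotted-decimal IPv4 address to binary and hexadecimal
    notation, as described above. Sketch only."""
    octets = [int(b) for b in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= b <= 255 for b in octets)
    binary = " ".join(f"{b:08b}" for b in octets)   # a space between bytes
    hexa = "".join(f"{b:02X}" for b in octets)      # 8 hexadecimal digits
    return binary, hexa

print(notations("192.168.1.1"))
```

For 192.168.1.1 this yields the 32-bit binary form 11000000 10101000 00000001 00000001 and the hexadecimal form C0A80101.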
HIERARCHY IN IPV4 ADDRESSING
In any communication network that involves delivery, the addressing system is
hierarchical.
A 32-bit IPv4 address is also hierarchical, but divided only into two parts.
The first part of the address, called the prefix, defines the network (Net ID); the second
part of the address, called the suffix, defines the node (Host ID).
The prefix length is n bits and the suffix length is (32- n) bits.
Class A
In Class A, an IP address is assigned to those networks that contain a large number of hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
In Class A, the first (highest order) bit of the first octet is always set to 0, and the remaining
7 bits determine the network ID.
The other 24 bits determine the host ID in any network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B
In Class B, an IP address is assigned to those networks that range from small- sized to large-
sized networks.
The Network ID is 16 bits long.
The Host ID is 16 bits long.
In Class B, the two higher order bits of the first octet are always set to 10, and the remaining
14 bits determine the network ID.
The other 16 bits determine the Host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C
In Class C, an IP address is assigned to only small-sized networks.
The Network ID is 24 bits long.
The host ID is 8 bits long.
In Class C, the higher order bits of the first octet are always set to 110, and the
remaining 21 bits determine the network ID.
The 8 bits of the host ID determine the host in a network.
The total number of networks in Class C = 2^21 = 2,097,152 network addresses
The total number of hosts in Class C = 2^8 - 2 = 254 host addresses
Class D
In Class D, an IP address is reserved for multicast addresses.
It does not support subnetting.
The higher order bits of the first octet are always set to 1110, and the remaining bits
identify the multicast group address.
Class E
In Class E, an IP address is used for the future use or for the research and development
purposes.
It does not support subnetting.
The higher order bits of the first octet are always set to 1111, and the remaining bits
determine the host ID in any network.
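The classful rules above can be summarized in a short sketch; the function name and the decimal boundary tests (equivalent to checking the leading bits of the first octet) are our own:

```python
def address_class(dotted):
    """Classify an IPv4 address by the leading bits of its first octet:
    0 -> A, 10 -> B, 110 -> C, 1110 -> D, 1111 -> E."""
    first = int(dotted.split(".")[0])
    if first < 128:     # leading bit  0       (0-127)
        return "A"
    if first < 192:     # leading bits 10      (128-191)
        return "B"
    if first < 224:     # leading bits 110     (192-223)
        return "C"
    if first < 240:     # leading bits 1110    (224-239)
        return "D"
    return "E"          # leading bits 1111    (240-255)

# Host counts per class, excluding the network and broadcast addresses
hosts = {"A": 2**24 - 2, "B": 2**16 - 2, "C": 2**8 - 2}
print(address_class("192.4.153.7"), hosts["C"])
```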
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
A DHCP packet is actually sent using a protocol called the User Datagram Protocol (UDP).
INTERNET PROTOCOL
The Internet Protocol is the key tool used today to build scalable, heterogeneous internetworks.
IP runs on all the nodes (both hosts and routers) in a collection of networks
IP defines the infrastructure that allows these nodes and networks to function as a single logical
internetwork.
IP SERVICE MODEL
Service Model defines the host-to-host services that we want to provide
The main concern in defining a service model for an internetwork is that we can provide a host-
to-host service only if this service can somehow be provided over each of the underlying physical
networks.
The Internet Protocol is the key tool used today to build scalable, heterogeneous internetworks.
The IP service model can be thought of as having two parts:
A GLOBAL ADDRESSING SCHEME - which provides a way to identify all hosts in the
internetwork
A DATAGRAM DELIVERY MODEL – A connectionless model of data delivery.
Version Specifies the version of IP. Two versions exist – IPv4 and IPv6.
HLen Specifies the length of the header
TOS An indication of the parameters of the quality of service desired
(Type of Service) such as Precedence, Delay, Throughput and Reliability.
Length Length of the entire datagram, including the header. The maximum
size of an IP datagram is 65,535 (2^16 - 1) bytes
Ident Uniquely identifies the datagram. Used for
(Identification) fragmentation and re-assembly.
Flags Used to control whether routers are allowed to fragment a packet.
The M (more fragments) bit is set to 1 in all fragments except the last.
Offset Indicates where in the datagram, this fragment belongs.
(Fragmentation The fragment offset is measured in units of 8 octets (64 bits). The
offset) first fragment has offset zero.
TTL Indicates the maximum time the datagram is allowed to
(Time to Live) remain in the network. If this field contains the value zero, then the
datagram must be destroyed.
Protocol Indicates the next level protocol used in the data portion of the
datagram
Checksum Used to detect the processing errors introduced into the packet
Source Address The IP address of the original sender of the packet.
Destination The IP address of the final destination of the packet.
Address
Options This is optional field. These options may contain values for options
such as Security, Record Route, Time Stamp, etc.
Pad Used to ensure that the internet header ends on a 32-bit boundary.
The padding is zero.
Fragmentation of a datagram will only be necessary if the path to the destination includes a network
with a smaller MTU.
When a host sends an IP datagram, it can choose any size that it wants.
Fragmentation typically occurs in a router when it receives a datagram that it wants to forward
over a network that has an MTU that is smaller than the received datagram.
Each fragment is itself a self-contained IP datagram that is transmitted over a sequence of
physical networks, independent of the other fragments.
Each IP datagram is re-encapsulated for each physical network over which it travels.
For example, consider an Ethernet network that accepts packets up to 1500 bytes long.
This leaves two choices for the IP service model:
Make sure that all IP datagrams are small enough to fit inside one packet on any network
technology
Provide a means by which packets can be fragmented and reassembled when they are too big to
go over a given network technology.
Fragmentation produces smaller, valid IP datagrams that can be readily reassembled into the
original datagram upon receipt, independent of the order of their arrival.
Example:
The original packet starts at the client; the fragments are reassembled at the server.
The value of the identification field is the same in all fragments, as is the value of the flags field
with the more bit set for all fragments except the last.
Also, the value of the offset field for each fragment is shown.
Although the fragments arrived out of order at the destination, they can be correctly
reassembled.
The value of the offset field is always relative to the original datagram.
Even if each fragment follows a different path and arrives out of order, the final destination
host can reassemble the original datagram from the fragments received (if none of them is lost)
using the following strategy:
1) The first fragment has an offset field value of zero.
2) Divide the length of the first fragment by 8. The second fragment has an offset value equal to that
result.
3) Divide the total length of the first and second fragment by 8. The third fragment has an offset
value equal to that result.
4) Continue the process. The last fragment has its M (more) bit set to 0.
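The offset rules can be sketched as follows; the function name and the example payload sizes are illustrative assumptions (1480 bytes is a typical data payload per fragment on a 1500-byte Ethernet MTU with a 20-byte IP header):

```python
def fragment_offsets(total_len, mtu_payload):
    """Compute (offset, length, more_bit) for each fragment of a datagram
    payload, following the rules above: offsets are measured in 8-byte
    units and are always relative to the original datagram. Sketch only."""
    assert mtu_payload % 8 == 0, "all but the last fragment must carry a multiple of 8 bytes"
    frags, sent = [], 0
    while sent < total_len:
        size = min(mtu_payload, total_len - sent)
        more = 1 if sent + size < total_len else 0   # M bit is 0 only on the last
        frags.append((sent // 8, size, more))
        sent += size
    return frags

# A 4000-byte payload carried in fragments of at most 1480 data bytes
print(fragment_offsets(4000, 1480))
```

The second fragment's offset, 185, is exactly 1480 / 8, matching step 2 of the reassembly strategy.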
Reassembly:
Reassembly is done at the receiving host and not at each router.
To enable these fragments to be reassembled at the receiving host, they all carry the same
identifier in the Ident field.
This identifier is chosen by the sending host and is intended to be unique among all the
datagrams that might arrive at the destination from this source over some reasonable time period.
Since all fragments of the original datagram contain this identifier, the reassembling host will
be able to recognize those fragments that go together.
For example, if a single fragment is lost, the receiver will still attempt to reassemble the
datagram, and it will eventually give up and have to garbage-collect the resources that were used
to perform the failed reassembly.
Hosts are now strongly encouraged to perform “path MTU discovery,” a process by which
fragmentation is avoided by sending packets that are small enough to traverse the link with the
smallest MTU in the path from sender to receiver.
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
An intruder may intercept an IP packet and make a copy of it.
Packet sniffing is a passive attack, in which the attacker does not change the contents of the
packet.
This type of attack is very difficult to detect because the sender and the receiver may never know
that the packet has been copied.
Although packet sniffing cannot be stopped, encryption of the packet can make the attacker’s
effort useless.
The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
The second type of attack is to modify the packet.
The attacker intercepts the packet, changes its contents, and sends the new packet to the
receiver.
The receiver believes that the packet is coming from the original sender.
This type of attack can be detected using a data integrity mechanism.
The receiver, before opening and using the contents of the message, can use this mechanism to
make sure that the packet has not been changed during the transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP packet that carries the source
address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming from one of the
customers.
This type of attack can be prevented using an origin authentication mechanism
IP Sec
The IP packets today can be protected from the previously mentioned attacks using a protocol
called IPsec (IP Security).
This protocol is used in conjunction with the IP protocol.
IPsec protocol creates a connection-oriented service between two entities in which they can
exchange IP packets without worrying about the three attacks such as Packet Sniffing, Packet
Modification and IP Spoofing.
IP Sec provides the following four services:
1) Defining Algorithms and Keys: The two entities that want to create a secure channel between
themselves can agree on some available algorithms and keys to be used for security purposes.
2) Packet Encryption: The packets exchanged between two parties can be encrypted for privacy
using one of the encryption algorithms and a shared key agreed upon in the first step. This makes
the packet sniffing attack useless.
3) Data Integrity: Data integrity guarantees that the packet is not modified during the
transmission. If the received packet does not pass the data integrity test, it is discarded. This
prevents the second attack, packet modification.
4) Origin Authentication: IPsec can authenticate the origin of the packet to be sure that the packet
is not created by an imposter. This can prevent IP spoofing attacks.
Echo Request & Reply - The combination of echo-request and echo-reply messages determines
whether two systems can communicate with each other.
Timestamp Request & Reply - Two machines can use the timestamp request and reply
messages to determine the round-trip time (RTT).
Address Mask Request & Reply - To obtain its subnet mask, a host sends an address mask
request message to the router, which responds with an address mask reply message.
Router Solicitation/Advertisement - A host broadcasts a router solicitation message to discover
a router. A router broadcasts its routing information with a router advertisement message.
ICMP MESSAGE FORMAT
An ICMP message has an 8-byte header and a variable-size data section.
Type Defines the type of the message
Code Specifies the reason for the particular message type
Checksum Used for error detection
Rest of the header Specific for each message type
Data Used to carry information
Identifier Used to match the request with the reply
Sequence Number Sequence Number of the ICMP packet
ICMP DEBUGGING TOOLS
Two tools are used for debugging purpose. They are (1) Ping (2) Traceroute
Ping
The ping program is used to find if a host is alive and responding.
The source host sends ICMP echo-request messages; the destination, if alive, responds with
ICMP echo-reply messages.
The ping program sets the identifier field in the echo-request and echo-reply message and
starts the sequence number from 0; this number is incremented by 1 each time a new message is
sent.
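As a sketch of what ping builds, the following constructs an ICMP echo-request header (type 8, code 0) with the identifier and sequence number and fills in the Internet checksum. The function names are our own, and no packet is actually sent.

```python
import struct

def checksum(data):
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                     # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(identifier, seq, payload=b""):
    """Build an ICMP echo-request (type 8, code 0) the way ping does,
    with the identifier field and an incrementing sequence number."""
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, seq)  # checksum = 0 first
    csum = checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, identifier, seq) + payload

msg = echo_request(12, 0, b"ping")
print(msg.hex())
```

A correctly checksummed ICMP message has the property that recomputing the checksum over the whole message yields zero, which is how receivers verify it.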
To send a datagram over a network, we need both the logical and physical address.
IP addresses are made up of 32 bits whereas MAC addresses are made up of 48 bits.
ARP enables each host to build a table of IP address and corresponding physical address.
ARP relies on broadcast support from physical networks.
The Address Resolution Protocol is a request and response protocol.
The types of ARP messages are:
1. ARP request
2. ARP reply
ARP Operation
ARP maintains a cache table in which MAC addresses are mapped to IP addresses.
If a host wants to send an IP datagram to a host, it first checks for a mapping in the cache table.
If no mapping is found, it needs to invoke the Address Resolution Protocol over the network.
It does this by broadcasting an ARP query onto the network.
This query contains the target IP address.
Each host receives the query and checks to see if it matches its IP address.
If it does match, the host sends a response message that contains its link- layer address (MAC
Address) back to the originator of the query.
The originator adds the information contained in this response to its ARP table.
For example,
To determine system B’s physical (MAC) address, system A broadcasts an ARP request
containing B’s IP address to all machines on its network.
All nodes except the destination discard the packet but update their ARP table.
Destination host (System B) constructs an ARP Response packet
ARP Response is unicast and sent back to the source host (System A).
Source stores target Logical & Physical address pair in its ARP table from ARP Response.
If target node does not exist on same network, ARP request is sent to default router.
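A minimal sketch of the cache-table logic described above (the class and method names are our own, and real ARP caches also age out entries, which is omitted here):

```python
class ArpCache:
    """Sketch of the ARP cache table: IP-to-MAC mappings, consulted
    before any broadcast query is sent."""
    def __init__(self):
        self.table = {}

    def resolve(self, ip):
        """Return the cached MAC address, or None, in which case the
        caller must broadcast an ARP request and wait for the unicast reply."""
        return self.table.get(ip)

    def learn(self, ip, mac):
        """Update the table from an ARP request or reply we received."""
        self.table[ip] = mac

cache = ArpCache()
cache.learn("192.168.1.7", "aa:bb:cc:dd:ee:ff")   # learned from an ARP reply
print(cache.resolve("192.168.1.7"))
```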
ARP Packet
Routing and protocols: Unicast routing - Distance Vector Routing - RIP - Link State
Routing – OSPF – Path-vector routing - BGP - Multicast Routing: DVMRP – PIM.
ROUTING INTRODUCTION:
Routing is the process of selecting best paths in a network.
In unicast routing, a packet is routed, hop by hop, from its source to its destination with the help of
forwarding tables.
Routing a packet from its source to its destination means routing the packet from a source
router (the default router of the source host) to a destination router (the router connected to the
destination network).
The source host needs no forwarding table because it delivers its packet to the default router in its
local network.
The destination host needs no forwarding table either because it receives the packet from its
default router in its local network.
Only the intermediate routers in the networks need forwarding tables.
NETWORK AS A GRAPH
The Figure below shows a graph representing a network.
The nodes of the graph, labeled A through G, may be hosts, switches, routers, or networks.
The edges of the graph correspond to the network links.
Each edge has an associated cost.
The basic problem of routing is to find the lowest-cost path between any two nodes, where the
cost of a path equals the sum of the costs of all the edges that make up the path.
This static approach has several problems:
It does not deal with node or link failures.
It does not consider the addition of new nodes or links.
It implies that edge costs cannot change.
For these reasons, routing is achieved by running routing protocols among the
nodes.
These protocols provide a distributed, dynamic way to solve the problem of finding the
lowest-cost path in the presence of link and node failures and changing edge costs.
UNICAST ROUTING ALGORITHMS
There are three main classes of routing protocols:
1) Distance Vector Routing Algorithm – Routing Information Protocol
2) Link State Routing Algorithm – Open Shortest Path First Protocol
3) Path-Vector Routing Algorithm - Border Gateway Protocol
The initial table for all the nodes are given below
Each node sends its initial table (distance vector) to neighbors and receives their estimate.
Node A sends its table to nodes B, C, E & F and receives tables from nodes B, C, E & F.
Each node updates its routing table by comparing with each of its neighbor's table
For each destination, Total Cost is computed as:
Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
If Total Cost < Cost, then
Cost = Total Cost and Next Hop = Neighbor
Node A learns from C's table to reach node D and from F's table to reach node G.
Total Cost to reach node D via C = Cost (A to C) + Cost (C to D) = 1 + 1 = 2.
Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
Total Cost to reach node G via F = Cost (A to F) + Cost (F to G) = 1 + 1 = 2
Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
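The update rule in this example can be sketched in Python; the table representation (destination mapped to a cost/next-hop pair) and the function name are illustrative assumptions:

```python
INF = float("inf")

def dv_update(me, my_table, neighbor, cost_to_neighbor, neighbor_table):
    """One distance-vector update step, as in the example above: for each
    destination, if cost(to neighbor) + neighbor's cost to the destination
    is smaller than what we already know, adopt it with the neighbor as
    next hop. Tables map destination -> (cost, next_hop)."""
    for dest, (n_cost, _) in neighbor_table.items():
        if dest == me:                       # skip routes back to ourselves
            continue
        total = cost_to_neighbor + n_cost    # Total Cost formula from the text
        cur_cost, _ = my_table.get(dest, (INF, None))
        if total < cur_cost:
            my_table[dest] = (total, neighbor)
    return my_table

# A learns a 2-hop route to D through C, matching the worked example
a = {"B": (1, "B"), "C": (1, "C"), "D": (INF, None)}
c = {"A": (1, "A"), "B": (1, "B"), "D": (1, "D")}
dv_update("A", a, "C", 1, c)
print(a["D"])
```

After the update, A's entry for destination D is (2, "C"): cost 2 with C as next hop.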
Each node builds complete routing table after few exchanges amongst its neighbors.
System stabilizes when all nodes have complete routing information, i.e.,
convergence.
Routing tables are exchanged periodically or in case of triggered update.
The final distances stored at each node is given below:
Periodic Update
In this case, each node automatically sends an update message every so often, even if nothing has
changed.
The frequency of these periodic updates varies from protocol to protocol, but it is typically on the
order of several seconds to several minutes.
Triggered Update
In this case, a node sends an update whenever it notices a link failure or receives an update from
one of its neighbors that causes it to change one of the routes in its routing table.
Whenever a node's routing table changes, it sends an update to its neighbors, which may lead to
a change in their tables, causing them to send an update to their neighbors.
ROUTING INFORMATION PROTOCOL (RIP)
RIP is an intra-domain routing protocol based on distance-vector algorithm.
Example
Routers advertise the cost of reaching networks; the cost of each link is 1 hop. For example,
router C advertises to A that it can reach networks 2 and 3 at cost 0 (directly connected), networks
5 and 6 at cost 1, and network 4 at cost 2.
Each router updates cost and next hop for each network number.
Infinity is defined as 16, i.e., no route can have more than 15 hops. Therefore, RIP can be
used only in small networks.
Advertisements are sent every 30 seconds or in case of triggered update.
Spanning Trees
In path-vector routing, the path from a source to all destinations is determined by the best
spanning tree.
The best spanning tree is not the least-cost tree.
It is the tree determined by the source when it imposes its own policy.
If there is more than one route to a destination, the source can choose the route that meets its
policy best.
A source may apply several policies at the same time.
One of the common policies uses the minimum number of nodes to be visited. Another common
policy is to avoid some nodes as the middle node in a route.
The spanning trees are made, gradually and asynchronously, by each node. When a node is
booted, it creates a path vector based on the information it can obtain about its immediate neighbor.
A node sends greeting messages to its immediate neighbors to collect these pieces of
information.
Each node, after the creation of the initial path vector, sends it to all its immediate neighbors.
Each node, when it receives a path vector from a neighbor, updates its path vector using the
formula
Each AS has a border router (gateway) through which packets enter and leave that AS. In the
above figure, R3 and R4 are border routers.
One of the routers in each autonomous system is designated as the BGP speaker.
BGP speakers exchange reachability information with one another; this exchange is known as
an external BGP session.
BGP advertises complete path as enumerated list of AS (path vector) to reach a particular
network.
Paths must be loop-free, i.e., each AS appears in the list only once.
For example, the backbone network advertises that networks 128.96 and 192.4.153 can be reached
along the path <AS1, AS2, AS4>.
If there are multiple routes to a destination, BGP speaker chooses one based on policy.
Speakers need not advertise any route to a destination, even if one exists.
Advertised paths can be cancelled if a link/node on the path goes down. This negative
advertisement is known as a withdrawn route.
Routes are not sent repeatedly. If there is no change, keepalive messages are sent.
iBGP - interior BGP
Used by routers to update routing information learnt from other speakers to routers inside the
autonomous system.
Each router in the AS is able to determine the appropriate next hop for all prefixes.
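The two core BGP behaviors above, rejecting looping paths and choosing among routes by policy, can be sketched as follows (the shortest-AS-path and preferred-neighbor policies are illustrative assumptions):

```python
def accept_advertisement(my_asn, as_path):
    """A BGP speaker rejects any path that already contains its own AS
    number -- this is how path-vector routing guarantees loop-free routes."""
    return my_asn not in as_path

def choose_route(routes, preferred_first_hop=None):
    """Illustrative policy: prefer routes through a given neighbor AS if
    any exist, otherwise fall back to the shortest AS path."""
    if preferred_first_hop is not None:
        via = [r for r in routes if r[0] == preferred_first_hop]
        if via:
            return min(via, key=len)
    return min(routes, key=len)

# 128.96 reachable along <AS1, AS2, AS4> (the example from the text).
assert accept_advertisement("AS3", ["AS1", "AS2", "AS4"])
assert not accept_advertisement("AS2", ["AS1", "AS2", "AS4"])   # loop!
assert choose_route([["AS1", "AS2", "AS4"], ["AS5", "AS4"]]) == ["AS5", "AS4"]
```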
MULTICAST ROUTING
To support multicast, a router must additionally have multicast forwarding tables that
indicate, based on multicast address, which links to use to forward the multicast packet.
Unicast forwarding tables collectively specify a set of paths.
Multicast forwarding tables collectively specify a set of trees -Multicast distribution trees.
Multicast routing is the process by which multicast distribution trees are determined.
Internet multicast is implemented on physical networks that support broadcasting by
extending forwarding functions.
Pruning:
Sent from routers receiving multicast traffic for which they have no active group members
Grafting:
Used after a branch has been pruned back
Goes from router to router until a router active on the multicast group is reached
Shared Tree
When a router sends Join message for group G to RP, it goes through a set of routers.
Join message is wild carded (*), i.e., it is applicable to all senders. Routers create an entry (*, G)
in its forwarding table for the shared tree.
Interface on which the Join arrived is marked to forward packets for that group.
Forwards Join towards rendezvous router RP.
Eventually, the message arrives at RP. Thus a shared tree with RP as root is formed.
Example
Router R4 sends Join message for group G to rendezvous router RP.
Join message is received by router R2. It makes an entry (*, G) in its table and forwards
the message to RP.
When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table entry already created for that group.
As routers send Join message for a group, branches are added to the tree, i.e., shared.
Multicast packets sent from hosts are forwarded to designated router RP.
Suppose router R1 receives a packet sent to group G. R1 has no
state for group G.
R1 encapsulates the multicast packet in a Register message; the packet is tunneled along the
way to RP.
RP decapsulates the packet and sends the multicast packet onto the shared tree, towards
R2.
R2 forwards the multicast packet to routers R4 and R5 that have members for group G.
Source-Specific Tree
RP can force routers to know about group G, by sending Join message to the sending host, so that
tunneling can be avoided.
Intermediary routers create sender-specific entry (S, G) in their tables. Thus a source-specific
route from R1 to RP is formed.
If there is high rate of packets sent from a sender to a group G, then shared- tree is replaced by
source-specific tree with sender as root.
Example
Rendezvous router RP sends a Join message to the host router R1. Router R3
learns about group G through the message sent by RP.
Router R4 sends a source-specific Join due to the high rate of packets from the sender. Router R2
learns about group G through the message sent by R4.
Eventually a source-specific tree is formed with R1 as root.
Analysis of PIM
PIM is protocol independent because the tree is built from Join messages forwarded along unicast
shortest paths, regardless of which unicast protocol computed those paths. Shared trees
are more scalable than source-specific trees.
Source-specific trees enable more efficient routing than shared trees.
UNIT V – DATA LINK AND PHYSICAL LAYERS
Data Link Layer – Framing – Flow control – Error control – Data-Link Layer
Protocols – HDLC –PPP - Media Access Control – Ethernet Basics –
CSMA/CD – Virtual LAN – Wireless LAN (802.11) - Physical Layer: Data
and Signals - Performance – Transmission media - Switching –Circuit
Switching.
Course Outcomes
CO5: Explain the basic layers and its functions in computer networks. (K2)
1. FRAMING
The data-link layer packs the bits of a message into frames, so that each frame is
distinguishable from another.
Although the whole message could be packed in one frame, that is not normally done.
One reason is that a frame can be very large, making flow and error control very inefficient.
When a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole frame.
When a message is divided into smaller frames, a single-bit error affects only that small
frame.
Framing in the data-link layer separates a message from one source to a destination by
adding a sender address and a destination address.
The destination address defines where the packet is to go; the sender address helps the
recipient acknowledge the receipt.
Frame Size
Frames can be of fixed or variable size.
Frames of fixed size are called cells. In fixed-size framing, there is no need for defining the
boundaries of the frames; the size itself can be used as a delimiter.
In variable-size framing, we need a way to define the end of one frame and the beginning of the
next. Two approaches were used for this purpose: a character-oriented approach and a bit-oriented
approach.
Character-Oriented Framing
In character-oriented (or byte-oriented) framing, data to be carried are 8-bit characters.
To separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and
the end of a frame.
The flag, composed of protocol-dependent special characters, signals the start or end of a
frame.
Any character used for the flag could also be part of the information.
If this happens, when it encounters this pattern in the middle of the data, the receiver thinks
it has reached the end of the frame.
To fix this problem, a byte-stuffing strategy was added to character- oriented
framing.
Byte Stuffing (or) Character Stuffing
Byte stuffing is the process of adding one extra byte whenever there is a flag or escape
character in the text.
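The stuffing and unstuffing described in this section can be sketched as follows; the flag (0x7E) and escape (0x7D) byte values are illustrative, since the actual values are protocol-dependent:

```python
FLAG, ESC = b"\x7e", b"\x7d"  # illustrative delimiter and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    """Sender: precede every flag or escape byte in the data with ESC."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC              # stuff the extra escape byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver: drop each ESC and treat the byte after it as data,
    not as a delimiting flag."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and bytes([b]) == ESC:
            escaped = True
            continue
        out.append(b)
        escaped = False
    return bytes(out)

data = b"AB\x7eCD\x7dE"                  # data containing flag and ESC bytes
assert byte_stuff(b"\x7e") == b"\x7d\x7e"
assert byte_unstuff(byte_stuff(data)) == data   # round trip is lossless
```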
In byte stuffing, a special byte is added to the data section of the frame when there is a
character with the same pattern as the flag.
The data section is stuffed with an extra byte. This byte is usually called the escape
character (ESC) and has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from the data section and
treats the next character as data, not as a delimiting flag.
Bit-Oriented Framing
In bit-oriented framing, the data section of a frame is a sequence of bits to be interpreted by the
upper layer as text, graphic, audio, video, and so on.
In addition to headers and trailers), we still need a delimiter to separate one frame from the other.
Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define the beginning
and the end of the frame
If the flag pattern appears in the data, the receiver must be informed that this is not the end of the
frame.
This is done by stuffing 1 single bit (instead of 1 byte) to prevent the pattern from looking like a
flag. The strategy is called bit stuffing.
Bit Stuffing
Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in
the data, so that the receiver does not mistake the pattern 0111110 for a flag.
In bit stuffing, if a 0 followed by five consecutive 1 bits is encountered, an extra 0 is added.
This extra stuffed bit is eventually removed from the data by the receiver.
The extra bit is added after one 0 followed by five 1’s regardless of the value of the next bit.
This guarantees that the flag field sequence does not inadvertently appear in the frame.
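Bit stuffing and unstuffing can be sketched on bit strings as follows (for simplicity this sketch stuffs after any run of five 1s, which also covers a run at the start of the data):

```python
def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert an extra 0 so the flag pattern
    01111110 can never appear inside the data."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")     # the stuffed bit
            ones = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that follows every run of five 1s."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:                # this is the stuffed 0 -- drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            skip = True
    return "".join(out)

assert bit_stuff("0111110") == "01111100"     # a 0 is stuffed after five 1s
data = "0111111111"
assert bit_unstuff(bit_stuff(data)) == data   # round trip is lossless
assert "111111" not in bit_stuff(data)        # six 1s in a row never occur
```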
FLOW CONTROL
Flow control refers to a set of procedures used to restrict the amount of data that the
sender can send before waiting for acknowledgment.
The receiving device has limited speed and limited memory to store the data.
Therefore, the receiving device must be able to inform the sending device to stop the transmission
temporarily before the limits are reached.
It requires a buffer, a block of memory for storing the information until it is processed.
STOP-AND-WAIT
The simplest scheme is the stop-and-wait algorithm.
In the Stop-and-wait method, the sender waits for an acknowledgement after every
frame it sends.
Only when the acknowledgement is received is the next frame sent.
The process of alternately sending and waiting of a frame continues until the sender
transmits the EOT (End of transmission) frame.
If the acknowledgement is not received within the allotted time, then the sender assumes that the
frame is lost during the transmission, so it will retransmit the frame.
The acknowledgement may not arrive because of the following three scenarios:
1. Original frame is lost
2. ACK is lost
3. ACK arrives after the timeout
Advantage of Stop-and-wait
The Stop-and-wait method is simple as each frame is checked and acknowledged before the next
frame is sent
Disadvantages of Stop-And-Wait
In stop-and-wait, at any point in time, there is only one frame that is sent and waiting to be
acknowledged.
This is not a good use of transmission medium.
To improve efficiency, multiple frames should be in transition while waiting for ACK.
PIGGYBACKING
Piggybacking is the technique of temporarily delaying an acknowledgement and attaching it to the
next outgoing data frame, so that a separate ACK frame need not be sent.
SLIDING WINDOW
The Sliding Window is a method of flow control in which a sender can transmit the several
frames before getting an acknowledgement.
In Sliding Window Control, multiple frames can be sent one after another, so the
capacity of the communication channel can be utilized efficiently.
A single ACK can acknowledge multiple frames.
Sliding Window refers to imaginary boxes at both the sender and receiver end.
The window can hold the frames at either end, and it provides the upper limit on the number
of frames that can be transmitted before the acknowledgement.
Frames can be acknowledged even when the window is not completely filled.
The window has a specific size n; the frames are numbered modulo n, which means they
are numbered from 0 to n-1.
For example, if n = 8, the frames are numbered from
0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
The size of the window is represented as n-1. Therefore, maximum n-1 frames can be
sent before acknowledgement.
When the receiver sends the ACK, it includes the number of the next frame that it wants to
receive.
For example, to acknowledge the string of frames ending with frame number 4, the
receiver will send the ACK containing the number 5.
When the sender sees the ACK with the number 5, it knows that the frames from 0
through 4 have been received.
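The cumulative ACK interpretation and the modulo-n numbering above can be sketched as follows (the function name is illustrative):

```python
MODULUS = 8                 # frames numbered 0..7, then wrapping around
MAX_WINDOW = MODULUS - 1    # at most n-1 = 7 frames outstanding

def frames_confirmed_by(ack_num, first_outstanding=0, modulus=MODULUS):
    """A cumulative ACK carries the number of the next frame the receiver
    wants, so every outstanding frame before it (in modulo order) is
    confirmed as received."""
    received, f = [], first_outstanding
    while f != ack_num:
        received.append(f)
        f = (f + 1) % modulus
    return received

assert frames_confirmed_by(5) == [0, 1, 2, 3, 4]   # ACK 5 confirms 0..4
assert frames_confirmed_by(1, first_outstanding=6) == [6, 7, 0]  # wraps
```

Keeping the window at n-1 (not n) frames is what lets the receiver distinguish "all frames received" from "no frames received" under modulo-n numbering.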
(Figure: sender window and receiver window)
3. ERROR CONTROL
Data can be corrupted during transmission. For reliable communication, errors must be
detected and corrected. Error Control is a technique of error detection and retransmission.
TYPES OF ERRORS
SINGLE-BIT ERROR
The term Single-bit error means that only one bit of a given data unit (such as byte, character,
data unit or packet) is changed from 1 to 0 or from 0 to 1.
BURST ERROR
The term Burst Error means that two or more bits in the data unit have changed from 1
to 0 or from 0 to 1.
PARITY CHECK
One bit, called parity bit is added to every data unit so that the total number of 1’s in the
data unit becomes even (or) odd.
The source then transmits this data via a link, and bits are checked and verified at the
destination.
Data is considered accurate if the count of 1s has the same parity (even or odd) as that
transmitted from the source.
This technique is the most common and least complex method.
1. Even parity – Maintain even number of 1s E.g., 1011 → 1011 1
2. Odd parity – Maintain odd number of 1s E.g., 1011 → 1011 0
(Figures: parity generation at the sender side, and parity checking at the receiver side,
for both the error-free case and the error case)
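The parity rules above can be sketched as follows:

```python
def add_parity(data: str, even: bool = True) -> str:
    """Sender: append one parity bit so the total number of 1s in the
    data unit becomes even (or odd)."""
    ones = data.count("1")
    bit = ones % 2 if even else (ones + 1) % 2
    return data + str(bit)

def check_parity(unit: str, even: bool = True) -> bool:
    """Receiver: accept the unit only if the count of 1s matches the
    agreed parity."""
    return unit.count("1") % 2 == (0 if even else 1)

assert add_parity("1011") == "10111"              # even parity: 1011 -> 1011 1
assert add_parity("1011", even=False) == "10110"  # odd parity:  1011 -> 1011 0
assert check_parity("10111")                      # arrives intact: accepted
assert not check_parity("10110")                  # one bit flipped: detected
```

Note that flipping any two bits leaves the parity unchanged, which is why a single parity bit detects only odd numbers of bit errors.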
Polynomials
A pattern of 0s and 1s can be represented as a polynomial with coefficients of 0 and 1.
The power of each term shows the position of the bit; the coefficient shows the value of the
bit.
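This bit-to-polynomial mapping can be sketched as follows (the function name is illustrative):

```python
def bits_to_polynomial(bits: str) -> str:
    """The power of each term is the bit position (counted from the right);
    the coefficient is the bit value, so only positions holding a 1 appear
    in the polynomial."""
    n = len(bits)
    terms = []
    for i, b in enumerate(bits):
        if b == "1":
            power = n - 1 - i
            terms.append("1" if power == 0 else
                         ("x" if power == 1 else f"x^{power}"))
    return " + ".join(terms) if terms else "0"

assert bits_to_polynomial("100011") == "x^5 + x + 1"  # a common CRC divisor
assert bits_to_polynomial("101") == "x^2 + 1"
```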
INTERNET CHECKSUM
The Internet checksum divides the data into 16-bit words and adds them using one's complement
arithmetic; the complement of the sum is the checksum. The receiver adds the received data and the
checksum in the same way and accepts the data if the final result, after complementing, is zero.
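A sketch of the Internet checksum computation (one's-complement sum of 16-bit words with the carries wrapped around, following the classic RFC 1071 algorithm; the sample packet bytes are arbitrary):

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, then complemented."""
    if len(data) % 2:
        data += b"\x00"           # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap carries around
    return ~total & 0xFFFF

packet = b"\x45\x00\x00\x1c"      # arbitrary sample data
cks = internet_checksum(packet)
# Receiver check: data plus its checksum sums to all 1s, so the
# complemented result is zero when no error occurred.
assert internet_checksum(packet + cks.to_bytes(2, "big")) == 0
```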
ERROR CONTROL
Error control includes both error detection and error correction.
Whenever an error is detected, specified frames are retransmitted
It allows the receiver to inform the sender if a frame is lost or damaged during transmission and
coordinates the retransmission of those frames by the sender.
Includes the following actions:
o Error detection
o Positive Acknowledgement (ACK): if the frame arrived with no errors
o Negative Acknowledgement (NAK): if the frame arrived with errors
o Retransmissions after Timeout: Frame is retransmitted after certain amount of time if no
acknowledgement was received
Error control in the data link layer is based on automatic repeat request (ARQ).
Categories of Error Control
STOP-AND-WAIT ARQ
Stop-and-wait ARQ is a technique used to retransmit the data in case of damaged or lost frames.
This technique works on the principle that the sender will not transmit the next frame until it
receives the acknowledgement of the last transmitted frame.
Two possibilities of the retransmission in Stop and Wait ARQ:
o Damaged Frame: When the receiver receives a damaged frame (i.e., the frame contains an
error), it returns a NAK frame. For example, when the frame DATA 1 is sent and the receiver replies
with the ACK 0 frame, this means that DATA 1 has arrived correctly. The sender transmits the
next frame: DATA 0. It arrives undamaged, and the receiver returns ACK 1. The sender transmits
the third frame: DATA 1. The receiver detects an error and returns a NAK frame. The sender
retransmits the DATA 1 frame.
o Lost Frame: Sender is equipped with the timer and starts when the frame is transmitted.
Sometimes the frame has not arrived at the receiving end so that it cannot be acknowledged either
positively or negatively. The sender waits for acknowledgement until the timer goes off. If the timer
goes off, it retransmits the last transmitted frame.
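The retransmit-on-timeout behavior can be sketched as follows. The `channel` callback is a hypothetical stand-in for transmitting a frame and waiting for its ACK (it returns the ACK number, or None when the frame or its ACK is lost):

```python
def stop_and_wait_send(frames, channel, max_tries=5):
    """Stop-and-wait ARQ with alternating sequence numbers 0, 1, 0, 1, ...
    The sender resends a frame until the ACK naming the *next* expected
    frame arrives, giving up after max_tries timeouts."""
    seq = 0
    for frame in frames:
        for _ in range(max_tries):
            ack = channel(seq, frame)      # transmit and wait for the ACK
            if ack == 1 - seq:             # ACK carries next expected number
                break                      # delivered; stop retransmitting
        else:
            raise RuntimeError("link failure: no ACK after retries")
        seq = 1 - seq                      # alternate the sequence number

# A lossy channel that drops the first transmission of every frame.
attempts = {}
def channel(seq, frame):
    attempts[frame] = attempts.get(frame, 0) + 1
    if attempts[frame] == 1:
        return None                        # frame (or its ACK) was lost
    return 1 - seq                         # positive ACK

stop_and_wait_send(["DATA0", "DATA1"], channel)
assert attempts == {"DATA0": 2, "DATA1": 2}  # each frame needed one resend
```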
Sliding Window ARQ is a technique used for continuous transmission error control.
1. GO-BACK-N ARQ
o In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits
that frame and all the frames sent after it (i.e., every frame for which it has not received a positive ACK).
In the above figure, three frames (Data 0,1,2) have been transmitted before an error
discovered in the third frame.
The receiver discovers the error in Data 2 frame, so it returns the NAK 2 frame.
The damaged frame and all the frames transmitted after it (Data 2, 3, 4) are discarded.
Therefore, the sender retransmits the frames (Data2,3,4).
2. SELECTIVE-REJECT(REPEAT) ARQ
Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.
In this technique, only those frames are retransmitted for which negative
acknowledgement (NAK) has been received.
The receiver storage buffer keeps all the damaged frames on hold until the frame in error is
correctly received.
The receiver must have appropriate logic for reinserting the frames in the correct order.
The sender must have a searching mechanism that selects only the requested frame
for retransmission.
In the above figure, three frames (Data 0,1,2) have been transmitted before an error discovered in
the third frame.
The receiver discovers the error in Data 2 frame, so it returns the NAK 2 frame.
The damaged frame only (Data 2) is discarded.
The other subsequent frames (Data 3,4) are accepted.
Therefore, the sender retransmits only the damaged frame (Data2).
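The difference between the two sliding-window ARQ schemes can be sketched as follows (function names are illustrative):

```python
def go_back_n_retransmit(sent, nak):
    """Go-Back-N: the NAKed frame and everything sent after it are resent."""
    i = sent.index(nak)
    return sent[i:]

def selective_repeat_retransmit(sent, nak):
    """Selective Repeat: only the frame that drew the NAK is resent;
    later frames are buffered and accepted by the receiver."""
    return [nak]

sent = [0, 1, 2, 3, 4]           # Data 0..4 transmitted; NAK 2 comes back
assert go_back_n_retransmit(sent, 2) == [2, 3, 4]    # as in the figure
assert selective_repeat_retransmit(sent, 2) == [2]   # only Data 2
```

The trade-off is visible directly: Selective Repeat wastes less bandwidth but needs receiver buffering and reordering logic, while Go-Back-N keeps the receiver trivially simple.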
Four protocols have been defined for the data-link layer controls. They are
1. Simple Protocol
2. Stop-and-Wait Protocol
3. Go-Back-N Protocol
4. Selective-Repeat Protocol
1. SIMPLE PROTOCOL
The first protocol is a simple protocol with neither flow nor error control.
We assume that the receiver can immediately handle any frame it receives.
In other words, the receiver can never be overwhelmed with incoming frames.
The data-link layers of the sender and receiver provide transmission services for their network
layers
The data-link layer at the sender gets a packet from its network layer, makes a frame out of it, and
sends the frame.
The data-link layer at the receiver receives a frame from the link, extracts the packet from the
frame, and delivers the packet to its network layer.
NOTE :
1. STOP-AND-WAIT PROTOCOL
REFER STOP AND WAIT FROM FLOW CONTROL
2. GO-BACK-N PROTOCOL
REFER GO-BACK-N ARQ FROM ERROR CONTROL
3. SELECTIVE-REPEAT PROTOCOL
REFER SELECTIVE-REPEAT ARQ FROM ERROR CONTROL
HDLC FRAMES
HDLC defines three types of frames:
1. Information frames (I-frames) - used to carry user data
2. Supervisory frames (S-frames) - used to carry control information
3. Unnumbered frames (U-frames) – reserved for system management
Each type of frame serves as an envelope for the transmission of a different type of message.
Each frame in HDLC may contain up to six fields:
1. Beginning flag field
2. Address field
3. Control field
4. Information field (User Information/ Management Information)
5. Frame check sequence (FCS) field
6. Ending flag field
In multiple-frame transmissions, the ending flag of one frame can serve as the beginning flag of
the next frame.
Flag field - This field contains synchronization pattern 01111110, which identifies both the
beginning and the end of a frame.
Address field - This field contains the address of the secondary station. If a primary station
created the frame, it contains a ‘to’ address. If a secondary station creates the frame, it contains a
‘from’ address. The address field can be one byte or several bytes long, depending on the needs
of the network.
Control field. The control field is one or two bytes used for flow and error control.
Information field. The information field contains the user’s data from the network layer or
management information. Its length can vary from one network to another.
FCS field. The frame check sequence (FCS) is the HDLC error detection field. It can contain
either a 16-bit or a 32-bit CRC.
CONTROL FIELD FORMAT FOR THE DIFFERENT FRAME TYPES
Control Field for I-Frames
o I-frames are designed to carry user data from the network layer. In addition, they can include
flow-control and error-control information
o The first bit defines the type. If the first bit of the control field is 0, this means the frame is
an I-frame.
o The next 3 bits, called N(S), define the sequence number of the frame.
o The last 3 bits, called N(R), correspond to the acknowledgment number when piggybacking is
used.
o The single bit between N(S) and N(R) is called the P/F bit. If this bit is 1 it means poll (the frame
is sent by a primary station to a secondary).
If the first 2 bits of the control field are 11, this means the frame is a U-frame.
U-frame codes are divided into two sections: a 2-bit prefix before the P/F bit and a 3-bit suffix
after the P/F bit.
Together, these two segments (5 bits) can be used to create up to 32 different types of U-frames.
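Decoding the one-byte control field can be sketched as follows, taking the leftmost (most significant) bit of the byte as the "first" bit of the field diagrams; that bit-to-position mapping is an assumption of this sketch:

```python
def parse_control_byte(c: int) -> dict:
    """Decode a one-byte HDLC control field.

    Field layouts, first bit on the left:
      I-frame:  0 | N(S):3 | P/F | N(R):3
      S-frame: 1 0 | code:2 | P/F | N(R):3
      U-frame: 1 1 | code:2 | P/F | code:3
    """
    if c & 0x80 == 0:                              # first bit 0 -> I-frame
        return {"type": "I", "ns": (c >> 4) & 0x7,
                "pf": (c >> 3) & 0x1, "nr": c & 0x7}
    if c & 0x40 == 0:                              # first bits 10 -> S-frame
        return {"type": "S", "code": (c >> 4) & 0x3,
                "pf": (c >> 3) & 0x1, "nr": c & 0x7}
    return {"type": "U", "prefix": (c >> 4) & 0x3,  # first bits 11 -> U-frame
            "pf": (c >> 3) & 0x1, "suffix": c & 0x7}

# I-frame with N(S)=2, poll bit set, piggybacked N(R)=5: bits 0 010 1 101
assert parse_control_byte(0b00101101) == {"type": "I", "ns": 2,
                                          "pf": 1, "nr": 5}
```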
PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one or more bytes.
1. Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the flag is
01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 11000000.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
5. Payload − This carries the data from the network layer. The maximum length of the payload field
is 1500 bytes.
6. FCS − It is a 2-byte (16-bit) or 4-byte (32-bit) frame check sequence used for error detection. The
standard code used is CRC.
Dead: In dead phase the link is not used. There is no active carrier and the line is quiet.
Establish: Connection goes into this phase when one of the nodes start communication. In this
phase, two parties negotiate the options. If negotiation is successful, the system goes into
authentication phase or directly to networking phase.
Authenticate: This phase is optional. The two nodes may decide whether they need this phase
during the establishment phase. If they decide to proceed with authentication, they send several
authentication packets. If the result is successful, the connection goes to the networking phase;
otherwise, it goes to the termination phase.
Network: In network phase, negotiation for the network layer protocols takes place. PPP
specifies that two nodes establish a network layer agreement before data at the network layer can be
exchanged. This is because PPP supports several protocols at network layer. If a node is running
multiple protocols simultaneously at the network layer, the receiving node needs to know which
protocol will receive the data.
Open: In this phase, data transfer takes place. The connection remains in this phase until one of
the endpoints wants to end the connection.
Terminate: In this phase connection is terminated.
Components/Protocols of PPP
Three sets of components/protocols are defined to make PPP powerful:
Link Control Protocol (LCP)
Authentication Protocols (AP)
Network Control Protocols (NCP)
Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing, maintaining
and terminating links for transmission. It also provides negotiation mechanisms to set options
between the two endpoints. Both endpoints of the link must reach an agreement about the options
before the link can be established.
Authentication Protocols (AP) − Authentication means validating the identity of a user who needs
to access a set of resources. PPP has created two protocols for authentication -Password
Authentication Protocol and Challenge Handshake Authentication Protocol.
PAP
The Password Authentication Protocol (PAP) is a simple authentication procedure with a two-step
process:
a. The user who wants to access a system sends an authentication identification (usually the user
name) and a password.
b. The system checks the validity of the identification and password and either accepts or denies
connection.
CHAP
The Challenge Handshake Authentication Protocol (CHAP) is a three-way handshaking
authentication protocol that provides greater security than PAP. In this method, the password is kept
secret; it is never sent online.
a. The system sends the user a challenge packet containing a challenge value.
b. The user applies a predefined function that takes the challenge value and the user’s own password
and creates a result. The user sends the result in the response packet to the system.
c. The system does the same. It applies the same function to the password of the user (known to the
system) and the challenge value to create a result. If the result created is the same as the result sent
in the response packet, access is granted; otherwise, it is denied.
CHAP is more secure than PAP, especially if the system continuously changes the challenge value.
Even if the intruder learns the challenge value and the result, the password is still secret.
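The three-way exchange can be sketched as follows. MD5 over challenge plus secret is used here as the predefined one-way function; real CHAP also hashes an identifier byte, which this sketch omits:

```python
import hashlib

def chap_response(challenge: bytes, password: bytes) -> bytes:
    """Both sides apply the same one-way function to the challenge value
    and the secret password; the password itself never crosses the link."""
    return hashlib.md5(challenge + password).digest()

def chap_verify(challenge: bytes, response: bytes,
                stored_password: bytes) -> bool:
    """System side: recompute the result from the stored password and
    compare it with the response packet."""
    return chap_response(challenge, stored_password) == response

challenge = b"\x13\x37\xca\xfe"        # a fresh random value per handshake
resp = chap_response(challenge, b"secret")     # user's response packet
assert chap_verify(challenge, resp, b"secret")     # access granted
assert not chap_verify(challenge, resp, b"guess")  # wrong password: denied
```

Because the challenge changes on every handshake, replaying an old response is useless to an eavesdropper.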
Network Control Protocols (NCP) − PPP is a multiple-network-layer protocol. It can carry a
network-layer data packet from protocols defined by the Internet. PPP
has defined a specific Network Control Protocol for each network protocol. These protocols
are used for negotiating the parameters and facilities for the network layer. For every higher-
layer protocol supported by PPP, there is one NCP.
7. MEDIA ACCESS CONTROL (MAC)
When two or more nodes transmit data at the same time, their frames will collide and the link
bandwidth is wasted during collision.
To coordinate the access of multiple sending/receiving nodes to the shared link, we need a
protocol to coordinate the transmission.
These protocols are called Medium or Multiple Access Control (MAC) Protocols. MAC
belongs to the data link layer of OSI model
MAC defines rules for orderly access to the shared medium. It tries to ensure that no two nodes
are interfering with each other’s transmissions, and deals with the situation when they do.
MAC Types
Round-Robin: – Each station is given opportunity to transmit in turns. Either a central controller
polls a station to permit to go, or stations can coordinate among themselves.
Reservation: - Station wishing to transmit makes reservations for time slots in advance.
(Centralized or distributed).
Contention (Random Access): - No control on who tries; if a collision occurs, retransmission
takes place.
MECHANISMS USED
Wired Networks:
o CSMA / CD – Carrier Sense Multiple Access / Collision Detection
Wireless Networks:
o CSMA / CA – Carrier Sense Multiple Access / Collision Avoidance
Collision Detect means that a node listens as it transmits and can therefore detect when a frame it
is transmitting has collided with a frame transmitted by another node.
Flowchart of CSMA/CD Operation
The non-persistent approach reduces the chance of collision because it is unlikely that two or more
stations will wait the same amount of time and retry to send simultaneously.
However, this method reduces the efficiency of the network because the medium remains idle
when there may be stations with frames to send.
Persistent Strategy
1- Persistent:
The 1-persistent method is simple and straightforward.
In this method, after the station finds the line idle, it sends its frame immediately
(with probability 1).
This method has the highest chance of collision because two or more stations may find the line
idle and send their frames immediately.
P-Persistent:
In this method, after the station finds the line idle it follows these steps:
With probability p, the station sends its frame.
With probability q = 1 − p, the station waits for the beginning of the next time slot and checks
the line again.
The p-persistent method is used if the channel has time slots with a slot duration equal to or
greater than the maximum propagation time.
The p-persistent approach combines the advantages of the other two strategies. It reduces the
chance of collision and improves efficiency.
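A single slot of the p-persistent decision can be sketched as follows (the helper name is illustrative; the fraction check simply confirms the station transmits in roughly a fraction p of the idle slots):

```python
import random

def p_persistent_decision(p: float, rng=random) -> str:
    """One slot of the p-persistent method on an idle line: transmit with
    probability p; with probability 1 - p wait for the next time slot and
    sense the line again."""
    return "send" if rng.random() < p else "wait"

random.seed(42)                       # fixed seed for reproducibility
decisions = [p_persistent_decision(0.3) for _ in range(1000)]
send_fraction = decisions.count("send") / 1000
assert 0.2 < send_fraction < 0.4      # close to p = 0.3, as expected
```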
EXPONENTIAL BACK-OFF
Once an adaptor has detected a collision and stopped its transmission, it waits a certain amount of
time and tries again.
Each time it tries to transmit but fails, the adaptor doubles the amount of time it waits before
trying again.
This strategy of doubling the delay interval between each retransmission attempt is a general
technique known as exponential back-off.
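Binary exponential back-off can be sketched as follows; the 51.2 microsecond slot time, the exponent cap of 10, and the limit of 16 attempts are the classic Ethernet values:

```python
import random

def backoff_delay(attempt: int, slot_time: float = 51.2e-6) -> float:
    """After the n-th collision, wait a random number of slot times drawn
    from [0, 2^n - 1], so the average delay doubles with every failure.
    Classic Ethernet caps the exponent at 10 and gives up after 16 tries."""
    if attempt > 16:
        raise RuntimeError("too many collisions; abort transmission")
    k = min(attempt, 10)
    return random.randint(0, 2 ** k - 1) * slot_time

random.seed(0)
assert backoff_delay(1) in (0.0, 51.2e-6)       # 0 or 1 slot after 1st collision
assert 0 <= backoff_delay(3) <= 7 * 51.2e-6     # up to 7 slots after 3rd
```

Randomizing within the doubled interval is what separates the two colliding stations: the chance they pick the same delay again shrinks with every attempt.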
Inter frame Space (IFS) - First, collisions are avoided by deferring transmission even if the
channel is found idle. When an idle channel is found, the station does not send immediately. It
waits for a period of time called the interframe space or IFS.
Contention Window - The contention window is an amount of time divided into slots. A
station that is ready to send chooses a random number of slots as its wait time. The number of
slots in the window changes according to the binary exponential back off strategy. This means
that it is set to one slot the first time and then doubles each time the station cannot detect an
idle channel after the IFS time.
Acknowledgment - In addition, the data may be corrupted during the transmission. The
positive acknowledgment and the time-out timer can help guarantee that the receiver has
received the frame.
ETHERNET BASICS
Ethernet was developed in the mid-1970’s at the Xerox Palo Alto Research Center (PARC),
IEEE controls the Ethernet standards.
The Ethernet is the most successful local area networking technology; it uses a bus topology.
The Ethernet is a multiple-access network, that is, a set of nodes that send and receive frames over a
shared link.
Ethernet uses the CSMA / CD (Carrier Sense Multiple Access with Collision Detection)
mechanism.
EVOLUTION OF ETHERNET
Fast Ethernet or 100BASE-T provides transmission speeds up to 100 megabits per second and is
typically used for LAN backbone systems.
The 100BASE-T standard consists of three different component specifications –
1. 100 BASE-TX
2. 100BASE-T4
3. 100BASE-FX
The 64-bit preamble allows the receiver to synchronize with the signal; it is a sequence of
alternating 0’s and 1’s.
Both the source and destination hosts are identified with a 48-bit address.
The packet type field serves as the demultiplexing key.
Each frame contains up to 1500 bytes of data(Body).
CRC is used for Error detection
Ethernet Addresses
Every Ethernet host has a unique Ethernet address (48 bits – 6 bytes).
Ethernet address is represented by sequence of six numbers separated by colons.
Each number corresponds to 1 byte of the 6-byte address and is given by a pair of hexadecimal
digits.
E.g., 8:0:2b:e4:b1:2 is the representation of
00001000 00000000 00101011 11100100 10110001 00000010
Each frame transmitted on an Ethernet is received by every adaptor connected to the Ethernet.
In addition to unicast addresses an Ethernet address consisting of all 1s is treated as broadcast
address.
Similarly, an address that has its first bit set to 1 but is not the broadcast address is called a
multicast address.
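The address notation and classification above can be sketched as follows. "First bit" is taken here as the low-order bit of the first byte, which is the bit transmitted first on the wire; that interpretation is an assumption of this sketch:

```python
def to_bits(address: str) -> str:
    """Expand the colon-separated hex form into the 48-bit pattern."""
    return " ".join(f"{int(part, 16):08b}" for part in address.split(":"))

def classify(address: str) -> str:
    """All 1s -> broadcast; first transmitted bit 1 (low-order bit of the
    first byte) -> multicast; everything else -> unicast."""
    octets = [int(part, 16) for part in address.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    if octets[0] & 0x01:
        return "multicast"
    return "unicast"

# The example address from the text:
assert to_bits("8:0:2b:e4:b1:2") == \
    "00001000 00000000 00101011 11100100 10110001 00000010"
assert classify("ff:ff:ff:ff:ff:ff") == "broadcast"
assert classify("8:0:2b:e4:b1:2") == "unicast"
assert classify("1:0:5e:0:0:1") == "multicast"
```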
ADVANTAGES OF ETHERNET
Ethernets are successful because
It is extremely easy to administer and maintain. There are no switches that can fail, no routing or
configuration tables that have to be kept up-to-date, and it is easy to add a new host to the network.
It is inexpensive: Cable is cheap, and the only other cost is the network adaptor on each host.
ADVANTAGES OF WLAN / 802.11
1. Flexibility: Within radio coverage, nodes can access each other as radio waves can penetrate
even partition walls.
2. Planning: No prior planning is required for connectivity as long as devices
follow standard convention
3. Design: Allows to design and develop mobile devices.
4. Robustness: Wireless network can survive disaster. If the devices survive, communication can
still be established.
DISADVANTAGES OF WLAN / 802.11
1. Quality of Service: Low bandwidth (1 – 10 Mbps), higher error rates due to interference, delay
due to error correction and detection.
2. Cost: Wireless LAN adapters are costly compared to wired adapters.
3. Proprietary Solution: Due to slow standardization process, many solutions are proprietary that
limit the homogeneity of operation.
4. Restriction: Individual countries have their own radio spectral policies. This restricts the
development of the technology
5. Safety and Security: Wireless Radio waves may interfere with other devices. Eg; In a hospital,
radio waves may interfere with high-tech equipment.
TECHNOLOGY USED IN WLAN / 802.11
WLAN’s uses Spread Spectrum (SS) technology.
The idea behind Spread spectrum technique is to spread the signal over a wider frequency band
than normal, so as to minimize the impact of interference from other devices.
There are two types of Spread Spectrum:
Frequency Hopping Spread Spectrum (FHSS)
Direct Sequence Spread Spectrum (DSSS)
Frequency Hopping Spread Spectrum (FHSS)
Frequency hopping is a spread spectrum technique that involves transmitting the signal over a
random sequence of frequencies.
That is, first transmitting at one frequency, then a second, then a third, and so on.
The random sequence of frequencies is computed by a pseudorandom number generator.
The receiver uses the same algorithm as the sender and initializes it with the same seed and hence
is able to hop frequencies in sync with the transmitter to correctly receive the frame.
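The synchronization idea can be sketched as follows; the 79-channel set is illustrative, not a real 802.11 hop set:

```python
import random

def hop_sequence(seed: int, n_hops: int, channels=tuple(range(79))):
    """Compute the pseudorandom sequence of frequency channels. The sender
    and receiver seed identical generators, so they compute the identical
    sequence and hop in sync."""
    rng = random.Random(seed)
    return [rng.choice(channels) for _ in range(n_hops)]

sender = hop_sequence(seed=1234, n_hops=5)
receiver = hop_sequence(seed=1234, n_hops=5)
assert sender == receiver   # same algorithm + same seed -> stays in sync
```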
Wireless protocol would follow exactly the same algorithm as the Ethernet—Wait until the link
becomes idle before transmitting and back off should a collision occur.
Hidden Node Problem
Advantages of Switching:
Switch increases the bandwidth of the network.
It reduces the workload on individual PCs as it sends the information to only that device which
has been addressed.
It increases the overall performance of the network by reducing the traffic on the network.
There will be fewer frame collisions, as the switch creates a separate collision domain for each connection.
Disadvantages of Switching:
A Switch is more expensive than network bridges.
A Switch cannot determine the network connectivity issues easily.
Proper designing and configuration of the switch are required to handle multicast packets.
Types of Switching Techniques
CIRCUIT SWITCHING
Circuit switching is a switching technique that establishes a dedicated path between sender and
receiver.
In the circuit switching technique, once the connection is established, the dedicated path remains
in place until the connection is terminated.
Circuit switching in a network operates in much the same way as the telephone network.
A complete end-to-end path must exist before the communication takes place.
In the circuit switching technique, when a user wants to send data, voice, or video, a request
signal is sent to the receiver, and the receiver sends back an acknowledgment to confirm the
availability of the dedicated path. After the acknowledgment is received, the data is transferred
over the dedicated path.
Circuit switching is used in the public telephone network, mainly for voice transmission.
Data is transferred at a fixed rate, since the circuit's bandwidth is reserved for the connection.
Phases in Circuit Switching
Communication through circuit switching has 3 phases:
1. Connection Setup / Establishment - In this phase, a dedicated circuit is established from the
source to the destination through a number of intermediate switching centers. The sender and
receiver exchange signalling messages to request and acknowledge the establishment of the circuit.
2. Data transfer - Once the circuit has been established, data and voice are transferred from the source
to the destination. The dedicated connection remains as long as the end parties communicate.
3. Connection teardown / Termination - When the data transfer is complete, the connection is
relinquished. Disconnection can be initiated by either of the users and involves the removal of
all intermediate links from the sender to the receiver.
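The three phases above can be mimicked with a toy model. The class and method names below are illustrative only and do not correspond to any real signalling protocol:

```python
class CircuitSwitchedCall:
    """Toy model of the three phases of circuit switching."""

    def __init__(self, sender, receiver):
        self.sender, self.receiver = sender, receiver
        self.connected = False

    def setup(self):
        # Phase 1: reserve a dedicated path; the receiver acknowledges.
        self.connected = True
        return f"circuit {self.sender}->{self.receiver} established"

    def transfer(self, data):
        # Phase 2: data flows only while the dedicated circuit exists.
        if not self.connected:
            raise RuntimeError("no circuit: call setup() first")
        return f"sent {len(data)} bytes over dedicated path"

    def teardown(self):
        # Phase 3: release all intermediate links.
        self.connected = False
        return "circuit released"

call = CircuitSwitchedCall("A", "B")
print(call.setup())
print(call.transfer(b"hello"))
print(call.teardown())
```

Note how transfer() refuses to run before setup(): this mirrors the rule that a complete end-to-end path must exist before communication takes place.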
Advantages
It is suitable for long, continuous transmission, since a continuous transmission route is
established that remains in place throughout the conversation.
The dedicated path ensures a steady data rate of communication.
There are no intermediate delays once the circuit is established, so it is suitable for real-time
communication of both voice and data.
Disadvantages
Circuit switching establishes a dedicated connection between the end parties. This dedicated
connection cannot be used for transmitting any other data, even if the data load is very low.
Bandwidth requirement is high even in cases of low data volume.
There is underutilization of system resources. Once resources are allocated to a particular
connection, they cannot be used for other connections.
Time required to establish connection may be high.
It is more expensive than other switching techniques as a dedicated path is required for each
connection.
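The setup-time cost in the last two points is easy to quantify. Ignoring propagation delay, the total time to move one message is roughly the setup time plus the message size divided by the circuit rate. The Python sketch below (made-up numbers) shows that setup overhead dominates for short messages and becomes negligible for long ones:

```python
def circuit_transfer_time(setup_s, message_bits, rate_bps, teardown_s=0.0):
    """Total time to move one message over a circuit-switched path,
    ignoring propagation delay."""
    return setup_s + message_bits / rate_bps + teardown_s

# A 1 Mbit message on a 1 Mb/s circuit with a 0.5 s setup delay:
short = circuit_transfer_time(0.5, 1e6, 1e6)   # 1.5 s: setup is a third of the total
# A 1 Gbit message on the same circuit:
long_ = circuit_transfer_time(0.5, 1e9, 1e6)   # 1000.5 s: setup is negligible
print(short, long_)
```

This is why circuit switching suits long continuous sessions (e.g. phone calls) but is wasteful for short, bursty data transfers.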
K.JAYANTH/AP/AI&DS