Port Addressing
At the transport layer we need a transport-layer address, called a port number, to
choose among multiple processes running on the destination host. The destination port
number is needed for delivery; the source port number is needed for the reply.
Ports are represented by 16-bit numbers between 0 and 65535, so there are 2^16 =
65,536 port numbers.
A Daytime client process can use an ephemeral port number, such as 52000, to identify
itself. The Daytime server must use the well-known port number 13
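As a sketch of these roles in practice, the snippet below (Python, using an illustrative loopback address) lets the OS pick an ephemeral port for a client socket, while a daytime server would instead bind to the well-known port 13:

```python
import socket

# Sketch: a client gets an ephemeral source port, a server binds a
# well-known port such as 13 (daytime). Addresses here are illustrative.
WELL_KNOWN_DAYTIME = 13

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))        # port 0 = let the OS pick an ephemeral port
ephemeral_port = client.getsockname()[1]

assert 0 < ephemeral_port <= 65535   # any port number fits in 16 bits
client.close()
```

Binding to port 0 is the conventional way to request an ephemeral port; the OS chooses one from its dynamic range.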
User Datagram Protocol
➔ User Datagram Protocol (UDP) - connectionless, unreliable transport protocol
➔ Provides process-to-process delivery instead of host-to-host communication
➔ Very limited error checking and flow control
➔ Simple protocol that uses minimum overhead
➔ If a process wants to send a small message and does not care much about
reliability, it can use UDP
User Datagram
Has a fixed-size header of 8 bytes
1. Source Port Number - Port number used by the process running on the source
host; 16 bits long. If the source is the client, the port number is ephemeral and
is chosen by the UDP software. If it is the server, the port number is well-known
2. Destination Port Number - Port number used by the process running on the
destination host; 16 bits long.
3. Length - 16-bit field that defines the total length of the user datagram, header
plus data. The length field is actually not necessary, because the user datagram
is encapsulated in an IP datagram; it can be found with the following formula:
UDP length = IP length - IP header length
4. Checksum - Used to detect errors over the entire user datagram.
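The four 16-bit fields above make up the fixed 8-byte header, which can be sketched with Python's struct module; the port numbers and payload here are illustrative:

```python
import struct

# Sketch: building an 8-byte UDP header. Field values are illustrative.
src_port, dst_port = 52000, 13
payload = b"hello"
length = 8 + len(payload)      # header (8 bytes) + data
checksum = 0                   # 0 = checksum not computed (allowed over IPv4)

# Four 16-bit fields, network (big-endian) byte order
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
datagram = header + payload
assert len(header) == 8
```

Each `H` in the format string is one unsigned 16-bit field; `!` selects network byte order.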
Checksum
The UDP checksum calculation differs from that of IP. The checksum covers three
sections: the pseudoheader, the UDP header, and the data coming from the application layer.
- The pseudoheader is the part of the header of the IP packet in which the user
datagram is to be encapsulated, with some fields filled with 0s.
- If the checksum did not include the pseudoheader, a user datagram might arrive
safe and sound yet be delivered to the wrong host if the IP header were corrupted.
- The protocol field is included to ensure that the packet belongs to UDP. Its value for UDP is 17
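A sketch of the checksum computation, in the style of RFC 768: a one's-complement sum over the pseudoheader, UDP header, and data. The IP addresses and ports are illustrative:

```python
import struct

# One's-complement sum of 16-bit words, folding carries back in
def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:                   # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, zero byte, protocol 17, UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
```

The sender computes the checksum with the checksum field set to 0; the receiver sums the same words (checksum included) and expects 0xFFFF.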
Features of UDP
1. Connectionless Services:
a. User datagrams are independent
b. They are not numbered
c. No connection establishment and connection termination
d. Datagrams can travel on different paths
e. UDP cannot send stream of data, each data request must be small
enough to fit in one datagram
f. Only processes sending short messages should use UDP
2. Flow and Error Control:
a. Simple and unreliable transport protocol
b. No flow control and no window mechanism
c. The receiver may be overwhelmed by incoming messages
d. No error control except for checksum
e. Sender cannot know if message is lost or duplicated
f. When receiver detects error through checksum, user datagram silently
discarded
3. Encapsulation and Decapsulation:
a. To send a message from one process to another, UDP protocol
encapsulates and decapsulates messages in IP Datagram
4. Queuing:
a. When process starts, it requests port numbers from OS. Incoming and
outgoing queues are created for each process. Queues function as long as
process is running
b. Client process sends messages to outgoing queue using port number.
UDP removes messages one by one and after adding header, delivers
them to IP
c. When message arrives for a client, UDP checks to see if an incoming
queue has been created. If it exists UDP sends received user datagram to
end of queue. If no queue, UDP discards user datagram and asks ICMP
protocol to send a port unreachable message to server.
d. If outgoing queue overflows, the OS asks client process to wait before
sending any message. If incoming queue overflows, UDP drops user
datagram and asks for a port unreachable message to be sent to server
e. Server asks for incoming and outgoing queues using its well-known port,
when it starts running. Queues remain open as long as server is running.
f. When a message arrives for server, UDP checks if incoming queue is
created for specified port number. If it exists, UDP sends received user
datagram to end of queue. If no queue, UDP discard the datagram and
asks ICMP to send a port unreachable message to client.
g. If incoming queue overflows, UDP drops user datagram and asks for port
unreachable message to be sent to client.
h. When server wants to respond to client, it sends messages to outgoing
queue, using source port number specified in request. UDP removes
messages one by one and delivers to IP after adding UDP header.
i. If outgoing queue overflows, OS asks the server to wait before sending
any more messages
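The incoming-queue behavior above can be sketched with a dictionary of per-port queues. The function names and capacity are illustrative, not part of any real UDP implementation:

```python
from collections import deque

# Sketch: demultiplexing arriving user datagrams into per-port incoming queues
incoming_queues = {}

def open_port(port, capacity=8):
    # Created when a process requests the port; lives while the process runs
    incoming_queues[port] = deque(maxlen=capacity)

def deliver(port, datagram):
    queue = incoming_queues.get(port)
    if queue is None:
        return "ICMP port unreachable"   # no queue: discard, notify the sender
    if len(queue) == queue.maxlen:
        return "ICMP port unreachable"   # queue overflow: drop the datagram
    queue.append(datagram)               # otherwise enqueue at the end
    return "queued"

open_port(13)                            # e.g. a daytime server's well-known port
```

Delivery to an unopened port, or to a full queue, results in the datagram being dropped and a port unreachable message being requested from ICMP.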
Transmission Control Protocol
➔ Process to process protocol hence uses port numbers
➔ Connection oriented, creates virtual connection between two TCPs to send data.
➔ Uses flow and error control mechanisms at transport level
TCP Services
TCP Features
★ Numbering System : Segment contains two fields: sequence number and
acknowledgement number. They refer to the byte number
○ Byte Number - TCP numbers all data bytes transmitted in a connection.
When TCP receives bytes of data from a process, it stores them in the
sending buffer and numbers them. The bytes of data being transferred in
each connection are numbered by TCP. The numbering starts with a
randomly generated number
○ Sequence Number - TCP assigns a sequence number to each segment
being sent. Sequence number of each segment is the number of first byte
in that segment
When a segment carries a combination of data and control information it
uses a sequence number. If a segment does not carry user data, it does
not logically define a sequence number.
○ Acknowledgment Number - Each party uses an acknowledgment number
to confirm the bytes it has received. Ack no defines the number of next
byte that party expects to receive.
The ack number is cumulative. A party takes the number of the last byte it has
received, adds 1 to it, and announces the sum as the ack number. If a party uses
5643 as the ack number, it has received all bytes from the beginning up to 5642
★ Flow Control - The receiver of the data controls the amount of data sent by the
sender. This is done to prevent the receiver from being overwhelmed. The
numbering system allows TCP to use byte-oriented flow control
★ Error Control - To provide reliable service, TCP implements error control
mechanism. Error-control is byte oriented
★ Congestion Control - Amount of data sent by a sender is not only controlled by
receiver, but is also determined by level of congestion in network
Segment
● Header - 20 to 60 bytes: 20 bytes if there are no options and up to 60 bytes if it
contains options
● Sequence Number - Defines number assigned to first byte of data contained in
segment. TCP is stream transport protocol and to ensure connectivity each byte
is numbered.
● During connection establishment, each party uses a random number generator
to create an initial sequence number (ISN), which is usually different in each
direction
● Acknowledgment Number - Byte number that receiver of the segment is
expecting to receive from other party. If it has successfully received byte no x it
defines x + 1 as ack no.
● Header Length - 4-bit field that indicates the number of 4-byte words in the TCP
header. The value can be between 5 (5 x 4 = 20) and 15 (15 x 4 = 60)
● Reserved - 6 bit field reserved for future use.
● Control - This field defines 6 different control bits or flags. One or more of these
bits can be set at a time
● Window Size - Defines size of window, in bytes, that the other party must
maintain.
● Checksum - TCP follows same procedure as UDP. For TCP pseudo header, value
for protocol field is 6
● Urgent Pointer - Used when segment contains urgent data. Defines the number
that must be added to the sequence number to obtain the number of the last
urgent byte in data section of segment.
● Options - Can be upto 40 bytes of optional information
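A sketch that unpacks the fixed 20-byte part of the header described above, using Python's struct; the example field names are illustrative:

```python
import struct

# Sketch: parsing the fixed 20-byte TCP header (options not handled)
def parse_tcp_header(data: bytes) -> dict:
    src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack(
        "!HHIIHHHH", data[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "header_len": (off_flags >> 12) * 4,  # 4-bit field counts 4-byte words
        "flags": off_flags & 0x3F,            # 6 control bits: URG..FIN
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }
```

With no options, the header-length field holds 5, so `header_len` comes out as 20; with the full 40 bytes of options it holds 15, giving 60.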
TCP Connection
- TCP, which uses the services of IP, a connectionless protocol, is itself
connection-oriented because the TCP connection is virtual, not physical
- TCP uses IP to deliver individual segments, but it controls the connection itself.
If a segment is lost or corrupted, it is retransmitted.
- Unlike TCP, IP is unaware of retransmission
- If a segment arrives out of order, TCP holds it until the missing segments arrive,
IP is unaware of reordering
Three phases of connection:
Connection Establishment
- It is called three-way handshaking.
- Server program tells its TCP that it is ready to accept a connection. This is called
a request for a passive open.
- Client issues a request for active open. A client that wishes to connect to an
open server tells its TCP it needs to be connected to that particular server.
Steps:
1. Client sends first segment, a SYN segment, in which only SYN flag is set. This
segment is for synchronization of sequence numbers. It carries no real data, but
consumes one sequence number, as when data transfer starts sequence number
is incremented by 1
2. Server sends second segment, SYN + ACK, with 2 flag bits set: SYN and ACK. It
serves a dual purpose i.e it is a SYN segment for communication in other
direction and serves as ack for SYN segment. Consumes one sequence number.
Doesn’t carry data
3. Client sends third segment which is just ACK. It acknowledges receipt of second
segment with ACK flag and ack no. field. ACK segment doesn’t consume any
sequence numbers
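The sequence/acknowledgment bookkeeping of the three steps can be sketched as follows; the initial sequence numbers here are illustrative, since real TCPs generate them randomly:

```python
# Sketch: numbers exchanged in the three-way handshake (ISNs illustrative)
client_isn, server_isn = 8000, 15000

# 1. SYN: carries no data but consumes one sequence number
syn = {"flags": {"SYN"}, "seq": client_isn}

# 2. SYN + ACK: acknowledges the SYN and consumes one sequence number
syn_ack = {"flags": {"SYN", "ACK"}, "seq": server_isn, "ack": client_isn + 1}

# 3. ACK: consumes no sequence number
ack = {"flags": {"ACK"}, "seq": client_isn + 1, "ack": server_isn + 1}
```

Because the SYN consumes a sequence number, the first data byte from the client will be numbered `client_isn + 1`.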
Simultaneous Open - a rare situation that may occur when both processes issue an
active open. Both TCPs transmit a SYN + ACK segment to each other and one single
connection is established between them
Data Transfer
● Bidirectional data transfer can take place. Acknowledgment is piggybacked with
data.
● For example, suppose the client sends 2000 bytes of data in two segments and
the server sends 2000 bytes in one segment. The client then sends one more
segment with no data, carrying only an ack.
● Data segments sent by client have the PSH (push) flag set so that server TCP
knows to deliver data to server process.
● For Urgent data set the URG flag. Sending application tells TCP piece of data is
urgent.
Connection Termination
Any of the two parties involved in exchanging data can close the connection, although
it is usually initiated by client.
Three Way Handshaking
● After receiving a close command from client process, TCP sends first segment, a
FIN segment in which the FIN flag is set. FIN segment consumes one sequence
number if it doesn’t carry data
● Server TCP, after receiving FIN segment, informs its process of the situation and
sends second segment, a FIN + ACK segment, to confirm receipt of FIN segment
from client and the same time announce closing of the connection in other
direction. This segment can also contain last chunk of data from server. If no
data, consumes one sequence number
● The client TCP sends the last segment, an ACK segment, to confirm the receipt of
the FIN segment from the server TCP. It contains the ack number, which is 1 plus
the sequence number received in the FIN segment from the server. This segment
cannot carry data and consumes no sequence numbers
Half Close
One end can stop sending data while still receiving data. Either end can issue a half
close but it is normally initiated by the client. It can occur when server needs all data
before processing can begin. A good example is sorting.
Flow Control
The TCP sliding window is something between Go-Back-N and Selective Repeat.
● It resembles Go-Back-N because it does not use NAKs (negative acks)
● It resembles Selective Repeat because the receiver holds out-of-order segments
until the missing ones arrive.
● The sliding window of TCP is byte-oriented; the one in the data link layer is frame-oriented
● The sliding window of TCP is of variable size; the one in the data link layer is of fixed size
The size of the window at one end is determined by the lesser of two values:
rwnd = receiver window size
cwnd = congestion window size
➔ Window spans portion of buffer containing bytes received from process.
➔ Opening a window means moving the right wall to the right, which makes more
new bytes in the buffer eligible for sending.
➔ Closing the window means moving the left wall to the right. This means some
bytes have been acknowledged and sender need not worry about them
anymore.
➔ Shrinking the window means moving the right wall to left. Window can be
opened or closed but should not be shrunk.
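A minimal sketch of a byte-oriented sender window whose size is min(rwnd, cwnd); the class and field names are illustrative:

```python
# Sketch: sender window bounded by the lesser of rwnd and cwnd
class SenderWindow:
    def __init__(self, rwnd, cwnd):
        self.left = 0                        # left wall: first unacknowledged byte
        self.rwnd, self.cwnd = rwnd, cwnd

    @property
    def size(self):
        return min(self.rwnd, self.cwnd)     # effective window size in bytes

    @property
    def right(self):
        return self.left + self.size         # right wall

    def on_ack(self, ack_no):
        # Closing: the left wall moves right past acknowledged bytes
        self.left = max(self.left, ack_no)

w = SenderWindow(rwnd=4000, cwnd=3000)
```

Here an arriving ack closes the window (left wall moves right), while a larger advertised rwnd or cwnd would open it (right wall moves right); shrinking (right wall moving left) is avoided.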
Error Control
Error control includes mechanism for detecting corrupted segments, lost segments,
out-of-order segments, and duplicated segments
Error control also includes mechanism for correcting errors after they are detected.
Checksum - Used to check for a corrupted segment. A corrupted segment is discarded
by the destination TCP and is considered lost.
Acknowledgment - Acks are used to confirm the receipt of data segments. Control
segments that carry no data but consume a sequence number are also acknowledged.
ACK segments are never acknowledged.
Retransmission - When segment is corrupted, lost or delayed, it is retransmitted.
Segment is retransmitted on two occasions: when a retransmission timer expires or
when sender receives three duplicate ACKs. No retransmission occurs for segments
that don’t consume sequence numbers.
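The duplicate-ACK trigger can be sketched as a small counter; the class name is illustrative, and a real TCP combines this with the retransmission timer:

```python
# Sketch: detecting three duplicate ACKs (the fast-retransmit trigger)
class RetransmitTracker:
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        """Return True when three duplicate ACKs call for retransmission."""
        if ack_no == self.last_ack:
            self.dup_count += 1              # same ack number seen again
        else:
            self.last_ack, self.dup_count = ack_no, 0
        return self.dup_count == 3
```

Three duplicates of the same ack number (the fourth identical ACK overall) signal that a segment was likely lost while later segments got through.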
Difference between TCP and UDP
Characteristic                | UDP                                   | TCP
General                       | Simple, high speed, low functionality | Full-featured, reliable data transfer
Connection Setup              | Connectionless                        | Connection-oriented
Data Interface to Application | Message-based                         | Stream-based
Reliability                   | Unreliable (no ack)                   | Reliable (ack received)
Retransmission                | Not performed                         | Lost data is retransmitted
Flow Control                  | None                                  | Sliding window protocol
Overhead                      | Very low                              | Low, but higher than UDP
Transmission Speed            | Very high                             | High, but lower than UDP
Applications using protocol   | DNS, BOOTP, DHCP, SNMP (multimedia)   | FTP, Telnet, SMTP, DNS, HTTP
Congestion
It may occur if the load on the network is greater than the capacity of the network.
Congestion happens in any system that involves waiting.
It occurs due to queues in routers and switches.
The receiver's queue is full, but the network keeps sending packets even though the
receiver has not yet processed the ones it already holds. This is what causes congestion
Causes of Congestion:
➔ Excessive network bandwidth consumption - devices may utilize more
bandwidth than the average device.
➔ Poor subnet management - subnets are not scaled according to usage patterns
and resource requirements
➔ Broadcast storms - Occurs when there is a sudden upsurge in the number of
requests to a network
➔ Multicasting - The network allows multiple computers to communicate with each
other at the same time.
➔ Border Gateway Protocol - While routing a packet, it doesn’t consider the
amount of traffic present in the route. There is a possibility all packets are being
routed via the same route.
➔ Too many devices - Every network has a limit on the amount of data it can
manage. If the network has too many devices linked to it, the network may
become burdened with data requests.
➔ Outdated Hardware - When data is transmitted over old switches, routers,
servers, and Internet exchanges, bottlenecks can emerge.
➔ Over-subscription - A cost-cutting tactic that can result in the network being
compelled to accommodate far more traffic than it was designed to handle
Effects of Congestion
1. Queueing delay
2. Packet Loss
3. Slow Network
4. Blocking of new connections
5. Low throughput
Traffic Descriptors: Qualitative values that represent a data flow, such as the average
data rate, the peak data rate, and the maximum burst size.
Traffic Profiles:
CBR (constant bit rate) - fixed-rate traffic; the average data rate and the peak data
rate are the same. Very easy for the network to handle since it is predictable: the
network knows in advance how much bandwidth to allocate for this type of flow
VBR (variable bit rate) - the average data rate and the peak data rate are different. The
maximum burst size is usually a small value. More difficult to handle than CBR
Bursty - The most difficult to handle, since the profile is unpredictable
Network Performance:
When load is much less than the capacity of the network, the delay is at a minimum.
When load reaches network capacity the delay increases sharply. Delay becomes
infinite when load is greater than the capacity.
Throughput = no. of packets passing through the network in a unit of time. When the
load is below the capacity of the network, the throughput increases proportionally with
the load. Throughput declines sharply after the load reaches capacity as the queues
become full and the routers have to discard some packets.
Congestion Control Strategies
Congestion control refers to the mechanisms and techniques to control the congestion
and keep the load below capacity
Open Loop Congestion Control
Policies are applied to prevent congestion before it happens. They are
➔ Retransmission Policy: Retransmission is unavoidable but may increase
congestion. A good retransmission policy can prevent it for eg. TCP is designed
to prevent congestion
➔ Window Policy: Sending a lot of frames again and again may cause congestion.
Selective repeat can be used instead of Go-Back-N
➔ Acknowledgment Policy: If the receiver does not acknowledge every packet it
receives, it may slow down the sender and help prevent congestion. A receiver
may acknowledge only if it has a packet to be sent or a special timer expires. A
receiver may decide to acknowledge only N packets at a time.
➔ Discarding Policy: A good discarding policy by routers may prevent congestion
and at the same time may not harm the integrity of the transmission. For eg. in
audio transmission, if the policy is to discard less sensitive packets when
congestion is likely to happen, the quality of sound is still preserved, and
congestion is alleviated
➔ Admission Policy: Switches check the resource requirement of a flow before
admitting it to the network. A router can deny establishing a virtual circuit
connection if there is congestion in the network or if there is a possibility of
future congestion
Closed Loop Congestion Control
Tries to remove congestion after it happens. Policies are
➔ Backpressure: A congested node stops receiving data from the immediate
upstream node or nodes. This may cause the upstream node to become
congested and in turn, they reject data from their upstream node and so on.
Backpressure is the node-to-node congestion control that starts with a node
and propagates in the opposite direction of data flow.
➔ Choke Packet: A packet sent by a node to the source to inform it of congestion.
In backpressure, a warning is from one node to its upstream node, even though
it may reach the source node. In a choke packet, the warning is from the router,
which has encountered congestion, to the source directly. Intermediate nodes
were not warned.
➔ Implicit Signaling: There is no communication between congested node and
source. The source guesses that there is congestion somewhere in the network
from symptoms. For eg. no ack or delay in ack
➔ Explicit Signaling: Nodes that experience congestion can explicitly send a signal
to the source or destination. In the choke packet method, a separate packet is
used for this purpose; in the explicit signaling method, the signal is included in
the packet that carries data.
◆ Backward Signaling: A bit can be set in a packet moving in the direction
opposite to the congestion. This bit can warn the source.
◆ Forward Signaling: A bit can be set in the packet moving in the direction
of congestion. This bit can warn the destination and it can slow down
acknowledgments.
Quality of Service
We try to create an appropriate environment for the data traffic
Techniques to Improve QOS:-
1. FIFO Queueing: Packets wait in a buffer (queue) until the node is ready to
process them
- If the queue has a capacity of, say, 7 packets, then only 7 packets can be
in the queue at a time
- If the average arrival rate is higher than the average processing rate, the
queue will fill up and new packets will be discarded
2. Priority Queueing: Packets in higher-priority queues depart first, so multimedia
can reach the destination with less delay.
However, this can cause starvation for the lower-priority queues
3. Weighted Fair Queueing: The queues are weighted based on the priority of the
queue.
The system processes packets from each queue in proportion to the
corresponding weight
Traffic Shaping - It is the mechanism to control the amount and the rate of the traffic
sent to the network
Leaky Bucket Algorithm
This technique shapes bursty traffic into fixed-rate traffic by averaging the data rate.
Water leaks at a constant rate from a small hole at the bottom of a bucket and the rate
doesn’t depend on the rate at which water is entering the bucket.
In leaky bucket, bursty chunks are stored in bucket and sent out at an average rate.
Suppose a host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits
of data. The host is then silent for 5 s and sends data at a rate of 2 Mbps for 3 s, for a
total of 6 Mbits of data. In all, 30 Mbits are sent in 10 s.
Because of the leaky bucket, the data is instead sent out at a rate of 3 Mbps for the
whole 10 s, which smooths the traffic and avoids congestion.
Problem with the leaky bucket - the data flow out of the system is constant, which is
very restrictive.
If the host has bursty data, the leaky bucket allows only the average rate. The time
when the host was idle is not taken into account.
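The averaging in the example above can be verified with a short simulation, one loop iteration per second, with the bucket leaking at most 3 Mbits per second:

```python
# Sketch: leaky-bucket smoothing of the example traffic.
# Input: 12 Mbps for 2 s, silence for 5 s, then 2 Mbps for 3 s.
arrivals_mbits = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]   # Mbits arriving each second

bucket = 0
sent = []
for mbits in arrivals_mbits:
    bucket += mbits              # the burst is stored in the bucket
    out = min(bucket, 3)         # leak at most 3 Mbits per second
    bucket -= out
    sent.append(out)
```

Over these 10 seconds the output is a steady 3 Mbits per second, exactly the average of the 30 Mbits of bursty input.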
Token Bucket Algorithm
➔ It allows Bursty Traffic at a regulated maximum rate.
➔ It allows idle hosts to accumulate credit for the future in the form of tokens.
➔ The system removes one token for every cell of data sent. For each tick of the
clock, the system adds n tokens to the bucket.
➔ If n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens.
The host can now consume all these tokens in one tick with 10,000 cells, or take
1000 ticks with 10 cells per tick
➔ The token bucket can easily be implemented with a counter. The counter is
initialized to zero. Each time a token is added, the counter is incremented by 1;
each time a unit of data is sent, it is decremented by 1. When the counter is
zero, the host cannot send data.
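The counter implementation described above can be sketched as follows (class and method names illustrative):

```python
# Sketch: token bucket as a counter. n tokens are added per clock tick;
# one token is removed per cell sent; sending stops at zero tokens.
class TokenBucket:
    def __init__(self, rate_per_tick):
        self.tokens = 0
        self.rate = rate_per_tick

    def tick(self):
        self.tokens += self.rate           # credit accumulates while idle

    def send(self, cells):
        sent = min(cells, self.tokens)     # can't spend tokens we don't have
        self.tokens -= sent
        return sent

bucket = TokenBucket(rate_per_tick=100)
for _ in range(100):                       # host idle for 100 ticks
    bucket.tick()
# 10,000 accumulated tokens now allow a burst of 10,000 cells at once
```

Unlike the leaky bucket, idle time earns credit, so a regulated burst can later be sent faster than the average rate.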
Difference Between Token Bucket and Leaky Bucket
Token Bucket                                              | Leaky Bucket
Token dependent                                           | Token independent
If the bucket is full, tokens are discarded, not packets  | If the bucket is full, packets are discarded
Packets can be transmitted only when enough tokens exist  | Packets are transmitted continuously
Allows large bursts to be sent at a faster rate           | Sends packets at a constant rate