Unit 4 Transport Layer_Computer Network
DONE BY
Mrs. ARCHANA R, M.Sc., B.Ed., M.Phil.,
ASSISTANT PROFESSOR, DEPARTMENT OF COMPUTER SCIENCE,
SRI ADI CHUNCHANAGIRI WOMEN'S COLLEGE, CUMBUM.
1. Transport layer services in a computer network?
In a computer network, the transport layer is responsible for providing
end-to-end communication services for applications. It ensures reliable data
transfer, error control, flow control, and segmentation of data into smaller
packets. This layer is located above the network layer and is a key part of the OSI
(Open Systems Interconnection) model as well as the TCP/IP model.
1. Transmission Control Protocol (TCP):
Error Detection and Recovery: Uses checksums for error detection and
acknowledgment packets to confirm successful delivery.
Flow Control: Uses mechanisms like sliding windows to ensure that the sender
doesn't overwhelm the receiver.
Commonly used for applications that require high reliability, such as web
browsing (HTTP/HTTPS), email (SMTP), and file transfers (FTP).
2. User Datagram Protocol (UDP):
No Error Recovery or Flow Control: UDP does not have mechanisms for
retransmitting lost data or controlling the rate of data flow.
Low Overhead: Its minimal protocol overhead makes it suitable for applications
where speed is more important than reliability, such as real-time streaming, VoIP
(Voice over IP), and DNS (Domain Name System).
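As a concrete illustration of UDP's fire-and-forget model, the sketch below sends a single datagram over the loopback interface. Port 50007 is an arbitrary choice for this example; there is no handshake, no acknowledgment, and no retransmission.

```python
import socket

# Receiver: a UDP socket bound to an arbitrary loopback port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 50007))

# Sender: no connection setup is needed before sending
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 50007))

# One datagram arrives whole, with the sender's address
data, addr = receiver.recvfrom(1024)

sender.close()
receiver.close()
```

Note that if the datagram were lost, neither side would ever find out, which is exactly the trade-off UDP makes for low overhead.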
Summary:
The transport layer plays a critical role in enabling reliable and efficient
communication between devices in a network. It is responsible for data
segmentation, error control, flow control, and ensuring reliable delivery of data
between endpoints. TCP, UDP, and SCTP are the primary protocols used at this
layer, with TCP offering reliability, UDP focusing on low latency, and SCTP
providing a combination of the two.
1. Flow Control:
Flow control ensures that the sender does not overwhelm the receiver with too
much data at once.
TCP:
TCP has built-in flow control mechanisms to avoid congestion. The receiver
informs the sender about its buffer space, and the sender adjusts its transmission
rate accordingly.
Window size: In TCP, flow control is handled through the sliding window
mechanism. The receiver advertises a window size (amount of data it can handle),
and the sender must respect that by not sending more data than the window size.
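The sliding-window idea above can be sketched as a toy model. The byte counts are illustrative, and the receiver here simply acknowledges everything it received each round:

```python
# Toy sliding-window model: the sender never has more unacknowledged
# bytes in flight than the window the receiver advertised.
def send_with_window(total_bytes, window):
    sent = acked = rounds = 0
    while acked < total_bytes:
        in_flight = sent - acked
        # send only what fits inside the advertised window
        burst = min(window - in_flight, total_bytes - sent)
        sent += burst
        # in this toy model the receiver acknowledges everything
        # it received this round
        acked = sent
        rounds += 1
    return rounds
```

With a 2,000-byte window, 10,000 bytes take five rounds; a larger window means fewer round trips for the same data, which is why the advertised window directly bounds throughput.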
UDP:
UDP does not have any built-in flow control mechanisms. It sends data at whatever rate the sender chooses, regardless of the receiver's ability to process it.
2. Error Control:
Error control ensures that lost or corrupted data can be detected and corrected.
TCP:
Error detection: TCP uses a checksum to detect errors in the header and data.
If an error is found, the packet is discarded and must be retransmitted.
UDP:
Error detection: UDP also uses a checksum to detect errors, but it does not
perform error recovery. If a packet is lost or corrupted, UDP simply discards the
packet.
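Both TCP and UDP use the same 16-bit one's-complement checksum, defined in RFC 1071. A minimal sketch of that computation (the test bytes are illustrative):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum used by TCP and UDP (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"          # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF       # one's complement of the sum
```

The defining property is that appending the checksum to the data makes the checksum of the whole message zero, which is how the receiver verifies it.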
3. Congestion Control:
Congestion control prevents the sender from overloading the network itself, rather than the receiver.
TCP:
Slow Start: Initially, the transmission rate is small. As successful packets are
acknowledged, the rate gradually increases.
Congestion Avoidance: When packet loss is detected, the sender reduces the rate
to avoid congestion.
Fast Retransmit and Fast Recovery: TCP will immediately retransmit packets
that are believed to be lost based on missing ACKs and adjust the transmission
rate accordingly.
UDP:
UDP does not implement congestion control. It sends packets as quickly as the
sender can, without considering network congestion.
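The growth pattern of slow start and congestion avoidance can be sketched as a toy model of the congestion window, measured in segments. The ssthresh value of 16 segments is illustrative:

```python
# Toy model of TCP congestion-window growth: below ssthresh the
# window doubles each RTT (slow start); at or above it, the window
# grows by one segment per RTT (congestion avoidance).
def cwnd_after(rtts, ssthresh=16):
    cwnd = 1
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2   # slow start: exponential growth
        else:
            cwnd += 1   # congestion avoidance: linear growth
    return cwnd
```

Starting from one segment, the window reaches ssthresh (16) after four round trips, then creeps up linearly; on real packet loss TCP would also cut the window back, which this sketch omits.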
TCP (Flow Control, Error Control, Congestion Control)
Reliable, Connection-Oriented
Flow Control: Sliding Window mechanism (adjusts based on receiver’s
buffer space)
Error Control: Checksums, Retransmission on packet loss
Congestion Control: Slow start, congestion avoidance, fast
retransmit/recovery
UDP (No Flow Control, No Error Control, No Congestion Control)
Unreliable, Connectionless
Flow Control: None
Error Control: Checksum, but no retransmission on loss
Congestion Control: None
Diagram:
+-------------------------+----------------------------+
| TCP | UDP |
+-------------------------+----------------------------+
| Connection-Oriented | Connectionless |
| Flow Control: Sliding | No Flow Control |
| Window (Receiver-side) | |
| Error Control: Retransmit| Checksum (No Retransmit) |
| Congestion Control: Slow | No Congestion Control |
| Start, Avoidance, Fast | |
| Retransmit | |
+-------------------------+----------------------------+
Flow Control in SCTP:
Flow control is a mechanism used to regulate the amount of data sent over
the network to prevent congestion and buffer overflow, and to optimize resource
usage. It ensures that the sender does not overwhelm the receiver with more data
than it can handle. SCTP has its own flow control mechanisms that are designed
to improve the overall quality of service.
Key Flow Control Mechanisms in SCTP
1. Receiver Window Size:
SCTP uses a receiver window size (similar to TCP) to manage flow control. The
receiver window determines how much data the sender can transmit before it
must wait for an acknowledgment.
SCTP dynamically adjusts the receiver window size based on the receiver’s
available buffer space. The receiver advertises its window size to the sender,
which helps to ensure the sender doesn't send too much data at once.
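The advertised-window bookkeeping described above can be sketched as a toy model. The buffer size and byte counts are illustrative, not SCTP's actual data structures:

```python
# Toy model of receiver-window bookkeeping: the advertised window
# shrinks as data queues in the receive buffer and grows back as
# the application reads it.
class ToyReceiver:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.queued = 0

    def advertised_rwnd(self):
        # free buffer space is what gets advertised to the sender
        return self.buffer_size - self.queued

    def deliver(self, nbytes):
        # data arriving from the network must fit the advertised window
        assert nbytes <= self.advertised_rwnd()
        self.queued += nbytes

    def app_read(self, nbytes):
        # the application draining the buffer reopens the window
        self.queued -= min(nbytes, self.queued)
```

A slow application that never reads shrinks the window toward zero, which in turn stalls the sender: that is flow control working as intended.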
2. Congestion Control:
SCTP uses a congestion control algorithm similar to TCP. It manages the flow
of data between the sender and receiver based on network congestion.
The sender monitors network conditions and adjusts its transmission rate by
reducing its congestion window when packet losses or delays are detected,
thereby preventing network congestion.
3. Multi-Streaming:
One of the unique features of SCTP is its support for multi-streaming. In a single
SCTP association, there can be multiple streams, each with its own flow control
mechanism. This allows different types of data (e.g., voice, video, text) to be
transmitted independently, which improves QoS.
5. Heartbeat Mechanism:
SCTP periodically sends HEARTBEAT chunks to idle destination addresses to
check that they are still reachable, which supports its multi-homing feature.
6. Buffer Management:
SCTP optimizes buffer usage, ensuring that both sender and receiver buffers are
managed effectively. Good buffer management avoids excessive memory usage on
the sender and receiver sides, which could otherwise lead to unnecessary delays
or data loss.
Integrated Services (IntServ)
1. Resource Reservation
IntServ works by explicitly reserving resources (like bandwidth and buffer space)
along the path from the sender to the receiver. This reservation ensures that traffic
receives the necessary QoS during its journey across the network.
2. RSVP (Resource Reservation Protocol)
The Resource Reservation Protocol (RSVP) is used by IntServ to request and set
up these reservations. It allows the sender to specify the type of service required
(e.g., low latency for voice or high bandwidth for video).
RSVP is used to signal routers along the path to reserve the required resources.
3. Traffic Classes
Guaranteed Service: Ensures that the traffic flow receives a certain level of
service and bandwidth, suitable for applications like video conferencing.
Controlled-Load Service: Provides service comparable to that of a lightly loaded
best-effort network, suitable for adaptive real-time applications.
5. Flow Specification
A flow specification describes the traffic a flow will generate (its Tspec) and
the service it requests from the network (its Rspec), so that routers know what
resources to reserve.
6. Path Setup
When a flow is established, the network routers and switches use RSVP messages
to set up the path and make necessary resource reservations. This happens on a
hop-by-hop basis from source to destination.
7. Admission Control
Before a new flow is allowed to enter the network, the routers perform admission
control to check if sufficient resources are available for the requested QoS. If not,
the flow is denied admission to prevent overloading the network.
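At its core, admission control is a capacity check performed at each router. A toy sketch, with illustrative bandwidth figures:

```python
# Toy admission control: a router admits a new flow only if the
# requested bandwidth still fits within its remaining capacity.
def admit(existing_flows_mbps, capacity_mbps, request_mbps):
    used = sum(existing_flows_mbps)
    return used + request_mbps <= capacity_mbps
```

A real IntServ router would check the Tspec carried in RSVP messages, and buffer space as well as bandwidth, rather than a flat Mbps number.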
8. Scalability Issues
One of the significant challenges with IntServ is scalability. Because each router
in the network needs to maintain state information for every flow, it can become
difficult to manage when there are large numbers of flows. This problem becomes
more significant as the size of the network grows.
IntServ vs DiffServ
IntServ reserves resources per flow using RSVP, which gives strong per-flow
guarantees but scales poorly; DiffServ instead marks each packet with a DSCP
value and handles traffic per class, which scales much better in large networks.
Conclusion
IntServ suits smaller networks that need hard guarantees, while DiffServ is the
practical choice for large networks such as the Internet.
Differentiated Services (DiffServ)
1. DSCP (Differentiated Services Code Point):
DSCP values are mapped to specific forwarding behaviors at each network device
(such as routers), which allows the network to handle traffic differently based on
its priority or type.
2. Traffic Classes:
Packets are classified into different classes based on the DSCP value, which helps
in providing different levels of service for different types of traffic. Some
common classes include:
Default Forwarding (DF): The standard best-effort forwarding behavior used for
most traffic.
Expedited Forwarding (EF): A low-loss, low-latency, low-jitter behavior intended
for traffic such as voice.
Assured Forwarding (AF): Delivers traffic with a given assurance as long as it
stays within its profile, with several levels of drop precedence.
3. Traffic Policing and Shaping:
Traffic shaping can also be used to smooth traffic flow and ensure that it conforms
to the expected rate and behavior.
Example Uses of Differentiated Services:
Voice over IP (VoIP): VoIP traffic is given high priority with an EF PHB
to minimize latency and jitter.
Video streaming: Can be assigned a medium priority with an AF PHB to
ensure smooth streaming even under network congestion.
Best-effort traffic: Regular internet browsing and general data traffic can
be assigned lower priority with the default behavior.
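On ordinary sockets, DSCP marking can be sketched by setting the IP TOS byte, which carries the 6-bit DSCP in its upper bits (this sketch assumes Linux; the EF code point is 46, so the TOS byte is 46 << 2 = 0xB8):

```python
import socket

# EF (Expedited Forwarding) is DSCP 46; the TOS byte holds the
# DSCP shifted left by two bits.
EF_DSCP = 46
tos = EF_DSCP << 2   # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Mark all packets sent on this socket with the EF code point
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
applied = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
sock.close()
```

Whether routers actually honor the marking depends on the network operator's DiffServ policy; an unmarked domain will simply forward the packets best-effort.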
Client-Server Communication
Key Concepts:
1. Request and Response: The client sends a request to the server for data or
services, and the server processes the request and responds with the
requested information. This could involve querying a database, fetching a
file, performing computations, or any other task.
2. Protocol: Communication between the client and server follows specific
rules, called a protocol. A protocol defines the format and sequence of
messages exchanged. Common network protocols include:
o HTTP (Hypertext Transfer Protocol): Used for web browsers and
web servers.
o FTP (File Transfer Protocol): Used for transferring files.
o TCP/IP: The underlying protocol suite used in most internet
and intranet communications.
3. Sockets: A socket is an endpoint for sending or receiving data across a
computer network. The client and server use sockets to communicate over
a network. In most programming languages, there are libraries or APIs to
handle socket programming.
4. Ports: Servers typically listen on specific ports for incoming client
requests. A port is a virtual point through which data is sent or received.
For example, HTTP usually runs on port 80, while HTTPS runs on port
443.
How Client-Server Communication Works:
import socket

# Create a TCP socket and listen for clients on port 12345
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("localhost", 12345))
server_socket.listen()

while True:
    # Accept a connection from a client
    client_socket, client_address = server_socket.accept()
    print(f"Connection established with {client_address}")
Explanation:
1. Server: The server listens on port 12345 and waits for incoming client
connections. Once a client connects, it receives the message and sends a
response.
2. Client: The client connects to the server at localhost:12345, sends a
message, and then waits for the server's response.
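The two sides described above can be combined into one self-contained sketch: a tiny echo server runs in a thread while a client sends a single message. Port 54321 and the "Echo: " prefix are arbitrary choices for this example:

```python
import socket
import threading

def serve_once(listener):
    # Server side: accept one client, echo its message back
    conn, addr = listener.accept()
    conn.sendall(b"Echo: " + conn.recv(1024))
    conn.close()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", 54321))
listener.listen()
t = threading.Thread(target=serve_once, args=(listener,))
t.start()

# Client side: connect, send a request, wait for the response
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 54321))
client.sendall(b"Hello")
reply = client.recv(1024)
client.close()
t.join()
listener.close()
```

A production server would loop over accept() and usually handle each client in its own thread or with an event loop, rather than serving a single connection.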
Disadvantages:
Single Point of Failure: If the server fails, all client communication will
be disrupted.
Performance Bottleneck: Heavy server load or high traffic can impact
performance, so efficient server management and load balancing are
crucial.
Two Mark Questions:
7. What is SCTP-QoS?