Unit 4 Transport Layer_Computer Network

SRI ADI CHUNCHANAGIRI WOMENS COLLEGE, CUMBUM.

DEPARTMENT OF COMPUTER SCIENCE

SUBJECT : COMPUTER NETWORKS

TOPIC : TRANSPORT LAYER

DONE BY
Mrs ARCHANA R M.Sc., B.Ed., M.Phil.,
ASSISTANT PROFESSOR IN DEPARTMENT OF COMPUTER SCIENCE,
SRI ADI CHUNCHANAGIRI WOMENS COLLEGE, CUMBUM.
1. Transport layer services in computer networks?
In a computer network, the transport layer is responsible for providing
end-to-end communication services for applications. It ensures reliable data
transfer, error control, flow control, and segmentation of data into smaller
packets. This layer is located above the network layer and is a key part of the OSI
(Open Systems Interconnection) model as well as the TCP/IP model.

Key Services of the Transport Layer:

1. End-to-End Communication: The transport layer ensures communication between end systems (i.e., hosts) and is responsible for transferring data between two devices over a network.
2. Segmentation and Reassembly: Data from the application layer is
typically too large to be sent as a single unit over the network. The transport
layer breaks this data into smaller segments, which are then reassembled
by the receiving end. This is necessary because the network layer (IP)
might have size limitations on the packets that can be transmitted.
3. Error Control: The transport layer provides mechanisms for detecting and
recovering from errors in the transmitted data. If an error occurs (such as
data corruption), the transport layer can request retransmission of lost or
corrupted data.
4. Flow Control: To prevent a sender from overwhelming a receiver with too
much data too quickly, the transport layer manages the rate of data
transmission. This is especially important in scenarios where the receiver's
processing capacity is limited.
5. Reliability: The transport layer ensures reliable communication, which is
vital for many applications (e.g., web browsing, file transfer). It guarantees
the delivery of data in the correct order, retransmits lost data, and checks
for data integrity.
6. Connection Establishment and Termination: The transport layer is
responsible for establishing, maintaining, and terminating connections
between communicating devices. This process includes handshakes to
establish a reliable connection and proper teardown once communication
is complete.

Common Protocols at the Transport Layer:

1. Transmission Control Protocol (TCP):

Connection-Oriented: TCP is a reliable, connection-oriented protocol. It guarantees the delivery of data, ensures the correct order, and performs flow control.

Error Detection and Recovery: Uses checksums for error detection and
acknowledgment packets to confirm successful delivery.

Flow Control: Uses mechanisms like sliding windows to ensure that the sender
doesn't overwhelm the receiver.

Congestion Control: Implements congestion control algorithms like slow-start and congestion avoidance to prevent network congestion.

Commonly used for applications that require high reliability, such as web
browsing (HTTP/HTTPS), email (SMTP), and file transfers (FTP).
2. User Datagram Protocol (UDP):

Connectionless: UDP is a connectionless, lightweight protocol. It does not guarantee delivery, order, or error correction, making it faster than TCP.

No Error Recovery or Flow Control: UDP does not have mechanisms for
retransmitting lost data or controlling the rate of data flow.

Low Overhead: Its minimal protocol overhead makes it suitable for applications
where speed is more important than reliability, such as real-time streaming, VoIP
(Voice over IP), and DNS (Domain Name System).
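As an illustration, Python's standard socket API shows how little machinery UDP needs. The sketch below runs a sender and receiver in one process over the loopback interface; the message text and the OS-chosen port are purely illustrative:

```python
import socket

# A receiver bound to an ephemeral port; SOCK_DGRAM selects UDP.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(('localhost', 0))          # 0 = let the OS pick a free port
receiver.settimeout(5)                   # don't block forever if a datagram is lost
port = receiver.getsockname()[1]

# The sender needs no connection setup: it simply addresses each datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ('localhost', port))

data, addr = receiver.recvfrom(1024)     # blocks until a datagram arrives
print(data.decode('utf-8'))              # hello over UDP

sender.close()
receiver.close()
```

Note there is no handshake, no acknowledgment, and no retransmission anywhere in this exchange; if the datagram were dropped, the receiver would simply time out.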

3. Stream Control Transmission Protocol (SCTP):

SCTP is a newer, connection-oriented protocol that combines features of both TCP and UDP. It supports message-oriented delivery (like UDP) across multiple independent streams, while maintaining reliability (like TCP).

It also provides message-oriented transmission, better support for multi-homing (multiple IP addresses per device), and improved congestion control.

Services Provided by the Transport Layer:

 Multiplexing: The transport layer allows multiple applications to use the network simultaneously by assigning a unique port number to each communication session.
 Data Integrity: Ensures that data is delivered without errors, or at least
provides mechanisms for detecting and correcting errors.
 Session Management: It manages the sessions for multiple applications
and facilitates the data exchange between them.
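The multiplexing service above can be sketched in Python as a toy demultiplexer that hands each incoming segment to the application bound to its destination port; the port numbers and handler functions below are purely illustrative:

```python
# Hypothetical sketch: demultiplexing incoming segments by destination port.
# Each registered application "owns" one port, mirroring how the transport
# layer hands a segment to the right socket.
handlers = {
    80: lambda payload: f"web server got: {payload}",
    53: lambda payload: f"DNS server got: {payload}",
}

def demultiplex(dest_port, payload):
    """Deliver a segment's payload to the application bound to dest_port."""
    handler = handlers.get(dest_port)
    if handler is None:
        return "no application bound to this port (segment dropped)"
    return handler(payload)

print(demultiplex(80, "GET /"))   # web server got: GET /
print(demultiplex(53, "query"))   # DNS server got: query
```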

Summary:
The transport layer plays a critical role in enabling reliable and efficient
communication between devices in a network. It is responsible for data
segmentation, error control, flow control, and ensuring reliable delivery of data
between endpoints. TCP, UDP, and SCTP are the primary protocols used at this
layer, with TCP offering reliability, UDP focusing on low latency, and SCTP
providing a combination of the two.

2. UDP-TCP: Transition diagram, flow control, error control, congestion control?

A UDP-TCP transition diagram would typically explain the differences and operations of the two protocols: UDP (User Datagram Protocol) and TCP (Transmission Control Protocol). Each protocol has its own approach to flow control, error control, and congestion control. Let's break down each aspect in the context of these two protocols.

1. Flow Control:

Flow control ensures that the sender does not overwhelm the receiver with too
much data at once.

 TCP:

TCP has built-in flow control mechanisms to avoid congestion. The receiver
informs the sender about its buffer space, and the sender adjusts its transmission
rate accordingly.

Window size: In TCP, flow control is handled through the sliding window
mechanism. The receiver advertises a window size (amount of data it can handle),
and the sender must respect that by not sending more data than the window size.
 UDP:

UDP does not have any built-in flow control mechanisms. It sends data at the rate
of the sender, regardless of the receiver's ability to process it.

It’s up to the application to implement any necessary flow control if needed.
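The sliding-window mechanism described for TCP above can be illustrated with a much-simplified Python sketch. Real TCP counts bytes rather than segments and the advertised window changes dynamically; here the window is a fixed count of unacknowledged segments, and the segment names are made up:

```python
def send_with_window(data, window_size):
    """Simplified sliding-window sender: at most window_size unacknowledged
    segments may be in flight; each ACK slides the window forward."""
    in_flight = []
    acked = []
    for seq, segment in enumerate(data):
        # Wait (conceptually) until the window has room for another segment.
        while len(in_flight) >= window_size:
            acked.append(in_flight.pop(0))   # receiver ACKs the oldest segment
        in_flight.append((seq, segment))
    acked.extend(in_flight)                  # drain the remaining ACKs
    return acked

segments = ["seg0", "seg1", "seg2", "seg3", "seg4"]
result = send_with_window(segments, window_size=2)
print(result)  # all five segments acknowledged, in order
```

The key property the sketch preserves is that the sender never has more than `window_size` segments outstanding, which is exactly how the receiver's advertised window throttles a fast sender.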

2. Error Control:

Error control ensures that lost or corrupted data can be detected and corrected.

 TCP:

Error detection: TCP uses a checksum to detect errors in the header and data.
If an error is found, the packet is discarded and must be retransmitted.

Error recovery: If a packet is lost, TCP ensures reliable delivery through acknowledgements (ACKs). If an ACK is not received within a certain time, the packet is retransmitted.

 UDP:

Error detection: UDP also uses a checksum to detect errors, but it does not
perform error recovery. If a packet is lost or corrupted, UDP simply discards the
packet.

No error recovery: There is no mechanism in UDP to retransmit lost or corrupted data. The application may handle retransmissions if required.
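The checksum that both TCP and UDP carry is the 16-bit one's-complement Internet checksum defined in RFC 1071. A minimal Python sketch (the four-byte sample packet is made up for illustration):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum (RFC 1071), as used by TCP and UDP."""
    if len(data) % 2:                         # pad odd-length data with a zero byte
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # 16-bit words, big-endian
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF                    # one's complement of the sum

packet = b"\x45\x00\x00\x1c"
csum = internet_checksum(packet)
print(hex(csum))  # 0xbae3

# A receiver recomputes the sum over data + checksum; a result of 0
# (the complement of all-ones) indicates no detected error.
print(internet_checksum(packet + csum.to_bytes(2, 'big')))  # 0
```

This illustrates why the mechanism detects corruption but cannot repair it: recovery, where it exists, comes from the ACK-and-retransmit machinery, which TCP has and UDP does not.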

3. Congestion Control:

Congestion control avoids network congestion by adjusting the transmission rate when network capacity is reached.
 TCP:

TCP has a robust congestion control mechanism to adjust the sender’s transmission rate based on network conditions. The mechanisms include:

Slow Start: Initially, the transmission rate is small. As successful packets are
acknowledged, the rate gradually increases.

Congestion Avoidance: When packet loss is detected, the sender reduces the rate
to avoid congestion.

Fast Retransmit and Fast Recovery: TCP will immediately retransmit packets
that are believed to be lost based on missing ACKs and adjust the transmission
rate accordingly.

 UDP:

UDP does not implement congestion control. It sends packets as quickly as the
sender can, without considering network congestion.

It's up to the application to manage congestion control if needed.
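The interplay of slow start, congestion avoidance, and recovery can be shown with a toy Python simulation of how TCP's congestion window (cwnd) evolves per round trip. Real TCP is considerably more involved (it works in bytes and reacts to duplicate ACKs and timeouts differently); the loss round and thresholds here are made up:

```python
def simulate_cwnd(rounds, ssthresh=8, loss_rounds=()):
    """Toy model of TCP congestion control: cwnd doubles each RTT during
    slow start, grows by 1 during congestion avoidance, and is halved
    (with ssthresh reset) when loss is detected."""
    cwnd = 1
    history = []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:              # loss detected via missing ACKs
            ssthresh = max(cwnd // 2, 1)    # fast recovery: halve the rate
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                       # slow start: exponential growth
        else:
            cwnd += 1                       # congestion avoidance: linear growth
    return history

# Exponential growth to ssthresh, linear growth, then a loss at round 5.
print(simulate_cwnd(8, ssthresh=8, loss_rounds={5}))  # [1, 2, 4, 8, 9, 10, 5, 6]
```

The printed history shows all three phases in order: doubling (1, 2, 4, 8), linear growth (9, 10), and the halving response to loss (5, 6).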

Transition Diagram Summary:

TCP (Flow Control, Error Control, and Congestion Control)

 Reliable, Connection-Oriented
 Flow Control: Sliding Window mechanism (adjusts based on receiver’s
buffer space)
 Error Control: Checksums, Retransmission on packet loss
 Congestion Control: Slow start, congestion avoidance, fast
retransmit/recovery
UDP (No Flow Control, No Error Control, No Congestion Control)

 Unreliable, Connectionless
 Flow Control: None
 Error Control: Checksum, but no retransmission on loss
 Congestion Control: None

Diagram:

Here is a simple textual flow of the two protocols:

+---------------------------+----------------------------+
| TCP                       | UDP                        |
+---------------------------+----------------------------+
| Connection-Oriented       | Connectionless             |
| Flow Control: Sliding     | No Flow Control            |
|   Window (Receiver-side)  |                            |
| Error Control: Checksum,  | Checksum (No Retransmit)   |
|   Retransmit on loss      |                            |
| Congestion Control: Slow  | No Congestion Control      |
|   Start, Avoidance, Fast  |                            |
|   Retransmit              |                            |
+---------------------------+----------------------------+

If you're thinking of this as a visual state transition diagram, it would represent different states (for example, "waiting for acknowledgment", "slow start", etc.) and transitions between them based on network conditions like packet loss, congestion, and receiver readiness.
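As a rough illustration, a few of TCP's connection states can be written as a transition table in Python. The subset below is simplified and omits many of the real states and transitions:

```python
# A simplified subset of the TCP connection state machine, expressed as a
# transition table: (current_state, event) -> next_state.
transitions = {
    ("CLOSED",      "passive_open"): "LISTEN",
    ("CLOSED",      "active_open"):  "SYN_SENT",
    ("LISTEN",      "recv_syn"):     "SYN_RCVD",
    ("SYN_SENT",    "recv_syn_ack"): "ESTABLISHED",
    ("SYN_RCVD",    "recv_ack"):     "ESTABLISHED",
    ("ESTABLISHED", "close"):        "FIN_WAIT_1",
}

def step(state, event):
    """Follow one edge of the diagram; unknown events leave the state unchanged."""
    return transitions.get((state, event), state)

# Active open: the client's path through the three-way handshake, then close.
state = "CLOSED"
for event in ["active_open", "recv_syn_ack", "close"]:
    state = step(state, event)
print(state)  # FIN_WAIT_1
```

UDP has no counterpart to this diagram: being connectionless, it has no handshake states to move between.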

3. SCTP-QoS: Flow control to improve QoS in computer networks?

SCTP (Stream Control Transmission Protocol) is a transport layer protocol designed to provide several advantages over traditional TCP, including multi-homing, multi-streaming, and enhanced security. Quality of Service (QoS) plays a crucial role in ensuring that network performance meets certain standards, especially for applications requiring consistent throughput, minimal delay, and minimal packet loss. SCTP-QoS mechanisms, specifically flow control, can significantly contribute to improving QoS in computer networks.

Flow Control in SCTP

Flow control is a mechanism used to regulate the amount of data sent over
the network to prevent congestion, buffer overflow, and to optimize resource
usage. It ensures that the sender does not overwhelm the receiver with more data
than it can handle. SCTP has its own flow control mechanisms that are designed
to improve the overall quality of service.
Key Flow Control Mechanisms in SCTP

1. Receiver Window Size (Flow Control):

SCTP uses a receiver window size (similar to TCP) to manage flow control. The
receiver window determines how much data the sender can transmit before it
must wait for an acknowledgment.

SCTP dynamically adjusts the receiver window size based on the receiver’s
available buffer space. The receiver advertises its window size to the sender,
which helps to ensure the sender doesn't send too much data at once.

2. Congestion Control:

SCTP uses a congestion control algorithm similar to TCP. It manages the flow
of data between the sender and receiver based on network congestion.

The sender monitors network conditions and adjusts its transmission rate by
reducing its congestion window when packet losses or delays are detected,
thereby preventing network congestion.

3. Stream-Based Flow Control:

One of the unique features of SCTP is its support for multi-streaming. In a single
SCTP association, there can be multiple streams, each with its own flow control
mechanism. This allows different types of data (e.g., voice, video, text) to be
transmitted independently, which improves QoS.

This stream-based approach also reduces the problem of head-of-line blocking, where delays in one stream could affect all other streams.
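A toy Python model can make the head-of-line blocking contrast concrete. The stream names and the simulated loss below are hypothetical; the point is to compare one shared sequence space (TCP-like) against per-stream ordering (SCTP-style multi-streaming):

```python
def deliver(segments, per_stream=True):
    """Toy model of head-of-line blocking. Each segment is
    (stream_id, seq, lost). With one shared sequence space (per_stream=False),
    a lost segment stalls everything behind it; with per-stream ordering,
    only the affected stream stalls."""
    delivered = []
    blocked = set()
    stalled = False
    for stream_id, seq, lost in segments:
        if per_stream:
            if lost:
                blocked.add(stream_id)       # only this stream stalls
            elif stream_id not in blocked:
                delivered.append((stream_id, seq))
        else:
            if lost:
                stalled = True               # one loss blocks every stream
            elif not stalled:
                delivered.append((stream_id, seq))
    return delivered

segs = [("voice", 0, False), ("video", 0, True), ("voice", 1, False)]
print(deliver(segs, per_stream=False))  # [('voice', 0)] — video loss blocks voice
print(deliver(segs, per_stream=True))   # [('voice', 0), ('voice', 1)]
```

In the shared-sequence case the lost video segment stops later voice segments from being delivered; with per-stream ordering the voice stream is unaffected, which is exactly the QoS benefit multi-streaming buys for mixed traffic.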

4. Path MTU Discovery:

Path Maximum Transmission Unit (MTU) discovery is another important aspect of SCTP’s flow control. SCTP adapts to the network's MTU to avoid fragmentation. The path MTU ensures that the sender doesn’t send data packets larger than the network's maximum size, which can cause delays, loss, or inefficient use of the network.

5. Heartbeat Mechanism:

SCTP employs a heartbeat mechanism to monitor the health of network paths, especially in multi-homing scenarios (where multiple network paths exist between two hosts). By sending regular heartbeat messages, SCTP ensures that paths are available and responsive, allowing for better handling of data flow.

6. SCTP’s Buffer Management:

SCTP optimizes buffer usage, ensuring that both sender and receiver buffers are
managed effectively. Buffer management avoids excessive memory usage on the
sender and receiver sides, which can lead to unnecessary delays or data loss.

Improving QoS with SCTP-QoS Flow Control

 Reducing Latency: SCTP’s ability to manage multiple streams concurrently and provide individual flow control per stream helps reduce the latency for each stream, thereby improving the responsiveness of the system.
 Avoiding Congestion: By dynamically adjusting the congestion window
and receiver window sizes, SCTP minimizes the risk of congestion,
ensuring that the network remains responsive and stable.
 Managing Jitter: SCTP’s multi-streaming capability allows for smoother
transmission of data, especially in real-time applications like VoIP or video
streaming, reducing jitter and improving QoS for time-sensitive data.
 Ensuring Reliability: SCTP offers reliability similar to TCP, but with
enhancements like multi-path support, which ensures that even if one path
fails, the flow of data can continue via alternate paths, maintaining
consistent QoS.

Applications of SCTP-QoS Flow Control

 Telecommunications: SCTP is widely used in the Signaling System 7 (SS7) protocol stack for telecommunication signaling, and its flow control features ensure that signaling traffic, which is critical for call setup, termination, and routing, is handled efficiently and reliably.
 Multimedia Streaming: In applications such as live streaming or video
conferencing, SCTP’s ability to handle multiple streams and manage flow
control on a per-stream basis ensures that the video and audio quality are
maintained without interruptions due to network congestion.
 Financial Networks: SCTP is also used in high-frequency trading and
other financial applications where low latency and consistent network
performance are essential.

4. What is Integrated Services?


Integrated Services (IntServ) is a network architecture designed to provide
quality of service (QoS) guarantees for applications running over an IP network.
IntServ works by reserving resources across a network to ensure that certain
traffic flows (e.g., voice, video, and critical data) receive the necessary bandwidth
and low-latency treatment. This system is especially useful in environments
where the reliability and performance of the network are critical, such as VoIP
(Voice over IP) or real-time video conferencing.

1. Resource Reservation

IntServ works by explicitly reserving resources (like bandwidth and buffer space)
along the path from the sender to the receiver. This reservation ensures that traffic
receives the necessary QoS during its journey across the network.

2. Signaling Protocol (RSVP)

The Resource Reservation Protocol (RSVP) is used by IntServ to request and set
up these reservations. It allows the sender to specify the type of service required
(e.g., low latency for voice or high bandwidth for video).

RSVP is used to signal routers along the path to reserve the required resources.

3. Traffic Classes

In IntServ, traffic is classified into different types, such as:

Guaranteed Service: Ensures that the traffic flow receives a certain level of
service and bandwidth, suitable for applications like video conferencing.

Controlled Load Service: Provides lower levels of service compared to guaranteed service but still ensures the application’s performance is above a baseline, typically used for less demanding applications like file transfers.
4. Service Levels

IntServ defines various levels of service (like Guaranteed Service, Controlled Load, etc.) that applications can request. These services are typically mapped to different network parameters like delay, bandwidth, jitter, and packet loss.

5. Flow Specification

IntServ defines a flow as a stream of packets from a source to a destination that requires a specific QoS treatment. The flow specification includes information like:

o The required bandwidth.
o Latency constraints.
o Jitter constraints.

6. Path Setup

When a flow is established, the network routers and switches use RSVP messages
to set up the path and make necessary resource reservations. This happens on a
hop-by-hop basis from source to destination.

7. Admission Control

Before a new flow is allowed to enter the network, the routers perform admission
control to check if sufficient resources are available for the requested QoS. If not,
the flow is denied admission to prevent overloading the network.
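A hypothetical Python sketch of this admission check for a single router. The flow names, link capacity, and bandwidth figures (in Mbps) are made up for illustration:

```python
def admit(flows, link_capacity, requested_bw):
    """IntServ-style admission check for one link: a new flow is admitted
    only if the link can still honor every existing reservation plus the
    new request."""
    reserved = sum(flows.values())          # bandwidth already promised
    return reserved + requested_bw <= link_capacity

# A 100 Mbps link with two existing reservations (50 Mbps already promised).
reservations = {"voip_call_1": 10, "video_conf_1": 40}
print(admit(reservations, link_capacity=100, requested_bw=30))  # True
print(admit(reservations, link_capacity=100, requested_bw=60))  # False — denied
```

The dictionary of per-flow reservations is also what makes IntServ hard to scale: every router on the path must keep such state for every admitted flow.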

8. Scalability Issues

One of the significant challenges with IntServ is scalability. Because each router
in the network needs to maintain state information for every flow, it can become
difficult to manage when there are large numbers of flows. This problem becomes
more significant as the size of the network grows.

IntServ vs DiffServ

IntServ (Integrated Services) provides more granular control over QoS by reserving resources for each individual flow, making it more precise but less scalable. DiffServ (Differentiated Services), on the other hand, is more scalable: it classifies traffic into a small number of classes and provides differentiated treatment based on those classes, rather than reserving resources for each individual flow.

Use Cases for Integrated Services:

 Real-time applications like VoIP, video conferencing, and online gaming, which need low latency and stable bandwidth.
 Mission-critical data transfer requiring specific QoS guarantees.

Conclusion

Integrated Services (IntServ) is a powerful way to guarantee QoS for individual flows in a network, but it comes with scalability issues due to its need to maintain state information for each flow. It is more suited for smaller, specialized networks where precise control over traffic is needed.
5. What is Differentiated Services?

Differentiated Services (DiffServ) is a computer networking architecture designed to provide scalable and efficient Quality of Service (QoS) in IP networks. Unlike the older Integrated Services (IntServ) model, which requires maintaining state information for each flow, DiffServ uses a simpler, scalable approach by classifying and managing traffic at the network layer.

Key Concepts of Differentiated Services:

1. Differentiated Services Code Point (DSCP):

In DiffServ, the IP header includes a 6-bit field called the DSCP (Differentiated Services Code Point) in the Type of Service (ToS) byte of the IPv4 header (or in the Traffic Class field of the IPv6 header). The DSCP value is used to classify packets into different traffic classes for differentiated treatment in routers across the network.

DSCP values are mapped to specific forwarding behaviors at each network device
(such as routers), which allows the network to handle traffic differently based on
its priority or type.
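On Linux, an application can request a DSCP marking through the standard socket API. The sketch below sets Expedited Forwarding (DSCP 46) on a UDP socket; other platforms may ignore or restrict the IP_TOS option:

```python
import socket

# DSCP occupies the upper 6 bits of the IPv4 ToS byte, so the value passed
# to IP_TOS is the DSCP code point shifted left by 2 (the low 2 bits are ECN).
DSCP_EF = 46                      # Expedited Forwarding
tos = DSCP_EF << 2                # 184

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Every datagram this socket sends now carries the EF code point; routers
# along the path can match it to the corresponding per-hop behavior.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Note that marking is only a request: whether the packet actually receives EF treatment depends entirely on how the routers along the path are configured.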

2. Traffic Classes:

Packets are classified into different classes based on the DSCP value, which helps
in providing different levels of service for different types of traffic. Some
common classes include:

Expedited Forwarding (EF): Often used for latency-sensitive applications like voice or video conferencing.
Assured Forwarding (AF): Provides a guarantee of delivery under normal
network conditions but with lower priority than EF.

Default Forwarding (DF): The standard forwarding behavior used for most
traffic.

3. Per-Hop Behavior (PHB):

Each DSCP value corresponds to a specific Per-Hop Behavior (PHB) at each router or network node, indicating how packets should be treated. The two most common PHBs are:

EF (Expedited Forwarding): Provides low-latency and low-loss service for critical real-time applications like VoIP.

AF (Assured Forwarding): Provides reliability and higher-priority forwarding based on traffic class.

4. Traffic Policing and Shaping:

DiffServ can include traffic policing mechanisms where packets exceeding certain traffic thresholds are marked with lower priority, or they may be dropped to prevent congestion.

Traffic shaping can also be used to smooth traffic flow and ensure that it conforms
to the expected rate and behavior.
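A common model for policing is the token bucket. The Python sketch below is illustrative, with a made-up token rate, bucket size, and packet sizes:

```python
def police(packets, rate, bucket_size):
    """Token-bucket policer sketch: tokens accrue at `rate` per tick, capped
    at bucket_size; a packet conforms if enough tokens are available,
    otherwise it is marked down (or dropped)."""
    tokens = bucket_size
    results = []
    for arrival_gap, size in packets:          # (ticks since last packet, bytes)
        tokens = min(bucket_size, tokens + arrival_gap * rate)
        if size <= tokens:
            tokens -= size
            results.append("conform")
        else:
            results.append("exceed")           # mark with lower priority / drop
    return results

# Tokens refill at 10/tick into a bucket of 20. A back-to-back burst larger
# than the bucket exceeds the profile; after a 2-tick gap, traffic conforms again.
print(police([(0, 15), (0, 15), (2, 15)], rate=10, bucket_size=20))
# ['conform', 'exceed', 'conform']
```

The bucket depth controls how large a burst is tolerated, while the token rate enforces the long-term average; shaping uses the same model but delays non-conforming packets instead of marking them.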
Benefits of Differentiated Services:

 Scalability: DiffServ is more scalable than IntServ because it does not require maintaining state information for each flow, making it suitable for large networks.
 Simplicity: DiffServ simplifies the configuration and management of QoS
in comparison to IntServ by using DSCP markings and PHBs rather than
more complex flow-specific configurations.
 Flexibility: It provides flexibility in how network traffic is handled,
allowing different types of traffic to be prioritized or treated differently
based on service requirements.

Example Use Cases:

 Voice over IP (VoIP): VoIP traffic is given high priority with an EF PHB
to minimize latency and jitter.
 Video streaming: Can be assigned a medium priority with an AF PHB to
ensure smooth streaming even under network congestion.
 Best-effort traffic: Regular internet browsing and general data traffic can
be assigned lower priority with the default behavior.

6. Client-server programming?

Client-server programming in computer networks is a model for designing applications that allow different devices or programs to communicate with each other over a network. This model involves two main components:

1. Client: The client is typically a device or application that sends requests to a server. It could be anything from a web browser to a mobile app or even a command-line application. The client initiates communication by making a request for resources or services.
2. Server: The server is the device or application that listens for incoming
client requests, processes those requests, and then sends back a response.
A server can handle multiple client requests at the same time (using
techniques like multi-threading or asynchronous processing).

Key Concepts in Client-Server Programming:

1. Request and Response: The client sends a request to the server for data or
services, and the server processes the request and responds with the
requested information. This could involve querying a database, fetching a
file, performing computations, or any other task.
2. Protocol: Communication between the client and server follows specific
rules, called a protocol. A protocol defines the format and sequence of
messages exchanged. Common network protocols include:
o HTTP (Hypertext Transfer Protocol): Used for web browsers and
web servers.
o FTP (File Transfer Protocol): Used for transferring files.
o TCP/IP: A low-level communication protocol used in most internet
and intranet communications.
3. Sockets: A socket is an endpoint for sending or receiving data across a
computer network. The client and server use sockets to communicate over
a network. In most programming languages, there are libraries or APIs to
handle socket programming.
4. Ports: Servers typically listen on specific ports for incoming client
requests. A port is a virtual point through which data is sent or received.
For example, HTTP usually runs on port 80, while HTTPS runs on port
443.
How Client-Server Communication Works:

1. Client Initialization: The client starts by opening a socket and connecting


to the server's IP address and the specific port number where the server is
listening for requests.
2. Server Listening: The server, in turn, opens a socket and listens for
incoming connections on a specific port. When a client connects, the server
accepts the connection.
3. Request Handling: The client sends a request (like a web page request or
database query), and the server processes the request. The server may
interact with other systems (e.g., databases, file systems) to process the
request.
4. Response: The server sends a response back to the client, which might be
data (like a web page or query result), a file, or a confirmation of an action
taken.
5. Closing Connection: Once the communication is finished, the client or
server may close the connection, or it may be reused for multiple requests
(in case of persistent connections like HTTP/2).

Example of Client-Server Programming:

Here’s a simple example of client-server communication in Python using sockets:

Server Code:
import socket

# Create a TCP socket object
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Bind the socket to an address and port
server_socket.bind(('localhost', 12345))

# Start listening for connections
server_socket.listen(5)
print("Server is listening for connections...")

while True:
    # Accept a connection from a client
    client_socket, client_address = server_socket.accept()
    print(f"Connection established with {client_address}")

    # Receive the message from the client
    data = client_socket.recv(1024)
    print(f"Received message: {data.decode('utf-8')}")

    # Send a response to the client
    client_socket.send("Hello from server!".encode('utf-8'))

    # Close the client connection
    client_socket.close()
Client Code:
import socket

# Create a TCP socket object
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Connect to the server
client_socket.connect(('localhost', 12345))

# Send a message to the server
client_socket.send("Hello from client!".encode('utf-8'))

# Receive the server's response
response = client_socket.recv(1024)
print(f"Server says: {response.decode('utf-8')}")

# Close the connection
client_socket.close()

Explanation:

1. Server: The server listens on port 12345 and waits for incoming client
connections. Once a client connects, it receives the message and sends a
response.
2. Client: The client connects to the server at localhost:12345, sends a
message, and then waits for the server's response.

Advantages of Client-Server Model:


 Scalability: Servers can handle requests from many clients
simultaneously, and new clients can be added easily.
 Centralized Management: Servers are typically centralized, making it
easier to manage data, perform updates, or enforce security policies.
 Resource Efficiency: Clients generally require fewer resources than
servers, which are designed to handle larger workloads.

Disadvantages:

 Single Point of Failure: If the server fails, all client communication will
be disrupted.
 Performance Bottleneck: Heavy server load or high traffic can impact
performance, so efficient server management and load balancing are
crucial.

The client-server model is foundational to many types of network applications, including web services, email systems, file sharing, and more.

Two Mark Questions:

1. What is the transport layer?

In computer networks, the transport layer (Layer 4 in the OSI model) provides end-to-end communication services for applications, ensuring reliable and efficient data transfer between hosts, using protocols like TCP and UDP.

2. What are the protocols TCP and UDP?

TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) are two fundamental transport layer protocols used in computer networks, with TCP being connection-oriented and reliable, while UDP is connectionless and prioritizes speed over reliability.
3. What is a transition diagram?

A transition diagram, also known as a state transition diagram, is a visual representation of a finite state machine (FSM), illustrating how a system transitions between states based on events or inputs.

4. What is flow control?

In computer networks, flow control regulates data transmission to prevent a sender from overwhelming a receiver, ensuring efficient and reliable communication by matching the sending rate with the receiver's processing capacity.

5. What is error control?

Error control refers to techniques used to detect and correct errors that may occur during data transmission, ensuring data integrity and reliable communication. This is achieved through error detection and correction codes, as well as mechanisms like acknowledgements and retransmissions.

6. What is congestion control?

Congestion control in computer networks is a mechanism that manages data flow to prevent network overload and ensure efficient data transmission, especially when multiple sources are sending data simultaneously.

7. What is SCTP-QoS?

SCTP-QoS refers to using the Stream Control Transmission Protocol (SCTP) to implement Quality of Service (QoS) mechanisms in computer networks, ensuring prioritized and reliable data transmission for time-sensitive applications.
8. What is Integrated Services?

In computer networking, "Integrated Services" or IntServ refers to an architecture that aims to guarantee Quality of Service (QoS) by allowing applications to request specific network resources and ensuring those requests are met, enabling applications like video and audio to be delivered reliably.

9. What is Differentiated Services?

Differentiated Services (DiffServ) is a computer networking architecture that classifies and manages network traffic to provide Quality of Service (QoS) in IP networks, prioritizing certain traffic types for better performance.
10. What is client-server programming?

Client-server programming involves a client application requesting services or resources from a server application, where the server manages and provides those resources.
11. What is flow control to improve QoS?

Flow control improves QoS by matching the sender's transmission rate to the receiver's processing capacity, preventing buffer overflow and packet loss; keeping loss and queuing delay low in this way helps maintain the throughput, latency, and jitter targets that QoS requires.
