Presentation on
Congestion in Network
 What is Congestion?
Congestion, in the context of networks, refers to a network state in which a node or link carries so much data that network service quality deteriorates, resulting in queuing delay, frame or data packet loss, and the blocking of new connections.
In a congested network, response time slows and network throughput drops. Congestion occurs when bandwidth is insufficient and network data traffic exceeds capacity.
Causes of Congestion:
 When there are more input lines than output lines (or only a single output line).
 When the router is slow, i.e., when the router's CPU is slow.
 When the router has no free buffers, i.e., insufficient memory to hold the queue of packets.
 When the components used in the subnet (links, routers, switches, etc.) have different traffic-carrying and switching capacities, the mismatch causes congestion.
 When the bandwidth of a line is low, it cannot carry a large volume of packets, which causes congestion.
Hence, congestion cannot be eradicated, but it can be controlled.
 
How to Correct the Congestion Problem:
1. Open Loop Congestion Control
• In this method, policies are used to prevent congestion before it happens.
• Congestion control is handled either by the source or by the destination.
The various methods used for open loop congestion control are:
a) Retransmission Policy
• The sender retransmits a packet if it believes that the packet it sent was lost or corrupted.
• However, retransmission in general may increase congestion in the network, so the retransmission policy (and its timers) must be designed carefully so that the retransmissions themselves do not make congestion worse.
b) Window Policy
• To implement the window policy, the selective reject (Selective Repeat) window method is used for congestion control.
• The selective reject method resends only the specific lost or damaged packets, unlike Go-Back-N, which may also resend packets that arrived correctly and thereby add to the load.
c) Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Acknowledgements also add to the traffic load on the network; thus, by sending fewer acknowledgements we can reduce the load on the network.
d) Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.
• Such a discarding policy may prevent congestion and at the same time may not harm the integrity of the
transmission.
e) Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks.
• Switches in a flow first check the resource requirements of a flow before admitting it to the network.
• A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
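To make the admission check concrete, here is a minimal Python sketch (the Link class, its method names, and the numbers are illustrative assumptions, not from the slides): a link tracks the bandwidth already reserved for admitted flows and admits a new virtual-circuit flow only if the requested rate still fits.

```python
class Link:
    """Hypothetical link that tracks the bandwidth reserved for admitted flows."""

    def __init__(self, capacity_mbps):
        self.capacity_mbps = capacity_mbps
        self.reserved_mbps = 0.0

    def admit(self, requested_mbps):
        """Admit the flow only if its requested rate still fits; otherwise deny it."""
        if self.reserved_mbps + requested_mbps <= self.capacity_mbps:
            self.reserved_mbps += requested_mbps
            return True
        return False


link = Link(capacity_mbps=100)
print(link.admit(60))   # True  -> flow admitted, 60 Mbps reserved
print(link.admit(50))   # False -> would exceed capacity, the connection is denied
```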
2. Closed Loop Congestion Control
■ Closed loop congestion control mechanisms try to remove the congestion after it happens.
The various methods used for closed loop congestion control are:
a) Backpressure
• Backpressure is a node-to-node congestion control technique that starts at a congested node and propagates in the opposite direction of the data flow.
• In this method of congestion control, the congested node stops receiving data from the immediate upstream node or nodes.
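As an illustration (not part of the original slides), backpressure falls out naturally from a bounded queue: in the Python sketch below, a full buffer blocks the upstream sender until the slow downstream node catches up. The queue size and timings are arbitrary assumptions.

```python
import queue
import threading
import time

# Bounded buffer between an upstream and a downstream node.
# When it is full, put() blocks, which forces the upstream node to stop sending.
buffer = queue.Queue(maxsize=3)

def upstream():
    for i in range(6):
        buffer.put(f"packet-{i}")   # blocks while the downstream buffers are full
        print(f"upstream sent packet-{i}")

def downstream():
    while True:
        pkt = buffer.get()
        time.sleep(0.5)             # slow consumer, so congestion builds up
        print(f"downstream processed {pkt}")
        buffer.task_done()

threading.Thread(target=downstream, daemon=True).start()
upstream()
buffer.join()                       # wait until every queued packet is processed
```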
b) Choke Packet
• In this method of congestion control, the congested router or node sends a special type of packet, called a choke packet, to the source to inform it about the congestion.
• In the choke packet method, the congested node sends the warning directly to the source station; the intermediate nodes through which the packet has travelled are not warned.
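A rough Python sketch of the idea (the class names, the queue threshold, and the handle_choke_packet method are hypothetical, chosen only for illustration): when the router's queue grows past a threshold, it warns the packet's source directly, and the source halves its sending rate.

```python
QUEUE_THRESHOLD = 8   # queue length at which the router considers itself congested

class Source:
    def __init__(self, rate_mbps):
        self.rate_mbps = rate_mbps

    def handle_choke_packet(self):
        # The warning comes straight from the congested node; the source backs off.
        self.rate_mbps /= 2
        print(f"choke packet received, rate reduced to {self.rate_mbps} Mbps")

class Router:
    def __init__(self):
        self.queue = []

    def receive(self, packet, source):
        self.queue.append(packet)
        if len(self.queue) > QUEUE_THRESHOLD:
            source.handle_choke_packet()   # intermediate nodes are not involved


src = Source(rate_mbps=10)
router = Router()
for i in range(10):
    router.receive(f"pkt-{i}", src)        # the 9th and 10th packets trigger choke packets
```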
c) Implicit Signaling
• In implicit signaling, there is no communication between the congested node or nodes and the source.
• The source guesses that there is congestion somewhere in the network when it does not receive an acknowledgment in time; the delay in receiving an acknowledgment is therefore interpreted as congestion in the network.
• On sensing this congestion, the source slows down.
• This type of congestion control policy is used by TCP.
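The sketch below illustrates this behaviour in simplified, TCP-like terms (it is not the real TCP code; the window size, timeout value, and method names are assumptions): a missing acknowledgment within the timeout is taken as a sign of congestion, and the sending window is cut.

```python
class ImplicitSignalingSender:
    """Simplified sender: a missing ACK within the timeout is read as congestion."""

    def __init__(self):
        self.window = 8          # packets allowed in flight (assumed starting value)
        self.timeout_s = 1.0     # how long to wait for an acknowledgment (assumed)

    def on_ack_received(self):
        self.window += 1         # no sign of congestion: probe for more bandwidth

    def on_ack_timeout(self):
        # No ACK arrived in time; the sender guesses the network is congested and slows down.
        self.window = max(1, self.window // 2)


sender = ImplicitSignalingSender()
sender.on_ack_timeout()
print(sender.window)   # 4: the sender slowed down without any explicit message from the network
```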
d) Explicit Signaling
• In this method, the congested nodes explicitly send a signal to the source or the destination to inform them about the congestion.
• Explicit signaling is different from the choke packet method: in the choke packet method a separate packet is used for this purpose, whereas in explicit signaling the signal is included in the packets that carry data.
• Explicit signaling can occur in either the forward direction or the backward direction.
Congestion Control Algorithms:
 Leaky Bucket Algorithm
• It is a traffic-shaping mechanism that controls the amount and the rate of the traffic sent to the network.
• A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate.
• Imagine a bucket with a small hole at the bottom. The rate at which water is poured into the bucket is not fixed and can vary, but water leaks from the bucket at a constant rate. Thus (as long as water is present in the bucket), the rate at which the water leaks does not depend on the rate at which water is poured in. Also, when the bucket is full, any additional water that enters the bucket spills over the sides and is lost.
• The same concept can be applied to packets in the network. Consider that data is coming from the source at variable speeds. Suppose a source sends data at 12 Mbps for 4 seconds, then sends nothing for 3 seconds, and then transmits at 10 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 68 Mb of data has been transmitted. If a leaky bucket algorithm with an output rate of 8 Mbps is used, the data leaves the bucket at no more than 8 Mbps throughout the 9 seconds (about 7.5 Mbps on average) instead of in 12 Mbps bursts, so a constant flow is maintained.
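A minimal Python sketch of this example (the function and parameter names are illustrative, not from the slides): the bursty arrival pattern from the slide is poured into a bucket that leaks at 8 Mbps, and the per-second output never exceeds that rate.

```python
def leaky_bucket(arrivals_mb, leak_rate_mbps):
    """Simulate a leaky bucket second by second.

    arrivals_mb    -- megabits arriving in each one-second slot (bursty input)
    leak_rate_mbps -- constant rate at which the bucket drains
    Returns the megabits actually sent in each slot.
    """
    level = 0                                  # megabits currently held in the bucket
    output = []
    for arriving in arrivals_mb:
        level += arriving                      # pour the burst into the bucket
        sent = min(level, leak_rate_mbps)      # leak at a constant rate
        level -= sent
        output.append(sent)
    return output


# The slide's example: 12 Mbps for 4 s, idle for 3 s, 10 Mbps for 2 s = 68 Mb in 9 s.
arrivals = [12, 12, 12, 12, 0, 0, 0, 10, 10]
print(leaky_bucket(arrivals, leak_rate_mbps=8))   # no second exceeds the 8 Mbps leak rate
```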
 Token Bucket Algorithm
• The leaky bucket algorithm allows only an average (constant) rate of data flow. Its major problem is that it cannot deal with bursty data.
• A leaky bucket algorithm does not consider the idle time of the host. For example, if the host was idle for 10 seconds and now wants to send data at a very high speed for another 10 seconds, the transmission is still spread over the whole 20 seconds so that the average data rate is maintained. The host gains no advantage from having sat idle for 10 seconds.
• To overcome this problem, a token bucket algorithm is used. A token bucket algorithm allows bursty data transfers.
• A token bucket algorithm is a modification of the leaky bucket in which the bucket holds tokens.
• In this algorithm, tokens are generated at every clock tick. For a packet to be transmitted, the system must remove the required number of tokens from the bucket.
• Thus, a token bucket algorithm allows idle hosts to accumulate credit for the future in the form of tokens.
■ For example, if a system generates 100 tokens in one clock tick and the host is idle for 100 ticks, the bucket will contain 10,000 tokens.
■ Now, if the host wants to send bursty data, it can consume all 10,000 tokens at once to send 10,000 cells or bytes.
■ Thus, a host can send bursty data as long as the bucket is not empty.
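The following Python sketch mirrors this example (the class name, the capacity limit, and the tick/send methods are assumptions made for illustration): tokens accumulate while the host is idle, and the saved-up credit is then spent on a burst.

```python
class TokenBucket:
    """Tokens accumulate at each clock tick; sending consumes them."""

    def __init__(self, tokens_per_tick, capacity):
        self.tokens_per_tick = tokens_per_tick
        self.capacity = capacity          # assumed upper bound on saved-up credit
        self.tokens = 0

    def tick(self):
        # Idle ticks are not wasted: each tick adds tokens, up to the capacity.
        self.tokens = min(self.capacity, self.tokens + self.tokens_per_tick)

    def send(self, cells):
        """Transmit a burst of `cells` if enough tokens have been saved up."""
        if cells <= self.tokens:
            self.tokens -= cells
            return True
        return False                      # not enough credit: the burst must wait


# The slide's example: 100 tokens per tick, host idle for 100 ticks.
bucket = TokenBucket(tokens_per_tick=100, capacity=10_000)
for _ in range(100):
    bucket.tick()
print(bucket.tokens)        # 10000 tokens accumulated while idle
print(bucket.send(10_000))  # True: the whole burst of 10,000 cells goes out at once
```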
 Presenters:
-Anushthan Dhamala
-Anurag Dhamala
-Anish Nepal
-Bipin Kuikel
-Bibek Acharya
