Unit-Iii-C N-R18
Network Layer: Design issues, Routing algorithms: shortest path routing, Flooding,
Hierarchical routing, Broadcast, Multicast, distance vector routing, Congestion
Control Algorithms, Quality of Service, Internetworking, the Network layer in the
internet.
• It handles the service requests from the transport layer and forwards them to the data link layer.
• The network layer translates logical addresses into physical addresses.
• It determines the route from source to destination and manages traffic problems such as switching, routing, and congestion control of data packets.
• The main role of the network layer is to move packets from the sending host to the receiving host.
The transport layer should be shielded from the number, type, and topology of the routers present.
The network addresses made available to the transport layer should use a uniform numbering plan, even across LAN and WAN connections.
Connectionless – Packets are routed and injected into the subnet individually; no advance setup is required.
Connection-Oriented – The subnet must offer a reliable service, and all the packets of a connection must be transmitted over a single route.
4. Implementation of Connection-Oriented Service:
• If connection-oriented service is used, a path from the source router all the way to the
destination router must be established before any data packets can be sent.
• This connection is called a VC (virtual circuit), and the network is called a virtual-circuit
network.
• In connection-oriented services, the data packets are delivered to the receiver in the same
order in which they have been sent by the sender.
• With connection-oriented service, each packet carries an identifier telling which virtual
circuit it belongs to.
Classification of Routing Algorithms: The routing algorithms can be classified as follows:
1. Adaptive Algorithms: These are algorithms that change their routing decisions whenever the network topology or traffic load changes. Also known as dynamic routing, they make use of dynamic information such as the current topology, load, and delay to select routes. Typical optimization parameters are distance, number of hops, and estimated transit time.
Further these are classified as follows:
(a) Isolated – In this method, each node makes its routing decisions using only the information it has, without seeking information from other nodes. The sending node does not know the status of a particular link. The disadvantage is that a packet may be sent through a congested network, which may result in delay.
(b) Centralized – In this method, a centralized node has complete information about the network and makes all the routing decisions. The advantage is that only one node needs to keep the information about the entire network; the disadvantage is that if the central node goes down, the entire network goes down with it. The link-state algorithm is referred to as a global algorithm, since it is aware of the cost of each link in the network.
(c) Distributed – In this method, each node receives information from its neighbours and then makes its own decision about routing the packets. The disadvantage is that a packet may be delayed if the topology changes during the interval between receiving information and sending the packet. It is also known as a decentralized algorithm, since the least-cost path between source and destination is computed in a distributed manner.
2. Non-Adaptive Algorithms –
These are algorithms that do not change their routing decisions once they have been selected. This is also known as static routing, as the route to be taken is computed in advance and downloaded to the routers when they are booted.
Random walk – In this method, packets are sent from node to node, each time to a neighbour chosen at random. This is a highly robust method, often implemented by sending each packet out on the least-queued link.
Differences b/w Adaptive and Non-Adaptive Routing Algorithm
Bellman-Ford Algorithm
It depends on the following concept: a shortest path contains at most n−1 edges, because a shortest path cannot contain a cycle.
Algorithm Steps:
The outer loop runs n−1 times.
In each pass, loop over all edges; if the next node's distance > current node's distance + edge weight, update the next node's distance to "current node distance + edge weight".
This algorithm depends on the relaxation principle, where the shortest distance of each vertex is gradually replaced by more accurate values until the optimum solution is reached. In the beginning, all vertices have a distance of infinity, except the source vertex, whose distance is 0. Then all the vertices connected to the source are updated with new distances (source vertex distance + edge weight), and the same concept is applied to the newly updated vertices, and so on.
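This relaxation procedure can be sketched in Python (a minimal sketch; the sample graph uses the same weights as the four-router distance-vector example later in this unit, with A, B, C, D numbered 0 to 3):

```python
INF = float('inf')

def bellman_ford(n, edges, source):
    """n vertices (0..n-1), edges as (u, v, weight) tuples."""
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge n-1 times: a shortest path has at most n-1 edges.
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

edges = [(0, 1, 2), (0, 3, 1), (1, 2, 3), (1, 3, 7), (2, 3, 11)]
# The graph is undirected, so add the reverse edges too.
edges += [(v, u, w) for u, v, w in edges]
print(bellman_ford(4, edges, 0))  # distances from A: [0, 2, 5, 1]
```

Note how C's distance settles at 5 (via B) rather than 12 (via D), exactly the minimum the relaxation step computes.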
Dijkstra's Algorithm
Each node is labeled with its distance from the source node along the best-known path. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels may change, reflecting better paths.
A label may be either tentative or permanent. Initially, all labels are tentative. When it is
discovered that a label represents the shortest possible path from the source to that node, it is
made permanent and never changed thereafter.
Let us assume the weights represent, for example, distance. We want to find the shortest path from A to D in the following graph.
Distance Vector Routing: How it Works
Step-1:
Each router prepares its routing table using its local knowledge. Each router knows about:
All the routers present in the network.
The distance to its neighbouring routers.
Step-2:
Each router exchanges its distance vector with its neighboring routers.
Each router prepares a new routing table using the distance vectors it has obtained from its
neighbors.
This step is repeated (n−1) times, where n is the number of routers in the network.
After this, the routing tables converge and become stable.
Consider- There is a network consisting of 4 routers. The weights are mentioned on the edges.
Weights could be distances or costs or delays.
Step-1:
Initially each router prepares its routing table using its local knowledge.
Step-2:
Each router exchanges its distance vector obtained in Step-01 with its neighbors.
After exchanging the distance vectors, each router prepares a new routing table.
At Router A
Router A receives distance vectors from its neighbors B and D.
Router A prepares a new routing table as-
Cost of reaching destination B from router A = min { 2+0 , 1+7 } = 2 via B.
Cost of reaching destination C from router A = min { 2+3 , 1+11 } = 5 via B.
Cost of reaching destination D from router A = min { 2+7 , 1+0 } = 1 via D
Router A can reach the destination router B via its neighbor B or neighbor D.
It chooses the path which gives the minimum cost.
Cost of reaching router B from router A via neighbor B = Cost (A→B) + Cost (B→B)= 2 + 0 = 2
Cost of reaching router B from router A via neighbor D = Cost (A→D) + Cost (D→B) = 1 + 7 = 8
Since the cost is minimum via neighbor B, so router A chooses the path via B.
It creates an entry (2, B) for destination B in its new routing table.
Similarly, we calculate the shortest path distance to each destination router at every router.
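The table computation above is the Bellman-Ford update: for each destination, take the minimum over all neighbours of (cost to that neighbour + the neighbour's advertised distance). A minimal sketch, using router A's Step-2 inputs (function and variable names are illustrative):

```python
INF = float('inf')

def merge_vectors(cost_to_neighbour, neighbour_vectors, destinations):
    """D(x) = min over neighbours n of c(n) + D_n(x); returns (cost, via)."""
    table = {}
    for dest in destinations:
        best_cost, best_via = INF, None
        for n, vec in neighbour_vectors.items():
            c = cost_to_neighbour[n] + vec[dest]
            if c < best_cost:
                best_cost, best_via = c, n
        table[dest] = (best_cost, best_via)
    return table

# Router A's view: neighbours B (link cost 2) and D (link cost 1),
# with the distance vectors they advertised after Step-1.
cost = {'B': 2, 'D': 1}
vectors = {'B': {'B': 0, 'C': 3, 'D': 7},
           'D': {'B': 7, 'C': 11, 'D': 0}}
print(merge_vectors(cost, vectors, ['B', 'C', 'D']))
# {'B': (2, 'B'), 'C': (5, 'B'), 'D': (1, 'D')}
```

The output reproduces A's new routing table entries (2 via B, 5 via B, 1 via D).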
At Router B
Router B receives distance vectors from its neighbors A, C and D.
Router B prepares a new routing table as-
Cost of reaching destination A from router B = min { 2+0 , 3+∞ , 7+1 } = 2 via A.
Cost of reaching destination C from router B = min { 2+∞ , 3+0 , 7+11 } = 3 via C.
Cost of reaching destination D from router B = min { 2+1 , 3+11 , 7+0 } = 3 via A.
Thus, the new routing table at router B is-
At Router C-
Router C receives distance vectors from its neighbors B and D.
Router C prepares a new routing table as-
At Router D-
Router D receives distance vectors from its neighbors A, B and C.
Router D prepares a new routing table as-
Cost of reaching destination A from router D = min { 1+0 , 7+2 , 11+∞ } = 1 via A.
Cost of reaching destination B from router D = min { 1+2 , 7+0 , 11+3 } = 3 via A.
Cost of reaching destination C from router D = min { 1+∞ , 7+3 , 11+0 } = 10 via B .
Thus, the new routing table at router D is
Step-3:
Each router exchanges its distance vector obtained in Step-02 with its neighboring routers.
After exchanging the distance vectors, each router prepares a new routing table.
At Router A
Router A receives distance vectors from its neighbors B and D.
Router A prepares a new routing table as-
At Router B-
Router B receives distance vectors from its neighbors A, C and D.
Router B prepares a new routing table as-
Cost of reaching destination A from router B = min { 2+0 , 3+5 , 3+1 } = 2 via A.
Cost of reaching destination C from router B = min { 2+5 , 3+0 , 3+10 } = 3 via C.
Cost of reaching destination D from router B = min { 2+1 , 3+10 , 3+0 } = 3 via A.
Thus, the new routing table at router B is-
At Router C-
Router C receives distance vectors from its neighbors B and D.
Router C prepares a new routing table as-
At Router D-
Router D receives distance vectors from its neighbors A, B and C.
Router D prepares a new routing table as-
Cost of reaching destination A from router D = min { 1+0 , 3+2 , 10+5 } = 1 via A.
Cost of reaching destination B from router D = min { 1+2 , 3+0 , 10+3 } = 3 via A.
Cost of reaching destination C from router D = min { 1+5 , 3+3 , 10+0 } = 6 via A.
Thus, the new routing table at router D is-
In general, the Bellman-Ford (distance vector) algorithm converges, and the routers end up with entries for one another.
Count to Infinity Problem: Consider two routers A and B, where B can get to a router C at a cost of 1, and A can get to C via B at a cost of 2.
If the link between B and C is disconnected, B will know that it can no longer get to C via that link and will remove the route from its table. Before it can send any updates, it may receive an update from A advertising that A can get to C at a cost of 2. B can get to A at a cost of 1, so it will install a route to C via A at a cost of 3. A will later receive updates from B and raise its own cost to 4. The two routers then go on feeding each other bad information, with the cost climbing toward infinity; this is called the count-to-infinity problem.
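This feedback loop can be simulated in a few lines (an illustrative sketch; RIP's infinity of 16, mentioned below, is used as the cutoff):

```python
INFINITY = 16   # RIP's metric for "unreachable"

# Topology A - B - C in a line, each link of cost 1, so A reaches C via B
# at cost 2. The B-C link then fails; A and B only hear about C from each
# other, and every exchange raises the cost by one hop on each side.
a_to_c, b_to_c = 2, INFINITY
trace = [(a_to_c, b_to_c)]
while a_to_c < INFINITY or b_to_c < INFINITY:
    b_to_c = min(1 + a_to_c, INFINITY)   # B trusts A's (stale) route
    a_to_c = min(1 + b_to_c, INFINITY)   # A then updates via B
    trace.append((a_to_c, b_to_c))
print(trace)
# [(2, 16), (4, 3), (6, 5), (8, 7), (10, 9), (12, 11), (14, 13), (16, 15), (16, 16)]
```

The first exchange matches the narrative above (B at 3 via A, then A at 4), and the costs keep climbing until both sides hit the "infinity" value.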
1. Route Poisoning:
When a route fails, distance vector protocols spread the bad news by poisoning the route. Route poisoning refers to the practice of advertising a route, but with a special metric value that means "infinity". Routers consider routes advertised with an infinite metric to have failed. Each distance vector routing protocol uses an actual metric value to represent infinity; RIP defines infinity as 16. The main disadvantage of poisoning (in particular poison reverse) is that it can significantly increase the size of routing announcements in certain fairly common network topologies.
2. Split Horizon:
In computer networking, split-horizon route advertisement is a method of preventing routing loops in distance-vector routing protocols by prohibiting a router from advertising a route back onto the interface from which it was learned.
If the link between B and C goes down, and B had earlier received a route to C from A, B could end up using that route via A; A would send the packet right back to B, creating a loop. But according to the split-horizon rule, node A does not advertise its route for C (namely A to B to C) back to B.
On the surface, this seems redundant, since B will never route via node A anyway: that route costs more than B's direct route to C.
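The split-horizon rule can be sketched as a filter on the advertisement a router builds for each interface (the data structures and names here are illustrative):

```python
def advertise(routes, out_interface):
    """routes: dest -> (cost, learned_on_interface).
    Split horizon: suppress any route learned on the interface
    we are about to advertise on."""
    return {dest: cost
            for dest, (cost, learned_on) in routes.items()
            if learned_on != out_interface}

# A's table: route to C was learned from B, route to D from D.
routes = {'C': (2, 'if_B'), 'D': (1, 'if_D')}
print(advertise(routes, 'if_B'))  # {'D': 1} -- the route to C is withheld
print(advertise(routes, 'if_D'))  # {'C': 2} -- the route to D is withheld
```

A never tells B about a route that goes through B, which breaks the two-node loop described above.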
Link-State Routing
Link-state routing is an alternative to distance-vector. It is also a dynamic routing algorithm. In
distance-vector routing, each node knows a bare minimum of network topology: it knows nothing
about links beyond those to its immediate neighbours. In the link-state approach, each node keeps
a maximum amount of network information: a full map of all nodes and all links. Routes are then
computed locally from this map, using the shortest-path-first algorithm. The map also allows
calculation of a new route as soon as news of the failure of the existing route arrives; distance-
vector protocols on the other hand must wait for news of a new route after an existing route fails.
Link-state protocols distribute network map information through a modified form of
broadcast of the status of each individual link.
It sends out link-state packets (LSPs) that “flood” the network. This broadcast process is
called reliable flooding.
The link-state flooding algorithm avoids the usual problems of broadcast in the presence of
loops by having each node keep a database of all LSP messages. The originator of each LSP
includes its identity, information about the link that has changed status, and also a sequence
number.
The next step is to compute routes from the network map, using the shortest-path-first
(SPF) algorithm. This algorithm computes shortest paths from a given node.
Reliable Flooding
o Initial state: Each node knows the cost of its neighbours.
o Final state: Each node knows the entire graph.
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.
o The link-state routing algorithm uses Dijkstra's algorithm, which finds the shortest path from one node to every other node in the network.
o Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the algorithm, the least-cost paths are known for k destination nodes.
Algorithm
Initialization:
N = {A} // A is the root node
for all nodes v
if v adjacent to A
then D(v) = c(A,v)
else D(v) = infinity
Loop:
find w not in N such that D(w) is a minimum
add w to N
update D(v) for all v adjacent to w and not in N:
D(v) = min( D(v), D(w) + c(w,v) )
until all nodes are in N
In the above algorithm, an initialization step is followed by the loop. The loop executes once for every node not yet in N, i.e., one time fewer than the total number of nodes in the network.
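The pseudocode can be turned into a short runnable sketch (the sample graph reuses the four-router weights from the distance-vector example; naming is illustrative):

```python
INF = float('inf')

def dijkstra(graph, root):
    """graph: node -> {neighbour: cost}. Returns least-cost D(v) for all v."""
    D = {v: graph[root].get(v, INF) for v in graph}   # init from root's links
    D[root] = 0
    N = {root}                                        # permanently labeled nodes
    while len(N) < len(graph):
        # Pick the tentative node w with minimum D(w) and make it permanent.
        w = min((v for v in graph if v not in N), key=lambda v: D[v])
        N.add(w)
        # Relax every neighbour of w that is not yet permanent.
        for v, c in graph[w].items():
            if v not in N:
                D[v] = min(D[v], D[w] + c)
    return D

graph = {'A': {'B': 2, 'D': 1},
         'B': {'A': 2, 'C': 3, 'D': 7},
         'C': {'B': 3, 'D': 11},
         'D': {'A': 1, 'B': 7, 'C': 11}}
print(dijkstra(graph, 'A'))  # {'A': 0, 'B': 2, 'C': 5, 'D': 1}
```

Each pass through the while loop makes exactly one label permanent, matching the "after the kth iteration, k least-cost paths are known" property noted above.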
Hierarchical Routing
As networks grow in size, the router routing tables grow proportionally. Not only is router memory
consumed by ever-increasing tables, but more CPU time is needed to scan them and more
bandwidth is needed to send status reports about them. At a certain point, the network may grow to
the point where it is no longer feasible for every router to have an entry for every other router, so
the routing will have to be done hierarchically. When hierarchical routing is used, the routers are
divided into what we will call regions. Each router knows all the details about how to route packets
to destinations within its own region but knows nothing about the internal structure of other regions.
When different networks are interconnected, it is natural to regard each one as a separate region to
free the routers in one network from having to know the topological structure of the other ones. For
huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions
into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for
aggregations.
The following figure (Fig. 5-14) gives a quantitative example of routing in a two-level hierarchy with
five regions. The full routing table for router 1A has 17 entries, as shown in Fig. 5-14(b). When
routing is done hierarchically, as in Fig. 5-14(c), there are entries for all the local routers, as before,
but all other regions are condensed into a single router, so all traffic for region 2 goes via the 1B-2A
line, but the rest of the remote traffic goes via the 1C-3B line. Hierarchical routing has reduced the
table from 17 to 7 entries. As the ratio of the number of regions to the number of routers per region
grows, the savings in table space increase.
Unfortunately, these gains in space are not free. There is a penalty to be paid: increased path length.
For example, the best route from 1A to 5C is via region 2, but with hierarchical routing all traffic to
region 5 goes via region 3, because that is better for most destinations in region 5.
Fig: Hierarchical routing.
When a single network becomes very large, an interesting question is ‘‘how many levels should the
hierarchy have?’’ For example, consider a network with 720 routers. If there is no hierarchy, each
router needs 720 routing table entries. If the network is partitioned into 24 regions of 30 routers
each, each router needs 30 local entries plus 23 remote entries for a total of 53 entries. If a three-
level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers, each router needs
10 entries for local routers, 8 entries for routing to other regions within its own cluster, and 7 entries
for distant clusters, for a total of 25 entries.
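The entry counts in this example follow from simple arithmetic: a router keeps one entry per router in its own region, plus one entry per peer group at each higher level. A small sketch (the helper name is illustrative):

```python
def entries_per_router(level_sizes):
    """level_sizes[0] = routers per region; level_sizes[1:] = group sizes
    at each higher level. A router stores its full local region plus
    (size - 1) entries for its peers at every level above."""
    local, *upper = level_sizes
    return local + sum(size - 1 for size in upper)

# Two-level: 24 regions of 30 routers each (720 routers total).
print(entries_per_router([30, 24]))    # 30 local + 23 remote = 53

# Three-level: 8 clusters x 9 regions x 10 routers (720 routers total).
print(entries_per_router([10, 9, 8]))  # 10 + 8 + 7 = 25
```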
Unicast routing
Most of the traffic on the Internet and intranets, known as unicast data or unicast traffic, is sent with a specified destination. Routing unicast data over the Internet is called unicast routing. It is the simplest form of routing because the destination is already known; the router just has to look up the routing table and forward the packet to the next hop.
Broadcast Routing
Sending a packet to all destinations simultaneously is called broadcasting. Various methods have
been proposed for doing it.
In the first method, a router creates a data packet and sends a separate copy to each host, one by one, with a different destination address in each copy. All packets are sent as unicast, but because they go to every host, the effect is the same as broadcasting. This method consumes a lot of bandwidth, and the router must know the destination address of every node.
Secondly, when a router receives a packet that is to be broadcast, it simply floods copies of the packet out of all interfaces. All routers are configured in the same way. This method is easy on the router's CPU but may cause duplicate packets to be received from peer routers. Reverse path forwarding is a technique in which a router knows in advance the predecessor from which it should receive a broadcast; this is used to detect and discard duplicates.
The first method is wasteful of bandwidth and slow; it also requires the source to have a complete list of all destinations. This method is not desirable in practice, even though it is widely applicable. An improvement is multidestination routing, in which each packet
contains either a list of destinations or a bit map indicating the desired destinations. When a packet
arrives at a router, the router checks all the destinations to determine the set of output lines that will
be needed. The router generates a new copy of the packet for each output line to be used and
includes in each packet only those destinations that are to use the line. In effect, the destination set
is partitioned among the output lines. After a sufficient number of hops, each packet will carry only
one destination like a normal packet. Multidestination routing is like using separately addressed
packets, except that when several packets must follow the same route, one of them pays full fare
and the rest ride free. The network bandwidth is therefore used more efficiently. However, this
scheme still requires the source to know all the destinations, plus it is as much work for a router to
determine where to send one multidestination packet as it is for multiple distinct packets.
Multicast Routing
Multicast routing is a special case of broadcast routing, with significant differences and challenges. In broadcast routing, packets are sent to all nodes even if they do not want them. In multicast routing, the data is sent only to the nodes that want to receive it.
A router must know that there are nodes that wish to receive the multicast packets (or stream); only then should it forward them. Multicast routing uses a spanning tree to avoid looping, and it also uses the reverse path forwarding technique to detect and discard duplicates and loops.
Anycast Routing
Anycast packet forwarding is a mechanism in which multiple hosts can have the same logical address. When a packet destined for this logical address is received, it is sent to the host that is nearest in the routing topology.
Anycast routing is done with the help of a DNS server. Whenever an anycast request is received, DNS is consulted about where to send it, and it provides the nearest IP address configured for that name.
CONGESTION
Congestion may occur in a network when the load on the network is greater than the capacity of the network. When too many packets are present in (a part of) the network, they cause packet delay and loss that degrade performance; this situation is called congestion.
When the number of packets hosts send into the network is well within its carrying capacity, the
number delivered is proportional to the number sent. If twice as many are sent, twice as many are
delivered. However, as the offered load approaches the carrying capacity, bursts of traffic
occasionally fill up the buffers inside routers and some packets are lost. These lost packets consume
some of the capacity, so the number of delivered packets falls below the ideal curve. The network is
now congested.
The presence of congestion means that the load is (temporarily) greater than the resources (in a part
of the network) can handle. Two solutions come to mind: increase the resources or decrease the
load. As shown in Fig. 5-22, these solutions are usually applied on different time scales to either
prevent congestion or react to it once it has occurred.
Network Provisioning: The most basic way to avoid congestion is to build a network that is well matched to the traffic that it carries. Sometimes resources can be added dynamically when there is serious congestion, for example by turning on spare routers, enabling lines that are normally used only as backups, or purchasing bandwidth on the open market. More often, links and routers that are regularly heavily utilized are upgraded at the earliest opportunity. This is called provisioning.
Traffic-aware routing: Splitting traffic across multiple paths is called traffic-aware routing.
To make the most of the existing network capacity, routes can be tailored to traffic patterns
that change during the day as network users wake and sleep in different time zones. For
example, routes may be changed to shift traffic away from heavily used paths by changing
the shortest path weights.
Admission control: However, sometimes it is not possible to increase capacity. The only
way then to beat back the congestion is to decrease the load. In a virtual-circuit network,
new connections can be refused if they would cause the network to become congested. This
is called admission control. At a finer granularity, when congestion is imminent the
network can deliver feedback to the sources whose traffic flows are responsible for the
problem. The network can request these sources to throttle their traffic, or it can slow down
the traffic itself.
Load shedding: The process of forcing the network to discard packets that it cannot deliver
is generally called as Load Shedding. A good policy for choosing which packets to discard
can help to prevent congestion collapse.
Traffic Throttling: In the Internet and many other computer networks, senders adjust their transmissions to send as much traffic as the network can readily deliver. When congestion is imminent, the network must tell the senders to throttle back their transmissions and slow down so that congestion can be avoided.
a) Choke Packets: The most direct way to notify a sender of congestion is to tell it directly. In this approach, the router selects a packet from the congested flow and sends a choke packet back to the source host. To avoid increasing the load on the network during a time of congestion, the router may send choke packets only at a low rate. When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination, for example, by 50%.
b) Explicit Congestion Notification: Instead of generating additional packets to warn of
congestion, a router can tag any packet it forwards (by setting a bit in the packet’s header)
to signal that it is experiencing congestion. When the network delivers the packet, the
destination can note that there is congestion and inform the sender when it sends a reply
packet. The sender can then throttle its transmissions as before. This design is called
ECN (Explicit Congestion Notification) and is used in the Internet.
If any of the routers they pass through is congested, that router will then mark the packet
as having experienced congestion as it is forwarded. The destination will then echo any
marks back to the sender as an explicit congestion signal in its next reply packet. This is
shown with a dashed line in the figure to indicate that it happens above the IP level (e.g.,
in TCP). The sender must then throttle its transmissions, as in the case of choke packets.
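The router side of ECN marking can be sketched as follows (the dict-based packet is purely illustrative; the two-bit codepoints ECT(0) = 10 and CE = 11 come from the ECN specification, RFC 3168):

```python
# ECN codepoints carried in the two ECN bits of the IP header (RFC 3168).
NOT_ECT, ECT0, CE = 0b00, 0b10, 0b11

def forward(packet, congested):
    """Sketch of a router forwarding step with ECN: instead of dropping,
    tag an ECN-capable packet as 'congestion experienced'. The receiver
    later echoes the mark back to the sender, which slows down."""
    if congested and packet['ecn'] == ECT0:
        packet['ecn'] = CE          # mark rather than drop
    return packet

pkt = forward({'ecn': ECT0, 'payload': 'data'}, congested=True)
print(pkt['ecn'] == CE)   # True

# A sender that is not ECN-capable is left untouched (it would be
# dropped or choked by other means instead).
pkt2 = forward({'ecn': NOT_ECT, 'payload': 'data'}, congested=True)
print(pkt2['ecn'] == NOT_ECT)   # True
```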
Load Shedding: Load shedding is a fancy way of saying that when routers are being
inundated by packets that they cannot handle, they just throw them away. The term comes
from the world of electrical power generation, where it refers to the practice of utilities
intentionally blacking out certain areas to save the entire grid from collapsing on hot
summer days when the demand for electricity greatly exceeds the supply. The key question
for a router drowning in packets is which packets to drop.
The preferred choice may depend on the type of applications that use the network. For a file
transfer, an old packet is worth more than a new one. This is because dropping packet 6 and
keeping packets 7 through 10, for example, will only force the receiver to do more work to
buffer data that it cannot yet use. In contrast, for real-time media, a new packet is worth
more than an old one. This is because packets become useless if they are delayed and miss
the time at which they must be played out to the user. The former policy (old is better than
new) is often called wine and the latter (new is better than old) is often called milk because
most people would rather drink new milk and old wine than the alternative.
Random early detection (RED), also known as random early discard or random
early drop is a queuing discipline for a network scheduler suited for congestion avoidance
By having routers drop packets early, before the situation has become hopeless, there is time
for the source to take action before it is too late. To determine when to start discarding,
routers maintain a running average of their queue lengths. When the average queue length
on some link exceeds a threshold, the link is said to be congested and a small fraction of the
packets are dropped at random. RED routers improve performance compared to routers that
drop packets only when their buffers are full. RED is used when hosts cannot receive
explicit signals.
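A sketch of the RED drop decision (the threshold, weight, and probability parameters below are illustrative choices, not values from the notes):

```python
import random

class RedQueue:
    """Random Early Detection sketch: the drop probability grows with the
    running average queue length between two thresholds."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.avg = 0.0                      # running average queue length
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def should_drop(self, queue_len):
        # Exponentially weighted moving average smooths out short bursts.
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            return False                    # no congestion: keep the packet
        if self.avg >= self.max_th:
            return True                     # persistent congestion: drop
        # Between the thresholds, drop a small, growing fraction at random.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p

q = RedQueue()
print(q.should_drop(2))    # short queue, average still low: False
```

Because the average, not the instantaneous length, drives the decision, a brief burst does not trigger drops, but a sustained buildup does.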
Congestion control techniques can be broadly classified into two categories called
1) Open-Loop Congestion Control : Open loop congestion control policies are applied to prevent
congestion before it happens. The congestion control is handled either by the source or the
destination.
2) Closed-Loop Congestion Control: Closed loop congestion control technique is used to treat or
alleviate congestion after it happens.
QUALITY OF SERVICE
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions or traffic aggregates. An easy solution to provide
good quality of service is to build a network with enough capacity for whatever traffic will be
thrown at it. The name for this solution is overprovisioning. The resulting network will carry
application traffic without significant loss and, assuming a decent routing scheme, will deliver
packets with low latency.
QoS requirements for an Application can be specified as:
1. Delay: Network delay refers to the amount of time it takes for a packet to go from point A to point B. If point A is the source and point B is the destination, then the delay is called the end-to-end delay.
2. Delay Variation(Jitter): It is the variation of the delays with which packets travelling on a
network connection reach their destination.
3. Throughput: Throughput is usually measured in bits per second (bit/s or bps), and sometimes
in data packets per second (p/s or pps) or data packets per time slot.
4. Error Rate: The degree of errors encountered during data transmission over a communications
or network connection. The higher the error rate, the less reliable the connection or data
transfer will be.
5. Bandwidth: Bandwidth describes the maximum data transfer rate of a network or Internet connection. For example, a gigabit Ethernet connection has a bandwidth of 1,000 Mbps.
For better quality of service, an application may require lower delay, lower jitter, a lower error rate, and higher bandwidth and throughput.
Four common techniques to improve QoS are:
1) Scheduling,
2) Traffic shaping,
3) Admission control, and
4) Resource reservation.
1. Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling techniques
are designed to improve the quality of service, three of them here are :
a) FIFO queuing
b) Priority queuing, and
c) Weighted fair queuing.
a) FIFO queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until
the node (router or switch) is ready to process them. If the average arrival rate is higher than
the average processing rate, the queue will fill up and new packets will be discarded.
b) Priority queuing: In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last. A priority queue can provide better QoS than the FIFO queue because higher-priority traffic, such as multimedia, can reach the destination with less delay. However, there is a potential drawback: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.
c) Weighted fair queuing: A better scheduling method is weighted fair queuing. In this
technique, the packets are still assigned to different classes and admitted to different queues.
The queues, however, are weighted based on the priority of the queues; higher priority
means a higher weight. The system processes packets in each queue in a round-robin fashion
with the number of packets selected from each queue based on the corresponding weight.
For example, if the weights are 3, 2, and 1, three packets are processed from the first queue,
two from the second queue, and one from the third queue. If the system does not impose
priority on the classes, all weights can be equal. In this way, we have fair queuing with
priority.
2. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques that can shape traffic are:
1) the leaky bucket algorithm, and
2) the token bucket algorithm.
a. Leaky Bucket: If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water leaks does not
depend on the rate at which the water is input to the bucket unless the bucket is empty. The input
rate can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an
average rate. This can be seen from the below diagram.
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host. The use of the leaky bucket shapes the input traffic to make it conform to this commitment. In the figure, the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is then silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop packets if the bucket is full.
A FIFO queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM
networks), the process removes a fixed number of packets from the queue at each tick of the clock.
If the traffic consists of variable-length packets, the fixed output rate must be based on the number
of bytes or bits.
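The numbers in the example above can be checked with a small per-tick simulation. This is a sketch under simplifying assumptions: one tick is one second, amounts are in Mbits, and the bucket is unbounded unless a capacity is given.

```python
def leaky_bucket(arrivals, rate, capacity=float("inf")):
    """Simulate a leaky bucket tick by tick.
    arrivals[t] is the data (Mbits) arriving at tick t; 'rate' is the
    constant drain rate per tick. Returns the amount sent each tick."""
    level = 0.0
    sent = []
    for a in arrivals:
        level = min(level + a, capacity)  # excess is dropped if full
        out = min(level, rate)            # drain at most 'rate' per tick
        level -= out
        sent.append(out)
    return sent

# The text's example: 12 Mbps for 2 s, silence for 5 s, 2 Mbps for 3 s,
# shaped to the committed 3 Mbps.
arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]
print(leaky_bucket(arrivals, rate=3))  # 3 Mbits every second for 10 s
```

Running this confirms the claim in the text: 30 Mbits in, and a steady 3 Mbits out on every one of the 10 ticks.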
b. Token Bucket: The leaky bucket is very restrictive. It does not credit an idle host. For example,
if a host is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the
leaky bucket allows only an average rate. The time when the host was idle is not taken into account.
On the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future
in the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The
system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the host
is idle for 100 ticks, the bucket collects 10,000 tokens. The token bucket allows bursty traffic at a
regulated maximum rate.
The token bucket can easily be implemented with a counter. The counter is initialized to zero. Each
time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter
is decremented by 1. When the counter is zero, the host cannot send data.
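The counter scheme just described can be sketched as a small class. This is an illustrative model, not a production shaper: ticks are driven manually, and the optional capacity cap (not mentioned in the text) bounds how much credit an idle host may save.

```python
class TokenBucket:
    """Counter-based token bucket: n tokens are added per clock tick,
    and one token is removed per unit of data sent."""
    def __init__(self, n, capacity=None):
        self.n = n                # tokens added per tick
        self.capacity = capacity  # optional cap on accumulated tokens
        self.tokens = 0           # the counter, initialized to zero

    def tick(self):
        self.tokens += self.n
        if self.capacity is not None:
            self.tokens = min(self.tokens, self.capacity)

    def send(self, units):
        """Try to send 'units' of data; return how many actually went."""
        sent = min(units, self.tokens)
        self.tokens -= sent
        return sent

# The text's example: n = 100, host idle for 100 ticks.
tb = TokenBucket(n=100)
for _ in range(100):
    tb.tick()
print(tb.tokens)       # 10000 tokens accumulated while idle
print(tb.send(12000))  # a later burst can send only 10000 units now
```

Unlike the leaky bucket, the idle period is credited: the saved tokens let the host burst at a regulated maximum later.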
3. Admission Control: This is a mechanism used by networking devices such as routers and switches
to accept or reject a flow based on predefined parameters called the flow specification. Before a
router accepts a flow for processing, it checks the flow specification to see whether its capacity and its
previous commitments to other flows can handle the new flow.
4. Resource Reservation: A flow of data needs resources such as buffers, bandwidth, CPU time and
so on. QoS is improved if these resources are reserved beforehand.
Internetworking
In real-world scenarios, networks under the same administration are often scattered geographically.
There may be a requirement to connect two networks of the same kind as well as of
different kinds. Routing between two networks is called internetworking. Networks can be
considered different based on various parameters such as protocol, topology and addressing
scheme. In internetworking, routers have knowledge of each other's addresses and of addresses beyond
them. They can be statically configured to reach different networks, or they can learn routes by using an
internetworking routing protocol.
Routing protocols used within an organization or administration are called Interior
Gateway Protocols (IGPs); RIP and OSPF are examples. Routing between different
organizations or administrations uses an Exterior Gateway Protocol (EGP), and there is only one EGP
in use: the Border Gateway Protocol (BGP).
Tunneling
A technique of internetworking called tunneling is used when source and destination networks
of the same type are to be connected through a network of a different type. Tunneling is also known as
port forwarding. Tunneling is widely used to connect isolated hosts and networks using other
networks. The network that results is called an overlay, since it has effectively been overlaid on the
base network. Tunneling allows private network communications to be sent across a
public network, such as the Internet, through a process called encapsulation.
For example, consider the case where the source and destination hosts are on the same type of network,
but there is a different network in between. Think of an international bank with an
IPv6 network in Paris, an IPv6 network in London and connectivity between the offices via the
IPv4 Internet. Tunneling a packet from Paris to London can be seen in the figure below.
To send an IP packet to a host in the London office, a host in the Paris office constructs the packet
containing an IPv6 address in London, and sends it to the multiprotocol router that connects the
Paris IPv6 network to the IPv4 Internet. When this router gets the IPv6 packet, it encapsulates the
packet with an IPv4 header addressed to the IPv4 side of the multiprotocol router that connects to
the London IPv6 network. That is, the router puts an IPv6 packet inside an IPv4 packet. When this
wrapped packet arrives, the London router removes the original IPv6 packet and sends it onward to
the destination host. The path through the IPv4 Internet can be seen as a big tunnel extending from
one multiprotocol router to the other. The IPv6 packet just travels from one end of the tunnel to the
other, snug in its nice box. It does not have to worry about dealing with IPv4 at all. Neither do the
hosts in Paris or London. Only the multiprotocol routers have to understand both IPv4 and IPv6
packets. In effect, the entire trip from one multiprotocol router to the other is like a hop over a
single link.
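The encapsulation step at the Paris router and the decapsulation at the London router can be sketched as follows. This is a toy model: packets are plain dictionaries and the addresses are hypothetical examples from documentation ranges, but the protocol number 41 really is the IANA-assigned value for IPv6 encapsulated in IPv4.

```python
def encapsulate(ipv6_packet, tunnel_src_v4, tunnel_dst_v4):
    """Paris multiprotocol router: wrap the IPv6 packet in an IPv4
    packet addressed to the far end of the tunnel."""
    return {"version": 4,
            "src": tunnel_src_v4,
            "dst": tunnel_dst_v4,
            "protocol": 41,        # 41 = IPv6 carried inside IPv4
            "payload": ipv6_packet}

def decapsulate(ipv4_packet):
    """London multiprotocol router: unwrap and forward the original
    IPv6 packet, which crossed the tunnel unchanged."""
    assert ipv4_packet["protocol"] == 41
    return ipv4_packet["payload"]

inner = {"version": 6, "src": "2001:db8:1::1",
         "dst": "2001:db8:2::1", "payload": "data"}
wrapped = encapsulate(inner, "198.51.100.1", "203.0.113.1")
assert decapsulate(wrapped) == inner  # survives the IPv4 tunnel intact
```

Only the two multiprotocol routers run this code; the hosts in Paris and London, and every IPv4 router along the tunnel, are oblivious to it.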
Packet Fragmentation
Fragmentation is an important function of the network layer. It is a technique in which gateways
break up or divide larger packets into smaller ones called fragments. Each fragment is then sent as
a separate internal packet, with its own header and trailer. Sometimes a
fragmented datagram is fragmented again when it encounters a network that handles only smaller
fragments. Thus, a datagram can be fragmented several times before it reaches its final destination.
The reverse process, reassembly of the fragments, is difficult. It is usually done by the
destination host, because each fragment has become an independent datagram.
There are two different strategies for the recombination, or reassembly, of fragments:
transparent fragmentation and non-transparent fragmentation.
1. Transparent Fragmentation :
Fragmentation done by one network is made transparent to all subsequent
networks through which the packet passes. Whenever a large packet arrives at a gateway, the
gateway breaks it into smaller fragments; in the following figure, gateway G1 breaks a
packet into smaller fragments.
After this, each fragment is addressed to the same exit gateway. The exit gateway of a network
reassembles or recombines all the fragments; in the figure, exit
gateway G2 of network 1 recombines all fragments created by G1 before passing them to
network 2. Thus, subsequent networks are not aware that fragmentation has occurred. This
strategy is used by ATM networks, which use special hardware that provides
transparent fragmentation of packets.
There are some disadvantages of the transparent strategy, which are as follows :
The exit gateway that recombines fragments must know when it has received all of
them.
Some fragments may choose different exit gateways, which results in poor performance.
Repeatedly fragmenting and reassembling a large packet adds considerable overhead.
2. Non-Transparent Fragmentation :
Fragmentation done by one network is not transparent to the subsequent networks
through which a packet passes. A packet fragmented by a gateway of a network is not
recombined by the exit gateway of the same network, as shown in the figure below.
Once a packet is fragmented, each fragment is treated as an original packet. All fragments of a
packet pass through the exit gateway, and recombination of the fragments is done at the
destination host.
Disadvantages of non-transparent fragmentation are as follows :
Every host must have the capability of reassembling fragments.
When a packet is fragmented, the fragments must be numbered in such a way that the
original data stream can be reconstructed.
Total overhead increases due to fragmentation, as each fragment must have its own header.
The Network Layer in the Internet
In the network layer, the Internet can be viewed as a collection of networks or ASes (Autonomous
Systems) that are interconnected. There is no real structure, but several major backbones exist.
These are constructed from high-bandwidth lines and fast routers. The biggest of these backbones,
to which everyone else connects to reach the rest of the Internet, are called Tier 1 networks.
Attached to the backbones are ISPs (Internet Service Providers) that provide Internet access to
homes and businesses, data centers and colocation facilities full of server machines, and regional
(mid-level) networks. The data centers serve much of the content that is sent over the Internet.
Attached to the regional networks are more ISPs, LANs at many universities and companies, and
other edge networks.
The glue that holds the whole Internet together is the network layer protocol, IP (Internet
Protocol).
HLEN: IP header length (4 bits), the number of 32-bit words in the header. The
minimum value for this field is 5 and the maximum is 15.
Type of service: Low Delay, High Throughput, Reliability (8 bits)
Total Length: Length of header + Data (16 bits), which has a minimum value 20 bytes
and the maximum is 65,535 bytes.
Identification: Unique Packet Id for identifying the group of fragments of a single IP
datagram (16 bits)
Flags: 3 flags of 1 bit each: a reserved bit (must be zero), the do-not-fragment flag, and
the more-fragments flag, in that order
Fragment Offset (13 bits): Represents the number of data bytes ahead of this fragment
in the datagram. It is specified in units of 8 bytes, giving a
maximum offset of 65,528 bytes.
Time to live: Datagram's lifetime (8 bits). It prevents the datagram from looping through the
network by restricting the number of hops a packet may take before it is delivered to the
destination.
Protocol: Identifies the protocol to which the data is to be passed (8 bits)
Header Checksum: 16-bit checksum for detecting errors in the datagram header
Source IP address: 32-bit IP address of the sender
Destination IP address: 32-bit IP address of the receiver
Options: Optional information such as source route and record route, used by the network
administrator to check whether a path is working or not.
Due to the presence of options, the size of the datagram header can be of variable length (20 bytes
to 60 bytes).
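The Identification, Flags and Fragment Offset fields work together during fragmentation. The sketch below shows the arithmetic only, with fragments modeled as dictionaries; the 1480-byte fragment size is an assumed example (the payload that fits a typical Ethernet MTU), not something fixed by the header format.

```python
def fragment(total_payload_len, mtu_payload, identification):
    """Split a datagram's payload into IP-style fragments. Offsets go
    into the Fragment Offset field in units of 8 bytes, so every
    fragment except the last must be a multiple of 8 bytes long."""
    size = (mtu_payload // 8) * 8  # round down to an 8-byte multiple
    frags, offset = [], 0
    while offset < total_payload_len:
        length = min(size, total_payload_len - offset)
        more = (offset + length) < total_payload_len
        frags.append({"id": identification,   # same for all fragments
                      "offset": offset // 8,  # in 8-byte units
                      "MF": int(more),        # more-fragments flag
                      "len": length})
        offset += length
    return frags

# A 4000-byte payload over a link that carries 1480 payload bytes:
for f in fragment(4000, 1480, identification=777):
    print(f)
# Three fragments: offsets 0, 185 and 370 (i.e. bytes 0, 1480, 2960),
# with MF set on all but the last.
```

The shared Identification value lets the destination host group the fragments, the offsets order them, and a cleared MF flag marks the end, which is exactly the information non-transparent reassembly needs.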
IP Addresses
An Internet Protocol address (IP address) is a numerical label assigned to each device connected
to a computer network that uses the Internet Protocol for communication. An IP address serves
two main functions: host or network interface identification and location addressing.
An IP address acts as an identifier for a specific machine on a particular network. It also helps
to establish a virtual connection between a source and a destination. The IP address is also called the IP
number or Internet address.
Version of IP address
The two versions of IP addresses are 1) IPv4 and 2) IPv6.
IPV4: IPv4 was the first version of IP. It was deployed for production in the ARPANET in
1983 and is still the most widely used IP version. It identifies devices on a network
using an addressing system. IPv4 uses a 32-bit address scheme, allowing 2^32
addresses, which is more than 4 billion.
IPV6: IPv6 is the most recent version of the Internet Protocol. The Internet Engineering Task Force
(IETF) initiated work on it in early 1994, and the resulting design is now called IPv6. This
new version is being deployed to fulfill the need for more Internet addresses and to resolve
issues associated with IPv4. It uses a 128-bit address space.
Each IP address consists of two parts: the first part specifies the network and the second part
specifies the host on that network. How many bits belong to each part depends on the address class.
Classful Addressing
The class of IP address is used to determine the number of bits used in a class and number of
networks and hosts available in the class.
Class A:
In Class A, an IP address is assigned to those networks that contain a large number of hosts.
o The network ID is 8 bits long.
o The host ID is 24 bits long.
In Class A, the highest-order bit of the first octet is always set to 0, and the remaining 7
bits determine the network ID. The other 24 bits determine the host ID in any network.
33
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B
In Class B, an IP address is assigned to those networks that range from small-sized to large-sized
networks.
o The Network ID is 16 bits long.
o The Host ID is 16 bits long.
In Class B, the higher-order bits of the first octet are always set to 10, and the remaining 14 bits
determine the network ID. The other 16 bits determine the host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C
In Class C, an IP address is assigned to only small-sized networks.
o The Network ID is 24 bits long.
o The host ID is 8 bits long.
In Class C, the higher-order bits of the first octet are always set to 110, and the remaining 21 bits
determine the network ID. The 8 bits of the host ID determine the host in a network.
The total number of networks = 2^21 = 2,097,152 network addresses
The total number of hosts = 2^8 - 2 = 254 host addresses
Class D
In Class D, an IP address is reserved for multicast addresses. It does not support subnetting. The
higher-order bits of the first octet are always set to 1110, and the remaining 28 bits identify the
multicast group.
Class E
In Class E, an IP address is used for the future use or for the research and development purposes. It
does not possess any subnetting. The higher order bits of the first octet is always set to 1111, and
the remaining bits determines the host ID in any network.
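Because each class is distinguished by the leading bits of the first octet, the class of a dotted-quad address can be read off from the first octet's value alone. A minimal sketch:

```python
def address_class(ip):
    """Return the classful category of an IPv4 address, determined by
    the leading bits of the first octet."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit  0     (0-127)
    if first < 192:
        return "B"   # leading bits 10    (128-191)
    if first < 224:
        return "C"   # leading bits 110   (192-223)
    if first < 240:
        return "D"   # leading bits 1110  (224-239, multicast)
    return "E"       # leading bits 1111  (240-255, reserved)

print(address_class("10.0.0.1"))     # A
print(address_class("172.16.5.9"))   # B
print(address_class("192.168.1.1"))  # C
print(address_class("224.0.0.5"))    # D
```

The octet ranges in the comments follow directly from the bit patterns: 0xxxxxxx is 0-127, 10xxxxxx is 128-191, and so on.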
There are also several other addresses that have special meanings, as shown in Fig. 5-54. The IP
address 0.0.0.0, the lowest address, is used by hosts when they are being booted. It means ‘‘this
network’’ or ‘‘this host.’’ IP addresses with 0 as the network number refer to the current network.
These addresses allow machines to refer to their own network without knowing its number (but they
have to know the network mask to know how many 0s to include).
The address consisting of all 1s, or 255.255.255.255—the highest address—is used to mean all
hosts on the indicated network. It allows broadcasting on the local network, typically a LAN. The
addresses with a proper network number and all 1s in the host field allow machines to send
broadcast packets to distant LANs anywhere in the Internet. However, many network administrators
disable this feature as it is mostly a security hazard. Finally, all addresses of the form 127.xx.yy.zz
are reserved for loopback testing. Packets sent to such an address are not put out onto the wire; they are
processed locally and treated as incoming packets. This allows packets to be sent to the host without
the sender knowing its number, which is useful for testing.
What is IPV6
IPv6 is the next generation Internet Protocol (IP) address standard intended to supplement and
eventually replace IPv4, the protocol many Internet services still use today. Every computer, mobile
phone, home automation component, IoT sensor and any other device connected to the Internet
needs a numerical IP address to communicate with other devices. The original IP address
scheme, called IPv4, is running out of addresses due to the proliferation
of so many connected devices.
1. Version (4-bits): It represents the version of the Internet Protocol, i.e. 0110 for IPv6.
2. Traffic Class (8-bits): These 8 bits are divided into two parts. The most significant 6 bits are
used for the Type of Service, to let the router know what service should be provided to this
packet. The least significant 2 bits are used for Explicit Congestion Notification (ECN).
3. Flow Label (20-bits): This label is used to maintain the sequential flow of the packets
belonging to a communication. The source labels the sequence to help the router identify that
a particular packet belongs to a specific flow of information. This field helps avoid re-ordering
of data packets and is designed for streaming/real-time media.
4. Payload Length (16-bits): This field tells the routers how much information a
particular packet contains in its payload. The payload is composed of Extension Headers and
upper-layer data. With 16 bits, up to 65,535 bytes can be indicated; but if the Extension
Headers include the Hop-by-Hop Extension Header, the payload may exceed 65,535 bytes
and this field is set to 0.
5. Next Header (8-bits): This field indicates either the type of the first Extension Header, or, if no
Extension Header is present, the upper-layer PDU. The values for the
types of upper-layer PDUs are the same as in IPv4.
6. Hop Limit (8-bits): This field stops the packet from looping in the network indefinitely. It is the
same as TTL in IPv4. The value of the Hop Limit field is decremented by 1 as the packet passes a link
(router/hop). When the field reaches 0, the packet is discarded.
7. Source Address (128-bits): This field indicates the address of the originator of the packet.
8. Destination Address (128-bits): This field provides the address of the intended recipient of the
packet.
Extension Headers
In IPv6, the Fixed Header contains only the information that is necessary, omitting
information that is either not required or rarely used. All such information is placed between the
Fixed Header and the upper-layer header in the form of Extension Headers. Each Extension
Header is identified by a distinct value.
When Extension Headers are used, the IPv6 Fixed Header's Next Header field points to the first
Extension Header. If there is another Extension Header, the first Extension Header's Next
Header field points to the second one, and so on. The last Extension Header's Next Header field
points to the upper-layer header. Thus, all the headers point to the next one in a linked-list
manner.
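The linked-list traversal can be sketched as follows. This is a simplified model: each extension header is represented only by its type code and its own Next Header value (here as a dict mapping one to the other, which assumes each header type appears at most once), but the numeric codes are the real IANA-assigned values.

```python
# A few assigned Next Header values (partial list).
NEXT_HEADER = {0: "Hop-by-Hop Options", 43: "Routing", 44: "Fragment",
               6: "TCP", 17: "UDP", 59: "No Next Header"}

def walk_headers(first_next_header, extension_headers):
    """Follow the Next Header chain of an IPv6 packet.
    'first_next_header' is the Fixed Header's Next Header value;
    'extension_headers' maps an extension header's type code to that
    header's own Next Header value. Stops at the upper-layer header."""
    chain, code = [], first_next_header
    while code in extension_headers:          # still an extension header
        chain.append(NEXT_HEADER.get(code, str(code)))
        code = extension_headers[code]        # follow the link
    chain.append(NEXT_HEADER.get(code, str(code)))  # upper-layer PDU
    return chain

# Fixed Header -> Hop-by-Hop (0) -> Fragment (44) -> TCP (6)
print(walk_headers(0, {0: 44, 44: 6}))
# ['Hop-by-Hop Options', 'Fragment', 'TCP']
```

With no extension headers present, the Fixed Header's Next Header value directly names the upper-layer protocol, e.g. `walk_headers(17, {})` yields just `['UDP']`.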