
UNIT-V

Topics:
• Congestion Control Algorithms
General Principles
Congestion Prevention Policies
• Internetworking
• The Network layer in the Internet and in the
ATM Networks
Congestion control
Network Layer
• General principles
• Design issues
• Prevention policies
• Routing
• Congestion
• Internetworking
Congestion?
• Too many packets in (a part of) the subnet!
• Caused by
– many input lines feeding packets to the same output line, with
insufficient buffer memory to queue them
– even routers with infinite memory make congestion worse: packets
time out while sitting in long queues and are retransmitted
– slow processors: the routers' CPUs cannot do the bookkeeping
(queueing, table lookups) fast enough
– low-bandwidth lines

• Congestion control Vs flow control


– Congestion: global issue
– Flow control: point-to-point
Congestion Control vs Flow Control
A source can get a "slow down" message (feedback from the
receiver) because the receiver cannot handle the load. This is a
flow control technique.
A source can get a "slow down" message (feedback from the
network) because the network cannot handle the load. This is a
congestion control technique.
General Principles of Congestion Control
• Many problems in complex systems, such as computer
networks, can be viewed from a control theory point of view
This approach leads to dividing all solutions into two groups:
open loop and closed loop.
• Open loop solutions attempt to solve the problem by good
design, in essence, to make sure it does not occur in the first
place. Once the system is up and running, midcourse
corrections are not made.
• Tools for doing open-loop control include deciding when to
accept new traffic, deciding when to discard packets and which
ones, and making scheduling decisions at various points in the
network. All of these have in common the fact that they make
decisions without regard to the current state of the network.
General Principles of Congestion Control

• In contrast, closed loop solutions are based on the
concept of a feedback loop.
• This approach has three parts when applied to
congestion control:
1.Monitor the system to detect when and where
congestion occurs.
2.Pass this information to places where action can be
taken.
3.Adjust system operation to correct the problem.
Network Layer
• General principles
• Design issues
• Prevention policies
• Routing
• Congestion
• Internetworking
• Internet Protocols
• Multimedia or QoS
Congestion Prevention Policies
• Let us start at the data link layer and work our way upward.
• The retransmission policy is concerned with how fast a sender
times out and what it transmits upon timeout.
• A jumpy sender that times out quickly and retransmits all
outstanding packets using go back n will put a heavier load on
the system than will a leisurely sender that uses selective
repeat. Closely related to this is the buffering policy.
• If receivers routinely discard all out-of-order packets, these
packets will have to be transmitted again later, creating extra
load. With respect to congestion control, selective repeat is
clearly better than go back n.
• Acknowledgement policy also affects congestion. If each
packet is acknowledged immediately, the acknowledgement
packets generate extra traffic. However, if acknowledgements
are saved up to piggyback onto reverse traffic, extra timeouts
and retransmissions may result
• A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion
• At the network layer, the choice between using virtual circuits
and using datagrams affects congestion since many congestion
control algorithms work only with virtual-circuit subnets.
• Packet queuing and service policy relates to whether routers
have one queue per input line, one queue per output line, or
both. It also relates to the order in which packets are processed
(e.g., round robin or priority based).
• Discard policy is the rule telling which packet is dropped when
there is no space. A good policy can help alleviate congestion
and a bad one can make it worse.
• A good routing algorithm can help avoid congestion by
spreading the traffic over all the lines, whereas a bad one can
send too much traffic over already congested lines.
• Packet lifetime management deals with how long a packet may
live before being discarded. If it is too long, lost packets may
clog up the works for a long time, but if it is too short, packets
may sometimes time out before reaching their destination, thus
inducing retransmissions.
• In the transport layer, the same issues occur as in the data link
layer, but in addition, determining the timeout interval is harder
because the transit time across the network is less predictable
than the transit time over a wire between two routers.
• If the timeout interval is too short, extra packets will be sent
unnecessarily. If it is too long, congestion will be reduced but
the response time will suffer whenever a packet is lost.
Congestion Control in Datagram Subnets
• Each router can easily monitor the utilization of its output lines
and other resources. For example, it can associate with each line
a real variable, u, whose value, between 0.0 and 1.0, reflects the
recent utilization of that line.
• To maintain a good estimate of u, a sample of the
instantaneous line utilization, f (either 0 or 1), can be made
periodically and u updated according to
unew = a · uold + (1 − a) · f
• where the constant a determines how fast the router forgets
recent history.
• Whenever u moves above a threshold (unew > threshold), the
output line enters a ''warning'' state.
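The utilization estimate above is an exponentially weighted moving average. A minimal sketch in Python (the alpha and threshold values here are illustrative, not taken from any real router):

```python
# Sketch of the per-line utilization estimator described above.
# alpha and threshold are illustrative values, not from a real router.

def update_utilization(u_old, f, alpha=0.9):
    """EWMA update: unew = a * uold + (1 - a) * f, with f in {0, 1}."""
    return alpha * u_old + (1 - alpha) * f

def simulate(samples, alpha=0.9, threshold=0.5):
    """Feed a sequence of instantaneous samples f and record the
    line state after each update."""
    u, states = 0.0, []
    for f in samples:
        u = update_utilization(u, f, alpha)
        states.append("warning" if u > threshold else "normal")
    return u, states

# A long run of busy samples (f = 1) drives u toward 1.0 and
# eventually pushes the line into the warning state.
u, states = simulate([1] * 50)
```

With a close to 1 the router forgets slowly and reacts only to sustained load; with a close to 0 it tracks the most recent sample.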
Congestion Control in Datagram Subnets
Then Some action is taken:
Warning bit, Choke packets, Hop-by-hop choke packets
• Warning bit
– Output line in warning state
• Warning bit set in header
• Destination copies bit into next ack
• Source cuts back traffic
– Algorithm at source
• As long as warning bits arrive: reduce traffic
• Less warning bits: increase traffic
– Problems
• voluntary action of host!
• correct source selected?
– Used in
• DecNet
• Frame relay
• Choke packet:In this approach, the router sends a choke packet back
to the source host, giving it the destination found in the packet. The
original packet is tagged (a header bit is turned on) so that it will not
generate any more choke packets farther along the path and is then
forwarded in the usual way.
– In case of overload: router sends choke packet to host causing the
overload
– Host receiving choke packet
• reduces traffic to the specified destination
• ignores choke packets for a fixed interval
• new choke packets during next listening interval?
– Yes: reduce traffic
– No: increase traffic
– Problems:
• voluntary action of host!
• correct host selected?
• Choke packets:
– Example showing slow
reaction
– Solution: Hop-by-Hop choke
packets
• Hop-by-Hop choke packets
– Have choke packet take effect
at every hop
– Problem: more buffers needed
in routers
Load shedding
• Throw away packets that cannot be handled!!
• Packet selection?
– Random
– Based on application
• File transfer: discard new packet
• Multimedia: discard old packet
– Let sender indicate importance of packets
• Low, high priority
• Incentive to mark a packet with low priority
– Price
– Allow hosts to exceed agreed upon limits
• Random early detection …
• Random early detection
– Discard packets before all buffer space is
exhausted
– Routers maintain running average of queue lengths
– Select at random a packet
– Inform source?
• Send choke packet? That adds more load!
• No reporting
– When does it work?
• Source slows down when packets are lost
Congestion: jitter control
• Important for audio and video applications?
– not delay
– variance of delay
Congestion: jitter control
• Jitter = variation in packet delay
• Compute feasible mean value for delay
– compute expected transit time for each hop
– router checks to see if packet is
• behind
• ahead schedule
– behind: forward packet asap
– ahead: hold back packet to get it on schedule again
• Buffering? Depends on characteristics:
– Video on demand: ok
– Videoconferencing: not OK
Quality of Service (QoS) Requirements
Congestion Control Algorithms

• Leaky Bucket Algorithm
– Regulates output flow
• Packets lost if buffer is full

• Token Bucket Algorithm
– Buffer filled with tokens
• Transmit ONLY if tokens are available
– Never lose data
Leaky Bucket Algorithm
• Imagine a bucket with a small hole in the bottom(Figure a)
• No matter the rate at which water enters the bucket, the outflow
is at a constant rate, r, when there is any water in the bucket and
zero when the bucket is empty. Also, once the bucket is full,
any additional water entering it spills over the sides and is lost
(i.e., does not appear in the output stream under the hole).
• The same idea can be applied to packets(Figure b).
Conceptually, each host is connected to the network by an
interface containing a leaky bucket, that is, a finite internal
queue.
• If a packet arrives at the queue when it is full, the packet is
discarded. In other words, if one or more processes within the
host try to send a packet when the maximum number is already
queued, the new packet is unceremoniously discarded.
The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) a leaky bucket with packets.
• This arrangement can be built into the hardware interface or
simulated by the host operating system. It was first proposed by
Turner (1986) and is called the leaky bucket algorithm.
• In fact, it is nothing other than a single-server queuing system
with constant service time.
• The host is allowed to put one packet per clock tick onto the
network. Again, this can be enforced by the interface card or by
the operating system. This mechanism turns an uneven flow of
packets from the user processes inside the host into an even
flow of packets onto the network, smoothing out bursts and
greatly reducing the chances of congestion.
• Implementing the original leaky bucket algorithm is easy. The
leaky bucket consists of a finite queue. When a packet arrives,
if there is room on the queue it is appended to the queue;
otherwise, it is discarded. At every clock tick, one packet is
transmitted (unless the queue is empty).
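The finite-queue behaviour just described can be sketched directly; the capacity value below is arbitrary, chosen only for the demonstration:

```python
from collections import deque

# Minimal sketch of the packet-based leaky bucket: a finite FIFO
# queue drained at exactly one packet per clock tick.

class LeakyBucket:
    def __init__(self, capacity):
        self.capacity = capacity   # finite internal queue size
        self.queue = deque()
        self.dropped = 0

    def arrive(self, packet):
        """Append the packet if there is room; otherwise discard it."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
        else:
            self.dropped += 1

    def tick(self):
        """At every clock tick, transmit one packet (unless empty)."""
        return self.queue.popleft() if self.queue else None

bucket = LeakyBucket(capacity=3)
for p in range(5):          # a burst of 5 packets meets a 3-slot queue
    bucket.arrive(p)
# 3 packets are queued, 2 are discarded; the queue drains one per tick
```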
• The byte-counting leaky bucket is implemented almost the
same way. At each tick, a counter is initialized to n. If the first
packet on the queue has fewer bytes than the current value of
the counter, it is transmitted, and the counter is decremented by
that number of bytes.
• Additional packets may also be sent, as long as the counter is
high enough. When the counter drops below the length of the
next packet on the queue, transmission stops until the next tick,
at which time the residual byte count is reset and the flow can
continue.
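The byte-counting variant can be sketched the same way; packet sizes and the per-tick budget n below are made up for illustration:

```python
from collections import deque

# Sketch of the byte-counting leaky bucket: at each tick the counter
# is reset to n and packets are sent while their length fits.

def tick_byte_bucket(queue, n):
    """Transmit packets (lengths in bytes) from the front of `queue`
    until the next one no longer fits; return the lengths sent."""
    sent, counter = [], n
    while queue and queue[0] <= counter:
        pkt = queue.popleft()
        counter -= pkt
        sent.append(pkt)
    return sent

q = deque([400, 300, 500, 200])           # queued packet sizes in bytes
first_tick = tick_byte_bucket(q, n=1000)  # 400 + 300 fit; 500 would not
```

Note that the 200-byte packet must wait even though it would fit: the queue is FIFO, so transmission stops at the first packet that exceeds the remaining budget.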
• As an example of a leaky bucket, imagine that a computer can
produce data at 25 million bytes/sec (200 Mbps) and that the
network also runs at this speed. However, the routers can accept
this data rate only for short intervals (basically, until their
buffers fill up). For long intervals, they work best at rates not
exceeding 2 million bytes/sec.
• Now suppose data comes in 1-million-byte bursts, one 40-msec burst every
second. To reduce the average rate to 2 MB/sec, we could use a leaky bucket
with r=2 MB/sec and a capacity, C, of 1 MB. This means that bursts of up to
1 MB can be handled without data loss and that such bursts are spread out
over 500 msec, no matter how fast they come in.
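The arithmetic of this example can be checked directly:

```python
# Worked numbers from the example above.
burst_bytes = 1_000_000            # one 1-MB burst, arriving in 40 msec
rate = 2_000_000                   # leaky-bucket output rate r, bytes/sec
drain_time = burst_bytes / rate    # time needed to emit the burst at r
# drain_time is 0.5 sec: the 40-msec input burst is spread over 500 msec
```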
• (a) Input to a leaky bucket. (b) Output from a leaky bucket. Output
from a token bucket with capacities of (c) 250 KB, (d) 500 KB, and (e)
750 KB. (f) Output from a 500KB token bucket feeding a 10-MB/sec
leaky bucket.
The Token Bucket Algorithm
• The leaky bucket algorithm enforces a rigid output pattern at
the average rate, no matter how bursty the traffic is. For many
applications, it is better to allow the output to speed up
somewhat when large bursts arrive, so a more flexible
algorithm is needed, preferably one that never loses data. One
such algorithm is the token bucket algorithm.
• In this algorithm, the leaky bucket holds tokens, generated by a
clock at the rate of one token every ΔT sec.
• we see a bucket holding three tokens, with five packets waiting
to be transmitted. For a packet to be transmitted, it must capture
and destroy one token (Figure a)
• we see that three of the five packets have gotten through, but
the other two are stuck waiting for two more tokens to be
generated.(Figure b)
The Token Bucket Algorithm

(a) Before. (b) After.


• The token bucket algorithm provides a different kind of traffic
shaping than that of the leaky bucket algorithm.
• The leaky bucket algorithm does not allow idle hosts to save up
permission to send large bursts later.
• The token bucket algorithm does allow saving, up to the
maximum size of the bucket, n. This property means that bursts
of up to n packets can be sent at once, allowing some burstiness
in the output stream and giving faster response to sudden bursts
of input.
• Another difference between the two algorithms is that the token
bucket algorithm throws away tokens (i.e., transmission
capacity) when the bucket fills up but never discards packets. In
contrast, the leaky bucket algorithm discards packets when the
bucket fills up.
• The leaky bucket and token bucket algorithms can also be used
to smooth traffic between routers, as well as to regulate host
output as in our examples. However, one clear difference is that
a token bucket regulating a host can make the host stop sending
when the rules say it must. Telling a router to stop sending
while its input keeps pouring in may result in lost data.
• The implementation of the basic token bucket algorithm is just
a variable that counts tokens. The counter is incremented by
one every ΔT and decremented by one whenever a packet is
sent. When the counter hits zero, no packets may be sent. In the
byte-count variant, the counter is incremented by k bytes every
ΔT and decremented by the length of each packet sent.
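A minimal sketch of that counter, with an arbitrary bucket size; the bucket is assumed full at the start:

```python
# Sketch of the basic token bucket: one counter, incremented by one
# token per tick, decremented per packet; the bucket size caps it.

class TokenBucket:
    def __init__(self, size):
        self.size = size        # maximum saved-up tokens (burst size)
        self.tokens = size      # assume a full bucket to start

    def tick(self):
        """One interval elapses: add a token, never exceeding the bucket
        size (excess tokens, i.e. transmission capacity, are discarded)."""
        self.tokens = min(self.size, self.tokens + 1)

    def try_send(self):
        """Capture and destroy one token if available; a packet is
        never discarded, the sender simply waits for the next token."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(size=3)
sent = [tb.try_send() for _ in range(5)]  # only 3 tokens: 2 must wait
```

Contrast with the leaky bucket sketch earlier: here the overflow discards tokens, never packets.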
Internetworking
• How Networks Differ
• How Networks Can Be Connected
• Concatenated Virtual Circuits
• Connectionless Internetworking
• Tunneling
• Internetwork Routing
• Fragmentation
Connecting Networks

A collection of interconnected networks.


Systems Network Architecture (SNA)
Asynchronous Transfer Mode (ATM)
Fiber Distributed Data Interface(FDDI)
• FDDI (Fiber Distributed Data Interface) is a set of ANSI and ISO standards for
data transmission on fiber optic lines in a local area network (LAN) that can extend
in range up to 200 km (124 miles). The FDDI protocol is based on the token ring
protocol.

• ATM is a core protocol used over the SONET/SDH backbone of the public
switched telephone network (PSTN) and Integrated Services
Digital Network (ISDN), but its use is declining in favour of all-IP networks.

• Systems Network Architecture (SNA) is IBM's


proprietary networking architecture. It is a complete protocol stack for
interconnecting computers and their resources. SNA describes formats and
protocols.
How Networks Differ


Some of the many ways networks can differ.


How Networks Can Be Connected

(a) Two Ethernets connected by a switch.


(b) Two Ethernets connected by routers.
Concatenated Virtual Circuits

Internetworking using concatenated virtual circuits.


Connectionless Internetworking

A connectionless internet.
Tunneling

Tunneling a packet from Paris to London.


Internetwork Routing

(a) An internetwork. (b) A graph of the internetwork.


Fragmentation

(a) Transparent fragmentation.


(b) Nontransparent fragmentation.
The Network Layer in the Internet
1. Make sure it works: Do not finalize the design or standard
until multiple prototypes have successfully communicated
with each other. All too often designers first write a 1000-page
standard, get it approved, then discover it is deeply flawed and
does not work. Then they write version 1.1 of the standard.
This is not the way to go.
2. Keep it simple: When in doubt, use the simplest solution.
William of Occam stated this principle (Occam's razor) in the
14th century. Put in modern terms: fight features. If a feature is
not absolutely essential, leave it out, especially if the same
effect can be achieved by combining other features.
3. Make clear choices: If there are several ways of doing the
same thing, choose one. Having two or more ways to do the
same thing is looking for trouble. Standards often have
multiple options or modes or parameters because several
powerful parties insist that their way is best. Designers should
strongly resist this tendency. Just say no.
4. Exploit modularity: This principle leads directly to the idea
of having protocol stacks, each of whose layers is independent
of all the other ones. In this way, if circumstances require
one module or layer to be changed, the other ones will not be
affected.
5. Expect heterogeneity: Different types of hardware,
transmission facilities, and applications will occur on any
large network. To handle them, the network design must be
simple, general, and flexible.
6. Avoid static options and parameters: If parameters are
unavoidable (e.g., maximum packet size), it is best to have the
sender and receiver negotiate a value rather than define fixed choices.
7. Look for a good design; it need not be perfect: Often the
designers have a good design but it cannot handle some weird
special case. Rather than messing up the design, the designers
should go with the good design and put the burden of working
around it on the people with the strange requirements.
8. Be strict when sending and tolerant when receiving: In other
words, only send packets that rigorously comply with the
standards, but expect incoming packets that may not be fully
conformant and try to deal with them.
9. Think about scalability: If the system is to handle millions of
hosts and billions of users effectively, no centralized databases
of any kind are tolerable and load must be spread as evenly as
possible over the available resources.
10.Consider performance and cost: If a network has poor
performance or outrageous costs, nobody will use it.
• Let us now leave the general principles and start looking at the
details of the Internet's network layer. At the network layer, the
Internet can be viewed as a collection of subnetworks or
Autonomous Systems (ASes) that are interconnected.
• There is no real structure, but several major backbones exist.
These are constructed from high-bandwidth lines and fast
routers. Attached to the backbones are regional (midlevel)
networks, and attached to these regional networks are the LANs
at many universities, companies, and Internet service providers.
The Internet is an interconnected collection of many
networks.
• The glue that holds the whole Internet together is the network
layer protocol, IP (Internet Protocol).
• Unlike most older network layer protocols, it was designed
from the beginning with internetworking in mind. A good way
to think of the network layer is this: its job is to provide a best-
effort (i.e., not guaranteed) way to transport datagrams from
source to destination, without regard to whether these machines
are on the same network or whether there are other networks in
between them.
• Communication in the Internet works as follows. The transport
layer takes data streams and breaks them up into datagrams. In
theory, datagrams can be up to 64 KB each, but in practice
they are usually not more than 1500 bytes (so they fit in one
Ethernet frame). Each datagram is transmitted through the
Internet, possibly being fragmented into smaller units as it goes.
When all the pieces finally get to the destination machine, they
are reassembled by the network layer into the original datagram.
This datagram is then handed to the transport layer, which
inserts it into the receiving process' input stream.
The IPv4 (Internet Protocol) header
• Version:(4 bits): The Version field keeps track of which
version of the protocol the datagram belongs to. By including
the version in each datagram, it becomes possible to have the
transition between versions take years, with some machines
running the old version and others running the new one.
• Internet Header Length (IHL, 4 bits): The IHL field tells
how long the header is, in 32-bit words. The minimum value is
5, which applies when no options are present. The maximum
value of this 4-bit field is 15, which limits the header to 60
bytes, and thus the Options field to 40 bytes.
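The limits quoted above follow directly from the field width:

```python
# Arithmetic behind the IHL limits stated above.
ihl_min, ihl_max = 5, 15               # 4-bit field, counted in 32-bit words
header_min = ihl_min * 4               # 20-byte header with no options
header_max = ihl_max * 4               # 60-byte maximum header
options_max = header_max - header_min  # 40 bytes left for options
```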
• Type-of-Service (8 bits): The Type of service field is one of
the few fields that has changed its meaning (slightly) over the
years. It was and is still intended to distinguish between
different classes of service. Various combinations of reliability
and speed are possible. For digitized voice, fast delivery beats
accurate delivery. For file transfer, error-free transmission is
more important than fast transmission.
• Originally, the 6-bit field contained (from left to right), a three-
bit Precedence field and three flags, D, T, and R. The
Precedence field was a priority, from 0 (normal) to 7 (network
control packet). The three flag bits allowed the host to specify
what it cared most about from the set {Delay, Throughput,
Reliability}. In theory, these fields allow routers to make
choices between, for example, a satellite link with high
throughput and high delay or a leased line with low throughput
and low delay. In practice, current routers often ignore the Type
of service field altogether.
• Total length (16 bits): The Total length includes everything in
the datagram—both header and data. The maximum length is
65,535 bytes. At present, this upper limit is tolerable, but with
future gigabit networks, larger datagrams may be needed.
• Identifier (16 bits): To allow proper reassembly of fragments,
the Identification field is needed so that the destination host
can determine which datagram a newly arrived fragment belongs
to. All the fragments of a datagram contain the same
Identification value.
• Flags (3 bits): The first bit is unused. Only two of the bits are
currently defined: MF (More Fragments) and DF (Don't Fragment).
MF(More Fragments) :When a receiving host sees a packet
arrive with the MF = 1, it examines the Fragment Offset to see
where this fragment is to be placed in the reconstructed packet.
Don't Fragment flag (DF):The Don't Fragment (DF) flag is a
single bit in the Flag field that indicates that fragmentation of
the packet is not allowed.
• Fragment offset: The Fragment offset tells where in the
current datagram this fragment belongs. All fragments except
the last one in a datagram must be a multiple of 8 bytes, the
elementary fragment unit. Since 13 bits are provided, there is a
maximum of 8192 fragments per datagram, giving a maximum
datagram length of 65,536 bytes, one more than the Total
length field.
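The numbers in this bullet follow from the field width and the 8-byte fragment unit:

```python
# Arithmetic behind the Fragment offset field described above.
offset_bits = 13
fragment_unit = 8                             # offsets counted in 8-byte units
max_fragments = 2 ** offset_bits              # 8192 possible offsets
max_datagram = max_fragments * fragment_unit  # 65,536 bytes
```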
• Time-to-Live (TTL) (8 bits): The Time to live field is a
counter used to limit packet lifetimes. It is supposed to count
time in seconds, allowing a maximum lifetime of 255 sec. It
must be decremented on each hop and is supposed to be
decremented multiple times when queued for a long time in a
router. In practice, it just counts hops. When it hits zero, the
packet is discarded and a warning packet is sent back to the
source host. This feature prevents datagrams from wandering
around forever, something that otherwise might happen if the
routing tables ever become corrupted.
• Protocol (8 bits): The Protocol field tells the network layer which
transport process to give the datagram to. TCP is one possibility, but so are UDP and
some others. The numbering of protocols is global across the
entire Internet. Protocols and other assigned numbers were
formerly listed in RFC 1700, but nowadays they are contained
in an on-line data base located at www.iana.org.
• Header checksum (16 bits): The Header checksum verifies
the header only. Such a checksum is useful for detecting errors
generated by bad memory words inside a router. The algorithm
is to add up all the 16-bit half words as they arrive, using one's
complement arithmetic and then take the one's complement of
the result. For purposes of this algorithm, the Header checksum
is assumed to be zero upon arrival. This algorithm is more
robust than using a normal add. Note that the Header checksum
must be recomputed at each hop because at least one field
always changes (the Time to live field), but tricks can be used
to speed up the computation.
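The one's-complement algorithm described above can be sketched as follows; the sample header bytes come from a standard worked example, with the checksum field zeroed before computing, as the text assumes:

```python
# Sketch of the Internet header checksum: sum the 16-bit halfwords in
# one's-complement arithmetic, then take the one's complement.

def ip_header_checksum(header: bytes) -> int:
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]  # big-endian halfword
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# 20-byte header with the checksum field (bytes 10-11) zeroed,
# as the algorithm assumes on arrival.
hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
csum = ip_header_checksum(hdr)
# Re-running the sum over a header carrying its correct checksum yields
# 0, which is how the header is verified at each hop.
```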
• IP Destination Address (32 bits): The IP Destination Address
field contains a 32-bit binary value that represents the packet
destination Network layer host address.
• IP Source Address (32 bits): The IP Source Address field
contains a 32-bit binary value that represents the packet source
Network layer host address.
• Options (variable): The Options field is padded out to a
multiple of four bytes. Originally, five options were defined.
The current complete list is maintained on-line at
www.iana.org/assignments/ip-parameters.
• The Security option tells how secret the information is. In
theory, a military router might use this field to specify not to
route through certain countries the military considers to be ''bad
guys.'' In practice, all routers ignore it, so its only practical
function is to help spies find the good stuff more easily.
• The Strict source routing option gives the complete path from
source to destination as a sequence of IP addresses. The
datagram is required to follow that exact route. It is most useful
for system managers to send emergency packets when the
routing tables are corrupted, or for making timing
measurements.
• The Loose source routing option requires the packet to traverse
the list of routers specified, and in the order specified, but it is
allowed to pass through other routers on the way. Normally, this
option would only provide a few routers, to force a particular
path. For example, to force a packet from London to Sydney to
go west instead of east, this option might specify routers in
New York, Los Angeles, and Honolulu. This option is most
useful when political or economic considerations dictate
passing through or avoiding certain countries.
• The Record route option tells the routers along the path to
append their IP address to the option field. This allows system
managers to track down bugs in the routing algorithms (''Why
are packets from Houston to Dallas visiting Tokyo first?'').
When the ARPANET was first set up, no packet ever passed
through more than nine routers, so 40 bytes of option was
ample. As mentioned above, now it is too small.
• Finally, the Timestamp option is like the Record route option,
except that in addition to recording its 32-bit IP address, each
router also records a 32-bit timestamp. This option, too, is
mostly for debugging routing algorithms.
IP Addresses
• Every host and router on the Internet has an IP address, which
encodes its network number and host number.
• The combination is unique: in principle, no two machines on
the Internet have the same IP address.
• All IP addresses are 32 bits long and are used in the Source
address and Destination address fields of IP packets. It is
important to note that an IP address does not actually refer to a
host. It really refers to a network interface, so if a host is on two
networks, it must have two IP addresses.
• An IP address is a unique global address for a network
interface. It is a 32-bit identifier.
• An IP address contains two parts:
- network number (network prefix)
- host number
Dotted Decimal Notation
• IP addresses are written in a so-called dotted
decimal notation
• Each byte is identified by a decimal number in
the range [0..255]:
• Example:
10000000 10001111 10001001 10010000
1st Byte 2nd Byte 3rd Byte 4th Byte
= 128 = 143 = 137 = 144

128.143.137.144
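The byte-by-byte conversion shown above can be sketched in a few lines:

```python
# Sketch converting between a 32-bit address and dotted decimal notation.

def to_dotted(addr: int) -> str:
    """Render each of the four bytes as a decimal number in [0..255]."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def from_dotted(s: str) -> int:
    """Pack four dotted-decimal bytes back into one 32-bit value."""
    b0, b1, b2, b3 = (int(x) for x in s.split("."))
    return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3

# The worked example above, as one 32-bit binary value:
addr = int("10000000" "10001111" "10001001" "10010000", 2)
dotted = to_dotted(addr)
```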
Example
• Example: 128.143.137.144

• Network id is: 128.143
• Host number is: 137.144
• Prefix notation: 128.143.137.144/16
» Network prefix is 16 bits long
Classful IP Addresses
• The Internet address space was divided up into
classes:
• Class A addressing – allows 128 networks with 16 million hosts
each.
• Class B addressing – allows 16,384 networks with up to 64
thousand hosts each.
• Class C addressing – allows 2 million networks with up to 256
hosts each.
• Class D addressing – multicast; about 2^28 (roughly 268 million)
group addresses.
• Class E addressing – reserved for future use.
IP address formats.
Special IP addresses

• The values 0 and -1 (all 1s) have special meanings, The value
0 means this network or this host. The value of -1 is used as a
broadcast address to mean all hosts on the indicated network
• The IP address 0.0.0.0 is used by hosts when they are being
booted. IP addresses with 0 as network number refer to the
current network. These addresses allow machines to refer to
their own network without knowing its number (but they have
to know its class to know how many 0s to include).
• The address consisting of all 1s allows broadcasting on the
local network, typically a LAN. The addresses with a proper
network number and all 1s in the host field allow machines to
send broadcast packets to distant LANs anywhere in the
Internet (although many network administrators disable this
feature). Finally, all addresses of the form 127.xx.yy.zz are
reserved for loopback testing.
• Packets sent to that address are not put out onto the wire; they
are processed locally and treated as incoming packets. This
allows packets to be sent to the local network without the
sender knowing its number.
NAT—Network Address Translation
• IP addresses are scarce. An ISP might have a /16 (formerly
class B) address, giving it 65,534 host numbers. If it has more
customers than that, it has a problem.
• The problem of running out of IP addresses is not a theoretical
problem that might occur at some point in the distant future. It
is happening right here and right now.
• The long-term solution is for the whole Internet to migrate to
IPv6, which has 128-bit addresses. This transition is slowly
occurring, but it will be years before the process is complete.
As a consequence, some people felt that a quick fix was needed
for the short term. This quick fix came in the form of NAT
(Network Address Translation)
• The basic idea behind NAT is to assign each company a single
IP address (or at most, a small number of them) for Internet
traffic. Within the company, every computer gets a unique IP
address, which is used for routing intramural traffic
• when a packet exits the company and goes to the ISP, an
address translation takes place. To make this scheme possible,
three ranges of IP addresses have been declared as private.
Companies may use them internally as they wish. The only rule
is that no packets containing these addresses may appear on the
Internet itself. The three reserved ranges are:
• 10.0.0.0 – 10.255.255.255/8 (16,777,216 hosts)
• 172.16.0.0 – 172.31.255.255/12 (1,048,576 hosts)
• 192.168.0.0 – 192.168.255.255/16 (65,536 hosts)
• The first range provides for 16,777,216 addresses (except for 0
and -1, as usual) and is the usual choice of most companies,
even if they do not need so many addresses
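The three ranges listed above can be checked with the standard library; the two sample addresses below are the NAT-internal and public addresses used in the NAT discussion:

```python
import ipaddress

# Sketch checking whether an address falls into one of the three
# private (RFC 1918) ranges listed above, via the ipaddress module.

PRIVATE_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),      # 16,777,216 addresses
    ipaddress.ip_network("172.16.0.0/12"),   # 1,048,576 addresses
    ipaddress.ip_network("192.168.0.0/16"),  # 65,536 addresses
]

def is_private(addr: str) -> bool:
    """True if addr may only be used internally, never on the Internet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_RANGES)

inside = is_private("10.0.0.1")        # a NAT-internal address
outside = is_private("198.60.42.12")   # a globally routable address
```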
Placement and operation of a NAT box
• Within the company premises, every machine has a unique
address of the form 10.x.y.z. However, when a packet leaves the
company premises, it passes through a NAT box that converts
the internal IP source address, 10.0.0.1 in the figure, to the
company's true IP address, 198.60.42.12 in this example.
• The NAT box is often combined in a single device with a
firewall, which provides security by carefully controlling what
goes into the company and what comes out.
• It is also possible to integrate the NAT box into the company's
router.
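The address rewriting described above can be sketched as a translation table. This is a simplified illustration using the example addresses from the text; a real NAT box also recomputes IP and TCP/UDP checksums, and the external port numbers here are invented:

```python
# A minimal sketch of NAT translation, assuming the example addresses
# from the text (internal 10.x.y.z hosts, public address 198.60.42.12).
PUBLIC_IP = "198.60.42.12"

class NatBox:
    def __init__(self):
        self.next_port = 5000   # next free external port (arbitrary start)
        self.table = {}         # external port -> (internal ip, internal port)

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source address and port."""
        ext_port = self.next_port
        self.next_port += 1
        self.table[ext_port] = (src_ip, src_port)
        return PUBLIC_IP, ext_port

    def inbound(self, dst_port):
        """Map an incoming packet back to the internal host."""
        return self.table[dst_port]

nat = NatBox()
pub = nat.outbound("10.0.0.1", 1234)   # packet leaves the company
print(pub)                             # ('198.60.42.12', 5000)
print(nat.inbound(5000))               # ('10.0.0.1', 1234)
```

The table is what lets replies from the Internet, all addressed to 198.60.42.12, find their way back to the correct 10.x.y.z machine.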
Internet Control Protocols
• In addition to IP, which is used for data transfer, the Internet has
several control protocols used in the network layer, including
ICMP, ARP, RARP, BOOTP, and DHCP.
• Internet Control Message Protocol: The operation of
the Internet is monitored closely by the routers. When
something unexpected occurs, the event is reported by the
ICMP (Internet Control Message Protocol), which is also used
to test the Internet. About a dozen ICMP message types are
defined.
• Each ICMP message type is encapsulated in an IP packet.
• The most important ones are listed below
Internet Control Message Protocol
(Figure: The principal ICMP message types)
• The DESTINATION UNREACHABLE message is used when the subnet or a
router cannot locate the destination, or when a packet with the DF
(Don't Fragment) bit set cannot be delivered because a "small-packet"
network stands in the way.
• The TIME EXCEEDED message is sent when a packet is
dropped because its hop counter (the TTL field) has reached zero. This event is a
symptom that packets are looping, that there is enormous
congestion, or that the timer values are being set too low.
• The PARAMETER PROBLEM message indicates that an
illegal value has been detected in a header field. This problem
indicates a bug in the sending host's IP software or possibly in
the software of a router being transited.
• The SOURCE QUENCH message was formerly used to
throttle hosts that were sending too many packets. When a host
received this message, it was expected to slow down. It is rarely
used any more because when congestion occurs, these packets
tend to add more fuel to the fire.
• The REDIRECT message is used when a router notices that a
packet seems to be routed wrong. It is used by the router to tell
the sending host about the probable error.
• The ECHO and ECHO REPLY messages are used to see if a
given destination is reachable and alive. Upon receiving the
ECHO message, the destination is expected to send an ECHO
REPLY message back.
• The TIMESTAMP REQUEST and TIMESTAMP REPLY
messages are similar, except that the arrival time of the message
and the departure time of the reply are recorded in the reply.
This facility is used to measure network performance.
• In addition to these messages, others have been defined. The
on-line list is now kept at www.iana.org/assignments/icmp-
parameters.
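As an illustration of how an ICMP message is built, the sketch below constructs an ECHO request (ICMP type 8) and computes the standard Internet checksum over it. Actually sending it would require a privileged raw socket, so the example only builds and verifies the packet; the identifier, sequence number, and payload are arbitrary:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    """ICMP ECHO: type 8, code 0, checksum, identifier, sequence number."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 first
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = echo_request(ident=1, seq=1)
# Property of the checksum: recomputing over the whole packet,
# checksum field included, must yield 0.
print(internet_checksum(pkt))  # 0
```

The destination answers with an ECHO REPLY (type 0) carrying the same identifier, sequence number, and payload, which is how ping matches replies to requests.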
ARP – The Address Resolution Protocol
• Although every machine on the Internet has one (or more) IP
addresses, these cannot actually be used for sending packets
because the data link layer hardware does not understand
Internet addresses. Nowadays, most hosts at companies and
universities are attached to a LAN by an interface board that
only understands LAN addresses.
• The Address Resolution Protocol (ARP) performs dynamic
address resolution, using only the low-level network
communication system.
• A machine uses ARP to find the H/W address of another
machine by broadcasting an ARP request. The request contains
the IP address of the machine for which a H/W address is
needed.
• All machines on a network receive an ARP request. If the
request matches a machine’s IP address, the machine responds
by sending a reply that contains the needed hardware address.
Replies are directed to one machine.
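The request/reply exchange above can be mimicked with a toy simulation. The hosts and hardware addresses below are invented for illustration; real ARP runs as link-layer broadcast frames, not function calls:

```python
# Toy ARP: every host on the LAN "sees" the broadcast request, but
# only the owner of the requested IP address replies with its MAC.
hosts = {                     # IP address -> hardware (MAC) address
    "192.31.65.5": "E1:00:00:00:00:01",
    "192.31.65.7": "E1:00:00:00:00:02",
}
arp_cache = {}                # mappings learned from earlier replies

def arp_resolve(target_ip):
    """Broadcast a request; the matching host replies with its MAC."""
    if target_ip in arp_cache:            # cache hit: no broadcast needed
        return arp_cache[target_ip]
    for ip, mac in hosts.items():         # the "broadcast": all hosts check
        if ip == target_ip:               # only the owner replies
            arp_cache[target_ip] = mac    # cache it for next time
            return mac
    return None                           # no reply: host not on this LAN

print(arp_resolve("192.31.65.5"))         # E1:00:00:00:00:01
```

Caching the answer, as real implementations do, avoids flooding the LAN with a broadcast for every single packet.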
Reverse Address Resolution Protocol (RARP)
• RARP uses physical network addressing to obtain the
machine’s internet address
• The RARP mechanism supplies the target machine’s physical
H/W address to uniquely identify the processor and broadcasts
the RARP request
• Servers on the network receive the message, look up the
mapping in a table, reply to the sender
• Once a machine obtains its IP address, it stores the address in
memory and does not use RARP again until it reboots
• A disadvantage of RARP is that it uses a destination address of
all 1s (limited broadcasting) to reach the RARP server.
However, such broadcasts are not forwarded by routers, so a
RARP server is needed on each network. To get around this
problem, an alternative bootstrap protocol called BOOTP was
invented.
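The server-side lookup that RARP performs is the reverse of ARP's: the server holds a table keyed by hardware address. A toy sketch with invented mappings:

```python
# Toy RARP server table: hardware (MAC) address -> IP address.
# A real server answers link-layer broadcasts from diskless machines.
rarp_table = {
    "E1:00:00:00:00:01": "192.31.65.5",
    "E1:00:00:00:00:02": "192.31.65.7",
}

def rarp_request(mac):
    """The booting machine broadcasts its own MAC; the server looks it up."""
    return rarp_table.get(mac)   # None if no server knows this machine

print(rarp_request("E1:00:00:00:00:01"))   # 192.31.65.5
```

Because the request is a limited broadcast that routers do not forward, such a table must live on a server attached to the same network as the booting machine.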
Dynamic Host Configuration Protocol
• To handle automated address assignment, the IETF has designed a
protocol called DHCP (Dynamic Host Configuration Protocol). DHCP
allows both manual IP address assignment and automatic assignment.
• DHCP allows three types of address assignment:
• 1. It allows manual configuration in which a manager can
configure a specific address for a specific computer
• 2. DHCP also permits automatic configuration in which a
manager allows a DHCP server to assign a permanent address
when a computer first attaches to the network
• 3. DHCP permits completely dynamic configuration in which a server
"loans" an address to a computer for a limited period.
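The third, fully dynamic mode can be sketched as a small lease table with expiry. The pool addresses, client identifiers, and lease length below are illustrative only; a real DHCP server speaks the DISCOVER/OFFER/REQUEST/ACK exchange over UDP:

```python
import time

# A minimal sketch of DHCP-style dynamic assignment: the server
# "loans" addresses from a pool for a limited lease period.
class DhcpServer:
    def __init__(self, pool, lease_seconds):
        self.pool = list(pool)
        self.lease_seconds = lease_seconds
        self.leases = {}                  # client MAC -> (ip, expiry time)

    def request(self, mac, now=None):
        now = now if now is not None else time.time()
        # Reclaim expired leases so their addresses can be reused.
        for m, (ip, expiry) in list(self.leases.items()):
            if expiry <= now:
                self.pool.append(ip)
                del self.leases[m]
        if mac in self.leases:            # renew an existing lease
            ip, _ = self.leases[mac]
        else:
            ip = self.pool.pop(0)         # loan a fresh address
        self.leases[mac] = (ip, now + self.lease_seconds)
        return ip

server = DhcpServer(["192.168.0.10", "192.168.0.11"], lease_seconds=60)
print(server.request("aa:bb:cc:dd:ee:01", now=0))    # 192.168.0.10
print(server.request("aa:bb:cc:dd:ee:01", now=30))   # renewed: same address
print(server.request("aa:bb:cc:dd:ee:02", now=30))   # 192.168.0.11
```

The expiry step is the point of the "loan": an address whose lease runs out silently returns to the pool and can be handed to the next machine that asks.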