Written Assignment Unit 6
Bachelor of Computer Science
University of the People
CS 2204-01: Communication and Networking
Dr. William Sexton
December 24, 2024
Understanding IPv4 Fragmentation and Related Networking Concepts
Fragmentation of IPv4 Packets
In the context of transferring 2000 bytes of user data using a single UDP send over a standard
Ethernet network, we must consider the maximum payload size allowed by Ethernet frames. The
maximum Ethernet payload is typically 1500 bytes, which includes the headers for both IPv4 and
UDP.
1. Calculating Required Fragments:
User Data: 2000 bytes
Maximum Payload Size: 1500 bytes
IPv4 Header Size: 20 bytes (standard size without options)
UDP Header Size: 8 bytes
The total header size for IPv4 and UDP combined is:
Total Header Size = 20 bytes (IPv4) + 8 bytes (UDP) = 28 bytes
This means that the effective IP payload size for each fragment is:
Effective Payload = 1500 bytes (Ethernet) − 20 bytes (IPv4) = 1480 bytes
Because fragmentation happens at the IP layer, the unit being fragmented is the whole UDP datagram: the 8-byte UDP header plus 2000 bytes of user data, or 2008 bytes of IP payload. The UDP header therefore travels only in the first fragment. To calculate how many fragments are needed to send 2000 bytes of user data:
The first fragment carries the UDP header plus 1480 − 8 = 1472 bytes of user data.
The remaining data after the first fragment will be:
2000 bytes − 1472 bytes = 528 bytes
The second fragment will carry this remaining data. Therefore, we need:
Fragment 1: Carries the UDP header and 1472 bytes of user data.
Fragment 2: Carries the remaining 528 bytes of user data.
Total Fragments Needed: 2
2. Fragmentation Breakdown:
Fragment 1:
User Data: 1472 bytes
Total Size (including headers): 1472 + 8 + 20 = 1500 bytes
Fragment 2:
User Data: 528 bytes
Total Size (including header): 528 + 20 = 548 bytes
Thus, the user data is split across two fragments, with the first fragment carrying the bulk of the
data and the second fragment containing the remainder.
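The arithmetic above can be checked with a short sketch. This is an illustrative calculation only, assuming a 1500-byte Ethernet MTU, a 20-byte IPv4 header with no options, and an 8-byte UDP header; the function name is my own.

```python
# Illustrative fragment-size calculator for a single UDP send,
# assuming a 1500-byte Ethernet MTU, a 20-byte IPv4 header (no
# options), and an 8-byte UDP header.

MTU = 1500
IP_HEADER = 20
UDP_HEADER = 8

def fragment_sizes(user_data: int):
    """Return (user_data_bytes, frame_size) for each IPv4 fragment."""
    payload = user_data + UDP_HEADER   # IP payload = UDP header + data
    per_fragment = MTU - IP_HEADER     # 1480 bytes of IP payload per frame
    fragments = []
    offset = 0
    while offset < payload:
        chunk = min(per_fragment, payload - offset)
        # The UDP header travels only in the first fragment.
        data = chunk - UDP_HEADER if offset == 0 else chunk
        fragments.append((data, chunk + IP_HEADER))
        offset += chunk
    return fragments

print(fragment_sizes(2000))  # [(1472, 1500), (528, 548)]
```

Running this for 2000 bytes reproduces the two fragments derived above: 1472 bytes of user data in a full 1500-byte frame, then 528 bytes in a 548-byte frame.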
Problems with Remote Procedure Call (RPC)
Despite its conceptual elegance, RPC has several inherent challenges:
1. Latency Issues: RPC can introduce significant latency due to the round-trip time
required for requests and responses between client and server. This delay can be
exacerbated in high-latency networks, making real-time applications less responsive.
2. Error Handling Complexity: Handling errors in RPC can be complicated because it
often requires managing state across distributed systems. If a call fails, determining
whether to retry or handle the error gracefully can be challenging, especially if state
information has changed.
3. Network Transparency Limitations: While RPC aims to provide a seamless interface
for remote communication, it can obscure underlying network issues. For instance, if a
network partition occurs or if there are performance bottlenecks, these issues may not be
apparent at the application level until they cause significant disruptions.
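The error-handling difficulty in point 2 can be made concrete: a timed-out call is ambiguous, because the request may have failed or may have succeeded with the reply lost. The sketch below is hypothetical (there is no real RPC library behind it; `send_request` is a stand-in) and shows why blind retries are only safe for idempotent procedures.

```python
# Hypothetical sketch of the RPC retry dilemma. A timeout does not
# tell the client whether the remote procedure executed, so retrying
# a non-idempotent call (e.g. "charge the card") risks running it
# twice. `send_request` is an illustrative stand-in, not a real API.

import socket

def call_with_retry(send_request, max_retries=3, timeout=1.0):
    for attempt in range(max_retries):
        try:
            return send_request(timeout=timeout)
        except socket.timeout:
            # Did the server execute the request? The client cannot
            # tell; retrying is safe only if the call is idempotent.
            continue
    raise RuntimeError("RPC failed after retries; server state unknown")
```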
Importance of Timestamping in Real-Time Applications
Timestamping is crucial in real-time applications using the Real-time Transport Protocol (RTP)
for several reasons:
1. Synchronization: Timestamps help synchronize audio and video streams to ensure that
they play back in sync. This is particularly important in multimedia applications where
timing discrepancies can lead to poor user experiences.
2. Jitter Management: In environments where packet arrival times are inconsistent (jitter),
timestamps allow receivers to reorder packets based on their intended playback time, thus
improving the quality of service.
3. Delay Compensation: Timestamps enable applications to measure and compensate for
network delays, ensuring that media streams are delivered as intended without noticeable
lag.
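Jitter management in point 2 has a concrete form in the RTP specification (RFC 3550), which defines an interarrival jitter estimator built directly on the RTP timestamp field. The sketch below shows one update step of that estimator; the function name and the choice of clock units are mine, but the formula (a 1/16-weighted running average of the transit-time difference) follows the RFC.

```python
# One step of the RFC 3550 interarrival jitter estimator. D is the
# difference in relative transit time between two packets, computed
# from RTP timestamps and arrival times in the same clock units;
# the jitter is a running average of |D| with gain 1/16.

def update_jitter(jitter, prev_arrival, prev_ts, arrival, ts):
    d = abs((arrival - prev_arrival) - (ts - prev_ts))
    return jitter + (d - jitter) / 16.0
```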
Purpose of UDP
UDP exists to provide a lightweight transport layer protocol that prioritizes speed over reliability.
While it might seem sufficient to allow user processes to send raw IP packets directly, UDP
offers several advantages:
1. Multiplexing Support: UDP allows multiple applications on a single host to send and
receive messages using port numbers, enabling efficient communication without needing
a full connection-oriented protocol like TCP.
2. Minimal Overhead: UDP provides a simpler header structure compared to TCP,
reducing processing overhead and allowing faster transmission of packets—ideal for
applications where speed is critical, such as streaming or gaming.
3. Application-Level Control: By using UDP, developers have more control over error
handling and retransmission strategies tailored to specific application needs rather than
relying on built-in mechanisms from connection-oriented protocols.
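The multiplexing role in point 1 is easy to demonstrate: two UDP sockets on the same host are told apart purely by port number, something raw IP packets alone could not provide. The sketch below uses the loopback interface and OS-assigned ports for illustration.

```python
# Minimal demonstration of UDP port multiplexing: two independent
# "applications" on one host, distinguished only by port number.
# Uses loopback; ports are assigned by the OS for illustration.

import socket

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
b.bind(("127.0.0.1", 0))
a.settimeout(2.0)
b.settimeout(2.0)

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"to-a", a.getsockname())
sender.sendto(b"to-b", b.getsockname())

# Each datagram is delivered to the socket whose port matches.
msg_a = a.recvfrom(64)[0]
msg_b = b.recvfrom(64)[0]
print(msg_a, msg_b)
```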
QUIC's Reduction of RTTs
QUIC (Quick UDP Internet Connections) effectively reduces round-trip times (RTTs) needed at
the start of secure web connections through several mechanisms:
1. Connection Establishment Optimization: QUIC combines the handshake process for
establishing a connection with the TLS handshake, allowing clients to start sending
encrypted data immediately rather than waiting for multiple round trips.
2. Zero Round Trip Time (0-RTT) Resumption: For previously connected clients, QUIC
allows them to send data immediately upon reconnection without waiting for a
handshake, significantly speeding up subsequent connections.
3. Multiplexing Without Head-of-Line Blocking: QUIC's design allows multiple streams
within a single connection without head-of-line blocking issues seen in TCP, enabling
faster delivery of packets even if some packets are delayed or lost.
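The savings described above can be put in rough numbers. The sketch below is back-of-the-envelope arithmetic, not a measurement: it assumes the protocol-level minimum of two round trips before application data for TCP plus TLS 1.3 (one for the TCP handshake, one for TLS), one for a fresh QUIC connection, and zero for QUIC 0-RTT resumption.

```python
# Back-of-the-envelope time-to-first-byte comparison, assuming the
# protocol-minimum handshake round trips: TCP + TLS 1.3 needs 2 RTTs,
# a fresh QUIC connection 1 RTT, and QUIC 0-RTT resumption none.

def time_to_first_byte(rtt_ms, handshake_rtts):
    return rtt_ms * handshake_rtts

rtt = 100  # example 100 ms round-trip time
print(time_to_first_byte(rtt, 2))  # TCP + TLS 1.3: 200 ms
print(time_to_first_byte(rtt, 1))  # QUIC, first connection: 100 ms
print(time_to_first_byte(rtt, 0))  # QUIC 0-RTT resumption: 0 ms
```

On a 100 ms path, combining the transport and TLS handshakes saves a full round trip, and 0-RTT resumption removes handshake delay entirely.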
Reference:
Dordal, P. (2019). An introduction to computer networks.