OPTIMIZING TCP
PERFORMANCE FOR A
WEB SERVER
Network Administrator Report
Name: FARDOSA ENOW HUSSEIN
REGNO: 22/05166
INTRODUCTION
What is TCP Performance Optimization?
•TCP (Transmission Control Protocol) is crucial for ensuring fast and reliable data transmission in web applications.
•Optimizing TCP improves latency, throughput, and round-trip time (RTT) for better server performance.
Why is this important?
•Poorly tuned TCP leads to long page-load times, slow recovery from packet loss, and a degraded user experience.
Objective of this Study:
•Measure the current performance of a web application.
•Identify bottlenecks affecting TCP performance.
•Apply two optimizations and assess their impact.
•Provide recommendations for further improvements.
BASELINE PERFORMANCE MEASUREMENT
Metrics Analyzed:
•Latency – Time for a packet to travel from client to server (one-way delay).
•Throughput – Rate of successful data transfer (Mbps).
•RTT (Round-Trip Time) – Total time for a request to reach the server and the response to return.
Tools Used for Measurement:
•Ping (to measure RTT and packet loss).
•iPerf (to measure network bandwidth and throughput).
•Wireshark/Tcpdump (to analyze network traffic).
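The measurements above can be collected with commands like the following (a sketch; the server address is a placeholder, not taken from the report, and iperf3 must be running in server mode on the target first):

```shell
# Placeholder hostname for the web server under test
SERVER=webserver.example.com

# RTT and packet loss: 20 ICMP echo requests
ping -c 20 "$SERVER"

# Throughput: start `iperf3 -s` on the server, then from the client run
iperf3 -c "$SERVER" -t 30

# Capture TCP traffic on port 80 for later analysis in Wireshark
tcpdump -i eth0 -w baseline.pcap 'tcp port 80'
```

The `-t 30` flag runs the iperf3 test for 30 seconds, which smooths out slow-start effects in the reported throughput.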
Baseline Results (Before Optimization):
Metric       Value
Latency      120 ms
Throughput   150 Mbps
RTT          100 ms
IDENTIFIED BOTTLENECKS
Main Bottlenecks Affecting Performance:
Issue            Possible Cause
High RTT         Network congestion, inefficient routing
Low throughput   Small TCP window size or packet loss
Latency spikes   TCP slow-start mechanism
APPLIED OPTIMIZATION TECHNIQUES
Optimization 1: Increasing TCP Window Size
Why This Optimization?
•TCP window size determines how much data can be sent before waiting for acknowledgment.
•Larger window size = better throughput (especially in high-latency networks).
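How large the window needs to be can be estimated from the bandwidth-delay product (BDP). A quick calculation using this report's baseline figures (150 Mbps, 100 ms RTT):

```shell
# Bandwidth-delay product: bytes that must be "in flight" to keep the link full.
BANDWIDTH_BPS=$((150 * 1000 * 1000))   # 150 Mbps in bits/s
RTT_S=0.100                            # 100 ms RTT in seconds

# BDP (bytes) = bandwidth (bits/s) * RTT (s) / 8
BDP_BYTES=$(awk -v b="$BANDWIDTH_BPS" -v r="$RTT_S" 'BEGIN { printf "%d", b * r / 8 }')
echo "BDP: $BDP_BYTES bytes"
```

The result is about 1.9 MB in flight, while the classic unscaled TCP window tops out at 64 KB, which at 100 ms RTT would cap throughput at roughly 5 Mbps. This is why window scaling and a large receive buffer are needed on this path.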
How It Was Implemented:
•Increased TCP receive buffer size:
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
•Enabled TCP window scaling:
sysctl -w net.ipv4.tcp_window_scaling=1
Expected Benefit:
✅ Increased throughput by allowing more data transmission per cycle.
✅ Improved network efficiency by minimizing waiting time.
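The `sysctl -w` changes above apply only until reboot. One way to persist and verify them on a typical Linux system (a sketch; the drop-in filename is arbitrary, and root privileges are required):

```shell
# Persist the tuning across reboots via a sysctl drop-in file
cat >> /etc/sysctl.d/90-tcp-tuning.conf <<'EOF'
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_window_scaling = 1
EOF
sysctl --system   # reload all sysctl configuration files

# Verify the running values
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_window_scaling
```

The send-side buffer (net.ipv4.tcp_wmem) is usually tuned alongside the receive buffer when the server is the side uploading data.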
Optimization 2: Enabling TCP Fast Open (TFO)
Why This Optimization?
•TCP normally requires a three-way handshake before sending data.
•TCP Fast Open (TFO) allows sending data during the handshake → reducing delay.
How It Was Implemented:
•Enabled TFO on the server:
sysctl -w net.ipv4.tcp_fastopen=3
•Configured Nginx to accept TFO connections (the fastopen parameter on the listen directive sets the queue length for pending TFO requests, not the sysctl value):
listen 80 fastopen=256;
Expected Benefit:
✅ Reduced latency for repeat connections (the first connection obtains a TFO cookie; later ones send data in the SYN).
✅ Faster page loads for returning users.
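A quick way to verify TFO end to end (a sketch; the hostname is a placeholder, and curl supports --tcp-fastopen from version 7.49):

```shell
# Confirm the kernel setting (3 = enabled for both client and server roles)
cat /proc/sys/net/ipv4/tcp_fastopen

# Request the same URL twice with TFO; the second request can carry data
# in the SYN using the cookie cached from the first.
curl --tcp-fastopen -o /dev/null -s -w 'connect: %{time_connect}s\n' http://webserver.example.com/
curl --tcp-fastopen -o /dev/null -s -w 'connect: %{time_connect}s\n' http://webserver.example.com/

# Kernel-wide TFO counters (TCPFastOpenActive, TCPFastOpenPassive, ...)
grep -i tcpfastopen /proc/net/netstat
```

If the counters do not increase, a middlebox on the path may be stripping the TFO option, which is a common reason TFO shows no benefit in practice.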
PERFORMANCE COMPARISON (BEFORE
VS. AFTER OPTIMIZATION)
Comparison Table:
Metric       Before Optimization   After Optimization   Improvement
Latency      120 ms                90 ms                ↓ 25%
Throughput   150 Mbps              230 Mbps             ↑ 53%
RTT          100 ms                75 ms                ↓ 25%
Observations:
Latency reduced by 25%, leading to faster connections.
Throughput increased by 53%, allowing higher data transfer rates.
RTT improved by 25%, reducing network delays.
RECOMMENDATIONS FOR FURTHER
IMPROVEMENTS
Additional Optimizations to Consider:
✅ Use HTTP/2 or QUIC – Allows multiplexing to reduce latency.
✅ Enable BBR Congestion Control – Improves bandwidth efficiency:
sysctl -w net.ipv4.tcp_congestion_control=bbr
✅ Implement Load Balancing – Distributes traffic across multiple servers for better
performance.
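Before switching the congestion control to BBR, it is worth confirming BBR is available (on many kernels the tcp_bbr module must be loaded first, and BBR is commonly paired with the fq packet scheduler). A sketch:

```shell
# Load BBR if it is not built into the kernel, then confirm availability
modprobe tcp_bbr
sysctl net.ipv4.tcp_available_congestion_control

# BBR is commonly deployed with the fq qdisc
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Verify the active algorithm
sysctl net.ipv4.tcp_congestion_control
```

As with the earlier tuning, these settings should also be persisted in /etc/sysctl.d/ to survive a reboot.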
CONCLUSION
Optimizing TCP performance improved latency, RTT, and throughput,
enhancing server efficiency and user experience. Future improvements
with HTTP/2, QUIC, and BBR congestion control will further boost speed
and efficiency.