Network Throughput Calculator
Calculate actual TCP/UDP throughput from bandwidth and latency. Understand why your 1 Gbps link can't transfer a 10 GB file as fast as you'd expect.
There's a classic networking question that trips up even experienced engineers: "I have a 1 Gbps link, so why can't I transfer a 10 GB file in 80 seconds?" The answer is that raw bandwidth and actual throughput are different things, and the gap between them depends on latency, TCP window sizes, and protocol efficiency in ways that aren't always intuitive.
The TCP Throughput Formula
For a single TCP connection:
Throughput ≤ TCP Window Size / RTT
If your TCP receive window is 65,535 bytes (the old default) and your RTT is 100ms, your maximum throughput is:
65,535 bytes / 0.1 s = 655,350 bytes/s × 8 bits/byte ≈ 5.2 Mbps
That's a 5 Mbps throughput ceiling on a 1 Gbps link, purely because of TCP's window size. Modern operating systems use TCP window scaling (RWIN up to 1 GB in theory), which fixes this for high-bandwidth long-distance links — but only if both endpoints support it.
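The ceiling above is easy to compute directly. A minimal sketch in Python (the function name is illustrative):

```python
def window_limited_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Window/RTT ceiling on single-stream TCP throughput, in Mbps."""
    bytes_per_second = window_bytes / rtt_seconds
    return bytes_per_second * 8 / 1_000_000  # bytes/s -> Mbps

# Old 64 KB default window over a 100 ms path:
print(round(window_limited_mbps(65_535, 0.1), 1))   # -> 5.2
# A 4 MB scaled window over the same path:
print(round(window_limited_mbps(4_000_000, 0.1)))   # -> 320
```

Note that the link speed never appears: when the window is the bottleneck, the link's capacity is irrelevant.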
How the Calculator Works
At CalcHub, enter:
- Link bandwidth (Mbps/Gbps)
- Round-trip time (ms)
- TCP window size (or use auto-detect, which assumes modern OS defaults)
- Protocol overhead percentage (TCP/IP headers, TLS, HTTP)
- Packet loss percentage
You get: maximum theoretical throughput, estimated actual throughput, efficiency percentage, and the bottleneck factor (bandwidth-limited vs latency-limited vs loss-limited).
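As a rough sketch of how the bandwidth and latency inputs combine (the function name and bottleneck labels are illustrative, not CalcHub's actual code; packet loss is left out for simplicity):

```python
def estimate_throughput(link_mbps: float, rtt_ms: float,
                        window_bytes: int, overhead_pct: float):
    """Return (estimated Mbps, efficiency %, bottleneck) under a simplified model."""
    window_limit_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
    if window_limit_mbps < link_mbps:
        ceiling, bottleneck = window_limit_mbps, "latency-limited"
    else:
        ceiling, bottleneck = link_mbps, "bandwidth-limited"
    estimated = ceiling * (1 - overhead_pct / 100)  # subtract protocol overhead
    efficiency = 100 * estimated / link_mbps
    return round(estimated, 1), round(efficiency, 1), bottleneck

# Cross-continent row from the table below: 1 Gbps, 100 ms, 4 MB window, 3% overhead
print(estimate_throughput(1000, 100, 4_000_000, 3))  # -> (310.4, 31.0, 'latency-limited')
```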
Throughput vs Bandwidth: Real Numbers
| Scenario | Link Speed | RTT | TCP Window | Max Throughput | Efficiency |
|---|---|---|---|---|---|
| Local LAN transfer | 1 Gbps | 0.3ms | 4 MB | 1 Gbps | ~99% |
| City to city (modern OS) | 1 Gbps | 20ms | 4 MB | 1 Gbps (window allows 1.6 Gbps) | ~100% |
| Cross-continent | 1 Gbps | 100ms | 4 MB | 320 Mbps | 32% |
| Old TCP stack (cross-country) | 100 Mbps | 100ms | 64 KB | 5.2 Mbps | 5.2% |
| 1% packet loss | 1 Gbps | 20ms | 4 MB | ~50 Mbps | ~5% |
The packet loss row is sobering. Even 1% loss tanks throughput because loss-based TCP congestion control backs off aggressively on every loss event. This is one reason UDP-based protocols (QUIC, custom game protocols) exist: they implement their own recovery and avoid TCP's head-of-line blocking, so a lost packet stalls only the affected stream.
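A common way to estimate the loss-limited ceiling is the Mathis et al. approximation, throughput ≈ (MSS/RTT) × C/√p. It models classic Reno-style congestion control, so it is pessimistic for modern stacks (CUBIC, and especially BBR, tolerate random loss better); the 1460-byte MSS and C ≈ 1.22 here are conventional assumptions:

```python
import math

def mathis_mbps(mss_bytes: int, rtt_s: float, loss_rate: float, c: float = 1.22) -> float:
    """Mathis et al. steady-state TCP throughput approximation, in Mbps."""
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate) / 1e6

# 1460-byte MSS, 20 ms RTT, 1% loss:
print(round(mathis_mbps(1460, 0.02, 0.01), 1))  # -> 7.1
```

For a Reno-style sender this predicts only ~7 Mbps at 1% loss, even lower than the table's estimate for a modern stack; either way, the link's 1 Gbps capacity stops mattering once loss dominates.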
Protocol Overhead
Not all bytes transferred are your data. IP and TCP headers add overhead, as does TLS, HTTP, application framing, and retransmissions. Typical overhead percentages:
| Protocol Stack | Overhead |
|---|---|
| Raw TCP/IP | 1–3% |
| TCP/IP + TLS 1.3 | 3–5% |
| HTTP/1.1 over TLS | 5–10% |
| HTTP/2 over TLS | 4–8% |
| HTTP/3 (QUIC) | 3–6% |
| VPN tunnel (WireGuard) | 3–5% |
| VPN tunnel (OpenVPN UDP) | 10–15% |
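Applying an overhead figure from the table is a one-liner; a sketch (the function name is illustrative):

```python
def goodput_mbps(throughput_mbps: float, overhead_pct: float) -> float:
    """Application-level goodput after subtracting protocol overhead."""
    return throughput_mbps * (1 - overhead_pct / 100)

# 1 Gbps of wire throughput, HTTP/1.1 over TLS at the high end (10% overhead):
print(round(goodput_mbps(1000, 10), 1))  # -> 900.0
```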
Parallel Connections Help (to a Point)
Web browsers open up to six parallel TCP connections per origin for HTTP/1.1. File transfer tools like rclone and aws s3 cp use parallel streams. With 8 parallel streams across a 100ms RTT link, you can get up to 8× the single-stream throughput, until you saturate the link; after that, adding more streams just creates contention.
For maximum single-server throughput on a long-distance link, tune TCP buffer sizes with sysctl (on Linux) rather than hoping the OS defaults are right, and measure what parallel streams actually achieve with iperf3's -P flag.
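On Linux, that tuning usually means raising the kernel's TCP buffer limits so autotuning can grow the window to match the path's bandwidth-delay product. A hedged example (the 16 MB caps are illustrative, not a universal recommendation, and server.example.com is a placeholder):

```shell
# Allow TCP autotuning to grow send/receive buffers up to 16 MB
# (values are min / default / max, in bytes)
sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# Then measure with 8 parallel streams for 60 seconds
iperf3 -c server.example.com -P 8 -t 60
```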
Tips
- Test with iperf3 before assuming performance. A 5-minute iperf3 run in both directions will tell you actual achievable throughput with your real network conditions.
- iperf3 reports UDP differently. UDP throughput is bandwidth-limited, not window-limited, so UDP tests often show higher numbers than TCP on long-distance links. This is why UDP-based file transfer protocols exist.
- MTU size matters. Jumbo frames (MTU 9000) reduce per-packet overhead for large transfers on supported networks. Mismatched MTUs across a path cause silent fragmentation that tanks throughput.
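The jumbo-frame point is simple arithmetic: fixed per-packet headers shrink as a fraction of a larger packet. A sketch assuming plain IPv4 + TCP headers (40 bytes, no options; function name is illustrative):

```python
def header_overhead_pct(mtu: int, header_bytes: int = 40) -> float:
    """Percent of each packet consumed by IPv4 + TCP headers (no options)."""
    return 100 * header_bytes / mtu

print(round(header_overhead_pct(1500), 2))  # -> 2.67  (standard Ethernet MTU)
print(round(header_overhead_pct(9000), 2))  # -> 0.44  (jumbo frames)
```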
Why do multiple parallel TCP connections beat a single connection?
Each TCP connection maintains its own congestion window and reacts to loss independently. Multiple streams can collectively fill more of the available bandwidth before any single stream's congestion control backs off. It's a throughput trick that works, but it consumes extra TCP state and buffer resources on both ends.
What's the Bandwidth-Delay Product?
BDP = Bandwidth × RTT, measured in bits or bytes. It represents how much data is "in flight" in the network at any time. A link with BDP of 10 MB needs a TCP window of at least 10 MB to be fully utilized. This is why cross-continental transfers require large TCP buffers.
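The BDP calculation matching the 10 MB example above would be a 1 Gbps path at 80 ms RTT (function name is illustrative):

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes in flight on a fully utilized path."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1000)

print(round(bdp_bytes(1000, 80)))  # -> 10000000, i.e. a 10 MB window is needed
```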
Does fiber vs cable internet affect throughput differently?
Fiber is symmetric (same upload/download speed) and typically lower latency to the first hop. Cable (DOCSIS) is asymmetric and shares bandwidth with neighbors in the same cable segment, causing throughput to vary with neighborhood load. Fiber generally shows more consistent throughput measurements.
Related Calculators
- Latency Calculator — RTT is the key input for throughput calculations
- Bandwidth Calculator — raw capacity planning
- Download Time Calculator — translate throughput to file transfer times