Calculate the bandwidth-delay product (BDP) for network connections. Optimize TCP window sizes and buffer settings for maximum throughput.
The bandwidth-delay product (BDP) is a fundamental concept in networking that determines the maximum amount of data that can be "in flight" on a network connection at any given time. Understanding BDP is crucial for optimizing TCP performance, sizing network buffers, and troubleshooting throughput issues on high-latency links like satellite connections or transcontinental WAN links.
BDP equals the link bandwidth multiplied by the round-trip time (RTT). For a 100 Mbps connection with 50ms RTT, the BDP is 625 KB — meaning 625 KB of data can exist in transit between sender and receiver at any moment. If your TCP window size is smaller than the BDP, you leave bandwidth unused: the sender stalls waiting for acknowledgments before the pipe is ever full.
This calculator helps network engineers, system administrators, and developers compute the correct BDP for their connections and determine optimal TCP buffer sizes. Enter your bandwidth and latency, and get recommendations for TCP window sizes, buffer configurations, and expected maximum throughput. Particularly valuable for tuning long-fat networks (LFNs) where high bandwidth meets high latency.
Misconfigured TCP buffers are a common cause of poor throughput on high-latency links. Calculate the correct BDP to ensure your network connections can achieve their full potential speed.
BDP (bits) = Bandwidth (bps) × RTT (seconds)
BDP (bytes) = Bandwidth (bps) × RTT (seconds) / 8
Max Throughput = TCP Window Size / RTT
Pipe Utilization = min(TCP Window, BDP) / BDP × 100%
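The four formulas can be sketched in Python (function names here are illustrative, not part of the calculator):

```python
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product in bytes: bandwidth x RTT, divided by 8."""
    return bandwidth_bps * rtt_seconds / 8

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Throughput ceiling imposed by a fixed TCP window."""
    return window_bytes * 8 / rtt_seconds

def pipe_utilization(window_bytes: float, bandwidth_bps: float, rtt_seconds: float) -> float:
    """Percentage of the link a given window can keep busy."""
    bdp = bdp_bytes(bandwidth_bps, rtt_seconds)
    return min(window_bytes, bdp) / bdp * 100

# 100 Mbps link with 50 ms RTT -> 625,000 bytes (625 KB)
print(bdp_bytes(100e6, 0.050))
```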
Result: BDP = 625 KB
100 Mbps × 50ms = 5,000,000 bits = 625,000 bytes ≈ 625 KB. Your TCP receive window must be at least 625 KB to fully utilize this 100 Mbps link with 50ms RTT.
The relationship between TCP window size and BDP directly determines maximum achievable throughput. TCP's sliding window protocol allows the sender to transmit up to window_size bytes before requiring an acknowledgment. If the window fills before ACKs return, the sender stalls. The time for an ACK to return is the RTT, so maximum throughput equals window_size/RTT. Only when the window size equals or exceeds the BDP can the full link bandwidth be utilized.
Default TCP window sizes on many systems are 64-128 KB, which is adequate for low-latency LAN connections but insufficient for WAN links. A 1 Gbps link with 30ms RTT has a BDP of 3.75 MB — requiring TCP window scaling and explicit buffer tuning to achieve full throughput. Without tuning, such a link would be limited to approximately 17 Mbps with a 64 KB window.
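Those numbers are easy to verify with a short Python sketch:

```python
window_bytes = 64 * 1024          # common default receive window
rtt = 0.030                       # 30 ms cross-country RTT
bandwidth = 1e9                   # 1 Gbps link

bdp = bandwidth * rtt / 8         # 3,750,000 bytes = 3.75 MB
ceiling = window_bytes * 8 / rtt  # window-limited throughput, ~17.5 Mbps

# The window, not the link, is the bottleneck here.
print(bdp, ceiling / 1e6)
```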
Different network scenarios have dramatically different BDP requirements. A local data center connection (10 Gbps, 0.2ms RTT) has a BDP of just 250 KB. A cross-country cloud connection (1 Gbps, 60ms RTT) needs 7.5 MB. A satellite internet link (50 Mbps, 600ms RTT) requires 3.75 MB. Each scenario demands different TCP tuning to achieve optimal performance.
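The three scenarios can be reproduced with a few lines of Python (labels are illustrative):

```python
# (bandwidth in bps, RTT in seconds) for each scenario above
scenarios = {
    "data center (10 Gbps, 0.2 ms)": (10e9, 0.0002),
    "cross-country (1 Gbps, 60 ms)": (1e9, 0.060),
    "satellite (50 Mbps, 600 ms)":   (50e6, 0.600),
}

for name, (bw, rtt) in scenarios.items():
    # BDP in bytes, printed in MB
    print(f"{name}: {bw * rtt / 8 / 1e6:.3g} MB")
```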
CDN and cloud providers typically pre-tune their servers for large BDP, but client-side settings often remain at defaults. This is why speed tests to nearby servers show full bandwidth while actual transfers to distant servers underperform — the default TCP window is too small for the higher RTT.
On Linux, the key parameters are net.ipv4.tcp_rmem (receive buffer auto-tuning range), net.ipv4.tcp_wmem (send buffer range), and net.core.rmem_max / net.core.wmem_max (maximum allowed buffer sizes). Set rmem_max and wmem_max to at least 2× your largest expected BDP. The tcp_rmem and tcp_wmem parameters take three values: minimum, default, and maximum. The kernel auto-tunes within this range based on observed RTT and bandwidth. On Windows, adjusting the TCP receive window typically requires registry modifications or netsh commands, and the OS auto-tuning feature (enabled by default since Windows Vista) handles most scenarios adequately.
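As an illustrative starting point, the Linux knobs above might be set like this (values assume a largest expected BDP of about 7.5 MB; size them from your own links):

```shell
# Allow socket buffers up to 16 MB (~2x a 7.5 MB BDP).
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# min / default / max for the kernel's auto-tuning range.
# Defaults stay modest; only long-fat flows grow toward the maximum.
sysctl -w net.ipv4.tcp_rmem="4096 131072 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
```

To persist these across reboots, place the same settings (without `sysctl -w`) in a file under /etc/sysctl.d/.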
BDP represents the maximum amount of unacknowledged data that can be in transit on a network link. It's the "volume" of the network pipe — bandwidth is the pipe diameter and delay is the pipe length.
TCP uses a sliding window to control flow. If the window size is smaller than the BDP, the sender must wait for ACKs before sending more data, leaving the link partially idle and reducing effective throughput.
Use ping to measure round-trip time to your destination. For more accurate measurements, use tools like mtr, traceroute, or application-level timing. RTT varies throughout the day due to congestion.
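When ICMP is blocked, one portable approximation is to time a TCP handshake, which takes roughly one RTT. A Python sketch (assumes the target accepts TCP connections on the chosen port):

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Estimate RTT by timing how long a TCP connection takes to establish."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established: roughly one round trip has elapsed
    return (time.perf_counter() - start) * 1000
```

Run it several times and take the minimum, since any single sample includes queuing delay.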
An LFN has a large BDP, typically over ~12.5 KB (10^5 bits, the threshold used in RFC 1072). Satellite links (500+ ms RTT) and high-bandwidth WAN links are classic LFNs needing TCP window scaling.
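That threshold is simple to encode as a predicate (a sketch; the 10^5-bit cutoff follows RFC 1072):

```python
def is_lfn(bandwidth_bps: float, rtt_seconds: float) -> bool:
    """True when the link's BDP exceeds ~10^5 bits (~12.5 KB)."""
    return bandwidth_bps * rtt_seconds > 1e5  # BDP in bits

print(is_lfn(50e6, 0.600))    # satellite link: True
print(is_lfn(100e6, 0.0005))  # LAN link: False
```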
On Linux, adjust net.core.rmem_max and net.ipv4.tcp_rmem. On Windows, use netsh. TCP window scaling (RFC 7323) must be enabled to exceed the 65,535 byte legacy limit.
BDP still represents the pipe capacity for UDP, but since UDP has no flow control, buffer sizing is managed at the application level. For UDP streaming, send buffers should accommodate at least one BDP.
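A minimal Python sketch of requesting and verifying a UDP send buffer (the `bdp_bytes` value here is a stand-in, not computed from a real link; the kernel may silently cap the request at net.core.wmem_max, so always read back what was granted):

```python
import socket

bdp_bytes = 64 * 1024  # stand-in BDP; compute yours as bandwidth x RTT / 8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)

# On Linux the kernel reports double the requested size for bookkeeping;
# the point is to confirm the buffer was not capped below one BDP.
print(granted)
sock.close()
```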