TCP Window Size Calculator

Calculate optimal TCP window size and maximum throughput based on bandwidth and round-trip latency (bandwidth-delay product).

About the TCP Window Size Calculator

TCP throughput is fundamentally limited by the TCP window size and network round-trip time (RTT). The Bandwidth-Delay Product (BDP) determines the optimal window size: the amount of data that can be in flight (sent but not yet acknowledged) to fully utilize the available bandwidth.

If the TCP window is smaller than the BDP, the connection cannot fill the pipe: the sender must wait for acknowledgments before transmitting more data, leaving bandwidth unused. This is especially impactful for high-bandwidth, high-latency links (WAN, satellite, cross-region cloud).

This calculator computes the BDP for your link parameters and determines the required TCP window size to achieve maximum throughput. It also estimates the maximum achievable throughput with the default and optimal window sizes.

This calculation provides a foundation for capacity planning and performance budgeting, helping teams align network and buffer configuration with application requirements and growth projections. Folding it into monitoring and reporting workflows keeps tuning decisions grounded in measured link characteristics rather than assumptions about system behavior.

Why Use This TCP Window Size Calculator?

Default TCP window sizes are often too small for high-bandwidth or high-latency links. This calculator determines the optimal window size to maximize throughput on your specific network path. Comparing measured throughput against the computed maximum helps DevOps teams spot undersized buffers early and maintain the reliability and performance that users and business stakeholders expect.

How to Use This Calculator

  1. Enter the link bandwidth in Mbps.
  2. Enter the round-trip time (RTT) in milliseconds.
  3. Review the bandwidth-delay product and required window size.
  4. Configure OS TCP buffers to at least the BDP value.

Formula

BDP (bits) = bandwidth_bps × RTT_sec
BDP (bytes) = bandwidth_bps × RTT_sec / 8
Max Throughput (bps) = window_size_bytes × 8 / RTT_sec
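These formulas translate directly into code. A minimal sketch (the function names are illustrative, not part of any library):

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product in bytes: bits in flight, divided by 8."""
    return bandwidth_mbps * 1_000_000 * (rtt_ms / 1000) / 8

def max_throughput_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Maximum throughput achievable with a given window, in Mbps."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# A 1 Gbps link with 50 ms RTT needs a 6.25 MB window:
print(bdp_bytes(1000, 50))                 # 6250000.0 (bytes)
print(max_throughput_mbps(64 * 1024, 50))  # 10.48576 (Mbps with a 64 KB window)
```

The second call shows the ceiling a default 64 KB window imposes regardless of link speed.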

Example Calculation

Result: BDP: 6.25 MB, optimal window: 6.25 MB

BDP = 1,000 Mbps × 50 ms = 1,000,000,000 bps × 0.050 s / 8 = 6,250,000 bytes (6.25 MB). A TCP window of 6.25 MB is needed to fully utilize a 1 Gbps link with 50 ms RTT. A default 64 KB window would limit throughput to ~10 Mbps.

Tips & Best Practices

The Bandwidth-Delay Product Explained

Imagine a pipe: bandwidth is the pipe's width, RTT is its length. The BDP is the pipe's volume — the amount of data needed to fill it. If you send less than BDP, the pipe has empty space (wasted bandwidth). The TCP window must be at least as large as the BDP to keep the pipe full.

Real-World Impact

A 10 Gbps link between US East and US West (RTT ~60 ms) has a BDP of 75 MB. A default 64 KB window would achieve only ~8.7 Mbps on a 10,000 Mbps link, about 0.09% utilization. Proper tuning achieves near-wire speed.
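The utilization figure can be reproduced from the throughput formula; a quick sketch:

```python
window_bytes = 64 * 1024           # default 64 KB window
rtt_s = 0.060                      # ~60 ms coast-to-coast RTT
link_bps = 10_000_000_000          # 10 Gbps link

# Window-limited throughput: one window per round trip.
throughput_bps = window_bytes * 8 / rtt_s
utilization = throughput_bps / link_bps
print(f"{throughput_bps / 1e6:.1f} Mbps, {utilization:.3%} utilization")
```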

Modern TCP Congestion Control

BBR (Google's TCP congestion control) probes for optimal window size automatically, adjusting based on measured bandwidth and RTT. It performs better than traditional Cubic on high-BDP links. Consider enabling BBR on servers (net.ipv4.tcp_congestion_control=bbr).

Frequently Asked Questions

What is the default TCP window size?

Without window scaling, the maximum TCP window is 64 KB (the window field is 16 bits). With window scaling (RFC 7323, standard on modern operating systems), windows can reach roughly 1 GB. Most operating systems auto-tune the window size, but may not optimize for high-BDP paths without configuration.
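The ~1 GB ceiling falls out of the window-scale arithmetic (the maximum shift of 14 comes from RFC 7323):

```python
base_window = 2**16 - 1          # largest value of the 16-bit window field
max_scale = 14                   # maximum shift allowed by RFC 7323
print(base_window << max_scale)  # 1073725440 bytes, just under 1 GiB
```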

Why does latency limit throughput?

TCP requires acknowledgments. The sender can only have window_size bytes in flight. Once the window is full, it waits for ACKs. With 50 ms RTT and 64 KB window: 64 KB / 50 ms = ~10 Mbps, regardless of available bandwidth.
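This stop-and-wait effect can be modeled with a simple round-counting sketch (an idealized model that ignores slow start and loss; the function name is illustrative):

```python
import math

def transfer_time_s(total_bytes: int, window_bytes: int, rtt_s: float) -> float:
    """Idealized model: each RTT, at most one full window is sent and ACKed."""
    rounds = math.ceil(total_bytes / window_bytes)
    return rounds * rtt_s

# Transferring 100 MB over a 50 ms RTT path with a 64 KB window:
t = transfer_time_s(100 * 1024 * 1024, 64 * 1024, 0.050)
print(t)  # 1600 rounds at 50 ms each: 80 seconds, i.e. ~10.5 Mbps effective
```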

How do I tune TCP buffers on Linux?

Set sysctl values: net.core.rmem_max and net.core.wmem_max (max buffer), net.ipv4.tcp_rmem and net.ipv4.tcp_wmem (auto-tuning range: min, default, max). Set max values to at least the BDP for your highest-bandwidth, highest-latency path.
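As a sketch of the sizing arithmetic, the snippet below derives buffer values from the BDP and prints the corresponding sysctl commands. The 2× headroom factor and the min/default values in the auto-tuning triples are illustrative assumptions, not requirements:

```python
def tcp_buffer_sysctls(bandwidth_mbps: float, rtt_ms: float) -> dict:
    """Suggest Linux TCP buffer sysctls sized to a path's BDP.
    Max values are set to 2x BDP to leave headroom for bursts
    (a common rule of thumb, not a hard requirement)."""
    bdp = int(bandwidth_mbps * 1_000_000 * rtt_ms / 1000 / 8)
    buf_max = 2 * bdp
    return {
        "net.core.rmem_max": buf_max,
        "net.core.wmem_max": buf_max,
        # min, default, max for the kernel's auto-tuning range
        "net.ipv4.tcp_rmem": f"4096 131072 {buf_max}",
        "net.ipv4.tcp_wmem": f"4096 16384 {buf_max}",
    }

for key, value in tcp_buffer_sysctls(1000, 50).items():
    print(f"sysctl -w {key}='{value}'")
```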

Does TCP window scaling work with load balancers?

Most modern load balancers and proxies support window scaling. However, some older hardware load balancers may strip or ignore window scaling options. Test end-to-end to verify. This is a common cause of unexpectedly low throughput.

What about UDP performance?

UDP has no window concept — it sends as fast as the application allows. QUIC (used by HTTP/3) uses its own congestion control with window-like mechanics but avoids TCP's head-of-line blocking. BDP is still relevant for QUIC buffer sizing.

How does packet loss affect the window?

TCP congestion control (Cubic, BBR) reduces the window on packet loss. High loss rates collapse the effective window, severely limiting throughput. A 1% loss rate can reduce throughput by 50–75%. Monitor loss alongside window tuning.
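One way to quantify the loss penalty is the Mathis approximation for loss-limited, Reno-style TCP throughput: rate ≈ (MSS / RTT) × (C / √p), with C ≈ √(3/2). It is a rough model (BBR is far less loss-sensitive), but it shows why throughput scales with 1/√p:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Mathis et al. approximation for loss-limited Reno-style throughput:
    rate ≈ (MSS / RTT) * (C / sqrt(p)), with C ≈ sqrt(3/2)."""
    c = math.sqrt(3 / 2)
    rate_bps = (mss_bytes * 8 / (rtt_ms / 1000)) * (c / math.sqrt(loss_rate))
    return rate_bps / 1_000_000

# 1460-byte MSS, 50 ms RTT: a 100x increase in loss costs 10x in throughput.
print(mathis_throughput_mbps(1460, 50, 0.0001))  # 0.01% loss
print(mathis_throughput_mbps(1460, 50, 0.01))    # 1% loss
```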

Related Pages