Tests performed between a server connected at 1 Gbps and a client
connected at 100 Mbps, 95 ms apart, showed that:
  - 1.1 MB must be in flight to fill the link (consistent with the
    bandwidth-delay product: 100 Mbit/s x 95 ms = 9.5 Mbit ~= 1.19 MB)
  - rare but inevitable losses are sufficient to make cubic's window
    collapse quickly and take long to recover
  - a 100 MB object takes 69s to download
  - tolerating 1 loss between two ACKs is enough to shrink the
    download time to 20-22s
  - tolerating 2 losses brings it down to 17-20s
  - tolerating 4 losses reaches 14-17s

At 100 concurrent connections that fill the server's link:
  - a tolerance of 0 losses shows 2-3% losses
  - a tolerance of 1 loss shows 3-5% losses
  - a tolerance of 2 losses shows 10-13% losses
  - a tolerance of 4 losses shows 23-29% losses

As such, while setting this tolerance above zero can sometimes bring a
significant gain, it can also significantly waste bandwidth by sending
far more than can be received. While it is probably not a solution to
real-world problems, it repeatedly proved to be a very effective
troubleshooting tool, helping to identify different root causes of low
transfer speeds. In spirit it is comparable to the no-cc congestion
algorithm, i.e. it must not be used except for experimentation.
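A minimal configuration sketch of how this tolerance could be set,
assuming the tuning directive introduced here is named
tune.quic.cc.cubic.min-losses and lives in the global section (the
directive name and its exact semantics are assumptions, not stated in
this message):

    global
        # Assumed directive name: raise the number of losses tolerated
        # between two ACKs before cubic shrinks its window. Anything
        # beyond the default is for experimentation and troubleshooting
        # only, like the no-cc congestion algorithm.
        tune.quic.cc.cubic.min-losses 2

As the figures above show, higher values speed up isolated transfers on
lossy paths, but can waste considerable bandwidth once the link is
actually congested.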