With the CI occasionally slowing down, we are starting to see spurious
failures again despite the long 1-second timeouts. These failures report
false positives that are disturbing and do not provide as much value as
they could. Yet even at this delay it is already painful for developers
to wait for the tests to complete.
This commit adds support for a new environment variable,
HAPROXY_TEST_TIMEOUT, that allows anyone to modify the connect, client
and server timeouts. It defaults to 5 seconds, which should be plenty
for quite some time in the CI. All relevant values that were 200ms or
above were replaced by this one. A few larger values were left as-is
because they are special. One test for the set-timeout action, which
used to rely on a fixed 1-second value, was extended to a fixed
5-second one: the timeout is normally not reached, but its value needs
to be known in order to compare the old and new values.
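As a rough sketch of how a regtest can consume the variable (the
shell-style "-5s" default notation below is an assumption about the
exact convention, actual test files may differ), the haproxy config
section of a .vtc file would reference it in its timeouts:

    defaults
        mode http
        timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"

Exporting, say, HAPROXY_TEST_TIMEOUT=1s before launching the tests then
shortens all of these timeouts at once without touching the files.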
Some regtests involve multiple requests from multiple clients, which can
be dispatched as multiple requests to a server. It turns out that idle
connection sharing works so well that very quickly only a few
connections are used, and regularly some of the remaining idle server
connections time out at the very moment they are about to be reused,
causing those random "HTTP header incomplete" traces in the logs that
make the tests fail often. In the end this is only an artefact of the
test environment.
And indeed, some tests like normalize-uri, which perform a lot of reuse,
fail very often, about 20-30% of the time in the CI and 100% of the
time locally when running 1000 tests in a row. Others like ubase64,
sample_fetches or vary_* fail less often but still far too frequently.
This patch addresses this by adding "tune.idle-pool.shared off" to all
tests which have at least twice as many requests as clients. It proves
very effective: not a single error happens on normalize-uri anymore
after 10000 tests, and 100 full runs of all tests no longer yield any
error either.
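Concretely, each affected .vtc file gets the directive in the global
section of its embedded haproxy configuration, along these lines (a
minimal sketch, the surrounding configuration is omitted):

    global
        tune.idle-pool.shared off

With idle server connection sharing disabled, the clients in these
tests are much less likely to pick up an idle connection that is just
about to time out, which is the failure mechanism described above.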
One test, http_abortonclose, remains tricky: it used to fail ~10 times
per 1000 runs and with this workaround still fails about once every
1000 runs. But the test is complex and carries a warning mentioning a
possible issue when run in parallel due to port reuse.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know about (see the parse_encoding_value
function), we will not store it. This prevents unexpected errors caused
by cache collisions that could happen in accept_encoding_hash_cmp.
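As an illustration only (a hand-written sketch, not the actual reg-test:
the encoding token, names and sizes are made up, and directive spellings
may differ slightly between versions), a response carrying an unknown
content-encoding should not be cached, so a second identical request
must reach the server again:

    varnishtest "Unknown content-encoding is not stored in the cache"
    feature ignore_unknown_macro

    server s1 {
        rxreq
        txresp -hdr "Vary: accept-encoding" -hdr "Content-Encoding: unknown-enc" -hdr "Cache-Control: max-age=5" -bodylen 48
        rxreq
        txresp -hdr "Vary: accept-encoding" -hdr "Content-Encoding: unknown-enc" -hdr "Cache-Control: max-age=5" -bodylen 48
    } -start

    haproxy h1 -conf {
        defaults
            mode http
            timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
            timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
            timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"

        frontend fe
            bind "fd@${fe}"
            default_backend be

        backend be
            http-request cache-use my_cache
            http-response cache-store my_cache
            server s1 ${s1_addr}:${s1_port}

        cache my_cache
            total-max-size 3
            max-age 20
            process-vary on
    } -start

    client c1 -connect ${h1_fe_sock} {
        txreq -hdr "Accept-Encoding: gzip"
        rxresp
        expect resp.status == 200
        txreq -hdr "Accept-Encoding: gzip"
        rxresp
        expect resp.status == 200
    } -run

    # if the response had been stored despite its unknown encoding, the
    # second request would have been served from the cache and the server
    # would still be waiting for it here
    server s1 -wait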
This patch fixes GitHub Issue #988. Commit ce9e7b2521 was not
sufficient, because it fell back to a hash comparison if the bitmap of
known encodings was not acceptable, instead of directly returning that
the cached response is not compatible.
This patch also extends the reg-test to test the hash collision that was
mentioned in #988.
Vary handling is 2.4, no backport needed.