Now that almost all warnings have been removed, let's enable
zero-warning mode via -dW. All tests were adjusted, except two:
- mcli/mcli_start_progs.vtc:
the programs section currently cannot be silenced
- stats/stats-file.vtc:
the warning comes from the stats file itself on comment lines.
All other ones are now OK.
No fewer than 30 tests were missing timeouts, which prevented them from
being started in zero-warning mode. Since these timeouts are not
supposed to trigger, they have been set to 30s so that they never do,
and the tests no longer produce any warning.
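As an illustration, the added timeouts look like this inside a test's
embedded configuration (a sketch only; sections and exact values vary
per test):

    haproxy h1 -conf {
        defaults
            mode http
            timeout connect 30s
            timeout client  30s
            timeout server  30s
    } -start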
When a message is compressed, a "Vary" header is added with the
"accept-encoding" value. However, a new header is always added,
regardless of whether a Vary header is already present. In addition,
when a Vary header is already there, its values are not checked to make
sure "accept-encoding" is not already listed. So it is possible to end
up with it twice.
To improve this part, we now check the Vary header values and only add
"accept-encoding" if it was not found. In addition, the
"accept-encoding" value is appended to the last Vary header found, if
any. Otherwise, a new header is added.
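For example (illustrative values; header names and values are
case-insensitive):

    response before the patch:  Vary: Origin
                                Vary: accept-encoding
    response with the patch:    Vary: Origin, accept-encoding

And if "accept-encoding" is already listed in an existing Vary header,
nothing is added at all.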
With the CI occasionally slowing down, we're starting to see some
spurious failures again despite the long 1-second timeouts. This causes
disturbing false positives and doesn't provide as much value as it
could. Yet even at this delay, waiting for the tests to complete is
already a pain for developers.
This commit adds support for the new environment variable
HAPROXY_TEST_TIMEOUT that will allow anyone to modify the connect,
client and server timeouts. It was set to 5 seconds by default, which
should be plenty for quite some time in the CI. All relevant values
that were 200ms or above were replaced by this one. A few larger
values were left as they are special. One test for the set-timeout
action that used to rely on a fixed 1-sec value was extended to a fixed
5-sec one, as the timeout is normally not reached, but it needs to be
known in order to compare the old and new values.
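The tests then reference the variable with a 5-second fallback, roughly
as follows (a sketch, assuming vtest's "${VAR-default}" expansion):

    defaults
        timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"

so that, e.g., setting HAPROXY_TEST_TIMEOUT=10s in the environment
enlarges all of them at once.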
Some regtests involve multiple requests from multiple clients, which can
be dispatched as multiple requests to a server. It turns out that the
idle connection sharing works so well that very quickly only a few
connections are used, and some of the remaining idle server connections
regularly time out right when they are about to be reused, causing
those random "HTTP header incomplete" traces in the logs that make the
tests fail often. In the end this is only an artefact of the test
environment.
And indeed, some tests like normalize-uri, which perform a lot of
reuse, fail very often, about 20-30% of the time in the CI, and 100% of
the time locally when running 1000 tests in a row. Others like ubase64,
sample_fetches or vary_* fail less often, but still a lot.
This patch addresses this by adding "tune.idle-pool.shared off" to all
tests which have at least twice as many requests as clients. It proves
very effective, as not a single error happens on normalize-uri anymore
after 10000 tests. Also, 100 full runs of all tests yield no error
anymore.
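Concretely, each affected test gains the following in the global
section of its embedded configuration:

    global
        tune.idle-pool.shared off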
One test remains tricky, http_abortonclose: it used to fail ~10 times
per 1000 runs, and with this workaround it still fails once every 1000
runs. But the test is complex, and it contains a warning mentioning a
possible issue when run in parallel due to port reuse.
Ilya reported that the "which" utility is not that portable and is
absent from Fedora. "type -p" is not portable either, and the correct
solution appears to be "command -v", so let's use this for now; we can
change it again in the future in case of problems.
Link: https://www.mail-archive.com/haproxy@formilux.org/msg36332.html
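For example, in a POSIX shell:

    # "command -v" is specified by POSIX, unlike "which" or "type -p"
    command -v haproxy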
Make HAProxy set the `Vary: Accept-Encoding` response header if it compressed
the server response.
Technically the `Vary` header SHOULD also be set for responses that
would normally be compressed based on the current configuration, but
are not, due to a missing or invalid `Accept-Encoding` request header
or due to the maximum compression rate being exceeded.
Not setting the header in these cases does no real harm, though: an
uncompressed response might be returned by a cache even if a compressed
one could be retrieved from HAProxy. This increases the traffic to the
end user if the cache is unable to compress itself, but it saves
another roundtrip to HAProxy.
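For illustration, with a typical compression setup such as:

    compression algo gzip
    compression type text/html text/plain

a response compressed by HAProxy now also carries:

    Vary: Accept-Encoding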
see the discussion on the mailing list: https://www.mail-archive.com/haproxy@formilux.org/msg34221.html
Message-ID: 20190617121708.GA2964@1wt.eu
A small issue remains: the User-Agent is not added to the `Vary`
header, despite being relevant to the response. However, adding the
User-Agent would make responses effectively uncacheable, and a
Mozilla/4 is unlikely to be seen in the wild in 2019.
Add a reg-test to verify the behaviour described in this commit message.
see issue #121
Should be backported to all branches with compression (i.e. 1.6+).
Some reg tests and their dependencies have been renamed, and they may
be referenced by the .vtc files, so this patch also updates the
references to these dependencies.
This patch replaces the LEVEL variable by the REGTESTS_TYPES variable,
which is more mnemonic and human readable. It is used as a filter to
run the reg tests scripts, where a commented #REGTEST_TYPE may be
defined to designate their types.
Running the following command:
$ REGTESTS_TYPES=slow,default make reg-tests
will start all the reg tests where REGTEST_TYPE is defined as 'slow' or
'default'. Note that 'default' is also the default value of
REGTEST_TYPE when not specified; it is dedicated to all the current
h*.vtc files. When REGTESTS_TYPES is not specified, there is no filter
at all: all the tests are run.
This patch also defines REGTEST_TYPE with the 'slow' value for all the
s*.vtc files, the 'bug' value for all the b*.vtc files, and the
'broken' value for all the k*.vtc files.
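For example, a slow test declares its type with a commented line near
the top of its .vtc file (sketch):

    #REGTEST_TYPE=slow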
RFC 7232 section 2.3.3 states:
> Note: Content codings are a property of the representation data,
> so a strong entity-tag for a content-encoded representation has to
> be distinct from the entity tag of an unencoded representation to
> prevent potential conflicts during cache updates and range
> requests. In contrast, transfer codings (Section 4 of [RFC7230])
> apply only during message transfer and do not result in distinct
> entity-tags.
Thus a strong ETag must be changed when compressing. Usually this is
done by converting it into a weak ETag, which represents a
semantically, but not byte-by-byte, identical response. A conversion to
a weak ETag still allows If-None-Match to work.
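For example, reusing the illustrative entity-tag from RFC 7232:

    before compression:  ETag: "xyzzy"
    after compression:   ETag: W/"xyzzy"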
This should be backported to 1.9 and might be backported to every supported
branch with compression.