The "http-restrict-req-hdr-names" option can now be set to restrict allowed
characters in the request header names to the "[a-zA-Z0-9-]" charset.
Idea of this option is to not send header names with non-alphanumeric or
hyphen character. It is especially important for FastCGI application because
all those characters are converted to underscore. For instance,
"X-Forwarded-For" and "X_Forwarded_For" are both converted to
"HTTP_X_FORWARDED_FOR". So, header names can be mixed up by FastCGI
applications. And some HAProxy rules may be bypassed by mangling header
names. In addition, some non-HTTP compliant servers may incorrectly handle
requests when header names contain characters ouside the "[a-zA-Z0-9-]"
charset.
When this option is set, the policy must be specify:
* preserve: It disables the filtering. It is the default mode for HTTP
proxies with no FastCGI application configured.
* delete: It removes request headers with a name containing a character
outside the "[a-zA-Z0-9-]" charset. It is the default mode for
HTTP backends with a configured FastCGI application.
* reject: It rejects the request with a 403-Forbidden response if it
contains a header name with a character outside the
"[a-zA-Z0-9-]" charset.
The option is evaluated per-proxy and after http-request rules evaluation.
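As an illustration, a minimal configuration sketch (the backend name and
FastCGI details are hypothetical):

    backend be_php
        mode http
        # drop request headers whose name contains a character outside
        # "[a-zA-Z0-9-]" before they reach the FastCGI application
        option http-restrict-req-hdr-names delete
        use-fcgi-app php-fpm
        server app 127.0.0.1:9000 proto fcgi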
This patch may be backported to avoid any security issue with FastCGI
applications (thus as far as 2.2).
Since 2.5, it is possible to define TCP/HTTP rulesets in defaults
sections. However, rules defining a capture in a defaults section were not
properly handled because they were not shared with the proxies inheriting
from that defaults section. This led to a crash when haproxy tried to
store a new capture.
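For illustration, a configuration of this shape could trigger the crash
(section and server names are hypothetical):

    defaults common
        mode http
        http-request capture req.hdr(host) len 64

    frontend fe from common
        bind :8080
        default_backend be

    backend be from common
        server s1 127.0.0.1:8000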
So now, to fix the issue, when a new proxy is created, its list of
captures points to the list of its defaults section. It may be NULL or
not. All new captures are prepended to this list. It is not a problem to
share the same defaults section between several proxies, because it is not
altered, and we take care not to release it when the corresponding proxies
are freed, but only when the defaults proxies themselves are freed. To do
so, defaults proxies are now unreferenced at the end of the free_proxy()
function instead of at the beginning.
This patch should fix issue #1674. It must be backported to 2.5.
There is no connection reuse, to avoid race conditions in HTTP reg-tests.
But from time to time, normalize_uri.vtc still reports an "HTTP header
incomplete" error. This seems to be because HTTP keep-alive is still used
at the session level. Thus, when the same server section is used to handle
multiple requests for the same client, via a "-repeat" statement, a new
request from this client may be handled by HAProxy before the server is
restarted.
To avoid any trouble, HTTP keep-alive is disabled on the server side by
adding a "Connection: close" header to responses. It seems to be ok now.
We let the CI decide.
With the CI occasionally slowing down, we're starting to see again some
spurious failures despite the long 1-second timeouts. These report false
positives that are disturbing and do not provide as much value as they
could. However, at this length, the delay already becomes a pain for
developers waiting for the tests to complete.
This commit adds support for the new environment variable
HAPROXY_TEST_TIMEOUT that will allow anyone to modify the connect,
client and server timeouts. It was set to 5 seconds by default, which
should be plenty for quite some time in the CI. All relevant values
that were 200ms or above were replaced by this one. A few larger
values were left as they are special. One test for the set-timeout
action that used to rely on a fixed 1-sec value was extended to a
fixed 5-sec one, as the timeout is normally not reached, but its value
needs to be known in order to compare the old and new values.
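In a reg-test configuration, the variable is typically consumed with a
default fallback, along these lines (a sketch, assuming the usual
environment-variable default syntax):

    defaults
        mode http
        timeout connect "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout client  "${HAPROXY_TEST_TIMEOUT-5s}"
        timeout server  "${HAPROXY_TEST_TIMEOUT-5s}"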
This reverts commit 597909f4e6
http-after-response rules evaluation was changed to do the same as what
was done for http-response in the code. However, the opposite must be
performed instead: only the rules of the current section must be stopped.
Thus the above commit is reverted and the http-response rules evaluation
will be fixed instead.
Note that only the "allow" action is concerned. It is most probably an
uncommon action for an http-after-response rule.
This patch must be backported as far as 2.2 if the above commit was
backported.
A TCP/HTTP action can stop the rules evaluation. However, it should be
applied to the current section only. For instance, for http-request rules,
an "allow" in a frontend must stop the evaluation of the rules defined in
this frontend. But the backend rules, if any, must still be evaluated.
For http-response rulesets, according to the configuration manual, the
same must be true. Only the "allow" action is concerned. However, since
the beginning, this action has stopped the evaluation of all remaining
rules, not only those of the current section.
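As a sketch of the expected behavior (names hypothetical): for responses,
backend rules are evaluated before frontend rules, so the "allow" below
must stop only the backend's own ruleset:

    frontend fe
        bind :8080
        http-response set-header X-From-Fe yes   # still evaluated
        default_backend be

    backend be
        http-response allow                      # stops this ruleset only
        http-response set-header X-From-Be yes   # never evaluated
        server s1 127.0.0.1:8000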
This patch may be backported to all supported versions, but it is not so
critical because the bug has existed for a while. I doubt it will break
any existing configuration, because the current behavior is
counterintuitive.
3 scripts are added:
* startup/default_rules.vtc to check configuration parsing
* http-rules/default_rules.vtc to check evaluation of HTTP rules
* tcp-rules/default_rules.vtc to check evaluation of TCP rules
http-after-response rules evaluation must be stopped after an "allow". It
means the frontend ruleset must not be evaluated if an "allow" was
performed in the backend ruleset. Internally, the evaluation must be
stopped on the HTTP_RULE_RES_STOP return value. Only the "allow" action is
concerned by this change.
Thanks to this patch, http-response and http-after-response behave in the
same way.
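A sketch of the resulting behavior (names hypothetical): since backend
rules run first for responses, the backend "allow" now stops the whole
evaluation:

    backend be
        http-after-response allow                # stops all remaining rules
        server s1 127.0.0.1:8000

    frontend fe
        bind :8080
        http-after-response set-header X-Fe yes  # no longer evaluated
        default_backend be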
This patch should be backported as far as 2.2.
Sometimes it is convenient to remap large sets of URIs to new ones (e.g.
after a site migration). This can be achieved using "http-request
redirect" combined with maps, but one difficulty there is that
non-matching entries will return an empty response. In order to avoid
this, duplicating the operation as an ACL condition ending in "-m found"
is possible, but it becomes complex and error-prone, while it is known
that an empty URL is not valid in a Location header.
This patch addresses this by improving the redirect rules to be able to
simply ignore the rule and skip to the next one if the result of the
evaluation of the "location" expression is empty. However in order not
to break existing setups, it requires a new "ignore-empty" keyword.
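For example, a map-based rule could look like this (map path
hypothetical):

    # redirect via a map lookup; when the lookup yields an empty string,
    # skip this rule instead of emitting an empty Location header
    http-request redirect location %[path,map(/etc/haproxy/moved.map)] code 301 ignore-empty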
Redirect rules used to carry an ACT_FLAG_FINAL flag, used during parsing
to emit a warning if the rule is followed by another one, so here we only
set it if the option is not present. The http_apply_redirect_rule()
function now returns a third value to mention that it did nothing and that
this was not an error, so that callers can simply ignore the rule. The
regular "redirect" rules were not modified, however, since this does not
apply there.
The map_redirect VTC was completed with such a test and updated to 2.5,
and an example was added to the documentation.
Some regtests involve multiple requests from multiple clients, which can
be dispatched as multiple requests to a server. It turns out that the
idle connection sharing works so well that very quickly few connections
are used, and regularly some of the remaining idle server connections
time out at the moment they were going to be reused, causing those random
"HTTP header incomplete" traces in the logs that make them fail often. In
the end this is only an artefact of the test environment.
And indeed, some tests like normalize-uri which perform a lot of reuse
fail very often, about 20-30% of the time in the CI, and 100% of the time
locally when running 1000 tests in a row. Others like ubase64,
sample_fetches or vary_* fail less often but still a lot in tests.
This patch addresses this by adding "tune.idle-pool.shared off" to all
tests which have at least twice as many requests as clients. It proves
very effective, as not a single error happens on normalize-uri anymore
after 10000 tests. Also, 100 full runs of all tests yield no error
anymore.
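The workaround is simply the following global setting:

    global
        tune.idle-pool.shared off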
One test is tricky, http_abortonclose: it used to fail ~10 times per
1000 runs, and with this workaround it still fails once every 1000 runs.
But the test is complex, and there's a warning in it mentioning a
possible issue when run in parallel due to port reuse.
The normalize-uri http rule is marked as experimental, so it cannot be
activated without the global 'expose-experimental-directives' directive.
The associated vtc is updated to be able to use it.
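Concretely, the test's configuration needs something along these lines
(the normalizer name is just an example):

    global
        expose-experimental-directives

    frontend fe
        bind :8080
        http-request normalize-uri path-merge-slashes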
The map_redirect test already tests for "show map", "del map" and
"clear map" but doesn't have any "add map" command. Let's add some
trivial ones involving one regular entry and two other ones added as
payload, checking they are properly returned.
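On the CLI this looks roughly like the following (map name and entries
are hypothetical; the "<<" form submits the subsequent lines as a payload
terminated by an empty line):

    add map /etc/haproxy/moved.map /old1 /new1
    add map /etc/haproxy/moved.map <<
    /old2 /new2
    /old3 /new3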
This normalizer removes "/./" segments from the path component.
Usually the dot refers to the current directory, which renders those
segments redundant.
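Assuming the naming scheme described in the renaming patch below, the
rule would be used as:

    # "/a/./b" becomes "/a/b"
    http-request normalize-uri path-strip-dot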
See GitHub Issue #714.
This patch renames all existing uri-normalizers into a more consistent naming
scheme:
1. The part of the URI that is being touched.
2. The modification being performed as an explicit verb.
This normalizer merges `../` path segments with the preceding segment,
removing both the preceding segment and the `../`.
Empty segments do not receive special treatment. The `merge-slashes` normalizer
should be executed first.
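A sketch of the intended ordering (normalizer names assume the current
naming scheme):

    # with merge-slashes applied first, "/a//b/../c" becomes "/a/c"
    http-request normalize-uri path-merge-slashes
    http-request normalize-uri path-strip-dotdot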
See GitHub Issue #714.
This patch adds the -m flag, which allows specifying the header name
matching method when deleting headers from an http request/response.
Currently beg, end, sub, str and reg are supported.
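For example (header names are hypothetical):

    # remove all request headers whose name starts with "x-private-"
    http-request del-header x-private- -m beg
    # remove all response headers whose name matches a regex
    http-response del-header ^(x-debug|x-trace)$ -m reg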
This is related to GitHub issue #909
Its sole remaining purpose was to display "proxy foo started", which
has little benefit and pollutes output for those with plenty of proxies.
Let's remove it now.
The VTCs were updated to reflect this, because many of them had explicit
counts of dropped lines to match this message.
This is tagged as MEDIUM because some users may be surprised by the
loss of this quite old message.
This new script tests set-path/replace-path and set-pathq/replace-pathq
rules. It also tests path and pathq sample fetches.
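The rules under test look like the following (values are illustrative):

    # rewrite the path, keeping the query string untouched
    http-request replace-path (.*) /prefix\1
    # set the path and the query string at once
    http-request set-pathq /status?full=1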
This patch should be backported to 2.2 if corresponding keywords are also
backported.
A few regtests continue to regularly fail in highly loaded VMs because
they have very short timeouts. Actually the goal of running with short
timeouts was to make sure we do not uselessly wait during tests designed
to trigger them, but these timeouts here are never supposed to fire at
all, so they don't need to be kept in the 15-20ms range. They do not
pose any issue on any regular machine, but VMs are often suffering from
huge time jumps and cannot always produce responses in that short of a
time.
Just like with commit ce6fc25b1 ("REGTEST: increase timeouts on the
seamless-reload test"), let's raise these short timeouts to 1 second.
A few other ones remain set to 150-200ms and do not seem to cause any
issue. Some are actually expected to trigger so let's not touch them
for now.
This reg-test exercises spaces in an ACL file; it tries to add new
entries with spaces from the CLI.
This reg-test could be backported to all stable branches if the fix for
spaces on the CLI is backported.
A reg test has been added to ensure the evaluation of http-after-response
rules is functional for all kinds of responses (server, applet and
internal responses).
2 reg tests have been added to ensure the HTTP return action is
functional. One reg test is about returning error files. The other one is
about returning default responses and responses based on string or file
payloads.
With this new reg test, we ensure the strict rewriting mode of HTTP rules
is functional. The mode is tested for request and response rules. The
default mode (strict), the switch off and the reset on a new ruleset are
tested for both.
First, concat() is a converter, not a sample fetch. So use the str()
sample fetch with no string and call concat on it. Then, the argument of
the set-uri rule must be a log-format string, so the expression must be
inside %[] to be evaluated.
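The corrected form thus looks something like this (the variable name and
values are hypothetical):

    http-request set-var(req.param) str(bar)
    # empty str() feeds the concat() converter; %[] makes the whole
    # expression a log-format sample, yielding "/foo/bar" here
    http-request set-uri %[str(),concat(/foo/,req.param,)]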
In Travis build https://travis-ci.com/haproxy/haproxy/jobs/195477767 we
can see that OSX tends to pad zeroes at a different position than Linux
in compact IPv6 addresses, resulting in a failure in the checks which
were developed on Linux. This patch uses [0:]* in holes and [0:]+ at the
end of addresses to allow the different variants. It will unfortunately
also accept impossible addresses, but there is no reason to care about
such crap being emitted.
Some reg tests and their dependencies have been renamed. They may be
referenced by the .vtc files, so this patch also modifies the references
to these dependencies.
This patch replaces the LEVEL variable with the REGTESTS_TYPES variable,
which is more mnemonic and human readable. It is used as a filter to run
the reg tests scripts, where a commented #REGTEST_TYPE may be defined to
designate their types.
Running the following command:
   $ REGTESTS_TYPES=slow,default make reg-tests
will start all the reg tests where REGTEST_TYPE is defined as 'slow' or
'default'. Note that 'default' is also the default value of REGTEST_TYPE
when it is not specified; it is dedicated to the current h*.vtc files.
When REGTESTS_TYPES is not specified, there is no filter at all: all the
tests are run.
This patch also defines REGTEST_TYPE with the 'slow' value for all the
s*.vtc files, the 'bug' value for all the b*.vtc files, and the 'broken'
value for all the k*.vtc files.
Because HAProxy may decide to close 301 responses, like other internal
responses, it is safer to use a different client for these requests. It
is not the purpose of this test to verify keep-alive in such cases.
These ones are not needed anymore since commit 97aaa67 ("MINOR: mux-h2:
only increase the connection window with the first update"). The tests
should now be more reliable. It might be worth simply removing all the
explicit handshakes, though they don't hurt and still serve as
documentation.