There's an issue related to shutting down POST transfers or closing the
connection after the end of the upload: the shutdown is forwarded to the
server regardless of the abortonclose option. The problem this causes is
that during a scan, a brute-force attack or the like, all source ports may
end up exhausted with all sockets in TIME_WAIT state.
There are in fact multiple issues at once:
- no action is taken for the close, it automatically happens at the lower
layers thanks to channel_auto_close(), so we cannot act on NOLINGER;
- we *do* want to continue to send a clean shutdown in tunnel mode because
some protocols transported over HTTP may need this, regardless of option
abortonclose, so we can't set the option unconditionally;
- for all other modes, we do want to close the dirty way because, whether
we've sent everything or not, we don't want to eat all source ports.
The solution is a bit complex and applies to the DONE/TUNNEL states:
1) disable automatic close for everything not a tunnel and not just
keep-alive / server-close. Force-close is now covered, as is HTTP/1.0
which implicitly works in force-close mode ;
2) when processing option abortonclose, we know we can disable lingering
if the client has closed and the connection is not in tunnel mode.
Since the last case above leads to a situation where the client side reports
an error, we know the connection will not be reused, so leaving the flag on
the stream-interface is safe. A client closing in the middle of the data
transmission already aborts the transaction so this case is not a problem.
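For reference, a minimal sketch of a configuration relying on this option
(hypothetical names):

    backend app
        option abortonclose
        server srv1 192.168.0.10:8080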
This fix must be backported to 1.5 where the problem was detected.
Due to the code being mostly inspired from the tcp-request parser while
the two don't work the same way, it did some crap: the "len" argument
could be mismatched and the length then used uninitialized.
This is only possible in frontends of course, but it will finally
make it possible to capture arbitrary http parts, including URL
parameters or parts of the message body.
It's worth noting that an ugly (char **) cast had to be done to
call sample_fetch_string(), which is caused by 5 or 6 levels
of inheritance of this type in the API. Here it's harmless since
the function uses it as a const, but this API madness must be
fixed, starting with the one or two rare functions that modify
the args and inflict this on each and every keyword parser.
(cherry picked from commit 484a4f38460593919a1c1d9a047a043198d69f45)
This patch introduces quoting, which makes it possible to write
configuration strings including spaces without escaping them.
Strong (single quotes) and weak (double quotes) quoting are supported.
Weak quoting supports escaping and special characters, while strong
quoting does not interpret anything.
This patch could break configuration files where ' and " were used.
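For example (hypothetical configuration lines):

    # weak quoting: spaces are preserved, escapes still work
    option httpchk OPTIONS "/health check" HTTP/1.0
    # strong quoting: the content is taken literally, nothing is interpreted
    reqrep '^([^ :]*) /static/(.*)' '\1 /\2'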
Chad Lavoie reported an interesting regression caused by the latest
updates to automatically detect the processes a peers section runs on.
It turns out that if a config has neither nbproc nor a bind-process
statement and depending on the frontend->backend chaining, it is possible
to evade all bind_proc propagations, resulting in assigning only ~0UL (all
processes, which is 32 or 64) without ever restricting it to nbproc. It
was not visible in backends until they started to reference peers sections
which saw themselves with 64 processes at once.
This patch addresses this by replacing all those ~0UL with nbits(nbproc).
That way all "bind-process" settings *default* to the number of processes
defined in nbproc instead of 32 or 64.
This fix could possibly be backported into 1.5, though there is no indication
that this bug could have any effect there.
Issuing a "show sess all" prior to a "show stat" on the CLI results in no
proxy being dumped because the scope_len union member was not properly
reinitialized.
This fix must be backported into 1.5.
They're caused by the cast from a pointer to long long on 32-bit platforms.
src/pattern.c: In function 'pat_match_str':
src/pattern.c:479:44: warning: cast from pointer to integer of different size [-Wpointer-to-int-cast]
Body processing is still fairly limited, but this is a start. It becomes
possible to apply a regex to find contents in order to decide where to
route a request, for example. Only the first chunk is parsed for now, and the
response is not yet available (the parsing function must be duplicated for
this).
req.body : binary
This returns the HTTP request's available body as a block of data. It
requires that the request body has been buffered and made available using
"option http-buffer-request". In case of a chunked-encoded body, currently only
the first chunk is analyzed.
req.body_len : integer
This returns the length of the HTTP request's available body in bytes. It may
be lower than the advertised length if the body is larger than the buffer. It
requires that the request body has been buffered and made available using
"option http-buffer-request".
req.body_size : integer
This returns the advertised length of the HTTP request's body in bytes. It
will represent the advertised Content-Length header, or the size of the first
chunk in case of chunked encoding. In order to parse the chunks, it requires
that the request body has been buffered and made available using
"option http-buffer-request".
It is sometimes desirable to wait for the body of an HTTP request before
taking a decision. This is what is being done by "balance url_param" for
example. The first use case is to buffer requests from slow clients before
connecting to the server. Another use case consists in taking the routing
decision based on the request body's contents. This option placed in a
frontend or backend forces the HTTP processing to wait until either the whole
body is received, or the request buffer is full, or the first chunk is
complete in case of chunked encoding. It can have undesired side effects with
some applications abusing HTTP by expecting unbuffered transmissions between
the frontend and the backend, so this should definitely not be used by
default.
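As an illustration, a minimal sketch combining this option with the new
fetches (hypothetical ACL and backend names):

    option http-buffer-request
    acl soap_body req.body -m reg <soap:Envelope
    acl large_body req.body_len gt 1048576
    use_backend soap_servers if soap_body !large_body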
Note that this would not work for the response because we don't reset the
message state before starting to forward. For the response we would need to
1) reset the message state to MSG_100_SENT or BODY, and 2) to reset
body_len in case of chunked encoding to avoid counting it twice.
Since 1.5, the request body analyser has become independent from any
other element and does not even disturb the message forwarder anymore.
And since it's disabled by default, we can place it before most
analysers so that it can preempt any other one if an intermediate
one enables it.
The get_server_ph_post() function assumes that the buffer is contiguous.
While this is true for the whole header part, it is not necessarily true
for the end of data that fit in the reserve. In this case there's a risk
of reading past the end of the buffer by a few hundred bytes, and possibly
of crashing the process if what follows is not mapped.
The fix consists in truncating the analyzed length to the length of the
contiguous block that follows the headers.
A config workaround for this bug would be to disable balance url_param.
This fix must be backported to 1.5. It seems 1.4 did have the check.
Due to the fact that we were still considering only msg->sov for the
first byte of data after calling http_parse_chunk_size(), we used to
miscompute the input data size and to count the CRLF and the chunk size
as part of the input data. The effect is that it was possible to release
the processing with 3 or 4 missing bytes, especially if they're typed by
hand during debugging sessions. This can cause the stats page to return
some errors in admin mode, and the url_param balance algorithm to fail
to properly hash a body input.
This fix must be backported to 1.5.
If a peers section is bound to no process, it's silently discarded. If it's
bound to multiple processes, an error is emitted and the process will not
start.
This will prevent the peers section from remaining in listen state on
the incorrect process. The peers_fe pointer is set to NULL, which will
tell the peers task to commit suicide if it was already scheduled.
The peers initialization sequence is a bit complex, they're attached
to stick-tables and initialized very early in the boot process. When
we fork, if some of them must not start, it's too late to find them.
Instead, simply add a guard in their respective tasks to stop them as
soon as they try to start.
Sometimes it's very hard to disable the use of peers because an empty
section is not valid, so it is necessary to comment out all references
to the section, and then not forget to restore them in the same state
after the operation.
Let's add a "disabled" keyword just like for proxies. A ->state member
in the peers struct is even present for this purpose but was never used
at all.
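For example (hypothetical section and peer names):

    peers mypeers
        disabled
        peer node1 192.168.0.1:1024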
Maybe it would make sense to backport this to 1.5 as it's really cumbersome
there.
It's dangerous to initialize stick-tables before peers because they
start a task that cannot be stopped before we know if the peers need
to be disabled and destroyed. Move this to after the peers initialization.
If a table in a disabled proxy references a peers section, the peers
name is not resolved to a pointer to a table, but since it belongs to
a union, it can later be dereferenced. Right now it seems it cannot
happen, but it definitely will after the pending changes.
It doesn't cost anything to backport this into 1.5, it will make gdb
sessions less head-scratching.
Recently some browsers started to implement a "pre-connect" feature
consisting in speculatively connecting to some recently visited web sites
just in case the user would like to visit them. This results in many
connections being established to web sites, which end up in a 408 Request
Timeout if the timeout strikes first, or in a 400 Bad Request when the
browser decides to close them first. These pollute the logs and feed the
error counters. There was already "option dontlognull" but it is
insufficient in this case. Instead, this option does the following things:
- prevent any 400/408 message from being sent to the client if nothing
was received over a connection before it was closed;
- prevent any log from being emitted in this situation;
- prevent any error counter from being incremented.
That way the empty connection is silently ignored. Note that it is better
not to use this unless it is clear that it is needed, because it will hide
real problems. The most common reason for not receiving a request and seeing
a 408 is due to an MTU inconsistency between the client and an intermediary
element such as a VPN, which blocks too large packets. These issues are
generally seen with POST requests as well as GET requests with large
cookies. The logs
are often the only way to detect them.
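A minimal sketch, assuming the keyword introduced by this patch is the
one shipped as "option http-ignore-probes":

    frontend www
        bind :80
        option http-ignore-probes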
This patch should be backported to 1.5 since it avoids false alerts and
makes it easier to monitor haproxy's status.
There's not much reason for continuing to accept HTTP/0.9 requests
nowadays except for manual testing. Now we disable support for these
by default, unless option accept-invalid-http-request is specified,
in which case they continue to be upgraded to 1.0.
While RFC2616 used to allow an indeterminate number of digits for the
major and minor components of the HTTP version, RFC7230 has reduced
that to a single digit for each.
If a server can't properly parse the version string and falls back to 0.9,
it could then send a head-less response whose payload would be taken for
headers, which could confuse downstream agents.
Since there's no more reason for supporting a version scheme that was
never used, let's upgrade to the updated version of the standard. It is
still possible to enforce support for the old behaviour using options
accept-invalid-http-request and accept-invalid-http-response.
It would be wise to backport this to 1.5 as well just in case.
The spec mandates that content-length must be removed from messages if
Transfer-Encoding is present, not just for valid ones.
This must be backported to 1.5 and 1.4.
The rules related to how to handle a bad transfer-encoding header (one
where "chunked" is not at the final place) have evolved to mandate an
abort when this happens in the request. Previously it was only a close
(which is still valid for the server side).
This must be backported to 1.5 and 1.4.
While Transfer-Encoding is HTTP/1.1, we must still parse it in HTTP/1.0
in case an agent sends it, because it's likely that the other side might
use it as well, causing confusion. This will also result in getting rid
of the Content-Length header in such abnormal situations and in having
a clean connection.
This must be backported to 1.5 and 1.4.
RFC7230 clarified the behaviour to adopt when facing both a
content-length and a transfer-encoding: chunked in a message. While
haproxy already complied with the method for getting the message
length right, and used to detect improper content-length duplicates,
it still did not remove the content-length header when facing a
transfer-encoding: chunked. Usually it is not a problem since other
agents (clients and servers) are required to parse the message
according to the rules that have been in place since RFC2616 in
1999.
However Régis Leroy reported the existence of at least one such
non-compliant agent, so haproxy could be abused to get out of sync
with it on pipelined requests (HTTP request smuggling attack), making
it consider part of a payload as a subsequent request.
The best thing to do is then to remove the content-length according
to RFC7230. It used to be in the todo list with a fixme in the code
while waiting for the standard to stabilize, let's apply it now that
it's published.
Thanks to Régis for bringing that subject to our attention.
This fix must be backported to 1.5 and 1.4.
Document the influence of email-alert level and other configuration
parameters on when email-alerts are sent.
Signed-off-by: Simon Horman <horms@verge.net.au>
This is similar to the way email alerts are sent when servers are marked as
DOWN.
Like the log messages corresponding to these state changes, the messages
have log level notice. Thus they are suppressed by the default email-alert
level of 'alert'. To allow these messages, the email-alert level should
be set to 'notice', 'info' or 'debug', e.g.:
email-alert level notice
"email-alert mailers" and "email-alert to" settings are also required in
order for any email alerts to be sent.
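A minimal sketch (hypothetical addresses and section names):

    mailers mymailers
        mailer smtp1 192.168.0.1:25

    backend app
        email-alert mailers mymailers
        email-alert from haproxy@example.com
        email-alert to admin@example.com
        email-alert level notice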
A follow-up patch will document the above.
Signed-off-by: Simon Horman <horms@verge.net.au>
Lower the priority of email alerts for log-health-checks messages from
LOG_NOTICE to LOG_INFO.
This is to allow setups with log-health-checks enabled to disable email
alerts for health check state changes while leaving other email alerts
enabled.
In order for email alerts to be sent for health check state changes
"log-health-checks" needs to be set and "email-alert level" needs to be 'info'
or lower. "email-alert mailers" and "email-alert to" settings are also
required in order for any email alerts to be sent.
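For example, on top of the mailers setup sketched above:

    option log-health-checks
    email-alert level info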
A follow-up patch will document the above.
Signed-off-by: Simon Horman <horms@verge.net.au>
The principle of this cache is to have a global cache for all pattern
matching operations which rely on lists (reg, sub, dir, dom, ...). The
input data, the expression and a random seed are used as a hashing key.
The cached entries contain a pointer to the expression and a revision
number for that expression so that we don't accidentally use obsolete
data after a pattern update or a very unlikely hash collision.
Regarding the risk of collisions, 10k entries at 10k req/s mean a 1% risk
of a collision after 60 years; that's already much less than the memory's
reliability in most machines and more durable than most admins' life
expectancy. A collision will result in a valid result being returned
for a different entry from the same list. If this is not acceptable,
the cache can be disabled using tune.pattern.cache-size.
A test on a file containing 10k small regex showed that the regex
matching was limited to 6k/s instead of 70k with regular strings.
When enabling the LRU cache, the performance was back to 70k/s.
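For example, to disable the cache from the global section (assuming a
size of 0 disables it):

    global
        tune.pattern.cache-size 0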
This will be used to detect any change on the pattern list between
two operations, ultimately making it possible to implement a cache
which immediately invalidates obsolete keys after an update. The
revision is simply taken from the timestamp counter to ensure that
even upon a pointer reuse we cannot accidentally come back to the
same (expr,revision) tuple.
The xxhash library provides a very fast and excellent hash algorithm
suitable for many purposes. It excels at hashing large blocks but is
also extremely fast on small ones. It's distributed under a 2-clause
BSD license (GPL-compatible) so it can be included here. Updates are
distributed at:
https://github.com/Cyan4973/xxHash
This will be usable to implement some maps/acl caches for heavy datasets
loaded from files (mostly regex-based but in general anything that cannot
be indexed in a tree).
This one returns a timestamp, either the one from the CPU or from
gettimeofday() in 64-bit format. The purpose is to be able to compare
timestamps on various entities to make it easier to detect updates.
It can also be used for benchmarking in certain situations during
development.
The commit e16c1b3f changed the way the function tcpcheck_get_step_id is
called: it now takes the check instead of the server.
This change introduced a regression: the function now returned 0 all the
time because of:
if (check->current_step)
return 0;
This patch fixes the issue by inverting the test: we want to return 0
only if current_step is not yet set :)
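The corrected test thus reads:
if (!check->current_step)
return 0;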
No backport is needed.
This commit adds 4 new log format variables that parse the
HTTP Request-Line for more specific logging than "%r" provides.
For example, we can parse the following HTTP Request-Line with
these new variables:
"GET /foo?bar=baz HTTP/1.1"
- %HM: HTTP Method ("GET")
- %HV: HTTP Version ("HTTP/1.1")
- %HU: HTTP Request-URI ("/foo?bar=baz")
- %HP: HTTP Request-URI without query string ("/foo")
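For example, a custom log format using them might look like this (the
other variables are pre-existing standard ones):

log-format "%ci:%cp [%t] %ft %HM %HP %HV %ST %B"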
Since appctxs are scheduled outside of streams, it's pointless to wake up
the task managing the stream to push updates: they won't be seen. In fact
unit tests only work because silent sessions are restarted after 5s of
idle and the exchange is correctly scheduled during startup!
So we need to notify the appctx instead. For this we add a pointer to
the appctx in the peer session.
No backport is needed of course.