Sample fetches were relying on two flags SMP_CAP_REQ/SMP_CAP_RES to describe
whether they were compatible with request rules or with response rules. This
was never reliable because we need a finer granularity (eg: an HTTP request
method needs to parse an HTTP request, and is available past this point).
Some fetches are also dependent on the context (eg: "hdr" uses request or
response depending on where it is involved, causing some ambiguity).
In order to solve this, we need to precisely indicate in fetches what they
use, and their users will have to compare with what they have.
So now we have a bunch of bits indicating where the sample is fetched in the
processing chain, with a few variants indicating for some of them if it is
permanent or volatile (eg: an HTTP status is stored into the transaction so
it is permanent, despite being caught in the response contents).
The fetches also have a second mask indicating their validity domain. This one
is computed from a conversion table at registration time, so there is no need
for doing it by hand. This validity domain consists of a bitmask with one bit
set for each usage point in the processing chain. Some provisions were made
for upcoming controls such as connection-based TCP rules which apply on top of
the connection layer but before instantiating the session.
Then everywhere a fetch is used, the bit for the control point is checked in
the fetch's validity domain, and it becomes possible to finely ensure that a
fetch will work or not.
Note that we need these two separate bitfields because some fetches are usable
both in request and response (eg: "hdr", "payload"). So the keyword will have
a "use" field made of a combination of several SMP_USE_* values, which will be
converted into a wider list of SMP_VAL_* flags.
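As an illustration, the whole mechanism boils down to a pair of bitmask
operations. The sketch below is purely illustrative (the table and names
are simplified, not the exact code) :

    /* registration time : expand the keyword's "use" mask through a
     * conversion table into its validity domain, one bit per control
     * point in the processing chain (names illustrative) */
    for (bit = 0; bit < NB_SMP_USE_BITS; bit++)
        if (fetch->use & (1U << bit))
            fetch->val |= use_to_val[bit];

    /* at any control point : a single test tells whether the fetch
     * may work here (SMP_VAL_FE_HRQ_HDR taken as an example) */
    if (!(fetch->val & SMP_VAL_FE_HRQ_HDR))
        return 0;   /* fetch not usable at this point */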
The knowledge of permanent vs dynamic information has disappeared for now, as
it was never used. Later we'll probably reintroduce it differently when
dealing with variables. Its only use at the moment could have been to avoid
caching a dynamic rate measurement, but nothing is cached as of now.
This flag is used on ACL matches that support looking up patterns
in trees. At the moment, only strings and IPs support tree-based lookups,
but the flag is also randomly set on integers and binary data, and is not
even always set on strings or IPs.
Better get rid of this mess by only relying on the matching function to
decide whether or not it supports tree-based lookups, this is safer and
easier to maintain.
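A sketch of the resulting decision (acl_match_str/acl_match_ip are the
existing matching functions ; the test below is illustrative, not the
exact code) :

    /* tree-based lookups are only implemented by the string and IP
     * matching functions, so ask the expression's match function
     * instead of trusting a per-pattern flag */
    if (expr->match == acl_match_str || expr->match == acl_match_ip)
        /* patterns may be indexed in a tree */ ;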
During normal HTTP request processing, request buffers are realigned if
there are less than global.maxrewrite bytes available after them, in
order to leave enough room for rewriting headers after the request. This
is done in http_wait_for_request().
However, if some HTTP inspection happens during a "tcp-request content"
rule, this realignment is not performed. In theory this is not a problem
because empty buffers are always aligned and TCP inspection happens at
the beginning of a connection. But with HTTP keep-alive, it also happens
at the beginning of each subsequent request. So if a second request was
pipelined by the client before the first one had a chance to be forwarded,
the second request will not be realigned. Then, http_wait_for_request()
will not perform such a realignment either because the request was
already parsed and marked as such. The consequence of this is that the
rewrite of a sufficient number of such pipelined, unaligned requests may
leave less room past the request being processed than the configured
reserve, which can lead to a buffer overflow if request processing appends
some data past the end of the buffer.
A number of conditions are required for the bug to be triggered :
- HTTP keep-alive must be enabled ;
- HTTP inspection in TCP rules must be used ;
- some request appending rules are needed (reqadd, x-forwarded-for) ;
- since empty buffers are always realigned, the client must pipeline
enough requests so that the buffer always contains something till
the point where there is no more room for rewriting.
While such a configuration is quite unlikely to be met (which is
confirmed by the bug's lifetime), a few people do use these features
together for very specific usages. And more importantly, writing such
a configuration and the request to attack it is trivial.
A quick workaround consists in forcing keep-alive off by adding
"option httpclose" or "option forceclose" in the frontend. Alternatively,
disabling HTTP-based TCP inspection rules is enough if the application
supports it.
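For example, the workaround may be applied like this (illustrative
config snippet) :

    frontend www
        bind :80
        option httpclose    # or "option forceclose" : keep-alive is
                            # disabled, removing one of the bug's
                            # prerequisites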
At first glance, this bug does not look like it could lead to remote code
execution, as the overflowing part is controlled by the configuration and
not by the user. But some deeper analysis should be performed to confirm
this. And anyway, corrupting the process' memory and crashing it is quite
trivial.
Special thanks go to Yves Lafon from the W3C who reported this bug and
deployed significant efforts to collect the relevant data needed to
understand it in less than one week.
CVE-2013-1912 was assigned to this issue.
Note that 1.4 is also affected so the fix must be backported.
Sander Klein reported that since last snapshot, some downloads would
hang from nginx but succeed from apache. The culprit was not too hard
to find given the low number of recent changes affecting the data path.
Commit d655ffe slightly reorganized the HTTP state machine and
introduced this regression. The reason is that we must never jump
into the MSG_DONE case without first flushing remaining data because
this is not done anymore afterwards. This part is scheduled for
being reorganized since it's totally ugly, especially since we added
compression, and this regression is an illustration of its lack of
readability. The issue is entirely dependent on the server close
sequence, which
explains why it was reproducible only with nginx here.
This commit fixed a bug and introduced a new one at the same time.
It's a stupid typo, the index to store the context is [0], not [2].
The effect is that parsing the header can loop forever if multiple
headers are found. This issue was reported by Lukas Tribus.
TCP Fast Open is supported in server mode since Linux 3.7, but current
libcs don't define TCP_FASTOPEN=23. Introduce the new USE flag USE_TFO
to define it manually in compat.h. Also note this in the TFO related
documentation.
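The resulting fallback is of this form (sketch of the compat.h part) :

    /* TCP Fast Open is supported since Linux 3.7 but the constant is
     * still missing from current libc headers */
    #if defined(USE_TFO) && !defined(TCP_FASTOPEN)
    #define TCP_FASTOPEN 23
    #endif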
fe61656b added the ability to load a list of certificates from a file,
but error control was incomplete and misleading, as some errors such
as missing files were not reported, and errors reported with Alert()
instead of memprintf() were inappropriate and mixed with upper-layer errors.
Also, the code really supports a single SNI filter right now, so let's
correct it and the doc for that, leaving room for later change if needed.
It designates a list of PEM files with an optional list of SNI filters
per certificate, with the following format for each line :
<crtfile>[ <snifilter>]*
Wildcards are supported in the SNI filter. The certificates will be
presented to clients who provide a valid TLS Server Name Indication
field matching one of the SNI filters. If no SNI filter is specified, the
CN and alt subjects are used.
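For example, such a file could look like this (illustrative certificate
names) :

    cert1.pem
    cert2.pem *.example.org example.net
    cert3.pem secure.example.com

Here cert2.pem is presented for SNIs matching *.example.org or
example.net, while cert1.pem falls back to its CN and alt subjects.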
Formerly, if A was replaced by B, and then B by C before
A finished exiting, we didn't wait for B to finish so it
ended up as a zombie process.
Fix this by waiting for every child we spawn.
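A sketch of the idea (simplified ; the real code tracks the pids it
spawned) :

    /* reap every terminated child without blocking, so that no
     * intermediate reload is left behind as a zombie */
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;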
Signed-off-by: Marc-Antoine Perennou <Marc-Antoine@Perennou.com>
Baptiste Assmann reported that the cook*() ACLs do not work anymore.
The reason is the way we store the hdr_ctx between subsequent calls
to smp_fetch_cookie() since commit 3740635b (1.5-dev10).
The smp->ctx.a[] storage holds up to 8 pointers. It is not meant for
generic storage. We used to store hdr_ctx in the ctx, but while it used
to just fit for smp_fetch_hdr(), it does not for smp_fetch_cookie()
since we stored it at offset 2.
The correct solution is to use this storage to store a pointer to the
current hdr_ctx struct which is statically allocated.
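In code, the fix is of this form (sketch with simplified names) :

    /* statically allocated, so a[] only needs to hold one pointer */
    static struct hdr_ctx static_hdr_ctx;

    smp->ctx.a[0] = &static_hdr_ctx;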
The "osx" target may now be passed in the TARGET variable. It supports
the same features as FreeBSD and allows its users to use the GNU makefile
instead of the platform-specific makefile which lacks some features.
This allows building haproxy for unknown targets while still having poll().
If for any reason a target does not support it, just passing USE_POLL=""
disables it.
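Typical invocations (illustrative) :

    $ make TARGET=osx                   # new OS X target
    $ make TARGET=generic               # unknown target, poll() enabled
    $ make TARGET=generic USE_POLL=""   # same, with poll() disabled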
This change makes the "crt" block of the documentation easier to use
for those not clear on what needs to go in what file, specifically for
those using CAs that require intermediate certificates.
Seen on Solaris 8, calling vsnprintf() with a zero size results
in the output size not being computed. This causes some random
behaviour including crashes when trying to display error messages
when loading an invalid configuration.
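The usual workaround is a grow-and-retry loop which never passes a zero
size ; a minimal standalone sketch, not the exact haproxy code :

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* format into a dynamically grown buffer */
    char *fmt_alloc(const char *fmt, ...)
    {
        int size = 64;
        char *buf = NULL;

        for (;;) {
            va_list args;
            int ret;
            char *tmp = realloc(buf, size);

            if (!tmp) {
                free(buf);
                return NULL;
            }
            buf = tmp;

            va_start(args, fmt);
            ret = vsnprintf(buf, size, fmt, args);
            va_end(args);

            if (ret >= 0 && ret < size)
                return buf;

            /* old libcs return -1 on truncation : grow geometrically
             * instead of trusting the return value */
            size = (ret >= size) ? ret + 1 : size * 2;
        }
    }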
src/standard.c: In function `str2sa_range':
src/standard.c:734: warning: subscript has type `char'
This one was recently introduced by commit c120c8d3.
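The usual fix for this class of warning is to cast the ctype argument to
unsigned char, since some libcs implement these functions as array
lookups (shown as an illustration, not necessarily the exact change) :

    while (isdigit((unsigned char)*p))
        p++;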
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
For this reason, we now control the compatibility between global.maxsock
and FD_SETSIZE, and refuse to use select() if too many FDs are
expected to be used. Note that on Solaris, FD_SETSIZE is already forced
to 65536, and that FreeBSD and OpenBSD allow it to be redefined, though
this is not needed thanks to kqueue which is much more efficient.
In practice, since poll() is enabled on all targets, it should not cause
any problem, unless it is explicitly disabled.
This change must be backported to 1.4 because the crashes caused by glibc
have already been reported on this version.
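The control is of this form (sketch of the select() poller's init) :

    /* refuse select() when the expected number of FDs cannot fit in
     * an fd_set ; another poller will then be picked */
    if (global.maxsock > FD_SETSIZE)
        return 0;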
Some recent glibc updates have added controls on FD_SET/FD_CLR/FD_ISSET
that crash the program if it tries to use a file descriptor larger than
FD_SETSIZE.
Do not rely on FD_* macros anymore and replace them with bit fields.
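A minimal sketch of the replacement (layout illustrative) :

    unsigned int *evts;   /* one bit per fd, sized from global.maxsock */

    evts[fd / 32] |=   1U << (fd % 32);          /* was FD_SET()   */
    evts[fd / 32] &= ~(1U << (fd % 32));         /* was FD_CLR()   */
    ready = (evts[fd / 32] >> (fd % 32)) & 1;    /* was FD_ISSET() */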
An issue reported by David Coulson is that when using http-send-name-header,
the response processing would randomly be skipped. The issue was first
diagnosed by Cyril Bonté as being related to a time race when processing
the closing of the response.
In practice, the issue is a bit trickier. It happens that
http_send_name_header() did not update msg->sol after a rewrite. This
counter is supposed to point to the beginning of the message's body
once headers are scheduled for being forwarded. And not updating it
means that the first forwarding of the request headers in
http_request_forward_body() does not send the correct count, leaving
some bytes in chn->to_forward.
Then if the server sends its response in a single packet with the
close, the stream interface switches to state SI_ST_DIS, which in
turn moves to SI_ST_CLO in process_session(), closing the
outgoing connection. This is detected by http_request_forward_body(),
which then switches the request message to the error state, and syncs
all FSMs and removes any response analyser.
The response analyser being removed, no processing is performed on
the response buffer, which is tunnelled as-is to the client.
Of course, the correct fix consists in having http_send_name_header()
update msg->sol. Normally this ought not to have been needed, but it
is an abuse to modify data already scheduled for being forwarded, so
it is expected that such specific handling has to be done there. Better
not have generic functions deal with such cases, so that it does not
become the standard.
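The fix is thus of this form (sketch ; variable names illustrative) :

    /* rewriting the server name shifts the message by <delta> bytes,
     * and msg->sol must follow so that the scheduled forwarding still
     * counts the right number of header bytes */
    delta = buffer_replace2(buf, pos, end, name, name_len);
    msg->sol += delta;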
Note: 1.4 does not have this issue even if it does not update the
pointer either, because it forwards from msg->som which is not
updated at the moment the connect() succeeds. So no backport is
required.
The check was made on "cond" instead of "rule->cond", so it never
emitted any warning since either the rule was NULL or it was set to
the last condition met.
This is 1.5-specific and the bug was introduced by commit 4baae248
in 1.5-dev17, so no backport is needed.
Patch 6cbbdbf3 fixed the missing "-" delimiters in logs but it caused
them to be emitted with "http-request add-header", even though it was
correctly fixed for the unique-id format. Fix this by simply removing
LOG_OPT_MANDATORY in this case.
Now that all addresses are parsed using str2sa_range(), it becomes easy
to add support for environment variables and use them everywhere an address
is needed. Environment variables are used as $VAR or ${VAR} as in shell.
Any number of variables may compose an address, allowing various fantasies
such as "fd@${FD_HTTP}" or "${LAN_DC1}.1:80".
They are usable in logs, bind, servers, peers, stats socket, source,
dispatch, and check address.
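For example (illustrative, assuming FD_HTTP and LAN_DC1 are exported in
the environment which starts haproxy) :

    listen http
        bind fd@${FD_HTTP}
        server dc1 ${LAN_DC1}.1:80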
Using the address syntax "fd@<num>", a listener may inherit a file
descriptor that the caller process has already bound and passed as
this number. The fd's socket family is detected using getsockname(),
and the usual initialization is performed through the existing code
for that family, but the socket creation is skipped.
Whether the parent has performed the listen() call or not is not
important as this is detected.
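For example (illustrative ; the parent is assumed to have bound the
socket and passed it as file descriptor 4) :

    frontend inherited
        bind fd@4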
For UNIX sockets, we immediately clear the path after preparing a
socket so that we never remove it in case an abort would happen due
to a late error during startup.
This change allows one to force the address family in any address parsed
by str2sa_range() by specifying it as a prefix followed by '@' then the
address. Currently supported address prefixes are 'ipv4@', 'ipv6@', 'unix@'.
This also helps force the resolution of host names (when getaddrinfo is
used), and forces the family of the empty address (eg: 'ipv4@' = 0.0.0.0
while 'ipv6@' = ::).
The main benefit is that unix sockets can now get a local name without
being forced to begin with a slash. This is useful during development as
it is no longer necessary to have the stats socket path set to /tmp.
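A few illustrative examples :

    bind ipv4@:80           # 0.0.0.0:80
    bind ipv6@:8443         # :::8443
    bind unix@admin.sock    # UNIX socket without a leading slash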
Several of the parsing functions made use of multiple errmsg/err_msg
variables which had to be freed, while there is already one in each
function that is freed upon exit. Adapt the code to use the existing
variable exclusively.
Don't use a statically allocated address both for str2ip and str2sa_range,
use the same one. The inet and unix code paths have been split a little
better to improve readability.
The 'source' and 'usesrc' statements now completely rely on str2sa_range() to
parse an address. A test is made to ensure that the address family supports
connect().
Now that str2sa_range() knows how to parse UNIX addresses, make str2listener()
use it. It simplifies the function. Next step consists in unifying the error
handling to further simplify the call.
Tests have been done and show that unix sockets are correctly handled, with
and without prefixes, both for global stats and normal "bind" statements.
str2sa_range() now considers that any address beginning with '/' is a UNIX
address. It is compatible with all callers at the moment since all of them
perform this test and use a different parser for such addresses. However,
some parsers (eg: servers) still don't check for unix addresses.
We'll need str2sa_range() to support a prefix for unix sockets. Since
we don't always want to use it (eg: stats socket), let's not take it
unconditionally from global but let the caller pass it.
At the moment, all address families supported on a "server" statement support
a connect() method, but this will soon change with the generalization of
str2sa_range(). Checks currently call ->connect() unconditionally so let's
add a check for this.
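The guard is of this form (sketch, simplified from the actual code) :

    /* refuse to run a check on an address family lacking connect() */
    struct protocol *proto = protocol_by_family(addr->ss_family);

    if (!proto || !proto->connect)
        /* report a configuration error */ ;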
The "unix-bind prefix" feature was made for explicit "bind" statements. Since
the stats socket was changed to use str2listener(), it implicitly inherited
from this feature. But both are defined in the global section, and we don't
want them to be position-dependent.
So let's make str2listener() explicitly not apply the unix-bind prefix to the
global stats frontend.
This only affects 1.5-dev so it does not need any backport.