The last few calls to conn_xprt_{want,stop}_{recv,send} in the central
connection code were replaced with their exact fd_* equivalents, adding
the call to conn_ctrl_ready() when it was missing.
Just like the previous commit, we don't need to pass through the
connection layer anymore to enable polling during a connect(); we know
the FD, so let's simply call fd_want_send().
Now that we know that the connection layer is transparent to polling
changes, we have no reason to hide behind conn_xprt_stop_send() and
can safely call fd_stop_send() on the FD once the buffer is empty.
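As a minimal sketch of what such a call site becomes (the buffer
variable <buf> is an illustration, not the exact patch):

    /* sketch: once the output buffer is drained, act on the FD
     * directly instead of going through conn_xprt_stop_send() */
    if (conn_ctrl_ready(conn) && !b_data(buf))
        fd_stop_send(conn->handle.fd);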
Historically we used to require that the connections held the desired
polling states for the data layer and the socket layer. Then with muxes
these were more or less merged into the transport layer, and now it
happens that with all transport layers having their own state, the
"transport layer state" as we have it in the connection (XPRT_RD_ENA,
XPRT_WR_ENA) is only an exact copy of the underlying file descriptor
state, but with a delay. All of this is causing some difficulties at
many places in the code because there are still some locations which
use the conn_want_* API to remain clean and rely only on the connection,
counting on a later call to conn_cond_update_polling(),
while others need an immediate action and directly use the FD updates.
Since our updates are now much cheaper, most of them being only an
atomic test-and-set operation, and since our I/O callbacks are deferred,
there's no benefit anymore in trying to "cache" the transient state
change in the connection flags hoping to cancel them before they
become an FD event. Better make such calls transparent indirections
to the FD layer instead and get rid of the deferred operations which
needlessly complicate the logic inside.
This removes flags CO_FL_XPRT_{RD,WR}_ENA and CO_FL_WILL_UPDATE.
A number of functions related to polling updates were either greatly
simplified or removed.
Two places were using CO_FL_XPRT_WR_ENA as a hint to know if more data
were expected to be sent after a PROXY protocol or SOCKSv4 header. These
ones were simply replaced with a check on the subscription, which is
where we ought to get the authoritative information from.
Now the __conn_xprt_want_* and their conn_xprt_want_* counterparts
are the same. conn_stop_polling() and conn_xprt_stop_both() are the
same as well. conn_cond_update_polling() only causes errors to stop
polling. It also becomes way more obvious that muxes should not use
conn_xprt_{want|stop}_{recv,send}() at all, and that the call to
__conn_xprt_stop_recv() in case a mux fails to allocate a buffer is
inappropriate; it ought to unsubscribe from reads instead. All of
this definitely requires a serious cleanup.
Historically we used to have a global epoll_event for various
manipulations involving epoll_ctl(), and when threads were added,
this was turned into a thread_local, which is needlessly expensive
since it's just a temporary variable. Let's move it to a local
variable wherever epoll_ctl() is called instead.
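Below is a self-contained sketch of the change; the function name,
event mask and epoll FD handling are illustrative, not haproxy's
actual poller code:

    #include <sys/epoll.h>

    /* before: "static THREAD_LOCAL struct epoll_event ev;" at file scope.
     * after: a plain local variable in each caller of epoll_ctl(). */
    static void update_fd(int epfd, int fd)
    {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };

        epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
    }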
While trying to address issue #253, commit 5909380c ("BUG/MINOR: checks:
stop polling for write when we have nothing left to send") made sure that
we stop polling for writes when the buffer is empty. This was actually
more of a workaround than a bug fix, because by doing so we may stop
polling for an intermediate transport layer without acting on the check
itself when there's SSL or send-proxy in the chain for example, thus
the approach is wrong. In practice, due to the small size of check
requests, this will not have any impact. At best we ought to unsubscribe
from sending, but that's already the case when we arrive in this function.
But given that the root cause of the issue was addressed later in commits
cc705a6b, c5940392 and ccf3f6d1, we can now safely revert this change.
It was confirmed on the faulty config that this change doesn't have any
effect anymore on the test.
In H1/H2/FCGI, the *_get_buf() functions try to disable receipt of data
when there's no buffer available. But they do so at the lowest possible
level, which is unrelated to the upper transport layers which may still
be trying to feed data based on subscription. The correct approach here
would theoretically be to only disable the subscription, though when we get
there, the subscription will already have been dropped, so we can safely
just remove that call.
It's unlikely that this could have had any practical impact, as the upper
xprt layer would call this callback, which would fail and not resubscribe.
Having the lowest layer disabled would only be temporary, since when
reading is re-enabled, a subscribe at the end of data would re-enable it.
Backport should not harm but seems useless at this point.
If a client error is reported on the request channel (CF_READ_ERROR) while a
session is tarpitted, no error is returned to the client. Concretely, the
http_reply_and_close() function is not called. This function is responsible
for forwarding the error to the client. But not only that: it is also
responsible for aborting the request. Because this function is not called
when a read error is reported on the request channel, and because the tarpit
analyzer is the last one, there is nothing preventing a connection attempt
to a server while it is totally unexpected.
So, a useless connection to a backend server may be performed because of this
bug. If an HTTP load-balancing algorithm is used on the backend side, it leads
to a crash of HAProxy because the request was already erased.
If you have tarpit rules and use an HTTP load-balancing algorithm on your
backends, you must apply this patch. Otherwise a simple TCP reset on a
tarpitted connection will most likely crash your HAProxy. A safe workaround
is to use a silent-drop rule or a deny rule instead of a tarpit.
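For example, until the fix is deployed (the ACL name is purely
illustrative):

    http-request silent-drop if bad_guy
    # or: http-request deny if bad_guy
    # instead of: http-request tarpit if bad_guy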
This bug also affects the legacy code. It is in fact a very old hidden bug,
but the refactoring of process_stream() in 1.9 made it visible. And,
unfortunately, with the HTX, it is easier to hit because much of the
processing has been moved to lower layers, in the muxes.
It must be backported as far as 1.9. For 2.0 and 1.9, the legacy HTTP
code must also be patched the same way. For older versions, it may be
backported, but the bug does not seem to impact them.
Thanks to Olivier D <webmaster@ajeux.com> for reporting the bug and providing
all the information needed to analyze it.
gcc complains rightfully:
src/ssl_sock.c: In function ‘ssl_load_global_issuers_from_path’:
src/ssl_sock.c:9860:4: warning: format not a string literal and no format arguments [-Wformat-security]
   ha_warning(warn);
   ^
Introduced in 70df7bf19c.
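The usual fix for this class of warning is to never pass variable data
as the format string; as a sketch:

    ha_warning("%s", warn);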
If the monitor-uri starts with a slash ('/'), the matching is performed
against the request's path instead of the request's URI. It is a workaround
to let HTTP/2 requests match the monitor-uri. Indeed, in HTTP/2, clients are
encouraged to send absolute URIs only.
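For example, with a configuration such as the following (names
illustrative):

    frontend www
        monitor-uri /up

an HTTP/2 request targeting "https://example.com/up" now matches, since
the comparison is made against the path "/up" rather than the full
absolute URI.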
This patch is not tagged as a bug, because the previous behavior matched
exactly what the doc describes. But it may be surprising that HTTP/2 requests
don't match the monitor-uri.
This patch may be backported to 2.1 because URIs of HTTP/2 are stored using
the absolute-form starting from this version. For previous versions, this
patch will only help explicitly absolute HTTP/1 requests (and only the HTX
part, because in the legacy HTTP code the whole URI is matched).
It should fix the issue #509.
The monitor-uri should be case-sensitive. In reality, the scheme and the host
part are case-insensitive and only the path is case-sensitive. But concretely,
since the start, the matching on the monitor-uri has been case-sensitive. And
it is probably the expected behavior of almost all users.
This patch must be backported as far as 1.9. For HAProxy 2.0 and 1.9, it must be
applied on src/proto_htx.c.
Ilya reported that the "which" utility is not that much portable and is
absent from Fedora. "type -p" is not portable either, and the correct
solution appears to be "command -v", so let's use this for now, we can
change it again in the future in case of problems.
Link: https://www.mail-archive.com/haproxy@formilux.org/msg36332.html
Certificates loaded with "crt" and "crt-list" commonly share the same
intermediate certificate in their PEM file. "issuers-chain-path" is a global
directive to share intermediate chain certificates stored in a directory. If
a certificate's chain is not included in its PEM file, haproxy will complete
the chain if the issuer matches the first certificate of a chain stored via
the "issuers-chain-path" directive. Such chains will be shared in memory.
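For example (path illustrative):

    global
        issuers-chain-path /etc/haproxy/issuers

PEM files containing only a leaf certificate are then completed with the
chain whose first certificate matches their issuer, and each such chain
is stored only once in memory.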
As reported in issue #507, since commit 07e1e3c93e ("MINOR: sample:
regsub now supports backreferences") we must not proceed in regsub()
if we fail to allocate a trash (which in practice never happens). No
backport needed.
This counter is already incremented when a new request is received (or if an
error occurred while waiting for it). So it must not be incremented when the
stream is terminated, at the end of process_stream(). This bug was introduced
by commit cff0f739e ("MINOR: counters: Review conditions to increment counters
from analysers").
No backport needed.
When a Host header is updated, the authority part, if any, is also updated to
keep both synchronized. But when the update is performed while there is no
change, a failure is reported even though, in reality, no update is necessary.
This bug was introduced by commit d7b7a1ce5 ("MEDIUM: http-htx: Keep the Host
header and the request start-line synchronized").
This commit was pushed into 2.1. But in this version, the bug is hidden
because rewrite errors are silently ignored. And because it happens when there
is no change, if the rewrite fails, no one notices it. But since 2.2, rewrite
errors are fatal by default. So when the bug is hit, a 500 error is
returned to the client. Without this fix, a workaround is to disable the
strict rewriting mode (see the "strict-mode" HTTP rule).
The following HTTP rule is a good way to reproduce the bug if a request with an
authority is received. In HTTP/2, it is pretty common.
acl host_header_exists req.hdr(host) -m found
http-request set-header host %[req.hdr(host)] if host_header_exists
This patch must be backported to 2.1 and everywhere the commit d7b7a1ce5 is
backported. It should fix the issue #494.
In the flt_analyze_http_headers() HTTP analyzer, we must not systematically
forward the headers. We must only count them as filtered data (i.e. increment
the offset by the right amount). It is the http_payload callback's
responsibility to decide whether or not to forward the headers, by forwarding
at least 1 byte of payload. And there is always at least 1 byte of payload to
forward, the EOM block.
This patch depends on following commits:
* MINOR: filters: Forward data only if the last filter forwards something
* MINOR: http-htx: Add a function to retrieve the headers size of an HTX message
This patch must be backported with commits above as far as 1.9. In HAProxy 2.0
and 1.9, the patch must be adapted because of the legacy HTTP code.
In flt_tcp_payload() and flt_http_payload(), if the last filter does not
forward anything, nothing is forwarded, not even the already filtered
data. For now, this patch is useless because the last filter is always in
sync with the stream's offset. But it will be mandatory for a bugfix.
The http_get_hdrs_size() function may now be used to get the number of bytes
held by headers in an HTX message. It only works if the headers were not
already forwarded. Metadata are not counted here.
The function is_idchar() was added by commit 36f586b ("MINOR: tools:
add is_idchar() to tell if a char may belong to an identifier") to
ease matching of sample fetch/converter names. But it lacked support
for the '+' character used in "base32+src" and "url32+src". A quick
way to figure out the list of supported sample fetch+converter names is
to issue the following command:
git grep '"[^"]*",.*SMP_T_.*SMP_USE_'|cut -f2 -d'"'|sort -u
No more entries are reported once searching for characters not covered
by is_idchar().
No backport is needed.
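As a sketch, the assumed shape of the predicate after this change (the
exact character set is inferred from the commit messages, not verified
against the tree):

    #include <ctype.h>

    /* tells whether a character may belong to a sample fetch or
     * converter identifier, now including '+' for "base32+src" */
    static inline int is_idchar(char c)
    {
        return isalnum((unsigned char)c) ||
               c == '.' || c == '_' || c == '-' || c == '+';
    }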
Now that the configuration parser is more flexible with samples,
converters and their arguments, we can leverage this to enable
support for backreferences in regsub.
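For example, a backreference may now be used in the replacement part
(pattern and paths purely illustrative):

    http-request set-path %[path,regsub(^/old/(.*),/new/\1)]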
Recent commit 807aef8a14 ("BUG/MINOR: arg: report an error if an argument
is larger than bufsize") aimed at fixing the argument length check but
relied on the fact that the end of string was not reached, except that
it forgot to consider the delimiters (comma and parenthesis) which are
valid conditions to break out of the loop. This used to break simple
expressions like "hdr(xff,1)". Thanks to Jérôme for reporting this.
No backport is needed.
Commit 0f5ce6014a ("SCRIPTS: announce-release: place the send command
in the mail's header") broke the announce-release script: by not having
to edit the message at all anymore, mutt does nothing when sending, but
it still does if the message is edited (which was the case before). With
some testing, it appears that mutt -H does work when there's no change,
so let's use this instead. This should be backported till 1.7.
Commit ef21facd99 ("MEDIUM: arg: make make_arg_list() support quotes
in arguments") removed the chunk_strncpy() used to fill the trash buffer
as the input is being parsed, and accidentally dropped the jump to the
error path in case the argument is too large, which is now fixed.
No backport is needed, this is for 2.2. This addresses issue #502.
For a very long time it used to be impossible to pass a closing square
bracket as a valid character in argument to a sample fetch function or
to a converter because the LF parser used to stop on the first such
character found and to pass what was between the first '[' and the first
']' to sample_parse_expr().
This patch addresses this by passing the whole string to sample_parse_expr()
which is the only one authoritative to indicate the first character that
does not belong to the expression. The LF parser then verifies it matches
a ']' or fails. As a result it is finally possible to write rules such as
the following, which is totally valid and unambiguous:
http-request redirect location %[url,regsub([.:/?-],!,g)]
                                            |-----| | |
                                              arg1  | `---> arg3
                                                    `-----> arg2
                                     |-----------------|
                                          converter
                                 |---------------------|
                                    sample expression
                               |------------------------|
                                     log-format tag
When an end pointer is passed, instead of complaining that a comma is
missing after a keyword, sample_parse_expr() will silently return the
pointer to the current location into this return pointer so that the
caller can continue its parsing. This will be used by more complex
expressions which embed sample expressions, and may even permit to
embed sample expressions into arguments of other expressions.
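A hedged sketch of the resulting calling convention (the argument names
are assumptions, not the exact signature):

    /* with a non-NULL <endptr>, parsing stops at the first character
     * that does not belong to the expression instead of failing on a
     * missing comma; the caller then checks what follows */
    char *endptr;

    expr = sample_parse_expr(args, &idx, file, line, &err, al, &endptr);
    if (expr && *endptr != ']')
        goto fail; /* e.g. the LF parser requires a closing bracket */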
Now it becomes possible to reuse the quotes within arguments, allowing
the parser to distinguish a ',' or ')' that is part of the value from
one which delimits the argument. In addition, ',' and ')' may be escaped
using a backslash. However, it is also important to keep in mind that
just like in shell, quotes are first resolved by the word tokenizer, so
in order to pass quotes that are visible to the argument parser, a second
level is needed, either using backslash escaping, or by using an alternate
type.
For example, it's possible to write this to append a comma:
http-request add-header paren-comma-paren "%[str('(--,--)')]"
or this:
http-request add-header paren-comma-paren '%[str("(--,--)")]'
or this:
http-request add-header paren-comma-paren %[str(\'(--,--)\')]
or this:
http-request add-header paren-comma-paren %[str(\"(--,--)\")]
or this:
http-request add-header paren-comma-paren %[str(\"(\"--\',\'--\")\")]
Note that due to the wide use of '\' in front of parenthesis in regex,
the backslash character will purposely *not* escape parenthesis, so that
'\)' placed in quotes is passed verbatim to a regex engine.
For each and every argument parsed by make_arg_list(), there was a
strndup() call, just so that we had a trailing zero for most functions,
and this temporary buffer was released afterwards, except for strings
where it was kept.
Proceeding like this is not convenient because 1) it performs a huge
malloc/free dance, and 2) it forces us to decide upfront where the argument
ends, which is what prevents commas and right parentheses from being used.
This patch makes the function copy the temporary argument into the trash
instead, so that we avoid the malloc/free dance for almost all non-string
args (e.g. integers, addresses, time, size etc), and so that we can later
produce the contents on the fly while parsing the input. It adds a length
check to make sure that the argument is not longer than the buffer size,
which should obviously never be the case, but who knows what people put in
their configuration.
The main problem we're having with argument parsing is that at the
moment the caller looks for the first character looking like an end
of arguments (')') and calls make_arg_list() on the sub-string inside
the parenthesis.
Let's first change the way it works so that make_arg_list() also
consumes the parenthesis and returns the pointer to the first char not
consumed. This will later permit to refine each argument parsing.
For now there is no functional change.
Instead of scanning a string looking for an end of line, ')' or ',',
let's only accept characters which are actually valid identifier
characters. This will let the parser know that in %[src], only "src"
is the sample fetch name, not "src]". This was done both for samples
and ACLs since they are the same here.
This works like chunk_strcpy() except that the maximum string length may
be limited by the caller. A trailing zero is always appended. This is
particularly handy to extract portions of strings to put into the trash
for use with libc functions requiring a nul-terminated string.
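A self-contained sketch of such a helper, using a minimal stand-in for
haproxy's struct buffer (area/size/data); the exact name and body may
differ in the tree:

    #include <stddef.h>
    #include <string.h>

    struct buffer {      /* minimal stand-in for haproxy's type */
        char  *area;     /* start of the storage */
        size_t size;     /* total allocated size */
        size_t data;     /* amount of data stored */
    };

    static inline int chunk_strncpy(struct buffer *chk, const char *str, size_t nb)
    {
        if (nb >= chk->size)    /* keep room for the trailing zero */
            return 0;
        memcpy(chk->area, str, nb);
        chk->data = nb;
        chk->area[chk->data] = 0;
        return 1;
    }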
Now, only one capture is mandatory in the path-info regex, the one matching
the script-name. The path-info capture is optional. Of course, it must be
defined to fill the PATH_INFO parameter, but it is not mandatory. This way,
it is possible to get the script-name part from the path, excluding the
path-info.
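For example (regex purely illustrative), with a mandatory capture for the
script-name and an optional one for the path-info:

    fcgi-app php-fpm
        path-info ^(/.+\.php)(/.*)?$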
This patch is small enough to be backported to 2.1.
If a regex to match the PATH_INFO parameter is configured, it systematically
fails if a newline or a null character is present in the URL-decoded path. So,
from the moment there is at least a "%0a" or a "%00" in the request path, we
always fail to get the PATH_INFO parameter and the whole decoded path is used
for the SCRIPT_NAME parameter.
It is probably not the expected behavior. Because, most of the time, these
characters are not expected at all in a path, an error is now triggered when
one of them is found in the URL-decoded path, before trying to execute the
path_info regex. However, this test is not performed if there is no regex
configured.
Note that in reality, the newline character is only a problem when HAProxy is
compiled with the pcre or pcre2 library and, conversely, the null character is
only a problem for the libc's regex library. But both are always excluded to
avoid any inconsistency depending on compile options.
An alternative, not implemented yet, is to replace these characters by another
one. If someone complains about this behavior, it will be re-evaluated.
This patch must be backported to all versions supporting the FastCGI
applications, so to 2.1 for now.
When calling the mux "destroy" method, the argument should be the mux
context, not the connection. In a few instances in the mux code, the
connection was used (mainly when the session wouldn't handle the idle
connection and the server pool was full), and that could lead to random
segfaults.
This should be backported to 2.1, 2.0, and 1.9
we cannot return right after opening the socket, as we need to move back to
the default namespace first
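a minimal sketch of the corrected flow; function and variable names are
assumptions, not the actual namespace code:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* open a socket inside namespace <ns_fd>, then always restore the
     * default namespace <default_ns_fd>, even on the success path */
    static int socket_in_ns(int ns_fd, int default_ns_fd,
                            int domain, int type, int protocol)
    {
        int sock = -1;

        if (setns(ns_fd, CLONE_NEWNET) == -1)
            return -1;
        sock = socket(domain, type, protocol);
        /* do NOT return here: move back to the default namespace first */
        if (setns(default_ns_fd, CLONE_NEWNET) == -1) {
            if (sock >= 0)
                close(sock);
            sock = -1;
        }
        return sock;
    }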
this should fix github issue #500
this might be backported to all versions >= 1.6
Fixes: b3e54fe387 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
I managed to mess up with the file's permission while using a temporary
one during last release, and to backport the non-exec version everywhere.
This can be backported as far as 1.7 now.
when `getsockopt` previously failed, we were trying to set defaultmss
to -2.
this is a followup of github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
As haproxy won't build on AIX 7.2 using the old "aix52" TARGET, a new
TARGET was introduced which adds two special CFLAGS to prevent the
loading of AIX's xmem.h and var.h. This is done by defining the
corresponding include-guards _H_XMEM and _H_VAR. Without excluding
those header files, the build fails because of redefinition errors:
1)
  CC      src/mux_fcgi.o
In file included from /usr/include/sys/uio.h:90,
                 from /opt/freeware/lib/gcc/powerpc-ibm-aix7.1.0.0/8.3.0/include-fixed/sys/socket.h:104,
                 from include/common/compat.h:32,
                 from include/common/cfgparse.h:25,
                 from src/mux_fcgi.c:13:
src/mux_fcgi.c:204:13: error: expected ':', ',', ';', '}' or '__attribute__' before '.' token
  struct ist rem_addr;
             ^~~~~~~~
2)
  CC      src/cfgparse-listen.o
In file included from include/types/arg.h:31,
                 from include/types/acl.h:29,
                 from include/types/proxy.h:41,
                 from include/proto/log.h:34,
                 from include/common/cfgparse.h:30,
                 from src/mux_h2.c:13:
include/types/vars.h:30:8: error: redefinition of 'struct var'
 struct var {
        ^~~
Furthermore, to enable multithreading via USE_THREAD, the atomic
library was added to the LDFLAGS. Finally, two new CPUs were added
to simplify the usage of power8 and power9 optimizations.
This TARGET was only tested on GCC 8.3 and may or may not work on
IBM's native C-compiler (XLC).
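For example, a build for this platform might be invoked as follows (the
TARGET name aix72-gcc is assumed from this patch; the other flags are
illustrative):

    make TARGET=aix72-gcc CPU=power8 USE_THREAD=1 CC=gcc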
Should be backported to 2.1.
we were trying to close the file descriptor even when the `socket` call
failed.
this should fix github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When initializing a listener, let's make sure the bind_thread mask is
always limited to all_threads_mask when inserting the FD. This will
avoid seeing listening FDs with bits corresponding to threads that are
not active (e.g. when using "bind ... process 1/even"). The side effect
is very limited, all that was identified is that atomic operations are
used in fd_update_events() when not necessary. It's more a matter of
long-term correctness in practice.
This fix might be backported as far as 1.8 (then proto_sockpair must
be dropped).
In bug #495 we found that it is possible to resume a listener on a
nonexistent thread. This happens when a bind's thread_mask contains bits
out of the active threads mask, such as when using "1/odd" or "1/even".
The thread_mask was used as-is to pick a thread number to re-enable the
listener, and given that the highest number is used, 1/odd or 1/even can
produce quite high thread numbers and crash the process by queuing some
entries into non-existent lists.
This bug results from the incomplete fix in commit 413e926ba ("BUG/MAJOR:
listener: fix thread safety in resume_listener()"), though it will only
trigger if some bind lines are explicitly bound to thread numbers higher
than the thread count. The fix must be backported to all branches having
the fix above (as far as 1.8, though the code is different there, see the
commit message in 1.8 for changes).
There are a few other places where bind_thread is used without
enforcing all_thread_mask, namely when doing fd_insert() while creating
listeners. It seems harmless but would probably deserve another fix.
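A standalone illustration of the masking logic (not haproxy code, mask
values illustrative):

    #include <stdio.h>

    int main(void)
    {
        unsigned long all_threads_mask = (1UL << 4) - 1;  /* 4 active threads */
        unsigned long bind_thread = 0x5555555555555555UL; /* "bind ... 1/odd" */
        unsigned long m = bind_thread & all_threads_mask;

        /* the highest set bit of the unmasked value points to thread 62,
         * which does not exist here */
        printf("unmasked pick: %d\n", 63 - __builtin_clzl(bind_thread));

        /* with the mask applied, the pick stays within active threads */
        printf("masked pick:   %d\n", 63 - __builtin_clzl(m)); /* 2 */
        return 0;
    }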
While looking for other occurrences of do { continue; } while (0) I
found these few leftovers in mini-clist where an outer loop was made
around "do { } while (0)" then another loop was placed inside just to
handle the continue. Let's clean this up by just removing the outer
one. Most of the patch is only the inner part of the loop that is
reindented. It was verified that the resulting code is the same.
Issue #490 reports that there are a few bogus constructs of the famous
"do { if (cond) continue; } while (0)" in the connection code, that are
used to retry on I/O failures caused by receipt of a signal. Let's turn
them into the more correct "while (1) { if (cond) continue; break }"
instead. This may or may not be backported, it shouldn't have any
visible effect.
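A standalone illustration of why the original construct is bogus (not
taken from the connection code):

    #include <errno.h>
    #include <unistd.h>

    int main(void)
    {
        ssize_t ret;

        /* bogus form: "continue" jumps to the evaluation of the
         * always-false condition, so the call is never retried */
        do {
            ret = write(1, "x", 1);
            if (ret < 0 && errno == EINTR)
                continue;
        } while (0);

        /* corrected form: actually retries after a signal */
        while (1) {
            ret = write(1, "x", 1);
            if (ret < 0 && errno == EINTR)
                continue;
            break;
        }
        return 0;
    }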