As reported in issue #507, since commit 07e1e3c93e ("MINOR: sample:
regsub now supports backreferences") we must not proceed in regsub()
if we fail to allocate a trash (which in practice never happens). No
backport needed.
This counter is already incremented when a new request is received (or if an
error occurred while waiting for it). So it must not be incremented when the stream is
terminated, at the end of process_stream(). This bug was introduced by the commit
cff0f739e ("MINOR: counters: Review conditions to increment counters from
analysers").
No backport needed.
When a Host header is updated, the authority part, if any, is also updated to
keep both synchronized. But when the update is performed while there is no
change, a failure is reported even though no update was necessary. This
bug was introduced by the commit d7b7a1ce5 ("MEDIUM: http-htx: Keep the Host
header and the request start-line synchronized").
This commit was pushed into 2.1. But in this version, the bug is hidden
because rewrite errors are silently ignored. And because it happens when there
is no change, if the rewrite fails, no one notices it. But since 2.2, rewrite
errors are now fatal by default. So when the bug is hit, a 500 error is
returned to the client. Without this fix, a workaround is to disable the strict
rewriting mode (see the "strict-mode" HTTP rule).
The following HTTP rule is a good way to reproduce the bug if a request with an
authority is received. In HTTP/2, this is pretty common.
acl host_header_exists req.hdr(host) -m found
http-request set-header host %[req.hdr(host)] if host_header_exists
This patch must be backported to 2.1 and everywhere the commit d7b7a1ce5 is
backported. It should fix the issue #494.
In the flt_analyze_http_headers() HTTP analyzer, we must not systematically
forward the headers. We must only count them as filtered data (i.e. increment
the offset by the right size). It is the http_payload callback's responsibility
to decide whether or not to forward the headers, by forwarding at least 1 byte
of payload. And there is
always at least 1 byte of payload to forward, the EOM block.
This patch depends on the following commits:
* MINOR: filters: Forward data only if the last filter forwards something
* MINOR: http-htx: Add a function to retrieve the headers size of an HTX message
This patch must be backported with commits above as far as 1.9. In HAProxy 2.0
and 1.9, the patch must be adapted because of the legacy HTTP code.
In flt_tcp_payload() and flt_http_payload(), if the last filter does not
forward anything, nothing is forwarded, not even the already filtered
data. For now, this patch is useless because the last filter is always in sync
with the stream's offset. But it will be mandatory for a bugfix.
The http_get_hdrs_size() function may now be used to get the bytes held by headers
in an HTX message. It only works if the headers were not already
forwarded. Metadata are not counted here.
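For illustration, here is a minimal sketch of the walking principle, assuming
the HTX block API from htx.h (htx_get_head_blk(), htx_get_next_blk(),
htx_get_blk_type() and htx_get_blksz()); the committed http_get_hdrs_size()
may differ in its details:

#include <common/htx.h>

/* Illustrative sketch only: walk the HTX blocks from the head and sum the
 * size of the start-line and header blocks up to and including the
 * end-of-headers marker. Per-block metadata is not counted, only payload
 * sizes.
 */
static size_t hdrs_size_sketch(struct htx *htx)
{
        struct htx_blk *blk;
        size_t size = 0;

        for (blk = htx_get_head_blk(htx); blk; blk = htx_get_next_blk(htx, blk)) {
                size += htx_get_blksz(blk);
                if (htx_get_blk_type(blk) == HTX_BLK_EOH)
                        break;
        }
        return size;
}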
The function is_idchar() was added by commit 36f586b ("MINOR: tools:
add is_idchar() to tell if a char may belong to an identifier") to
ease matching of sample fetch/converter names. But it lacked support
for the '+' character used in "base32+src" and "url32+src". A quick
way to figure the list of supported sample fetch+converter names is
to issue the following command:
git grep '"[^"]*",.*SMP_T_.*SMP_USE_'|cut -f2 -d'"'|sort -u
No entries are reported anymore when searching for characters not covered
by is_idchar().
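For illustration, here is a sketch of the kind of check is_idchar() performs;
the exact accepted character set below is an assumption, not a copy of the
committed code:

#include <ctype.h>

/* Illustrative sketch only: accept the characters that may appear in sample
 * fetch and converter names, now including '+' so that "base32+src" and
 * "url32+src" are matched entirely.
 */
static inline int is_idchar_sketch(char c)
{
        return isalnum((unsigned char)c) ||
               c == '.' || c == '_' || c == '-' || c == '+';
}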
No backport is needed.
Now that the configuration parser is more flexible with samples,
converters and their arguments, we can leverage this to enable
support for backreferences in regsub.
Recent commit 807aef8a14 ("BUG/MINOR: arg: report an error if an argument
is larger than bufsize") aimed at fixing the argument length check but
relied on the fact that the end of string was not reached, except that
it forgot to consider the delimiters (comma and parenthesis) which are
valid conditions to break out of the loop. This used to break simple
expressions like "hdr(xff,1)". Thanks to Jérôme for reporting this.
No backport is needed.
Commit 0f5ce6014a ("SCRIPTS: announce-release: place the send command
in the mail's header") broke the announce-release script: by not having
to edit the message at all anymore, mutt does nothing when sending, but
it still does if the message is edited (which was the case before). With
some testing, it appears that mutt -H does work when there's no change,
so let's use this instead. This should be backported till 1.7.
Commit ef21facd99 ("MEDIUM: arg: make make_arg_list() support quotes
in arguments") removed the chunk_strncpy() to fill the trash buffer
as the input is being parsed, and accidentally dropped the jump to the
error path in case the argument is too large, which is now fixed.
No backport is needed, this is for 2.2. This addresses issue #502.
For a very long time it used to be impossible to pass a closing square
bracket as a valid character in an argument to a sample fetch function or
to a converter, because the log-format (LF) parser used to stop on the first such
character found and to pass what was between the first '[' and the first
']' to sample_parse_expr().
This patch addresses this by passing the whole string to sample_parse_expr()
which is the only one authoritative to indicate the first character that
does not belong to the expression. The LF parser then verifies it matches
a ']' or fails. As a result it is finally possible to write rules such as
the following, which is totally valid and unambiguous:
http-request redirect location %[url,regsub([.:/?-],!,g)]
                                            |-----| | |
                                              arg1  | `---> arg3
                                                    `-----> arg2
                                     |-----------------|
                                          converter
                                 |---------------------|
                                    sample expression
                               |------------------------|
                                     log-format tag
When an end pointer is passed, instead of complaining that a comma is
missing after a keyword, sample_parse_expr() will silently return the
pointer to the current location into this return pointer so that the
caller can continue its parsing. This will be used by more complex
expressions which embed sample expressions, and may even permit embedding
sample expressions into the arguments of other expressions.
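As a toy illustration of this end-pointer contract (using strtol() rather than
haproxy's own API), the caller learns where parsing stopped and may continue
from there instead of treating the trailing characters as an error:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        const char *input = "123,rest";
        char *end;
        long val = strtol(input, &end, 10);

        /* parsing stopped at the comma; the caller may continue from <end> */
        printf("parsed %ld, parsing may continue at \"%s\"\n", val, end);
        return 0;
}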
Now it becomes possible to reuse the quotes within arguments, allowing
the parser to distinguish a ',' or ')' that is part of the value from
one which delimits the argument. In addition, ',' and ')' may be escaped
using a backslash. However, it is also important to keep in mind that
just like in shell, quotes are first resolved by the word tokenizer, so
in order to pass quotes that are visible to the argument parser, a second
level is needed, either using backslash escaping, or by using the other
quote type.
For example, it's possible to write this to add a header whose value contains
parentheses and a comma:
http-request add-header paren-comma-paren "%[str('(--,--)')]"
or this:
http-request add-header paren-comma-paren '%[str("(--,--)")]'
or this:
http-request add-header paren-comma-paren %[str(\'(--,--)\')]
or this:
http-request add-header paren-comma-paren %[str(\"(--,--)\")]
or this:
http-request add-header paren-comma-paren %[str(\"(\"--\',\'--\")\")]
Note that due to the wide use of '\' in front of parenthesis in regex,
the backslash character will purposely *not* escape parenthesis, so that
'\)' placed in quotes is passed verbatim to a regex engine.
For each and every argument parsed by make_arg_list(), there was a
strndup() call, just so that we have a trailing zero for most functions,
and this temporary buffer is released afterwards except for strings where
it is kept.
Proceeding like this is not convenient because 1) it performs a huge
malloc/free dance, and 2) it forces us to decide upfront where the argument
ends, which is what prevents commas and right parentheses from being used.
This patch makes the function copy the temporary argument into the trash
instead, so that we avoid the malloc/free dance for almost all non-string
args (e.g. integers, addresses, time, size etc), and that we can later
produce the contents on the fly while parsing the input. It adds a length
check to make sure that the argument is not longer than the buffer size,
which should obviously never be the case but who knows what people put in
their configuration.
The main problem we're having with argument parsing is that at the
moment the caller looks for the first character looking like an end
of arguments (')') and calls make_arg_list() on the sub-string inside
the parenthesis.
Let's first change the way it works so that make_arg_list() also
consumes the parenthesis and returns the pointer to the first char not
consumed. This will later permit refining the parsing of each argument.
For now there is no functional change.
Instead of scanning a string looking for an end of line, ')' or ',',
let's only accept characters which are actually valid identifier
characters. This will let the parser know that in %[src], only "src"
is the sample fetch name, not "src]". This was done both for samples
and ACLs since they are the same here.
This does the same as chunk_strcpy(), except that the maximum string length may
be limited by the caller. A trailing zero is always appended. This is
particularly handy to extract portions of strings to put into the trash
for use with libc functions requiring a nul-terminated string.
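Here is a minimal sketch of such a helper, assuming haproxy's struct buffer
with its <area>, <size> and <data> fields; the committed version may differ in
its details:

#include <string.h>
#include <common/buf.h>

/* Copy at most <max> characters of <str> into the chunk, always appending a
 * trailing zero, and fail if the result would not fit.
 */
static inline int chunk_strncpy_sketch(struct buffer *chk, const char *str, size_t max)
{
        size_t len = strlen(str);

        if (len > max)
                len = max;
        if (len + 1 > chk->size)        /* keep room for the trailing zero */
                return 0;

        memcpy(chk->area, str, len);
        chk->area[len] = '\0';
        chk->data = len;
        return 1;
}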
Now, only one capture is mandatory in the path-info regex, the one matching the
script-name. The path-info capture is optional. Of course, it must be defined to
fill the PATH_INFO parameter. But it is not mandatory. This way, it is possible
to get the script-name part from the path, excluding the path-info.
This patch is small enough to be backported to 2.1.
If a regex to match the PATH_INFO parameter is configured, it systematically
fails if a newline or a null character is present in the URL-decoded path. So,
from the moment there is at least one "%0a" or "%00" in the request path, we
always fail to get the PATH_INFO parameter and the whole decoded path is used for
the SCRIPT_NAME parameter.
This is probably not the expected behavior. Because, most of the time, these
characters are not expected at all in a path, an error is now triggered when one
of these characters is found in the URL-decoded path before trying to execute
the path_info regex. However, this test is not performed if there is no regex
configured.
Note that in reality, the newline character is only a problem when HAProxy is
compiled with the pcre or pcre2 library and, conversely, the null character is only a
problem for the libc's regex library. But both are always excluded to avoid any
inconsistency depending on compile options.
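For illustration, the guard described above could look like the following
sketch (the function name is illustrative, not the committed one):

#include <string.h>

/* Reject a URL-decoded path containing a newline or a null character before
 * trying to run the path-info regex on it.
 */
static int path_is_matchable(const char *path, size_t len)
{
        return memchr(path, '\n', len) == NULL &&
               memchr(path, '\0', len) == NULL;
}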
An alternative, not implemented yet, is to replace these characters with another
one. If someone complains about this behavior, it will be re-evaluated.
This patch must be backported to all versions supporting the FastCGI
applications, so to 2.1 for now.
When calling the mux "destroy" method, the argument should be the mux
context, not the connection. In a few instances in the mux code, the
connection was used (mainly when the session wouldn't handle the idle
connection and the server pool was full), and that could lead to random
segfaults.
This should be backported to 2.1, 2.0, and 1.9
we cannot return right after socket opening as we need to move back to
the default namespace first
this should fix github issue #500
this might be backported to all versions >= 1.6
Fixes: b3e54fe387 ("MAJOR: namespace: add Linux network namespace
support")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
I managed to mess up the file's permissions while using a temporary
one during the last release, and to backport the non-exec version everywhere.
This can be backported as far as 1.7 now.
when `getsockopt` previously failed, we were trying to set defaultmss
to a value of -2.
this is a followup of github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
As haproxy won't build on AIX 7.2 using the old "aix52" TARGET, a new
TARGET was introduced which adds two special CFLAGS to prevent the
loading of AIX's xmem.h and var.h. This is done by defining the
corresponding include guards _H_XMEM and _H_VAR. Without excluding
those header files the build fails because of redefinition errors:
1)
CC src/mux_fcgi.o
In file included from /usr/include/sys/uio.h:90,
from /opt/freeware/lib/gcc/powerpc-ibm-aix7.1.0.0/8.3.0/include-fixed/sys/socket.h:104,
from include/common/compat.h:32,
from include/common/cfgparse.h:25,
from src/mux_fcgi.c:13:
src/mux_fcgi.c:204:13: error: expected ':', ',', ';', '}' or '__attribute__' before '.' token
struct ist rem_addr;
^~~~~~~~
2)
CC src/cfgparse-listen.o
In file included from include/types/arg.h:31,
from include/types/acl.h:29,
from include/types/proxy.h:41,
from include/proto/log.h:34,
from include/common/cfgparse.h:30,
from src/mux_h2.c:13:
include/types/vars.h:30:8: error: redefinition of 'struct var'
struct var {
^~~
Furthermore, to enable multithreading via USE_THREAD, the atomic
library was added to the LDFLAGS. Finally, two new CPUs were added
to simplify the usage of power8 and power9 optimizations.
This TARGET was only tested on GCC 8.3 and may or may not work on
IBM's native C-compiler (XLC).
Should be backported to 2.1.
we were trying to close the file descriptor even when the `socket` call
failed.
this should fix github issue #499
this should be backported to all versions >= v1.8
Fixes: 153659f1ae ("MINOR: tcp: When binding socket, attempt to
reuse one from the old proc.")
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
When initializing a listener, let's make sure the bind_thread mask is
always limited to all_threads_mask when inserting the FD. This will
avoid seeing listening FDs with bits corresponding to threads that are
not active (e.g. when using "bind ... process 1/even"). The side effect
is very limited, all that was identified is that atomic operations are
used in fd_update_events() when not necessary. It's more a matter of
long-term correctness in practice.
This fix might be backported as far as 1.8 (then proto_sockpair must
be dropped).
In bug #495 we found that it is possible to resume a listener on an
nonexistent thread. This happens when a bind's thread_mask contains bits
out of the active threads mask, such as when using "1/odd" or "1/even".
The thread_mask was used as-is to pick a thread number to re-enable the
listener, and given that the highest number is used, 1/odd or 1/even can
produce quite high thread numbers and crash the process by queuing some
entries into non-existent lists.
This bug comes from an incomplete fix in commit 413e926ba ("BUG/MAJOR: listener:
fix thread safety in resume_listener()"), though it will only trigger if
some bind lines are explicitly bound to thread numbers higher than the
thread count. The fix must be backported to all branches having the fix
above (as far as 1.8, though the code is different there, see the commit
message in 1.8 for changes).
There are a few other places where bind_thread is used without
enforcing all_thread_mask, namely when doing fd_insert() while creating
listeners. It seems harmless but would probably deserve another fix.
While looking for other occurrences of do { continue; } while (0) I
found these few leftovers in mini-clist where an outer loop was made
around "do { } while (0)" then another loop was placed inside just to
handle the continue. Let's clean this up by just removing the outer
one. Most of the patch is only the inner part of the loop that is
reindented. It was verified that the resulting code is the same.
Issue #490 reports that there are a few bogus constructs of the famous
"do { if (cond) continue; } while (0)" in the connection code, that are
used to retry on I/O failures caused by receipt of a signal. Let's turn
them into the more correct "while (1) { if (cond) continue; break; }"
instead. This may or may not be backported, it shouldn't have any
visible effect.
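As an illustration of why the first construct is bogus: inside a
"do { } while (0)", "continue" jumps to the loop condition, which is false, so
the operation is never retried. The while(1) form below really retries, here on
a read() interrupted by a signal:

#include <errno.h>
#include <unistd.h>

static ssize_t read_retry(int fd, void *buf, size_t count)
{
        ssize_t ret;

        while (1) {
                ret = read(fd, buf, count);
                if (ret < 0 && errno == EINTR)
                        continue;       /* interrupted by a signal: retry */
                break;
        }
        return ret;
}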
"snap" images are updated frequently, while regular images are updated quarterly.
so, mixing "snap" and regular images lead to package naming mismatch, which will occur every
quarter. we cannot use 11.3 release image, it is broken, so we have to use 11.3 "snap" image.
Thus let us use all "snap" images. 13-snap is first introduced with this commit.
We do have some checks for the UNIX socket path length to validate the
full pathname of a unix socket but the pathname extension is only taken
into account when using a bind_prefix. The second check only matches
against MAXPATHLEN. So this means that path names between 98 and 108 characters
might successfully parse but fail to bind. Let's adjust the check in
the address parser and refine the error checking at the bind() step.
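A sketch of the kind of check involved (the function name is illustrative):
the prefix plus the path plus the trailing zero must fit into sun_path,
otherwise parsing must fail just like bind() later would:

#include <string.h>
#include <sys/un.h>

static int unix_path_fits(const char *prefix, const char *path)
{
        size_t max = sizeof(((struct sockaddr_un *)NULL)->sun_path);
        size_t len = strlen(path) + (prefix ? strlen(prefix) : 0);

        return len + 1 <= max;  /* keep room for the trailing zero */
}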
This addresses bug #493.
In commit 477902b ("MEDIUM: connections: Get ride of the xprt_done
callback.") we added an inconditional call to h2_wake_some_streams()
in h2_wake(), though we must not do it if the connection is destroyed
or we end up with a use-after-free. In this case it's already done in
h2_process() before destroying the connection anyway.
Let's just add this test for now. A cleaner approach might consist in
doing it in the h2_process() function itself when a connection status
change is detected.
No backport is needed, this is purely 2.2.
This patch provides a schematic of the new architecture based on the
struct cert_key_and_chain which appeared with haproxy 2.1.
Could be backported to 2.1.
A bug was introduced during TCP rules refactoring by the commit ac98d81f4
("MINOR: http-rule/tcp-rules: Make track-sc* custom actions"). There is no
stream when L4/L5 TCP rules are evaluated. For these rulesets, in track-sc*
actions, we must take care to rely on the session instead of the stream.
Because of this bug, any evaluation of L4/L5 TCP rules using a track-sc* action
leads to a crash of HAProxy.
No backport needed, except if the above commit is backported.
The code which is supposed to apply the bind_conf configuration on the
SSL_CTX was not called correctly. Indeed it was called with the previous
SSL_CTX so the new ones were left with default settings. For example the
ciphers were not changed.
This patch fixes #429.
Must be backported to 2.1.
This patch fixes memory leaks and a null pointer dereference found by coverity
on the error path when an HTTP return action is parsed. See issue #491.
No need to backport this patch, except if the HTTP return action is backported too.
In action_http_set_status(), when a rewrite error occurred, the stream error
flag must be set before returning the error.
No need to backport this patch except if commit 333bf8c33 ("MINOR: http-rules:
Set SF_ERR_PRXCOND termination flag when a header rewrite fails") is
backported. This bug was reported in issue #491.
The condition was inverted. When the branch was the master, it was
harmless because it caused an extra "checkout master", but when it
was not the master, the commit could be applied to the wrong branch
and it could even possibly not match the name to stop on.
I'm fed up with having to scroll my terminals trying to look for the
mail send command printed 30 minutes before the release, let's have
it copied into the e-mail template itself, and replace the old headers
that used to be duplicated there and that are not needed anymore.