Sample expressions involving converters simply do not work if the converter
changes the sample type from the original keyword's. Either the keyword is a
sample fetch keyword and its own type is used, or it's an ACL keyword, and the
keyword's parse/index/match functions are used despite the converters. Thus
it causes stupid errors such as:
redirect location / if { date,ltime(%a) -i Fri }
[ALERT] 240/171746 (29127) : parsing [bug-conv.cfg:35] : error detected in proxy 'svc' while parsing redirect rule : error in condition: 'Fri' is not a number.
In fact, in ACLs, the case where the ACL keyword is used alone is now an
exception (even though it is the most common one). It's an exception to the
sample expression parsing rules since ACLs are allowed to redefine alternate
parsing functions.
This fix does two things:
- it voids any reference to the ACL keyword when a converter is involved,
since we certainly do not want to enforce a parser for the wrong data type;
- it ensures that for all other cases (sample expressions), the type of
the expression is used instead of the type of the fetch keyword.
A significant cleanup of the code should be done, but this patch only aims
at fixing the bug.
The fix should be backported to 1.5 since this appeared along with the
redesign of the ACL/pattern processing.
Just like the previous patch, this is a leftover of an early implementation.
Also fix the outdated comments above. The fix may be backported to 1.5 though
the bug cannot be triggered, so it's just a matter of keeping the code clean.
pat_parse_meth() had some remains of an early implementation attempt for
the patterns: it initialises a trash buffer but never sets the pattern value
there. The result is that a non-standard method cannot be matched anymore.
The bug
appeared during the pattern rework in 1.5, so this fix must be backported
there. Thanks to Joe Williams of GitHub for reporting the bug.
This results in a string-based HTTP method match returning true when
it should not match, and vice versa. This bug was reported by Joe Williams.
The fix must be backported to 1.5, though it still doesn't work because
of at least 3-4 other bugs in the long path which leads to building this
pattern list.
Sometimes it would be convenient to have a log counter so that from a log
server we know whether some logs were lost or not. The frontend's log counter
serves exactly this purpose. It's incremented each time a traffic log is
produced. If a log is disabled using "http-request set-log-level silent",
the counter will not be incremented. However, admin logs are not accounted
for. Also, if logs are filtered out before being sent to the server because
of a minimum level set on the log line, the counter will be increased anyway.
The counter is 32-bit, so it will wrap, but that's not an issue considering
that 4 billion logs are rarely in the same file, let alone close to each
other.
Just by moving a few struct members around, we can avoid 32-bit holes
between 64-bit pointers and shrink the struct size by 48 bytes. That's
not huge but that's for free, so let's do it.
There are two sample fetches to get information about the presence of a
client certificate:
- ssl_fc_has_crt is true if there is a certificate present in the current
connection;
- ssl_c_used is true if there is a certificate present in the session.
If a session has been stopped and resumed, then ssl_c_used can be true while
ssl_fc_has_crt is false.
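For illustration only (the ACL and header names below are made up), the
difference can be observed on a frontend accepting optional client
certificates:
  acl conn_has_crt  ssl_fc_has_crt
  acl sess_has_crt  ssl_c_used
  # only set when the certificate was presented during an earlier handshake
  # of a resumed session, not during the current connection
  http-request set-header X-Resumed-With-Cert 1 if sess_has_crt !conn_has_crt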
In the client byte of the TLS TLV of Proxy Protocol V2, there is only one
bit to indicate whether a certificate is present on the connection. The
attached patch adds a second bit to indicate the presence of a certificate
for the session.
This maintains backward compatibility.
[wt: this should be backported to 1.5 to help maintain compatibility
between versions]
Before commit bbba2a8ecc (1.5-dev24-8), the tarpit code used to set the
timeout and return; after this commit, it sets the timeout, then goes to
the "done" label, which resets the timeout.
Thanks to Bryan Talbot for the bug report and analysis.
This should be backported to 1.5.
Google's boringssl has a different cipher_list which we cannot use the way
we do with OpenSSL. This is due to the "Equal preference cipher groups" feature:
https://boringssl.googlesource.com/boringssl/+/858a88daf27975f67d9f63e18f95645be2886bfb^!/
also see:
https://www.imperialviolet.org/2014/02/27/tlssymmetriccrypto.html
cipher_list is used in haproxy since commit f46cd6e4ec ("MEDIUM: ssl:
Add the option to use standardized DH parameters >= 1024 bits") to
check if DHE ciphers are used.
So, if boringssl is used, the patch just assumes that there is some
DHE cipher enabled. This will lead to false positives, but that's better
than compiler warnings and crashes.
This may be replaced one day by properly implementing the new-style
cipher_list; in the meantime this workaround allows building and using
boringssl.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
get_rfc2409_prime_1024() and friends are not available in Google's
boringssl, so use the fallback in that case.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
Google's boringssl doesn't currently support OCSP, so
disable it if detected.
OCSP support may be reintroduced as per:
https://code.google.com/p/chromium/issues/detail?id=398677
In that case we can simply revert this commit.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
Google's boringssl doesn't have OPENSSL_VERSION_TEXT, SSLeay_version()
or SSLEAY_VERSION; in fact, it doesn't have any real versioning, it's
just git-based.
So in case we build against boringssl, we can't access those values.
Instead, we just inform the user that HAProxy was built against
boringssl.
Signed-off-by: Lukas Tribus <luky-37@hotmail.com>
A couple of typos fixed in 'http-response replace-header':
- an error when counting the number of arguments
- a typo in the alert message
This should be backported to 1.5.
When the child process terminates, it should wake up the associated task to
process the result immediately, otherwise it will be available only when the
task expires.
This fix is specific to the 1.6 branch.
The documentation indicates that external checks can be used in a backend
section, but the code requires a listener to provide information in the script
arguments.
External checks were initialized late, during the first check, leaving some
variables uninitialized in such a scenario, which triggered a segfault when
they were accessed to collect error information.
To prevent the segfault, the external checks are now initialized earlier,
during process initialization itself, and the process quits if an error
occurs.
This fix is specific to the 1.6 branch.
Mark Brooks reported an issue with external healthchecks, where servers are
never marked as UP. This is due to a typo, which flags a successful check as
CHK_RES_FAILED instead of CHK_RES_PASSED.
This bug is specific to the 1.6 branch.
As a consequence of various recent changes on the sample conversion,
a corner case has emerged where it is possible to wait forever for a
sample in track-sc*.
The issue is caused by the fact that functions relying on sample_process()
don't all exactly work the same regarding the SMP_F_MAY_CHANGE flag and
the output result. Here it was possible to wait forever for an output
sample from stktable_fetch_key() without checking the SMP_OPT_FINAL flag.
As a result, if the client connects and closes without sending the data
while haproxy expects a sample which it believes may still arrive, it will
ignore the fact that no more data can come and will continue to wait.
This change adds control for SMP_OPT_FINAL before waiting for extra data.
The various relevant functions have been better documented regarding their
output values.
This fix must be backported to 1.5 since it appeared there.
Move all code out of the signal handlers, since this is potentially
dangerous. To make sure the signal handlers behave as expected, use
sigaction() instead of signal(). That also obsoletes messing with
the signal mask after restart.
Signed-off-by: Conrad Hoffmann <conrad@soundcloud.com>
Searching for the pid file in the list of arguments did not take flags
without parameters (e.g. -de) into account. Because of this, the wrapper
would use a different pid file than haproxy if such an argument was
specified before -p.
The new version can still yield a false positive for some crazy
situations, like your config file name starting with "-p", but
I think this is as good as it gets without using getopt or some
library.
Signed-off-by: Conrad Hoffmann <conrad@soundcloud.com>
If a source file includes proto/server.h twice or more, redefinition errors will
be triggered for such inline functions as server_throttle_rate(),
server_is_draining(), srv_adm_set_maint() and so on. Just move the #endif
directive to the end of the file to solve this issue.
Signed-off-by: Godbach <nylzhaowei@gmail.com>
Last commit 77d1f01 ("BUG/MEDIUM: connection: fix memory corruption
when building a proxy v2 header") was wrong, using &cn_trash instead
of cn_trash resulting in a warning and the client's SSL cert CN not
being stored at the proper location.
Thanks to Lukas Tribus for spotting this quickly.
This should be backported to 1.5 after the patch above is backported.
Add support for http-request track-sc, similar to what is done in
tcp-request for backends. A new act_prm field was added to HTTP
request rules to store the track params (table, counter). Just
like for TCP rules, the table is resolved while checking for
config validity. The code was mostly copied from the TCP code
with the exception that here we also update the HTTP request count
and rate by hand. Probably something could be factored out in the
future.
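As an illustration only (the table and proxy names are made up), the new
rule can be used much like its tcp-request counterpart:
  backend per_ip
      stick-table type ip size 1m expire 10m store http_req_rate(10s)

  frontend fe_web
      bind :80
      http-request track-sc0 src table per_ip
      http-request deny if { sc0_http_req_rate gt 100 }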
It seems like tracking flags should be improved to mark each hook
which tracks a key, so that we can have some checkpoints where to
increase counters of the past if not done yet, a bit like what is done
for TRACK_BACKEND.
Currently, the commit ID appears in the sub-version in snapshots, but
when people use the git repository, we only have the commits count,
and not the last commit ID, which requires counting commits when
troubleshooting. This change ensures that unreleased versions also
report the commit ID before the commit number, such as:
1.6-dev0-bbfd1a-50
Tagged versions will not have this, since the post-release commit count
is zero.
Doing so finally allows the hex converter to be applied to integers as well.
Note that all integers are represented as 32-bit big-endian values so that their
conversion remains human readable and portable. A later improvement to the
hex converter could be to make it trim leading zeroes, and/or to only report
a number of least significant bytes.
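For example (the header name is made up), an integer sample such as the
current date can now be emitted in hex:
  # emits the 32-bit big-endian date as 8 hex digits
  http-request set-header X-Date-Hex %[date,hex]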
From time to time it's useful to hash input data (scramble input, or
reduce the space needed in a stick table). This patch provides 3 simple
converters allowing use of the available hash functions to hash input
data. The output is an unsigned integer which can be passed into a header,
a log or used as an index for a stick table. One nice usage is to scramble
source IP addresses before logging when there are requirements to hide them.
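Assuming the converters are exposed under the names of their hash functions
(djb2 is used below; this naming is an assumption), scrambling the source
address before logging could look like:
  # log a hash of the client address instead of the address itself
  log-format %[src,djb2]\ %r\ %ST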
IP addresses are a perfect example of fixed-size data which we could
cast to binary, yet it was not allowed for lack of a cast function,
even though the opposite was allowed in ACLs. Make that possible both
in sample expressions and in stick tables.
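As a small illustration (the header name is made up), this allows feeding an
address to converters which expect binary input, such as hex:
  # source address cast to binary, then dumped as hex bytes
  http-request set-header X-Src-Hex %[src,hex]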
We're using the internal memory representation of base32 here, which is
wrong since these data might be exported to headers for logs or be used
to stick to a server and replicated to other peers. Let's convert base32
to big endian (network representation) when building the binary block.
This mistake is also present in 1.5, so it would be better to backport this fix.
Some users want to add their own data types to stick tables. We don't
want to use a linked list here for performance reasons, so we need to
continue to use an indexed array. This patch allows one to reserve a
compile-time-defined number of extra data types by setting the new
macro STKTABLE_EXTRA_DATA_TYPES to anything greater than zero, keeping
in mind that anything larger will slightly inflate the memory consumed
by stick tables (not per entry though).
Then calling stktable_register_data_store() with the new keyword will
either register a new keyword or fail if the desired entry was already
taken or the keyword already registered.
Note that this patch does not dictate how the data will be used, it only
offers the possibility to create new keywords and have an index to
reference them in the config and in the tables. The caller will not be
able to use stktable_data_cast() and will have to explicitly cast the
data pointers to the expected types. It can be used for experimentation
as well.
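Once a keyword has been registered this way (the keyword below is purely
hypothetical), it can be referenced in a stick-table "store" list just like
the built-in data types:
  stick-table type ip size 1m expire 30m store gpc0,my_extra_data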
OpenSSL does not free the DH * value returned by the callback specified
with SSL_CTX_set_tmp_dh_callback(), leading to a memory leak for SSL/TLS
connections using Diffie-Hellman Ephemeral key exchange.
This patch fixes the leak by allocating the DH * structs holding the DH
parameters once, at configuration time.
Note: this fix must be backported to 1.5.
Konstantin Romanenko reported a typo in the HTML documentation. The typo is
already present in the raw text version : the "shutdown sessions" command
should be "shutdown sessions server".
This one is not inherited from defaults into frontends or backends
because it would create a confusing situation where it would be hard
to disable it (since both the frontend and the backend would enable it).
With -march=native, certain compilers running in virtualized environments
may produce code that the same processor cannot execute, either because
of hypervisor bugs reporting wrong CPU features, or because of compiler
bugs forgetting to check CPU features. So better stop recommending this
combination so that users don't get trapped anymore.
Daniel Dubovik reported an interesting bug showing that the request body
processing was still not 100% fixed. If a POST request contained short
enough data to be forwarded at once before trying to establish the
connection to the server, we had no way to correctly rewind the body.
The first visible case is that balancing on a header does not always work
on such POST requests since the header cannot be found. But there are even
nastier implications which are that http-send-name-header would apply to
the wrong location and possibly even affect part of the request's body
due to an incorrect rewinding.
There are two options to fix the problem:
- the first one is to force the HTTP_MSG_F_WAIT_CONN flag on all hash-based
balancing algorithms and http-send-name-header, but there's always a
risk that any new algorithm forgets to set it;
- the second option is to account for the amount of skipped data before
the connection establishes so that we always know the position of the
request's body relative to the buffer's origin.
The second option is much more reliable and fits very well in the spirit
of the past changes to fix forwarding. Indeed, at the moment we have
msg->sov which points to the start of the body before headers are forwarded
and which equals zero afterwards (so it still points to the start of the
body before forwarding data). A minor change consists in always making it
point to the start of the body even after data have been forwarded. It means
that it can get a negative value (so we need to change its type to signed).
In order to avoid wrapping, we only do this as long as the other side of
the buffer is not connected yet.
Doing this definitely fixes the issues above for the requests. Since the
response cannot be rewound we don't need to perform any change there.
This bug was introduced/remained unfixed in 1.5-dev23 so the fix must be
backported to 1.5.
This patch adds two converters:
ltime(<format>[,<offset>])
utime(<format>[,<offset>])
Both use strftime() to emit the output string from an input date. ltime()
provides local time, while utime() provides the UTC time.
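For example, a local timestamp can be emitted in a custom log format (the
exact format string below is only illustrative):
  # e.g. 20140710162350 127.0.0.1:57325
  log-format %[date,ltime(%Y%m%d%H%M%S)]\ %ci:%cp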
These new converters make it possible to look up any sample expression
in a table, and check whether an equivalent key exists or not, and if it
exists, to retrieve the associated data (e.g. gpc0, request rate, etc.).
Till now it was only possible using tracking, but sometimes tracking is
not suited to only retrieving such counters, either because it's done too
early or because too many items need to be checked without necessarily
being tracked.
These converters all take a string on input, and then convert it again to
the table's type. This means that if an input sample is of type IPv4 and
the table is of type IP, it will first be converted to a string, then back
to an IP address. This is a limitation of the current design which does not
allow converters to declare that "any" type is supported on input. Since
strings are the only types which can be cast to any other one, this method
always works.
The following converters were added:
in_table, table_bytes_in_rate, table_bytes_out_rate, table_conn_cnt,
table_conn_cur, table_conn_rate, table_gpc0, table_gpc0_rate,
table_http_err_cnt, table_http_err_rate, table_http_req_cnt,
table_http_req_rate, table_kbytes_in, table_kbytes_out,
table_server_id, table_sess_cnt, table_sess_rate, table_trackers.
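As an illustration only (the table, header and ACL names are made up), this
makes it possible to consult the request rate of an address taken from a
header without tracking it in the current rule:
  backend per_ip_rates
      stick-table type ip size 1m expire 10m store http_req_rate(10s)

  frontend fe_web
      bind :80
      # the converter takes a string on input and converts it back to the
      # table's type (here: ip) before performing the lookup
      acl abuser req.hdr(x-forwarded-for),table_http_req_rate(per_ip_rates) gt 100
      http-request deny if abuser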
Currently we have stktable_fetch_key() which fetches a sample according
to an expression and returns a stick table key, but we also need a function
which does only the second half of it from a known sample. So let's cut the
function in two and introduce smp_to_stkey() to perform this lookup. The
first function was adapted to make use of it in order to avoid code
duplication.
When we were generating a hash, it was done using an unsigned long. When the
hash was used to select a backend, it was sent as an unsigned int. This made
it difficult to predict which backend would be selected.
This patch updates get_hash() and the hash methods to use an unsigned int,
to remain consistent throughout the codebase.
This fix should be backported to 1.5 and probably in part to 1.4.
Listening to an abstract namespace socket is quite convenient but
comes with some drawbacks that must be clearly understood when the
socket is being listened to by multiple processes. The trouble is
that the socket cannot be rebound if a new process attempts a soft
restart and fails, so only one of the initially bound processes
will still be bound to it, the other ones will fail to rebind. For
most situations it's not an issue but it needs to be indicated.
Abstract namespace sockets ignore the shutdown() call and do not make
it possible to temporarily stop listening. The issue it causes is that
during a soft reload, the new process cannot bind, complaining that the
address is already in use.
This change registers a new pause() function for unix sockets and
completely unbinds the abstract ones since it's possible to rebind
them later. It requires the two previous patches as well as the preceding
fixes.
This fix should be backported to 1.5 since the issue appears there.
When a listener resumes operations, supporting a full rebind makes it
possible to implement pause() as a full stop. This will be used for
pausing abstract namespace unix sockets.
In order to fix the abstract socket pause mechanism during soft restarts,
we'll need to proceed differently depending on the socket protocol. The
pause_listener() function already supports some protocol-specific handling
for the TCP case.
This commit makes this cleaner by adding a new ->pause() function to the
protocol struct, which, if defined, may be used to pause a listener of a
given protocol.
For now, only TCP has been adapted, with the specific code moved from
pause_listener() to tcp_pause_listener().