In preparation for thread-specific runqueues, change the task API so that
the callback takes 3 arguments: the task itself, the context, and the state,
which were previously retrieved from the task. This will allow these elements
to change atomically in the scheduler while the application uses the copied
values, and even to have NULL tasks later.
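A minimal sketch of the adjusted callback shape, assuming HAProxy's internal
types (the exact prototype below is illustrative):

  /* the scheduler now passes snapshots of the context and state, so it
   * can update the task's own fields atomically while the callback
   * keeps working on the copies */
  struct task *process_stream(struct task *t, void *ctx, unsigned short state)
  {
      struct stream *s = ctx;  /* copied context, no longer read from t */

      /* ... run the stream using s and state; t itself may later even
       * be NULL for context-only wakeups ... */
      return t;
  }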
The limit on the amount of data read works only if all the data is already
in the input buffer. Otherwise (if the data arrives in chunks), the total
amount of data is not taken into account: only the data read in the current
call is compared to the expected amount.
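An illustrative sketch of the accounting problem (variable names are
hypothetical, not the actual parser code): the total must accumulate across
calls instead of being recomputed from the current buffer alone.

  /* broken: only looks at what is present in this call's buffer */
  if (chunk_len >= expected_len)
      done = 1;

  /* fixed: accumulate what previous calls already consumed */
  total_read += chunk_len;
  if (total_read >= expected_len)
      done = 1;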
This patch must be backported to all versions from 1.9 down to 1.6.
When creating a state file using "show servers state", an empty field is
created in the srv_addr column if the server is from the socket family
AF_UNIX. This leads to a warning on startup when using
"load-server-state-from-file". This patch defaults srv_addr to "-" if
the socket family is not covered.
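A minimal sketch of the default, assuming the usual dump code shape
(variable names illustrative):

  const char *srv_addr = "-";  /* default for families we don't dump */

  switch (srv->addr.ss_family) {
  case AF_INET:
  case AF_INET6:
      srv_addr = addr_buf;     /* filled by the inet_ntop() path */
      break;
  }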
This patch should be backported to 1.8.
A few users reported that building without threads was accidentally broken
after commit 6b96f72 ("BUG/MEDIUM: pollers: Use a global list for fd
shared between threads.") due to all_threads_mask not being defined.
It's OK to set it to zero as other code parts do when threads are
enabled but only one thread is used.
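One possible shape of the fix, as a sketch (whether the actual tree uses a
define or a variable is an assumption here):

  #ifndef USE_THREAD
  /* single-threaded build: a zero mask, matching what the threaded
   * code uses when only one thread is enabled */
  unsigned long all_threads_mask = 0;
  #endif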
This needs to be backported to 1.8.
When checks fail, the code tries to run a dns resolution, in case the IP
changed.
The old way of doing that was, if the last dns resolution hadn't expired
yet, to check whether there was an applicable IP, which should be useless
because it had already been done when the resolution was first performed,
or otherwise to run a new resolution.
Both are a locking nightmare and lead to deadlocks, so instead just wake the
resolvers task, which should do the trick.
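A sketch of the wakeup, assuming the resolvers section's task is reachable
as below (the field name is an assumption):

  /* no lookup under locks: just wake the resolvers task and let it
   * handle the resolution on its own */
  task_wakeup(resolvers->t, TASK_WOKEN_OTHER);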
This should be backported to 1.8.
Sets OpenSSL 1.1.1's SSL_OP_PRIORITIZE_CHACHA unconditionally, as per [1]:
  When SSL_OP_CIPHER_SERVER_PREFERENCE is set, temporarily reprioritize
  ChaCha20-Poly1305 ciphers to the top of the server cipher list if a
  ChaCha20-Poly1305 cipher is at the top of the client cipher list. This
  helps those clients (e.g. mobile) use ChaCha20-Poly1305 if that cipher
  is anywhere in the server cipher list; but still allows other clients to
  use AES and other ciphers. Requires SSL_OP_CIPHER_SERVER_PREFERENCE.
[1] https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_clear_options.html
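Guarding the flag keeps older OpenSSL versions building; a minimal sketch
(the ctx variable is assumed from the surrounding init code):

  #if defined(SSL_OP_PRIORITIZE_CHACHA)
      /* only meaningful together with SSL_OP_CIPHER_SERVER_PREFERENCE */
      SSL_CTX_set_options(ctx, SSL_OP_PRIORITIZE_CHACHA);
  #endif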
RFC 7234 says:
  A cache MUST NOT store a response to any request, unless:
  [...] the Authorization header field (see Section 4.2 of [RFC7235]) does
  not appear in the request, if the cache is shared, unless the
  response explicitly allows it (see Section 3.2), [...]
In this patch we completely disable the cache upon the receipt of an
Authorization header in the request. In this case it is no longer possible
either to use the cache or to store into it.
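A sketch of the intent; request_has_header() is a hypothetical helper, and
while TX_CACHE_IGNORE follows HAProxy's flag naming, treat the whole snippet
as illustrative:

  if (request_has_header(req, "Authorization")) {
      txn->flags |= TX_CACHE_IGNORE;  /* never serve this from cache */
      goto skip_store;                /* and never store the response */
  }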
Thanks to Adam Eijdenberg of Digital Transformation Agency for raising
this issue.
This patch must be backported to 1.8.
The function hlua_ctx_resume() now returns error codes rather than text
messages. These error codes allow the caller to return an appropriate
message to the user.
Since commit 36d1374 ("BUG/MINOR: lua: Fix SSL initialisation") in 1.6, the
Lua code always initializes an SSL server. It caused a small visible side
effect which is that by calling ssl_sock_prepare_srv_ctx(), it forces
global.ssl_used_backend to 1 and makes the initialization code believe that
there are some SSL servers in certain backends. This detection is used to
figure how to set the global maxconn value when only the memory usage is
limited. As such, even a configuration with no SSL at all will have a very
conservative maxconn.
The configuration below exhibits this :
  global
      ssl-server-verify none
      stats socket /tmp/sock1 mode 666 level admin
      tune.bufsize 16384

  listen px
      timeout client 5s
      timeout server 5s
      timeout connect 5s
      bind :4445
      #bind :4443 ssl crt rsa+dh2048.pem
      #server s1 127.0.0.1:8003 ssl
Starting it with "-m 200" to limit it to 200 MB of RAM reports 1500 for
Maxconn, the same when uncommenting the "server" line, and 1300 when
uncommenting the "bind" line, regardless of the "server" line's status.
In practice it doesn't make sense to consider that Lua's server template
counts for one regular SSL server, because even if used for SSL, it will
not take large connection counts, compared to a backend relaying traffic.
Thus the solution consists in resetting ssl_used_backend to its
previous value after creating the server_ctx from the Lua code. With the
fix, the same config with the same parameters now shows :
- maxconn=5700 when neither side uses SSL
- maxconn=1500 when only one side uses SSL
- maxconn=1300 when both sides use SSL
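A minimal sketch of the save/restore approach, assuming the usual global
flag and function names (illustrative, not the exact patch):

  int saved = global.ssl_used_backend;

  ssl_sock_prepare_srv_ctx(srv);      /* may force the flag to 1 */
  global.ssl_used_backend = saved;    /* Lua's template must not count */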
This fix can be backported to versions 1.6 and beyond.
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the modsecurity implementation on the way flags are encoded and decoded. They
must be exchanged using the network byte order and not the host one.
Be careful though, this patch breaks the compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the mod_defender implementation on the way flags are encoded and decoded. They
must be exchanged using the network byte order and not the host one.
Be careful though, this patch breaks the compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
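For both agents the fix boils down to the same conversion; a self-contained
sketch (the helper name is illustrative):

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <string.h>

  /* frame flags must cross the wire in network byte order */
  static void spoe_encode_flags(uint8_t *buf, uint32_t flags)
  {
      uint32_t net = htonl(flags);
      memcpy(buf, &net, sizeof(net));
  }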
The announced accepted chars are "[a-zA-Z_-.]", but
the real accepted alphabet is "[a-zA-Z0-9_.]":
numbers are supported and "-" is not.
This patch should be backported to 1.8 and 1.7.
Function `hlua_socket_close` expected exactly one argument on the Lua stack.
But when `hlua_socket_close` was called from `hlua_socket_write_yield`,
Lua stack had 3 arguments. So `hlua_socket_close` threw the exception with
message "'close' needs 1 arguments".
Introduce a new helper function `hlua_socket_close_helper`, which removes the
Lua stack argument count check and only checks whether the first argument is
a socket.
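A simplified sketch of the split, using MAY_LJMP(), check_args() and
hlua_checksocket() as in hlua.c (the close logic itself is elided):

  static int hlua_socket_close_helper(lua_State *L)
  {
      struct hlua_socket *socket = MAY_LJMP(hlua_checksocket(L, 1));

      /* ... actual close logic, ignoring extra stack slots ... */
      (void)socket;
      return 0;
  }

  static int hlua_socket_close(lua_State *L)
  {
      /* the strict arity check stays in the Lua-visible entry point */
      MAY_LJMP(check_args(L, 1, "close"));
      return hlua_socket_close_helper(L);
  }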
This fix should be backported to 1.8, 1.7 and 1.6.
Commit 821bb9b ("MAJOR: threads/ssl: Make SSL part thread-safe") added
insufficient locking to the cert lookup and generation code : it uses
lru64_lookup(), which will automatically remove and add a list element
to the LRU list. It cannot be simply read-locked.
A long-term improvement should consist in using a lockless mechanism
in lru64_lookup() to safely move the list element at the head. For now
let's simply use a write lock during the lookup. The effect will be
minimal since it's used only in conjunction with automatically generated
certificates, which are much more expensive and rarely used.
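A sketch of the locking change (the lock label, rwlock and tree names are
approximate):

  /* lru64_lookup() moves the entry to the head of the LRU list,
   * so even a lookup is a write: take the write lock */
  HA_RWLOCK_WRLOCK(SSL_GEN_CERTS_LOCK, &ssl_ctx_lru_rwlock);
  lru = lru64_lookup(key, ssl_ctx_lru_tree, bind_conf, 0);
  HA_RWLOCK_WRUNLOCK(SSL_GEN_CERTS_LOCK, &ssl_ctx_lru_rwlock);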
This fix must be backported to 1.8.
Pawel Karoluk reported on Discourse[1] that HTTP/2 breaks url_param.
Christopher managed to track it down to the HTTP_MSGF_WAIT_CONN flag
which is set there to ensure the connection is validated before sending
the headers, as we may need to rewind the stream and hash again upon
redispatch. What happens is that in the forwarding code we refrain
from forwarding when this flag is set and the connection is not yet
established, and for this we go through the missing_data_or_waiting
path. This exit path was initially designed only to wait for data
from the client, so it rightfully checks whether or not the client
has already closed since in that case it must not wait for more data.
But it also has the side effect of aborting such a transfer if the
client has closed after the request, which is exactly what happens
in H2.
A study of the code reveals that this whole combined check should
be revisited : while it used to be true that waiting had the same
error conditions as missing data, it's not true anymore. Some other
corner cases were identified, such as the risk of reporting a server
close instead of a client timeout when waiting for the client to
read the last chunk of data if the shutr is already present, or
the risk of failing a redispatch when a client uploads some data and
closes before the connection establishes. The compression seems to
be at risk of rare issues there if a write to a full buffer is not
yet possible but a shutr is already queued.
At the moment these risks are extremely unlikely but they do exist,
and their impact is very minor since it mostly concerns an issue not
being optimally handled, and the fixes risk causing more serious
issues. Thus this patch only focuses on how the HTTP_MSGF_WAIT_CONN
is handled and leaves the rest untouched.
This patch needs to be backported to 1.8, and could be backported to
earlier versions to properly take care of HTTP/1 requests passing via
url_param which are closed immediately after the headers, though this
is unlikely as this behaviour is only exhibited by scripts.
[1] https://discourse.haproxy.org/t/haproxy-1-8-x-url-param-issue-in-http2/2482/13
In commit abbf607 ("MEDIUM: cli: Add payload support") some cli keywords
without a usage message have been added at the beginning of the keywords
array.
cli_gen_usage_msg() uses kw->usage == NULL to stop generating
the usage message for the current keywords array. With those keywords at
the beginning, the whole array in cli.c was ignored in the usage message
generation.
This patch now checks the keyword itself, allowing a keyword without a
usage message anywhere in the array.
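A sketch of the adjusted loop (field names follow struct cli_kw but are
approximate):

  for (kw = kw_list->kw; kw->str_kw[0] != NULL; kw++) {
      if (!kw->usage)
          continue;  /* keyword without usage: skip it, don't stop */
      chunk_appendf(tmp, "  %s\n", kw->usage);
  }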
When composing the event list to subscribe to kqueue events, the index
where a new event is added must come after the previously added events,
so the changes counter must keep counting up across events.
This caused haproxy to accept connections but not try to read and process
the incoming data.
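A minimal sketch of the corrected pattern with the standard EV_SET() macro
(the kev array and changes counter are assumed from the poller's context):

  /* keep appending after the changes already queued in this round,
   * instead of restarting the index for each fd */
  EV_SET(&kev[changes++], fd, EVFILT_READ,  EV_ADD, 0, 0, NULL);
  EV_SET(&kev[changes++], fd, EVFILT_WRITE, EV_ADD, 0, 0, NULL);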
This patch is for 1.9 only
In function hlua_fcn_new_proxy() too small a buffer was passed to
snprintf(), causing snprintf() to fail for large proxy or listener
IDs. It is unlikely to meet this case but let's fix it anyway.
This fix must be backported to all stable branches where it applies.
The parameters like server-address, port and timeout should be set before
the process_stream task is called, to avoid the stream being 'closed' before
it got initialized properly. This is most clearly visible when running with
tune.lua.forced-yield=1. So scheduling the task should not be done when
creating the lua socket, but when connect is called; otherwise the error
"socket: not yet initialised, you can't set timeouts." would appear.
The code below, for example, also shows this issue, as the sleep will
yield the lua code:
  local con = core.tcp()
  core.sleep(1)
  con:settimeout(10)
The polled_mask is only used in the pollers, and removing it from
struct fdtab makes the struct fit in one 64-byte cacheline again on a
64-bit machine, so make it a separate array.
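A sketch of the resulting storage (the allocation site is illustrative):

  /* one polled mask word per fd, kept out of struct fdtab */
  unsigned long *polled_mask;

  polled_mask = calloc(global.maxsock, sizeof(*polled_mask));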
With the old model, any fd shared by multiple threads, such as listeners
or dns sockets, would only be updated on one thread, which could lead
to missed events or spurious wakeups.
To avoid this, add a global list for fds that are shared, using the same
implementation as the fd cache, and only remove entries from this list
once every thread has updated its poller.
[wt: this will need to be backported to 1.8 but differently so this patch
must not be backported as-is]
Modify fd_add_to_fd_list() and fd_rm_from_fd_list() so that they take an
offset in the fdtab to the list entry, instead of hardcoding the fd cache,
so we can use them with other lists.
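A sketch of the generalized helpers (the prototypes are believed to match
the tree, but treat them as illustrative):

  /* the list member is located at an offset into struct fdtab, so the
   * same code can maintain the fd cache or any other fd list */
  void fd_add_to_fd_list(volatile struct fdlist *list, int fd, int off);
  void fd_rm_from_fd_list(volatile struct fdlist *list, int fd, int off);

  /* e.g. for the fd cache: */
  fd_add_to_fd_list(&fd_cache, fd, offsetof(struct fdtab, cache));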
While running a task, we may try to delete and free a task that is about to
be run, because it's part of the local tasks list, or because rq_next points
to it.
So, flag any task that is in the local tasks list for deletion instead of
running it, by setting t->process to NULL, and make rq_next a global,
thread-local variable again, which is modified if we attempt to delete that
task.
Many thanks to PiBa-NL for reporting this and analysing the problem.
This should be backported to 1.8.
The bug was introduced in commit 8d85aa4 ("BUG/MAJOR: map: fix
segfault during 'show map/acl' on cli").
This patch should be backported to 1.8, 1.7 and 1.6.
* A few typos
* Fix definitions of values which are tables, not arrays.
* Consistent US English naming for "server" instead of "serveur".
[tfo: should be backported to 1.6 and higher]
For large farms where servers are regularly added or removed, picking
a random server from the pool can ensure faster load transitions than
when using round-robin and less traffic surges on the newly added
servers than when using leastconn.
This commit introduces "balance random". It internally uses a random number
as the key to the consistent hashing mechanism, thus all features available
in consistent hashing, such as weights and bounded load via
hash-balance-factor, are usable. It is extremely convenient because one
common concern when using random is what happens when a server is hammered
a bit too much. Here that can trivially be avoided, like in the
configuration below :
  backend bk0
      balance random
      hash-balance-factor 110
      server-template s 1-100 127.0.0.1:8000 check inter 1s
Note that while "balance random" internally relies on a hash algorithm,
it holds the same properties as round-robin and as such is compatible with
reusing an existing server connection with "option prefer-last-server".
If a lua socket is waiting for data, it currently spins at 100% CPU usage.
This is because the TICK_ETERNITY returned by the socket is ignored when
setting the 'expire' time of the task.
Fixed by removing the check for yields that return TICK_ETERNITY.
This should be backported to at least 1.8.
In the sync point, to know if a thread has requested a synchronization, we call
the function thread_need_sync(). It should return 1 if yes, otherwise it should
return 0. It returns a signed integer.
But internally, instead of returning 0 or 1, it returns 0 or tid_bit
(threads_want_sync & tid_bit). So tid_bit is cast to an integer. For the first
32 threads it's ok, because we always check if thread_need_sync() returns
something other than 0. But this is a problem if HAProxy is started with more
than 32 threads, because for threads 33 to 64 (so for tid 32 to 63), their
tid_bit cast to an integer evaluates to 0. So the sync point does not work for
more than 32 threads.
Now, the function thread_need_sync() respects its contract, returning 0 or
1. The function thread_no_sync() has also been updated to avoid any ambiguity.
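One way to honor that contract, as a sketch:

  static inline int thread_need_sync(void)
  {
      /* a plain boolean survives the cast for tids >= 32, where
       * (int)(threads_want_sync & tid_bit) would truncate to 0 */
      return (threads_want_sync & tid_bit) != 0;
  }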
This patch must be backported to HAProxy 1.8.
This patch fixes an old bug introduced in the commit 7b1d47ce ("MAJOR: checks:
move health checks changes to set_server_check_status()"). When a DOWN server is
flapping, every time a check succeeds, check->health is incremented. But when a
check fails, it is decremented only when it is higher than the rise value. So if
only one check succeeds for a DOWN server, check->health will remain set to 1 for
all subsequent failing checks.
So, at first glance, it seems not that terrible because the server remains
DOWN. But it is reported in the transitional state "DOWN server, going up". And
it will remain in this state until it is UP again. And there is also an
insidious side effect. If a DOWN server is flapping from time to time, it will
end up being considered UP after a single successful check, regardless of the
rise threshold, because check->health will be increased slowly and never
decreased.
To fix the bug, we just need to reset check->health to 0 when a check fails for
a DOWN server. To do so, we just need to relax the condition to handle a failure
in the function set_server_check_status.
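A minimal sketch of the relaxed failure handling (the exact condition is
illustrative):

  /* on a failed check for a server that is already DOWN, drop any
   * progress toward "rise" instead of leaving check->health at 1 */
  if (check->health < check->rise)
      check->health = 0;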
This patch must be backported to haproxy 1.5 and newer.
Previously these fetches would return empty results when HAProxy was
compiled without the requisite SSL support. This results in confusion and
problem reports from people who unexpectedly encounter the behavior.
Louis Chanouha reported an inappropriate warning when http-reuse is
present in a defaults section while a TCP proxy accidentally inherits
it and finds a conflict with other options like the use of the PROXY
protocol. To fix this, the patch removes the http-reuse option for TCP
proxies.
This fix needs to be backported to 1.8, 1.7 and possibly 1.6.
This converter supplements the existing string matching by allowing
strings to be converted to a variable.
Example usage:
  http-request set-var(txn.host) hdr(host)
  # Check whether the client is attempting domain fronting.
  acl ssl_sni_http_host_match ssl_fc_sni,strcmp(txn.host) eq 0
PiBa-NL reported a bug with tasks registered in lua when HAProxy is started with
several threads. These tasks have no specific affinity with threads, so they
can be woken up on any thread. So, it is impossible for these tasks to handle
cosockets or applets, because cosockets and applets are sticky on the thread
which created them. It is forbidden to manipulate a cosocket from another
thread.
So to fix the bug, tasks registered in lua are now sticky to the current
thread. Because these tasks can be registered before threads creation, the
affinity is set the first time a lua task is processed.
This patch must be backported to HAProxy 1.8.
It is now possible to use a payload with the "set ssl ocsp-response"
command. These syntaxes will work the same way:
# echo "set ssl ocsp-response $(base64 -w 10000 ocsp.der)" | \
socat /tmp/sock1 -
# echo -e "set ssl ocsp-response <<\n$(base64 ocsp.der)\n" | \
socat /tmp/sock1 -
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
It is now possible to use a payload with the "add map" command.
These syntaxes will work the same way:
# echo "add map #-1 key value" | socat /tmp/sock1 -
# echo -e "add map #-1 <<\n$(cat data)\n" | socat /tmp/sock1 -
with
# cat data
key1 value1 with spaces
key2 value2
key3 value3 also with spaces
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
In order to use arbitrary data in the CLI (multiple lines or group of words
that must be considered as a whole, for example), it is now possible to add a
payload to the commands. To do so, the first line needs to end with a special
pattern: <<\n. Everything that follows will be left untouched by the CLI parser
and will be passed to the commands parsers.
Per-command support will need to be added to take advantage of this
feature.
Signed-off-by: Aurélien Nephtali <aurelien.nephtali@corp.ovh.com>
Upload requests not carrying a content-length nor tunnelling data must
be sent chunked-encoded over HTTP/1. The code was planned but for some
reason forgotten during the implementation, leading to such payloads
being sent as tunnelled data.
Browsers always emit a content length in uploads so this problem doesn't
happen for most sites. However some applications may send data frames
after a request without indicating it earlier.
The only way to detect that a client will need to send data is that the
HEADERS frame doesn't hold the ES bit. In this case it's wise to look
for the content-length header. If it's not there, either we're in tunnel
(CONNECT method) or chunked-encoding (other methods).
This patch implements this.
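A sketch of the decision; the flag follows the mux_h2 naming, but the
is_connect, seen_content_length and emit_header() names are hypothetical:

  /* the HEADERS frame's ES bit tells us whether DATA will follow */
  if (!(h2s->flags & H2_SF_ES_RCVD)) {
      if (is_connect)
          ; /* tunnel: no message framing at all */
      else if (!seen_content_length)
          /* no C-L: the H1 side must advertise chunked encoding */
          emit_header(out, "transfer-encoding", "chunked");
  }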
The following request is sent using content-length :

  curl --http2 -sk https://127.0.0.1:4443/s2 -XPOST -T /large/file

and these ones using chunked-encoding :

  curl --http2 -sk https://127.0.0.1:4443/s2 -XPUT -T /large/file
  curl --http2 -sk https://127.0.0.1:4443/s2 -XPUT -T - < /dev/urandom
Thanks to Robert Samuel Newson for raising this issue with details.
This fix must be backported to 1.8.
We'll need this in order to support uploading chunks. The h2 to h1
converter checks for the presence of the content-length header field
as well as the CONNECT method and returns this information to the
caller. The caller indicates whether or not a body is detected for
the message (presence of END_STREAM or not). No transfer-encoding
header is emitted yet.
PiBa-NL reported that haproxy crashes with a segmentation fault
if a function registered using `core.register_task` returns.
An example Lua script that reproduces the bug is:
  mytask = function()
      core.Info("Stopping task")
  end
  core.register_task(mytask)
The Valgrind output is as follows:
==6759== Process terminating with default action of signal 11 (SIGSEGV)
==6759== Access not within mapped region at address 0x20
==6759== at 0x5B60AA9: lua_sethook (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==6759== by 0x430264: hlua_ctx_resume (hlua.c:1009)
==6759== by 0x43BB68: hlua_process_task (hlua.c:5525)
==6759== by 0x4FED0A: process_runnable_tasks (task.c:231)
==6759== by 0x4B2256: run_poll_loop (haproxy.c:2397)
==6759== by 0x4B2256: run_thread_poll_loop (haproxy.c:2459)
==6759== by 0x41A7E4: main (haproxy.c:3049)
Add the missing `task = NULL` for the `HLUA_E_OK` case. The error cases
have been fixed as of 253e53e661 which
first was included in haproxy v1.8-dev3. This bugfix should be backported
to haproxy 1.8.
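A simplified sketch of the fixed HLUA_E_OK branch in hlua_process_task(),
assuming the 1.8-era cleanup calls:

  case HLUA_E_OK:
      hlua_ctx_destroy(hlua);
      task_delete(task);
      task_free(task);
      task = NULL;  /* was missing: the scheduler must not reuse it */
      break;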
If TCP content inspection is used, msg_state can be >= HTTP_MSG_ERROR
the first time http_wait_for_request is called. t_idle was being left
unset in that case.
In the example below :
  stick-table type string len 64 size 100k expire 60s
  tcp-request inspect-delay 1s
  tcp-request content track-sc1 hdr(X-Session)
%Ti will always be -1, because the msg_state is already at HTTP_MSG_BODY
when http_wait_for_request is called for the first time.
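A sketch of the fix in http_wait_for_request(), using the usual stream log
fields (the exact placement is illustrative):

  /* also set t_idle when the message state is already past the
   * headers on the first call (TCP content inspection case) */
  if (s->logs.t_idle == -1)
      s->logs.t_idle = tv_ms_elapsed(&s->logs.tv_accept, &now)
                       - s->logs.t_handshake;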
This patch should be backported to 1.8 and 1.7.