When the connection is closed by HAProxy, the status code provided in the
DISCONNECT frame is lost. By retransmitting it in the agent's reply, we are sure
to have it in the SPOE logs.
This patch may be backported to 1.8.
The commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order")
introduced an incompatibility with older agents. So the major version of the
SPOP is increased to make the situation unambiguous. And because the protocol
was buggy before the fix, support for version 1.0 is removed, to be sure not to
keep supporting buggy agents.
The agents in the contrib folder (spoa_example, modsecurity and mod_defender)
are also updated to announce SPOP version 2.0.
So, to be clear, from this patch on, connections to agents announcing SPOP
version 1.0 will be rejected.
This patch must be backported to 1.8.
The buffer pointer is already updated. It is then updated a second time when
the buffer is given to the function ci_putblk().
This patch must be backported to 1.6, 1.7 and 1.8.
When we write data, we risk encountering a deadlock: the function
stream_int_notify() cannot be called from the cosocket because the caller
holds a lock, and when the socket is closed, the cleanup function tries to
acquire the same lock, so a deadlock arises.
On the other hand, the function stream_int_update_applet() can't be used
because it schedules the applet only if some activity was detected in the
buffers, which is not always the case. We replace this function with
appctx_wakeup(), which wakes up the applet unconditionally.
The last part of the fix is setting the right signals: the applet calls the
stream_int_update() function if the output buffer is not empty, and asks to
put data if some write signals are registered.
This patch must be backported to 1.6, 1.7 and 1.8. Note that it requires
patch "MINOR: task/notification: Is notifications registered" to be
applied.
Each time the send function yields, a notification must be registered.
Without this notification, the task is never woken up when data arrives.
Today, the notification is registered only if the buffer is not available.
Other cases, such as the buffer being too small to hold all the data, are
not handled.
This patch must be backported to 1.6, 1.7 and 1.8.
In some cases, when we are waiting for data and the socket timeout expires,
we have a deadlock. The Lua socket locks the applet socket and calls for a
notify. The notify immediately executes code and tries to acquire the same
lock, so... deadlock.
stream_int_notify() can't be used because it wakes up the applet task only
if the stream has changed. The changes are forced by Lua, but not reported
on the stream.
stream_int_update_applet() can't be used because of the deadlock. So, I
unconditionally wake up the applet. This wakeup is performed asynchronously,
and will call stream_int_notify().
This patch must be backported to 1.6, 1.7 and 1.8.
The appctx pointer is assigned from the wrong variable. This implies the
wakeup of the wrong applet, and the socket is no longer responsive.
This behavior is hidden by another inherited error, which is fixed in the
next patch.
This patch removes all the wrong appctx assignments.
This patch must be backported to 1.6, 1.7 and 1.8.
This function returns true if some notifications are registered. It is
useful for the following patch
  BUG/MEDIUM: lua/socket: Sheduling error on write: may dead-lock
It should be backported to 1.6, 1.7 and 1.8.
This is required to let a message processing time out. When that happens,
there is no more context attached to the SPOE applet that sent the NOTIFY
frame, so when the ACK is received, it is too late. The same situation occurs
when we receive the wrong ACK; it is invalid in sync mode. Otherwise, the SPOE
applet remains in the state "WAITING_SYNC_ACK" until the idle timeout is
reached. In such a case, the applet is seen as busy and is unusable. If this
happens too often, more and more applets will be created because the others
are blocked. If there is a maxconn on the SPOE backend, all processing will be
drastically slowed down.
Returning an error in such cases, in sync mode, allows us to terminate the
SPOE applet, because it means the agent is unresponsive or too slow.
Note this bug exists only if the sync mode is used.
This patch must be backported to 1.8.
This introduces a new directive for the `resolvers` section:
`parse-resolv-conf`. When present, it will attempt to add any
nameservers in `/etc/resolv.conf` to the list of nameservers
for the current `resolvers` section.
[Mailing list thread][1].
[1]: https://www.mail-archive.com/haproxy@formilux.org/msg29600.html
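As an illustration, a minimal sketch of a resolvers section using the new
directive might look like this (the section name and the static nameserver
are examples only):

    resolvers mydns
        parse-resolv-conf
        nameserver ns1 192.168.0.1:53
        timeout resolve 1s
        timeout retry   1s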
We're taking tasks from the global runqueue based on the number of tasks the
thread already has in its local runqueue, but now that we have a task list,
we also have to take that into account.
When the task list was introduced, we bogusly lost the "max_processed--",
which means we would execute as many tasks as present in the list, and we
would never set active_tasks_mask, so the thread would go to sleep even if
more tasks remained to be executed.
1.9-dev only, no backport is needed.
Don't forget to increase tasks_run_queue when we're adding a task to the
tasklet list, and to decrease it when we remove a task from a runqueue,
or its value won't be accurate, and could lead to tasks not being executed
when put in the global run queue.
1.9-dev only, no backport is needed.
These ones concern the warnings detected during header addition/insertion.
They are visible in the tooltip reporting the per-status codes stats. The
frontend and backend contain a total of request+response warnings, while
server only has the response warnings.
This patch adds a warning if an http-(request|response) (add|set)-header
rewrite fails to change the respective header in a request or response.
This usually happens when tune.maxrewrite is not sufficient to hold all
the headers that should be added.
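For illustration, a configuration along these lines (the values are examples
only) could trigger the new warning once the added headers no longer fit
within the reserved rewrite space:

    global
        tune.maxrewrite 64

    listen www
        bind :8080
        http-request set-header X-Forwarded-Proto http
        server app 127.0.0.1:8081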
When using table_* converters, ref_cnt was incremented and never
decremented, causing entries to never expire.
The root cause appears to be that stktable_lookup_key()
was called within all sample_conv_table_* functions which was
incrementing ref_cnt and not decrementing after completion.
Added stktable_release() to the end of each sample_conv_table_*
function and reworked the end logic to ensure that ref_cnt is
always decremented after use.
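The corrected pattern can be sketched as follows (simplified, with the exact
signatures and the counter extraction elided; stktable_release() is assumed
to take the table and the entry):

    ts = stktable_lookup_key(t, key);  /* takes a reference: ts->ref_cnt++ */
    smp->data.u.sint = 0;
    if (ts) {
        /* ... read the requested data from the entry ... */
        stktable_release(t, ts);   /* added: ref_cnt-- so the entry may expire */
    }
    return 1;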
This should be backported to 1.8.
There's no real reason to have a specific scheduler for applets anymore, so
nuke it and just use tasks. This comes with some benefits, the first one
being that applets cannot induce high latencies anymore since they share
nice values with other tasks. Later it will be possible to configure the
applets' nice value. The second benefit is that the applet scheduler was
not very thread-friendly, having a big lock around it in anticipation of this
change. Thus applet-intensive workloads should now scale much better with
threads.
Some more improvement is possible now: some applets also use a task to
handle timers and timeouts. These ones could now be simplified to use only
one task.
Introduce tasklets, lightweight tasks. They have no notion of priority,
they are just run as soon as possible, and will probably be used for I/O
later.
For the moment they're used to replace the temporary thread-local list
that was used in the scheduler. The first part of the struct is common
with tasks so that tasks can be cast to tasklets and queued in this list.
Once a task is in the tasklet list, it has its leaf_p set to 0x1 so that it
cannot accidentally be confused as not being in the queue.
Pure tasklets are identifiable by their nice value of -32768 (which is
normally not possible).
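The layout trick may be sketched like this (illustrative definitions, not
the exact ones):

    #define TASK_COMMON \
        struct { \
            unsigned short state; \
            short nice;          /* -32768 identifies a pure tasklet */ \
            unsigned int calls; \
            void *context; \
        }

    struct task {
        TASK_COMMON;             /* must remain first */
        /* tree node for the run queue, expiry, thread mask, ... */
    };

    struct tasklet {
        TASK_COMMON;             /* same first part: a task may be cast to a tasklet */
        struct list list;        /* the tasklet list element */
    };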
A lot of tasks are run on one thread only, so instead of having them all
in the global runqueue, create a per-thread runqueue which doesn't require
any locking, and add all tasks belonging to only one thread to the
corresponding runqueue.
The global runqueue is still used for non-local tasks, and is visited
by each thread when checking its own runqueue. The nice parameter is
thus used both in the global runqueue and in the local ones. The rare
tasks that are bound to multiple threads will have their nice value
used twice (once for the global queue, once for the thread-local one).
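The queueing decision can be summarized by this sketch (helper and lock
names are assumptions, not the actual code):

    if (t->thread_mask == tid_bit) {
        /* bound to this thread only: lock-free local run queue */
        insert_into_local_runqueue(t);
    }
    else {
        /* visible to several threads: global run queue, under lock */
        HA_SPIN_LOCK(TASK_RQ_LOCK, &rq_lock);
        insert_into_global_runqueue(t);
        HA_SPIN_UNLOCK(TASK_RQ_LOCK, &rq_lock);
    }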
In preparation for thread-specific runqueues, change the task API so that the
callback takes 3 arguments: the task itself, the context, and the state, which
were previously retrieved from the task. This will allow these elements to
change atomically in the scheduler while the application uses the copied
values, and even to have NULL tasks later.
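In terms of prototype, the change described above looks like this (sketch):

    /* before: the handler re-read the context and state from the task */
    struct task *(*process)(struct task *t);

    /* after: the scheduler passes copies, which can then change
     * atomically in the task while the handler uses its own copy */
    struct task *(*process)(struct task *t, void *context, unsigned short state);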
The limit on the amount of data to read works only if all the data is in the
input buffer. Otherwise (if the data arrive in chunks), the total amount of
data is not taken into account; only the currently read data are compared to
the expected amount.
This patch must be backported from 1.9 down to 1.6.
When creating a state file using "show servers state", an empty field is
created in the srv_addr column if the server is from the socket family
AF_UNIX. This leads to a warning on startup when using
"load-server-state-from-file". This patch defaults srv_addr to "-" if the
socket family is not covered.
This patch should be backported to 1.8.
A few users reported that building without threads was accidentally broken
after commit 6b96f72 ("BUG/MEDIUM: pollers: Use a global list for fd shared
between threads.") due to all_threads_mask not being defined. It's OK to set
it to zero, as other code parts do when threads are enabled but only one
thread is used.
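A minimal sketch of the fix, assuming it simply provides the missing symbol
for non-threaded builds:

    #ifndef USE_THREAD
    /* without threads, the mask exists but stays zero, just like the
     * threaded code when only one thread is used */
    unsigned long all_threads_mask = 0;
    #endif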
This needs to be backported to 1.8.
When checks fail, the code tries to run a DNS resolution, in case the IP
changed.
The old way of doing that was to check, in case the last DNS resolution
hadn't expired yet, if there was an applicable IP (which should be useless,
because it has already been done when the resolution was first performed),
or else to run a new resolution.
Both are a locking nightmare and lead to deadlocks, so instead just wake the
resolvers task, which should do the trick.
This should be backported to 1.8.
Sets OpenSSL 1.1.1's SSL_OP_PRIORITIZE_CHACHA unconditionally, as per [1]:
  When SSL_OP_CIPHER_SERVER_PREFERENCE is set, temporarily reprioritize
  ChaCha20-Poly1305 ciphers to the top of the server cipher list if a
  ChaCha20-Poly1305 cipher is at the top of the client cipher list. This
  helps those clients (e.g. mobile) use ChaCha20-Poly1305 if that cipher
  is anywhere in the server cipher list; but still allows other clients to
  use AES and other ciphers. Requires SSL_OP_CIPHER_SERVER_PREFERENCE.
[1] https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_clear_options.html
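Concretely, the change boils down to something like this (the wrapper
function is hypothetical; the guard keeps older OpenSSL versions building):

    #include <openssl/ssl.h>

    static void ctx_prioritize_chacha(SSL_CTX *ctx)
    {
    #ifdef SSL_OP_PRIORITIZE_CHACHA
        /* only effective together with SSL_OP_CIPHER_SERVER_PREFERENCE */
        SSL_CTX_set_options(ctx, SSL_OP_PRIORITIZE_CHACHA);
    #endif
    }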
RFC 7234 says:
  A cache MUST NOT store a response to any request, unless:
  [...] the Authorization header field (see Section 4.2 of [RFC7235])
  does not appear in the request, if the cache is shared, unless the
  response explicitly allows it (see Section 3.2), [...]
In this patch we completely disable the cache upon the receipt of an
Authorization header in the request. In this case it's not possible to
either use the cache or store into the cache anymore.
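The logic amounts to the following sketch, where request_has_header() is a
stand-in for the actual header lookup, not a real helper:

    /* shared cache: a request carrying an Authorization header must
     * neither be served from the cache nor stored into it */
    if (request_has_header(req, "Authorization")) {
        serve_from_cache = 0;
        store_in_cache   = 0;
    }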
Thanks to Adam Eijdenberg of Digital Transformation Agency for raising
this issue.
This patch must be backported to 1.8.
The function hlua_ctx_resume() now returns fewer text messages and more
error codes. These error codes allow the caller to return an appropriate
message to the user.
Since commit 36d1374 ("BUG/MINOR: lua: Fix SSL initialisation") in 1.6, the
Lua code always initializes an SSL server. It caused a small visible side
effect which is that by calling ssl_sock_prepare_srv_ctx(), it forces
global.ssl_used_backend to 1 and makes the initialization code believe that
there are some SSL servers in certain backends. This detection is used to
figure how to set the global maxconn value when only the memory usage is
limited. As such, even a configuration with no SSL at all will have a very
conservative maxconn.
The configuration below exhibits this:

    global
        ssl-server-verify none
        stats socket /tmp/sock1 mode 666 level admin
        tune.bufsize 16384

    listen px
        timeout client 5s
        timeout server 5s
        timeout connect 5s
        bind :4445
        #bind :4443 ssl crt rsa+dh2048.pem
        #server s1 127.0.0.1:8003 ssl
Starting it with "-m 200" to limit it to 200 MB of RAM reports 1500 for
Maxconn, the same when uncommenting the "server" line, and 1300 when
uncommenting the "bind" line, regardless of the "server" line's status.
In practice it doesn't make sense to consider that Lua's server template
counts for one regular SSL server, because even if used for SSL, it will
not take large connection counts, compared to a backend relaying traffic.
Thus the solution consists in resetting the ssl_used_backend to its
previous value after creating the server_ctx from the Lua code. With the
fix, the same config with the same parameters now show :
- maxconn=5700 when neither side uses SSL
- maxconn=1500 when only one side uses SSL
- maxconn=1300 when both sides use SSL
This fix can be backported to versions 1.6 and beyond.
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the modsecurity implementation on the way flags are encoded or decoded. They
must be exchanged using network byte order and not host order.
Be careful though, this patch breaks compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
A recent fix on the SPOE revealed a mismatch between the SPOE specification and
the mod_defender implementation on the way flags are encoded or decoded. They
must be exchanged using network byte order and not host order.
Be careful though, this patch breaks compatibility with HAProxy SPOE before
commit c4dcaff3 ("BUG/MEDIUM: spoe: Flags are not encoded in network order").
The announced accepted characters are "[a-zA-Z_-.]", but the real accepted
alphabet is "[a-zA-Z0-9_.]": digits are supported and "-" is not.
This patch should be backported to 1.8 and 1.7.
The function `hlua_socket_close` expected exactly one argument on the Lua
stack. But when `hlua_socket_close` was called from `hlua_socket_write_yield`,
the Lua stack had 3 arguments, so `hlua_socket_close` threw an exception with
the message "'close' needs 1 arguments".
This introduces a new helper function `hlua_socket_close_helper`, which drops
the Lua stack argument count check and only checks that the first argument is
a socket.
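The resulting split looks roughly like this (a sketch reusing the helpers
seen in hlua.c; the close logic itself is elided):

    static int hlua_socket_close_helper(lua_State *L)
    {
        struct hlua_socket *socket;

        /* only validate the first argument, whatever the stack depth */
        socket = MAY_LJMP(hlua_checksocket(L, 1));
        /* ... perform the actual close ... */
        return 0;
    }

    /* the Lua-visible "close" keeps the strict one-argument check */
    static int hlua_socket_close(lua_State *L)
    {
        MAY_LJMP(check_args(L, 1, "close"));
        return hlua_socket_close_helper(L);
    }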
This fix should be backported to 1.8, 1.7 and 1.6.
Commit 821bb9b ("MAJOR: threads/ssl: Make SSL part thread-safe") added
insufficient locking to the cert lookup and generation code: it uses
lru64_lookup(), which will automatically remove and add a list element
to the LRU list. It cannot be simply read-locked.
A long-term improvement should consist in using a lockless mechanism
in lru64_lookup() to safely move the list element at the head. For now
let's simply use a write lock during the lookup. The effect will be
minimal since it's used only in conjunction with automatically generated
certificates, which are much more expensive and rarely used.
This fix must be backported to 1.8.
Pawel Karoluk reported on Discourse[1] that HTTP/2 breaks url_param.
Christopher managed to track it down to the HTTP_MSGF_WAIT_CONN flag
which is set there to ensure the connection is validated before sending
the headers, as we may need to rewind the stream and hash again upon
redispatch. What happens is that in the forwarding code we refrain
from forwarding when this flag is set and the connection is not yet
established, and for this we go through the missing_data_or_waiting
path. This exit path was initially designed only to wait for data
from the client, so it rightfully checks whether or not the client
has already closed since in that case it must not wait for more data.
But it also has the side effect of aborting such a transfer if the
client has closed after the request, which is exactly what happens
in H2.
A study on the code reveals that this whole combined check should
be revisited: while it used to be true that waiting had the same
error conditions as missing data, it's not true anymore. Some other
corner cases were identified, such as the risk to report a server
close instead of a client timeout when waiting for the client to
read the last chunk of data if the shutr is already present, or
the risk to fail a redispatch when a client uploads some data and
closes before the connection establishes. The compression seems to
be at risk of rare issues there if a write to a full buffer is not
yet possible but a shutr is already queued.
At the moment these risks are extremely unlikely but they do exist,
and their impact is very minor since it mostly concerns an issue not
being optimally handled, and the fixes risk causing more serious
issues. Thus this patch only focuses on how the HTTP_MSGF_WAIT_CONN
is handled and leaves the rest untouched.
This patch needs to be backported to 1.8, and could be backported to
earlier versions to properly take care of HTTP/1 requests passing via
url_param which are closed immediately after the headers, though this
is unlikely as this behaviour is only exhibited by scripts.
[1] https://discourse.haproxy.org/t/haproxy-1-8-x-url-param-issue-in-http2/2482/13
In commit abbf607 ("MEDIUM: cli: Add payload support") some cli keywords
without usage message have been added at the beginning of the keywords
array.
cli_gen_usage_msg() uses kw->usage == NULL to stop generating the usage
message for the current keywords array. With those keywords at the beginning,
the whole array in cli.c was ignored in the usage message generation.
This patch now checks the keyword itself, allowing a keyword without
usage message anywhere in the array.
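The iteration now looks roughly like this (array and field names assumed
from the description):

    /* stop on the end of the keyword array itself, and merely skip
     * entries that have no usage message */
    for (kw = kws->kw; kw->str_kw[0] != NULL; kw++) {
        if (!kw->usage)
            continue;    /* payload-style keyword: no usage line */
        /* ... append kw->usage to the generated message ... */
    }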
When composing the event list for subscribing to kqueue events, the index
where the new event is added must be after the previous events; as such, the
changes counter should continue counting.
This caused haproxy to accept connections but not try to read and process
the incoming data.
This patch is for 1.9 only.
In the function hlua_fcn_new_proxy(), too small a buffer was passed to
snprintf(), causing snprintf() to fail with large proxy or listener IDs.
This case is unlikely to be met, but let's fix it anyway.
This fix must be backported to all stable branches where it applies.
The parameters like server-address, port and timeout should be set before the
process_stream task is called, to avoid the stream being 'closed' before it
got initialized properly. This is most clearly visible when running with
tune.lua.forced-yield=1: the error "socket: not yet initialised, you can't
set timeouts." would then appear. So scheduling the task should not be done
when creating the lua socket, but when connect is called.
The code below, for example, also shows this issue, as the sleep will yield
the lua code:
    local con = core.tcp()
    core.sleep(1)
    con:settimeout(10)
The polled_mask is only used in the pollers, and removing it from struct
fdtab makes the latter fit in one 64B cacheline again on a 64-bit machine,
so make it a separate array.
With the old model, any fd shared by multiple threads, such as listeners or
dns sockets, would only be updated on one thread, which could lead to missed
events or spurious wakeups.
To avoid this, add a global list for fds that are shared, using the same
implementation as the fd cache, and only remove entries from this list when
every thread has updated its poller.
[wt: this will need to be backported to 1.8 but differently so this patch
must not be backported as-is]