In order to manage "If-Modified-Since" requests, we need to keep a
reference time for our cache entries (to which the conditional request's
date will be compared).
This reference is either extracted from the "Last-Modified" header, or
the "Date" header, or the reception time of the response (in decreasing
order of priority).
The date values are converted into seconds since epoch in order to ease
comparisons and to limit storage space.
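As a rough illustration of that conversion step only (a standalone sketch: the
cache code presumably relies on HAProxy's own date helpers, and strptime()/
timegm() are assumptions of this example, not the functions actually used), an
IMF-fixdate value can be turned into seconds since the epoch like this:

    #define _GNU_SOURCE            /* for strptime() and timegm() on glibc */
    #include <time.h>

    /* Convert an HTTP date such as "Tue, 15 Nov 1994 08:12:31 GMT" into
     * seconds since the epoch; returns (time_t)-1 on parse error. Only
     * the most common HTTP date format is handled in this sketch.
     */
    time_t http_date_to_epoch(const char *date)
    {
        struct tm tm = { 0 };

        if (!strptime(date, "%a, %d %b %Y %H:%M:%S GMT", &tm))
            return (time_t)-1;
        return timegm(&tm);        /* HTTP dates are always expressed in GMT */
    }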
BoringSSL pretends to be OpenSSL version 1.1.1, which breaks some
version-based feature presence checks, for example the OpenSSL-specific
early data support.
Let's change that feature detection to a check on the
SSL_READ_EARLY_DATA_SUCCESS macro instead of a version comparison.
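A minimal sketch of the kind of check meant here (the HAVE_OPENSSL_EARLY_DATA
name is only illustrative, not the macro used in the tree):

    /* SSL_READ_EARLY_DATA_SUCCESS only exists when the OpenSSL early
     * data API is present, so this works regardless of the version
     * number BoringSSL reports.
     */
    #if defined(SSL_READ_EARLY_DATA_SUCCESS)
    #define HAVE_OPENSSL_EARLY_DATA 1
    #endif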
Previous commit ae32ac74db ("BUG/MINOR: log: fix memory leak on logsrv
parse error") addressed one issue and introduced another one: the logsrv
pointer may also be null at the end of the function, so we must test it
before deciding to dereference it.
This should be backported along with the patch above to 2.2.
In case of parsing error on logsrv, we can leave parse_logsrv() without
releasing logsrv->ring_name or smp_rgs. Let's free them on the error path.
This should fix issue #926 detected by Coverity.
The impact is only a tiny leak just before reporting a fatal error, so it
will essentially annoy valgrind.
This can be backported to 2.0 (just drop the ring part).
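A generic sketch of the pattern used by the fix (the field names come from the
message above, everything else is simplified; this is not the real
parse_logsrv()):

    #include <stdlib.h>
    #include <string.h>

    struct logsrv_sketch {
        char *ring_name;
        void *smp_rgs;
    };

    /* On any parse error, release what was already allocated instead of
     * leaving it to leak just before the fatal error is reported.
     */
    int parse_sketch(struct logsrv_sketch *ls, const char *arg)
    {
        ls->ring_name = strdup(arg);
        ls->smp_rgs   = calloc(4, sizeof(int));
        if (!ls->ring_name || !ls->smp_rgs)
            goto error;
        return 1;
      error:
        free(ls->ring_name);
        free(ls->smp_rgs);
        ls->ring_name = NULL;
        ls->smp_rgs = NULL;
        return 0;
    }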
It's a regression from b3201a3e "BUG/MINOR: disable dynamic OCSP load
with BoringSSL". The original bug is linked to 76b4a12 "BUG/MEDIUM: ssl:
memory leak of ocsp data at SSL_CTX_free()": ssl_sock_free_ocsp()
should be in #ifndef OPENSSL_IS_BORINGSSL.
To avoid a long #ifdef around a small piece of code, the BoringSSL part
of the OCSP load is isolated in a simple #ifdef.
This must be backported to 2.2 and 2.1.
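A hedged sketch of the intended layout (contents are placeholders, only the
#ifdef structure matters here):

    #ifndef OPENSSL_IS_BORINGSSL
    /* ssl_sock_free_ocsp() and the rest of the dynamic OCSP handling
     * are compiled only for regular OpenSSL builds.
     */
    #endif

    #ifdef OPENSSL_IS_BORINGSSL
    /* the small BoringSSL-specific OCSP loading part lives in its own
     * #ifdef rather than stretching a long conditional block.
     */
    #endif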
`att_beg` is assigned the value of `next` at the end of the `for` loop,
but it is assigned `prev` at the beginning of the loop, and `prev` itself
is assigned `next` after each iteration. So this is a double assignment
of the same value. Also, `att_beg` is not used after the end of the
loop.
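A simplified, self-contained illustration of the pattern (not the real cookie
parser; the parsing body is reduced to a single step):

    void scan_attributes(char *hdr_beg, char *hdr_end)
    {
        char *prev, *next, *att_beg;

        for (prev = hdr_beg; prev < hdr_end; prev = next) {
            att_beg = prev;      /* attribute starts where prev points */
            next = att_beg + 1;  /* stand-in for the real attribute parsing */
            att_beg = next;      /* redundant: prev takes next below, and
                                  * att_beg takes prev again on the next
                                  * iteration; att_beg is never read after
                                  * the loop either
                                  */
        }
    }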
this is a partial fix for github issue #923; all the others could
probably be marked as intentional to protect future changes.
no backport needed.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Released version 2.3-dev8 with the following main changes :
- MINOR: backend: replace the lbprm lock with an rwlock
- MINOR: lb/map: use seek lock and read locks where appropriate
- MINOR: lb/leastconn: only take a read lock in fwlc_get_next_server()
- MINOR: lb/first: use a read lock in fas_get_next_server()
- MINOR: lb/chash: use a read lock in chash_get_server_hash()
- BUG/MINOR: disable dynamic OCSP load with BoringSSL
- BUILD: ssl: make BoringSSL use its own version numbers
- CLEANUP: threads: don't register an initcall when not debugging
- MINOR: threads: change lock_t to an unsigned int
- CLEANUP: tree-wide: reorder a few structures to plug some holes around locks
- CLEANUP: task: remove the unused and mishandled global_rqueue_size
- BUG/MEDIUM: connection: Never cleanup server lists when freeing private conns
- MEDIUM: config: report that "nbproc" is deprecated
- BUG/MINOR: listener: close before free in `listener_accept`
- MINOR: ssl: 'ssl-load-extra-del-ext' removes the certificate extension
- BUG/MINOR: queue: properly report redistributed connections
- CONTRIB: tcploop: remove unused local variables in tcp_pause()
- BUILD: makefile: add entries to build common debugging tools
- BUG/MEDIUM: server: support changing the slowstart value from state-file
- MINOR: http: Add `enum etag_type http_get_etag_type(const struct ist)`
- MINOR: http: Add etag comparison function
- MEDIUM: cache: Store the ETag information in the cache_entry
- MEDIUM: cache: Add support for 'If-None-Match' request header
- REGTEST: cache: Add if-none-match test case
- CLEANUP: compression: Make use of http_get_etag_type()
- BUG/MINOR: http-ana: Don't send payload for internal responses to HEAD requests
- BUG/MAJOR: mux-h2: Don't try to send data if we know it is no longer possible
- MINOR: threads/debug: only report used lock stats
- MINOR: threads/debug: only report lock stats for used operations
    - MINOR: proxy: replace the spinlock with an rwlock
- MINOR: server: read-lock the cookie during srv_set_dyncookie()
- MINOR: proxy/cli: only take a read lock in "show errors"
- OPTIM: queue: don't call pendconn_unlink() when the pendconn is not queued
- MINOR: queue: split __pendconn_unlink() in per-srv and per-prx
- MINOR: queue: reduce the locked area in pendconn_add()
- OPTIM: queue: make the nbpend counters atomic
- OPTIM: queue: decrement the nbpend and totpend counters outside of the lock
- MINOR: leastconn: take the queue length into account when queuing servers
- MEDIUM: fwlc: re-enable per-server queuing up to maxqueue
- Revert "OPTIM: queue: don't call pendconn_unlink() when the pendconn is not queued"
- MINOR: stats: support the "up" output modifier for "show stat"
- MINOR: stats: also support a "no-maint" show stat modifier
- MINOR: stats: indicate the number of servers in a backend's status
- MEDIUM: ssl: ssl-load-extra-del-ext work only with .crt
- REGTEST: ssl: test "set ssl cert" with separate key / crt
- DOC: management: apply the "show stat" modifiers to "show stat", not "show info"
- MINOR: stats: report server's user-configured weight next to effective weight
- CI: travis-ci: switch to Ubuntu 20.04
- CONTRIB: release-estimator: Add release estimating tool
- BUG/MEDIUM: queue: fix unsafe proxy pointer when counting nbpend
- BUG/MINOR: extcheck: add missing checks on extchk_setenv()
Issue #910 reports that we fail to check a few extchk_setenv() calls in
the child process. These are mostly harmless, but instead of counting on
the external check script to fail in a dirty way, it's better to fail
cleanly when detecting the failure.
This could probably be backported to all stable branches.
As reported by Coverity in issue #917, commit 96bca33 ("OPTIM: queue:
decrement the nbpend and totpend counters outside of the lock")
introduced a bug when moving the increments outside of the loop,
because we can't always rely on the pendconn "p" here as it may
be null. We can retrieve the proxy pointer directly from s->proxy
instead. The same is true for pendconn_redistribute(), though the
last "p" pointer there was still valid. This patch fixes both.
No backport is needed, this was introduced just before 2.3-dev8.
This tool monitors the HAProxy stable branches and calculates a proposed
release date for the next minor release based on the bug fixes that are in
the queue.
Print only:
  ./release-estimator.py --print
Send email:
  ./release-estimator.py --send-mail --from-email from@domain.local --to-email to@domain.local
See contrib/release-estimator/README.md for details.
The "weight" column on the stats page is somewhat confusing when using
slowstart because it reports the effective weight, without being really
explicit about it. In some situations the user-configured weight is more
relevant (especially with long slowstarts where it's important to know
if the configured weight is correct).
This adds a new uweight stat which reports a server's user-configured
weight, and in a backend it receives the sum of all servers' uweights.
In addition it adds the mention of "effective" in a few descriptions
for the "weight" column (help and doc).
As a result, the list of servers in a backend is now always scanned
when dumping the stats. But this is not a problem given that these
servers are already scanned anyway and for way heavier processing.
By mistake I added the "up" and "no-maint" output modifiers to the "show info"
block instead of the "show stat" one in the two previous commits 65141ffc4
("MINOR: stats: support the "up" output modifier for "show stat"") and
3e3203670 ("MINOR: stats: also support a "no-maint" show stat modifier").
No backport is needed.
This reg-test tests the "set ssl cert" command the same way the
set_ssl_cert.vtc does, but with separate key/crt files and with the
ssl-load-extra-del-ext.
It introduces new .key/.crt files that contain the same pair as the
existing .pem.
In order to be compatible with the "set ssl cert" command of the CLI,
this patch restricts the ssl-load-extra-del-ext to files with a ".crt"
extension in the configuration.
Related to issue #785.
Should be backported where 8e8581e ("MINOR: ssl: 'ssl-load-extra-del-ext'
removes the certificate extension") was backported.
When dumping the stats page (or the CSV output), when many states are
mixed, it's hard to figure the number of up servers. But when showing
only the "up" servers or hiding the "maint" servers, there's no way to
know how many servers are configured, which is problematic when trying
to update server-templates.
What this patch does, for dumps in "up" or "no-maint" modes, is to add,
after the backend's "UP" or "DOWN" state, a "(%d/%d)" indicating the
number of servers seen as UP out of the total number of servers in the
backend. As such, seeing "UP (33/39)" immediately tells that 6 servers
are not listed when using "up", or lets the client figure out how many
servers are left once the non-maintenance ones are deducted. It's not
done on default dumps so as not to disturb existing tools, which already
have all the information they need in the dump.
"no-maint" is a bit similar to "up" except that it will only hide
servers that are in maintenance (or disabled in the configuration), and
not those that are enabled but failed a check. One benefit here is to
significantly reduce the output of the "show stat" command when using
large server-templates containing entries that are not yet provisioned.
Note that the prometheus exporter also has such an option which does
exactly the same thing.
We already had it on the HTTP interface but it was not accessible on the
CLI. It can be very convenient to hide servers which are down, do not
resolve, or are in maintenance.
This reverts commit b7ba1d9011. Actually
this test had already been removed in the past by commit fac0f645d
("BUG/MEDIUM: queue: make pendconn_cond_unlink() really thread-safe"),
but the condition to reproduce the bug mentioned there was not clear.
Now after analysis and a certain dose of code cleanup, things start to
appear more obvious. What happens is that if we check the presence of
the node in the tree without taking the lock, we can see the NULL at
the instant the node is being unlinked by another thread in
pendconn_process_next_strm() as part of __pendconn_unlink_prx() or
__pendconn_unlink_srv(). Till now there is no issue except that the
pendconn is not removed from the queue during this operation and that
the task is scheduled to be woken up by pendconn_process_next_strm()
with the stream being added to the list of the server's active
connections by __stream_add_srv_conn(). The first thread finishes
faster and gets back to stream_free() faster than the second one
sets the srv_conn on the stream, so stream_free() skips the s->srv_conn
test and doesn't try to dequeue the freshly queued entry. At the
very least a barrier would be needed there but we can't afford to
free the stream while it's being queued. So there's no other solution
than making sure that either __pendconn_unlink_prx() or
pendconn_cond_unlink() get the entry but never both, which is why the
lock is required around the test. A possible solution would be to set
p->target before unlinking the entry and using it to complete the test.
This would leave no dead period where the pendconn is not seen as
attached.
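A minimal, self-contained sketch of the resulting rule (a plain pthread mutex
and a doubly linked node stand in for HAProxy's locks and ebtree; names are
illustrative): the "is it still queued?" test and the unlink must happen under
the same lock, so only one of the two unlink paths can ever win.

    #include <pthread.h>
    #include <stddef.h>

    struct qnode { struct qnode *prev, *next; };
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    void cond_unlink(struct qnode *n)
    {
        pthread_mutex_lock(&queue_lock);
        if (n->next) {                  /* still linked: tested under the lock */
            n->prev->next = n->next;
            n->next->prev = n->prev;
            n->next = n->prev = NULL;
        }
        pthread_mutex_unlock(&queue_lock);
    }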
It is possible, yet extremely difficult, to reproduce this bug, which
was first noticed in bug #880. Running 100 servers with maxconn 1 and
maxqueue 1 on leastconn and a connect timeout of 30ms under 16 threads
with DEBUG_UAF, with a traffic making the backend's queue oscillate
around zero (typically using 250 connections with a local httpterm
server) may rarely manage to trigger a use-after-free.
No backport is needed.
Leastconn has the nice property of being able to sort servers by their
current usage. It's really a shame to force all requests into the backend
queue when the algo would be able to also consider their current queue.
In order not to change existing behavior but extend it, this patch allows
leastconn to elect servers which are already full if they have an explicitly
configured maxqueue setting above zero and their queue hasn't reached that
threshold. This will significantly reduce the pressure in the backend queue
when queuing a lot with lots of servers.
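A hedged sketch of the election rule described above (simplified types; the
exact conditions in the real fwlc code may differ slightly):

    struct srv_state {
        unsigned int cur_conns;   /* connections currently being served */
        unsigned int maxconn;     /* configured connection limit */
        unsigned int queued;      /* requests waiting in this server's queue */
        unsigned int maxqueue;    /* configured queue limit, 0 if not set */
    };

    /* A server that is not full is always electable; a full one is only
     * electable when it has an explicit maxqueue and still has room in
     * its own queue.
     */
    int srv_is_electable(const struct srv_state *s)
    {
        if (s->cur_conns < s->maxconn)
            return 1;
        return s->maxqueue > 0 && s->queued < s->maxqueue;
    }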
A test on 8 threads with 100 servers configured with maxconn 1 jumped
from 165krps to 330krps with maxqueue 15 with this patch.
This partially undoes commit 82cd5c13a ("OPTIM: backend: skip LB when we
know the backend is full") but allows to scale much better even by setting
a single-digit maxqueue value. Some better heuristics could be used to
maintain the behavior of the bypass in the patch above, consisting in
keeping it if it's known that there is no server with a configured
maxqueue in the farm (or in the backend).
When servers are queued into the leastconn tree, it's important to also
consider their queue length. There could be some servers with lots of
queued requests that we don't want to hammer with extra connections. In
order not to add extra stress to the LB algorithm, we don't update the
value when adding to the queue, only when updating the connection count
(i.e. picking from the queue or releasing a connection). This will be
sufficient to significantly improve the fairness in such situations.
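A rough sketch of the idea (the exact key formula used by fwlc is an assumption
here, not a quote of the code):

    struct fwlc_srv_sketch {
        unsigned int served;   /* active connections */
        unsigned int queued;   /* requests queued on this server */
    };

    /* The sort key now reflects queued requests too; it is recomputed
     * only when the connection count changes, not on every enqueue.
     */
    unsigned int fwlc_key_sketch(const struct fwlc_srv_sketch *s)
    {
        return s->served + s->queued;
    }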
We don't need to do that inside the lock. However since the operation
used to be done in deep functions, we have to make it resurface closer
to visible parts. It remains reasonably self-contained in queue.c so
that's not that big of a deal. Some places (redistribute) could benefit
from a single operation for all counts at once. Others like
pendconn_process_next_strm() are still called with both locks held but
now it will be possible to change this.
Instead of incrementing, decrementing them and updating their max under
the lock, make them atomic and keep them out of the lock as much as
possible. For __pendconn_unlink_* it would be wise to move these
counters outside of the function, into the callers, so that a single
atomic op can be done per counter even for groups of operations.
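A generic illustration with C11 atomics (HAProxy uses its own atomic macros;
the names below are placeholders):

    #include <stdatomic.h>

    static _Atomic unsigned int nbpend_sketch;
    static _Atomic unsigned int totpend_sketch;

    /* Pure increments/decrements no longer need the queue lock. */
    static inline void pend_inc(void)
    {
        atomic_fetch_add(&nbpend_sketch, 1);
        atomic_fetch_add(&totpend_sketch, 1);
    }

    static inline void pend_dec(void)
    {
        atomic_fetch_sub(&nbpend_sketch, 1);
        atomic_fetch_sub(&totpend_sketch, 1);
    }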
Similarly to previous changes, we know if we're dealing with a server
or proxy lock so let's directly lock at the finest possible places
there. It's worth noting that a part of the operation consisting in
an increment and update of a max could be done outside of the lock
using atomic ops and a CAS.
The function is called with the lock held and does too many tests for
things that are already known from its callers. Let's split it in two
so that its callers call either the per-server or per-proxy function
depending on where the element is (since they had to determine it
prior to taking the lock).
On connection error processing, we can see massive storms of calls to
pendconn_cond_unlink() to release a possible place in the queue. For
example, in issue #908, on average half of the threads are caught in
this function via back_try_conn_req() consecutive to a synchronous
error. However we wait until grabbing the lock to know if the pendconn
is effectively in a queue, which is expensive for many cases. We know
the transition may only happen from in-queue to out-of-queue so it's safe
to first run a preliminary check to see if it's worth going further. This
will allow us to avoid the cost of locking for most requests. This should
not change anything for those completing correctly as they're already
run through pendconn_free() which doesn't call pendconn_cond_unlink()
unless deemed necessary.
No need to use an exclusive lock on the proxy anymore when reading its
setting, a read lock is enough. A few other places continue to use a
write-lock when modifying simple flags only in order to let this
function see a consistent value all along. This might be changed in
the future using barriers and local copies.
This is an anticipation of finer grained locking for the queues. For now
all lock places take a write lock so that there is no difference at all
with previous code.
In addition to the previous simplification, most locks don't use the
seek or read lock (e.g. spinlocks etc) so let's split the dump into
distinct operations (write/seek/read) and only report those which
were used. Now the output size is roughly divided by 5 compared
to previous ones.
The lock stats are very verbose and more than half of them are used in
a typical test, making it hard to spot the sought values. Let's simply
report "not used" for those which have not been called at all.
In h2_send(), if we are in a state where we know it is no longer possible to
send data, we must exit the sending loop to avoid any possibility of looping
forever. It may happen if the mbuf ring is released while the H2_CF_MUX_MFULL
flag is still set. Here is a possible scenario to trigger the bug :
1) The mbuf ring is full because we are unable to send data. The
H2_CF_MUX_MFULL flag is set on the H2 connection.
2) At this stage, the task timeout expires because the H2 connection is
blocked. We enter the h2_timeout_task() function. Because the mbuf ring is
full, we cannot send the GOAWAY frame. Thus the H2_CF_GOAWAY_FAILED flag is
set. The H2 connection is not released yet because there is still a stream
attached. Here we leave the h2_timeout_task() function.
3) A bit later, the H2 connection is woken up. In h2_process(), nothing is
performed by the first attempt to send data, in h2_send(). Then, because
the H2_CF_GOAWAY_FAILED flag is set, the mbuf ring is released. But the
H2_CF_MUX_MFULL flag is still there. At this step a second attempt to send
data is performed.
4) In h2_send(), we try to send data in a loop. To exit this loop, the
done variable must be set to 1. Because the H2_CF_MUX_MFULL flag is set, we
don't call h2_process_mux() and done is not updated. Because the mbuf ring
is now empty, nothing is sent and the H2_CF_MUX_MFULL flag is never
removed. Now, we loop forever... waiting for the watchdog.
To fix the bug, we now exit the loop if one of these conditions is true:
- The H2_CF_GOAWAY_FAILED flag is set on the H2 connection
- The CO_FL_SOCK_WR_SH flag is set on the underlying connection
- The H2 connection is in the H2_CS_ERROR2 state
This patch should fix issue #912 and most probably #875. It must be
backported as far as 1.8.
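A hedged, self-contained sketch of that exit test (the flag values and
structures below are stand-ins for HAProxy's real ones; only the flag and
state names come from the description above):

    #define H2_CF_GOAWAY_FAILED  0x00000001u   /* placeholder value */
    #define CO_FL_SOCK_WR_SH     0x00000002u   /* placeholder value */
    enum h2_cs { H2_CS_OPEN, H2_CS_ERROR2 };

    struct h2c_sketch  { unsigned int flags; enum h2_cs st0; };
    struct conn_sketch { unsigned int flags; };

    /* Returns non-zero when the send loop must stop, even if the MFULL
     * flag is still set and nothing could be sent.
     */
    int h2_send_must_stop(const struct h2c_sketch *h2c,
                          const struct conn_sketch *conn)
    {
        return (h2c->flags & H2_CF_GOAWAY_FAILED) ||
               (conn->flags & CO_FL_SOCK_WR_SH) ||
               h2c->st0 == H2_CS_ERROR2;
    }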
When an internal response is returned to a client, the message payload must be
skipped if it is a reply to a HEAD request. The payload is removed from the HTX
message just before the message forwarding.
This bug has been around for a long time. It was already there in the
pre-HTX versions. In legacy HTTP mode, internal errors are not parsed,
so this bug cannot be easily fixed there. Thus, this patch should only
be backported to all HTX versions, as far as 2.0. However, the code has
significantly changed in 2.2. Thus in 2.1 and 2.0, the patch must be
entirely reworked.
Test that the if-none-match header is properly taken into account and that
when the conditions are fulfilled, a "304 Not Modified" response can be
sent to the client.
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
Partial support of conditional HTTP requests. This commit adds the
support of the 'If-None-Match' header (see RFC 7232#3.2).
When a client specifies a list of ETags through one or more
'If-None-Match' headers, they are all compared to the one that might have
been stored in the corresponding http cache entry until one of them
matches.
If a match happens, a specific "304 Not Modified" response is
sent instead of the cached data. This response has all the stored
headers but no other data (see RFC 7232#4.1). Otherwise, the whole cached data
is sent.
Although unlikely in a GET/HEAD request, the "If-None-Match: *" syntax is
valid and also receives a "304 Not Modified" response (RFC 7234#4.3.2).
This resolves a part of GitHub issue #821.
When sent by a server for a given resource, the ETag header is
stored in the corresponding cache entry (as any other header). So in
order to perform future ETag comparisons (for subsequent conditional
HTTP requests), we keep the length of the ETag and its offset
relative to the start of the cache_entry.
If no ETag header exists, the length and offset are zero.
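A hedged sketch of that bookkeeping (field names are illustrative, not the
exact cache_entry layout):

    struct cache_entry_sketch {
        unsigned int etag_offset;  /* offset of the ETag value from the start
                                    * of the entry, 0 when no ETag was stored */
        unsigned int etag_length;  /* length of the ETag value, 0 when absent */
        /* stored response headers and body follow */
    };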
Add a function that compares two etags that might be of different types.
If any of them is weak, the 'W/' prefix is discarded and a strict string
comparison is performed.
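A standalone sketch of that comparison rule on plain C strings (the real
function works on HAProxy's ist blocks and etag types, so this is only an
approximation):

    #include <string.h>

    /* If either tag is weak, drop its "W/" prefix, then require an
     * exact match of the remaining quoted strings.
     */
    int etag_match(const char *a, const char *b)
    {
        if (strncmp(a, "W/", 2) == 0)
            a += 2;
        if (strncmp(b, "W/", 2) == 0)
            b += 2;
        return strcmp(a, b) == 0;
    }

Under this rule, W/"abc" and "abc" compare equal, which is the weak comparison
wanted for If-None-Match.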
Co-authored-by: Tim Duesterhus <tim@bastelstu.be>
If the slowstart value in a state file implies the latest state change
is within the slowstart period, we end up calling srv_update_status()
to reschedule the server's state change but its task is not yet
allocated and remains null, causing a crash on startup.
Make sure srv_update_status() supports being called with partially
initialized servers which do not yet have a task. If the task has to
be scheduled, it will necessarily happen after initialization since
it will result from a state change.
This should be backported wherever server-state is present.
A few tools in contrib/ such as halog, flags, poll and tcploop are
occasionally useful at least to developers, and some of them such as
halog or flags can occasionally break due to some changes in the include
files. As reported in issue #907, their inability to inherit the global
build options also causes some warnings related to some specificities
of the main include files. Let's just add entries in the main makefile
to build them.
In commit 5cd4bbd7a ("BUG/MAJOR: threads/queue: Fix thread-safety issues
on the queues management") the counter of transferred connections was
accidentally lost, so that when a server goes down with connections in its
queue, it will always be reported that 0 connections were transferred.
This should be backported as far as 1.8 since the patch above was
backported there.
In issue #785, users are reporting that it's not convenient to load a
".crt.key" when the configuration contains a ".crt".
This option allows removing the extension of the certificate before
trying to load any extra SSL file (.key, .ocsp, .sctl, .issuer etc.)
The patch slightly changes the way ssl_sock_load_files_into_ckch()
looks for the file.
safer to close the handle before the object is put back in the global pool.
this was introduced by commit 9378bbe0be ("MEDIUM: listener:
use protocol->accept_conn() to accept a connection")
this should fix github issue #902
no backport needed.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
As previously discussed, nbproc usage is bad, deprecated, and scheduled
for removal in 2.5.
If "nbproc" is found with more than one process while nbthread is not
set, a warning will be emitted encouraging the user to remove it or to
migrate to nbthread instead. This makes sure the user has an opportunity to
both see the message and silence it.
When a connection is released, depending on its state, it may be detached from
the session and it may be removed from the server lists. The first case may
happen for private or unsharable active connections. The second one should only
be performed for idle or available connections. We never try to remove a
connection from the server list if it is attached to a session. But it is also
important to never try to remove a private connection from the server lists, even
if it is not attached to a session. Otherwise, the curr_used_conn server counter
is decremented once too often.
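A hedged sketch of the resulting rule (the flag and fields are simplified
stand-ins for HAProxy's connection structure):

    #include <stddef.h>

    #define CO_FL_PRIVATE_SKETCH 0x1u

    struct conn_state {
        unsigned int flags;
        void *session;           /* non-NULL when attached to a session */
    };

    /* A connection may only be taken out of the server lists when it is
     * neither attached to a session nor private; otherwise the server's
     * curr_used_conn counter would be decremented once too often.
     */
    int may_remove_from_srv_lists(const struct conn_state *c)
    {
        return c->session == NULL && !(c->flags & CO_FL_PRIVATE_SKETCH);
    }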
This bug was introduced by the commit 04a24c5ea ("MINOR: connection: don't check
priv flag on free"). It is related to issue #881. It only affects 2.3,
no backport is needed.
This counter is only updated and never used, and in addition it's done
without any atomicity so it's very unlikely to be correct on multi-CPU
systems! Let's just remove it since it's not used.