The rcsid variable is static and unused, causing a build warning. Let's
just add __attribute__((unused)) to silence the warning.
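A minimal sketch of the resulting declaration (the string content here is
illustrative):

    /* keep the identification string, but tell the compiler it is
     * intentionally unused so that -Wunused does not warn about it
     */
    static const char rcsid[] __attribute__((unused)) =
            "example identification string";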
This may be backported to 2.0.
A corner case was opened in the listener_accept() code by commit 3f0d02bbc2
("MAJOR: listener: do not hold the listener lock in listener_accept()"). The
issue arises when one listener (or a group of them) manages to eat all of
the proxy's or the process's maxconn, and another listener tries to accept
a new socket. The atomic increment then detects the excess connection
count and the accept is immediately aborted, but without pausing the
listener, so the call is immediately performed again. This doesn't happen
when the test is run on a single listener, because that listener gets
limited when crossing the limit. But with 2 or more listeners, we don't
have this luxury.
The solution consists in limiting the listener as soon as we have to
decline accepting an incoming connection. This means that the listener
will not be marked full yet if it gets the exact connection count but
this is not a problem in practice since all other listeners will only be
marked full after their first attempt. Thus from now on, a listener is
only full once it has already failed taking an incoming connection.
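A rough, self-contained sketch of the resulting accept path (the counter
and helper names are stand-ins, not the actual haproxy ones):

    #include <stdatomic.h>

    /* illustrative stand-ins for the proxy's connection accounting */
    static atomic_int proxy_feconn;
    static int proxy_maxconn = 1000;

    static void limit_listener_sketch(void)
    {
            /* stand-in for marking the listener limited so that it is
             * not immediately polled again for the same accept
             */
    }

    /* sketch of the fixed path: limit first, then decline */
    static int try_accept_sketch(void)
    {
            if (atomic_fetch_add(&proxy_feconn, 1) + 1 > proxy_maxconn) {
                    atomic_fetch_sub(&proxy_feconn, 1);
                    limit_listener_sketch(); /* pause before giving up */
                    return -1;               /* decline the connection */
            }
            return 0;                        /* proceed with accept() */
    }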
This bug was definitely responsible for the unreproducible occasional
reports of high CPU usage showing epoll_wait() returning immediately
without accepting an incoming connection, like in bug #129.
This fix must be backported to 1.9 and 1.8.
The "shutdown sessions" admin-mode command used to open-code the list
traversal while there's already a function for this: srv_shutdown_streams().
Better use it.
The "shutdown session server" command used to open-code the list traversal
while there's already a function for this: srv_shutdown_streams(). Better
use it.
Since previous commit a132e5efa9 ("BUG/MEDIUM: Make sure we leave the
session list in session_free().") it's pointless to delete the conn
element inside "if" blocks given that the second test is always true
as well. Let's simplify this with a single LIST_DEL_INIT() before the
test.
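Schematically, with simplified names (a sketch of the pattern, not the
verbatim code):

    /* before: the element was deleted separately in each branch */
    if (conn->mux) {
            LIST_DEL(&conn->session_list);
            /* mux-specific cleanup */
    } else {
            LIST_DEL(&conn->session_list);
            /* mux-less cleanup */
    }

    /* after: a single idempotent deletion before the test */
    LIST_DEL_INIT(&conn->session_list);
    if (conn->mux) {
            /* mux-specific cleanup */
    } else {
            /* mux-less cleanup */
    }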
In session_free(), if we're about to destroy a connection that had no mux,
make sure we leave the session_list before calling conn_free(). Otherwise,
conn_free() would call session_unown_conn(), which would potentially free
the associated srv_list, but session_free() also frees it, so that would
lead to a double free, and random memory corruption.
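As a rough sketch of the ordering fix (simplified names, not the verbatim
code):

    /* detach the connection from the session list first, so that
     * conn_free() will not call session_unown_conn() and free the
     * srv_list that session_free() also frees afterwards
     */
    if (!conn->mux) {
            LIST_DEL(&conn->session_list);
            LIST_INIT(&conn->session_list);
            conn_free(conn);
    }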
This should be backported to 1.9 and 2.0.
There is a very short race in the queues which happens in the following
situation:
- stream A on thread 1 is being processed by a server
- stream B on thread 2 waits in the backend queue for a server
- stream B on thread 2 is fed up with waiting and expires, calls
  stream_free() which calls pendconn_free(), which sees the
  stream attached
- at the exact same instant, stream A finishes on thread 1, sees
  one stream is waiting (B), detaches it and wakes it up
- stream B continues pendconn_free() and calls pendconn_unlink()
- pendconn_unlink() now detaches the node again and performs a
  second deletion (harmless since idempotent), and decrements
  srv/px->nbpend again
=> the number of connections on the proxy or server may reach -1 if/when
this race occurs.
It is extremely tight as it can only occur during the test on p->leaf_p,
though it has been witnessed at least once. The solution consists in
testing leaf_p again once the lock is held, to make sure the element was
not removed in the meantime.
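A minimal self-contained sketch of this double-checked pattern (the type
and helpers are stand-ins, not the haproxy ones):

    #include <pthread.h>

    struct pendconn {
            void *leaf_p;   /* non-NULL while queued in the tree */
            int  *nbpend;   /* pending counter on the server/proxy */
    };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    static void tree_delete(struct pendconn *p)
    {
            p->leaf_p = NULL;   /* stand-in for the ebtree removal */
    }

    void pendconn_unlink_sketch(struct pendconn *p)
    {
            if (!p->leaf_p)     /* cheap unlocked check */
                    return;
            pthread_mutex_lock(&queue_lock);
            if (p->leaf_p) {    /* re-check under the lock: the other
                                 * thread may have detached us meanwhile
                                 */
                    tree_delete(p);
                    (*p->nbpend)--;  /* decremented exactly once */
            }
            pthread_mutex_unlock(&queue_lock);
    }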
This should be backported to 2.0 and 1.9, probably even 1.8.
In tasklet_remove_from_tasket_list(), we can be called for a tasklet that is
either in the private task list, or in the shared tasklet list. Take that into
account and always use MT_LIST_DEL() to remove it, otherwise if we're in the
shared list and another thread attempts to add a tasklet in it, bad things
will happen.
__tasklet_remove_from_tasklet_list() is left unchanged, it's only supposed
to be used by process_runnable_tasks() to remove tasks/tasklets from the
private task list.
This should not be backported.
This should fix github issue #357.
We need to call vars_init() when the list is empty, otherwise we
can't use variables in the response scope. This regression was
introduced by cda7f3f5 (MINOR: stream: don't prune variables if
the list is empty).
The following config reproduces the issue:
defaults
    mode http

frontend in
    bind *:11223
    http-request set-var(req.foo) str("foo") if { path /bar }
    http-request set-header bar %[var(req.foo)] if { var(req.foo) -m found }
    http-response set-var(res.bar) str("bar")
    http-response set-header foo %[var(res.bar)] if { var(res.bar) -m found }
    use_backend out

backend out
    server s1 127.0.0.1:11224

listen back
    bind *:11224
    http-request deny deny_status 200
> GET /ba HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
> GET /bar HTTP/1.1
> Host: localhost:11223
> User-Agent: curl/7.66.0
> Accept: */*
>
< HTTP/1.0 200 OK
< Cache-Control: no-cache
< Content-Type: text/html
< foo: bar
This must be backported as far as 1.9.
Documentation states that the interval between 2 DNS resolutions is
driven by the "timeout resolve <time>" directive.
From a code point of view, this was applied unless the latest status of
the resolution was VALID. In such a case, "hold valid" was enforced.
This is a bug, because "hold" timers are not there to drive how often we
want to trigger a DNS resolution, but rather how long we want to keep the
information if the status of the resolution itself has changed.
This avoids flapping and prevents shutting down an entire backend when a
DNS server is not answering.
This issue was reported by hamshiva in github issue #345.
Backport status: 1.8
As reported by David Birdsong on the ML, the HTTP action do-resolve does
not use the DNS cache.
Actually, the action is "registered" to the resolution for said name to
be resolved and waits until another requester triggers it. Once the
resolution is finished, the action is updated with the result.
To trigger this, you must have a server with runtime DNS resolution
enabled and run a do-resolve action with the same FQDN, with both using
the same resolvers section.
This patch fixes this behavior by ensuring the resolution associated with
the action has a valid answer which is not considered expired. If those
conditions are met, then we can use it (it's the "cache").
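As an illustration of the affected setup (names and addresses are made
up), the action and the server share the same resolvers section, so the
action can now reuse the valid cached answer:

    resolvers mydns
        nameserver dns1 192.168.0.53:53

    backend app
        http-request do-resolve(txn.dstip,mydns) hdr(Host),lower
        server s1 example.com:80 resolvers mydns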
Backport status: 2.0
Since the legacy HTTP mode was removed, the stream is always released at the end
of each HTTP transaction and a new one is created to handle the next request for
keep-alive connections. So the HTTP transaction is no longer reset and the
function http_reset_txn() can be removed.
All TCP and HTTP captures are stored in 2 arrays, one for the request and
another for the response. In HAProxy 1.5, these arrays are part of the HTTP
transaction and thus are released during its cleanup. Because in this version,
the transaction is part of the stream (in 1.5, streams are still called
sessions), the cleanup is always performed, for HTTP and TCP streams.
In HAProxy 1.6, the HTTP transaction was moved out from the stream and is now
dynamically allocated only when required (because of an HTTP proxy or an HTTP
sample fetch). In addition, still in 1.6, the captures arrays were moved from
the HTTP transaction to the stream. This way, it is still possible to capture
elements from TCP rules for a full TCP stream. Unfortunately, the release is
still exclusively performed during the HTTP transaction cleanup. Thus, for a TCP
stream where the HTTP transaction is not required, the TCP captures, if any, are
never released.
Now, all captures are released when the stream is freed. This fixes the memory
leak for TCP streams. For streams with an HTTP transaction, the captures are now
released when the transaction is reset and not systematically during its
cleanup.
This patch must be backported as far as 1.6.
Runtime traces are now supported for streams, but only if compiled with
debug. process_stream() is covered, as well as the TCP/HTTP analyzers and
filters.
In traces, the first argument is always a stream. So it is easy to get the info
about the channels and the stream-interfaces. The second argument, when defined,
is always an HTTP transaction, and the third one is an HTTP message. The trace
message is adapted to report HTTP info when possible.
The DBG_TRACE_*() macros can be used instead of the existing trace macros
to emit trace messages in debug mode only, i.e., when HAProxy is compiled
with DEBUG_FULL or DEBUG_DEV. Otherwise, these macros do nothing. So it is
possible to add traces for development purposes without impacting the
performance of production instances.
Names of these macros may conflict with the macros of the runtime tracing
mechanism, so the prefix "FLT_" has been added to avoid any ambiguity.
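A sketch of how such compile-time gated macros typically look
(illustrative, not the exact haproxy definitions):

    #include <stdio.h>

    /* only emit the trace when a debug build is requested */
    #if defined(DEBUG_FULL) || defined(DEBUG_DEV)
    #define DBG_TRACE(fmt, ...) \
            fprintf(stderr, "[dbg] " fmt "\n", ##__VA_ARGS__)
    #else
    #define DBG_TRACE(fmt, ...) do { } while (0)  /* compiled out */
    #endif

    int main(void)
    {
            DBG_TRACE("entering with fd=%d", 4); /* no-op in prod builds */
            return 0;
    }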
Despite the addition of the mux layer, no change was made to how TCP
splicing is enabled in process_stream(). We still check if the transport
layer on both sides supports splicing, but we don't check whether the
muxes support it. So it is possible to start splicing data with an
unencrypted H2 connection on one side and an H1 connection on the other.
This leads to a freeze of the stream until a client or server timeout is
reached.
This patch fixes a part of issue #356. It must be backported as far as 1.8.
The mux H1 announces the support of the TCP splicing. It only works for payload
data. It works for messages with an explicit content-length or for tunnelled
data. For chunked messages, the mux H1 should normally not try to xfer more than
the current chunk through the pipe. Unfortunately, this works on the read side
but the send is completely bogus. During the output formatting, the announced
size of chunks does not handle the size that will be spliced. Because there is
no formatting when spliced data are sent, the produced message is malformed and
rejected by the peer.
For now, because it is quick and simple, the TCP splicing is disabled for
chunked messages. I will try to enable it again in a proper way. I don't know
for now if it will be backportable to previous versions. This will depend on the
amount of changes required to handle it.
This patch fixes a part of the issue #356. It must be backported to 2.0 and 1.9.
This patch is easy to review: let's call the parse_logsrv() function to
parse the "log" directive, as is already done for the proxy sections.
This enables us to log incoming TCP connections for the listeners of
"peers" sections.
Also update the documentation for the "peers" section.
These keywords received a second argument with commit ae6f125 ("MINOR:
sample: add us/ms support to date/http_date"). Each argument is optional
on its own; it's not either both or none.
If the SSL_CTX of a previous instance (ckch_inst) was used as a
default_ctx, replace the default_ctx of the bind_conf by the first
SSL_CTX inserted in the SNI tree.
Use the RWLOCK of the sni tree to handle the change of the default_ctx.
When trying to update a certificate <file>.{rsa,ecdsa,dsa} that does not
exist, while <file> was used as a regular file in the configuration, the
error was ambiguous. Correct it so we can return a 'certificate not found'
error.
Commit bc6ca7c ("MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'")
broke the ability to commit a unique certificate which does not use a
bundle extension .{rsa,ecdsa,dsa}.
When doing a 'set ssl cert' with a certificate which does not exist in
the configuration, the appctx->ctx.ssl.old_ckchs->path was duplicated
while appctx->ctx.ssl.old_ckchs was NULL, resulting in a NULL dereference.
Move the code so the 'not referenced' error is raised before this.
Released version 2.1-dev4 with the following main changes :
- BUG/MINOR: cli: don't call the kw->io_release if kw->parse failed
- BUG/MINOR: mux-h2: Don't pretend mux buffers aren't full anymore if nothing sent
- BUG/MAJOR: stream-int: Don't receive data from mux until SI_ST_EST is reached
- DOC: remove obsolete section about header manipulation
- BUG/MINOR: ssl/cli: cleanup on cli_parse_set_cert error
- MINOR: ssl/cli: rework the 'set ssl cert' IO handler
- BUILD: CI: comment out cygwin build, upgrade various ssl libraries
- DOC: Improve documentation of http-re(quest|sponse) replace-(header|value|uri)
- BUILD/MINOR: tools: shut up the format truncation warning in get_gmt_offset()
- BUG/MINOR: spoe: fix off-by-one length in UUID format string
- BUILD/MINOR: ssl: shut up a build warning about format truncation
- BUILD: do not disable -Wformat-truncation anymore
- MINOR: chunk: add chunk_istcat() to concatenate an ist after a chunk
- Revert "MINOR: istbuf: add b_fromist() to make a buffer from an ist"
- MINOR: mux: Add a new method to get informations about a mux.
- BUG/MEDIUM: stream_interface: Only use SI_ST_RDY when the mux is ready.
- BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready.
- MINOR: doc: fix busy-polling performance reference
- MINOR: config: allow no set-dumpable config option
- MINOR: init: always fail when setrlimit fails
- MINOR: ssl/cli: rework 'set ssl cert' as 'set/commit'
- CLEANUP: ssl/cli: remove leftovers of bundle/certs (it < 2)
- REGTEST: vtest can now enable mcli with its own flag
- BUG/MINOR: config: Update cookie domain warn to RFC6265
- MINOR: sample: add us/ms support to date/http_date
- BUG/MINOR: ssl/cli: check trash allocation in cli_io_handler_commit_cert()
- BUG/MEDIUM: mux-h2: report no available stream on a connection having errors
- BUG/MEDIUM: mux-h2: immediately remove a failed connection from the idle list
- BUG/MEDIUM: mux-h2: immediately report connection errors on streams
- BUG/MINOR: stats: properly check the path and not the whole URI
- BUG/MINOR: ssl: segfault in cli_parse_set_cert with old openssl/boringssl
- BUG/MINOR: ssl: ckch->chain must be initialized
- BUG/MINOR: ssl: double free on error for ckch->{key,cert}
- MINOR: ssl: BoringSSL ocsp_response does not need issuer
- BUG/MEDIUM: ssl/cli: fix dot research in cli_parse_set_cert
- MINOR: backend: Add srv_name sample fetche
- DOC: Add GitHub issue config.yml
The sample fetch can get srv_name without iterating over
`core.backends["bk"].servers`.
Then we can get the Server class quickly via
`core.backends[txn.f:be_name()].servers[txn.f:srv_name()]`.
See GitHub issue #342.
Fix 541a534 ("BUG/MINOR: ssl/cli: fix build of SCTL and OCSP") was not
enough.
[wla: It will probably be better later to put the #ifdef in the
functions so they can return an error if they are not implemented]
Since we now have full URIs with H2, stats may fail to work over H2, so
we must carefully check only the path there when the stats URI was passed
as a path only. This way it remains possible to intercept
proxy requests to report stats on explicit domains but it continues
to work as expected on origin requests.
No backport needed.
In case a stream tries to send on a connection error, we must report the
error so that the stream interface keeps the data available and may safely
retry on another connection. Till now this would happen only before the
connection was established, not in case of a failed handshake or an early
GOAWAY for example.
This should be backported to 2.0 and 1.9.
If a connection faces an error or a timeout, it must be removed from its
idle list ASAP. We certainly don't want to risk sending new streams on
it.
This should be backported to 2.0 (replacing MT_LIST_DEL with LIST_DEL_LOCKED)
and 1.9 (there's no lock there, the idle lists are per-thread and per-server
however a LIST_DEL_INIT will be needed).
If an H2 mux has met an error, we must not report available streams
anymore, or it risks accumulating new streams while not being able to
process them.
This should be backported to 2.0 and 1.9.
It can sometimes be interesting to have a timestamp with a resolution of
less than a second.
This is currently painful to obtain, because the concatenation of date
and date_us leads to a shorter timestamp during the first 100ms of each
second, which is not parseable and needs ugly ACLs in the configuration
to prepend 0s when needed.
To improve this, add an optional <unit> parameter to the date sample to
report an integer with the desired unit.
Also support this unit in the http_date converter to report a date string
with sub-second precision.
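For example, assuming the unit is passed as the sample's second argument
(the header name is made up):

    # emit a millisecond-resolution timestamp (illustrative usage)
    http-request set-header X-Request-Ts %[date(0,ms)]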
The domain option of the cookie keyword allows defining which domain or
domains should use the cookie value of cookie-based server affinity. If
the domain does not start with a dot, the user agent should only use the
cookie on hosts that match the provided domains. If the configured domain
starts with a dot, the user agent can use the cookie with any host ending
with the configured domain.
The haproxy config parser helps the admin by warning about a potentially
buggy config: defining a domain without an embedded dot which does not
start with a dot, which is forbidden by the RFC.
The current condition to issue the warning implements RFC2109. This
change updates the implementation to RFC6265, which allows a domain
without a leading dot.
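For instance, a domain without a leading dot, as now permitted without a
warning (a made-up setup):

    backend app
        cookie SRVID insert indirect nocache domain example.com
        server s1 192.168.1.10:80 cookie s1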
Should be backported to all supported versions. The feature exists at least
since 1.5.
VTest can now enable mworker and mcli with separate flags so let's
update vtc files that need it. This also allows to revert the change
made with 1545a59c ("REGTESTS: make seamless-reload depend on 1.9
and above").
Remove the leftovers of the certificate + bundle updating in 'ssl set
cert' and 'commit ssl cert'.
* Remove the it variable in appctx.ctx.ssl.
* Stop doing everything twice.
* Indent
This patch splits the 'set ssl cert' CLI command into 2 commands.
The previous way of updating the certificate on the CLI was limited with
the bundles. It was only able to apply one of the three parts of the
certificate during an update, which meant that we needed 3 updates to
update a full 3-cert bundle.
It was also not possible to apply several parts of a certificate
atomically, with the ability to roll back on error. (For example applying
a .pem, then a .ocsp, then a .sctl.)
The command 'set ssl cert' will now duplicate the certificate (or
bundle) and update it in a temporary transaction.
The second command 'commit ssl cert' will commit all the changes made
during the transaction for the certificate.
This commit breaks the ability to update a certificate which was used as
a unique file and as a bundle in the HAProxy configuration. This way of
using the certificates wasn't making any sense.
Example:
// For a bundle:
$ echo -e "set ssl cert localhost.pem.rsa <<\n$(cat kikyo.pem.rsa)\n" | socat /tmp/sock1 -
Transaction created for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.dsa <<\n$(cat kikyo.pem.dsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo -e "set ssl cert localhost.pem.ecdsa <<\n$(cat kikyo.pem.ecdsa)\n" | socat /tmp/sock1 -
Transaction updated for certificate localhost.pem!
$ echo "commit ssl cert localhost.pem" | socat /tmp/sock1 -
Committing localhost.pem.
Success!
This patch introduces a strict-limits parameter which enforces the
setrlimit setting instead of a warning. This option can be forcibly
disabled with the "no" keyword.
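Assuming it is a global-section keyword, as its description suggests,
usage would look like:

    global
        strict-limits
        # "no strict-limits" would restore the warning-only behaviour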
The general aim of this patch is to avoid bad surprises in a production
environment where you change the maxconn, for example: a new fd limit is
calculated but cannot be set because of a system setting. In that case
you might want an explicit failure, to be aware of it before seeing your
traffic go down. During a global rollout it is also useful to fail
explicitly, as most progressive rollouts would simply check the general
health of the process.
As discussed, the plan is to make the strict mode the default starting
from v2.3.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In global config parsing, we currently expect to have a possible "no"
keyword (KWN_NO), but we never allow it in config parsing.
An alternative patch could have been to simply remove the code handling a
possible KWN_NO.
Take this opportunity to update the documentation of set-dumpable.
Signed-off-by: William Dauchy <w.dauchy@criteo.com>
In connect_server(), if we're reusing a connection, only use SF_SRV_REUSED
if the connection is fully ready. We may be using a multiplexed connection
created by another stream that is not yet ready, and may fail.
If we set SF_SRV_REUSED, process_stream() will then not wait for the timeout
to expire, and retry to connect immediately.
This should be backported to 1.9 and 2.0.
This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518.
In si_connect(), only switch the stream_interface status to SI_ST_RDY if
we're reusing a connection and if the connection's mux is ready. Otherwise,
maybe we're reusing a connection that is not fully established yet, and may
fail, and setting SI_ST_RDY would mean we would not be able to retry to
connect.
This should be backported to 1.9 and 2.0.
This commit depends on 55234e33708c5a584fb9efea81d71ac47235d518.