GitHub uses github/linguist to determine the programming language used for each
source file to show statistics and to power the search. In cases of unique file
extensions this is easy, but for `.h` files the situation is less clear as they
are used for C, C++, Objective-C and more. In these cases linguist makes use of
heuristics to determine the language.
One of these heuristics for C++ is that the file contains a line beginning with
`try`, only preceded by whitespace indentation. This heuristic matches the long
comment at the bottom of `channel-t.h`, as one sentence includes the word `try`
after a linebreak.
Fix this misdetection by changing the comment to follow the convention that all
lines start with an asterisk.
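
For illustration, the change boils down to the following (a minimal
sketch, not the actual channel-t.h content):

    /*
      Before: the wrapped sentence leaves "try" at the start of a line,
      which matches the heuristic:

          ... if sending fails, release some room in the channel and
          try again later ...
    */

    /*
     * After: every line of the comment starts with an asterisk, so no
     * line can begin with a bare keyword anymore. If sending fails,
     * release some room in the channel and try again later.
     */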
Since commit c7eedf7a5 ("MINOR: queue: reduce the locked area in
pendconn_add()") the stream's pend_pos is set outside the lock, after
the pendconn is queued. While this entry is only manipulated by the
stream itself and there is no bug caused by this right now, it's a
bit dangerous because another thread could decide to look at this
field during dequeuing and could randomly see something else. Also,
in case of crashes, memory inspection wouldn't be as trustworthy.
Let's assign the pendconn before it can be found in the queue.
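
The intended ordering can be illustrated with this self-contained
sketch (simplified types and a plain pthread mutex, not the actual
HAProxy code):

    #include <pthread.h>
    #include <stdlib.h>

    struct pendconn;
    struct stream   { struct pendconn *pend_pos; };
    struct pendconn { struct pendconn *next; struct stream *strm; };

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct pendconn *queue_head;

    /* Assign the pendconn to the stream *before* it becomes reachable
     * through the shared queue, so a thread walking the queue can never
     * observe a stream whose pend_pos is still unset.
     */
    void pendconn_add_sketch(struct stream *strm)
    {
        struct pendconn *p = calloc(1, sizeof(*p));

        if (!p)
            return;
        p->strm = strm;
        strm->pend_pos = p;            /* assigned before publication */
        pthread_mutex_lock(&queue_lock);
        p->next = queue_head;          /* only now visible to others */
        queue_head = p;
        pthread_mutex_unlock(&queue_lock);
    }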
Create a new regtest to test SSL support for dynamic servers.
The first step of the test is to create the ca-file via the CLI. Then a
dynamic server is created with the ssl option using the ca-file. A
client request is then made through it to complete the test.
Activate the 'ssl' keyword for dynamic servers. This is the final step
in implementing the ssl dynamic servers feature. If activated,
ssl_sock_prepare_srv_ctx will be called at the end of the 'add server'
CLI handler.
At the same time, update the management doc to list all ssl keywords
implemented for dynamic servers.
These keywords are deemed safe enough to be enabled on dynamic servers.
Their parsing functions are simple and can be called at runtime.
- allow-0rtt
- alpn
- ciphers
- ciphersuites
- force-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- no-sslv3/tlsv10/tlsv11/tlsv12/tlsv13
- no-ssl-reuse
- no-tls-tickets
- npn
- send-proxy-v2-ssl
- send-proxy-v2-ssl-cn
- sni
- ssl-min-ver
- ssl-max-ver
- tls-tickets
- verify
- verifyhost
'no-ssl-reuse' and 'no-tls-tickets' are enabled to override the default
behavior.
'tls-tickets' is enabled to override a possible 'no-tls-tickets' set via
the global option 'ssl-default-server-options'.
'force' and 'no' variants of tls method options are useful to override a
possible 'ssl-default-server-options'.
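
For illustration only, the runtime flow of the 'add server' handler can
be sketched like this (hypothetical names, not the actual HAProxy
symbols):

    /* A keyword is callable at runtime only if its parser is simple
     * enough to run outside of config parsing; 'add server' finishes by
     * preparing the SSL context when ssl is enabled. */
    struct server_sketch { int use_ssl; };

    struct srv_kw_sketch {
        const char *kw;
        int (*parse)(const char *arg, struct server_sketch *srv);
        int usable_on_dynamic;         /* safe to call at runtime */
    };

    static int ssl_prepare_srv_ctx_sketch(struct server_sketch *srv)
    {
        (void)srv;  /* would build the server's SSL_CTX; may fail */
        return 0;
    }

    static int cli_add_server_sketch(struct server_sketch *srv)
    {
        /* ... keywords with !usable_on_dynamic were rejected above ... */
        if (srv->use_ssl && ssl_prepare_srv_ctx_sketch(srv) < 0)
            return -1;                 /* abort the 'add server' command */
        return 0;
    }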
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_crl is used at runtime for a dynamic server. The crl must
have already been loaded either in the config or through the 'ssl crl'
CLI commands.
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_crt is used at runtime for a dynamic server. The cert must
have already been loaded either in the config or through the 'ssl cert'
CLI commands.
File-access through ssl_store_load_locations_file is deactivated if
srv_parse_ca_file is used at runtime for a dynamic server. The ca-file
must have already been loaded either in the config or through the 'ssl
ca-file' CLI commands.
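
The common pattern behind these three keywords (crl, crt, ca-file) can
be sketched as follows (hypothetical helper names, not the actual
HAProxy code):

    /* At config parsing time the file may be loaded from disk; at
     * runtime ('add server'), only an entry already present in the
     * store is accepted. */
    const void *store_lookup_sketch(const char *path);
    const void *store_load_from_file_sketch(const char *path);

    const void *lookup_or_load_sketch(const char *path, int parsing_config)
    {
        const void *entry = store_lookup_sketch(path);

        if (!entry && parsing_config)
            entry = store_load_from_file_sketch(path); /* disk access */
        return entry;  /* NULL at runtime if not preloaded via the CLI */
    }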
This is in preparation for the support of ssl on dynamic servers. The
'alpn' keyword will be allowed for dynamic servers but not
'check-alpn'.
The alpn parsing is extracted into a new function, parse_alpn. Both
srv_parse_alpn and srv_parse_check_alpn now call it.
The function ssl_sock_load_srv_cert will be used at runtime for dynamic
servers. If the cert is not loaded in the ckch tree, we try to access it
from the file-system.
This access operation is now made optional via a new function argument:
it is allowed at config parsing time but will be disabled for dynamic
servers at runtime.
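
In other words, the function gains a flag along these lines (an
illustrative prototype; the real function takes HAProxy's server and
ckch types):

    int ssl_sock_load_srv_cert_sketch(const char *path, int try_load);

    int parse_from_config_sketch(const char *path)
    {
        /* config parsing: loading from the file-system is allowed */
        return ssl_sock_load_srv_cert_sketch(path, 1);
    }

    int parse_from_cli_sketch(const char *path)
    {
        /* 'add server' at runtime: ckch tree lookup only */
        return ssl_sock_load_srv_cert_sketch(path, 0);
    }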
'set server ssl' uses ssl parameters from default-server. As dynamic
servers do not reuse any default-server parameters, this command makes
no sense for them.
Explicitly call ssl_initialize_random to initialize the random generator
in init() global function. If the initialization fails, the startup is
interrupted.
This commit is in preparation for support of ssl on dynamic servers. To
be able to activate ssl on dynamic servers, it is necessary to ensure
that the random generator is initialized on startup regardless of the
config. It cannot be called at runtime as access to /dev/urandom is
required.
This also has the effect of fixing the previously inconsistent behavior.
Indeed, if a bind or server in the config used ssl, the initialization
function was called, and if it failed, the startup was interrupted.
Otherwise, the ssl initialization code could still have been called
through the ssl server for lua, but this time without blocking the
startup on error. Or it was not called at all if lua was deactivated.
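
The intended startup sequence looks like this (a sketch; it assumes
ssl_initialize_random() returns non-zero on success, and the error
handling is simplified):

    #include <stdio.h>
    #include <stdlib.h>

    int ssl_initialize_random(void);   /* existing helper */

    static void init_sketch(void)
    {
        /* called on startup regardless of what the config contains */
        if (!ssl_initialize_random()) {
            fprintf(stderr, "OpenSSL random generator init failed.\n");
            exit(1);
        }
    }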
RFC 7231#5.3.4 makes a difference between a completely missing
'accept-encoding' header and an 'accept-encoding' header without any values.
This case was already correctly handled by accident, because an empty
accept-encoding does not match any known encoding. However this resulted in the
'other' encoding being added to the bitmap. Usually this also succeeds in
serving cached responses, because the cached response likely has no
'content-encoding', thus matching the identity case instead of not serving the
response, due to the 'other' encoding. But it's technically not 100% correct.
Fix this by special-casing 'accept-encoding' values with a length of zero and
extend the test to check that an empty accept-encoding is correctly handled.
Due to the reasons given above the test also passes without the change in
cache.c.
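
The special case boils down to something like this (an illustrative
sketch of the normalization step with hypothetical bit names, not the
exact cache.c code):

    #include <stdint.h>
    #include <stddef.h>
    #include <strings.h>

    #define ENC_IDENTITY  (1u << 0)
    #define ENC_OTHER     (1u << 31)

    /* A present but empty accept-encoding value means "identity only"
     * (RFC 7231#5.3.4) and must not set the 'other' bit. */
    static void add_encoding_sketch(const char *val, size_t len,
                                    uint32_t *bitmap)
    {
        if (len == 0) {
            *bitmap |= ENC_IDENTITY;
            return;
        }
        if (len == 8 && strncasecmp(val, "identity", 8) == 0)
            *bitmap |= ENC_IDENTITY;
        else
            *bitmap |= ENC_OTHER;  /* unknown encodings share one bit */
    }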
Vary support was added in HAProxy 2.4. This fix should be backported to 2.4+.
The commit c7b391aed ("BUG/MEDIUM: server/cli: Fix ABBA deadlock when fqdn
is set from the CLI") introduced 2 bugs. The first one is a typo on the
server's lock label (s/SERVER_UNLOCK/SERVER_LOCK/). The second one is about
the server's lock itself. It must be acquired to execute the "agent-send"
subcommand.
The patch above is marked to be backported as far as 1.8. Thus, this one
must also be backported as far as 1.8.
BUG/MINOR: server/cli: Don't forget to lock server on agent-send subcommand
The commit dcac41806 ("BUG/MEDIUM: resolvers: Add a task on servers to check
SRV resolution status") introduced a typo. In the resolv_srvrq_expire_task()
function, the resolver's lock must be used instead of the resolver itself.
This patch must be backported with the patch above (at least as far as 2.2).
When reusing a backend connection, do not reapply the SNI on the
connection. It should already be defined when the connection was
instantiated on a previous connect_server invocation. As the SNI is a
parameter used to select a connection, only connections with the same
value can be reused.
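
Conceptually the fix looks like this (a sketch with simplified types;
the real code tests SF_SRV_REUSED in connect_server()):

    struct conn_sketch;
    struct stream_sketch { unsigned flags; };
    #define SF_SRV_REUSED_SKETCH 0x1

    void set_servername_sketch(struct conn_sketch *conn, const char *sni);
    const char *build_sni_sketch(const struct stream_sketch *s);

    /* Apply the SNI only when the connection was newly created; a
     * reused connection was selected precisely because its SNI already
     * matches. */
    void maybe_set_sni_sketch(struct stream_sketch *s,
                              struct conn_sketch *conn)
    {
        if (!(s->flags & SF_SRV_REUSED_SKETCH))
            set_servername_sketch(conn, build_sni_sketch(s));
    }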
The impact of this bug is unknown and may be null. No memory leak has
been reported by valgrind. So this is more of a cleanup fix.
This commit relies on the SF_SRV_REUSED flag and thus depends on the
following fix:
BUG/MINOR: backend: restore the SF_SRV_REUSED flag original purpose
This should be backported up to 2.4.
The SF_SRV_REUSED flag was set if a stream reused a backend connection.
One of its purposes is to count the total reuse on the backend, in
opposition to newly instantiated connections.
However, the flag was diverted from its original purpose since the
following commit:
e8f5f5d8b2
BUG/MEDIUM: servers: Only set SF_SRV_REUSED if the connection if fully ready.
With this change, the flag is not set anymore if the mux is not ready
when a connection is picked for reuse. This can happen for multiplexed
connections which are inserted in the available list as soon as created
in http-reuse always mode. The goal of this change is to not immediately
retry this request on the same server in case of an error if the reused
connection is not fully ready.
This change is justified for the retry timeout handling but it breaks
other places which still use the flag for its original purpose. Mainly,
in this case the wrong 'connect' backend counter is incremented instead
of the 'reuse' one. The flag is also used in http_return_srv_error and
may have an impact if an http server error is replied for this stream.
To fix this problem, the original purpose of the flag is restored by
setting it unconditionally when a connection is reused. Additionally, a
new flag SF_SRV_REUSED_ANTICIPATED is created. This flag is set when the
connection is reused but the mux is not ready yet. For the timeout
handling on error, the request is retried immediately only if the stream
reused a connection without this new flag.
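
The resulting logic can be modeled as follows (simplified flags
mirroring SF_SRV_REUSED and SF_SRV_REUSED_ANTICIPATED):

    #define REUSED      0x1  /* models SF_SRV_REUSED */
    #define ANTICIPATED 0x2  /* models SF_SRV_REUSED_ANTICIPATED */

    /* When picking a connection for reuse */
    static unsigned on_reuse_sketch(int mux_ready)
    {
        unsigned flags = REUSED;       /* always set on reuse */

        if (!mux_ready)
            flags |= ANTICIPATED;      /* reused before fully ready */
        return flags;
    }

    /* On error: retry immediately only for a genuinely established
     * reused connection, not for an anticipated one. */
    static int retry_immediately_sketch(unsigned flags)
    {
        return (flags & (REUSED | ANTICIPATED)) == REUSED;
    }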
This must be backported up to 2.1.
When a server relies on a SRV resolution, a task is created to clean it up
(fqdn/port and address) when the SRV resolution is considered outdated
(based on the resolvers 'timeout' value). This is only possible if the server
inherits outdated info from a state file and is no longer selected to be
attached to a SRV item. Note that most of the time, a server is attached
to a SRV item. Thus when the item becomes obsolete, the server is cleaned
up.
It is important to have such a task to be sure the server will be free
again and get a chance to be resolved again with fresh information. Of
course, this patch is a workaround to solve a design issue. But there is
no other obvious way to fix it without rewriting all the resolvers part.
And it must be backportable.
This patch relies on following commits:
* MINOR: resolvers: Clean server in a dedicated function when removing a SRV item
* MINOR: resolvers: Remove server from named_servers tree when removing a SRV item
All the series must be backported as far as 2.2 after some observation
period. Backports to 2.0 and 1.8 must be evaluated.
When a server is cleaned up because the corresponding SRV item is removed,
we always remove the server from the srvrq's name_servers tree. For now, it
is useless because, if a server was attached to a SRV item, it means it was
already removed from the tree. But it will be mandatory to fix a bug.
A dedicated function is now used to clean up servers when a SRV item becomes
obsolete or when a requester is removed from a resolution. This patch is
mandatory to fix a bug.
To perform servers resolution, the resolver's lock is first acquired then
the server's lock when necessary. However, when the fqdn is set via the CLI,
the opposite is performed. So, it is possible to experience an ABBA
deadlock.
To fix this bug, the server's lock is acquired and released for each
subcommand of "set server", with an exception when the fqdn is set: in
that case, the resolver's lock is acquired first. Of course, this means
we must be sure to have a resolver to lock.
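
The ordering rule can be illustrated with a plain pthread sketch (not
the HAProxy locks themselves):

    #include <pthread.h>

    static pthread_mutex_t resolver_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t server_lock   = PTHREAD_MUTEX_INITIALIZER;

    /* Both the resolution path and the CLI path must take the locks in
     * the same order (resolver first, then server) so the ABBA pattern
     * cannot occur. */
    void set_fqdn_sketch(void)
    {
        pthread_mutex_lock(&resolver_lock);  /* resolver first */
        pthread_mutex_lock(&server_lock);    /* then the server */
        /* ... update the fqdn ... */
        pthread_mutex_unlock(&server_lock);
        pthread_mutex_unlock(&resolver_lock);
    }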
This patch must be backported as far as 1.8.
If a server is configured to rely on a SRV resolution, we must forbid
changing its fqdn on the CLI. Indeed, in this case, the server retrieves
its fqdn from the SRV resolution. If the fqdn is changed via the CLI, this
conflicts with the SRV resolution and leaves the server in an undefined
state. Most of the time, the SRV resolution remains enabled with no effect
on the server (no update). Sometimes the A/AAAA resolution for the new
fqdn is not enabled at all. It depends on the server state and resolver
state when the CLI command is executed.
This patch must be backported as far as 2.0 (maybe to 1.8 too?) after
some observation period.
To avoid repeating the same source code, the allocation and
initialization of the per_thr field of the server structure is moved to
a separate function.
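
Such a helper could look like this (a sketch with simplified types; the
real one initializes the server's per-thread trees and lists):

    #include <stdlib.h>

    struct per_thr_sketch { void *conn_tree; /* per-thread fields */ };
    struct server_sketch  { struct per_thr_sketch *per_thr; };

    static int nbthread_sketch = 4;    /* stands in for global.nbthread */

    /* Shared by the config parser and the 'add server' CLI handler. */
    static int srv_init_per_thr_sketch(struct server_sketch *srv)
    {
        int i;

        srv->per_thr = calloc(nbthread_sketch, sizeof(*srv->per_thr));
        if (!srv->per_thr)
            return -1;
        for (i = 0; i < nbthread_sketch; i++)
            srv->per_thr[i].conn_tree = NULL;  /* empty idle-conn tree */
        return 0;
    }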
I forgot about OpenSSL 1.0.2, which neither supports the build_sw target
to build only the software, nor reliably supports parallel builds. Given
that we're building 1.0.2 and 3.0.0, let's stay on the safe side and
keep 1.x sequential.
1/4 of the OpenSSL build time is spent building the docs, let's just
build the software and not the doc, by replacing the "all" target
with "build_sw". With this my build time drops from 1'28 to 1'09.
Nothing was done for the other libs, as it's unknown whether they
provide specific build targets.
Running the "make all" phase on my machine with -j$(nproc) shrinks the
build time from 4'52 to 1'28. It will not be that big of a change in
the CI since it looks like two CPUs are exposed, but it should still
remain a net win. Let's enable it. The install phase obviously remains
sequential however.
Using 'sudo' required quite a few workarounds in various places. Setting an
explicit 'ulimit -n' removes the requirement for 'sudo', resulting in a cleaner
workflow configuration.
In 2.4, commit d1ac2b90c ("MAJOR: htx: Remove the EOM block type and
use HTX_FL_EOM instead") changed the HTX processing to destroy the
blocks as they are processed. So the traces that were emitted at the
end of the send headers functions didn't have anything to show.
Let's move these traces earlier in the function, right before the HTX
processing, so that everything is still in place.
This should be backported to 2.4.
Since commit 7d013e796 ("BUG/MEDIUM: mux-h2: Xfer rxbuf to the upper
layer when creating a front stream"), the rxbuf is lost during the
call to h2c_frt_stream_new(), so the trace that happens later cannot
find a request there and we've lost the useful part indicating what
the request looked like. Let's move the trace before this call.
This should be backported to 2.4.
We're seeing some browsers setting up multiple connections and closing
some to just keep one. It looks like they do this in case they'd
negotiate H1. This results in aborted prefaces and log pollution about
bad requests and "PR--" in the status flags.
We already have an option to ignore connections with no data, it's called
http-ignore-probes. But it was not used by the H2 mux. However it totally
makes sense to use it during the preface.
This patch changes this so that connections aborted before sending the
preface can avoid being logged.
This should be backported to 2.4 and 2.3 at least, and probably even
as far as 2.0.
As part of the changes to support per-module stats data in 2.3-dev6
with commit ee63d4bd6 ("MEDIUM: stats: integrate static proxies stats
in new stats"), a small change resulted in the description field to
be replaced by the name field, making it pointless. Let's fix this
back.
This should fix issue #1291. Thanks to Nick Ramirez for reporting this
issue.
This patch can be backported to 2.3.
"sent H2 request" was already misaligned with the 3 other ones
(sent/rcvd, request/response), and now with "new H2 connection" that's
yet another alignment making the traces even less legible. Let's just
realign all 5 messages, this even eases quick pointer comparisons. This
should probably be backported to 2.4 as it's where it's the most likely
to be used in the mid-term.
It is currently very difficult to match some H2 trace outputs against
some log extracts because there's no exactly equivalent info.
This patch tries to address this by adding a TRACE_USER() call in h2_init()
that is matched in h2_trace() to report:
- connection pointer and direction
- frontend's name or server's name
- transport layer and control layer (e.g. "SSL/tcpv4")
- source and/or destination depending on what is set
This now makes it possible to get something like this at verbosity level complete:
<0>2021-06-16T18:30:19.810897+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x19fee50(F,PRF) : conn=0x7f373c026850(IN) fe=h2gw RAW/tcpv4 src=127.0.0.1:19540
<0>2021-06-16T18:30:19.810919+02:00 [00|h2|1|mux_h2.c:2731] rcvd H2 request : h2c=0x19fee50(F,FRH)
<0>2021-06-16T18:30:19.810998+02:00 [00|h2|1|mux_h2.c:1006] new H2 connection : h2c=0x1a04ee0(B,PRF) : conn=0x1a04ce0(OUT) sv=h2gw/s1 RAW/tcpv4 dst=127.0.0.1:4446
This function appends to a buffer some information from a connection.
This will be used by traces and possibly some debugging as well. A
frontend/backend/server, transport/control layers, source/destination
ip:port, connection pointer and direction are reported depending on
the available information.
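
Its rough shape (a sketch with simplified types; the real helper works
on struct connection and appends to a struct buffer):

    #include <stdio.h>
    #include <string.h>

    struct conn_sketch {
        int is_back;              /* outgoing (1) or incoming (0) */
        const char *xprt, *ctrl;  /* e.g. "SSL", "tcpv4" */
        const char *src, *dst;    /* formatted ip:port, or NULL */
    };

    /* Append whatever is known about the connection to <buf>, a
     * NUL-terminated string of at most <size> bytes. */
    static void append_conn_info_sketch(char *buf, size_t size,
                                        const struct conn_sketch *c)
    {
        size_t len = strlen(buf);

        snprintf(buf + len, size - len, " conn=%p(%s) %s/%s",
                 (const void *)c, c->is_back ? "OUT" : "IN",
                 c->xprt, c->ctrl);
        len = strlen(buf);
        if (c->src)
            snprintf(buf + len, size - len, " src=%s", c->src);
        len = strlen(buf);
        if (c->dst)
            snprintf(buf + len, size - len, " dst=%s", c->dst);
    }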
Since 2.4-dev3 with commit c4bfa59f1 ("MAJOR: mux-h1: Create the client
stream as later as possible"), a request error doesn't result in any
error response if "option http-ignore-probes" is set, there's just a
close. This is caused by an unneeded b_reset() in h1_process_demux()'s
error path, which makes h1_handle_bad_req() believe there was an empty
request. There is no reason for this reset to be there, it must have
been a leftover of an earlier attempt at dealing with the error, let's
drop it.
This should be backported to 2.4.
In 2.3, a significant improvement was brought against situations where
the queue was heavily used, because some LB algos were still checked
for no reason before deciding to put the request into the queue. This
was commit 82cd5c13a ("OPTIM: backend: skip LB when we know the backend
is full").
As seen in previous commit ("BUG/MAJOR: queue: set SF_ASSIGNED when
setting strm->target on dequeue") the dequeuing code is extremely
tricky, and the optimization above tends to emphasize transient issues
by making them permanent until the next reload, which is not acceptable
as the code must always be robust against any bad situation.
This commit brings a protection against such a situation by slightly
relaxing the test. Instead of checking that there are pending connections
in the backend queue, it also verifies that the backend's connections are
not solely composed of queued connections, which would then indicate we
are in this situation. This is not rocket science, but at least if the
situation happens, we know that it will unlock by itself once the streams
have left, as new requests will be allowed to reach the servers and to
flush the queue again.
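
That is, the "backend is full" short-circuit becomes something like
this (a sketch with hypothetical counters):

    /* Skip LB and queue the request only when the queue is non-empty
     * AND some connections are actually being served; if everything on
     * the backend is queued, nobody would ever dequeue us, so go
     * through the LB algorithms instead. */
    static int backend_is_full_sketch(int queued, int total_conns)
    {
        return queued > 0 && total_conns > queued;
    }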
This needs to be backported to 2.4 and 2.3.
Commit 82cd5c13a ("OPTIM: backend: skip LB when we know the backend is
full") has uncovered a long-buried bug in the dequeuing code: when a
server releases a connection, it picks a new one from the proxy's or
its queue. Technically speaking it only picks a pendconn which is a
link between a position in the queue and a stream. It then sets this
pendconn's target to itself, and wakes up the stream's task so that
it can try to connect again.
The stream then goes through the regular connection setup phases,
calls back_try_conn_req() which calls pendconn_dequeue(), which
sets the stream's target to the pendconn's and releases the pendconn.
It then reaches assign_server() which sees no SF_ASSIGNED and calls
assign_server_and_queue() to perform load balancing or queuing. This
one first destroys the stream's target and gets ready to perform load
balancing. At this point we're load-balancing for no reason since we
already knew what server was available. And this is where the commit
above comes into play: the check for the backend's queue above may
detect other connections that arrived in between, and will immediately
return FULL, forcing this request back into the queue. If the server
had a very low maxconn (e.g. 1 due to a long slowstart), it's possible
that this evicted connection was the last one on the server and that
no other one will ever be present to process the queue. Usually a
regularly processed request will still have its own srv_conn that will
be used during stream_free() to dequeue other connections. But if the
server had a down-up cycle, then a call to pendconn_grab_from_px()
may start to dequeue entries which had no srv_conn and which will have
no server slot to offer when they expire, thus maintaining the situation
above forever. Worse, as new requests arrive, there are always some
requests in the queue and the situation feeds on itself.
The correct fix here is to properly set SF_ASSIGNED in pendconn_dequeue()
when the stream's target is assigned (as it's what this flag means), so
as to avoid a load-balancing pass when dequeuing.
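
Conceptually (a sketch; SF_ASSIGNED and pendconn_dequeue() are the real
names, the types are simplified):

    struct target_sketch;
    struct pendconn_sketch { struct target_sketch *target; };
    struct stream_sketch   { struct target_sketch *target; unsigned flags; };
    #define SF_ASSIGNED_SKETCH 0x1

    /* Models the end of pendconn_dequeue(): once the stream inherits
     * the server chosen by whoever woke it up, mark it assigned so
     * assign_server() skips the load-balancing pass. */
    static void dequeue_sketch(struct stream_sketch *s,
                               const struct pendconn_sketch *p)
    {
        if (p->target) {
            s->target = p->target;
            s->flags |= SF_ASSIGNED_SKETCH;
        }
    }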
Many thanks to Pierre Cheynier for the numerous detailed traces he
provided that helped narrow this problem down.
This could be backported to all stable versions, but in practice only
2.3 and above are really affected since the presence of the commit
above. Given how tricky this code is it's better to limit it to those
versions that really need it.
With a single process, we don't need to USE_PRIVATE_CACHE, USE_FUTEX
nor USE_PTHREAD_PSHARED anymore. Let's only keep the basic spinlock
to lock between threads.
The relative_pid is always 1. In mworker mode we also have a
child->relative_pid which is always equal to relative_pid, except for the
master (0) or external processes (-1), but these types are usually tested
for, except for one place that was amended to carefully check for the
PROC_O_TYPE_WORKER option.
Changes were pretty limited as most usages of relative_pid were for
designating a process in stats output and peers protocol.