Add an HTX start-line flag and its counterpart in the HTTP message to
track the presence of the Upgrade option in the Connection header. This
way, without parsing the Connection header again, it is easy to know if
a client asks for a protocol upgrade and if the server agrees to do so. It
also makes it easy to perform some conformance checks when a
101-switching-protocols response is received.
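As an illustration only, a check based on those flags could look like the
sketch below; the flag names used here (HTX_SL_F_CONN_UPG and
HTTP_MSGF_CONN_UPG) are assumptions for the example, not necessarily the
ones added by this patch:

  /* sketch only: flag names are illustrative assumptions */
  static inline int req_asks_for_upgrade(const struct htx_sl *sl,
                                         const struct http_msg *msg)
  {
          /* both flags were set while the headers were parsed, so the
           * Connection header does not need to be parsed again */
          return (sl->flags & HTX_SL_F_CONN_UPG) &&
                 (msg->flags & HTTP_MSGF_CONN_UPG);
  }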
TCP to H1 upgrades are buggy for now. When such an upgrade is performed, a
crash is experienced. The bug is the result of the recent H1 mux
refactoring, and more specifically of commit c4bfa59f1 ("MAJOR:
mux-h1: Create the client stream as later as possible"). Indeed, the H1
mux is now responsible for creating the frontend conn-stream once the request
headers are fully received. Thus the TCP to H1 upgrade is a problem because
the frontend conn-stream already exists.
To fix the bug, we must keep this conn-stream and the associated stream and
use them in the H1 mux. To do so, the upgrade will be performed in two
steps. First, the mux is upgraded from mux-pt to mux-h1. Then, the mux-h1
performs the stream upgrade, once the request headers are fully received and
parsed. To do so, stream_upgrade_from_cs() must be used. This function sets
the SF_HTX flag to switch the stream to HTX mode, removes the SF_IGNORE
flag and eventually fills the request channel with some input data.
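A very rough sketch of that second step, as it could appear in the H1 mux
(the field names h1s->cs and h1s->rxbuf and the return convention are
assumptions made for the example):

  /* sketch: once the request headers are parsed, upgrade the existing
   * stream instead of creating a new conn-stream (names are assumptions) */
  if (!stream_upgrade_from_cs(h1s->cs, &h1s->rxbuf))
          goto err;  /* the pre-existing stream could not switch to HTX */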
This patch is required to fix the TCP to H1 upgrades and is intimately
linked with the next commits.
A bug was introduced by the early insertion of idle connections at the
end of connect_server(). It is possible to reuse a connection that is not
yet ready because it is still waiting for a handshake (for example with the
PROXY protocol or SSL). A wrong duplicate xprt_handshake_io_cb tasklet is
thus registered as a side-effect.
This triggers the BUG_ON statement of xprt_handshake_subscribe():
  BUG_ON(ctx->subs && ctx->subs != es);
To counter this, a check is now present in session_get_conn() to only
return a connection without the CO_FL_WAIT_XPRT flag. This might sometimes
cause the creation of dedicated server connections when in theory reuse
could have been used, but it probably only occurs rarely in real
conditions.
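The added check may be summarized by the following sketch (simplified, the
exact code may differ):

  /* sketch: in session_get_conn(), skip connections still waiting for
   * their transport-layer handshake (PROXY protocol, SSL, ...) */
  if (srv_conn && (srv_conn->flags & CO_FL_WAIT_XPRT))
          srv_conn = NULL;  /* do not reuse it; a new one will be created */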
This behavior is present since commit:
  MEDIUM: connection: Add private connections synchronously in session server list
It could also be further exacerbated by:
  MEDIUM: backend: add reused conn to sess if mux marked as HOL blocking
It can be backported up to 2.3.
NOTE: This bug seems to be only reproducible in mode tcp, for an
unknown reason. However, reuse should never happen when not in HTTP
mode. This improper behavior will be the subject of a dedicated patch.
This bug can easily be reproduced with the following config (a
webserver is required to accept the PROXY protocol on port 31080):
  global

  defaults
    mode tcp
    timeout connect 1s
    timeout server 1s
    timeout client 1s

  listen li
    bind 0.0.0.0:4444
    server bla1 127.0.0.1:31080 check send-proxy-v2
and with the inject client:
  $ inject -u 10000 -d 10 -G 127.0.0.1:4444
This should fix GitHub issue #1058.
Building with `"DEBUG=-DDEBUG_STRICT=1 -DDEBUG_USE_ABORT=1"` previously emitted the warning:
In file included from include/haproxy/api.h:35:0,
from src/mux_pt.c:13:
include/haproxy/buf.h: In function ‘br_init’:
include/haproxy/bug.h:42:90: warning: implicit declaration of function ‘abort’ [-Wimplicit-function-declaration]
#define ABORT_NOW() do { extern void ha_backtrace_to_stderr(); ha_backtrace_to_stderr(); abort(); } while (0)
^
include/haproxy/bug.h:56:21: note: in expansion of macro ‘ABORT_NOW’
#define CRASH_NOW() ABORT_NOW()
^
include/haproxy/bug.h:68:4: note: in expansion of macro ‘CRASH_NOW’
CRASH_NOW(); \
^
include/haproxy/bug.h:62:35: note: in expansion of macro ‘__BUG_ON’
#define _BUG_ON(cond, file, line) __BUG_ON(cond, file, line)
^
include/haproxy/bug.h:61:22: note: in expansion of macro ‘_BUG_ON’
#define BUG_ON(cond) _BUG_ON(cond, __FILE__, __LINE__)
^
include/haproxy/buf.h:875:2: note: in expansion of macro ‘BUG_ON’
BUG_ON(size < 2);
^
This patch fixes that issue. The `DEBUG_USE_ABORT` option exists for use with
static analysis tools. No backport needed.
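A minimal sketch of the kind of fix, assuming the missing declaration is
simply pulled in where ABORT_NOW() is defined (include/haproxy/bug.h):

  /* abort() is declared in <stdlib.h>; including it where ABORT_NOW() is
   * defined silences the implicit-declaration warning */
  #include <stdlib.h>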
Since the server SSL_CTX is now stored in the ckch_inst, it is no longer
necessary to pass an SSL_CTX to ckch_inst_new_load_srv_store() and
ssl_sock_load_srv_ckchs().
The client_crt member is not used anymore since the server's ssl context
initialization now behaves the same way as the bind lines' (using
ckch stores and instances).
When trying to update a backend certificate, we should find a
server-side ckch instance thanks to which we can rebuild a new ssl
context and a new ckch instance that replace the previous ones in the
server structure. This way any new ssl session will be built out of the
new ssl context and the newly updated certificate.
This resolves a subpart of GitHub issue #427 (the certificate part).
In order for the backend server's certificate to be hot-updatable, it
needs to fit into the implementation used for the "bind" certificates.
This patch follows the architecture implemented for the frontend and
reuses its structures and general function calls (adapted for the server
side).
The ckch store logic is kept and a dedicated ckch instance is used (one
per server). The whole sni_ctx logic was not kept though because it is
not needed.
All the new functions added in this patch are basically server-side
copies of functions that already exist on the frontend side, with all the
sni and bind_conf references removed.
The ckch_inst structure has a new 'is_server_instance' flag which is
used to distinguish regular instances from the server-side ones, and a
new pointer to the server's structure in the case of a backend instance.
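An illustrative sketch of the additions (only the two new members come from
this patch, the surrounding layout is assumed):

  struct ckch_inst {
          /* ... existing members (ckch_store link, sni_ctx list, ...) ... */
          int is_server_instance;  /* 1 for server-side (backend) instances */
          struct server *server;   /* owning server for a backend instance */
  };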
Since the new server ckch instances are linked to a standard ckch_store,
a lookup in the ckch store table will succeed, so the CLI code used to
update bind certificates needs to be adapted to manage those new server-side
ckch instances.
Split the server's ssl context initialization into the general ssl
related initializations and the actual initialization of a single
SSL_CTX structure. This way the context's initialization will be
usable by itself from elsewhere.
It is only a problem on the response path because the request payload length
is always known. But when a filter is registered to analyze the response
payload, the filtering may hang if the server closes just after the headers.
The root cause of the bug comes from an attempt to allow the filters to not
immediately forward the headers if necessary. A filter may choose to hold
the headers by not forwarding any byte of the payload. For a message with
no payload but a known payload length, there is always an EOM block to
forward. Thus holding the EOM block for bodyless messages is a good way to
also hold the headers. However, for messages with an unknown payload length,
there is no EOM block finishing the message, only a SHUTR flag on the
channel to mark the end of the stream. If there is no payload when it
happens, there is nothing at all to forward. In the filters API, this is
wrongly detected as a condition to not forward the headers.
Because it is not the most used feature and not the most obvious one, this
patch introduces another way to hold the message headers at the beginning of
the forwarding. A filter flag is added to explicitly say the headers should
be held. A filter may choose to set the STRM_FLT_FL_HOLD_HTTP_HDRS flag and
not forward anything to hold the headers. This flag is removed at each call,
thus it must always be explicitly set by filters. This flag is only evaluated
if no byte has ever been forwarded, because the headers are forwarded with
the first byte of the payload.
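A hypothetical filter payload callback holding the headers could look like
the sketch below (the callback signature and the way the flag is reached via
strm_flt() are assumptions of the example):

  /* sketch: hold the headers by (re-)setting the flag on each call and
   * forwarding no byte of the payload yet */
  static int my_flt_http_payload(struct stream *s, struct filter *filter,
                                 struct http_msg *msg, unsigned int offset,
                                 unsigned int len)
  {
          strm_flt(s)->flags |= STRM_FLT_FL_HOLD_HTTP_HDRS;
          return 0;  /* nothing forwarded, so the headers are held too */
  }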
The reg-tests/filters/random-forwarding.vtc reg-test is updated to also test
responses with an unknown payload length (with and without a payload).
This patch must be backported as far as 2.0.
The Prometheus approach requires outputting all values for a given metric
name, meaning we iterate through all metrics and then, in the inner loop,
through all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
This patch follows what has already been done on frontend and backend
side.
From this patch it should be possible to remove most of the duplicate
code on the Prometheus side for the server.
A few things to note though:
- state requires a prior calculation, so I moved that to a sort of helper
  `stats_fill_be_stats_computestate`.
- all ST_F*TIME fields require some minor computation, so I moved it to the
  beginning of the function under a condition.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The Prometheus approach requires outputting all values for a given metric
name, meaning we iterate through all metrics and then, in the inner loop,
through all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
This patch follows what has already been done on frontend side.
From this patch it should be possible to remove most of the duplicate
code on the Prometheus side for the backend.
A few things to note though:
- the status and uweight fields require prior computation, so I moved that to
  a sort of helper `stats_fill_be_stats_computesrv`.
- all ST_F*TIME fields require some minor computation, so I moved it to the
  beginning of the function under a condition.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
While working on backends/servers I realised I could have written that in
a better way and avoided one extra break. This is slightly improving
readability.
Also, while being here, fix a function declaration which was not 100%
accurate.
This patch does not change the behaviour of the code.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The dump state is now passed to the function so that the caller can adjust
the behavior. A new series of 4 values allows stopping *after* dumping main
instead of before it or any of the usual loops. This allows a BUG_ON() that
happens very high in the call graph (e.g. startup, or the scheduler itself)
to also be reported while still understanding what the call path was.
The purpose is to enable the dumping of a backtrace on BUG_ON(). While
it's very useful to know that a condition was met, very often some
caller context is missing to figure how the condition could happen.
From now on, on systems featuring backtrace, a backtrace of the calling
thread will also be dumped to stderr in addition to the unexpected
condition. This will help users of DEBUG_STRICT as they'll most often
find this backtrace in their logs even if they can't find their core
file.
A new "debug dev bug" expert-mode CLI command was added to test the
feature.
This function calls the ha_dump_backtrace() function with a locally
allocated buffer and sends the output slightly indented to fd #2. It's
meant to be used as an emergency backtrace dump.
The backtrace dumping code was located into the thread dump function
but it looks particularly convenient to be able to call it to produce
a dump in other situations, so let's move it to its own function and
make sure it's called last in the function so that we can benefit from
tail merging to save one entry.
In order to simplify the code and remove annoying ifdefs everywhere,
let's always export my_backtrace() and make it adapt to the situation
and return zero if not supported. A small update in the thread dump
function was needed to make sure we don't use its results if it fails
now.
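A hedged sketch of the resulting shape (the guard macro name used below is
an assumption):

  /* sketch: my_backtrace() is now always exported; callers only need to
   * check for a zero return when the feature is not supported */
  int my_backtrace(void **buffer, int max)
  {
  #ifdef USE_BACKTRACE
          return backtrace(buffer, max);   /* from <execinfo.h> */
  #else
          return 0;                        /* not supported on this system */
  #endif
  }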
Use `stats_fill_fe_stats` when possible to avoid duplicating code; make
use of the field selector to get only the needed field.
This should not introduce any difference in the output.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The Prometheus approach requires outputting all values for a given metric
name, meaning we iterate through all metrics and then, in the inner loop,
through all objects for this metric.
In order to allow more code reuse, adapt the stats API to be able to
select one field or fill them all otherwise.
From this patch it should be possible to remove most of the duplicate
code on the Prometheus side for the frontend.
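A sketch of how a caller might then request a single field (the exact
prototype of stats_fill_fe_stats() with a selector is assumed here):

  /* sketch: ask the stats API to fill only one field for this frontend */
  enum stat_field field = ST_F_REQ_TOT;
  struct field stats[ST_F_TOTAL_FIELDS];
  uint64_t req_tot = 0;

  if (stats_fill_fe_stats(px, stats, ST_F_TOTAL_FIELDS, &field))
          req_tot = stats[ST_F_REQ_TOT].u.u64;  /* only this slot was filled */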
Signed-off-by: William Dauchy <wdauchy@gmail.com>
Another patch in order to try to reconcile haproxy stats and
Prometheus. Here I'm adding a proper start time field in order to make
proper use of the uptime field.
That being done, we can move the calculation into `fill_info`.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
In order to prepare a possible merge of fields between haproxy stats and
Prometheus, duplicate 3 fields:
  INF_MEMMAX
  INF_POOL_ALLOC
  INF_POOL_USED
Those were specifically expressed in MB, which is not what Prometheus
recommends. We therefore used them but changed the unit while doing the
calculation, which created a special case for them, down to the description.
This patch:
- removes some possible confusion, i.e. using an MB field for bytes
- will permit an easier merge of fields, such as the description
The first consequence for now is that we can remove the calculation on the
Prometheus side and move it into `fill_info`.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The MUX_ES_NOTIMPL_ERR exit status is added to allow the multiplexers to
report errors about not implemented features. This will be used by the H1
mux to return 501-not-implemented errors.
Add the support for the 501-not-implemented status code with the
corresponding default message. The documentation is updated accordingly
because it is now part of the status codes HAProxy may emit via an errorfile
or a deny/return HTTP action.
When a tcpcheck ruleset uses multiple connections, the existing one must be
closed and destroyed before opening the new one. This part is handled in
the tcpcheck_main() function, when called from the wake callback function
(wake_srv_chk). But this is indeed a problem, because this function may be
called from the mux layer. This means a mux may call the wake callback
function of the data layer, which may release the connection and the mux. It
is easy to see how hazardous this is. And actually, depending on the
scheduling, it leads to crashes.
Thus, we must avoid releasing the connection in the wake callback context,
and move this part into the check's process function instead. To do so, we
rely on the CHK_ST_CLOSE_CONN flag. When a connection must be replaced by a
new one, this flag is set on the check, in the tcpcheck_main() function, and
the check's task is woken up. Then, the connection is really closed in the
process_chk_conn() function.
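The deferral can be summarized by the following sketch (simplified):

  /* sketch, in tcpcheck_main(): do not close here, let the check task do it */
  check->state |= CHK_ST_CLOSE_CONN;
  task_wakeup(check->task, TASK_WOKEN_OTHER);

  /* sketch, in process_chk_conn(): the connection is really released here */
  if (check->state & CHK_ST_CLOSE_CONN) {
          /* close and destroy the previous connection, then clear the flag */
          check->state &= ~CHK_ST_CLOSE_CONN;
  }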
This patch must be backported as far as 2.2, with some adaptations however
because the code is not exactly the same.
Now the show_fd helpers at the transport and mux levels return an integer
which indicates whether or not the inspected entry looks suspicious. When
an entry is reported as suspicious, "show fd" will suffix it with an
exclamation mark ('!') in the dump, which is supposed to help detect
them.
For now, helpers were adjusted to adapt to the new API but none of them
reports any suspicious entry yet.
Just like we did for the muxes, now the transport layers will have the
ability to provide helpers to report more detailed information about their
internal context. When the helper is not known, the pointer continues to
be dumped as-is if it's not NULL. This way a transport with no context nor
dump function will not add a useless "xprt_ctx=(nil)" but the pointer will
be emitted if valid or if a helper is defined.
When 0c439d895 ("BUILD: tools: make resolve_sym_name() return a const")
was written, the pointer argument ought to have been turned to const for
more flexibility. Let's do it now.
It's about the third time I get confused by these functions, half of
which manipulate the reference as a whole and the other half only
an entry. For me "pat_ref_commit" means committing the pattern reference,
not just an element, so let's rename it. A number of other ones should
really be renamed before 2.4 gets released :-/
This function must be used to emit an alert if a proxy does not have at
least one of the requested capabilities. An additional message may be
appended to the alert.
This is from the output of codespell. It's done at once over a bunch
of files and only affects comments, so there is nothing user-visible.
No backport needed.
Commit c55a626217 ("MINOR: contrib/prometheus-exporter: Add
missing global and per-server metrics") renamed two metrics between
v2.2 and v2.3:
  server_idle_connections_current
  server_idle_connections_limit
This breaks some tools which make use of those metrics while
supporting several haproxy versions. This build_info will permit tools
which make use of metrics to match the haproxy version and adapt
the list of expected metrics. This was already possible using the haproxy
stats socket but not with the Prometheus export.
This patch follows Prometheus best practices for exporting specific software
information. It adds a new field `build_info` so we can extend it
to other parameters if needed in the future.
example output:
# HELP haproxy_process_build_info HAProxy build info.
# TYPE haproxy_process_build_info gauge
haproxy_process_build_info{version="2.4-dev5-2e1a3f-5"} 1
Even though it is not a bugfix, this patch will make more sense when
backported as far as 2.0.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
This allows using the address of the server rather than the name of the
server for keeping track of servers in a backend for stickiness.
The peers code was also extended to support feeding the dictionary using
this key instead of the name.
Fixes #814
The accept-encoding normalizer now explicitly manages a subset of
encodings which will all have their own bit in the encoding bitmap
stored in the cache entry. This way two requests with the same primary
key will be served the same cache entry if they both explicitly accept
the stored response's encoding, even if their respective secondary keys
are not the same and do not match the stored response's one.
The actual hash of the accept-encoding will still be used if the
response's encoding is unmanaged.
The encoding matching and the encoding weight parsing are done for every
subpart of the accept-encoding values, and a bitmap of accepted
encodings is built for every request. It is then tested against any stored
response that has the same primary key until one with an accepted
encoding is found.
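The matching itself boils down to a bitwise test, roughly like the sketch
below (the bit and variable names are illustrative, not the patch's actual
ones):

  /* sketch: one bit per explicitly managed encoding (names illustrative) */
  #define VARY_ENCODING_GZIP     0x01
  #define VARY_ENCODING_DEFLATE  0x02
  #define VARY_ENCODING_IDENTITY 0x04

  /* req_encodings: bitmap built from the request's accept-encoding values;
   * entry_encoding: bitmap built from the stored response's content-encoding */
  if (req_encodings & entry_encoding)
          return 1;  /* the stored response's encoding is acceptable */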
The specific "identity" and "*" accept-encoding values are managed too.
When storing a response in the cache, we also parse the content-encoding
header in order to only set the response's corresponding encoding's bit
in its cache_entry encoding bitmap.
This patch fixes GitHub issue #988.
It does not need to be backported.
If any of the secondary hash normalizing functions raises an error, the
secondary hash will be unusable. In this case, the response will not be
stored anymore.
Add traces to get an idea of why this function may fail. In fact,
it never fails when the passed parameters are correct, especially the
lengths. This is not the case when a packet is not correctly built
before being encrypted.
Even if the sizes of the frames built by qc_build_frm() are computed so as
not to overflow a buffer, do not rely on this and always make the packet
build fail if we could not build a frame.
Also add traces to get an idea of where qc_build_frm() fails.
Fixes a memory leak in qc_build_phdshk_apkt().