In FD dumps it's often very important to figure out what upper layer function
is going to be called. Let's export the few I/O callbacks that appear as
tasklet functions so that "show fd" can resolve them instead of printing
a pointer relative to main. For example:
1028 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7f00b889b200 iocb=0x65b638(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7f00c8824de0 h2c.st0=FRH .err=0 .maxid=795 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=795 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7f00c86d0750 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7f00c88252e0(ev=1 tl=0x7f00a07d1aa0 tl.calls=1047 tl.ctx=0x7f00c8824de0 tl.fct=h2_io_cb) .sent_early=0 .early_in=0
The SSL context contains a lot of important details that are currently
missing from debug outputs. Now that we detect ssl_sock, we can perform
some sanity checks, print the next xprt, the subscriber callback's context,
handler and number of calls. The process function is also resolved. This
now gives for example on an H2 connection:
1029 : st=0x21(R:rA W:Ra) ev=0x01(heopI) [lc] tmask=0x2 umask=0x2 owner=0x7fc714881700 iocb=0x65b528(sock_conn_iocb) back=0 cflg=0x00001300 fe=recv mux=H2 ctx=0x7fc734545e50 h2c.st0=FRH .err=0 .maxid=217 .lastid=-1 .flg=0x0000 .nbst=0 .nbcs=0 .fctl_cnt=0 .send_cnt=0 .tree_cnt=0 .orph_cnt=0 .sub=1 .dsi=217 .dbuf=0@(nil)+0/0 .msi=-1 .mbuf=[1..1|32],h=[0@(nil)+0/0],t=[0@(nil)+0/0] xprt=SSL xprt_ctx=0x7fc73478f230 xctx.st=0 .xprt=RAW .wait.ev=1 .subs=0x7fc734546350(ev=1 tl=0x7fc7346702e0 tl.calls=278 tl.ctx=0x7fc734545e50 tl.fct=main-0x144efa) .sent_early=0 .early_in=0
Just like we did for the muxes, now the transport layers will have the
ability to provide helpers to report more detailed information about their
internal context. When the helper is not known, the pointer continues to
be dumped as-is if it's not NULL. This way a transport with no context nor
dump function will not add a useless "xprt_ctx=(nil)" but the pointer will
be emitted if valid or if a helper is defined.
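As a rough illustration of the mechanism (the callback name, its exact
signature and the call site below are assumptions, not the final API):

    /* hypothetical per-transport hook describing xprt_ctx for "show fd" */
    struct xprt_ops {
        /* ... existing operations ... */
        void (*show_fd)(struct buffer *msg, const struct connection *conn,
                        const void *xprt_ctx);
    };

    /* in the fd dump code (simplified) */
    if (conn->xprt && conn->xprt->show_fd)
        conn->xprt->show_fd(&trash, conn, conn->xprt_ctx);
    else if (conn->xprt_ctx)
        chunk_appendf(&trash, " xprt_ctx=%p", conn->xprt_ctx);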
These ones are definitely missing from some dumps, let's report them! We
print the xprt's name instead of its useless pointer, as well as its ctx
when xprt is not NULL.
Over time the code has grown ugly, casting fdt.owner to a struct connection
for about everything. Let's have a const struct connection* there and take
this opportunity to pass all the fields as const as well.
Additionally a misplaced closing parenthesis on the output was fixed.
When 0c439d895 ("BUILD: tools: make resolve_sym_name() return a const")
was written, the pointer argument ought to have been turned to const for
more flexibility. Let's do it now.
That was causing confusing outputs like this one when an H2S is known:
1030 : ... last_h2s=0x2ed8390 .id=775 .st=HCR.flg=0x4001 .rxbuf=...
^^^^
This was introduced by commit ab2ec4540 in 2.1-dev2 so the fix can be
backported as far as 2.1.
With a 2.6.8 wireshark, this module could not be compiled because of the
missing ws_version.h header. This patch offers the possibility to compile this
plugin without having to include this header. Furthermore, with my wireshark
version a "plugin_release" object is required to make it loadable by
wireshark. This is a string which seems to have to match a dotted string made
of your wireshark major and minor versions.
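For illustration only, the declarations the plugin needs look roughly like
this (the exact attributes and version strings depend on the wireshark headers
in use, so this is just a sketch):

    /* wireshark seems to refuse loading the plugin unless it exports a
     * "plugin_release" string matching its own "<major>.<minor>" version */
    const char plugin_version[] = "0.0.1";
    const char plugin_release[] = "2.6";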
This counter could be hugely incremented by the peer task responsible for
managing peer synchronizations and reconnections; for instance, when a peer is
not reachable, there is a period during which the appctx is not created. If we
receive stick-table updates before the peer session (appctx) is instantiated,
we reach the code responsible for incrementing the "new_conn" counter.
With this patch we increment this counter only when we really instantiate a new
peer session thanks to peer_session_create().
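Schematically the change amounts to this (a heavily simplified sketch; the
real function does much more and the exact location of the counter may
differ):

    static struct appctx *peer_session_create(struct peers *peers, struct peer *peer)
    {
        struct appctx *appctx = NULL;

        /* ... allocate and initialize the applet context and the session;
         * failures return NULL before reaching this point ... */

        peer->new_conn++; /* only counted when a session is really instantiated */
        return appctx;
    }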
May be backported as far as 2.0.
When V=1 is used in conjunction with help, the output becomes pretty
difficult to read properly.
$ make TARGET=linux-glibc V=1 help
..
DEBUG_USE_ABORT: use abort() for program termination, see include/haproxy/bug.h for details
echo; \
if [ -n "" ]; then \
if [ -n "" ]; then \
echo "Current TARGET: "; \
else \
echo "Current TARGET: (custom target)"; \
fi; \
else \
echo "TARGET not set, you may pass 'TARGET=xxx' to set one among :";\
echo " linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, netbsd,"; \
echo " osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,"; \
echo " custom"; \
fi
TARGET not set, you may pass 'TARGET=xxx' to set one among :
linux-glibc, linux-glibc-legacy, solaris, freebsd, dragonfly, netbsd,
osx, openbsd, aix51, aix52, aix72-gcc, cygwin, haiku, generic,
custom
echo;echo "Enabled features for TARGET '' (disable with 'USE_xxx=') :"
Enabled features for TARGET '' (disable with 'USE_xxx=') :
set -- POLL ; echo " $*" | (fmt || cat) 2>/dev/null
POLL
echo;echo "Disabled features for TARGET '' (enable with 'USE_xxx=1') :"
Disabled features for TARGET '' (enable with 'USE_xxx=1') :
set -- EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE THREAD PTHREAD_PSHARED BACKTRACE STATIC_PCRE STATIC_PCRE2 TPROXY LINUX_TPROXY LINUX_SPLICE LIBCRYPT CRYPT_H GETADDRINFO OPENSSL LUA FUTEX ACCEPT4 CLOSEFROM ZLIB SLZ CPU_AFFINITY TFO NS DL RT DEVICEATLAS 51DEGREES WURFL SYSTEMD OBSOLETE_LINKER PRCTL THREAD_DUMP EVPORTS OT QUIC; echo " $*" | (fmt || cat) 2>/dev/null
EPOLL KQUEUE NETFILTER PCRE PCRE_JIT PCRE2 PCRE2_JIT PRIVATE_CACHE
This commit ensures the help target always discards line echoing
regardless of the V variable, as already done for the reg-tests-help target.
If a server varies on the accept-encoding header and it sends a response
with an encoding we do not know (see parse_encoding_value function), we
will not store it. This will prevent unexpected errors caused by
cache collisions that could happen in accept_encoding_hash_cmp.
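In pseudo-code (response_encoding_known() is a made-up wrapper around the
real parse_encoding_value(), only there to illustrate the intent):

    /* when about to store a response that varies on accept-encoding */
    if (!response_encoding_known(htx))
        return 0;  /* unknown encoding: do not store the response, so it can
                    * never collide in accept_encoding_hash_cmp() */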
Commit 5a982a7165 ("MINOR:
contrib/prometheus-exporter: export build_info") broke lua
`core.get_info()`.
This patch makes sure build_info is correctly initialised in all cases.
Reviewed-by: William Dauchy <wdauchy@gmail.com>
Previous commit da2b0844f ("MINOR: peers: Add traces for peer control
messages.") introduced a build warning on some compiler versions after
the removal of variable "peers" in peer_send_msgs() because variable
"s" was used only to assign this one, and variable "si" to assign "s".
Let's remove both to fix the warning. No backport is needed.
This is v2 of the fix; it includes a missing pointer initialization which was
causing a segfault in v1 (949a7f6459).
This bug happens when a service has multiple records on the same host
and the server provides the A/AAAA resolution in the response as AR
(Additional Records).
In such a condition, the first occurrence of the host will be taken from
the Additional section, while the second (and next ones) will be processed
by an independent resolution task (like we used to do before 2.2).
This can lead to a situation where the "synchronisation" of the
resolution may diverge, as described in github issue #971.
Because of this behavior, HAProxy mixes various types of requests to
resolve the full list of servers: SRV+AR for all "first" occurrences and
A/AAAA for all other occurrences of an existing hostname.
I.e. with the following type of response:
;; ANSWER SECTION:
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A2.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 86 A3.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 80 A1.tld.
_http._tcp.be2.tld. 3600 IN SRV 5 500 85 A3.tld.
;; ADDITIONAL SECTION:
A2.tld. 3600 IN A 192.168.0.2
A3.tld. 3600 IN A 192.168.0.3
A1.tld. 3600 IN A 192.168.0.1
A3.tld. 3600 IN A 192.168.0.3
the first A3 host is resolved using the Additional Section and the
second one through a dedicated A request.
When linking the SRV records to their respective Additional ones, a
condition was missing (check whether said SRV record is already attached to
an Additional one): processing of SRV records stopped as soon as the target
SRV field matched the Additional record name. Hence only the first
occurrence of a target was managed by an additional record.
This patch adds a condition in this loop to ensure the record being
parsed is not already linked to an Additional Record. If so, we can
carry on the parsing to find a possible next one with the same target
field value.
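The added condition looks roughly like this (structure and field names are
simplified, not the exact code):

    /* for each SRV record, find the Additional record it refers to */
    list_for_each_entry(srv_item, &answer_list, list) {
        if (srv_item->ar_item)
            continue;  /* already linked to an Additional record: keep
                        * looking for the next occurrence of this target */

        if (strcmp(srv_item->target, ar_item->name) == 0) {
            srv_item->ar_item = ar_item;
            break;
        }
    }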
backport status: 2.2 and above
Display traces when sending/receiving peer control messages (synchronisation, heartbeat).
Add remaining traces when parsing malformed messages (acks, stick-table definitions)
or ignoring them.
Also add traces when releasing a session or when reaching the
PEER_SESS_ST_ERRPROTO peer protocol state.
It's about the third time I get confused by these functions, half of
which manipulate the reference as a whole while the others manipulate only
an entry. For me "pat_ref_commit" means committing the pattern reference,
not just an element, so let's rename it. A number of other ones should
really be renamed before 2.4 gets released :-/
There is no low-level API to achieve the same as on Linux/FreeBSD, so we rely
on the number of available CPUs. Without this, the number of threads is just 1
on Mac, while my M1 has 8 cores.
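The detection is essentially the following (a sketch; the exact call and the
function it lives in may differ in the patch):

    #include <unistd.h>

    /* no affinity API on macOS: fall back to the number of online CPUs */
    static int count_usable_cpus(void)
    {
        long n = sysconf(_SC_NPROCESSORS_ONLN);

        return n >= 1 ? (int)n : 1;
    }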
Backporting to 2.1 should be enough if that's possible.
Signed-off-by: David CARLIER <devnexen@gmail.com>
A fatal error is now reported if a server is defined in a frontend
section. Till now, a warning was just emitted and the server was ignored. The
warning was added in 1.3.4 when the frontend/backend keywords were
introduced, to allow a smooth transition and to not break existing
configs. It is old enough now to emit a fatal error in this case.
This patch is related to the issue #1043. It may be backported at least as
far as 2.2, and possibly to older versions. It relies on the previous commit
("MINOR: config: Add failifnotcap() to emit an alert on proxy capabilities").
This function must be used to emit an alert if a proxy does not have at
least one of the requested capabilities. An additional message may be
appended to the alert.
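A sketch of what such a helper can look like (simplified; the real prototype
may carry more context such as the file and line of the offending keyword):

    /* returns non-zero and emits an alert if proxy <px> has none of the
     * capabilities in <cap>; <hint> is an optional message appended to it */
    int failifnotcap(struct proxy *px, int cap, const char *hint)
    {
        if (px->cap & cap)
            return 0;

        ha_alert("%s '%s' cannot use this keyword.%s%s\n",
                 proxy_type_str(px), px->id,
                 hint ? " " : "", hint ? hint : "");
        return 1;
    }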
The HAPROXY_CFGFILES env variable is built using a static trash chunk, via a
call to get_trash_chunk() function. This chunk is reserved during the whole
configuration parsing. It is far too large to guarantee it will not be
reused during the configuration parsing. And in fact, it happens in the lua
code since the commit f67442efd ("BUG/MINOR: lua: warn when registering
action, conv, sf, cli or applet multiple times"), when a lua script is
loaded.
To fix the bug, we now use a dynamic buffer instead. And we call memprintf()
function to handle both the allocation and the formatting. Allocation errors
at this stage are fatal.
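The new code follows this pattern (a sketch with illustrative variable names;
memprintf() reformats and reallocates the destination string on each call):

    char *env_cfgfiles = NULL;
    struct wordlist *wl;

    list_for_each_entry(wl, &cfg_cfgfiles, list) {
        if (!memprintf(&env_cfgfiles, "%s%s%s",
                       env_cfgfiles ? env_cfgfiles : "",
                       env_cfgfiles ? ";" : "", wl->s)) {
            ha_alert("Out of memory while collecting configuration file names.\n");
            exit(1);  /* allocation errors are fatal at this stage */
        }
    }
    setenv("HAPROXY_CFGFILES", env_cfgfiles ? env_cfgfiles : "", 1);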
This patch should fix the issue #1041. It must be backported as far as 2.0.
There does not seem to be a reason to close connections after each
request; reflect that in the tests by doing all requests within the same
client.
Signed-off-by: William Dauchy <wdauchy@gmail.com>
The strict-limits global option was introduced with commit 0fec3ab7b
("MINOR: init: always fail when setrlimit fails"). When used in
conjunction with master-worker, haproxy will not fail when a setrlimit
fails. This happens because we only exit() if master-worker isn't used.
This patch removes all tests for master-worker mode for all cases covered
by the strict-limits scope.
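Schematically, in the places where a setrlimit() failure is handled (a
sketch; "strict_limits" stands for whatever internal flag the option sets):

    if (setrlimit(RLIMIT_NOFILE, &limit) == -1) {
        if (strict_limits) {
            ha_alert("[%s.main()] Cannot raise FD limit to %d.\n",
                     argv[0], global.rlimit_nofile);
            exit(1);  /* no longer skipped in master-worker mode */
        }
        else
            ha_warning("[%s.main()] Cannot raise FD limit to %d.\n",
                       argv[0], global.rlimit_nofile);
    }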
This should be backported from 2.1 onward.
This should fix issue #1042.
Reviewed by William Dauchy <wdauchy@gmail.com>
If a server is defined in a frontend, thus a proxy without the backend
capability, the 'check' and 'agent-check' keywords are ignored. This way, no
check is performed on an ignored server. This avoids a segfault because some
part of the tcpchecks are not fully initialized (or released for frontends
during the post-check).
In addition, a test on the server's proxy capabilities is performed when
checks or agent-checks are initialized, and nothing is performed for servers
attached to a non-backend proxy.
This patch should fix the issue #1043. It must be backported as far as 2.2.
If an error occurs during the sample expression parsing, the allocated
sample_expr is not freed despite having its main pointer reset.
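The error path therefore needs to release it too, roughly like this (sketch;
the exact label and surrounding code differ):

    out_error:
        release_sample_expr(expr);  /* free the partially built expression */
        expr = NULL;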
This fixes GitHub issue #1046.
It could be backported as far as 1.8.
This reverts commit 949a7f6459.
The first part of the patch introduces a bug. When a dns answer item is
allocated, its <ar_item> is only initialized at the end of the parsing, when
the item is added in the answer list. Thus, we must not try to release it
during the parsing.
The second part is also probably buggy. It fixes the issue #971 but reverts
a fix for the issue #841 (see commit fb0884c8297 "BUG/MEDIUM: dns: Don't
store additional records in a linked-list"). So it must be at least
revalidated.
This revert fixes a segfault reported in a comment of the issue #971. It
must be backported as far as 2.2.
Add a base test to start with something, even though this is not
necessarily complete.
Also make use of the recent REQUIRE_SERVICE option to exclude it from the
test list if it was not built with prometheus included.
note: I thought it was possible to send multiple requests within the
same client, but I'm getting "HTTP header is incomplete" from the second
request
Signed-off-by: William Dauchy <wdauchy@gmail.com>
I badly tested my previous patch, forgetting to remove the "+" test which is
present for options but not for services; the list of services does not
have any "+" at the beginning of each service
this patch is fixing commit aabde71332 ("MINOR:
reg-tests: add a way to add service dependency")
Signed-off-by: William Dauchy <wdauchy@gmail.com>
like it is done in other places, check the return value of
`alloc_trash_chunk` before using it. This was detected by coverity.
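i.e. something along these lines (sketch):

    smp_trash = alloc_trash_chunk();
    if (!smp_trash)
        return 0;  /* fail the converter rather than dereferencing NULL */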
this patch fixes commit 591fc3a330
("BUG/MINOR: sample: fix concat() converter's corruption with non-string
variables")
As a consequence, this patch should be backported as far as 2.0
this should fix github issue #1039
Signed-off-by: William Dauchy <wdauchy@gmail.com>
I was looking at writing a simple first test for prometheus but I
realised there is no proper way to exclude it if haproxy was not built
with prometheus plugin.
Today we have `REQUIRE_OPTIONS` in reg-tests which is based on `Feature
list` from `haproxy -vv`. Those options are coming from the Makefile
itself.
A plugin is built this way:
EXTRA_OBJS="contrib/prometheus-exporter/service-prometheus.o"
It does register service actions through `service_keywords_register`.
Those are listed through `list_services` in `haproxy -vv`.
To facilitate parsing, I slightly changed the output to a single line
and integrated it into the regtests shell script so that we can now specify a
dependency while writing a reg-test for prometheus, e.g.:
#REQUIRE_SERVICE=prometheus-exporter
#REQUIRE_SERVICES=prometheus-exporter,foo
There might be other ways to handle this, but that's the cleanest I
found; I understand people might be concerned by this output change in
`haproxy -vv` which goes from:
Available services :
foo
bar
to:
Available services : foo bar
Signed-off-by: William Dauchy <wdauchy@gmail.com>
- check functions are never called with a NULL args list, it is always
  an array, so the first check can be removed
- the expression parser guarantees that we can't have anything else,
  because we mentioned the json converter takes a mandatory string argument.
  Thus the test on `ARGT_STR` can be removed as well
- also add a line break between the enum and the function declaration
In order to validate it, add a simple json test covering very simple
cases; it can be improved in the future:
- default json converter without args
- json converter failing on error (utf8)
- json converter with error being removed (utf8s)
Signed-off-by: William Dauchy <wdauchy@gmail.com>
GitHub issue #1037 reported a memory leak in deinit() caused by an
allocation made in sa2str() that was stored in srv_set_addr_desc().
When destroying each server of a proxy in deinit(), also free the
memory used by the key of server->addr_node.
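Concretely, the per-server cleanup now also does roughly this:

    /* the key was allocated by sa2str() in srv_set_addr_desc() */
    free(srv->addr_node.key);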
The leak was introduced in commit 92149f9a8 ("MEDIUM: stick-tables: Add
srvkey option to stick-table") which is not in any released version so
no backport is needed.
Cc: Tim Duesterhus <tim@bastelstu.be>
Patrick Hemmer reported that calling concat() with an integer variable
causes a %00 to appear at the beginning of the output. Looking at the
code, it's not surprising. The function uses get_trash_chunk() to get
one of the trashes, but can call casting functions which will also use
their trash in turn and will cycle back to ours, causing the trash to
be overwritten before being assigned to a sample.
By allocating the trash from a pool using alloc_trash_chunk(), we can
avoid this. However we must free it so the trash's contents must be
moved to a permanent trash buffer before returning. This is what's
achieved using smp_dup().
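The resulting pattern in the converter is roughly the following (simplified
sketch, not the literal code):

    struct buffer *tmp;

    tmp = alloc_trash_chunk();  /* private chunk, immune to trash rotation */
    if (!tmp)
        return 0;

    /* ... append the sample and the arguments into tmp, possibly going
     * through casting functions which use the shared trash chunks ... */

    smp->data.u.str = *tmp;
    smp->data.type = SMP_T_STR;
    smp_dup(smp);               /* copy the result into a permanent buffer */
    free_trash_chunk(tmp);      /* the private chunk can now be released */
    return 1;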
This should be backported as far as 2.0.
William Dauchy and Christopher Faulet are the official maintainers of the
Prometheus exporter. William knows the Prometheus usages better and is the
referent for the features, while Christopher's role is more code-centric: he
takes care of bugs and of the exporter integration into HAProxy.
This adds "referer,ot,uint,iif,fo,keep-alives" to the ignore list:
- "referer" is the well-known HTTP header field name (with its
spelling error)
- "ot" appears a lot in the opentracing contrib.
- "iff" often stands for "if and only if"
- "fo" appears as a test string in tests/ist.c (could possibly be
changed)
- "keep-alives" appears as a noon in "...enable TCP keep-alives".
This is from the output of codespell. It's done at once over a bunch
of files and only affects comments, so there is nothing user-visible.
No backport needed.
commit c55a626217 ("MINOR: contrib/prometheus-exporter: Add
missing global and per-server metrics") is renaming two metrics between
v2.2 and v2.3:
server_idle_connections_current
server_idle_connections_limit
It is breaking some tools which are making use of those metrics while
supporting several haproxy versions. This build_info metric will allow tools
which make use of metrics to match the haproxy version and adjust the list of
expected metrics. This was possible using the haproxy stats socket but not
with the prometheus export.
This patch follows prometheus best practices for exporting software-specific
information. It adds a new field `build_info` so we can extend it with other
parameters if needed in the future.
example output:
# HELP haproxy_process_build_info HAProxy build info.
# TYPE haproxy_process_build_info gauge
haproxy_process_build_info{version="2.4-dev5-2e1a3f-5"} 1
Even though it is not a bugfix, this patch will make more sense when
backported up to >= 2.0
Signed-off-by: William Dauchy <wdauchy@gmail.com>
During a configuration check valgrind reports:
==14425== 0 bytes in 106 blocks are definitely lost in loss record 1 of 107
==14425== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==14425== by 0x443CFC: hlua_alloc (hlua.c:8662)
==14425== by 0x5F72B11: luaM_realloc_ (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F78089: luaH_free (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F707D3: sweeplist (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F710D0: luaC_freeallobjects (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x5F7715D: close_state (in /usr/lib/x86_64-linux-gnu/liblua5.3.so.0.0.0)
==14425== by 0x443D4C: hlua_deinit (hlua.c:9302)
==14425== by 0x543F88: deinit (haproxy.c:2742)
==14425== by 0x5448E7: deinit_and_exit (haproxy.c:2830)
==14425== by 0x5455D9: init (haproxy.c:2044)
This is due to Lua calling `hlua_alloc()` with `ptr = NULL` and `nsize = 0`.
While `realloc` is supposed to be equivalent to `free()` if the size is `0`,
this is only required for a non-NULL pointer. Apparently my allocator (or
valgrind) actually allocates a zero-size area if the pointer is NULL, possibly
taking up some memory for management structures.
Fix this leak by specifically handling the case where both the pointer and the
size are `0`.
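A simplified sketch of the allocator with the new early return (the real
hlua_alloc() also performs memory accounting):

    static void *hlua_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
    {
        (void)ud; (void)osize;  /* accounting omitted in this sketch */

        /* Lua may call us with ptr == NULL and nsize == 0; realloc() is only
         * required to behave like free() for a non-NULL pointer, so return
         * early instead of possibly allocating a zero-sized block. */
        if (!ptr && !nsize)
            return NULL;

        if (!nsize) {
            free(ptr);
            return NULL;
        }
        return realloc(ptr, nsize);
    }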
This bug appears to have been introduced with the introduction of the
multi-threaded Lua, thus this fix is specific for 2.4. No backport needed.
It can sometimes be a bit confusing to have tests which are known to
be broken executed by the default `make reg-tests` command, especially
for infrequent contributors who are not necessarily aware of all our
quirks.
without this patch, this test is failing on my side:
# top TEST reg-tests/seamless-reload/abns_socket.vtc FAILED (2.228) exit=2
1 tests failed, 0 tests skipped, 107 tests passed
Signed-off-by: William Dauchy <wdauchy@gmail.com>