The CLOEXEC flag was set using F_SETFL, which can't work.
To set the CLOEXEC flag, F_SETFD must be used; the problem is that this
requires an extra fcntl() call on the path of every accept.
This flag was only needed for the master, so the patch was reverted and
the flag is now set only in that case.
The bug was introduced by 0b3e849 ("MEDIUM: listeners: set O_CLOEXEC on
the accepted FDs").
No backport needed.
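As a reminder of the distinction involved here, below is a minimal sketch
(not the HAProxy code itself) of setting close-on-exec the correct way:

    #include <fcntl.h>

    /* FD_CLOEXEC is a file descriptor flag, so it must be set with F_SETFD;
     * F_SETFL only manipulates file status flags such as O_NONBLOCK. */
    static int set_cloexec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);

        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
    }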
If the master was reloaded and there was an established connection to a
server, the FD resulting from the accept was leaked.
There was no CLOEXEC flag set on the FD of the socketpair created during
a connect call. This is specific to the socketpair in the master process
but it should be applied to every protocol in case we use them in the
master at some point.
No backport needed.
The previous fix da95fd90 ("BUILD/MINOR: ssl: fix build with non-alpn/
non-npn libssl") does fix the build with old OpenSSL releases, but I
overlooked that the ctx is only freed when NPN is supported.
Fix this by moving the #endif to the proper place (this was broken in
c7566001 ("MINOR: server: Add "alpn" and "npn" keywords")).
Having a thread_local for the pool cache is messy: all elements need to
be initialized at startup, which cannot be done before the threads are
created, and once they are created it is too late. For this reason, the
allocation code used to check for the pool's initialization, and it was
the release code which detected the first call and initialized the cache
on the fly, which is not exactly optimal.
Now that we have initcalls, let's turn this into a per-thread array.
This array is initialized very early in the boot process (STG_PREPARE)
so that pools are always safe to use. This makes it possible to remove
the tests from the alloc/free calls.
Doing just this has removed 2.5 kB of code on all cumulated pool_alloc()
and pool_free() paths.
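A minimal sketch of the idea, assuming HAProxy's list helpers and the
INITCALL0() macro described further below; the structure fields and
constant names are illustrative rather than the exact implementation:

    struct pool_cache_head {
        struct list list;    /* head of cached objects for this pool */
        size_t size;         /* size of one object */
        unsigned int count;  /* number of cached objects */
    };

    /* one cache per thread and per base pool, preset before threads exist */
    static struct pool_cache_head pool_cache[MAX_THREADS][MAX_BASE_POOLS];

    static void init_pools()
    {
        int thr, idx;

        for (thr = 0; thr < MAX_THREADS; thr++)
            for (idx = 0; idx < MAX_BASE_POOLS; idx++)
                LIST_INIT(&pool_cache[thr][idx].list);
    }
    INITCALL0(STG_PREPARE, init_pools);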
signal_init(), init_log(), init_stream(), and init_task() all used to
only preset some values and lists. This needs to be done very early to
provide a reliable interface to all other users. The calls used to be
explicit in haproxy.c:init(). Now they're placed in initcalls at the
STG_PREPARE stage. The functions are not exported anymore.
Instead of exporting a number of pools and having to manually delete
them in deinit() or to have dedicated destructors to remove them, let's
simply kill all pools on deinit().
For this a new function pool_destroy_all() was introduced. As its name
implies, it destroys and frees all pools (provided they don't have any
user anymore of course).
This made it possible to remove 4 implicit destructors, 2 explicit ones,
and 11 individual calls to pool_destroy(). In addition it properly removes
the mux_pt_ctx pool which was not cleared on exit (no backport needed
here since it's 1.9 only). The sig_handler pool doesn't need to be
exported anymore and is now static.
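For reference, a sketch of what pool_destroy_all() can look like, assuming
the global "pools" list and HAProxy's safe list iterator (not necessarily
the exact code):

    void pool_destroy_all()
    {
        struct pool_head *entry, *back;

        list_for_each_entry_safe(entry, back, &pools, list)
            pool_destroy(entry);
    }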
This commit replaces the explicit pool creations that are made in
constructors with a pool registration. Not only does this simplify the
pools declaration (it can be done on a single line after the head is
declared), it also removes references to pools from within
constructors. The only remaining create_pool() calls are those
performed in init functions after the config is parsed, so there
are no more users of potentially uninitialized pools now.
It was also the opportunity to remove no fewer than 12 constructors
and 6 init functions.
The new function create_pool_callback() takes 3 args including the
return pointer, and creates a pool with the specified name and size.
In case of allocation error, it emits an error message and returns.
The new macro REGISTER_POOL() registers a callback using this function
and can be used to request the creation of some pools while guaranteeing
that the allocation is checked. An even simpler approach is to use
DECLARE_POOL() and DECLARE_STATIC_POOL() which declare and register
the pool.
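For illustration, a hedged usage sketch based on the description above,
assuming the arguments are the pool pointer, the pool name and the object
size ("foobar" and struct foobar are made up):

    /* declares pool_head_foobar and registers its creation for the
     * pool init stage */
    DECLARE_STATIC_POOL(pool_head_foobar, "foobar", sizeof(struct foobar));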
Most calls to hap_register_post_check(), hap_register_post_deinit(),
hap_register_per_thread_init(), hap_register_per_thread_deinit() can
be done using initcalls and will not require a constructor anymore.
Let's create a set of simplified macros for this, called respectively
REGISTER_POST_CHECK, REGISTER_POST_DEINIT, REGISTER_PER_THREAD_INIT,
and REGISTER_PER_THREAD_DEINIT.
Some files were not modified because they wouldn't benefit from this
or because they conditionally register (e.g. the pollers).
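A short sketch of what such a conversion looks like; the callback names
are hypothetical:

    /* before: a constructor existed only to register the callbacks */
    __attribute__((constructor))
    static void __foo_init(void)
    {
        hap_register_post_check(foo_post_check);
        hap_register_post_deinit(foo_deinit);
    }

    /* after: one-line registrations handled by initcalls */
    REGISTER_POST_CHECK(foo_post_check);
    REGISTER_POST_DEINIT(foo_deinit);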
Most register_build_opts() calls use static strings. These were replaced
with a trivial REGISTER_BUILD_OPTS() statement adding the string
and its call to the STG_REGISTER section. A dedicated section could be
made for this if needed, but there are very few such calls for this to
be worth it. The calls made with computed strings however, like those
which retrieve OpenSSL's version or zlib's version, were moved to a
dedicated function to guarantee they are called late in the process.
For example, the SSL call probably requires that SSL_library_init()
has been called first.
This patch replaces a number of __decl_hathread() followed by HA_SPIN_INIT
or HA_RWLOCK_INIT by the new __decl_spinlock() or __decl_rwlock() which
automatically registers the lock for initialization during the STG_LOCK
init stage. A few static modifiers were lost in the process, but since they
were not essential at all it was not worth extending the API to provide such
a variant.
Using __decl_spinlock(), __decl_rwlock(), __decl_aligned_spinlock()
and __decl_aligned_rwlock(), one can now simply declare a spinlock
or an rwlock which will automatically be initialized at boot time
by calling the ha_spin_init() or ha_rwlock_init() callback. The
"aligned" variants enforce a 64-byte alignment on the lock.
This patch adds ha_spin_init() and ha_rwlock_init() which are used as
a callback to initialise locks at boot time. They perform exactly the
same operation as HA_SPIN_INIT() or HA_RWLOCK_INIT(), but from within a
real function.
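Roughly speaking (this is a sketch, not necessarily the exact code), the
callback is a thin wrapper around the macro, and the declaration macro
pairs the lock with an initcall; the lock name below is made up:

    /* same effect as HA_SPIN_INIT(), but callable through a function pointer */
    void ha_spin_init(HA_SPINLOCK_T *l)
    {
        HA_SPIN_INIT(l);
    }

    /* declares the lock and registers ha_spin_init(&foo_lock) for the
     * STG_LOCK init stage */
    __decl_spinlock(foo_lock);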
This switches explicit calls to various trivial registration methods for
keywords, muxes or protocols from constructors to INITCALL1 at stage
STG_REGISTER. All these calls have in common that they consume a single
pointer and return void. Doing this removes 26 constructors. The
following calls were addressed (a usage sketch follows the list) :
- acl_register_keywords
- bind_register_keywords
- cfg_register_keywords
- cli_register_kw
- flt_register_keywords
- http_req_keywords_register
- http_res_keywords_register
- protocol_register
- register_mux_proto
- sample_register_convs
- sample_register_fetches
- srv_register_keywords
- tcp_req_conn_keywords_register
- tcp_req_cont_keywords_register
- tcp_req_sess_keywords_register
- tcp_res_cont_keywords_register
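For example, a keyword list that used to be registered from a constructor
is now registered with a single line next to its declaration (cfg_kws
being the module's keyword list):

    INITCALL1(STG_REGISTER, cfg_register_keywords, &cfg_kws);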
We currently have to deal with multiple initialization stages in a way
that can be confusing, because certain parts rely on others having been
properly initialized. Most calls consist in adding lists to existing
lists, whose heads are initialized in the declaration so this is easy.
But some calls create new pools and require pools to be properly
initialized. Pools currently are thread-local and as such cannot be
pre-initialized, requiring run-time checks.
All this could be simplified by using multiple boot stages and allowing
functions to be registered at various stages.
One approach might be to use gcc's constructor priorities, but this
requires gcc >= 4.3 which eliminates a wide spectrum of working compilers,
and some versions of certain compilers (like clang 3.0) are known to
silently ignore these priorities.
Instead we can use our own init function registration mechanism. A first
attempt was made using register_function() calls in all constructors but
this made the code more painful.
This patch's approach is different. It creates sections containing
arrays of pointers to "initcall" descriptors. An initcall contains a
pointer to a function and an argument. Each section corresponds to a
specific initialization stage. Each module creates such descriptors
for various calls it requires. The main() function starts by scanning
each of these sections in turn to process these initcalls.
This will make it possible to remove many constructors from various
modules, by simply placing initcalls for the requested functions next
to the keyword lists that need to be called.
A first attempt was made by placing the initcalls directly into the
sections instead of creating an array of pointers, but it becomes
sensitive to the array's alignment which depends on the compiler and
the linker, so it seems too fragile.
For now we support 6 init stages :
- STG_PREPARE : preset variables, tables and list heads
- STG_LOCK : initialize spinlocks and rwlocks
- STG_ALLOC : allocate the required structures
- STG_POOL : create pools
- STG_REGISTER : register static lists (keywords etc)
- STG_INIT : subsystems normal initialization
These ones are declared directly in the files where they are needed
using one of the INITCALL* macros, passing 0 to 3 pointers as
arguments.
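The sketch below illustrates the mechanism under the description above;
the actual descriptor layout, macro plumbing and section names used in
HAProxy may differ (unique symbol names normally need a line-based
suffix, omitted here for brevity, and the function-pointer cast relies
on the platform's calling convention tolerating unused arguments):

    /* one descriptor per initcall: a function pointer plus up to 3 args */
    struct initcall {
        void (*fn)(void *a1, void *a2, void *a3);
        void *a1, *a2, *a3;
    };

    /* creates a descriptor and stores a pointer to it in a section named
     * after the stage; "used" keeps it even though nothing references it */
    #define INITCALL1(stg, function, arg1)                                  \
        static const struct initcall ic_##stg##_##function = {              \
            .fn = (void (*)(void *, void *, void *))(function),             \
            .a1 = (void *)(arg1),                                           \
        };                                                                  \
        static const struct initcall *ptr_##stg##_##function                \
            __attribute__((section("i_" #stg), used)) =                     \
                &ic_##stg##_##function

    /* GNU ld provides __start_<sect>/__stop_<sect> symbols for each such
     * section; main() walks them stage by stage, e.g. for STG_PREPARE: */
    extern const struct initcall *__start_i_STG_PREPARE[];
    extern const struct initcall *__stop_i_STG_PREPARE[];

    static void run_initcalls(const struct initcall **start,
                              const struct initcall **stop)
    {
        for (; start < stop; start++)
            (*start)->fn((*start)->a1, (*start)->a2, (*start)->a3);
    }

    /* in main(): run_initcalls(__start_i_STG_PREPARE, __stop_i_STG_PREPARE); */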
The API may later need to be extended to support a return value giving a
status to the caller and, in order to support a unified API, possibly a
bit more flexibility in the arguments. In this case it might make sense to
support a set of macros to register functions having a different API
and to pass the function type in the initcall itself.
Special thanks to Olivier for showing how to scan sections as this is
not something particularly well documented and exactly what I've been
missing to achieve this.
Building with musl and gcc-5.3 for MIPS returns this :
include/common/buf.h: In function 'b_dist':
include/common/buf.h:252:2: error: unknown type name 'ssize_t'
ssize_t dist = to - from;
^
Including stdint or stddef is not sufficient there to get ssize_t,
unistd is needed as well. It's likely that other platforms will have
the same issue. This patch also addresses it in ist.h and memory.h.
Building on 32 bits gives this :
include/proto/htx.h: In function 'htx_dump':
include/proto/htx.h:443:25: warning: format '%lu' expects argument of type 'long unsigned int', but argument 8 has type 'uint64_t {aka long long unsigned int}' [-Wformat=]
fprintf(stderr, "htx:%p [ size=%u - data=%u - used=%u - wrap=%s - extra=%lu]\n",
^
In htx_dump(), fprintf() uses %lu but the value is a uint64_t, so it
doesn't match on 32-bit platforms. Let's cast it to unsigned long long
and use %llu instead.
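A standalone illustration of the portable options (the cast is what the
fix uses; PRIu64 from <inttypes.h> is the alternative):

    #include <inttypes.h>
    #include <stdio.h>

    static void print_extra(uint64_t extra)
    {
        /* cast: matches %llu on both 32- and 64-bit platforms */
        printf("extra=%llu\n", (unsigned long long)extra);
        /* alternative: let <inttypes.h> provide the right conversion */
        printf("extra=%" PRIu64 "\n", extra);
    }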
We reintroduced some FD leaks by using a poller and some listeners in
the master.
The master proxy needs to be stopped to avoid leaking its listeners, the
polling loop needs to be deinitialized, and the thread waker pipe needs
to be closed too.
No backport needed.
Surprisingly, the compression pool was created at runtime on first use,
which is not very convenient, has performance and reliability impacts,
and even makes monitoring less easy. Let's move the pool creation to
startup time instead. This even removes the need for the spinlock in
case USE_ZLIB is not defined.
The tune.maxzlibmem setting was moved with commit 368780334 ("MEDIUM:
compression: move the zlib-specific stuff from global.h to compression.c")
but the preset value using DEFAULT_MAXZLIBMEM was incorrectly moved :
- the field is in "global" and not "global.tune"
- the trailing comma instead of semi-colon will make it either zero
(threads enabled), break (threads enabled with debugging), or cast
the memprintf's return pointer to int (threads disabled)
It simply proves that nobody ever used DEFAULT_MAXZLIBMEM since 1.8!
This needs to be backported to 1.8.
Valgrind reports:
==3389== Warning: invalid file descriptor -1 in syscall close()
Check for >= 0 before closing.
This bug was introduced in commit ce83b4a5dd
and is specific to 1.9. No backport needed.
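The guard itself is trivial; a minimal sketch of the pattern:

    #include <unistd.h>

    /* only close valid descriptors; -1 means the fd was never opened */
    static void close_if_valid(int *fd)
    {
        if (*fd >= 0) {
            close(*fd);
            *fd = -1;
        }
    }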
In commit c7566001 ("MINOR: server: Add "alpn" and "npn" keywords") and
commit 201b9f4e ("MAJOR: connections: Defer mux creation for outgoing
connection if alpn is set"), the build was broken on older OpenSSL
releases.
Move the #ifdef's around so that we build again with older OpenSSL
releases (0.9.8 was tested).
Released version 1.9-dev8 with the following main changes :
- REORG: config: extract the global section parser into cfgparse-global
- REORG: config: extract the proxy parser into cfgparse-listen.c
- BUILD: update the list of supported targets and compilers in makefile and readme
- BUILD: reorder the objects in the makefile
- BUILD: Makefile: make "V=1" show some of the commands that are executed
- BUILD: Makefile: add the quiet mode to a few more targets
- BUILD: Makefile: add "$(Q)" to clean, tags and cscope targets
- BUILD: Makefile: switch to quiet mode by default for CC/LD/AR
- MINOR: cli: format `show proc` to be more readable
- MINOR: cli: displays uptime in `show proc`
- MINOR: cli: show master information in 'show proc'
- BUG/MEDIUM: hpack: fix encoding of "accept-ranges" field
- MAJOR: mux-h1: Remove the rxbuf and decode HTTP messages in channel's buffer
- BUG/MINOR: mux-h1: Enable keep-alive on server side
- BUG/MEDIUM: mux-h1: Fix freeze when the kernel splicing is used
- BUG/MEDIUM: mux-h1: Don't set the flag CS_FL_RCV_MORE when nothing was parsed
- BUG/MINOR: stats/htx: Remove channel's output when the request is eaten
- BUG/MINOR: proto_htx: Fix request/response synchronisation on error
- MINOR: stream-int: Notify caller when an error is reported after a rcv_pipe()
- MINOR: stream-int: Notify caller when an error is reported after a rcv_buf()
- BUG/MINOR: stream-int: Don't call snd_buf() if there are still data in the pipe
- MINOR: stream-int: remove useless checks on CS and conn flags in si_cs_send()
- BUG/MINOR: config: Be aware of the HTX during the check of mux protocols
- BUG/MINOR: mux-htx: Fix bad test on h1c flags in h1_recv_allowed()
- MEDIUM: mworker: wait mode use standard init code path
- MINOR: log: introduce ha_notice()
- MINOR: mworker: use ha_notice to announce a new worker
- BUG/MEDIUM: http_fetch: Make sure name is initialized before http_find_header.
- MINOR: cli: add mworker_accept_wrapper to 'show fd'
- MEDIUM: signal: signal_unregister() removes every handlers
- BUG/MEDIUM: mworker: unregister the signals of main()
- MINOR: cli: add a few missing includes in proto/cli.h
- REORG: time/activity: move activity measurements to activity.{c,h}
- MINOR: activity: report the average loop time in "show activity"
- MINOR: activity: add configuration and CLI support for "profiling.tasks"
- MEDIUM: tasks: collect per-task CPU time and latency
- MINOR: sample: add cpu_calls, cpu_ns_avg, cpu_ns_tot, lat_ns_avg, lat_ns_tot
- MINOR: cli/activity: rename the stolen CPU time fields to mention milliseconds
- BUG/MINOR: cli: Fix memory leak
- BUG/MINOR: mworker: fix FD leak and memory leak in error path
- MINOR: poller: move the call of tv_update_date() back to the pollers
- MINOR: polling: add an option to support busy polling
- MINOR: server: Add "alpn" and "npn" keywords.
- MEDIUM: connection: Don't bother reactivating polling after connection retry.
- MAJOR: connections: Defer mux creation for outgoing connection if alpn is set.
- MEDIUM: ssl: Add ssl_bc_alpn and ssl_bc_npn sample fetches.
- MINOR: servers: Free [idle|safe|priv]_conns on exit.
- REGTEST: add the option to test only a specific set of files
- REGTEST: add a test for connections to a "dispatch" address
- BUG/MEDIUM: connections: Don't reset the conn flags in *connect_server().
- MINOR: server: Only defined conn_complete_server if USE_OPENSSL is set.
- BUG/MEDIUM: servers: Don't check if we have a conn_stream too soon.
- BUG/MEDIUM: sessions: Set sess->origin to NULL if the origin was destroyed.
- MEDIUM: servers: Store the connection in the SI until we have a mux.
- BUG/MEDIUM: h2: wake the processing task up after demuxing
- BUG/MEDIUM: h2: restart demuxing after releasing buffer space
Since the connection changes in 1.9, some breakage happened to the H2 mux
whose initial design was heavily relying on the fact that connection-level
functions were woken up after data were transferred to the stream layer.
We need to wake the demux up after receiving such data if the demux is
blocked. This at least makes it possible to receive POSTs again. One issue remains,
it looks like the end of the uploaded data is silently discarded if the
server responds before the end of the transfer (H2 in half-closed(local)
state), which doesn't happen with 1.8.14 and nghttp as the client.
No backport is needed.
After the changes to the connection layer in 1.9, some wake up calls
need to be introduced to re-activate reading from the connection. One
such place is at the end of h2_process_demux(), otherwise processing
of input data stops after a few frames.
No backport is needed.
When we create a connection, if we have to defer the conn_stream and the
mux creation until we can decide on it (i.e. until the SSL handshake is done
and the ALPN is decided), store the connection in the stream_interface, so that
we're sure we can destroy it if needed.
When ending a stream, if the origin is an appctx, the appctx will already
have been destroyed, but not the session. So later, when we try to destroy
the session, we try to dereference sess->origin and die trying.
Fix this by explicitly setting sess->origin to NULL before calling
session_free().
The creation of the conn_stream for an outgoing connection has been delayed
a bit, and when using dispatch, a check was made to see if a conn_stream
was attached before the conn_stream had been created. Remove that test, as
it's done later anyway, and create and install the conn_stream right away
when we don't have a server, as is done when no alpn/npn is defined.
In the various connect_server() functions, don't reset the connection flags,
as some may have been set before. The flags are initialized in conn_init(),
anyway.
It is currently quite difficult to re-run a specific test after an
error occurs. This patch adds a REG_TEST_FILES variable to the makefile,
which will be used to override the find operation. This helps focus
on a specific file, which is essential during a bisect to figure out what
commit introduced a specific regression. Multiple files may be tested;
the return code will indicate the number of failed tests.
If an ALPN (or a NPN) was chosen for a server, defer choosing the mux until
after the SSL handshake is done, and the ALPN/NPN has been negotiated, so
that we know which mux to pick.
As we will now no longer try to subscribe to recv/send events before the
connection is established, there's no need to reactivate polling on the fd
when retrying a connection. It will be activated later on subscribe.
In some situations, especially when dealing with low latency on processors
supporting a variable frequency or when running inside virtual machines,
each time the process waits for an I/O using the poller, the processor
goes back to sleep or is offered to another VM for a long time, and it
causes excessively high latencies.
A solution to this provided by this patch is to enable busy polling using
a global option. When busy polling is enabled, the pollers never sleep and
loop over themselves waiting for an I/O event to happen or for a timeout
to occur. On multi-processor machines it can significantly overheat the
processor but it usually results in much lower latencies.
A typical test consisting in injecting traffic over a single connection at
a time over the loopback shows a bump from 4640 to 8540 connections per
second on forwarded connections, indicating a latency reduction of 98
microseconds for each connection, and a bump from 12500 to 21250 for
locally terminated connections (redirects), indicating a reduction of
33 microseconds.
It is only usable with epoll and kqueue because the select() and poll()
APIs are not convenient for such usage, and the performance levels they
are used at would not benefit from this anyway.
The option, which obviously remains disabled by default, can be turned
on using "busy-polling" in the global section, and turned off later
using "no busy-polling". Its status is reported in "show info" to help
troubleshooting suspicious CPU spikes.
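For example, in the configuration (as described above, this is a global
keyword):

    global
        busy-polling    # turn off later with "no busy-polling"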
Fix some memory leaks and an FD leak in the error path of the master proxy
initialisation. It's a really minor issue since the process is exiting
when taking those error paths.
Valgrind's memcheck reports memory leaks in cli.c, because
the out parameter of memprintf is not properly freed:
==31035== 11 bytes in 1 blocks are definitely lost in loss record 16 of 101
==31035== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==31035== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==31035== by 0x4A3C72: my_realloc2 (standard.h:1364)
==31035== by 0x4A3C72: memvprintf (standard.c:3459)
==31035== by 0x4A3D93: memprintf (standard.c:3482)
==31035== by 0x4AF77E: mworker_cli_sockpair_new (cli.c:2324)
==31035== by 0x48E826: init (haproxy.c:1749)
==31035== by 0x408BBC: main (haproxy.c:2725)
==31035==
==31035== 11 bytes in 1 blocks are definitely lost in loss record 17 of 101
==31035== at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==31035== by 0x4C2FDEF: realloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==31035== by 0x4A3C72: my_realloc2 (standard.h:1364)
==31035== by 0x4A3C72: memvprintf (standard.c:3459)
==31035== by 0x4A3D93: memprintf (standard.c:3482)
==31035== by 0x4AF071: mworker_cli_proxy_create (cli.c:2172)
==31035== by 0x48EC89: init (haproxy.c:1760)
==31035== by 0x408BBC: main (haproxy.c:2725)
These leaks were introduced in commits ce83b4a5dd and 8a02257d88, which
are specific to haproxy 1.9-dev.
The "cpust_{tot,1s,15s}" fields used to report milliseconds but nothing
in the value's title made this explicit. Let's rename the field to report
"cpust_ms_{tot,1s,15s}" to more easily remind that the unit represents
milliseconds.
These sample fetch keywords report performance metrics about the task calling
them. They are useful for reporting in logs which requests consume too much
CPU time and what negative performance impact this has on other requests.
Typically logging cpu_ns_avg and lat_ns_avg will show culprits and victims.
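For instance, a hedged configuration sketch (the log-format line itself is
made up) appending both averages to a custom log line:

    # log the per-task average CPU time and wait time, in nanoseconds
    log-format "%ci:%cp %ft %ST cpu_ns_avg=%[cpu_ns_avg] lat_ns_avg=%[lat_ns_avg]"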
Right now we measure for each task the cumulated time spent waiting for
the CPU and using it. The timestamp uses a 64-bit integer to report a
nanosecond-level date. This is only done when "profiling.tasks" is
enabled, and consumes less than 1% extra CPU on x86_64.
The cumulated processing time and wait time are reported in "show sess".
The task's counters are also reset when an HTTP transaction is reset
since the HTTP part pretends to restart on a fresh new stream. This
will make sure we always report correct numbers for each request in
the logs.