With the previous connection model, when we purposely decided to stop
receiving in order to avoid polling (for example after a complete request
was received), we had to set SI_FL_WAIT_ROOM to prevent receive polling
from being re-armed. With the new subscription-based model there is no
such mechanism anymore, and no one clears this flag either. Thus if a
request takes more than one packet to come in or spans too many packets,
this flag will make it wait forever. Let's simply remove this flag now.
This patch should not be backported since older versions still rely on
this flag being set here to stop receiving.
Some samples representing time can cover more than one sample period at
once when they are themselves expressed in units of time per time. For
this we'd need the ability to loop over swrate_add() multiple times, but
that would be inefficient. By expanding the function raised to the power
N, it becomes visible that some coefficients quickly vanish and that the
ones remaining at the first order more or less compensate each other.
Thus a simplified version of this function was added to apply a given
number of samples in a single call. Tests with multiple values, window
sizes and sample counts have shown that it is possible to make it remain
surprisingly accurate (typical error < 0.2% over various large window
and sample sizes, even with samples representing up to 1/4 of the
window).
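For illustration, here is a minimal sketch of such a scaled update, under
the assumption that the basic update is sum = sum - sum/n + v; the
function name and rounding details here are approximations, not the
actual implementation:

    /* Sketch: one call approximating <s> successive swrate_add(sum, n, v)
     * calls over a window of <n> samples. Expanding the exact s-step
     * result and dropping the 1/n^2 terms (negligible when n >> s)
     * leaves only the first-order coefficients. */
    static inline unsigned int swrate_add_scaled_sketch(unsigned int *sum,
                                                        unsigned int n,
                                                        unsigned int v,
                                                        unsigned int s)
    {
        *sum = *sum + v * s
             - (unsigned int)((unsigned long long)(*sum + n) * s / n);
        return *sum;
    }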
Released version 1.9-dev4 with the following main changes :
- BUILD: Allow configuration of pcre-config path
- DOC: clarify force-private-cache is an option
- BUG/MINOR: connection: avoid null pointer dereference in send-proxy-v2
- REORG: http: move the code to different files
- REORG: http: move HTTP rules parsing to http_rules.c
- CLEANUP: http: remove some leftovers from recent cleanups
- BUILD: Makefile: add a "make opts" target to simply show the build options
- BUILD: Makefile: speed up compiler options detection
- BUG/MINOR: backend: check that the mux installed properly
- BUG/MEDIUM: h2: check that the connection is still valid at the end of init()
- BUG/MEDIUM: h2: make h2_stream_new() return an error on memory allocation failure
- REGTEST/MINOR: compatibility: use unix@ instead of abns@ sockets
- MINOR: ssl: cleanup old openssl API call
- MINOR: ssl: generate-certificates for BoringSSL
- BUG/MEDIUM: buffers: Make sure we don't wrap in ci_insert_line2/b_rep_blk.
- MEDIUM: ssl: add support for ciphersuites option for TLSv1.3
- CLEANUP: haproxy: Remove unused variable
- CLEANUP: h1: Fix debug warnings for h1 headers
- CLEANUP: stick-tables: Remove unneeded double (()) around conditional clause
- MEDIUM: task: perform a single tree lookup per run queue batch
- BUG/MEDIUM: Cur/CumSslConns counters not threadsafe.
- BUG/MINOR: threads: move declaration of capabilities to config.h
- OPTIM: tools: optimize my_ffsl() for x86_64
- BUG/MINOR: h2: null-deref
- BUG/MINOR: checks: queues null-deref
- MINOR: connections: Introduce an unsubscribe method.
- MEDIUM: connections: Change struct wait_list to wait_event.
- BUG/MEDIUM: h2: Make sure we're not in the send list on flow control.
- BUG/MEDIUM: mworker: segfault receiving SIGUSR1 followed by SIGTERM.
- BUG/MEDIUM: stream: Make sure to unsubscribe before si_release_endpoint.
- MINOR: http: Move comment about some HTTP macros in the right header file
- MINOR: stats: Add missing include
- MINOR: http: Export some functions and do cleanup to prepare HTTP refactoring
- MEDIUM: http: Ignore http-pretend-keepalive option on frontend
- MEDIUM: http: Ignore http-tunnel option on backend
- MINOR: http: Use same flag for httpclose and forceclose options
- MINOR: h1: Add EOH marker during headers parsing
- MINOR: conn-stream: Add CL_FL_NOT_FIRST flag
- MINOR: h1: Change the union h1_sl to use indirect strings to store infos
- MINOR: h1: Add the flag H1_MF_NO_PHDR to not add pseudo-headers during parsing
- MINOR: log: make sess_log() support sess=NULL
- MINOR: chunk: add chunk_cpy() and chunk_cat()
- MEDIUM: h2: stop relying on H2_SS_IDLE / H2_SS_CLOSED
- CLEANUP: h2: rename h2c_snd_settings() to h2c_send_settings()
- MINOR: h2: don't try to send data before preface
- MINOR: h2: unify the mux init function
- MINOR: h2: retrieve the front proxy from the caller instead of the session
- MINOR: h2: split h2c_stream_new() into h2s_new() + h2c_frt_stream_new()
- MINOR: h2: add a new flag to quickly distinguish front vs back connection
- BUG/MEDIUM: mworker: don't poll on LI_O_INHERITED listeners
- BUG/MEDIUM: stream: don't crash on out-of-memory
- BUILD: compiler: add a new statement "__unreachable()"
- BUILD: lua: silence some compiler warnings about potential null derefs
- BUILD: ssl: fix null-deref warning in ssl_fc_cipherlist_str sample fetch
- BUILD: ssl: fix another null-deref warning in ssl_sock_switchctx_cbk()
- BUILD: stick-table: make sure not to fail on task_new() during initialization
- BUILD: peers: check allocation error during peers_init_sync()
- MINOR: tools: add a new function atleast2() to test masks for more than 1 bit
- MINOR: config: use atleast2() instead of my_popcountl() where relevant
- MEDIUM: fd/threads: only grab the fd's lock if the FD has more than one thread
- MAJOR: tasks: create per-thread wait queues
- OPTIM: tasks: group all tree roots per cache line
- DOC: Fix a few typos
- MINOR: pools: allocate most memory pools from an array
- MINOR: pools: split pool_free() in the lockfree variant
- MEDIUM: pools: implement a thread-local cache for pool entries
- BUG/MEDIUM: threads: fix thread_release() at the end of the rendez-vous point
- Revert "BUILD: lua: silence some compiler warnings about potential null derefs"
- BUILD: lua: silence some compiler warnings about potential null derefs (#2)
- MINOR: lua: all functions calling lua_yieldk() may return
- BUILD: lua: silence some compiler warnings after WILL_LJMP
- BUILD: Makefile: silence an option conflict warning with clang
- MINOR: server: Use memcpy() instead of strncpy().
- CLEANUP: state-file: make the path concatenation code a bit more consistent
- MINOR: build: Disable -Wstringop-overflow.
    - MINOR: cfgparse: Write 130 and 128 as 0x82 and 0x80.
- MINOR: peers: use defines instead of enums to appease clang.
- DOC: fix reference to map files in MAINTAINERS
- MINOR: fd: centralize poll timeout computation in compute_poll_timeout()
- MINOR: poller: move time and date computation out of the pollers
- BUILD: memory: fix pointer declaration for atomic CAS
- BUILD: Makefile: add USE_RT to pass -lrt for clock_gettime() and friends
- MINOR: time: add now_mono_time() and now_cpu_time()
- MEDIUM: time: measure the time stolen by other threads
- BUILD: memory: fix free_list pointer declaration again for atomic CAS
- BUILD: compiler: rename __unreachable() to my_unreachable()
- BUG/MEDIUM: pools: Fix the usage of mmap()) with DEBUG_UAF.
- BUILD: memory: fix free_list pointer declaration again for atomic CAS
    - BUG/MEDIUM: h2: Close connection if no stream is left and a GOAWAY was sent.
- BUG/MEDIUM: connections: Remove subscription if going in idle mode.
- BUG/MEDIUM: stream: Make sure polling is right on retry.
- MINOR: h2: Make sure to return 1 in h2_recv() when needed.
- MEDIUM: connections: Don't directly mess with the polling from the upper layers.
- MINOR: streams: Call tasklet_free() after si_release_endpoint().
- MINOR: connection: Add a SUB_CALL_UNSUBSCRIBE event.
    - MINOR: h2: Don't run tasks that are waiting to send if mux is full.
- MINOR: ebtree: save 8 bytes in struct eb32sc_node
There is a 4-byte hole in this structure after the eb_node, and the
last field is 4 bytes as well, resulting in a total of 64 bytes
including 8 bytes of holes. Just moving the key right after the eb_node
(like in eb32_node) fills the hole and reduces the structure's size by
8 bytes.
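The effect can be reproduced with stand-in fields (illustrative only,
not the real eb32sc_node definition):

    #include <stdio.h>
    #include <stdint.h>

    /* On LP64, 'before' leaves a 4-byte hole in front of 'mask' (which
     * needs 8-byte alignment) plus 4 bytes of tail padding after 'key';
     * moving 'key' up fills the hole and drops the tail padding. */
    struct before {
        void     *branches[2];   /* stand-in for the eb_node part */
        uint32_t  bit;           /* the node part ends on a 4-byte offset */
        /* 4-byte hole */
        uint64_t  mask;
        uint32_t  key;
        /* 4-byte tail padding */
    };

    struct after {
        void     *branches[2];
        uint32_t  bit;
        uint32_t  key;           /* moved next to the node: no hole */
        uint64_t  mask;
    };

    int main(void)
    {
        printf("%zu -> %zu bytes\n",
               sizeof(struct before), sizeof(struct after));
        return 0;
    }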
We wake up all the streams waiting to send data when we have space
available in the mux buffer. Doing so means we probably wake way too
many streams, because after the first few the buffer will likely be full
again. So keep a list of all the streams that are about to send data,
and if we detect that the buffer is full, unschedule their tasks and put
the streams back into the send_list.
Make sure we call tasklet_free() only after si_release_endpoint(), when
the unsubscribe() method has been called, so that we're sure the mux
won't attempt to access the tasklet.
Avoid using conn_xprt_want_send/recv, and totally nuke cs_want_send/recv,
from the upper layers. The polling is now directly handled by the
connection layer: it is activated on subscribe(), and deactivated once we
got the event and woke the related task.
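A minimal sketch of the model, with hypothetical names (the real struct
wait_event and the subscribe() method signatures differ in details):

    struct tasklet;                          /* opaque scheduler handle */
    void tasklet_wakeup(struct tasklet *t);  /* provided by the scheduler */

    enum { SK_EV_RECV = 1, SK_EV_SEND = 2 }; /* hypothetical event bits */

    struct wait_event_sketch {
        struct tasklet *task;  /* woken once the event occurs */
        int events;            /* which events are being waited for */
    };

    struct conn_sketch {
        struct wait_event_sketch *recv_wait; /* armed subscription, if any */
    };

    /* the upper layer calls this instead of enabling polling itself */
    static void conn_subscribe_sketch(struct conn_sketch *c, int event,
                                      struct wait_event_sketch *wev)
    {
        if (event & SK_EV_RECV) {
            c->recv_wait = wev;
            /* the connection layer enables receive polling here */
        }
    }

    /* called by the lower layer when data arrives: wake and disarm */
    static void conn_notify_recv_sketch(struct conn_sketch *c)
    {
        if (c->recv_wait) {
            tasklet_wakeup(c->recv_wait->task);
            c->recv_wait = NULL;
            /* the connection layer disables receive polling here */
        }
    }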
In h2_recv(), return 1 if we have data available, or if h2_recv_allowed()
failed, to be sure h2_process() is called.
Also don't subscribe if our buffer is full.
When retrying to connect to a server because the previous connection
failed, make sure that if we had subscribed to the previous connection,
the polling flags are set correctly on the new fd.
No backport is needed.
Make sure we don't have any subscription when the connection goes into
idle mode, otherwise there's a race condition when the connection is
reused: if old subscriptions remain, new ones won't be made.
No backport is needed.
When we're closing a stream, if there's no stream left and a GOAWAY was
sent, close the connection; there's no reason to keep it open.
[wt: it's likely that this is needed in 1.8 as well, though it's unclear
how to trigger this issue, some tests are needed]
Similarly to what's been done in 7a6ad88b02, take into account that
free_list is a void **, and so use a void ** too when attempting to do
a CAS.
When mapping memory with mmap(), we should use an fd of -1, not 0. 0
may work on Linux, but it doesn't work on FreeBSD, and probably other
OSes.
It would be nice to backport this to 1.8 to help debugging there.
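For reference, a portable anonymous mapping looks like this:

    #include <sys/mman.h>
    #include <stddef.h>

    /* Portable anonymous mapping: fd must be -1 with MAP_ANONYMOUS.
     * Linux happens to ignore the fd, FreeBSD rejects anything else. */
    static void *map_area(size_t size)
    {
        void *ret = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (ret == MAP_FAILED) ? NULL : ret;
    }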
Commit ac6c880 ("BUILD: memory: fix pointer declaration for atomic CAS")
attempted to fix a build warning affecting the lock-free version of the
pool allocator. But the fix tried to hide the cause instead of addressing
it, thus clang still complains about (void **) not matching (void ***).
The real solution is to declare free_list as (void **) and not to use a
cast. Now this builds fine with gcc/clang, with and without threads.
No backport is needed.
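A simplified sketch of the resulting pattern, using the gcc builtin
directly instead of HA_ATOMIC_CAS():

    #include <stdbool.h>

    /* free_list is declared void **: it points to the first free object,
     * whose first word links to the next one. With this declaration the
     * CAS operates on matching types without any cast. */
    struct pool_sketch {
        void **free_list;
    };

    static void *pool_pop_sketch(struct pool_sketch *pool)
    {
        void **cmp = pool->free_list;
        void **next;

        do {
            if (!cmp)
                return NULL;
            next = (void **)*cmp;    /* link to the next free object */
        } while (!__atomic_compare_exchange_n(&pool->free_list, &cmp, next,
                                              false, __ATOMIC_SEQ_CST,
                                              __ATOMIC_SEQ_CST));
        return cmp;
    }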
The purpose is to detect if threads or processes are competing for the
same CPU. This can happen when threads are incorrectly bound, or after a
reload if the previous process still has significant activity. With
threads this situation is problematic because a preempted thread holding
a lock will block other ones waiting for this lock to be released.
A first attempt consisted in measuring the cumulated lost time more
precisely, but the system's scheduler is smart enough to limit the
thread preemption rate by mostly context-switching during poll()'s blank
periods, so most of the lost time goes unseen. In essence this is good
because it means a thread is not preempted while holding a lock, and even
regarding the rendez-vous point it cannot prevent the other ones from
making progress. But it still happens tens to hundreds of times per
second that a thread gets preempted, so it's possible to detect that the
situation is happening, and it's interesting to measure and report its
frequency.
Each time we enter the poller, we check the CPU time spent working and
see if we've lost time doing something else. To limit false positives,
we're only interested in losses of 500 microseconds or more (i.e. half
a clock tick on a 1 kHz system). If so, it indicates that some time was
stolen by another thread or process. Note that we purposely store some
sub-millisecond counters so that under heavy traffic with a 1 kHz clock,
it's still possible to measure something without being subject to the
risk of rounding errors (i.e. if exactly 1 ms is stolen it's possible
that the time difference could often be slightly lower).
This counter of lost CPU time slots is reported in "show activity" as a
number of milliseconds of CPU lost per second, per 15s, and in total
over the process' life. By definition, the per-second counter cannot
report values larger than 1000 per thread per second and the 15s one
will be limited to 15000/s in the worst case, but it's possible that
peak values exceed such thresholds after long pauses.
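For illustration, the check boils down to the following sketch
(measure_stolen_ns() and process_events() are hypothetical names; the
real code hooks into tv_entering_poll()/tv_leaving_poll(), and the two
clock helpers are described in the next entry):

    #include <stdint.h>

    uint64_t now_mono_time(void);  /* monotonic clock, ns */
    uint64_t now_cpu_time(void);   /* per-thread CPU clock, ns */

    /* Sketch: any wall-clock time elapsed while working that the thread's
     * CPU clock did not see was spent running someone else. */
    static uint64_t measure_stolen_ns(void (*process_events)(void))
    {
        uint64_t mono_start = now_mono_time(); /* right after leaving poll() */
        uint64_t cpu_start  = now_cpu_time();

        process_events();                      /* the thread's useful work */

        uint64_t stolen = (now_mono_time() - mono_start)
                        - (now_cpu_time() - cpu_start);
        return (stolen >= 500000ULL) ? stolen : 0; /* ignore < 500 us */
    }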
These two functions retrieve respectively the monotonic clock time and
the per-thread CPU time when available on the platform, or return zero.
These syscalls may require linking with -lrt on certain libcs; this is
enabled in the Makefile with USE_RT=1 (the default on Linux systems).
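Their implementation essentially wraps clock_gettime() when the platform
advertises the relevant clocks (sketch; details may differ):

    #include <stdint.h>
    #include <time.h>
    #include <unistd.h>

    /* monotonic wall-clock time in nanoseconds, or 0 if unsupported */
    uint64_t now_mono_time(void)
    {
    #if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) && defined(_POSIX_MONOTONIC_CLOCK)
        struct timespec ts;
        if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0)
            return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    #endif
        return 0;
    }

    /* CPU time consumed by the calling thread in nanoseconds, or 0 */
    uint64_t now_cpu_time(void)
    {
    #if defined(_POSIX_TIMERS) && (_POSIX_TIMERS > 0) && defined(_POSIX_THREAD_CPUTIME)
        struct timespec ts;
        if (clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts) == 0)
            return ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    #endif
        return 0;
    }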
Some code will require clock_gettime() which needs -lrt on most Linux
distros (those with glibc < 2.17). For this reason, this patch introduces
USE_RT to enable -lrt, which is implicitly set for all Linux flavors,
since it's harmless to link with it on more recent ones. Those who know
they can safely get rid of -lrt can remove it using "USE_RT=".
The calls to HA_ATOMIC_CAS() on the lockfree version of the pool allocator
were mistakenly done on (void*) for the old value instead of (void **).
While this has no impact on "recent" gcc, it does have one for gcc < 4.7
since the CAS was open coded and it's not possible to assign a temporary
variable of type "void".
No backport is needed, this only affects 1.9.
By placing this code into time.h (tv_entering_poll() and tv_leaving_poll())
we can remove the logic from the pollers and prepare for extending this to
offer more accurate time measurements.
The 4 pollers all contain the same code used to compute the poll timeout.
This is pointless, let's centralize this into fd.h. This also gets rid of
the useless SCHEDULER_RESOLUTION macro which used to work around a very
old Linux 2.2 bug causing select() to wake up slightly before the
timeout.
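The centralized computation is essentially the following (a sketch
expressed directly in milliseconds; the real function works on the
internal tick representation):

    /* Sketch: turn the next expiration date into a bounded poll timeout.
     * The cap keeps the poller waking up periodically. */
    #define MAX_DELAY_MS 60000

    static inline int compute_poll_timeout_sketch(int next_expiry_ms,
                                                  int now_ms)
    {
        int wait;

        if (next_expiry_ms < 0)       /* no timer pending: sleep the cap */
            return MAX_DELAY_MS;

        wait = next_expiry_ms - now_ms;
        if (wait < 0)                 /* already expired: don't sleep */
            return 0;
        if (wait > MAX_DELAY_MS)
            return MAX_DELAY_MS;
        return wait;
    }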
There are as many ways to build the globalfilepathlen variable as there
are branches in the if/then/else, creating lots of confusion. Address
the most obvious parts, but some polishing is definitely still needed.
clang complains that -fno-strict-overflow is not used when -fwrapv is
used, which breaks the build when -Werror is used. Let's introduce a
cc-opt-alt function to emit the former only when the latter is not
supported (since it implies the former).
These ones are on error paths that are properly handled by luaL_error()
which does a longjmp() but the compiler cannot know it. By adding an
__unreachable() statement in WILL_LJMP(), there is no ambiguity anymore.
This may be backported to 1.8 but these previous patches are needed first :
- BUILD: compiler: add a new statement "__unreachable()"
- MINOR: lua: all functions calling lua_yieldk() may return
- BUILD: lua: silence some compiler warnings about potential null derefs (#2)
There was a mistake when tagging functions which always use longjmp and
those which may use it, in that all those supposed to call lua_yieldk()
may return without calling longjmp. Thus they must not use WILL_LJMP()
but MAY_LJMP(). It has zero impact on the emitted code as such, but it
prevents other fixes from being properly implemented: this was the
cause of the previous failure with the __unreachable() calls.
This may be backported to older versions. It may or may not apply
well depending on the context, though the change simply consists in
replacing "WILL_LJMP(hlua_yieldk" with "MAY_LJMP(hlua_yieldk", and
same with the single call to lua_yieldk() in hlua_yieldk().
Here we make sure that appctx is always taken from the unchecked value
since we know it's an appctx, which explains why it's immediately
dereferenced. A missing test was added to ensure that task_new() does
not return NULL.
This may be backported to 1.8.
This reverts commit f1ffb39b61.
It breaks Lua, causing some timeouts. Removing the __unreachable()
statement from WILL_LJMP() fixes it. It's very strange and unclear
whether it's an issue with WILL_LJMP() not fulfilling its promise of not
returning, with the code emitted around __unreachable() being broken, or
anything else. Let's revert this for now.
There is a bug in this function used to release other threads. It leaves
the current thread marked as harmless. If after this another thread does
a thread_isolate(), but before the first one reaches poll(), the second
thread will believe it's alone while it's not.
This must be backported to 1.8 since the rendez-vous point was merged
into 1.8.14.
Each thread now keeps the last ~512 kB of freed objects into a local
cache. There are some heuristics involved so that a specific pool cannot
use more than 1/8 of the total cache in number of objects. Tests have
shown that 512 kB is an optimal size on a 24-thread test running on a
dual-socket machine, resulting in an overall 7.5% performance increase
and the cache miss ratio dropping from 19.2% to 17.7%. Anyway it seems
pointless to cache more than an L2 cache's worth of data, which probably
explains why sizes between 256 and 512 kB are optimal.
Cached objects appear in two lists, one per pool and one LRU to help
with fair eviction. Currently there is no way to check each thread's
cache state nor to flush it. This cache cannot be disabled and is
enabled as soon as the lockless pools are enabled (i.e.: threads are
enabled, no pool debugging is in use and the CPU supports a double word
CAS).
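A much simplified sketch of the mechanism (hypothetical names; the real
code keeps one such head per pool and per thread, plus a per-thread LRU
across pools, and caps each pool at 1/8 of the object budget):

    #include <stddef.h>

    #define POOL_CACHE_MAX_BYTES (512 * 1024)

    /* Freed objects are chained through their first word and handed
     * back before touching the shared lock-free list. */
    struct pool_cache_sketch {
        void *head;           /* freed objects, linked by their first word */
        size_t obj_size;      /* object size of the owning pool */
        size_t cached_bytes;  /* what this thread currently holds */
    };

    static void *pool_get_cached(struct pool_cache_sketch *pc)
    {
        void *obj = pc->head;

        if (obj) {
            pc->head = *(void **)obj;
            pc->cached_bytes -= pc->obj_size;
        }
        return obj;    /* NULL means fall back to the shared pool */
    }

    static int pool_put_cached(struct pool_cache_sketch *pc, void *obj)
    {
        if (pc->cached_bytes + pc->obj_size > POOL_CACHE_MAX_BYTES)
            return 0;  /* cache full: caller frees to the shared pool */
        *(void **)obj = pc->head;
        pc->head = obj;
        pc->cached_bytes += pc->obj_size;
        return 1;
    }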
For caching it will be convenient to have indexes associated with pools,
without having to dereference the pool itself. One solution could consist
in replacing all pool pointers with integers but this would limit the
number of allocatable pools. Instead, here we allocate the first 32 pools
from a pre-allocated array whose base address is known so that it's trivial
to convert a pool to an index in this array. Pools that cannot fit there
will be allocated normally.
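A sketch of the pointer-to-index conversion this enables (names
approximate):

    #include <stddef.h>

    struct pool_head { size_t size; /* ... other fields ... */ };

    /* The first pools are carved from a static array so that converting
     * a pool pointer into a small index is plain pointer arithmetic. */
    #define MAX_BASE_POOLS 32

    static struct pool_head pool_base[MAX_BASE_POOLS];

    static inline long pool_get_index_sketch(const struct pool_head *pool)
    {
        long idx = pool - pool_base;

        if (idx < 0 || idx >= MAX_BASE_POOLS)
            return -1;   /* dynamically allocated pool: no index */
        return idx;
    }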
Currently we have per-thread arrays of trees and counts, but these
ones unfortunately share cache lines and are accessed very often. This
patch moves the task-specific stuff into a structure taking a multiple
of a cache line, and has one such per thread. Just doing this has
reduced the cache miss ratio from 19.2% to 18.7% and increased the
12-thread test performance by 3%.
It starts to become visible that we really need a process-wide per-thread
storage area that would cover more than just these parts of the tasks.
The code was arranged so that it's easy to move the pieces elsewhere if
needed.
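As an illustration, the grouping looks like this (sketch with
approximate field names):

    /* Everything a thread's scheduler touches often lives in one
     * cache-line-aligned block, so threads no longer false-share each
     * other's counters and tree roots. */
    struct eb_root_sketch { void *b[2]; };

    struct task_per_thread_sketch {
        struct eb_root_sketch timers;   /* this thread's wait queue */
        struct eb_root_sketch rqueue;   /* this thread's run queue  */
        unsigned int rqueue_size;       /* local run queue count    */
        /* ... */
    } __attribute__((aligned(64)));     /* multiple of a cache line */

    #define MAX_THREADS_SKETCH 64
    static struct task_per_thread_sketch task_per_thread[MAX_THREADS_SKETCH];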
Now we still have a main contention point with the timers in the main
wait queue, but the vast majority of the tasks are pinned to a single
thread. This patch creates a per-thread wait queue and queues a task
to the local wait queue without any locking if the task is bound to a
single thread (the current one) otherwise to the shared queue using
locking. This significantly reduces contention on the wait queue. A
test with 12 threads showed 11 ms spent in the WQ lock compared to
4.7 seconds in the same test without this change. The cache miss ratio
decreased from 19.7% to 19.2% on the 12-thread test, and its performance
increased by 1.5%.
Another indirect benefit is that the average queue size is divided
by the number of threads, which roughly removes log(nbthreads) levels
in the tree and further speeds up lookups.
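A sketch of the queueing decision, with hypothetical helper names:

    struct task_sketch { unsigned long thread_mask; /* ... */ };

    extern unsigned long tid_bit;                 /* 1UL << current thread id */
    void wq_insert_local(struct task_sketch *t);  /* hypothetical helpers */
    void wq_insert_shared_locked(struct task_sketch *t);

    /* A task bound to exactly the current thread goes to the local wait
     * queue with no locking at all; anything else takes the shared queue
     * under the WQ lock. */
    static void task_queue_sketch(struct task_sketch *t)
    {
        if (t->thread_mask == tid_bit)
            wq_insert_local(t);            /* per-thread tree, lockless */
        else
            wq_insert_shared_locked(t);    /* shared tree, lock held inside */
    }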
The vast majority of FDs are only seen by one thread. Currently the lock
on FDs costs a lot because it's touched often, though there should be very
little contention. This patch ensures that the lock is only grabbed if the
FD is shared by more than one thread, since otherwise the situation is safe.
Doing so resulted in a 15% performance boost on a 12-thread test.
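Here is a sketch of the pattern, together with the atleast2() helper
mentioned earlier (fd_lock()/fd_unlock() are hypothetical stand-ins for
the real lock macros):

    /* true when more than one bit is set in the mask, i.e. the FD is
     * shared by several threads */
    static inline int atleast2(unsigned long mask)
    {
        return (mask & (mask - 1)) != 0;
    }

    struct fdtab_entry_sketch { unsigned long thread_mask; /* lock, state... */ };

    void fd_lock(struct fdtab_entry_sketch *fd);
    void fd_unlock(struct fdtab_entry_sketch *fd);

    static void fd_update_sketch(struct fdtab_entry_sketch *fd)
    {
        int shared = atleast2(fd->thread_mask);

        if (shared)
            fd_lock(fd);
        /* ... update the FD state: safe without the lock when a single
         * thread may ever touch this FD ... */
        if (shared)
            fd_unlock(fd);
    }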
peers_init_sync() doesn't check task_new()'s return value and doesn't
return any result to indicate success or failure. Let's make it return
an int and check it from the caller.
This can be backported as far as 1.6.
Gcc reports a potential null-deref error in the stick-table init code.
While not critical there, it's trivial to fix. This check has been
missing since 1.4, so this fix can be backported to all supported
versions.
This null-deref cannot happen either, as there is necessarily a
listener where this function is called. Let's use __objt_listener() to
address this.
This may be backported to 1.8.