Commit Graph

5281 Commits

Author SHA1 Message Date
Willy Tarreau
ea57a9b103 BUILD: ssl: next round of build warnings on LIBRESSL_VERSION_NUMBER
Other build warnings were emitted on LIBRESSL_VERSION_NUMBER with -Wundef
under openssl < 1.1. Related to GH issue #1369. Seems like some of them
could be simplified a little bit.
2021-08-30 06:20:46 +02:00
Willy Tarreau
a01f8ce2d4 BUILD/MINOR: regex: avoid a build warning on USE_PCRE2 with -Wundef
regex-t emits a warning on #elif USE_PCRE2 when built with -Wundef,
let's just fix it. This was reported in GH issue #1369.
2021-08-28 12:49:58 +02:00
Willy Tarreau
6e5542e9f4 BUILD/MINOR: ssl: avoid a build warning on LIBRESSL_VERSION with -Wundef
Openssl-compat emits a warning for the test on LIBRESSL_VERSION that might
be undefined if built with -Wundef. The fix is easy, let's do it. Related
to GH issue #1369.
2021-08-28 12:06:51 +02:00
Willy Tarreau
33056436c7 BUILD/MINOR: defaults: eliminate warning on MAXHOSTNAMELEN with -Wundef
As reported in GH issue #1369, there is a single case of #if with a
possibly undefined value in defaults.h which is on MAXHOSTNAMELEN. Let's
turn it into an #ifdef.
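
For illustration, the -Wundef-safe form looks roughly like this (the
surrounding macro name and fallback value are only an example, not the
exact defaults.h hunk):

    /* Before: with -Wundef this warns when MAXHOSTNAMELEN is not
     * defined, because the preprocessor silently evaluates it as 0. */
    #if MAXHOSTNAMELEN
    #define MAX_HOSTNAME_LEN  MAXHOSTNAMELEN
    #else
    #define MAX_HOSTNAME_LEN  64
    #endif

    /* After: only test whether the macro is defined at all. */
    #ifdef MAXHOSTNAMELEN
    #define MAX_HOSTNAME_LEN  MAXHOSTNAMELEN
    #else
    #define MAX_HOSTNAME_LEN  64
    #endif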
2021-08-28 12:05:32 +02:00
Willy Tarreau
cbdc74b4b3 BUG/MINOR: ebtree: remove dependency on incorrect macro for bits per long
The code used to rely on BITS_PER_LONG to decide on the most efficient
way to perform a 64-bit shift, but this macro is not defined (at best
it's __BITS_PER_LONG) and it's likely that it's been like this since
the early implementation of ebtrees designed on i386. Let's remove the
test on this macro and rely on sizeof(long) instead; this also has the
benefit of letting the compiler validate the two branches.
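
A minimal sketch of the idea (not the actual ebtree code): an #if test on
an undefined macro silently evaluates it to 0 and always picks the same
branch, whereas testing sizeof(long) keeps both branches visible to the
compiler, which type-checks them and then drops the dead one:

    #include <stdint.h>

    /* shift a 64-bit key right by 0..63 bits; on 32-bit longs, avoid a
     * full 64-bit shift when only the high word is needed */
    static inline uint64_t key_shr(uint64_t v, unsigned int bits)
    {
        if (sizeof(long) >= 8 || bits < 32)
            return v >> bits;
        return (uint32_t)(v >> 32) >> (bits - 32);
    }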

This can be backported to all versions. Thanks to Ezequiel Garcia for
reporting this one in issue #1369.
2021-08-28 11:55:53 +02:00
Willy Tarreau
fe456c581f MINOR: time: add report_idle() to report process-wide idle time
Before threads were introduced in 1.8, idle_pct used to be a global
variable indicating the overall process idle time. Threads made it
thread-local, meaning that its reporting in the stats made little
sense, though this was not easy to spot. In 2.0, the idle_pct variable
moved to the struct thread_info via commit 81036f273 ("MINOR: time:
move the cpu, mono, and idle time to thread_info"). It made it more
obvious that the idle_pct was per thread, and also allowed it to be
measured more accurately. But no more effort was made in that direction.

This patch introduces a new report_idle() function that accurately
averages the per-thread idle time over all running threads (i.e. it
should remain valid even if some threads are paused or stopped), and
makes use of it in the stats / "show info" reports.
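
A rough sketch of the averaging logic (hypothetical names: the real code
reads the per-thread idle_pct from the thread_info structures):

    #define MAX_THREADS 64

    /* hypothetical stand-ins for the real per-thread fields */
    static unsigned int per_thread_idle[MAX_THREADS];
    static unsigned int nb_threads;

    /* average the per-thread idle_pct so the result describes the whole
     * process rather than whichever thread serves the CLI request */
    static unsigned int report_idle(void)
    {
        unsigned int total = 0;
        unsigned int i;

        if (!nb_threads)
            return 100;

        for (i = 0; i < nb_threads; i++)
            total += per_thread_idle[i];

        return total / nb_threads;
    }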

Sending traffic over only two connections of an 8-thread process
would previously show this erratic CPU usage pattern:

  $ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
  Idle_pct: 30
  Idle_pct: 35
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 35
  Idle_pct: 33
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100
  Idle_pct: 100

Now it shows this more accurate measurement:

  $ while :; do socat /tmp/sock1 - <<< "show info"|grep ^Idle;sleep 0.1;done
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83
  Idle_pct: 83

This is not technically a bug but this lack of precision definitely affects
some users who rely on the idle_pct measurement. This should at least be
backported to 2.4, and might be backported to some older releases
depending on user demand.
2021-08-28 11:18:10 +02:00
Willy Tarreau
e365aa28d4 BUG/MINOR: time: fix idle time computation for long sleeps
In 2.4 we extended the max poll time from 1s to 60s with commit
4f59d3861 ("MINOR: time: increase the minimum wakeup interval to 60s").

This had the consequence that the calculation of the idle time percentage
could overflow during the multiply by 100 if the thread had slept 43s or
more. Let's change this to a 64-bit computation. This will have no
performance impact since this is done at most twice per second.
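
For the sake of illustration, the overflow and the fix look roughly like
this (idle_time and samp_time are stand-ins for the real per-thread
counters, expressed in microseconds):

    #include <stdint.h>

    static unsigned int idle_pct(unsigned int idle_time, unsigned int samp_time)
    {
        /* broken: idle_time * 100 wraps in 32 bits once idle_time
         * exceeds ~42.9s worth of microseconds (UINT_MAX / 100) */
        /* return idle_time * 100U / samp_time; */

        /* fixed: do the multiply in 64 bits */
        return (unsigned int)((uint64_t)idle_time * 100 / samp_time);
    }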

This should fix github issue #1366.

This must be backported to 2.4.
2021-08-27 23:36:20 +02:00
Marcin Deranek
310a260e4a MEDIUM: config: Deprecate tune.ssl.capture-cipherlist-size
Deprecate tune.ssl.capture-cipherlist-size in favor of
tune.ssl.capture-buffer-size which better describes the purpose of the
setting.
2021-08-26 19:52:04 +02:00
Marcin Deranek
959a48c116 MINOR: sample: Expose SSL captures using new fetchers
To be able to provide JA3 compatible TLS Fingerprints we need to expose
all captured Client Hello data using fetchers. This patch provides new
fetchers and modifies existing ones to add the ability to filter out
GREASE values:
- ssl_fc_cipherlist_*
- ssl_fc_ecformats_bin
- ssl_fc_eclist_bin
- ssl_fc_extlist_bin
- ssl_fc_protocol_hello_id
2021-08-26 19:48:34 +02:00
Marcin Deranek
769fd2e447 MEDIUM: ssl: Capture more info from Client Hello
When we set tune.ssl.capture-cipherlist-size to a non-zero value
we are able to capture the cipher list supported by the client. To be able to
provide JA3 compatible TLS fingerprinting we need to capture more
information from Client Hello message:
- SSL Version
- SSL Extensions
- Elliptic Curves
- Elliptic Curve Point Formats
This patch allows HAProxy to capture such information and store it for
later use.
2021-08-26 19:48:33 +02:00
Willy Tarreau
906f7daed1 MINOR: compiler: implement an ONLY_ONCE() macro
There are regularly places, especially in config analysis, where we
need to report certain things (warnings or errors) only once, but
where implementing a counter is enough of a deterrent that it simply
never gets done.

Let's add a simple ONLY_ONCE() macro that implements a static variable
(char) which is atomically turned on, and returns true if it's set for
the first time. This uses fairly compact code, a single byte of BSS
and is thread-safe. There are probably a number of places in the config
parser where this could be used. It may also be used to implement a
WARN_ON() similar to BUG_ON() but which would only warn once.
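
A minimal sketch of how such a macro can be written (using the gcc
__atomic builtin directly here; the real one would rather go through the
HA_ATOMIC_* wrappers):

    /* Evaluates to true the first time this exact call site is reached,
     * from whichever thread, and to false on every later pass. The
     * static char costs a single byte of BSS per call site. */
    #define ONLY_ONCE() ({                                          \
            static char __once;                                     \
            !__atomic_exchange_n(&__once, 1, __ATOMIC_RELAXED);     \
    })

    /* typical use in the config parser: */
    if (ONLY_ONCE())
        ha_warning("this keyword is deprecated\n");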
2021-08-26 16:35:00 +02:00
Amaury Denoyelle
5cca48cba2 MINOR: server: define non purgeable server flag
Define a flag to mark a server as non purgeable. This flag will be used
for "delete server" CLI handler. All servers without this flag will be
eligible to runtime suppression.
2021-08-25 15:53:54 +02:00
Amaury Denoyelle
bc2ebfa5a4 MEDIUM: server: extend refcount for all servers
In a future patch, it will be possible to remove every server at
runtime, both static and dynamic. This requires extending the server
refcount to all instances.

First, refcount manipulation functions have been renamed to better
express the API usage.

* srv_refcount_use -> srv_take
The refcount is always initialized to 1 on server creation in
new_server. It's also incremented for each check/agent configured on a
server instance.

* free_server -> srv_drop
This decrements the refcount and, if it reaches zero, frees the server, so
code calling it must not use the server reference afterwards. As a bonus,
this function now returns the next server instance. This is useful when
looping over the server list without having to save the next pointer
before each invocation.

In these functions, remove the checks that prevent refcount on
non-dynamic servers. Each reference to "dynamic" in variable/function
naming has been eliminated as well.
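
As an illustration of the intended usage pattern (a sketch; px stands for
the proxy being deinitialized):

    struct server *srv = px->srv;

    while (srv) {
        /* srv may be freed here once its refcount reaches zero, so it
         * must not be dereferenced afterwards; srv_drop() conveniently
         * hands back the next element of the list instead. */
        srv = srv_drop(srv);
    }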
2021-08-25 15:53:54 +02:00
Amaury Denoyelle
0a8d05d31c BUG/MINOR: stats: use refcount to protect dynamic server on dump
A dynamic server may be deleted at runtime at the same moment when the
stats applet is pointing to it. Use the server refcount to prevent
deletion in this case.

This should be backported up to 2.4, with an observability period of 2
weeks. Note that it requires the dynamic server refcounting feature
which has been implemented in 2.5; the following commits are required:

- MINOR: server: implement a refcount for dynamic servers
- BUG/MINOR: server: do not use refcount in free_server in stopping mode
- MINOR: server: return the next srv instance on free_server
2021-08-25 15:53:43 +02:00
Amaury Denoyelle
f5c1e12e44 MINOR: server: return the next srv instance on free_server
As a convenience, return the next server instance from servers list on
free_server.

This is particularly useful when using this function on the servers
list without having to save the next pointer before calling it.
2021-08-25 15:29:19 +02:00
Ilya Shipitsin
ff0f278860 CLEANUP: assorted typo fixes in the code and comments
This is the 26th iteration of typo fixes
2021-08-25 05:13:31 +02:00
William Lallemand
3aeb3f9347 MINOR: cfgcond: implements openssl_version_atleast and openssl_version_before
Implements a way of checking the running openssl version:

If OpenSSL support was not compiled into HAProxy it will return an
error, so it's recommended to do an SSL feature check first:

	$ ./haproxy -cc 'feature(OPENSSL) && openssl_version_atleast(0.9.8zh) && openssl_version_before(3.0.0)'

This will allow selecting the SSL reg-tests more carefully.
2021-08-22 00:30:24 +02:00
William Lallemand
44d862d8d4 MINOR: ssl: add an openssl version string parser
openssl_version_parser() parses a string in the OpenSSL version format
which is documented here:

https://www.openssl.org/docs/man1.1.1/man3/OPENSSL_VERSION_NUMBER.html

The function returns an unsigned int that can be used to compare
OpenSSL versions.
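
For reference, that format packs a 1.x version as 0xMNNFFPPS (major,
minor, fix, patch letter, status), so a simplified parser, ignoring
multi-letter patch levels and pre-release statuses, could look like this
(a sketch, not the actual haproxy function):

    #include <stdio.h>

    /* "1.1.1k" -> 0x101010bf, comparable with plain integer operators */
    static unsigned int parse_openssl_version(const char *str)
    {
        unsigned int major = 0, minor = 0, fix = 0, patch = 0;
        char letter = 0;

        if (sscanf(str, "%u.%u.%u%c", &major, &minor, &fix, &letter) < 3)
            return 0;
        if (letter >= 'a' && letter <= 'z')
            patch = letter - 'a' + 1;

        return (major << 28) | (minor << 20) | (fix << 12) | (patch << 4) | 0xf;
    }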
2021-08-21 23:44:02 +02:00
William Lallemand
2a8fe8bb48 MINOR: httpclient: cleanup the include files
Include the correct .h files in http_client.c and http_client.h.

api.h is needed in http_client.c, and http_client-t.h is now included
directly from http_client.h.
2021-08-20 14:25:15 +02:00
Remi Tricot-Le Breton
f95c29546c BUILD/MINOR: ssl: Fix compilation with OpenSSL 1.0.2
The X509_STORE_CTX_get0_cert function did not exist yet in OpenSSL 1.0.2,
and neither did X509_STORE_CTX_get0_chain, which was not actually needed
since its get1 equivalent already existed.
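
The usual way to paper over this is a small shim in openssl-compat.h,
something along these lines (a sketch, assuming the pre-1.1.0 layout where
X509_STORE_CTX is not opaque yet and exposes its cert member):

    #include <openssl/x509.h>

    #if (OPENSSL_VERSION_NUMBER < 0x10100000L)
    static inline X509 *X509_STORE_CTX_get0_cert(X509_STORE_CTX *ctx)
    {
        /* the certificate being verified; no refcount taken (get0) */
        return ctx->cert;
    }
    #endif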
2021-08-20 10:05:58 +02:00
Remi Tricot-Le Breton
74f6ab6e87 MEDIUM: ssl: Keep a reference to the client's certificate for use in logs
Most of the SSL sample fetches related to the client certificate were
based on the SSL_get_peer_certificate function which returns NULL when
the verification process failed. This made it impossible to use those
fetches in a log format since they would always be empty.

The patch adds a reference to the X509 object representing the client
certificate in the SSL structure and makes use of this reference in the
fetches.

The reference can only be obtained in ssl_sock_bind_verifycbk which
means that in case of an SSL error occurring before the verification
process ("no shared cipher" for instance, which happens while processing
the Client Hello), we won't ever start the verification process and it
will be impossible to get information about the client certificate.

This patch also allows most of the ssl_c_XXX fetches to return a usable
value in case of connection failure (because of a verification error for
instance) by making the "conn->flags & CO_FL_WAIT_XPRT" test (which
requires a connection to be established) less strict.

Thanks to this patch, a log-format such as the following should return
usable information in case of an error occurring during the verification
process:
    log-format "DN=%{+Q}[ssl_c_s_dn] serial=%[ssl_c_serial,hex] \
                hash=%[ssl_c_sha1,hex]"

It should address GitHub issue #693.
2021-08-19 23:26:05 +02:00
William Lallemand
33b0d095cc MINOR: httpclient: implement a simple HTTP Client API
This commit implements a very simple HTTP Client API.

A client can be operated by several functions:

    - httpclient_new(), httpclient_destroy(): create
      and destroy the struct httpclient instance.

    - httpclient_req_gen(): generates a complete HTX request using the
      absolute URL, the method and a list of headers. This request is
      complete and sets the HTX End of Message flag. This is limited to
      small requests that don't need a body.

    - httpclient_start(): fills a sockaddr_storage with an IP address
      extracted from the URL (it cannot resolve an FQDN for now) and
      starts the applet. It also stores the pointer of the caller, which
      could be an appctx or something else.

    - hc->ops contains a list of callbacks used by the HTTPClient; they
      should be filled manually after an httpclient_new():

        * res_stline(): the client received a start line, its content
          will be stored in hc->res.vsn, hc->res.status, hc->res.reason

        * res_headers(): the client received headers, they are stored in
          hc->res.hdrs.

        * res_payload(): the client received some payload data, it is
          stored in the hc->res.buf buffer and can be extracted with the
          httpclient_res_xfer() function, which takes a destination buffer
          as a parameter

        * res_end(): this callback is called once the response has been
          fully received.
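
Pieced together from the description above, a caller would roughly do the
following (a sketch only: the exact prototypes, the method constant and
the my_* callbacks are assumptions, not the real API):

    struct httpclient *hc;

    hc = httpclient_new();                 /* allocate the client (assumed args) */
    hc->ops.res_stline  = my_res_stline;   /* hooks filled by the user           */
    hc->ops.res_headers = my_res_headers;
    hc->ops.res_payload = my_res_payload;  /* payload lands in hc->res.buf       */
    hc->ops.res_end     = my_res_end;

    /* build a bodyless request from the method, absolute URL and headers */
    httpclient_req_gen(hc, HTTP_METH_GET, ist("http://127.0.0.1:8000/"), NULL);

    /* extract the IP from the URL and start the applet */
    httpclient_start(hc);

    /* ... once res_end() has fired ... */
    httpclient_destroy(hc);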
2021-08-18 17:36:32 +02:00
Willy Tarreau
d3d8d03d98 MINOR: http: add a new function http_validate_scheme() to validate a scheme
While http_parse_scheme() extracts a scheme from a URI by consuming
exactly the valid characters and stopping on delimiters, this new
function performs the same validation on a fixed-size string.
2021-08-17 10:16:22 +02:00
Ilya Shipitsin
01881087fc CLEANUP: assorted typo fixes in the code and comments
This is the 25th iteration of typo fixes
2021-08-16 12:37:59 +02:00
Christopher Faulet
df97ac4584 MEDIUM: filters/lua: Add HTTPMessage class to help HTTP filtering
This new class exposes methods to manipulate HTTP messages from a filter
written in lua. As with the HTTP class, there is a bunch of methods to
manipulate the message headers. But there are also methods to manipulate the
message payload. This part is similar to what is available in the Channel
class. Thus the payload can be duplicated, erased, modified or
forwarded. For now, only DATA blocks can be retrieved and modified because
the current API is limited. No HTTPMessage method is able to yield. Those
manipulating the headers are always called on messages containing all the
headers, so there is no reason to yield. Those manipulating the payload are
called from the http_payload filter callback function, where yielding is
forbidden.

When an HTTPMessage object is instantiated, the underlying Channel object
can be retrieved via the ".channel" field.

For now this class is not used because the HTTP filtering is not supported
yet. It will be the purpose of another commit.

There is no documentation for now.
2021-08-12 08:57:07 +02:00
Christopher Faulet
8c9e6bba0f MINOR: lua: Add flags on the lua TXN to know the execution context
A lua TXN can be created when a sample fetch, an action or a filter callback
function is executed. A flag is now used to track the execution context.
Respectively, HLUA_TXN_SMP_CTX, HLUA_TXN_ACT_CTX and HLUA_TXN_FLT_CTX. The
filter flag is not used for now.
2021-08-12 08:57:07 +02:00
Christopher Faulet
1f43a3430e MINOR: lua: Add a flag on lua context to know the yield capability at run time
When a script is executed, a flag is used to allow it to yield. An error is
returned if a lua function yields, explicitly or not. But there is no way to
query this capability from C functions, so there is no way to choose whether
to yield or not depending on it.

To fill this gap, the flag HLUA_NOYIELD is introduced and added on the lua
context if the current script execution is not authorized to yield. Macros
to set, clear and test this flag are also added.

This feature will be useful to fix some bugs in lua action execution.
2021-08-12 08:57:07 +02:00
William Lallemand
8c29fa7454 MINOR: channel: remove an htx block from a channel
co_htx_remove_blk() implements a way to remove an htx block from a
channel buffer and update the channel output.
2021-08-12 00:51:59 +02:00
Amaury Denoyelle
7afa5c1843 MINOR: global: define MODE_STOPPING
Define a new mode MODE_STOPPING. It is used to indicate that the process
is in the stopping stage and no event loop runs anymore.
2021-08-09 17:51:55 +02:00
Amaury Denoyelle
b33a0abc0b MEDIUM: check: implement check deletion for dynamic servers
Implement a mechanism to free a started check at runtime for dynamic
servers. A new function check_purge is created for this. The check task
will be marked for deletion and scheduled to properly close connection
elements and free the task/tasklet/buf_wait elements.

This function will be useful to delete a dynamic server with checks.
2021-08-06 11:09:48 +02:00
Amaury Denoyelle
d6b7080cec MINOR: server: implement a refcount for dynamic servers
It is necessary to have a refcount mechanism on dynamic servers to be
able to enable check support. Indeed, when deleting a dynamic server
with check activated, the check will be asynchronously removed. This is
mandatory to properly free the check resources in a thread-safe manner.
The server instance must be kept alive for this.
2021-08-06 11:09:48 +02:00
Amaury Denoyelle
3c2ab1a0d4 MINOR: check: export check init functions
Remove static qualifier on init_srv_check, init_srv_agent_check and
start_check_task. These functions will be called in server.c for dynamic
servers with checks.
2021-08-06 11:08:04 +02:00
Amaury Denoyelle
7b368339af MEDIUM: task: implement tasklet kill
Implement an equivalent of task_kill for tasklets. This function can be
used to request a tasklet deletion in a thread-safe way.

Currently this function is unused.
2021-08-06 11:07:48 +02:00
Christopher Faulet
434b8525ee MINOR: spoe: Add a pointer on the filter config in the spoe_agent structure
There was no way to access the SPOE filter configuration from the agent
object. However it could be handy to have it. And in fact, this will be
required to fix a bug.
2021-08-05 10:07:43 +02:00
Willy Tarreau
7b2ac29a92 CLEANUP: fd: remove the now unneeded fd_mig_lock
This is no longer needed since we don't use it when setting the running
mask anymore.
2021-08-04 16:03:36 +02:00
Willy Tarreau
b201b1dab1 CLEANUP: fd: remove the now unused fd_set_running()
It was inlined inside fd_update_events() since it relies on a loop that
may return immediate failure codes.
2021-08-04 16:03:36 +02:00
Willy Tarreau
f69fea64e0 MAJOR: fd: get rid of the DWCAS when setting the running_mask
Right now we're using a DWCAS to atomically set the running_mask while
being constrained by the thread_mask. This DWCAS is annoying because we
may seriously need it later when adding support for thread groups, for
checking that the running_mask applies to the correct group.

It turns out that the DWCAS is not strictly necessary because we never
need it to set the thread_mask based on the running_mask, only the other
way around. And in fact, the running_mask is always cleared alone, and
the thread_mask is changed alone as well. The running_mask is only
relevant to indicate a takeover when the thread_mask matches it. Any
bit set in running and not present in thread_mask indicates a transition
in progress.

As such, it is possible to re-arrange this by using a regular CAS around a
consistency check between running_mask and thread_mask in fd_update_events
and by making a CAS on running_mask then an atomic store on the thread_mask
in fd_takeover(). The only other case is fd_delete() but that one already
sets the running_mask before clearing the thread_mask, which is compatible
with the consistency check above.

This change has happily survived 10 billion takeovers on a 16-thread
machine at 800k requests/s.

The fd-migration doc was updated to reflect this change.
2021-08-04 16:03:36 +02:00
Willy Tarreau
b1f29bc625 MINOR: activity/fd: remove the dead_fd counter
This one is set whenever an FD is reported by a poller with a null owner,
regardless of the thread_mask. It has become totally meaningless because
it only indicates a migrated FD that was not yet reassigned to a thread,
but as soon as a thread uses it, the status will change to skip_fd. Thus
there is no reason to distinguish between the two, it adds more confusion
than it helps. Let's simply drop it.
2021-08-04 16:03:36 +02:00
Willy Tarreau
88d1c5d3fb MEDIUM: threads: add a stronger thread_isolate_full() call
The current principle of running under isolation was made to access
sensitive data while being certain that no other thread was using them
in parallel, without necessarily having to place locks everywhere. The
main use case are "show sess" and "show fd" which run over long chains
of pointers.

The thread_isolate() call relies on the "harmless" bit that indicates
for a given thread that it's not currently doing such sensitive things,
which is advertised using thread_harmless_now() and which ends using
thread_harmless_end(), which also waits for possibly concurrent threads
to complete their work if they took this opportunity for starting
something tricky.

As some system calls were notoriously slow (e.g. mmap()), a bunch of
thread_harmless_now() / thread_harmless_end() were placed around them
to let waiting threads do their work while such other threads were not
able to modify memory contents.

But this is not sufficient for performing memory modifications. One such
example is the server deletion code. By modifying memory, it not only
requires that other threads are not playing with it, but also that they
are not even in the process of touching it. The fact that a pool_alloc()
or pool_free()
on some structure may call thread_harmless_now() and let another thread
start to release the same object's memory is not acceptable.

This patch introduces the concept of "idle threads". Threads entering
the polling loop are idle, as well as those that are waiting for all
others to become idle via the new function thread_isolate_full(). Once
thread_isolate_full() is granted, the thread is not idle anymore, and
it is released using thread_release() just like regular isolation. Its
users have to keep in mind that across this call nothing is guaranteed as
another thread might have performed shared memory modifications. But
such users are extremely rare and are actually expecting this from their
peers as well.

Note that in case of backport, this patch depends on the previous patch:
  MINOR: threads: make thread_release() not wait for other ones to complete
2021-08-04 14:49:36 +02:00
William Lallemand
8e765b86fd MINOR: proxy: disabled takes a stopping and a disabled state
This patch splits the disabled state of a proxy into a PR_DISABLED and a
PR_STOPPED state.

The first one is set when the proxy is disabled in the configuration
file, and the second one is set upon a stop_proxy().
2021-08-03 14:17:45 +02:00
William Lallemand
56f1f75715 MINOR: log: rename 'dontloglegacyconnerr' to 'log-error-via-logformat'
Rename the 'dontloglegacyconnerr' option to 'log-error-via-logformat'
which is much more self-explanatory and readable.

Note: only legacy keywords don't use hyphens; it is recommended to
separate words with hyphens in new keywords.
2021-08-02 10:42:42 +02:00
Willy Tarreau
99198546f6 MEDIUM: atomic: relax the load/store barriers on x86_64
The x86-tso model makes the load and store barriers unneeded for our
usage as long as they perform at least a compiler barrier: the CPU
will respect store ordering and store vs load ordering. It's thus
safe to remove the lfence and sfence which are normally needed only
to communicate with external devices. Let's keep the mfence though,
to make sure that reads of the same memory location after writes report
the value from memory and not the one snooped from the write buffer
for too long.

An in-depth review of all use cases tends to indicate that this is
okay in the rest of the code. Some parts could be cleaned up to
use atomic stores and atomic loads instead of explicit barriers
though.

Doing this reliably increases the overall performance by about 2-2.5%
on an 8c-16t Xeon thanks to less frequent flushes (it's likely that the
biggest gain is in the MT lists which use them a lot, and that this
results in less cache line flushes).
2021-08-01 17:34:06 +02:00
Willy Tarreau
cb0451146f MEDIUM: atomic: simplify the atomic load/store/exchange operations
The atomic_load/atomic_store/atomic_xchg operations were all forced to
__ATOMIC_SEQ_CST, which results in explicit store or even full barriers
even on x86-tso while we do not need them: we're not communicating with
external devices for example and are only interested in respecting the
proper ordering of loads and stores between each other.

These ones being rarely used, the emitted code on x86 remains almost the
same (barring a handful of locations). However they will allow placing
correct barriers at other places where atomics are accessed a bit lightly.

The patch is marked medium because we can never rule out the risk of some
bugs on more relaxed platforms due to the rest of the code.
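
To illustrate what "respecting the proper ordering of loads and stores
between each other" buys, here is a generic producer/consumer pairing with
the gcc builtins (haproxy accesses these through its own HA_ATOMIC_*
macros, whose exact memory orders may differ):

    int data;
    int ready;

    /* producer: publish data, then raise the flag with release semantics
     * so the data write cannot be reordered after the flag write */
    void publish(int v)
    {
        data = v;
        __atomic_store_n(&ready, 1, __ATOMIC_RELEASE);
    }

    /* consumer: an acquire load of the flag guarantees that the data read
     * below sees the value stored before the release; no SEQ_CST (full
     * barrier) is needed for this pairing */
    int try_consume(int *out)
    {
        if (!__atomic_load_n(&ready, __ATOMIC_ACQUIRE))
            return 0;
        *out = data;
        return 1;
    }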
2021-08-01 17:34:06 +02:00
Willy Tarreau
55a0975b1e BUG/MINOR: freq_ctr: use stricter barriers between updates and readings
update_freq_ctr_period() was using relaxed atomics without using barriers,
which usually works fine on x86 but not everywhere else. In addition, some
values were read without being enclosed by barriers, allowing the compiler
to possibly prefetch them a bit earlier. Finally, freq_ctr_total() was also
reading these without enough barriers. Let's make explicit use of atomic
loads and atomic stores to get rid of this situation. This required
slightly rearranging the freq_ctr_total() loop, which could possibly
slightly improve performance under extreme contention by avoiding
rereading all fields.

A backport may be done to 2.4 if a problem is encountered, but last tests
on arm64 with LSE didn't show any issue so this can possibly stay as-is.
2021-08-01 17:34:06 +02:00
Willy Tarreau
200bd50b73 MEDIUM: fd: rely more on fd_update_events() to detect changes
This function already performs a number of checks prior to calling the
IOCB, and detects the change of thread (FD migration). Half of the
controls are still in each poller, and these pollers also maintain
activity counters for various cases.

Note that the unreliable test on thread_mask was removed so that only
the one performed by fd_set_running() is now used, since this one is
reliable.

Let's centralize all that fd-specific logic into the function and make
it return a status among:

  FD_UPDT_DONE,        // update done, nothing else to be done
  FD_UPDT_DEAD,        // FD was already dead, ignore it
  FD_UPDT_CLOSED,      // FD was closed
  FD_UPDT_MIGRATED,    // FD was migrated, ignore it now

Some pollers already used to call it last and have nothing to do after
it, regardless of the result. epoll has to delete the FD in case a
migration is detected. Overall this removes more code than it adds.
2021-07-30 17:45:18 +02:00
Willy Tarreau
84c7922c52 REORG: fd: uninline fd_update_events()
This function has become a monster (80 lines and 2/3 of a kB); it doesn't
benefit from being static nor inline anymore, so let's move it to fd.c.
2021-07-30 17:41:55 +02:00
Willy Tarreau
a199a17d72 MINOR: fd: update flags only once in fd_update_events()
Since 2.4 with commit f50906519 ("MEDIUM: fd: merge fdtab[].ev and state
for FD_EV_* and FD_POLL_* into state") we can merge all flag updates at
once in fd_update_events(). Previously this was performed in 1 to 3 steps,
setting the polling state, then setting READY_R if in/err/hup, and setting
READY_W if out/err. But since the commit above, all flags are stored
together in the same structure field that is being updated with the new
flags, thus we can simply update the flags altogether and avoid multiple
atomic operations. This even removes the need for atomic ops for FDs that
are not shared.
2021-07-30 17:41:55 +02:00
Willy Tarreau
d5402b8df8 BUG/MINOR: fd: protect fd state harder against a concurrent takeover
There's a theoretical race (that we failed to trigger) in function
fd_update_events(), which could strike on idle connections. The "locked"
variable will most often be 0 as the FD is bound to the current thread
only. Another thread could take it over once "locked" is set, change
the thread and running masks. Then the first thread updates the FD's
state non-atomically and possibly overwrites what the other thread was
preparing. It still looks like the FD's state will ultimately converge
though.

The solution against this is to set the running flag earlier so that a
takeover() attempt cannot succeed, or that the fd_set_running() attempt
fails, indicating that nothing needs to be done on this FD.

While this is sufficient for a simple fix to be backported, it leaves
the FD actively polled in the calling thread; this will trigger a second
wakeup which will notice the absence of tid_bit in the thread_mask and
get rid of it.

A more elaborate solution would consist in calling fd_set_running()
directly from the pollers before calling fd_update_events(), getting
rid of the thread_mask test and letting the caller eliminate that FD
from its list if needed.

Interestingly, this code also proves to be suboptimal in that it sets
the FD state twice instead of calculating the new state at once and
always using a CAS to set it. This is a leftover of a simplification
that went into 2.4 and which should be explored in a future patch.

This may be backported as far as 2.2.
2021-07-30 14:54:19 +02:00
Willy Tarreau
6ed242ece6 BUG/MEDIUM: connection: close a rare race between idle conn close and takeover
The takeover of idle conns between threads is particularly tricky, for
two reasons:
  - there's no way to atomically synchronize kernel-side polling with
    userspace activity, so late events will always be reported for some
    FDs just migrated ;

  - upon error, an FD may be immediately reassigned to whatever other
    thread since it's process-wide.

The current model uses the FD's thread_mask to figure if an FD still
ought to be reported or not, and a per-thread idle connection queue
from which eligible connections are atomically added/picked. I/Os
coming from the bottom for such a connection must remove it from the
list so that it's not elected. Same for timeout tasks and iocbs. And
these last ones check their context under the idle_conn lock to judge
if they're still allowed to run.

One rare case was omitted: the wake() callback. This one is rare, it
may serve to notify about finalized connect() calls that are not being
polled, as well as unhandled shutdowns and errors. This callback was
not protected till now because it wasn't seen as sensitive, but there
exists a particular case where it may be called without protection in
parallel to a takeover. This happens in the following sequence:

  - thread T1 wants to establish an outgoing connection
  - the connect() call returns EINPROGRESS
  - the poller adds it using epoll_ctl()
  - epoll_wait() reports it, connect() is done. The connection is
    not being marked as actively polled anymore but is still known
    from the poller.
  - the request is sent over that connection using send(), which
    queues to system buffers while data are being delivered
  - the scheduler switches to other tasks
  - the request is physically sent
  - the server responds
  - the stream is notified that send() succeeded, and makes progress,
    trying to recv() from that connection
  - the recv() succeeds, the response is delivered
  - the poller doesn't need to be touched (still no active polling)
  - the scheduler switches to other tasks
  - the server closes the connection
  - the poller on T1 is notified of the SHUTR and starts to call mux->wake()
  - another thread T2 takes over the connection
  - T2 continues to run inside wake() and releases the connection
  - T2 is just dereferencing it.
  - BAM.

The most logical solution here is to surround the call to wake() with an
atomic removal/insert of the connection from/into the idle conns lists.
This way, wake() is guaranteed to run alone. Any other poller reporting
the FD will not have its tid_bit in the thread_mask so will not bother
it. Another thread trying a takeover will not find this connection. A
task or tasklet being woken up late will either be on the same thread,
or be called on another one with a NULL context since it will be the
consequence of previous successful takeover, and will be ignored. Note
that the extra cost of a lock and a tree access here has a low overhead
which is totally amortized given that these ones roughly happen 1-2
times per connection at best.

While it was possible to crash the process after 10-100k req using H2
and a hand-refined configuration achieving perfect synchronism between
a long (20+) chain of proxies and a short timeout (1ms), now with that
fix this never happens even after 10M requests.

Many thanks to Olivier for proposing this solution and explaining why
it works.

This should be backported as far as 2.2 (when inter-thread takeover was
introduced). The code in older versions will be found in conn_fd_handler().
A workaround consists in disabling inter-thread pool sharing using:

    tune.idle-pool.shared off
2021-07-30 08:34:38 +02:00
Remi Tricot-Le Breton
4a6328f066 MEDIUM: connection: Add option to disable legacy error log
In case of connection failure, a dedicated error message is output,
following the format described in section "Error log format" of the
documentation. These messages cannot be configured through a log-format
option.
This patch adds a new option, "dontloglegacyconnerr", that disables
those error logs when set, and "replaces" them with a regular log line
that follows the configured log-format (thanks to a call to sess_log in
session_kill_embryonic).
The new fc_conn_err sample fetch allows adding the legacy error log
information to a regular log format.
This new option is unset by default so the logging logic will remain the
same until this new option is used.
2021-07-29 15:40:45 +02:00