Commit Graph

12984 Commits

Author SHA1 Message Date
Ilya Shipitsin
fcb69d768b BUILD: ssl: make BoringSSL use its own version numbers
BoringSSL is a fork of OpenSSL 1.1.0; however, in
49e9f67d8b7cbeb3953b5548ad1009d15947a523 it changed its version to 1.1.1.

Should fix issue #895.

This must be backported to 2.2, 2.1, 2.0, 1.8
2020-10-19 11:34:37 +02:00
Ilya Shipitsin
b3201a3e07 BUG/MINOR: disable dynamic OCSP load with BoringSSL
It was accidentally enabled on BoringSSL while it is
actually not supported.

wla: Fixes part of the issue mentioned in #895.
It fixes the build of BoringSSL versions prior to commit
https://boringssl.googlesource.com/boringssl/+/49e9f67d8b7cbeb3953b5548ad1009d15947a523

Must be backported in 2.2.

Signed-off-by: William Lallemand <wlallemand@haproxy.org>
2020-10-19 11:00:51 +02:00
Willy Tarreau
4b6e3c284a MINOR: lb/chash: use a read lock in chash_get_server_hash()
When using a low hash-balance-factor value, it's possible to loop
many times trying to find the best server. Figures in the order of
100-300 times were observed for 1000 servers with a factor of 101
(which seems a bit excessive for such a large farm). Given that
there's nothing in that function that prevents multiple threads
from working in parallel, let's switch to a read lock. Tests on
8 threads show roughly a 2% performance increase with this.
2020-10-17 20:15:49 +02:00
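
A minimal sketch of the read-lock pattern applied in this commit and the
following ones (plain pthreads rather than HAProxy's HA_RWLOCK_* macros;
the tree and server types are illustrative stand-ins):

    #include <pthread.h>
    #include <stddef.h>

    /* illustrative stand-ins for the LB tree and server types */
    struct server { unsigned hash; struct server *next; };
    struct lb_tree { struct server *head; };

    static pthread_rwlock_t lb_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* read-only lookup: a shared lock lets many threads search in
     * parallel, which matters when the search may loop 100-300 times */
    struct server *lookup_server(struct lb_tree *t, unsigned hash)
    {
        struct server *s, *found = NULL;

        pthread_rwlock_rdlock(&lb_lock);
        for (s = t->head; s; s = s->next) {  /* nothing is modified here */
            if (s->hash == hash) {
                found = s;
                break;
            }
        }
        pthread_rwlock_unlock(&lb_lock);
        return found;
    }
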
Willy Tarreau
f76a21f78c MINOR: lb/first: use a read lock in fas_get_next_server()
The "first" algorithm creates a lot of contention because all threads
focus on the same server by definition (the first available one). By
turning the exclusive lock to a read lock in fas_get_next_server(),
the request rate increases by 16% for 8 threads when many servers are
getting close to their maxconn.
2020-10-17 19:49:49 +02:00
Willy Tarreau
58bc9c1ced MINOR: lb/leastconn: only take a read lock in fwlc_get_next_server()
This function doesn't change the tree, it only looks for the first
usable server, so let's do that under a read lock to limit the
situations like the ones described in issue #881 where finding a
usable server when dealing with lots of saturated ones can be
expensive. At least threads will now be able to look up in
parallel.

It's interesting to note that s->served is not incremented during the
server choice, nor is the server repositioned. So right now already,
nothing prevents multiple threads from picking the same server. This
will not cause a significant imbalance anyway given that the server
will automatically be repositioned at the right place, but this might
be something to improve in the future if it doesn't come with too high
a cost.

It also looks like the way a server's weight is updated could be
revisited so that the write lock gets tighter at the expense of a
short part of inconsistency between weights and servers still present
in the tree.
2020-10-17 19:37:40 +02:00
Willy Tarreau
ae99aeb135 MINOR: lb/map: use seek lock and read locks where appropriate
- map_get_server_hash() doesn't need a write lock since it only
  reads the array, let's only use a read lock here.

- map_get_server_rr() only needs exclusivity to adjust the rr_idx
  while looking for its entry. Since this one is not used by
  map_get_server_hash(), let's turn this lock to a seek lock that
  doesn't block reads.

With 8 threads, no significant performance difference was noticed
given that lookups are usually instant with this LB algo so the
lock contention is rare.
2020-10-17 19:04:27 +02:00
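
A sketch of what the two lookups may look like after this change
(assuming HAProxy's HA_RWLOCK_* macros; the lock label and map field
names are illustrative):

    /* map_get_server_rr(): needs exclusivity only against other
     * walkers adjusting rr_idx, not against hash-based readers */
    HA_RWLOCK_SKLOCK(LBPRM_LOCK, &p->lbprm.lock);
    srv = p->lbprm.map.srv[p->lbprm.map.rr_idx];
    p->lbprm.map.rr_idx = (p->lbprm.map.rr_idx + 1) % nbsrv;
    HA_RWLOCK_SKUNLOCK(LBPRM_LOCK, &p->lbprm.lock);

    /* map_get_server_hash(): pure read, compatible with the seek lock */
    HA_RWLOCK_RDLOCK(LBPRM_LOCK, &p->lbprm.lock);
    srv = p->lbprm.map.srv[hash % nbsrv];
    HA_RWLOCK_RDUNLOCK(LBPRM_LOCK, &p->lbprm.lock);
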
Willy Tarreau
cd10def825 MINOR: backend: replace the lbprm lock with an rwlock
It was previously a spinlock, and it happens that a number of LB algos
only lock it for lookups, without performing any modification. Let's
first turn it to an rwlock and w-lock it everywhere. This is strictly
identical.

It was carefully checked that every HA_SPIN_LOCK() was turned to
HA_RWLOCK_WRLOCK() and that HA_SPIN_UNLOCK() was turned to
HA_RWLOCK_WRUNLOCK() on this lock. _INIT and _DESTROY were updated too.
2020-10-17 18:51:41 +02:00
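
The conversion is purely mechanical; roughly, at every call site (lock
label and field names assumed for illustration):

    /* before: exclusive spinlock around every LB operation */
    HA_SPIN_LOCK(LBPRM_LOCK, &p->lbprm.lock);
    /* ... lookup or update ... */
    HA_SPIN_UNLOCK(LBPRM_LOCK, &p->lbprm.lock);

    /* after: same exclusivity through an rwlock, which lets the
     * commits above relax individual lookup paths to read/seek locks */
    HA_RWLOCK_WRLOCK(LBPRM_LOCK, &p->lbprm.lock);
    /* ... lookup or update ... */
    HA_RWLOCK_WRUNLOCK(LBPRM_LOCK, &p->lbprm.lock);
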
Willy Tarreau
9d58c9b251 [RELEASE] Released version 2.3-dev7
Released version 2.3-dev7 with the following main changes :
    - CI: travis-ci: replace not defined SSL_LIB, SSL_INC for BoringSSL builds
    - BUG/MINOR: init: only keep rlim_fd_cur if max is unlimited
    - BUG/MINOR: mux-h2: do not stop outgoing connections on stopping
    - MINOR: fd: report an error message when failing initial allocations
    - MINOR: proto-tcp: make use of connect(AF_UNSPEC) for the pause
    - MINOR: sock: add sock_accept_conn() to test a listening socket
    - MINOR: protocol: make proto_tcp & proto_uxst report listening sockets
    - MINOR: sockpair: implement the .rx_listening function
    - CLEANUP: tcp: make use of sock_accept_conn() where relevant
    - CLEANUP: unix: make use of sock_accept_conn() where relevant
    - BUG/MINOR: listener: detect and handle shared sockets stopped in other processes
    - CONTRIB: tcploop: implement a disconnect operation 'D'
    - CLEANUP: protocol: initialize all of the sockaddr when disconnecting
    - BUG/MEDIUM: deinit: check fdtab before fdtab[fd].owner
    - BUG/MINOR: connection: fix loop iter on connection takeover
    - BUG/MEDIUM: connection: fix srv idle count on conn takeover
    - MINOR: connection: improve list api usage
    - MINOR: mux/connection: add a new mux flag for HOL risk
    - MINOR: connection: don't check priv flag on free
    - MEDIUM: backend: add new conn to session if mux marked as HOL blocking
    - MEDIUM: backend: add reused conn to sess if mux marked as HOL blocking
    - MEDIUM: h2: remove conn from session on detach
    - MEDIUM: fcgi: remove conn from session on detach
    - DOC: Describe reuse safe for HOL handling
    - MEDIUM: proxy: remove obsolete "mode health"
    - MEDIUM: proxy: remove obsolete "monitor-net"
    - CLEANUP: protocol: remove the ->drain() function
    - CLEANUP: fd: finally get rid of fd_done_recv()
    - MINOR: connection: make sockaddr_alloc() take the address to be copied
    - MEDIUM: listener: allocate the connection before queuing a new connection
    - MINOR: session: simplify error path in session_accept_fd()
    - MINOR: connection: add new error codes for accept_conn()
    - MINOR: sock: rename sock_accept_conn() to sock_accepting_conn()
    - MINOR: protocol: add a new function accept_conn()
    - MINOR: sock: implement sock_accept_conn() to accept a connection
    - MINOR: sockpair: implement sockpair_accept_conn() to accept a connection
    - MEDIUM: listener: use protocol->accept_conn() to accept a connection
    - MEDIUM: listener: remove the second pass of fd manipulation at the end
    - MINOR: protocol: add a default I/O callback and put it into the receiver
    - MINOR: log: set the UDP receiver's I/O handler in the receiver
    - MINOR: protocol: register the receiver's I/O handler and not the protocol's
    - CLEANUP: protocol: remove the now unused <handler> field of proto_fam->bind()
    - DOC: improve the documentation for "option nolinger"
    - BUG/MEDIUM: proxy: properly stop backends
    - BUG/MEDIUM: task: bound the number of tasks picked from the wait queue at once
    - MINOR: threads: augment rwlock debugging stats to report seek lock stats
    - MINOR: threads: add the transitions to/from the seek state
    - MEDIUM: task: use an upgradable seek lock when scanning the wait queue
    - BUILD: listener: avoid a build warning when threads are disabled
    - BUG/MINOR: peers: Possible unexpected peer session reset after collisions.
    - MINOR: ssl: add volatile flags to ssl samples
    - MEDIUM: backend: reuse connection if using a static sni
    - BUG/MEDIUM: spoe: Unset variable instead of setting it if no data provided
    - BUG/MEDIUM: mux-h1: Get the session from the H1S when capturing bad messages
    - BUG/MEDIUM: lb: Always lock the server when calling server_{take,drop}_conn
    - DOC: fix typo in MAX_SESS_STKCTR
2020-10-17 10:31:50 +02:00
Matteo Contrini
1857b8cf4d DOC: fix typo in MAX_SESS_STKCTR
MAX_SESS_STKCTR is spelled wrongly a couple of times
in the configuration docs (K and C are swapped). This patch
fixes the typos.
2020-10-17 09:37:25 +02:00
Christopher Faulet
26a52af642 BUG/MEDIUM: lb: Always lock the server when calling server_{take,drop}_conn
The server lock must be held when server_take_conn() and server_drop_conn()
lbprm callback functions are called. It is a documented prerequisite but it is
not always performed. It only affects leastconn and fas lb algorithm. Others
don't use these callback functions.

A race condition on the next pending effective weight (next_eweight) may be
encountered with the leastconn lb algorithm. An agent check may set it to 0
while fwlc_srv_reposition() is called. The server is locked during the
next_eweight update, but because the server lock is not acquired when
fwlc_srv_reposition() is called, this null value may be used to recompute
the server key, leading to a division by 0.

This patch must be backported as far as 1.8.
2020-10-17 09:29:43 +02:00
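
The race can be pictured with a self-contained miniature (plain
pthreads; the names are illustrative, not HAProxy's actual code):

    #include <pthread.h>

    struct server {
        pthread_mutex_t lock;      /* stands in for the server lock */
        unsigned next_eweight;     /* an agent check may set this to 0 */
        unsigned served;
    };

    unsigned recompute_key(struct server *s)
    {
        unsigned key;

        /* without holding the server lock, next_eweight may turn to 0
         * between a test and the division; taking the lock serializes
         * this read with the agent-check update */
        pthread_mutex_lock(&s->lock);
        key = s->next_eweight ? s->served * 1000 / s->next_eweight : 0;
        pthread_mutex_unlock(&s->lock);
        return key;
    }
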
Christopher Faulet
db2c17da60 BUG/MEDIUM: mux-h1: Get the session from the H1S when capturing bad messages
It is not guaranteed that the backend connection has an owner. It is set when
the connection is created, but when the connection is moved into a server's
idle list, the connection owner is set to NULL and may never be set again. On
the other hand, when a mux is created or when a CS is attached, the session
is always defined, and the H1 stream always keeps a reference to it when it
is created. Thus, when a bad message is captured, we should not rely on the
connection owner to retrieve the session; instead we should get it from the
H1 stream.
2020-10-16 19:53:17 +02:00
Christopher Faulet
2469eba20f BUG/MEDIUM: spoe: Unset variable instead of setting it if no data provided
If an agent tries to set a variable with the NULL data type, an unset is
performed instead to avoid undefined behaviors. Once decoded, such data is
translated to a sample with the type SMP_T_ANY, which is unexpected in
HAProxy. When a variable is set with such a sample, no data is attached to
the variable. Thus, when the variable is retrieved later in the transaction,
the sample data is uninitialized, leading to undefined behaviors depending on
how it is used. For instance, it leads to a crash if the debug converter is
used on such a variable.

This patch should fix the issue #855. It must be backported as far as 1.8.
2020-10-16 19:53:17 +02:00
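
A sketch of the guard described above (hedged: the helper names and
sample structure are illustrative, not the actual SPOE code):

    enum smp_type { SMP_T_ANY, SMP_T_BOOL, SMP_T_SINT, SMP_T_STR };

    struct sample { enum smp_type type; /* ... data union ... */ };

    void spoe_set_var_sketch(const char *name, struct sample *smp);
    void spoe_unset_var_sketch(const char *name);

    void apply_agent_set_var(const char *name, struct sample *smp)
    {
        if (smp->type == SMP_T_ANY) {
            /* NULL data type from the agent: no data is attached, so
             * unset the variable rather than storing an empty sample */
            spoe_unset_var_sketch(name);
            return;
        }
        spoe_set_var_sketch(name, smp);
    }
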
Amaury Denoyelle
7239c24986 MEDIUM: backend: reuse connection if using a static sni
Detect if the sni uses a constant value and, if so, allow this connection
to be reused by later sessions. Use a combination of SMP_USE_INTRN +
!SMP_F_VOLATILE to consider a sample as a constant value.

This feature was requested in github issue #371.
2020-10-16 17:48:01 +02:00
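
The constness test can be sketched as follows (the flag values and
struct layout are illustrative; only the flag names come from the
commit):

    #define SMP_F_VOLATILE  0x0001  /* value changes between evaluations */
    #define SMP_USE_INTRN   0x0100  /* computed from internal state only */

    struct sample { unsigned use; unsigned flags; };

    /* a sample only using internal data and carrying no volatile flag
     * is constant, so a connection keyed on such an sni can be reused */
    static inline int smp_is_constant(const struct sample *smp)
    {
        return (smp->use & SMP_USE_INTRN) && !(smp->flags & SMP_F_VOLATILE);
    }
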
Amaury Denoyelle
2f0a797631 MINOR: ssl: add volatile flags to ssl samples
The ssl samples are not constant over time and change according to the
session. Add the flag SMP_F_VOL_SESS to indicate this.
2020-10-16 17:47:29 +02:00
Frédéric Lécaille
baeb919177 BUG/MINOR: peers: Possible unexpected peer session reset after collisions.
During a peers session collision (two peer sessions opened on both sides), we
must mark as alive the peer whose session will be shut down; if not, the
->reconnect timer will be set with a wrong value if the synchro task expires
after the peer has been reconnected. This possibly leads to unexpected
disconnections during handshakes. Furthermore, this patch cancels any
heartbeat transmission when a reconnection is prepared.
2020-10-16 17:45:58 +02:00
Willy Tarreau
0aa5a5b175 BUILD: listener: avoid a build warning when threads are disabled
It's just a __decl_thread() that appeared before the last variable.
2020-10-16 17:43:04 +02:00
Willy Tarreau
d48ed6643b MEDIUM: task: use an upgradable seek lock when scanning the wait queue
Right now when running a configuration with many global timers (e.g. many
health checks), there is a lot of contention on the global wait queue
lock because all threads queue up in front of it to scan it.

With 2000 servers checked every 10 milliseconds (200k checks per second),
after 23 seconds running on 8 threads, the lock stats were this high:

  Stats about Lock TASK_WQ:
      write lock  : 9872564
      write unlock: 9872564 (0)
      wait time for write     : 9208.409 msec
      wait time for write/lock: 932.727 nsec
      read lock   : 240367
      read unlock : 240367 (0)
      wait time for read      : 149.025 msec
      wait time for read/lock : 619.991 nsec

i.e. ~5% of the total runtime spent waiting on this specific lock.

With upgradable locks we don't need to work like this anymore. We
can just try to upgrade the read lock to a seek lock before scanning
the queue, then upgrade the seek lock to a write lock for each element
we want to delete there, and immediately downgrade it to a seek lock.

The benefit is double:
  - all other threads which need to call next_expired_task() before
    polling won't wait anymore since the seek lock is compatible with
    the read lock ;

  - all other threads competing on trying to grab this lock will fail
    on the upgrade attempt from read to seek, and will let the current
    lock owner finish collecting expired entries.

Doing only this has reduced the wake_expired_tasks() CPU usage in a
very large servers test from 2.15% to 1.04% as reported by perf top,
and increased by 3% the health check rate (all threads being saturated).

This is expected to help against (and possibly solve) the problem
described in issue #875.
2020-10-16 17:15:54 +02:00
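
A simplified sketch of that sequence, using the transition macros
listed in the commit below (the queue-walking helpers and lock
variables are illustrative):

    HA_RWLOCK_RDLOCK(TASK_WQ_LOCK, &wq_lock);
    if (HA_RWLOCK_TRYRDTOSK(TASK_WQ_LOCK, &wq_lock) != 0) {
        /* someone else is already collecting expired entries: let the
         * current owner finish and just drop our read access */
        HA_RWLOCK_RDUNLOCK(TASK_WQ_LOCK, &wq_lock);
        return;
    }
    while ((task = next_expired_task_sketch()) != NULL) {
        HA_RWLOCK_SKTOWR(TASK_WQ_LOCK, &wq_lock); /* S --> W to unlink */
        unlink_expired_task_sketch(task);
        HA_RWLOCK_WRTOSK(TASK_WQ_LOCK, &wq_lock); /* W --> S: readers flow */
    }
    HA_RWLOCK_SKUNLOCK(TASK_WQ_LOCK, &wq_lock);
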
Willy Tarreau
61f799b8da MINOR: threads: add the transitions to/from the seek state
Since our locks are based on progressive locks, we support the upgradable
seek lock that is compatible with readers and upgradable to a write lock.
The main purpose is to take it while seeking down a tree for modification
while other threads may seek the same tree for an input (e.g. compute the
next event date).

The newly supported operations are:

  HA_RWLOCK_SKLOCK(lbl,l)         pl_take_s(l)      /* N --> S */
  HA_RWLOCK_SKTOWR(lbl,l)         pl_stow(l)        /* S --> W */
  HA_RWLOCK_WRTOSK(lbl,l)         pl_wtos(l)        /* W --> S */
  HA_RWLOCK_SKTORD(lbl,l)         pl_stor(l)        /* S --> R */
  HA_RWLOCK_WRTORD(lbl,l)         pl_wtor(l)        /* W --> R */
  HA_RWLOCK_SKUNLOCK(lbl,l)       pl_drop_s(l)      /* S --> N */
  HA_RWLOCK_TRYSKLOCK(lbl,l)      (!pl_try_s(l))    /* N -?> S */
  HA_RWLOCK_TRYRDTOSK(lbl,l)      (!pl_try_rtos(l)) /* R -?> S */

Existing code paths are left unaffected so this patch doesn't affect
any running code.
2020-10-16 16:53:46 +02:00
Willy Tarreau
8d5360ca7f MINOR: threads: augment rwlock debugging stats to report seek lock stats
We currently use only read and write lock operations with rwlocks, but
ours also support upgradable seek locks for which we do not report any
stats. Let's add them now when DEBUG_THREAD is enabled.
2020-10-16 16:51:49 +02:00
Willy Tarreau
3cfaa8d1e0 BUG/MEDIUM: task: bound the number of tasks picked from the wait queue at once
There is a theoretical problem in the wait queue, which is that with many
threads, one could spend a lot of time looping on the newly expired tasks,
causing a lot of contention on the global wq_lock and on the global
rq_lock. This initially sounds benign, but if another thread does just
a task_schedule() or task_queue(), it might end up waiting for a long
time on this lock, and this wait time will count against its execution
budget, degrading the end user's experience and possibly risking to
trigger the watchdog if that lasts too long.

The simplest (and backportable) solution here consists in bounding the
number of expired tasks that may be picked from the global wait queue at
once by a thread, given that all other ones will do it as well anyway.

We don't need to pick more than global.tune.runqueue_depth tasks at once
as we won't process more, so this counter is updated for both the local
and the global queues: threads with more local expired tasks will pick
less global tasks and conversely, keeping the load balanced between all
threads. This will guarantee a much lower latency if/when wakeup storms
happen (e.g. hundreds of thousands of synchronized health checks).

Note that some crashes have been witnessed with 1/4 of the threads in
wake_expired_tasks() and, while the issue may or may not be related, the
lack of reasonable bounds here definitely explains why we could spend
so much time there.

This patch should be backported, probably as far as 2.0 (maybe with
some adaptations).
2020-10-16 15:18:48 +02:00
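
A hedged sketch of the bounding logic (the helper names are invented
for illustration; only global.tune.runqueue_depth comes from the
commit):

    int budget = global.tune.runqueue_depth; /* won't run more anyway */
    struct task *task;

    /* local expired tasks first: no global lock involved */
    while (budget > 0 && (task = pick_local_expired_sketch()) != NULL) {
        wake_task_sketch(task);
        budget--;
    }

    /* then the global queue, with whatever budget remains: threads
     * busy locally naturally pick fewer global tasks, keeping the
     * load balanced between all threads */
    HA_RWLOCK_WRLOCK(TASK_WQ_LOCK, &wq_lock);
    while (budget > 0 && (task = pick_global_expired_sketch()) != NULL) {
        wake_task_sketch(task);
        budget--;
    }
    HA_RWLOCK_WRUNLOCK(TASK_WQ_LOCK, &wq_lock);
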
Willy Tarreau
ba29687bc1 BUG/MEDIUM: proxy: properly stop backends
The proxy stopping mechanism was changed with commit 322b9b94e ("MEDIUM:
proxy: make stop_proxy() now use stop_listener()") so that it's now
entirely driven by the listeners. One thing was forgotten though, which
is that pure backends will not stop anymore since they don't have any
listener, and that it's necessary to stop them in order to stop the
health checks.

No backport is needed.
2020-10-16 15:16:17 +02:00
Willy Tarreau
4a32103a48 DOC: improve the documentation for "option nolinger"
This documents the various issues caused by option nolinger, as discussed
in issue #896.
2020-10-16 04:55:19 +02:00
Willy Tarreau
233ad288cd CLEANUP: protocol: remove the now unused <handler> field of proto_fam->bind()
We don't need to specify the handler anymore since it's set in the
receiver. Let's remove this argument from the function and clean up
the remains of code that were still setting it.
2020-10-15 21:47:56 +02:00
Willy Tarreau
a74cb38e7c MINOR: protocol: register the receiver's I/O handler and not the protocol's
Now we define a new sock_accept_iocb() for socket-based stream protocols
and use it as a wrapper for listener_accept() which now takes a listener
and not an FD anymore. This will allow the receiver's I/O cb to be
redefined during registration, and more specifically to get rid of the
hard-coded hacks in protocol_bind_all() made for syslog.

The previous ->accept() callback in the protocol was removed since it
doesn't have anything to do with accept() anymore but is more generic.
A few places where listener_accept() was compared against the FD's IO
callback for debugging purposes on the CLI were updated.
2020-10-15 21:47:56 +02:00
Willy Tarreau
e140a6921f MINOR: log: set the UDP receiver's I/O handler in the receiver
The I/O handler is syslog_fd_handler(), let's set it when creating
the receivers.
2020-10-15 21:47:56 +02:00
Willy Tarreau
d2fb99f9d5 MINOR: protocol: add a default I/O callback and put it into the receiver
For now we're still using the protocol's default accept() function as
the I/O callback registered by the receiver into the poller. While
this is usable for most TCP connections where a listener is needed,
this is not suitable for UDP where a different handler is needed.

Let's make this configurable in the receiver just like the upper layer
is configurable for listeners. In order to ease stream protocols
handling, the protocols will now provide a default I/O callback
which will be preset into the receivers upon allocation so that
almost none of them has to deal with it.
2020-10-15 21:47:56 +02:00
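
A sketch of the idea (field and function names are illustrative):

    #include <stdlib.h>

    struct receiver {
        int fd;
        void (*iocb)(int fd);         /* I/O callback given to the poller */
    };

    struct protocol {
        void (*default_iocb)(int fd); /* preset for stream protocols */
    };

    struct receiver *receiver_alloc_sketch(const struct protocol *proto)
    {
        struct receiver *rx = calloc(1, sizeof(*rx));

        if (rx)
            rx->iocb = proto->default_iocb; /* UDP (e.g. syslog) overrides it */
        return rx;
    }
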
Willy Tarreau
caa91de718 MEDIUM: listener: remove the second pass of fd manipulation at the end
The receiver FDs must not be manipulated by the listener_accept()
function anymore, it must exclusively rely on the job performed by
its listeners, as it is also the only way to keep the receivers
working for established connections regardless of the listener's
state (typically for multiplexed protocols like QUIC). This used
to be necessary when the FDs were adjusted at once only but now
that fd_done() is gone and the need for polling enabled by the
accept_conn() function which detects the EAGAIN, we have nothing
to do there to fixup any possible previous bad decision anymore.

Interestingly, as a side effect of making the code not depend on
the FD anymore, it also removes the need for a second lock, which
increase the accept rate by about 1% on 8 threads.
2020-10-15 21:47:56 +02:00
Willy Tarreau
9378bbe0be MEDIUM: listener: use protocol->accept_conn() to accept a connection
Now listener_accept() doesn't have to deal with the incoming FD anymore
(except for a little bit of side band stuff). It directly retrieves a
valid connection from the protocol layer, or receives a well-defined
error code that helps it decide how to proceed. This removes a lot of
hardly maintainable low-level code and opens the function to receive
new protocol stacks.
2020-10-15 21:47:56 +02:00
Willy Tarreau
344b8fcf87 MINOR: sockpair: implement sockpair_accept_conn() to accept a connection
This is the same as the previous commit, but this time for the sockpair-
specific stuff, relying on recv_fd_uxst() instead of accept(), so the
code is simpler. The various errno cases are handled like for regular
sockets, though some of them will probably never happen, but this does
not hurt.
2020-10-15 21:47:56 +02:00
Willy Tarreau
f1dc9f2f17 MINOR: sock: implement sock_accept_conn() to accept a connection
The socket-specific accept() code in listener_accept() has nothing to
do there. Let's move it to sock.c where it can be significantly cleaned
up. It will now directly return an accepted connection and provide a
status code instead of letting listener_accept() deal with various errno
values. Note that this doesn't support the sockpair specific code.

The function is now responsible for dealing with its own receiver's
polling state and calling fd_cant_recv() when facing EAGAIN.

One tiny change from the previous implementation is that the connection's
sockaddr is now allocated before trying accept(), which saves a memcpy()
of the resulting address for each accept at the expense of a cheap
pool_alloc/pool_free on the final accept returning EAGAIN. This still
apparently slightly improves accept performance in microbenchmarks.
2020-10-15 21:47:56 +02:00
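
A heavily simplified sketch of the flow described above (the status
names are illustrative; the real CO_AC_* codes appear in a commit
below):

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <errno.h>

    enum ac_status { AC_OK, AC_EMPTY, AC_RETRY, AC_FATAL };

    enum ac_status accept_one_sketch(int lfd, int *cfd,
                                     struct sockaddr_storage *addr)
    {
        socklen_t len = sizeof(*addr); /* storage allocated before accept() */

        *cfd = accept4(lfd, (struct sockaddr *)addr, &len,
                       SOCK_NONBLOCK | SOCK_CLOEXEC);
        if (*cfd >= 0)
            return AC_OK;

        switch (errno) {
        case EAGAIN:
            /* nothing left to accept: the real code calls fd_cant_recv()
             * here to update the receiver's polling state */
            return AC_EMPTY;
        case EINTR:
        case ECONNABORTED:
            return AC_RETRY;        /* transient, worth retrying at once */
        default:
            return AC_FATAL;        /* let the caller decide what to do */
        }
    }
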
Willy Tarreau
1e509a7231 MINOR: protocol: add a new function accept_conn()
This per-protocol function will be used to accept an incoming
connection and return it as a struct connection*. As such the protocol
stack's internal representation of a connection will not need to be
handled by the listener code.
2020-10-15 21:47:56 +02:00
Willy Tarreau
7d053e4211 MINOR: sock: rename sock_accept_conn() to sock_accepting_conn()
This call was introduced by commit 5ced3e887 ("MINOR: sock: add
sock_accept_conn() to test a listening socket") but is actually quite
confusing because it makes one think the socket will accept a connection
(which is what we want to have in a new function) while it only tells
whether it's configured to accept connections. Let's call it
sock_accepting_conn() instead.

The same change was applied to sockpair which had the same issue.
2020-10-15 21:47:56 +02:00
Willy Tarreau
65ed143841 MINOR: connection: add new error codes for accept_conn()
accept_conn() will be used to accept an incoming connection and return it.
It will have to deal with various error codes. The currently identified
ones were created as CO_AC_*.
2020-10-15 21:47:56 +02:00
Willy Tarreau
01ca149047 MINOR: session: simplify error path in session_accept_fd()
Now that this function is always called with an initialized connection
and that the control layer is always initialized, we don't need to play
games with fdtab[] to decide how to close, we can simply rely on the
regular close path using conn_ctrl_close(), which can be fused with
conn_xprt_close() into conn_full_close().

The code is cleaner because the FD is now used only for some
protocol-specific setup (that will eventually have to move) and to
try to send a hard-coded HTTP 500 error message on raw sockets.
2020-10-15 21:47:56 +02:00
Willy Tarreau
83efc320aa MEDIUM: listener: allocate the connection before queuing a new connection
Till now we would keep a per-thread queue of pending incoming connections
for which we would store:
  - the listener
  - the accepted FD
  - the source address
  - the source address' length

And these elements were first used in session_accept_fd() running on the
target thread to allocate a connection and duplicate them again. Doing
this induces various problems. The first one is that session_accept_fd()
may only run on file descriptors and cannot be reused for QUIC. The second
issue is that it induces lots of memory copies and that the listener
queue thrashes a lot of cache, consuming 64 bytes per entry.

This patch changes this by allocating the connection before queueing it,
and by only placing the connection's pointer into the queue. Indeed, the
first two calls used to initialize the connection already store all the
information above, which can be retrieved from the connection pointer
alone. So we just have to pop one pointer from the target thread, and
pass it to session_accept_fd() which only needs the FD for the final
settings.

This starts to make the accept path a bit more transport-agnostic, and
saves memory and CPU cycles at the same time (1% connection rate increase
was noticed with 4 threads). Thanks to dividing the accept-queue entry
size from 64 to 8 bytes, its size could be increased from 256 to 1024
connections while still dividing the overall size by two. No single
queue full condition was met.

One minor drawback is that a connection may be allocated from one thread's
pool to be used in another one. But this already happens a lot with
connection reuse, so there is really nothing new here.
2020-10-15 21:47:56 +02:00
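
The entry shrink can be pictured as follows (the struct names are
invented; the sizes and ring depths come from the commit message):

    #include <sys/socket.h>

    struct listener;     /* opaque here */
    struct connection;   /* opaque here */

    /* before: ~64 bytes copied per queued connection */
    struct accept_entry_old {
        struct listener *listener;
        int fd;
        socklen_t addr_len;
        struct sockaddr_storage addr;
    };

    /* after: the connection is allocated before queueing and already
     * carries the listener, FD and address, so one pointer (8 bytes)
     * suffices; the ring could grow from 256 to 1024 entries while
     * still halving the overall size */
    struct accept_entry_new {
        struct connection *conn;
    };
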
Willy Tarreau
9b7587a6af MINOR: connection: make sockaddr_alloc() take the address to be copied
Roughly half of the calls to sockaddr_alloc() are made to copy an already
known address. Let's optionally pass it in argument so that the function
can handle the copy at the same time; this slightly simplifies its usage.
2020-10-15 21:47:56 +02:00
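
A sketch of the optional-copy behaviour, with an assumed signature
(not necessarily the exact one):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>

    /* when <orig> is non-NULL, the freshly allocated storage is filled
     * with a copy of it, sparing each caller its own memcpy() */
    struct sockaddr_storage *sockaddr_alloc_sketch(struct sockaddr_storage **sap,
                                                   const struct sockaddr_storage *orig,
                                                   socklen_t len)
    {
        struct sockaddr_storage *sa = malloc(sizeof(*sa));

        if (sa && orig)
            memcpy(sa, orig, len);
        if (sap)
            *sap = sa;
        return sa;
    }
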
Willy Tarreau
0138f51f93 CLEANUP: fd: finally get rid of fd_done_recv()
fd_done_recv() used to be useful with the FD cache because it allowed
keeping a file descriptor active in the poller without being
marked as ready in the cache, saving it from ringing immediately,
without incurring any system call. It was a way to make it yield
to wait for new events, leaving a bit of time for others. The only
user left was the connection accepter (listener_accept()). We used
to suspect that with the FD cache removal it had become totally
useless since changing its readiness or not wouldn't change its
status regarding the poller itself, which would be the only one
deciding to report it again.

Careful tests showed that it indeed has exactly zero effect nowadays,
the syscall numbers are exactly the same with and without, including
when enabling edge-triggered polling.

Given that there's no more API available to manipulate it and that it
was directly called as an optimization from listener_accept(), it's
about time to remove it.
2020-10-15 21:47:56 +02:00
Willy Tarreau
e53e7ec9d9 CLEANUP: protocol: remove the ->drain() function
No protocol defines it anymore. The last user used to be the monitor-net
stuff that got partially broken already when the tcp_drain() function
moved to conn_sock_drain() with commit e215bba95 ("MINOR: connection:
make conn_sock_drain() work for all socket families") in 1.9-dev2.

A part of this will surely move back later when non-socket connections
arrive with QUIC but better keep the API clean and implement what's
needed in time instead.
2020-10-15 21:47:04 +02:00
Willy Tarreau
9e9919dd8b MEDIUM: proxy: remove obsolete "monitor-net"
As discussed here during 2.1-dev, "monitor-net" is totally obsolete:

   https://www.mail-archive.com/haproxy@formilux.org/msg35204.html

It's fundamentally incompatible with usage of SSL, and imposes the
presence of file descriptors with hard-coded syscalls directly in the
generic accept path.

It's very unlikely that anyone has used it in the last 10 years for
anything beyond testing. In the worst case if anyone would depend
on it, replacing it with "http-request return status 200 if ..." and
"mode http" would certainly do the trick.

The keyword is still detected as special by the config parser to help
users update their configurations appropriately.
2020-10-15 21:47:04 +02:00
Willy Tarreau
77e0daef9f MEDIUM: proxy: remove obsolete "mode health"
As discussed here during 2.1-dev, "mode health" is totally obsolete:

   https://www.mail-archive.com/haproxy@formilux.org/msg35204.html

It's fundamentally incompatible with usage of SSL, doesn't support
source filtering, and imposes the presence of file descriptors with
hard-coded syscalls directly in the generic accept path.

It's very unlikely that anyone has used it in the last 10 years for
anything beyond testing. In the worst case if anyone would depend
on it, replacing it with "http-request return status 200" and "mode
http" would certainly do the trick.

The keyword is still detected as special by the config parser to help
users update their configurations appropriately.
2020-10-15 21:47:04 +02:00
Amaury Denoyelle
2717965387 DOC: Describe reuse safe for HOL handling
Explain the special case of server connections using a protocol subject
to HOL blocking in http-reuse safe mode.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
46f041d7f8 MEDIUM: fcgi: remove conn from session on detach
FCGI mux is marked with HOL blocking. In safe reuse mode, the connections
using it are placed on the sessions instead of the available lists, to
avoid sharing them with several clients. On detach, if there are no more
streams, remove the connection from the session before adding it to the
idle list. If there are still streams in use, do not add it to the
available list, as it should already be on the session list.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
6b8daef56b MEDIUM: h2: remove conn from session on detach
H2 mux is marked with HOL blocking. In safe reuse mode, the connections
using it are placed on the sessions instead of the available lists, to
avoid sharing them with several clients. On detach, if there are no more
streams, remove the connection from the session before adding it to the
idle list. If there are still streams in use, do not add it to the
available list, as it should already be on the session list.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
0d21deaded MEDIUM: backend: add reused conn to sess if mux marked as HOL blocking
If a connection is using a mux protocol subject to HOL blocking, add it
to the session instead of the available list to avoid sharing it with
other clients on connection reuse.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
00464ab8f4 MEDIUM: backend: add new conn to session if mux marked as HOL blocking
When allocating a new connection in connect_server(), if the mux protocol
is marked as subject to HOL blocking, add the connection to the session
instead of the available list, to avoid sharing it with other clients.
2020-10-15 15:19:34 +02:00
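
The decision made by these two backend commits can be sketched as
follows (the flag, list and helper names are assumptions based on the
surrounding commits, not verified code):

    /* on connection setup or reuse in the backend */
    if (srv->mux_proto->flags & MX_FL_HOL_RISK) {
        /* multiplexing could let one slow client stall the others:
         * keep this connection attached to its session only */
        session_add_conn(sess, conn, conn->target);
    }
    else {
        /* safe to share with other sessions */
        LIST_ADDQ(&srv->available_conns, &conn->list);
    }
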
Amaury Denoyelle
04a24c5eaa MINOR: connection: don't check priv flag on free
Do not check the CO_FL_PRIVATE flag to decide if the connection is in the
session list on conn_free. This is necessary because future patches will
add server connections to the session list even when they are not private,
if the mux protocol is subject to HOL blocking.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
3d3c0918dc MINOR: mux/connection: add a new mux flag for HOL risk
This flag is used to indicate if the mux protocol is subject to
head-of-line blocking problem.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
c98df5fb44 MINOR: connection: improve list api usage
Replace !LIST_ISEMPTY by LIST_ADDED and LIST_DEL+LIST_INIT by
LIST_DEL_INIT for connection session list.
2020-10-15 15:19:34 +02:00
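
In terms of code, the conversion looks like this (a hedged sketch;
do_something() is a placeholder):

    /* before: membership tested via emptiness, detach in two steps */
    if (!LIST_ISEMPTY(&conn->session_list))
        do_something();
    LIST_DEL(&conn->session_list);
    LIST_INIT(&conn->session_list);

    /* after: LIST_ADDED() directly asks "is this element linked?", and
     * LIST_DEL_INIT() fuses the detach and re-initialization */
    if (LIST_ADDED(&conn->session_list))
        do_something();
    LIST_DEL_INIT(&conn->session_list);
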
Amaury Denoyelle
9c13b62b47 BUG/MEDIUM: connection: fix srv idle count on conn takeover
On server connection migration from one thread to another, the wrong
thread-specific idle counter is decremented. This bug was introduced
by commit 3d52f0f1f8 due to the
factorization with srv_use_idle_conn. However, this statement is only
executed from conn_backend_get. Extract the decrement from
srv_use_idle_conn into conn_backend_get and use the correct
thread-specific counter.

Rename the function to srv_use_conn to better reflect its purpose as it
is also used with a newly initialized connection not in the idle list.

As a side change, the connection insertion into the available list has
also been extracted to conn_backend_get. This will be useful to be able
to specify an alternative list for protocols subject to HOL risk that
should not be shared between several clients.

This bug is only present in this release and thus does not need a backport.
2020-10-15 15:19:34 +02:00
Amaury Denoyelle
5f1ded5629 BUG/MINOR: connection: fix loop iter on connection takeover
The loop always missed one iteration due to the increment being done in
the for check. Move the increment to the loop's last statement to fix
this behaviour.

This bug has a very limited impact, not at all visible to the user, but
could be backported to 2.2.
2020-10-15 15:19:25 +02:00
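
A generic reconstruction of this bug class (not the actual takeover
loop; body() and n are placeholders):

    int i;

    /* buggy: the increment lives in the check, so it happens before
     * the first test of the body and index 0 is never processed */
    i = 0;
    while (++i < n)
        body(i);        /* runs for 1 .. n-1 only */

    /* fixed: increment as the loop's last statement */
    for (i = 0; i < n; i++)
        body(i);        /* runs for 0 .. n-1 */
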