Commit Graph

20785 Commits

Amaury Denoyelle
ac1164de7c MINOR: connection: define error for reverse connect
Define a new connection error code, CO_ER_REVERSE. This will be used
to report an issue which happens on a connection targeted for reversal
before the reverse process is completed.
2023-09-29 18:08:26 +02:00
Amaury Denoyelle
753fe2b9ac BUG/MINOR: tcp_act: fix attach-srv rule ACL parsing
Fix the parser for the tcp-request session attach-srv rule. Before this
commit, it was impossible to use an anonymous ACL with it, because
support for the optional name argument was badly implemented.
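
For illustration only, a hedged sketch of such a rule using an anonymous
ACL (the frontend, backend and server names are hypothetical, and the
exact syntax should be checked against the configuration manual):

    frontend fe_rev
        bind :7000
        # attach the just-accepted connection to this server for reversal,
        # guarded by an anonymous ACL
        tcp-request session attach-srv be_edge/srv1 if { src 192.168.0.0/16 }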

No need to backport this.
2023-09-29 18:07:52 +02:00
Amaury Denoyelle
6118590e95 BUG/MINOR: proto_reverse_connect: fix FD leak on connection error
A listener using a "rev@" address is responsible for setting up a
connection and reversing it using a server instance. If an error occurs
before reversal is completed, proper freeing must be taken care of by the
listener, as no session exists for this.

Currently, there are two locations where a connection is freed on error
before reversal inside the reverse_connect protocol. Both of these were
incomplete, as several functions must be used to ensure the connection is
properly freed. This commit fixes this by reusing the same cleaning
mechanism used inside the H2 multiplexer.

One of the biggest drawbacks before this patch was that the connection FD
was not properly removed from the fdtab, which caused a file-descriptor leak.

No need to backport this.
2023-09-29 18:02:36 +02:00
Willy Tarreau
b3dcd59f8d MINOR: stream: fix output alignment of stuck thread dumps
Since commit c185bc465 ("MEDIUM: stream: now provide full stream dumps
in case of loops"), the stuck threads show the stream's pointer in the
margin since it appears immediately after a line feed. Let's add it after
the prefix and "stream=" to make the output more readable.
2023-09-29 16:43:07 +02:00
Emeric Brun
3c250cb847 Revert "BUG/MEDIUM: quic: missing check of dcid for init pkt including a token"
This reverts commit 072e774939.

While doing h2load tests with h3, we noticed this behavior:

Client ---- INIT no token SCID = a , DCID = A ---> Server (1)
Client <--- RETRY+TOKEN DCID = a, SCID = B    ---- Server (2)
Client ---- INIT+TOKEN SCID = a , DCID = B    ---> Server (3)
Client <--- INIT DCID = a, SCID = C           ---- Server (4)
Client ---- INIT+TOKEN SCID = a, DCID = C     ---> Server (5)

With (5) dropped by haproxy due to token validation.

Indeed, the previous patch added the SCID of the sent Retry packet to the
AAD used for token ciphering. It was useful to validate that the next INIT
packets including the token are sent by the client using the newly
provided SCID as DCID, as mentioned in RFC 9000.
But this stateless information is lost on received INIT packets
following the first outgoing INIT packet from the server, because
the client is also supposed to re-use a second time the latest
received SCID for its new DCID. This breaks the token validation
on those last packets and they will be dropped by haproxy.

It was discussed there:
https://mailarchive.ietf.org/arch/msg/quic/7kXVvzhNCpgPk6FwtyPuIC6tRk0/

To summarize: it is not the role of the server to verify the re-use of
the Retry packet's SCID as DCID in further client INIT packets.

The previous patch must be reverted in all versions where it was
backported (supposedly down to 2.6).
2023-09-29 09:27:22 +02:00
Willy Tarreau
d956db6638 CLEANUP: stream: remove the now unused stream_dump() function
It was superseded by strm_dump_to_buffer() which provides much more
complete information and supports anonymizing.
2023-09-29 09:20:27 +02:00
Willy Tarreau
feff6296a1 MINOR: debug: use the more detailed stream dump in panics
Similarly upon a panic we'd like to have a more detailed dump of a
stream's state, so let's use the full dump function for this now.
2023-09-29 09:20:27 +02:00
Willy Tarreau
c185bc4656 MEDIUM: stream: now provide full stream dumps in case of loops
When a stream is caught looping, we produce some output to help figure
out its internal state and explain why it's looping. The problem is that
this debug output is quite old and the info it provides is quite
insufficient to debug a modern process, and since such bugs happen only
once or twice a year the situation doesn't improve.

On the other hand the output of "show sess all" is extremely detailed
and kept up to date with code evolutions since it's a heavily used
debugging tool.

This commit replaces the call to the totally outdated stream_dump() with
a call to strm_dump_to_buffer(), and removes the filters dump since they
are already emitted there, and it now produces much more exploitable
output:

  [ALERT]    (5936) : A bogus STREAM [0x7fa8dc02f660] is spinning at 5653514 calls per second and refuses to die, aborting now! Please report this error to developers:
  0x7fa8dc02f660: [28/Sep/2023:09:53:08.811818] id=2 proto=tcpv4 source=127.0.0.1:58306
     flags=0xc4a, conn_retries=0, conn_exp=<NEVER> conn_et=0x000 srv_conn=0x133f220, pend_pos=(nil) waiting=0 epoch=0x1
     frontend=public (id=2 mode=http), listener=? (id=1) addr=127.0.0.1:4080
     backend=public (id=2 mode=http) addr=127.0.0.1:61932
     server=s1 (id=1) addr=127.0.0.1:7443
     task=0x7fa8dc02fa40 (state=0x01 nice=0 calls=5749559 rate=5653514 exp=3s tid=1(1/1) age=1s)
     txn=0x7fa8dc02fbf0 flags=0x3000 meth=1 status=-1 req.st=MSG_DONE rsp.st=MSG_RPBEFORE req.f=0x4c rsp.f=0x00
     scf=0x7fa8dc02f5f0 flags=0x00000482 state=EST endp=CONN,0x7fa8dc02b4b0,0x05004001 sub=1 rex=58s wex=<NEVER>
         h1s=0x7fa8dc02b4b0 h1s.flg=0x100010 .sd.flg=0x5004001 .req.state=MSG_DONE .res.state=MSG_RPBEFORE
          .meth=GET status=0 .sd.flg=0x05004001 .sc.flg=0x00000482 .sc.app=0x7fa8dc02f660
          .subs=0x7fa8dc02f608(ev=1 tl=0x7fa8dc02fae0 tl.calls=0 tl.ctx=0x7fa8dc02f5f0 tl.fct=sc_conn_io_cb)
          h1c=0x7fa8dc0272d0 h1c.flg=0x0 .sub=0 .ibuf=0@(nil)+0/0 .obuf=0@(nil)+0/0 .task=0x7fa8dc0273f0 .exp=<NEVER>
         co0=0x7fa8dc027040 ctrl=tcpv4 xprt=RAW mux=H1 data=STRM target=LISTENER:0x12840c0
         flags=0x00000300 fd=32 fd.state=20 updt=0 fd.tmask=0x2
     scb=0x7fa8dc02fb30 flags=0x00001411 state=EST endp=CONN,0x7fa8dc0300c0,0x05000001 sub=1 rex=58s wex=<NEVER>
         h1s=0x7fa8dc0300c0 h1s.flg=0x4010 .sd.flg=0x5000001 .req.state=MSG_DONE .res.state=MSG_RPBEFORE
          .meth=GET status=0 .sd.flg=0x05000001 .sc.flg=0x00001411 .sc.app=0x7fa8dc02f660
          .subs=0x7fa8dc02fb48(ev=1 tl=0x7fa8dc02feb0 tl.calls=2 tl.ctx=0x7fa8dc02fb30 tl.fct=sc_conn_io_cb)
          h1c=0x7fa8dc02ff00 h1c.flg=0x80000000 .sub=1 .ibuf=0@(nil)+0/0 .obuf=0@(nil)+0/0 .task=0x7fa8dc030020 .exp=<NEVER>
         co1=0x7fa8dc02fcd0 ctrl=tcpv4 xprt=RAW mux=H1 data=STRM target=SERVER:0x133f220
         flags=0x10000300 fd=33 fd.state=10421 updt=0 fd.tmask=0x2
     req=0x7fa8dc02f680 (f=0x1840000 an=0x8000 pipe=0 tofwd=0 total=79)
         an_exp=<NEVER> buf=0x7fa8dc02f688 data=(nil) o=0 p=0 i=0 size=0
         htx=0xc18f60 flags=0x0 size=0 data=0 used=0 wrap=NO extra=0
     res=0x7fa8dc02f6d0 (f=0x80000000 an=0x1400000 pipe=0 tofwd=0 total=0)
         an_exp=<NEVER> buf=0x7fa8dc02f6d8 data=(nil) o=0 p=0 i=0 size=0
         htx=0xc18f60 flags=0x0 size=0 data=0 used=0 wrap=NO extra=0
    call trace(10):
    |       0x59f2b7 [0f 0b 0f 1f 80 00 00 00]: stream_dump_and_crash+0x1f7/0x2bf
    |       0x5a0d71 [e9 af e6 ff ff ba 40 00]: process_stream+0x19f1/0x3a56
    |       0x68d7bb [49 89 c7 4d 85 ff 74 77]: run_tasks_from_lists+0x3ab/0x924
    |       0x68e0b4 [29 44 24 14 8b 4c 24 14]: process_runnable_tasks+0x374/0x6d6
    |       0x656f67 [83 3d f2 75 84 00 01 0f]: run_poll_loop+0x127/0x5a8
    |       0x6575d7 [48 8b 1d 42 50 5c 00 48]: main+0x1b22f7
    | 0x7fa8e0f35e45 [64 48 89 04 25 30 06 00]: libpthread:+0x7e45
    | 0x7fa8e0e5a4af [48 89 c7 b8 3c 00 00 00]: libc:clone+0x3f/0x5a

Note that the output is subject to the global anon key so that IPs and
object names can be anonymized if required. It could make sense to
backport this and the few related previous patches next time such an
issue is reported.
2023-09-29 09:20:27 +02:00
Willy Tarreau
b206504f43 MINOR: streams: add support for line prefixes to strm_dump_to_buffer()
Now the function can prepend every new line with a caller-fed prefix
that will later be used for indenting. The caller has to feed the
prefix for the first line itself though, allowing the first line to
possibly be appended at the end of an existing one.
2023-09-29 09:20:27 +02:00
Willy Tarreau
5743eeea88 MINOR: stream: make stream_dump() always multi-line
There used to be two working modes for this function, a single-line one
and a multi-line one, the difference being made on the "eol" argument
which could contain either a space or an LF (and with the prefix being
adjusted accordingly). Let's get rid of the single-line mode as it's
what limits the output contents because it's difficult to produce
exploitable structured data this way. It was only used in the rare case
of spinning streams and applets and these are the ones lacking info. Now
a spinning stream produces:

[ALERT]    (3511) : A bogus STREAM [0x227e7b0] is spinning at 5581202 calls per second and refuses to die, aborting now! Please report this error to developers:
  strm=0x227e7b0,c4a src=127.0.0.1 fe=public be=public dst=s1
  txn=0x2041650,3000 txn.req=MSG_DONE,4c txn.rsp=MSG_RPBEFORE,0
  rqf=1840000 rqa=8000 rpf=80000000 rpa=1400000
  scf=0x24af280,EST,482 scb=0x24af430,EST,1411
  af=(nil),0 sab=(nil),0
  cof=0x7fdb28026630,300:H1(0x24a6f60)/RAW((nil))/tcpv4(33)
  cob=0x23199f0,10000300:H1(0x24af630)/RAW((nil))/tcpv4(32)
  filters={}
  call trace(11):
  (...)
2023-09-29 09:20:27 +02:00
Willy Tarreau
5ddeba7af3 MINOR: stream: make strm_dump_to_buffer() show the list of filters
That's one of the rare pieces of information that was not present in
the full dump and only in the short one, the list of filters the stream
is subscribed to (however the current filter was present and more
detailed).
2023-09-29 09:20:27 +02:00
Willy Tarreau
3e630a9871 MINOR: stream: make strm_dump_to_buffer() take an arbitrary buffer
We won't always want to dump into the trash, so let's make the function
accept an arbitrary buffer.
2023-09-29 09:20:27 +02:00
Willy Tarreau
6bc07103f8 CLEANUP: stream: make strm_dump_to_buffer() take a const stream
Now that we don't need a variable anymore, let's pass a const stream.
It will remove any doubt about what can happen to the stream when the
function is called from inspection points (show sess etc).
2023-09-29 09:20:27 +02:00
Willy Tarreau
1a01ee4740 CLEANUP: stream: use const filters in the dump function
The strm_dump_to_buffer() function requires a variable stream only
for a few functions in it that do not take a const. strm_flt() is
one of them (and for good reasons since most call places want to
update filters). Here we know we won't modify the filter nor the
stream so let's directly access the strm_flt in the stream and assign
it to a const filter. This will also catch any future accidental change.
2023-09-29 09:20:27 +02:00
Willy Tarreau
77ecb3146a MINOR: stream: split stats_dump_full_strm_to_buffer() in two
The function only works with the CLI's appctx and does most of the
convenient work of dumping a stream into a buffer (well, the trash
buffer for now). Let's split it in two so that most of the work is
done in a generic function and that the CLI-specific function relies
on that one.

The diff looks huge due to the changed indent caused by the extraction
of the switch/case statement, but when looked at using diff -b it's
small.
2023-09-29 09:20:27 +02:00
Willy Tarreau
6c2af048d6 CLEANUP: stream: make the dump code not depend on the CLI appctx
The HA_ANON_CLI() helper relies on the CLI appctx and prevents the code
from being made more generic. Let's extract the CLI's anon key separately
and pass it via HA_ANON_STR() instead.
2023-09-29 09:20:27 +02:00
Willy Tarreau
48b2233d36 CLEANUP: freq_ctr: make all freq_ctr readers take a const
Since 2.4-dev18 with commit b4476c6a8 ("CLEANUP: freq_ctr: make
arguments of freq_ctr_total() const"), most of the freq_ctr readers
should be fine with a const, except that they were not updated to
reflect this and they continue to force a non-const argument on some of
the functions that call them. Let's update this. This could even be backported if
needed.
2023-09-29 09:20:27 +02:00
Amaury Denoyelle
7cf9cf705e BUG/MINOR: mux-quic: remove full demux flag on ncbuf release
When the rcv_buf stream callback is invoked, the mux tasklet is woken up if
demux was previously blocked due to lack of buffer space. A BUG_ON() is
present to ensure there is data in the qcs Rx buffer. If this is not the
case, the wakeup is unneeded:

  BUG_ON(!ncb_data(&qcs->rx.ncbuf, 0));

This BUG_ON() may be triggered if RESET_STREAM is received after demux
has been blocked. On reset, the Rx buffer is purged according to RFC 9000,
which allows discarding any data not yet consumed. This will trigger the
BUG_ON() assertion if the rcv_buf stream callback is invoked after this.

To prevent the BUG_ON() crash, just clear the demux block flag each time
the Rx buffer is purged. This accordingly covers RESET_STREAM reception.

This should be backported up to 2.7.

This may fix github issue #2293.

This bug relies on several preconditions so its occurrence is rare. It
was reproduced by using a custom client which posts data big enough to
fill the buffer. It then emits a RESET_STREAM in place of a proper FIN.
Moreover, the mux code has been edited to artificially stall stream reads
to force demux blocking.

h3_data_to_htx:
-       return htx_sent;
+       return 1;

qcc_recv_reset_stream:
        qcs_free_ncbuf(qcs, &qcs->rx.ncbuf);
+       qcs_notify_recv(qcs);

qmux_strm_rcv_buf:
        char fin = 0;
+       static int i = 0;
+       if (++i < 2)
+               return 0;
        TRACE_ENTER(QMUX_EV_STRM_RECV, qcc->conn, qcs);
2023-09-28 11:44:53 +02:00
Vladimir Vdovin
f8b81f6eb7 MINOR: support for http-request set-timeout client
Added set-timeout for the frontend side of the session, so it can be used
to set custom per-client timeouts if needed. Also added cur_client_timeout
to fetch client timeout samples.
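
As a hedged illustration (section names, the timeout value and the exact
keyword placement are assumptions, not taken from this commit):

    frontend fe_main
        bind :8080
        # raise the client timeout for slow endpoints (assumed syntax)
        http-request set-timeout client 60s if { path_beg /upload }
        # expose the effective client timeout via the new sample fetch
        http-response set-header X-Client-Timeout %[cur_client_timeout]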
2023-09-28 08:49:22 +02:00
Willy Tarreau
f75a369009 [RELEASE] Released version 2.9-dev6
Released version 2.9-dev6 with the following main changes :
    - BUG/MINOR: quic: fdtab array underflow access
    - DEBUG: pools: always record the caller for uncached allocs as well
    - DEBUG: pools: pass the caller pointer to the check functions and macros
    - DEBUG: pools: make pool_check_pattern() take a pointer to the pool
    - DEBUG: pools: inspect pools on fatal error and dump information found
    - BUG/MEDIUM: quic: quic_cc_conn ->cntrs counters unreachable
    - DEBUG: pools: also print the item's pointer when crashing
    - DEBUG: pools: also print the value of the tag when it doesn't match
    - DEBUG: pools: print the contents surrounding the expected tag location
    - MEDIUM: pools: refine pool size rounding
    - BUG/MEDIUM: hlua: don't pass stale nargs argument to lua_resume()
    - BUG/MINOR: hlua/init: coroutine may not resume itself
    - BUG/MEDIUM: mux-fcgi: Don't swap trash and dbuf when handling STDERR records
    - BUG/MINOR: promex: fix backend_agg_check_status
    - BUG/MEDIUM: master/cli: Pin the master CLI on the first thread of the group 1
    - MAJOR: import: update mt_list to support exponential back-off
    - CLEANUP: pools: simplify the pool expression when no pool was matched in dump
    - MINOR: samples: implement bytes_in and bytes_out samples
    - DOC: configuration: add %[req.ver] sample to %HV
    - BUG/MINOR: quic: Leak of frames to send.
    - DOC: configuration: add %[query] to %HQ
    - BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API
    - BUG/MAJOR: mux-h2: Report a protocol error for any DATA frame before headers
    - BUILD: quic: fix build on centos 8 and USE_QUIC_OPENSSL_COMPAT
    - Revert "MAJOR: import: update mt_list to support exponential back-off"
    - BUG/MINOR: server: add missing free for server->rdr_pfx
    - REGTESTS: ssl: skip OCSP test w/ WolfSSL
    - REGTESTS: ssl: skip generate-certificates test w/ wolfSSL
    - MINOR: logs: clarify the check of the log range
    - MINOR: log: remove the unused curr_idx in struct smp_log_range
    - CLEANUP: logs: rename a confusing local variable "curr_rg" to "smp_rg"
    - MINOR: logs: use a single index to store the current range and index
    - MEDIUM: logs: atomically check and update the log sample index
    - CLEANUP: ring: rename the ring lock "RING_LOCK" instead of "LOGSRV_LOCK"
    - BUG/MEDIUM: http-ana: Try to handle response before handling server abort
    - MEDIUM: tools/ip: v4tov6() and v6tov4() rework
    - MINOR: pattern/ip: offload ip conversion logic to helper functions
    - MINOR: pattern: fix pat_{parse,match}_ip() function comments
    - MINOR: pattern/ip: simplify pat_match_ip() function
    - BUG/MEDIUM: server/cli: don't delete a dynamic server that has streams
    - MINOR: hlua: Add support for the "http-after-res" action
    - BUG/MINOR: proto_reverse_connect: fix preconnect with startup name resolution
    - MINOR: proto_reverse_connect: prevent transparent server for pre-connect
    - CI: cirrus-ci: display gdb bt if any
    - MEDIUM: sample: Enhances converter "bytes" to take variable names as arguments
    - MEDIUM: sample: Small fix in function check_operator for error reporting
    - MINOR: quic: handle external extra CIDs generator.
    - BUG/MINOR: proto_reverse_connect: set default maxconn
    - MINOR: proto_reverse_connect: refactor preconnect failure
    - MINOR: proto_reverse_connect: remove unneeded wakeup
    - MINOR: proto_reverse_connect: emit log for preconnect
2023-09-22 23:11:31 +02:00
Amaury Denoyelle
b9bb3b932c MINOR: proto_reverse_connect: emit log for preconnect
Add reporting using send_log() for the preconnect operation. This is
minimal, to ensure we understand the current status of a listener in
active reverse connect.

To limit logging quantity, only important transitions are considered.
This requires implementing a minimal state machine as a new field in
the receiver structure.

Here are the logs produced:
* Initiating: first time preconnect is enabled on a listener
* Error: last preconnect attempt interrupted on a connection error
* Reaching maxconn: all necessary connections were reversed and are
  operational on a listener
2023-09-22 17:21:53 +02:00
Amaury Denoyelle
069ca55e70 MINOR: proto_reverse_connect: remove unneeded wakeup
No need to use task_wakeup() in rev_bind_listener() to bootstrap
preconnect. A similar call is done in rev_enable_listener(), which serves
both for bootstrap and also later to reinitiate attempts to maintain
maxconn if connections are freed.
2023-09-22 17:06:18 +02:00
Amaury Denoyelle
1f43fb71be MINOR: proto_reverse_connect: refactor preconnect failure
When a connection is freed during preconnect before reversal, the error
must be notified to the listener to remove any connection reference and
rearm a new preconnect attempt. Currently, this can occur through 2 code
paths:
* conn_free() called directly by H2 mux
* error during conn_create_mux(). For this case, connection is flagged
  with CO_FL_ERROR and reverse_connect task is woken up. The process
  task handler is then responsible to call conn_free() for such
  connection.

Duplicated steps were done both in conn_free() and the process task
handler. These are now removed. To facilitate code maintenance,
dedicated operations have been centralized in a new function,
rev_notify_preconn_err(), which is called by conn_free().
2023-09-22 16:43:36 +02:00
Amaury Denoyelle
a37abee266 BUG/MINOR: proto_reverse_connect: set default maxconn
If maxconn is not set for preconnect, it is assumed we want to establish a
single connection. However, this does not work properly in case the
connection is closed after reversal: the listener is not resumed by the
protocol layer to attempt a new preconnect.

To fix this, explicitly set maxconn to 1 in the listener instance if
none is defined. This ensures the behavior is consistent. A BUG_ON() has
been added to validate we never try to use a listener with a maxconn of 0.
2023-09-22 16:40:58 +02:00
Emeric Brun
27b2fd2e06 MINOR: quic: handle external extra CIDs generator.
This patch adds the ability to externalize and customize the code
computing the extra CIDs after the first one has been derived from
the ODCID.

This is to prepare interoperability with extra components such as
different QUIC proxies or routers for instance.

To do so, the patch defines two function callbacks:
- the first one computes a 64-bit hash from the first generated CID
  (itself still derived from the ODCID). The resulting hash is stored
  into the 'quic_conn'; 64 bits is chosen as it is large enough to
  store an entire haproxy CID.
- the second callback re-uses the previously computed hash to derive
  an extra CID using the custom algorithm. If not set, haproxy will
  continue to choose a randomized CID value.

Those two functions also have the 'cluster_secret' passed as an argument:
this way, it is usable for obfuscation or ciphering.
2023-09-22 10:32:14 +02:00
Lokesh Jindal
d897d7da87 MEDIUM: sample: Small fix in function check_operator for error reporting
When function "check_operator" calls function "vars_check_arg" to decode
a variable, it passes in NULL value for pointer to the char array meant
for capturing the error message.  This commit replaces NULL with the
pointer to the real char array.  This should help in correct error
reporting.
2023-09-22 08:48:53 +02:00
Lokesh Jindal
915e48675a MEDIUM: sample: Enhances converter "bytes" to take variable names as arguments
Prior to this commit, the "bytes" converter took only integer values as
arguments. After this commit, it can also take variable names as inputs.
This allows us to dynamically determine the offset/length and capture
them in variables. These variables can then be used with the converter.
Example use case: parsing a token present in a request header.
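
A hedged sketch of that use case (header names and variable names are
purely illustrative):

    frontend fe_api
        bind :8080
        # store offset and length taken from request headers into variables
        http-request set-var(txn.tok_off) req.hdr(x-token-offset)
        http-request set-var(txn.tok_len) req.hdr(x-token-length)
        # pass the variable names to the bytes converter
        http-request set-header X-Token %[req.hdr(authorization),bytes(txn.tok_off,txn.tok_len)]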
2023-09-22 08:48:51 +02:00
Ilya Shipitsin
6601317b3b CI: cirrus-ci: display gdb bt if any
Previously, if the test process crashed (either BUG_ON or segfault), no
coredump was collected and analysed.
2023-09-22 08:28:30 +02:00
Amaury Denoyelle
d3db96f11a MINOR: proto_reverse_connect: prevent transparent server for pre-connect
Prevent using transparent servers for pre-connect on startup by emitting
a fatal error. This is used to ensure we never try to connect to a
target with an unspecified destination address or port.
2023-09-21 16:58:08 +02:00
Amaury Denoyelle
9b6812d781 BUG/MINOR: proto_reverse_connect: fix preconnect with startup name resolution
The addr member of the server structure is not set consistently depending
on the server address type. When using <IP:PORT> notation, its port is
properly set. However, when using <HOSTNAME:PORT>, only the IP address is
set after startup name resolution and the port is left at 0.

This behavior causes preconnect to not be functional when using a server
with a hostname for startup name resolution. Indeed, only srv.addr is used
as connect argument through the function new_reverse_conn(). To fix this,
rely on srv.svc_port: this member is always set for servers using an IP or
a hostname. This is similar to connect_server() on the backend side.

This does not need to be backported.
2023-09-21 16:57:30 +02:00
Sébastien Gross
6a9ba85322 MINOR: hlua: Add support for the "http-after-res" action
This commit introduces support for the "http-after-res" action in
hlua, enabling the invocation of a Lua function in a
"http-after-response" rule. With this enhancement, a Lua action can be
registered using the "http-after-res" action type:

    core.register_action('myaction', {'http-after-res'}, myaction)

A new "lua.myaction" is created and can be invoked in a
"http-after-response" rule:

    http-after-response lua.myaction

This addition provides greater flexibility and extensibility in
handling post-response actions using Lua.

This commit depends on:
 - 4457783 ("MINOR: http_ana: position the FINAL flag for http_after_res execution")

Signed-off-by: Sébastien Gross <sgross@haproxy.com>
2023-09-21 16:31:20 +02:00
Aurelien DARRAGON
95c4d24825 BUG/MEDIUM: server/cli: don't delete a dynamic server that has streams
In cli_parse_delete_server(), we take care of checking that the server is
in MAINT and that the cur_sess counter is set to 0, in the hope that no
connection/stream resources continue to point to the server, else we
refuse to delete it.

As shown in GH #2298, this is not sufficient.

Indeed, when the server option "on-marked-down shutdown-sessions" is not
used, server streams are not purged when srv enters maintenance mode.

As such, there could be remaining streams that point to the server. To
detect this, a secondary check on srv->cur_sess counter was performed in
cli_parse_delete_server(). Unfortunately, there are some code paths that
could lead to cur_sess being decremented, and not resulting in a stream
being actually shutdown. As such, if the delete_server cli is handled
right after cur_sess has been decremented with streams still pointing to
the server, we could face some nasty bugs where stream->srv_conn could
point to garbage memory area, as described in the original github report.

To make the check more reliable prior to deleting the server, we don't
rely exclusively on cur_sess and directly check that the server is not
used in any stream through the srv_has_stream() helper function.

Thanks to @capflam who found out the root cause of the bug and greatly
helped to provide the fix.

This should be backported up to 2.6.
2023-09-21 14:57:01 +02:00
Aurelien DARRAGON
0189a4679e MINOR: pattern/ip: simplify pat_match_ip() function
pat_match_ip() has been updated several times over the last decade to
introduce new features, but it was never cleaned up.

The result is that the function is pretty hard to read, and there are
multiple duplicated code blocks so it becomes error-prone to maintain it,
plus it bloats the haproxy binary for nothing.

In this patch, we move the tree search (ip4 / ip6) logic into 2
dedicated helper functions. This allows us to refactor pat_match_ip()
without touching the original behavior.
2023-09-21 09:50:56 +02:00
Aurelien DARRAGON
acb7d8a89c MINOR: pattern: fix pat_{parse,match}_ip() function comments
Function comments were outdated, probably because they have not been
updated during the previous refactors.

Fixing comments to better reflect the current behavior.

This may be backported up to 2.2, or even 2.0 by slightly adapting the
patch (in 2.0, such functions are documented in proto/pattern.h)
2023-09-21 09:50:55 +02:00
Aurelien DARRAGON
f80122db26 MINOR: pattern/ip: offload ip conversion logic to helper functions
Now that v4tov6() and v6tov4() were reworked to match behavior from
pat_match_ip() function in ("MINOR: tools/ip: v4tov6() and v6tov4()
rework"), we can remove code duplication in pat_match_ip() by directly
using those dedicated functions where relevant.
2023-09-21 09:50:55 +02:00
Aurelien DARRAGON
72514a4467 MEDIUM: tools/ip: v4tov6() and v6tov4() rework
The v4tov6() and v6tov4() helper functions were initially implemented in
4f92d3200 ("[MEDIUM] IPv6 support for stick-tables").

However, since ceb4ac9c3 ("MEDIUM: acl: support IPv6 address matching"),
support for legacy ip6 to ip4 conversion formats was added, with the
parsing logic directly performed in acl_match_ip (which later became
pat_match_ip).

The issue is that the original v6tov4() function which is used for sample
expression handling lacks those additional formats, so we could face
inconsistencies depending on whether we rely on ip4/ip6 conversions from
an ACL context or an expression context.

To unify the ip4/ip6 automatic mapping behavior, we reworked the v4tov6()
and v6tov4() functions so that they now behave like pat_match_ip().

Note: '6to4 (RFC3056)' and 'RFC4291 ipv4 compatible address' formats are
still supported for legacy purposes despite being deprecated for a while
now.
2023-09-21 09:50:55 +02:00
Christopher Faulet
d3e379b3ce BUG/MEDIUM: http-ana: Try to handle response before handling server abort
In the request analyser responsible for forwarding the request, we try to
detect a server abort to stop the request forwarding. However, we must be
careful not to block the response processing, if any. Indeed, it is possible
to get the response and the server abort at the same time. In this case, we
must try to forward the response to the client first.

So to fix the issue, in the request analyser we no longer handle the server
abort if the response channel is not empty. In the end, the response
analyser is able to detect the server abort if it is relevant. Otherwise,
the stream will be woken up after the response forwarding and the server
abort should be handled at this stage.

This patch should be backported as far as 2.7 only because the risk of
breakage is high. And it is probably a good idea to wait a bit before
backporting it.
2023-09-21 09:36:37 +02:00
Willy Tarreau
cbbee15462 CLEANUP: ring: rename the ring lock "RING_LOCK" instead of "LOGSRV_LOCK"
The ring lock was initially mostly used for the logs and used to inherit
its name in lock stats. Now that it's exclusively used by rings, let's
rename it accordingly.
2023-09-20 21:38:33 +02:00
Willy Tarreau
cec8b42cb3 MEDIUM: logs: atomically check and update the log sample index
The log server lock is pretty visible in perf top when using log samples
because it's taken for each server in turn while trying to validate and
update the log server's index. Let's change this for a CAS, since we have
the index and the range at hand now. This allows us to remove the logsrv
lock.

The test on 4 servers now shows a 3.7 times improvement thanks to much
lower contention. Without log sampling a test producing 4.4M logs/s
delivers 4.4M logs/s at 21 CPUs used, everything spent in the kernel.
After enabling 4 samples (1:4, 2:4, 3:4 and 4:4), the throughput would
previously drop to 1.13M log/s with 37 CPUs used and 75% spent in
process_send_log(). Now with this change, 4.25M logs/s are emitted,
using 26 CPUs and 22% in process_send_log(). That's a 3.7x throughput
improvement for a 30% global CPU usage reduction, but in practice it
mostly shows that the performance drop caused by having samples is much
less noticeable (each of the 4 servers has its index updated for each
log).

Note that in order to even avoid incrementing an index for each log srv
that is consulted, it would be more convenient to have a single index
per frontend and apply the modulus on each log server in turn to see if
the range has to be updated. It would then only perform one write per
range switch. However the place where this is done doesn't have access
to a frontend, so some changes would need to be performed for this, and
it would require to update the current range independently in each
logsrv, which is not necessarily easier since we don't know yet if we
can commit it.
2023-09-20 21:38:33 +02:00
Willy Tarreau
e00470378b MINOR: logs: use a single index to store the current range and index
By using a single long long to store both the current range and the
next index, we'll make it possible to perform atomic operations instead
of locking. Let's only regroup them for now under a new "curr_rg_idx".
The upper word is the range, the lower is the index.
2023-09-20 21:38:33 +02:00
Willy Tarreau
49ddc0138c CLEANUP: logs: rename a confusing local variable "curr_rg" to "smp_rg"
The variable curr_rg in process_send_log() is misleading because it is
not related to the integer curr_rg that's used to calculate it, instead
it's a pointer to the current smp_log_range from smp_rgs[], so let's call
it "smp_rg" as a singular for this "smp_rgs" and put an end to this
confusion.
2023-09-20 21:38:33 +02:00
Willy Tarreau
3f1284560f MINOR: log: remove the unused curr_idx in struct smp_log_range
This index is useless because it only serves to know when the global
index reached the end, while the global one already knows it. Let's
just drop it and perform the test on the global range.

It was verified with the following config that the first server continues
to take 1/10 of the traffic, the 2nd one 2/10, the 3rd one 3/10 and the
4th one 4/10:

    log 127.0.0.1:10001 sample 1:10 local0
    log 127.0.0.1:10002 sample 2,5:10 local0
    log 127.0.0.1:10003 sample 3,7,9:10 local0
    log 127.0.0.1:10004 sample 4,6,8,10:10 local0
2023-09-20 21:38:33 +02:00
Willy Tarreau
4351364700 MINOR: logs: clarify the check of the log range
The test of the log range is not very clear, in part due to the
reuse of the "curr_idx" name that happens at two levels. The call
to in_smp_log_range() applies to the smp_info's index to which 1 is
added: it verifies that the next index is still within the current
range.

Let's just have a local variable "next_index" in process_send_log()
that gets assigned the next index (current+1) and compare it to the
current range's boundaries. This makes the test much clearer. We can
then simply remove in_smp_log_range() that's no longer needed.
2023-09-20 21:38:33 +02:00
William Lallemand
61b6a4da6c REGTESTS: ssl: skip generate-certificates test w/ wolfSSL
WolfSSL does not seem to work correctly with the generate-certificates
feature. This patch disables the test temporarily.

    ssl-max-ver TLSv1.2 seems to be a problem in the reg-test with
    wolfSSL, but without it, it's not able to correctly generate the cert:

    ***  h1    debug|00000004:clear-lst.accept(0007)=0028 from [127.0.0.1:35956] ALPN=<none>
    ***  h1    debug|00000004:clear-lst.clireq[0028:ffffffff]: GET / HTTP/1.1
    ***  h1    debug|00000004:clear-lst.clihdr[0028:ffffffff]: x-sni: unknown-sni.com
    ***  h1    debug|00000004:clear-lst.clihdr[0028:ffffffff]: host: 127.0.0.1
    ***  h1    debug|fd[0x29] OpenSSL error[0x13d] : need the private key
    ***  h1    debug|<134>Sep 20 15:42:58 haproxy[165743]: unix:1 [20/Sep/2023:15:42:58.042] ssl-lst/1: SSL handshake failure (need the private key)
    **** dT    1.072
    ***  h1    debug|fd[0x2a] OpenSSL error[0x13d] : need the private key
    ***  h1    debug|<134>Sep 20 15:42:59 haproxy[165743]: unix:1 [20/Sep/2023:15:42:59.044] ssl-lst/1: SSL handshake failure (need the private key)
    **** dT    2.075
    ***  h1    debug|fd[0x29] OpenSSL error[0x13d] : need the private key
    ***  h1    debug|<134>Sep 20 15:43:00 haproxy[165743]: unix:1 [20/Sep/2023:15:43:00.046] ssl-lst/1: SSL handshake failure (need the private key)
    **** dT    3.079
    ***  h1    debug|fd[0x29] OpenSSL error[0x13d] : need the private key
    ***  h1    debug|<134>Sep 20 15:43:01 haproxy[165743]: unix:1 [20/Sep/2023:15:43:01.050] ssl-lst/1: SSL handshake failure (need the private key)
    **** dT    3.080
    ***  h1    debug|00000004:default_backend.clicls[0028:0023]
    ***  h1    debug|00000004:default_backend.closed[0028:0023]
    ***  h1    debug|<134>Sep 20 15:43:01 haproxy[165743]: 127.0.0.1:35956 [20/Sep/2023:15:42:58.042] clear-lst default_backend/s1 0/0/-1/-1/+3009 503 +217 - - SC-- 3/1/0/0/3 0/0 "GET / HTTP/1.1" 0/-/-/-/0 -/-/-
    **** c3    rxhdr|HTTP/1.1 503 Service Unavailable\r
    **** c3    rxhdr|content-length: 107\r
    **** c3    rxhdr|cache-control: no-cache\r
    **** c3    rxhdr|content-type: text/html\r
    **** c3    rxhdr|\r
2023-09-20 16:02:16 +02:00
William Lallemand
64a4b44a44 REGTESTS: ssl: skip OCSP test w/ WolfSSL
The OCSP test does not seem to be working correctly with wolfSSL.

i2d_OCSP_CERTID(data->ocsp_cid, NULL); always returns 0.

Skip it for now.
2023-09-20 15:23:32 +02:00
Aurelien DARRAGON
2c9bd3ae80 BUG/MINOR: server: add missing free for server->rdr_pfx
rdr_pfx was not being freed during server cleanup, leading to a small
memory leak when the "redir" argument was used on a server line (HTTP only).

This should be backported to every stable versions.

[For 2.6 and 2.7: the free should be performed in srv_drop() directly.
 For older versions: free in deinit() function near the free for the
 cookie string]
2023-09-15 17:46:49 +02:00
Willy Tarreau
6cbb5a057b Revert "MAJOR: import: update mt_list to support exponential back-off"
This reverts commit c618ed5ff4.

The list iterator is broken. As found by Fred, running QUIC single-
threaded shows that only the first connection is accepted because the
accepter relies on the element being initialized once detached (which
is expected and matches what MT_LIST_DELETE_SAFE() used to do before).
However, while doing this in the quic_sock code seems to work, doing it
inside the macro shows total breakage and the unit test doesn't work
anymore (random crashes). Thus it looks like the fix is not trivial,
let's roll this back for the time it will take to fix the loop.
2023-09-15 17:13:43 +02:00
William Lallemand
694889ac2d BUILD: quic: fix build on centos 8 and USE_QUIC_OPENSSL_COMPAT
When using USE_QUIC_OPENSSL_COMPAT=1 on centos-8, the build fails this
way:

In file included from src/quic_openssl_compat.c:11:
/usr/include/openssl/kdf.h:33:46: error: unknown type name 'va_list'
 int EVP_KDF_vctrl(EVP_KDF_CTX *ctx, int cmd, va_list args);

This is because openssl/kdf.h is included before openssl-compat.h.
2023-09-14 16:26:58 +02:00
Christopher Faulet
89e20033c7 BUG/MAJOR: mux-h2: Report a protocol error for any DATA frame before headers
If any DATA frame is received before all headers are fully received, a
protocol error must be reported. It is required by the HTTP/2 RFC but it is
also important because the HTTP analyzers expect the first HTX block to be a
start-line. It leads to a crash if this statement is not respected.

For instance, it is possible to trigger a crash by sending an interim
message with a DATA frame (It may be an empty DATA frame with the ES
flag). AFAIK, only the server side is affected by this bug.

To fix the issue, a protocol error is reported for the stream.

This patch should fix the issue #2291. It must be backported as far as 2.2
(and probably to 2.0 too).
2023-09-14 11:39:39 +02:00
Willy Tarreau
e3b2704e26 BUG/MINOR: freq_ctr: fix possible negative rate with the scaled API
In 1.9 with commit 627505d36 ("MINOR: freq_ctr: add swrate_add_scaled()
to work with large samples") we got the ability to indicate when adding
some values that they represent a number of samples. However there is an
issue in the calculation: the number of samples that is added to the sum
before the division (in order to avoid fading away too fast) is multiplied
by the scale. The problem it causes is that this is done in the negative
part of the expression, so as soon as the sum of old_sum and v*s is too
small (e.g. zero), we end up with a negative value of -s.

This is visible in "show pools" which occasionally report a very large
value on "needed_avg" since 2.9, though the bug has been there for longer.
Indeed in 2.9 since they're hashed in buckets, it suffices that any
thread got one such error once for the sum to be wrong. One possible
impact is memory usage not shrinking after a short burst due to pools
refraining from releasing objects, believing they don't have enough.

This must be backported to all versions. Note that the opportunistic
version can be dropped before 2.8.
2023-09-14 11:09:07 +02:00