Commit Graph

13718 Commits

Author SHA1 Message Date
Willy Tarreau
d597ec2718 MINOR: listener: export manage_global_listener_queue()
This one pops up in task lists when running against a saturated
listener.
2021-01-29 14:29:57 +01:00
Christopher Faulet
5a2f938732 MEDIUM: contrib/prometheus-exporter: Use dynamic labels instead of static ones
Instead of using static labels for metrics, we now use an array of labels,
filled for each metric if necessary and passed to the dump function. This
way, it is easier to extend the promex service. For now, there are at most 8
labels per metric. This limit may be raised by changing the PROMEX_MAX_LABELS
value. And to ease the addition of labels, a label is defined as a key/value
pair. The formatting is handled by the dump function.

For the proxies and servers, the first entry of the array is always the
proxy name. In addition, for the servers, the second entry is always the
server name.
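
For illustration only, the label array can be pictured as a small key/value
structure like the sketch below; the names and exact layout are hypothetical,
not a verbatim excerpt of the exporter:

  /* hypothetical sketch of the key/value label array (names are
   * illustrative, not the exporter's actual definitions) */
  #define PROMEX_MAX_LABELS 8

  struct promex_label {
          const char *name;   /* e.g. "proxy", "server", "state" */
          const char *value;  /* e.g. "be_app", "srv1", "UP" */
  };

  /* the dump function receives the filled array; for proxies/servers
   * the first entries carry the proxy and server names */
  struct promex_label labels[PROMEX_MAX_LABELS] = {
          { .name = "proxy",  .value = "be_app" },
          { .name = "server", .value = "srv1"   },
  };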
2021-01-29 13:42:43 +01:00
William Dauchy
c6464591a3 MAJOR: contrib/prometheus-exporter: move ftd/bkd/srv states to labels
This patch is a breaking change between v2.3 and v2.4: we move from
using a gauge value for frontend/backend/server states to label values.

The main motivation is that it is very difficult to make use of the
gauge without hard-coded quirks on the prometheus client side, especially
because the main use is often to group by state, which is harder when
the state is the value of the metric.

To achieve that, we iterate on the status metric to generate one label,
and thus one metric, per state.

This is the first step to resolve github issue #1029.
A second step should address health check states.
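
As a hypothetical example of the change in the exposed series (metric and
label names are illustrative, not necessarily the exporter's exact ones):

  # before: the state is encoded in the gauge value
  haproxy_server_status{proxy="be_app",server="srv1"} 2

  # after: the state is a label, the value is a 0/1 gauge per state
  haproxy_server_status{proxy="be_app",server="srv1",state="UP"} 1
  haproxy_server_status{proxy="be_app",server="srv1",state="DOWN"} 0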

Signed-off-by: William Dauchy <wdauchy@gmail.com>
2021-01-29 13:42:43 +01:00
William Dauchy
5493821fe6 MINOR: contrib/prometheus-exporter: declare states for objects
In preparation for changing state gauge values to labels, declare the states
as enums associated with their string definitions.
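
A minimal sketch of the idea, with hypothetical names rather than the actual
declarations:

  /* hypothetical sketch: each state enum entry is paired with the
   * string later emitted as the label value */
  enum promex_srv_state {
          PROMEX_SRV_STATE_DOWN = 0,
          PROMEX_SRV_STATE_UP,
          PROMEX_SRV_STATE_MAINT,
          PROMEX_SRV_STATE_DRAIN,
          PROMEX_SRV_STATE_COUNT,
  };

  static const char *promex_srv_state_str[PROMEX_SRV_STATE_COUNT] = {
          [PROMEX_SRV_STATE_DOWN]  = "DOWN",
          [PROMEX_SRV_STATE_UP]    = "UP",
          [PROMEX_SRV_STATE_MAINT] = "MAINT",
          [PROMEX_SRV_STATE_DRAIN] = "DRAIN",
  };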

Signed-off-by: William Dauchy <wdauchy@gmail.com>
2021-01-29 13:42:39 +01:00
Christopher Faulet
c29b4bf946 MINOR: mux-h2: Slightly improve request HEADERS frames sending
In the h2s_bck_make_req_headers() function, in the loop on the HTX blocks, the
most common blocks, the headers, are now handled first, before the
start-line. The same change was already performed on the response HEADERS
frames. Thus the code is more consistent now.
2021-01-29 13:28:43 +01:00
Christopher Faulet
564981369b MINOR: mux-h2: Don't test the start-line when sending HEADERS frame
When a HEADERS frame is sent, it is always when an HTX start-line block is
found. Thus, in h2s_bck_make_req_headers() and h2s_frt_make_resp_headers()
functions, it is useless to test the start-line. Instead of being too
defensive, we now use BUG_ON() because a missing start-line must not happen
and must be handled as a bug.

This patch should fix the issue #1086.
2021-01-29 13:27:57 +01:00
Christopher Faulet
3702f78cf9 MINOR: ssl-sample: Don't check if argument list is set in sample fetches
The list is always defined by definition. Thus there is no reason to test
it.
2021-01-29 13:26:24 +01:00
Christopher Faulet
e6e7a585e9 MINOR: sample: Don't check if argument list is set in sample fetches
The list is always defined by definition. Thus there is no reason to test
it.
2021-01-29 13:26:13 +01:00
Christopher Faulet
72dbcfe66d MINOR: http-conv: Don't check if argument list is set in sample converters
The list is always defined by definition. Thus there is no reason to test
it.
2021-01-29 13:26:02 +01:00
Christopher Faulet
623af93722 MINOR: http-fetch: Don't check if argument list is set in sample fetches
The list is always defined by definition. Thus there is no reason to test
it. There are also plenty of checks on argument types even though they are
already validated during the configuration parsing. But one thing at a time.

This patch should fix the issue #1087.
2021-01-29 13:25:34 +01:00
Christopher Faulet
bdbd5db2a5 BUG/MINOR: stick-table: Always call smp_fetch_src() with a valid arg list
The sample fetch functions must always be called with a valid argument
list. When called by hand, if there is no argument to pass, empty_arg_list must
be used.

In the stick-table code, there are some calls to smp_fetch_src() with NULL as
the argument list. They are changed to use empty_arg_list instead. It is not
really a bug because smp_fetch_src() does not use the argument list, but it is
an API bug.

This patch may be backported to all stable branches as a cleanup.
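
As an illustrative fragment only (the prototype is the generic sample-fetch
one and this is not a verbatim excerpt of the patch), a by-hand call passes
empty_arg_list rather than NULL:

  /* fragment: calling a sample fetch by hand with a valid (empty)
   * argument list instead of NULL */
  struct sample smp;

  memset(&smp, 0, sizeof(smp));
  if (!smp_fetch_src(empty_arg_list, &smp, NULL, NULL))
          return 0; /* source address not available */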
2021-01-29 13:24:16 +01:00
Christopher Faulet
1faeb4c710 MINOR: mux-h1: Remove first useless test on count in h1_process_output()
The h1_process_output() function is never called with no data to send (count
== 0). Thus, the first test on count at the beginning of the function is
useless and may be removed. This way, when reading the code, it is obvious
that the <chn_htx> variable is always defined.

This patch should fix the issue #1085.
2021-01-29 13:16:32 +01:00
Willy Tarreau
5c25daa170 MINOR: stick-tables: export process_table_expire()
This handler can take quite some time as it deletes a large number of
entries under a lock, let's export it so that it's immediately visible
in "show profiling".
2021-01-29 12:39:32 +01:00
Willy Tarreau
f6c88421b7 MINOR: peers: export process_peer_sync() to improve traces
This one will probably pop up from time to time in "show profiling",
better have it resolve.
2021-01-29 12:38:42 +01:00
Willy Tarreau
025fc71b47 MINOR: checks: export a few functions that appear often in trace dumps
The check I/O handler, process_chk_conn and server_warmup are often
present in complex backtraces as they're impacted by locking or I/O
issues. Let's export them so that they resolve cleanly.
2021-01-29 12:35:24 +01:00
Willy Tarreau
ac6322dd36 MINOR: muxes: export the timeout and shutr task handlers
These ones appear often in "show tasks" so it's handy to make them
resolve.
2021-01-29 12:33:46 +01:00
Willy Tarreau
02922e19ca MINOR: session: export session_expire_embryonic()
This is only to make it resolve nicely in "show tasks".
2021-01-29 12:27:57 +01:00
Willy Tarreau
fb5401f296 MINOR: listener: export accept_queue_process
This is only to make it resolve in "show tasks".
2021-01-29 12:25:23 +01:00
Willy Tarreau
7eff06e162 MINOR: activity: add a new "show tasks" command to list currently active tasks
This finally adds the long-awaited solution to inspect the run queues
and figure out what is eating the CPU or causing latencies. We can even see
the experienced latencies when profiling is enabled. Example on a
saturated process:

> show tasks
Running tasks: 14983 (4 threads)
  function                     places     %    lat_tot   lat_avg
  process_stream                 4948   33.0   5.840m    70.82ms
  h1_io_cb                       2535   16.9      -         -
  main+0x9e670                   2508   16.7   2.930m    70.10ms
  ssl_sock_io_cb                 2499   16.6      -         -
  si_cs_io_cb                    2493   16.6      -         -
2021-01-29 12:12:28 +01:00
Willy Tarreau
cfa7101d59 MINOR: activity: flush scheduler stats on "set profiling tasks on"
If a user enables profiling by hand, it makes sense to reset the stats
counters to provide fresh new measurements. Therefore it's worth using
this as the standard method to reset counters.
2021-01-29 12:10:33 +01:00
Willy Tarreau
1bd67e9b03 MINOR: activity: also report collected tasks stats in "show profiling"
"show profiling" will now dump the stats collected by the scheduler if
profiling was previously enabled. This will immediately make it obvious
what functions are responsible for others' high latencies or which ones
are suffering from others, and should help spot issues like undesired
wakeups.

Example:

Per-task CPU profiling              : on      # set profiling tasks {on|auto|off}
Tasks activity:
  function                      calls   cpu_tot   cpu_avg   lat_tot   lat_avg
  si_cs_io_cb                 5569479   23.37s    4.196us      -         -
  h1_io_cb                    5558654   13.60s    2.446us      -         -
  process_stream               250841   1.476s    5.882us   3.499s    13.95us
  main+0x9e670                    198      -         -      5.526ms   27.91us
  task_run_applet                  17   1.509ms   88.77us   205.8us   12.11us
  srv_cleanup_idle_connections     12   44.51us   3.708us   25.71us   2.142us
  main+0x158c80                     9   48.72us   5.413us      -         -
  srv_cleanup_toremove_connections  5   165.1us   33.02us   123.6us   24.72us
2021-01-29 12:10:33 +01:00
Willy Tarreau
4e2282f9bf MEDIUM: tasks/activity: collect per-task statistics when profiling is enabled
Now when profiling is enabled, the scheduler will update per-function
task-level statistics on the number of calls, cpu usage and latency, which
can later be checked using "show profiling". This will immediately make it
obvious what functions are responsible for others' high latencies or which
ones are suffering from others, and should help spot issues like undesired
wakeups. For now the stats are only collected but not reported (though they
are readable from sched_activity[] under gdb).
2021-01-29 12:10:33 +01:00
Willy Tarreau
3fb6a7b46e MINOR: activity: declare a new structure to collect per-function activity
The new sched_activity structure will be used to collect task-level
activity based on the target function. The principle is to declare a
large enough array to make collisions rare (256 entries), and hash
the function pointer using a reduced XXH to decide where to store the
stats. On first computation an entry is permanently assigned to the
array and it's done atomically. A special entry (0) is used to store
collisions ("others"). The goal is to make it easy and inexpensive for
the scheduler code to use these to store #calls, cpu_time and lat_time
for each task.
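
The principle can be sketched as a standalone illustration (a simple
multiplicative hash stands in for the reduced XXH, and none of the names
below are the actual ones):

  /* sketch: hash the task's function pointer into a 256-entry array,
   * claim the slot atomically on first use, and fall back to entry 0
   * ("others") on collision */
  #include <stdatomic.h>
  #include <stdint.h>

  struct sched_activity_sketch {
          _Atomic(const void *) func; /* owner function, NULL if free */
          _Atomic uint64_t calls;
          _Atomic uint64_t cpu_time;  /* ns */
          _Atomic uint64_t lat_time;  /* ns */
  };

  static struct sched_activity_sketch act[256];

  static struct sched_activity_sketch *act_entry(const void *func)
  {
          /* cheap stand-in for the reduced XXH hash of the pointer */
          uint64_t h = (uint64_t)(uintptr_t)func * 0x9E3779B97F4A7C15ULL;
          struct sched_activity_sketch *e = &act[1 + (h >> 56) % 255];
          const void *expected = NULL;

          if (atomic_load(&e->func) == func)
                  return e;
          if (atomic_compare_exchange_strong(&e->func, &expected, func))
                  return e;          /* first use: slot claimed */
          return atomic_load(&e->func) == func ? e : &act[0];
  }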
2021-01-29 12:10:33 +01:00
Willy Tarreau
aa622b822b MINOR: activity: make profiling more manageable
In 2.0, commit d2d3348ac ("MINOR: activity: enable automatic profiling
turn on/off") introduced an automatic mode to enable/disable profiling.
The problem is that the automatic mode switches between on and off by itself,
which implies that the forced on/off modes aren't sticky anymore. It's
annoying when debugging because as soon as the load decreases, profiling
stops.

This makes a small change which ought to have been done first, which
consists in having two states for "auto" (auto-on, auto-off) to
distinguish them from the forced states. Setting to "auto" in the config
defaults to "auto-off" as before, and setting it on the CLI switches to
auto but keeps the current operating state.

This is simple enough to be backported to older releases if needed.
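
A minimal sketch of the state split, with hypothetical names:

  /* hypothetical sketch: distinguish the forced states from the
   * automatic ones so that "on"/"off" stay sticky while "auto" can
   * still toggle on its own */
  enum profiling_state_sketch {
          PROF_OFF,      /* forced off */
          PROF_ON,       /* forced on */
          PROF_AUTO_OFF, /* auto mode, currently off */
          PROF_AUTO_ON,  /* auto mode, currently on */
  };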
2021-01-29 12:10:33 +01:00
Willy Tarreau
4deeb1055f MINOR: tools: add print_time_short() to print a condensed duration value
When reporting some values in debugging output we often need to have
some condensed, stable-length values. This function prints a duration
from nanoseconds to years with at least 4 digits of accuracy using the
most suitable unit, always on 7 chars.
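
A standalone sketch of the idea (covering only ns to seconds, while the real
helper goes up to years; this is not the actual implementation):

  /* sketch: print a duration given in nanoseconds using a suitable
   * unit on a fixed width, e.g. 70820000 ns -> " 70.8ms" */
  #include <stdio.h>
  #include <stdint.h>

  static void print_duration_sketch(char *out, size_t len, uint64_t ns)
  {
          static const char *unit[] = { "ns", "us", "ms", "s " };
          double v = (double)ns;
          int i = 0;

          while (i < 3 && v >= 1000.0) {
                  v /= 1000.0;
                  i++;
          }
          snprintf(out, len, "%5.1f%s", v, unit[i]);
  }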
2021-01-29 12:10:33 +01:00
Willy Tarreau
87ef323971 DOC: management: fix "show resolvers" alphabetical ordering
Not sure why it was located between "show ssl" and "show table"...
This should be backported.
2021-01-29 12:10:33 +01:00
Tim Duesterhus
66d28e7045 CI: Fix the coverity builds
In an attempt to fix the use of DEBUG_STRICT, commit
7f0f4786d1 unfortunately broke the Coverity
builds completely.

It turns out that Coverity does not properly handle quoting within
`COVERITY_SCAN_BUILD_COMMAND`, instead breaking up single arguments at
whitespace, thus passing `-DDEBUG_USE_ABORT=1` to `make` as-is.

Fix this issue by hijacking the Makefile within the Coverity workflow. We
simply replace the default value of the `DEBUG` option with whatever values we
need. The build command now only includes the TARGET and USE_* flags, each of
which works without any spaces.
2021-01-28 20:40:14 +01:00
Amaury Denoyelle
a81bb7197e BUG/MINOR: backend: check available list allocation for reuse
Do not consider connection reuse if the available list is not allocated for
the target server. This will prevent a crash when using a standalone
server for an external purpose like socket_tcp/socket_ssl in hlua code.
For the idle/safe lists, they are considered allocated if
srv.max_idle_conns is not zero.

Note that the hlua code is currently safe thanks to the additional
checks on the proxy http mode and on the stream reuse policy not being
"never". However, this might not be sufficient for future code.

This patch should be backported to every branch containing the
following patch:
  7f68d815af (2.4 tree)
  REORG: backend: simplify conn_backend_get
2021-01-28 18:12:07 +01:00
Willy Tarreau
02757d02c2 Revert "BUG/MEDIUM: listener: do not accept connections faster than we can process them"
This reverts commit 62e8aaa1bd.

While it works extremely well to address SSL handshake floods, it prevents
establishment of new connections during regular traffic above 50-60 Gbps,
because for an unknown reason the queue seems to have ~1.7 active tasks
per connection all the time, which makes no sense as these ought to be
waiting on subscribed events. It might uncover a deeper issue but at least
for now a different solution is needed. cf issue #822.

The test is trivial to run, just start a config with tune.runqueue-depth 10
and inject on 1GB objects with more than 10 connections. Try to connect to
the stats socket, it only works once, then the listeners are not dequeued.
2021-01-28 18:11:32 +01:00
William Lallemand
e814321287 REGTESTS: set_ssl_server_cert.vtc: set as broken
It looks like this test is broken with a low nbthread value (1 for
example). Disable this test in the CI until the problem is solved.
2021-01-28 18:08:36 +01:00
Willy Tarreau
62e8aaa1bd BUG/MEDIUM: listener: do not accept connections faster than we can process them
In github issue #822, user @ngaugler reported some performance problems when
dealing with many concurrent SSL connections on restarts, after migrating
from 1.6 to 2.2, indicating a long time required to re-establish connections.

The Run_queue metric in the traces showed an abnormally high number of tasks
in the run queue, likely indicating we were accepting faster than we could
process. And this is indeed one of the differences between 1.6 and 2.2, the
accept I/O loop and the TLS handshakes are totally independent, so much that
they can even run on different threads. In 1.6 the SSL handshake was handled
almost immediately after the accept(), so this was limiting the input rate.
With large maxconn values, as long as there are incoming connections, new
I/Os are scheduled and many of them pass before the handshake, being tagged
for low latency processing.

The result is that handshakes get postponed, and are further postponed as
new connections are accepted. When they are finally able to be processed,
some of them fail as the client is gone, and the client had already queued
new ones. This causes an excess number of apparent connections and total
number of handshakes to be processed, just because we were accepting
connections on a temporarily saturated machine.

The solution is to temporarily pause new incoming connections when the
load already indicates that more tasks are already queued than will be
handled in a poll loop. The difficulty with this usually is to be able
to come back to re-enable the operation, but given that the metric is
the run queue, we just have to queue the global_listener_queue task so
that it gets picked by any thread once the run queues get flushed.

Before this patch, injecting with SSL reneg with 10000 concurrent
connections resulted in 350k tasks in the run queue, and a majority of
handshake timeouts noticed by the client. With the patch, the run queue
fluctuates between 1-3x runqueue-depth, the process is constantly busy, the
accept rate is maximized and clients observe no error anymore.

It would be desirable to backport this patch to 2.3 and 2.2 after some more
testing, provided the accept loop there is compatible.
2021-01-28 16:48:01 +01:00
Christopher Faulet
405f054652 MINOR: h1: Raise the chunk size limit up to (2^52 - 1)
The allowed chunk size was historically limited to 2GB to avoid any risk of
overflow. This restriction is no longer necessary because the chunk size is
immediately stored into a 64-bit integer after the parsing. Thus, it is now
possible to raise this limit. However, to avoid feeding possibly bogus values
to languages that use floats for their integers, we don't accept more than 13
hexadecimal digits (2^52 - 1). 4 petabytes is probably enough!

This patch should fix the issue #1065. It may be backported as far as
2.1. For 2.0, the legacy HTTP part would have to be reviewed, but there is
honestly no reason to do so.
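
As a quick check of the arithmetic:

  13 hex digits = 13 * 4 = 52 bits
  max value     = 0xFFFFFFFFFFFFF = 2^52 - 1 = 4,503,599,627,370,495 bytes
                  (about 4 PiB), which still fits exactly in the 52-bit
                  mantissa of an IEEE-754 double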
2021-01-28 16:37:14 +01:00
Christopher Faulet
73518be595 MINOR: mux-fcgi/trace: add traces at level ERROR for all kind of errors
A number of traces were added or changed to report errors with
TRACE_ERROR. The goal is to be able to enable only error tracing in order
to detect anomalies.
2021-01-28 16:37:14 +01:00
Christopher Faulet
26a2643466 MINOR: mux-h1/trace: add traces at level ERROR for all kind of errors
A number of traces were added or changed to report errors with
TRACE_ERROR. The goal is to be able to enable only error tracing in order
to detect anomalies.
2021-01-28 16:37:14 +01:00
Christopher Faulet
d2dcd8a88f REGTEST: Don't use the websocket to validate http-check
Now, some conformance tests are performed when an HTTP connection is
upgraded to websocket. This make the http-check-send.vtc script failed for
the backend <be6_ws>. Because the purpose of this health-check is to pass a
"Connection: Upgrade" header on an http-check send rule, we may use a dummy
protocal instead.
2021-01-28 16:37:14 +01:00
Christopher Faulet
85a813676f REGTESTS: Fix required versions for several scripts
The following scripts require HAProxy 2.4:

 * cache/caching_rules.vtc
 * cache/post_on_entry.vtc
 * cache/vary.vtc
 * checks/1be_40srv_odd_health_checks.vtc
 * checks/40be_2srv_odd_health_checks.vtc
 * checks/4be_1srv_health_checks.vtc
 * converter/fix.vtc
 * converter/mqtt.vtc
 * http-messaging/protocol_upgrade.vtc
 * http-messaging/websocket.vtc
 * http-set-timeout/set_timeout.vtc
 * log/log_uri.vtc

However it may change if features are backported.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
8969c32d7e MINOR: vtc: add websocket test
Test the conformance of haproxy with the websocket RFC 6455. In particular,
if a missing key is detected on an h1 message, haproxy must close the
connection.

Note that the case h2 client/h1 srv is not tested as I did not find a
way to calculate the key on the server side.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
bda34302d7 MINOR: vtc: add test for h1/h2 protocol upgrade translation
This test sends an HTTP/1.1 GET+Upgrade or an HTTP/2 Extended CONNECT through
a haproxy instance.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
f9dcbeeab3 MEDIUM: h2: send connect protocol h2 settings
In order to announce support for the Extended CONNECT h2 method by
haproxy, always send the ENABLE_CONNECT_PROTOCOL h2 setting. This new
setting is described in RFC 8441.

After receiving ENABLE_CONNECT_PROTOCOL, the client is free to use the
Extended CONNECT h2 method. This can notably be useful for the support
of websocket handshake on http/2.
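
For reference, RFC 8441 registers this setting under identifier 0x8, sent
with value 1 to advertise support; the constant name below is illustrative,
not necessarily the one used in the code:

  /* RFC 8441: SETTINGS_ENABLE_CONNECT_PROTOCOL, identifier 0x8,
   * advertised with value 1 (name below is illustrative) */
  #define H2_SETTINGS_ENABLE_CONNECT_PROTOCOL 0x0008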
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
c9a0afcc32 MEDIUM: h2: parse Extended CONNECT request to htx
Support for RFC 8441 Bootstrapping WebSockets with HTTP/2

Convert an Extended CONNECT HTTP/2 request into a htx representation.
The htx message uses the GET method with an Upgrade header field to be
fully compatible with the equivalent HTTP/1.1 Upgrade mechanism.

The Extended CONNECT request is of the following form:

:method = CONNECT
:protocol = websocket
:scheme = https
:path = /chat
:authority = server.example.com

The new pseudo-header :protocol has been defined and is used to identify
an Extended CONNECT method. Contrary to a standard CONNECT, an Extended
CONNECT must have :scheme, :path and :authority defined.
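
For comparison, the resulting htx message is meant to be interchangeable with
a classic HTTP/1.1 Upgrade request such as:

  GET /chat HTTP/1.1
  Host: server.example.com
  Connection: Upgrade
  Upgrade: websocket

The Sec-WebSocket-Key header is not carried by the h2 form; a companion patch
in this series generates one when it is missing.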
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
efe2276a9e MEDIUM: mux_h2: generate Extended CONNECT response
Support for RFC 8441 Bootstrapping WebSockets with HTTP/2

Convert a 101 htx response message to a 200 HTTP/2 response.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
aad333a9fc MEDIUM: h1: add a WebSocket key on handshake if needed
Add the Sec-WebSocket-Key header when generating an h1 websocket handshake
without this header. This is the case when doing an h2-to-h1 conversion.

The key is randomly generated and base64 encoded. It is stored on the session
side to be able to verify the response key and reject it if not valid.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
9bf957335e MEDIUM: mux_h2: generate Extended CONNECT from htx upgrade
Support for RFC 8441 Bootstrapping WebSockets with HTTP/2

Generate an HTTP/2 Extended CONNECT request from a htx Upgrade message.
This conversion is done when seeing the header Connection: Upgrade. A
CONNECT request is written with the :protocol pseudo-header set from the
Upgrade htx header value.

The protocol is saved in the h2s structure. This is needed on the
response side because the protocol is not present in the HTTP/2 response but
is needed if the client side is using HTTP/1.1 with a 101 status code.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
7416274914 MEDIUM: h2: parse Extended CONNECT response to htx
Support for RFC 8441 Bootstrapping WebSockets with HTTP/2

Convert a 200 status reply from an Extended CONNECT request into a htx
representation. The htx message is set to 101 status code to be fully
compatible with the equivalent HTTP/1.1 Upgrade mechanism.

This conversion is only done if the stream flag H2_SF_EXT_CONNECT_SENT
has been set. This is true if an Extended CONNECT request has already
been seen on the stream.

Besides the 101 status, the additional headers Connection/Upgrade are
added to the htx message. The protocol is set from the value stored in
h2s. Typically it will be extracted from the client request. This is
only used if the client is using h1 as only the HTTP/1.1 101 Response
contains the Upgrade header.
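
For comparison, the resulting htx message corresponds to a classic HTTP/1.1
switching response such as:

  HTTP/1.1 101 Switching Protocols
  Connection: Upgrade
  Upgrade: websocket

The Sec-WebSocket-Accept header, when needed, is handled by the companion h1
patch of this series.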
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
5fb48ea7a4 MINOR: mux_h2: define H2_SF_EXT_CONNECT_SENT stream flag
This flag is used to signal that an Extended CONNECT has been sent by
the server mux on the current stream.
This will allow the response to be converted to a 101 htx status message.
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
c193823343 MEDIUM: h1: generate WebSocket key on response if needed
Add the Sec-WebSocket-Accept header on a websocket handshake response.
This header may be missing if an h2 server is used with an h1 client.

The response key is calculated following RFC 6455. For this, the
handshake request key must be stored in the h1 session, in a new field
named ws_key. Note that this is only done if the message has previously
been identified as a WebSocket handshake request.
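
As a standalone illustration of the RFC 6455 derivation (OpenSSL is used here
purely for convenience and this is not a reflection of the actual
implementation), the accept value is the base64 encoding of the SHA-1 of the
request key concatenated with a fixed GUID:

  /* sketch of the RFC 6455 accept-key derivation */
  #include <openssl/evp.h>
  #include <openssl/sha.h>
  #include <stdio.h>

  static void ws_accept_key(const char *key, char *out /* >= 29 bytes */)
  {
          static const char guid[] = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
          char buf[128];
          unsigned char digest[SHA_DIGEST_LENGTH];
          int len;

          len = snprintf(buf, sizeof(buf), "%s%s", key, guid);
          if (len < 0 || len >= (int)sizeof(buf))
                  return;
          SHA1((const unsigned char *)buf, len, digest);
          EVP_EncodeBlock((unsigned char *)out, digest, SHA_DIGEST_LENGTH);
  }

  /* e.g. the RFC 6455 sample key "dGhlIHNhbXBsZSBub25jZQ==" yields
   * "s3pPLMBiTxaQ9kYGzzhZRbK+xOo=" */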
2021-01-28 16:37:14 +01:00
Amaury Denoyelle
18ee5c3eb0 MINOR: h1: reject websocket handshake if missing key
If a request is identified as a WebSocket handshake, it must contain a
websocket key header or else it can be rejected, following RFC 6455.

A new flag H1_MF_UPG_WEBSOCKET is set on such messages. For the request
to be identified as a WebSocket handshake, it must contain the headers:
  Connection: upgrade
  Upgrade: websocket

This commit is a companion of
"MEDIUM: h1: generate WebSocket key on response if needed" and
"MEDIUM: h1: add a WebSocket key on handshake if needed".

Indeed, it ensures that a WebSocket key is added only from an http/2 side
and not for a bogus http/1 peer.
2021-01-28 16:37:14 +01:00
Christopher Faulet
5b82cc5b5c MEDIUM: http-ana: Deal with L7 retries in HTTP analysers
The code dealing with the copy of requests into the L7-buffer and with the
retransmits during L7 retries has been moved into the HTTP analysers. The
copy is now performed in the REQ_HTTP_XFER_BODY analyser and the L7 retries
are performed in the RES_WAIT_HTTP analyser. This way, si_cs_recv() and
si_cs_send() no longer have to care about it. It is much more natural to
deal with L7 retries in HTTP analysers.
2021-01-28 16:37:14 +01:00
Christopher Faulet
991febdfe0 MEDIUM: mux-h2: Don't emit DATA frame for bodyless responses
Some responses must not contain data: responses to HEAD requests and 204/304
responses. But there is no guarantee that this will really be respected by
the senders, or even that it is possible. For instance, the method may be
rewritten by an http-request rule (HEAD->GET). Thus, it is not really
possible to always strip this data from the response at the receive
stage. And the response may be emitted by an applet or an internal service
not strictly following the spec. All that to say that we must be prepared to
handle payload for bodyless responses on the sending path.

In addition, unlike in HTTP/1, it is not really clear whether the trailers
are part of the payload or not. Thus, some clients may expect to have the
trailers, if any, in the response to a HEAD request. For instance, the GRPC
status is placed in a trailer and clients rely on it. But what happens for
204 responses then? Read the following thread for details:

  https://lists.w3.org/Archives/Public/ietf-http-wg/2020OctDec/0040.html

So, thanks to previous patches, it is now possible to know on the sending
path whether a response must be bodyless or not. For such responses, no DATA
frame is emitted, except possibly a last empty one carrying the ES flag.
However, the TRAILERS frames are still emitted. The h2s_skip_data() function
is added to take care of removing HTX DATA blocks without emitting any DATA
frame except the last one, if there are no trailers.
2021-01-28 16:37:14 +01:00
Christopher Faulet
7d247f0771 MINOR: h2/mux-h2: Add flags to notify the response is known to have no body
The H2 message flag H2_MSGF_BODYLESS_RSP is now used during the request or
the response parsing to notify the mux that, considering the parsed message,
the response is known to have no body. This happens during HEAD requests
parsing and during 204/304 responses parsing.

On the H2 multiplexer, the equivalent flag is set on H2 streams. Thus the
H2_SF_BODYLESS_RESP flag is set on an H2 stream if H2_MSGF_BODYLESS_RSP is
found after parsing a HEADERS frame. Conversely, this flag is also set
when a HEADERS frame is emitted for HEAD requests and for 204/304 responses.

The H2_SF_BODYLESS_RESP flag will be used to ignore data payload from the
response but not the trailers.
2021-01-28 16:37:14 +01:00