MINOR: mux-h2: support limiting the total number of H2 streams per connection
This patch introduces a new setting: tune.h2.fe.max-total-streams. It
sets the HTTP/2 maximum number of total streams processed per incoming
connection. Once this limit is reached, HAProxy will send a graceful
GOAWAY frame informing the client that it will close the connection
after all pending streams have been closed. In practice, clients tend
to close as fast as possible when receiving this, and to establish a
new connection for their next requests.

Doing this is sometimes useful and desired in situations where clients
stay connected for a very long time and cause some imbalance inside a
farm. For example, in some highly dynamic environments, it is possible
that new load balancers are instantiated on the fly to adapt to a load
increase, and that once the load goes down they should be stopped
without breaking established connections. By setting a limit here, the
connections will have a limited lifetime and will be frequently
renewed, with some possibly being established to other nodes, so that
existing resources are quickly released.

The default value is zero, which enforces no limit beyond those implied
by the protocol (2^30 ~= 1.07 billion). Values around 1000 were found
to already cause frequent enough connection renewal without causing any
perceptible latency to most clients. One notable exception here is
h2load, which reports errors for all requests that were expected to be
sent over a given connection after it receives a GOAWAY. This is an
already known limitation:

    https://github.com/nghttp2/nghttp2/issues/981

The patch was made in two parts inside h2c_frt_handle_headers():

  - the first one, at the end of the function, verifies whether the
    configured limit was reached and whether a GOAWAY needs to be
    emitted ;

  - the second, just before decoding the stream frame, verifies whether
    a previously advertised limit was ignored by the client, and closes
    the connection if this happens.

Indeed, one reason for a connection to stay alive for too long
definitely comes from a stupid bot that periodically fetches the same
resource, scans lots of URLs or tries to brute-force something. These
are more likely to simply ignore the last stream ID advertised in the
GOAWAY than a regular browser or a well-behaving client such as curl,
which respects it. So in order to make sure we can close the
connection, we need to enforce the advertised limit. Note that a
regular client will not face a problem with that, because in the worst
case it will have max_concurrent_streams in flight, and this limit is
taken into account when calculating the advertised last acceptable
stream ID.

Just a note: it may also be possible to move the first part above to
h2s_frt_stream_new() instead, so that it's not processed for trailers,
though this doesn't seem more interesting, notably because that
function has two return points.

This may be backported to 2.9 and 2.8 to offer more control to those
dealing with dynamic infrastructures, especially since for now we
cannot force a connection to be cleanly closed using rules (e.g.
github issues #946, #2146).
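As an illustration, a minimal global-section snippet enabling the new setting could look like this (the 1000 figure is the illustrative value discussed above, not a recommendation for every workload):

```
global
    tune.h2.fe.max-concurrent-streams 100
    tune.h2.fe.max-total-streams 1000
```

With these values, each incoming H2 connection would be gracefully drained after roughly 1000 streams, with up to 100 additional in-flight streams still honoured before the connection is forcibly closed.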
parent 72c23bd4cd
commit 983ac4397d
@@ -1373,6 +1373,7 @@ The following keywords are supported in the "global" section :
    - tune.h2.be.max-concurrent-streams
    - tune.h2.fe.initial-window-size
    - tune.h2.fe.max-concurrent-streams
+   - tune.h2.fe.max-total-streams
    - tune.h2.header-table-size
    - tune.h2.initial-window-size
    - tune.h2.max-concurrent-streams
@@ -3209,6 +3210,41 @@ tune.h2.fe.max-concurrent-streams <number>
   client to allocate more resources at once. The default value of 100 is
   generally good and it is recommended not to change this value.
 
+tune.h2.fe.max-total-streams <number>
+  Sets the HTTP/2 maximum number of total streams processed per incoming
+  connection. Once this limit is reached, HAProxy will send a graceful GOAWAY
+  frame informing the client that it will close the connection after all
+  pending streams have been closed. In practice, clients tend to close as fast
+  as possible when receiving this, and to establish a new connection for next
+  requests. Doing this is sometimes useful and desired in situations where
+  clients stay connected for a very long time and cause some imbalance inside a
+  farm. For example, in some highly dynamic environments, it is possible that
+  new load balancers are instantiated on the fly to adapt to a load increase,
+  and that once the load goes down they should be stopped without breaking
+  established connections. By setting a limit here, the connections will have
+  a limited lifetime and will be frequently renewed, with some possibly being
+  established to other nodes, so that existing resources are quickly released.
+
+  It's important to understand that there is an implicit relation between this
+  limit and "tune.h2.fe.max-concurrent-streams" above. Indeed, HAProxy will
+  always accept to process any possibly pending streams that might be in flight
+  between the client and the frontend, so the advertised limit will always
+  automatically be raised by the value configured in max-concurrent-streams,
+  and this value will serve as a hard limit above which a violation by a non-
+  compliant client will result in the connection being closed. Thus when
+  counting the number of requests per connection from the logs, any number
+  between max-total-streams and (max-total-streams + max-concurrent-streams)
+  may be observed depending on how fast streams are created by the client.
+
+  The default value is zero, which enforces no limit beyond those implied by
+  the protocol (2^30 ~= 1.07 billion). Values around 1000 may already cause
+  frequent connection renewal without causing any perceptible latency to most
+  clients. Setting it too low may result in an increase of CPU usage due to
+  frequent TLS reconnections, in addition to increased page load time. Please
+  note that some load testing tools do not support reconnections and may report
+  errors with this setting; as such it may be needed to disable it when running
+  performance benchmarks. See also "tune.h2.fe.max-concurrent-streams".
+
 tune.h2.header-table-size <number>
   Sets the HTTP/2 dynamic header table size. It defaults to 4096 bytes and
   cannot be larger than 65536 bytes. A larger value may help certain clients
 src/mux_h2.c | 72 +-
@@ -413,6 +413,9 @@ static unsigned int h2_be_settings_max_concurrent_streams = 0; /* backend valu
 static unsigned int h2_fe_settings_max_concurrent_streams = 0; /* frontend value */
 static int h2_settings_max_frame_size = 0; /* unset */
 
+/* other non-protocol settings */
+static unsigned int h2_fe_max_total_streams = 0; /* frontend value */
+
 /* a dummy closed endpoint */
 static const struct sedesc closed_ep = {
 	.sc = NULL,
@@ -2787,8 +2790,24 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
 		session_inc_http_err_ctr(h2c->conn->owner);
 		goto conn_err;
 	}
-	else if (h2c->flags & H2_CF_DEM_TOOMANY)
+	else if (h2c->flags & H2_CF_DEM_TOOMANY) {
 		goto out; // IDLE but too many sc still present
+	}
+	else if (h2_fe_max_total_streams &&
+	         h2c->stream_cnt >= h2_fe_max_total_streams + h2c_max_concurrent_streams(h2c)) {
+		/* We've already told this client we were going to close a
+		 * while ago and apparently it didn't care, so it's time to
+		 * stop processing its requests for real.
+		 */
+		error = H2_ERR_ENHANCE_YOUR_CALM;
+		TRACE_STATE("Stream limit violated", H2_EV_STRM_SHUT, h2c->conn);
+		HA_ATOMIC_INC(&h2c->px_counters->conn_proto_err);
+		sess_log(h2c->conn->owner);
+		session_inc_http_req_ctr(h2c->conn->owner);
+		session_inc_http_err_ctr(h2c->conn->owner);
+		printf("Oops, forcefully killing the extraneous stream: last_sid=%d max_id=%d strms=%u\n", h2c->last_sid, h2c->max_id, h2c->stream_cnt);
+		goto conn_err;
+	}
 
 	error = h2c_dec_hdrs(h2c, &rxbuf, &flags, &body_len, NULL);
 
@@ -2863,16 +2882,15 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
 		h2s_close(h2s);
 	}
 	TRACE_LEAVE(H2_EV_RX_FRAME|H2_EV_RX_HDR, h2c->conn, h2s);
-	return h2s;
+	goto leave;
 
  conn_err:
 	h2c_error(h2c, error);
 	goto out;
 
  out:
 	h2_release_buf(h2c, &rxbuf);
 	TRACE_DEVEL("leaving on missing data or error", H2_EV_RX_FRAME|H2_EV_RX_HDR, h2c->conn, h2s);
-	return NULL;
+	h2s = NULL;
+	goto leave;
 
  send_rst:
 	/* make the demux send an RST for the current stream. We may only
@@ -2883,6 +2901,28 @@ static struct h2s *h2c_frt_handle_headers(struct h2c *h2c, struct h2s *h2s)
 	h2c->st0 = H2_CS_FRAME_E;
 
 	TRACE_DEVEL("leaving on error", H2_EV_RX_FRAME|H2_EV_RX_HDR, h2c->conn, h2s);
+
+ leave:
+	if (h2_fe_max_total_streams && h2c->stream_cnt >= h2_fe_max_total_streams) {
+		/* we've had enough streams on this connection, time to renew it.
+		 * In order to gracefully do this, we'll advertise a stream limit
+		 * of the current one plus the max concurrent streams value in the
+		 * GOAWAY frame, so that we're certain that the client is aware of
+		 * the limit before creating a new stream, but knows we won't harm
+		 * the streams in flight. Remember that client stream IDs are odd
+		 * so we apply twice the concurrent streams value to the current
+		 * ID.
+		 */
+		printf("last_sid=%d max_id=%d strms=%u\n", h2c->last_sid, h2c->max_id, h2c->stream_cnt);
+
+		if (h2c->last_sid <= 0 ||
+		    h2c->last_sid > h2c->max_id + 2 * h2c_max_concurrent_streams(h2c)) {
+			/* not set yet or was too high */
+			h2c->last_sid = h2c->max_id + 2 * h2c_max_concurrent_streams(h2c);
+			h2c_send_goaway_error(h2c, NULL);
+		}
+	}
+
 	return h2s;
 }
 
@@ -7428,6 +7468,27 @@ static int h2_parse_max_concurrent_streams(char **args, int section_type, struct
 	return 0;
 }
 
+/* config parser for global "tune.h2.fe.max-total-streams" */
+static int h2_parse_max_total_streams(char **args, int section_type, struct proxy *curpx,
+                                      const struct proxy *defpx, const char *file, int line,
+                                      char **err)
+{
+	uint *vptr;
+
+	if (too_many_args(1, args, err, NULL))
+		return -1;
+
+	/* frontend only for now */
+	vptr = &h2_fe_max_total_streams;
+
+	*vptr = atoi(args[1]);
+	if ((int)*vptr < 0) {
+		memprintf(err, "'%s' expects a positive numeric value.", args[0]);
+		return -1;
+	}
+	return 0;
+}
+
 /* config parser for global "tune.h2.max-frame-size" */
 static int h2_parse_max_frame_size(char **args, int section_type, struct proxy *curpx,
 				   const struct proxy *defpx, const char *file, int line,
@@ -7507,6 +7568,7 @@ static struct cfg_kw_list cfg_kws = {ILH, {
 	{ CFG_GLOBAL, "tune.h2.be.max-concurrent-streams", h2_parse_max_concurrent_streams },
 	{ CFG_GLOBAL, "tune.h2.fe.initial-window-size", h2_parse_initial_window_size },
 	{ CFG_GLOBAL, "tune.h2.fe.max-concurrent-streams", h2_parse_max_concurrent_streams },
+	{ CFG_GLOBAL, "tune.h2.fe.max-total-streams", h2_parse_max_total_streams },
 	{ CFG_GLOBAL, "tune.h2.header-table-size", h2_parse_header_table_size },
 	{ CFG_GLOBAL, "tune.h2.initial-window-size", h2_parse_initial_window_size },
 	{ CFG_GLOBAL, "tune.h2.max-concurrent-streams", h2_parse_max_concurrent_streams },