MEDIUM: server: add a new pool-low-conn server setting

The problem with the way idle connections currently work is that it is
easy for one thread to steal all of its siblings' connections, then release
them, after which another thread does the same, and so on. This happens all
the more easily due to scheduling latencies, or to events merged within the
same poll loop, and when dealing with a fast server responding with
sub-millisecond delays, it can result in only one thread being fully at
work at a time.

In such a case, we perform a huge number of takeover() calls, which
consume CPU and require a fair amount of locking, sometimes resulting
in lower performance than expected.

In order to fight against this problem, this patch introduces a new server
setting "pool-low-conn", whose purpose is to dictate when it is allowed to
steal connections from a sibling. As long as the number of idle connections
remains at least as high as this value, it is permitted to take over another
connection. When the idle connection count becomes lower, a thread may only
use its own connections or create a new one. By proceeding like this even
with a low number (typically 2*nbthreads), we quickly end up in a situation
where all active threads have a few connections. It then becomes possible
to connect to a server without bothering other threads the vast majority
of the time, while still being able to use these connections when the
number of available FDs becomes low.
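In other words, a thread checks a single condition before attempting a
takeover. A minimal sketch of that rule, using the counters visible in the
diff below (the helper name may_steal_conn() is illustrative only, not part
of the patch):

    /* Sketch: may this thread take over an idle connection from a sibling?
     * Stealing remains allowed while enough idle connections exist
     * server-wide, or while FDs are becoming scarce and opening new
     * connections is undesirable.
     */
    static inline int may_steal_conn(const struct server *srv)
    {
        return srv->curr_idle_conns >= srv->low_idle_conns ||
               ha_used_fds >= global.tune.pool_low_count;
    }

This is exactly the negation of the early-return condition added to
conn_backend_get() below.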

We also use this threshold instead of global.nbthread in the connection
release logic, which allows more extra connections to be kept when needed.

A test performed with 10000 concurrent HTTP/1 connections, 16 threads
and 210 servers with 1 millisecond of server response time showed the
following numbers:

   haproxy 2.1.7:           185000 requests per second
   haproxy 2.2:             314000 requests per second
   haproxy 2.2 lowconn 32:  352000 requests per second

The takeover rate goes down from 300k/s to 13k/s. The difference is
further amplified as the response time shrinks.
Willy Tarreau, 2020-07-01 07:43:51 +02:00
commit 2f3f4d3441, parent 35e30c9670
5 changed files with 47 additions and 9 deletions


@@ -13404,6 +13404,19 @@ on-marked-up <action>
 
   Actions are disabled by default
 
+pool-low-conn <max>
+  Set a low threshold on the number of idling connections for a server, below
+  which a thread will not try to steal a connection from another thread. This
+  can be useful to improve CPU usage patterns in scenarios involving many very
+  fast servers, in order to ensure all threads will keep a few idle connections
+  all the time instead of letting them accumulate over one thread and migrating
+  them from thread to thread. Typical values of twice the number of threads
+  seem to show very good performance already with sub-millisecond response
+  times. The default is zero, indicating that any idle connection can be used
+  at any time. It is the recommended setting for normal use. This only applies
+  to connections that can be shared according to the same principles as those
+  applying to "http-reuse".
+
 pool-max-conn <max>
   Set the maximum number of idling connections for a server. -1 means unlimited
   connections, 0 means no idle connections. The default is -1. When idle
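As a usage illustration (the backend name, address and values below are made
up; 32 corresponds to 2*nbthread for the 16-thread benchmark above):

    backend app
        http-reuse safe
        server srv1 192.0.2.10:8080 pool-low-conn 32 pool-max-conn 128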


@@ -226,6 +226,7 @@ struct server {
 	struct mt_list *safe_conns;    /* safe idle connections */
 	struct list *available_conns;  /* connections in use, but with new streams still available */
 	unsigned int pool_purge_delay; /* Delay before starting to purge the idle conns pool */
+	unsigned int low_idle_conns;   /* min idle connection count to start picking from other threads */
 	unsigned int max_idle_conns;   /* Max number of connections allowed in the orphan connections list */
 	unsigned int curr_idle_conns;  /* Current number of orphan idling connections, both the idle and the safe lists */
 	unsigned int curr_idle_nb;     /* Current number of connections in the idle list */


@@ -258,8 +258,8 @@ static inline int srv_add_to_idle_list(struct server *srv, struct connection *conn
 	    ((MT_LIST_ISEMPTY(&srv->safe_conns[tid]) &&
 	      (is_safe || MT_LIST_ISEMPTY(&srv->idle_conns[tid]))) ||
 	     (ha_used_fds < global.tune.pool_low_count &&
-	      (srv->curr_used_conns + srv->curr_idle_conns <
-	       MAX(srv->curr_used_conns, srv->est_need_conns) + global.nbthread))) &&
+	      (srv->curr_used_conns + srv->curr_idle_conns <=
+	       MAX(srv->curr_used_conns, srv->est_need_conns) + srv->low_idle_conns))) &&
 	    !conn->mux->used_streams(conn) && conn->mux->avail_streams(conn)) {
 		int retadd;

@@ -1076,13 +1076,14 @@ static struct connection *conn_backend_get(struct server *srv, int is_safe)
 {
 	struct mt_list *mt_list = is_safe ? srv->safe_conns : srv->idle_conns;
 	struct connection *conn;
-	int i;
+	int i; // thread number
 	int found = 0;
 
 	/* We need to lock even if this is our own list, because another
 	 * thread may be trying to migrate that connection, and we don't want
 	 * to end up with two threads using the same connection.
 	 */
+	i = tid;
 	HA_SPIN_LOCK(OTHER_LOCK, &idle_conns[tid].toremove_lock);
 	conn = MT_LIST_POP(&mt_list[tid], struct connection *, list);
 	HA_SPIN_UNLOCK(OTHER_LOCK, &idle_conns[tid].toremove_lock);

@@ -1090,10 +1091,17 @@ static struct connection *conn_backend_get(struct server *srv, int is_safe)
 	/* If we found a connection in our own list, and we don't have to
 	 * steal one from another thread, then we're done.
 	 */
-	if (conn) {
-		i = tid;
-		goto fix_conn;
-	}
+	if (conn)
+		goto done;
+
+	/* Are we allowed to pick from another thread ? We'll still try
+	 * it if we're running low on FDs as we don't want to create
+	 * extra conns in this case, otherwise we can give up if we have
+	 * too few idle conns.
+	 */
+	if (srv->curr_idle_conns < srv->low_idle_conns &&
+	    ha_used_fds < global.tune.pool_low_count)
+		goto done;
 
 	/* Lookup all other threads for an idle connection, starting from tid + 1 */
 	for (i = tid; !found && (i = ((i + 1 == global.nbthread) ? 0 : i + 1)) != tid;) {

@@ -1116,8 +1124,8 @@ static struct connection *conn_backend_get(struct server *srv, int is_safe)
 	if (!found)
 		conn = NULL;
 	else {
-fix_conn:
+done:
 		if (conn) {
 			conn->idle_time = 0;
 			_HA_ATOMIC_SUB(&srv->curr_idle_conns, 1);
 			_HA_ATOMIC_SUB(&srv->curr_idle_thr[i], 1);
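Initializing i to tid up front is what allows the old fix_conn path and the
steal path to share a single done: label: whichever list the connection came
from, the accounting after the label decrements curr_idle_thr[i] for its
actual owner. A simplified sketch of the resulting control flow (locking and
list handling elided; the helper names are illustrative, only the fields and
the done: accounting come from the patch):

    int i = tid;                                /* default owner: ourselves */
    conn = pop_own_idle_list(srv, tid);         /* always try our own list first */
    if (!conn && may_steal_conn(srv))           /* gate sketched earlier */
        conn = steal_from_siblings(srv, &i);    /* on success, i = victim thread */
    if (conn) {                                 /* single accounting point ("done:") */
        conn->idle_time = 0;
        _HA_ATOMIC_SUB(&srv->curr_idle_conns, 1);
        _HA_ATOMIC_SUB(&srv->curr_idle_thr[i], 1);
    }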


@@ -356,6 +356,20 @@ static int srv_parse_pool_purge_delay(char **args, int *cur_arg, struct proxy *c
 	return 0;
 }
 
+static int srv_parse_pool_low_conn(char **args, int *cur_arg, struct proxy *curproxy, struct server *newsrv, char **err)
+{
+	char *arg;
+
+	arg = args[*cur_arg + 1];
+	if (!*arg) {
+		memprintf(err, "'%s' expects <value> as argument.\n", args[*cur_arg]);
+		return ERR_ALERT | ERR_FATAL;
+	}
+
+	newsrv->low_idle_conns = atoi(arg);
+	return 0;
+}
+
 static int srv_parse_pool_max_conn(char **args, int *cur_arg, struct proxy *curproxy, struct server *newsrv, char **err)
 {
 	char *arg;
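One caveat worth noting: like its pool-max-conn sibling just below, the
parser relies on atoi(), so a non-numeric argument silently yields 0 (the
default) rather than an error. A stricter variant could look like the
following sketch (not part of the patch; it keeps the same signature and
assumes <limits.h> for UINT_MAX):

    static int srv_parse_pool_low_conn(char **args, int *cur_arg, struct proxy *curproxy, struct server *newsrv, char **err)
    {
        const char *arg = args[*cur_arg + 1];
        char *end;
        long val;

        if (!*arg) {
            memprintf(err, "'%s' expects <value> as argument.\n", args[*cur_arg]);
            return ERR_ALERT | ERR_FATAL;
        }

        val = strtol(arg, &end, 10);
        if (*end || val < 0 || val > UINT_MAX) { /* reject trailing junk and out-of-range values */
            memprintf(err, "'%s' expects a non-negative integer, got '%s'.\n", args[*cur_arg], arg);
            return ERR_ALERT | ERR_FATAL;
        }

        newsrv->low_idle_conns = val;
        return 0;
    }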
@@ -1256,6 +1270,7 @@ static struct srv_kw_list srv_kws = { "ALL", { }, {
 	{ "no-tfo",           srv_parse_no_tfo,           0, 1 }, /* Disable use of TCP Fast Open */
 	{ "non-stick",        srv_parse_non_stick,        0, 1 }, /* Disable stick-table persistence */
 	{ "observe",          srv_parse_observe,          1, 1 }, /* Enables health adjusting based on observing communication with the server */
+	{ "pool-low-conn",    srv_parse_pool_low_conn,    1, 1 }, /* Set the min number of orphan idle connections before being allowed to pick from other threads */
 	{ "pool-max-conn",    srv_parse_pool_max_conn,    1, 1 }, /* Set the max number of orphan idle connections, 0 means unlimited */
 	{ "pool-purge-delay", srv_parse_pool_purge_delay, 1, 1 }, /* Set the time before we destroy orphan idle connections, defaults to 1s */
 	{ "proto",            srv_parse_proto,            1, 1 }, /* Set the proto to use for all outgoing connections */
@@ -1730,6 +1745,7 @@ static void srv_settings_cpy(struct server *srv, struct server *src, int srv_tmp
 #endif
 	srv->mux_proto = src->mux_proto;
 	srv->pool_purge_delay = src->pool_purge_delay;
+	srv->low_idle_conns = src->low_idle_conns;
 	srv->max_idle_conns = src->max_idle_conns;
 	srv->max_reuse = src->max_reuse;