MEDIUM: conn: make conn_backend_get always scan the same group

We don't want to pick idle connections from another thread group: that
would be very slow because it would force undesirable data sharing
between groups.

This patch makes sure that the lookup starts from the current thread
group's threads only and loops over that range exclusively.
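
To illustrate the bounded scan in isolation, here is a minimal plain-C
sketch (not the HAProxy source; scan_group, base, count and start are
hypothetical stand-ins for tg->base, tg->count and the computed starting
thread). It shows the loop wrapping at the group boundary rather than at
the total thread count:

    #include <stdio.h>

    /* Visit every thread id in [base, base + count) exactly once,
     * starting at "start" and wrapping at the end of the group
     * rather than at the total thread count.
     */
    static void scan_group(unsigned int base, unsigned int count,
                           unsigned int start)
    {
        unsigned int i = start;

        do {
            printf("visiting thread %u\n", i);
            /* wrap back to the group's first thread, not to thread 0 */
            i = (i + 1 == base + count) ? base : i + 1;
        } while (i != start);
    }

    int main(void)
    {
        scan_group(4, 4, 6); /* group covers threads 4-7; visits 6,7,4,5 */
        return 0;
    }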

It's worth noting that the next_takeover index remains per-server and
will bounce around when multiple groups use it at the same time. But we
preserve its perturbation by applying a modulo when retrieving it, so
that when groups are of the same size (the most common case) the index
does not even change. At this time it doesn't seem worth storing one
index per group in each server, but that may become an option if any
contention is detected later.
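
To make the modulo trick concrete, here is a minimal plain-C sketch
(again not the HAProxy source; group_scan_start is a hypothetical helper
mirroring the stop computation in the diff below). With equal-sized
groups, an index already below count passes through unchanged, which is
why the common case costs nothing:

    #include <stdio.h>

    /* Map a shared, possibly out-of-range next_takeover index into
     * this group's thread id range [base, base + count).
     */
    static unsigned int group_scan_start(unsigned int next_takeover,
                                         unsigned int base,
                                         unsigned int count)
    {
        unsigned int stop = next_takeover;

        if (stop >= count)
            stop %= count;    /* keep the perturbation, wrap into [0, count) */
        return stop + base;   /* shift into this group's thread ids */
    }

    int main(void)
    {
        /* two equal-sized groups of 4 threads: ids 0-3 and 4-7 */
        for (unsigned int idx = 0; idx < 8; idx++)
            printf("takeover=%u -> group0 starts at %u, group1 starts at %u\n",
                   idx, group_scan_start(idx, 0, 4),
                   group_scan_start(idx, 4, 4));
        return 0;
    }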
commit 15c5500b6e
parent d60269f93f
Author: Willy Tarreau
Date:   2022-07-07 09:12:45 +02:00


@@ -1204,12 +1204,13 @@ static struct connection *conn_backend_get(struct stream *s, struct server *srv,
 		goto done;
 	/* Lookup all other threads for an idle connection, starting from last
-	 * unvisited thread.
+	 * unvisited thread, but always staying in the same group.
 	 */
 	stop = srv->next_takeover;
-	if (stop >= global.nbthread)
-		stop = 0;
+	if (stop >= tg->count)
+		stop %= tg->count;
+	stop += tg->base;
 	i = stop;
 	do {
 		if (!srv->curr_idle_thr[i] || i == tid)
@@ -1244,13 +1245,13 @@ static struct connection *conn_backend_get(struct stream *s, struct server *srv,
 			}
 		}
 		HA_SPIN_UNLOCK(IDLE_CONNS_LOCK, &idle_conns[i].idle_conns_lock);
-	} while (!found && (i = (i + 1 == global.nbthread) ? 0 : i + 1) != stop);
+	} while (!found && (i = (i + 1 == tg->base + tg->count) ? tg->base : i + 1) != stop);
 	if (!found)
 		conn = NULL;
  done:
 	if (conn) {
-		_HA_ATOMIC_STORE(&srv->next_takeover, (i + 1 == global.nbthread) ? 0 : i + 1);
+		_HA_ATOMIC_STORE(&srv->next_takeover, (i + 1 == tg->base + tg->count) ? tg->base : i + 1);
 		srv_use_conn(srv, conn);