BUG/MINOR: buffers: Fix b_alloc_margin to be "functionally" thread-safe

b_alloc_margin is, strictly speaking, thread-safe: it will not crash
HAProxy. But its contract is no longer respected in a multithreaded
environment. This function must guarantee that <margin> buffers are still
available in the pool after the allocation. To provide that guarantee, we
must hold the memory pool's lock for the whole operation, which also means
calling the internal, lockless memory functions (those prefixed with '__').
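
To illustrate what the lock buys, here is a minimal standalone sketch,
assuming a simplified pool (the simple_pool type and alloc_with_margin name
are illustrative, not HAProxy's real structures): the margin check and the
dequeue must form a single critical section, otherwise a concurrent thread
can consume entries between the two steps and silently void the <margin>
guarantee.

#include <pthread.h>

/* Simplified stand-in for a memory pool; freeing is not modeled. */
struct simple_pool {
	pthread_mutex_t lock;
	void **entries;    /* preallocated entries */
	int allocated;     /* total entries owned by the pool */
	int used;          /* entries currently handed out */
};

/* Return an entry only if <margin> entries remain available afterwards.
 * The check and the dequeue sit under ONE lock: unlocking between them
 * would let another thread drain the pool and break the guarantee. */
static void *alloc_with_margin(struct simple_pool *p, int margin)
{
	void *ptr = NULL;

	pthread_mutex_lock(&p->lock);
	if (p->allocated - p->used > margin)
		ptr = p->entries[p->used++];
	pthread_mutex_unlock(&p->lock);
	return ptr;
}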

For the record, this patch fixes a pernicious bug that shows up after a soft
reload, where some streams could remain blocked forever, waiting for a buffer
in the buffer_wq list. This happens because pool_gc2 is called during a soft
reload, making some calls to b_alloc_fast fail.

This is specific to threads; no backport is needed.
Christopher Faulet 2017-11-10 10:39:16 +01:00 committed by Willy Tarreau
parent 9dcf9b6f03
commit fa5c812a6b

@@ -722,26 +722,44 @@ static inline void b_free(struct buffer **buf)
  * a response buffer could not be allocated anymore, resulting in a deadlock.
  * This means that we sometimes need to try to allocate extra entries even if
  * only one buffer is needed.
+ *
+ * We need to lock the pool here to be sure to have <margin> buffers available
+ * after the allocation, regardless of how many threads are doing it at the
+ * same time. So we use the internal, lockless memory functions (prefixed with '__').
  */
 static inline struct buffer *b_alloc_margin(struct buffer **buf, int margin)
 {
-	struct buffer *next;
+	struct buffer *b;
 
 	if ((*buf)->size)
 		return *buf;
 
 	*buf = &buf_wanted;
+	HA_SPIN_LOCK(POOL_LOCK, &pool2_buffer->lock);
 
 	/* fast path */
-	if ((pool2_buffer->allocated - pool2_buffer->used) > margin)
-		return b_alloc_fast(buf);
+	if ((pool2_buffer->allocated - pool2_buffer->used) > margin) {
+		b = __pool_get_first(pool2_buffer);
+		if (likely(b)) {
+			HA_SPIN_UNLOCK(POOL_LOCK, &pool2_buffer->lock);
+			b->size = pool2_buffer->size - sizeof(struct buffer);
+			b_reset(b);
+			*buf = b;
+			return b;
+		}
+	}
 
-	next = pool_refill_alloc(pool2_buffer, margin);
-	if (!next)
-		return next;
+	/* slow path, uses malloc() */
+	b = __pool_refill_alloc(pool2_buffer, margin);
 
-	next->size = pool2_buffer->size - sizeof(struct buffer);
-	b_reset(next);
-	*buf = next;
-	return next;
+	HA_SPIN_UNLOCK(POOL_LOCK, &pool2_buffer->lock);
+
+	if (b) {
+		b->size = pool2_buffer->size - sizeof(struct buffer);
+		b_reset(b);
+		*buf = b;
+	}
+	return b;
 }
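
For context, here is a simplified sketch of the calling pattern this
guarantee serves, modeled on HAProxy 1.8's channel_alloc_buffer (the
function name below is illustrative; buffer_wq locking and wakeup details
are elided): a caller that fails to get a buffer queues itself on
buffer_wq, which is exactly where streams were getting stuck before this
fix.

/* Hypothetical caller: requests preserve a reserve of buffers so that
 * responses can always be allocated; on failure, wait on buffer_wq for
 * a wakeup triggered by a later buffer release. */
static inline int try_alloc_channel_buffer(struct channel *chn, struct buffer_wait *wait)
{
	int margin = 0;

	if (!(chn->flags & CF_ISRESP))
		margin = global.tune.reserved_bufs;

	if (b_alloc_margin(&chn->buf, margin) != NULL)
		return 1;  /* got a buffer, reserve still intact */

	if (LIST_ISEMPTY(&wait->list))
		LIST_ADDQ(&buffer_wq, &wait->list);
	return 0;
}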