BUG/MEDIUM: tasks: make sure we pick all tasks in the run queue

Commit 09eeb76 ("BUG/MEDIUM: tasks: Don't forget to increase/decrease
tasks_run_queue.") addressed a count issue in the run queue and uncovered
another issue with the way tasks are dequeued from the global run
queue. The number of tasks to pick is computed using an integer
division, which can leave up to nbthread-1 tasks never being run. The
fix simply consists in getting rid of the division and checking the
task count directly in the loop.

No backport is needed, this is 1.9-specific.
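
To make the rounding loss concrete, here is a small standalone illustration
(not HAProxy code; the variable names merely mirror the ones used above):
with an integer division, the per-thread quota summed over all threads can
fall short of the real task count by up to nbthread-1.

#include <stdio.h>

int main(void)
{
	/* Illustration of the truncation: 7 runnable tasks spread over
	 * 4 threads give a per-thread quota of 7 / 4 = 1, so the quotas
	 * only cover 4 tasks and 3 (= nbthread - 1) are left over.
	 */
	unsigned int tasks_run_queue = 7;
	unsigned int nbthread = 4;
	unsigned int average = tasks_run_queue / nbthread;  /* 1, not 1.75 */

	printf("quota per thread: %u, covered: %u, left over: %u\n",
	       average, average * nbthread,
	       tasks_run_queue - average * nbthread);       /* 1, 4, 3 */
	return 0;
}
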
Author:  Willy Tarreau
Date:    2018-07-26 13:36:01 +02:00
Parent:  306e653331
Commit:  9a77186cb0

@@ -251,9 +251,6 @@ void process_runnable_tasks()
 {
 	struct task *t;
 	int max_processed;
-#ifdef USE_THREAD
-	uint64_t average = 0;
-#endif
 
 	tasks_run_queue_cur = tasks_run_queue; /* keep a copy for reporting */
 	nb_tasks_cur = nb_tasks;
@@ -268,15 +265,12 @@
 	}
 
 #ifdef USE_THREAD
-	average = tasks_run_queue / global.nbthread;
-
 	/* Get some elements from the global run queue and put it in the
 	 * local run queue. To try to keep a bit of fairness, just get as
 	 * much elements from the global list as to have a bigger local queue
 	 * than the average.
 	 */
-	while ((task_list_size[tid] + rqueue_size[tid]) <= average) {
+	while ((task_list_size[tid] + rqueue_size[tid]) * global.nbthread <= tasks_run_queue) {
 		/* we have to restart looking up after every batch */
 		rq_next = eb32sc_lookup_ge(&rqueue, rqueue_ticks - TIMER_LOOK_BACK, tid_bit);
 		if (unlikely(!rq_next)) {
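
For readers skimming the hunk above, here is a simplified sketch of the two
loop conditions (hypothetical helpers with plain parameters instead of
HAProxy's per-thread arrays; it is only meant to show that the fixed form
drops the precomputed divide and re-checks the live task count on every
iteration of the picking loop):

/* Hypothetical helpers, not HAProxy code: local_queued stands in for
 * task_list_size[tid] + rqueue_size[tid], run_queue for the live
 * tasks_run_queue counter.
 */

/* Old form: the threshold is an integer division taken once before the
 * loop, so it no longer tracks the queue while tasks are being picked.
 */
static inline int keep_picking_old(unsigned int local_queued,
                                   unsigned int average_snapshot)
{
	return local_queued <= average_snapshot;
}

/* New form: no division; the caller passes the current value of the
 * shared counter each time the condition is evaluated.
 */
static inline int keep_picking_new(unsigned int local_queued,
                                   unsigned int run_queue,
                                   unsigned int nbthread)
{
	return local_queued * nbthread <= run_queue;
}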