BUG: tasks: fix bug introduced by latest scheduler cleanup

In commit 86eded6c6 ("CLEANUP: tasks: rename task_remove_from_tasklet_list()
to tasklet_remove_*"), which consisted in removing the casts between tasks
and tasklets, I was a bit too quick to believe that we only saw tasklets in
this function, since process_runnable_tasks() also uses it with tasks under
a cast. So removing the bookkeeping on task_list_size was not appropriate.
Bah, the joy of casts which hide the real thing...

This patch does two things at once to address this mess once and for all:
  - it restores the decrement of task_list_size when it's a real task,
    but moves it to process_runnable_tasks() since it's the only place
    where it's allowed to call it with a task

  - it moves the increment there as well and renames
    task_insert_into_tasklet_list() to tasklet_insert_into_tasklet_list()
    for obvious consistency reasons.

This way the increment/decrement of task_list_size is made at the only
places where the cast is enforced, so there is less risk of it being missed.
The comments on top of these functions were updated to reflect that they
are only supposed to be used with tasklets and that the caller is responsible
for keeping task_list_size up to date if it decides to enforce a task there.

Now we don't have to worry anymore about how these functions work outside
of the scheduler, which is better in the long term. Thanks to Christopher for
spotting this mistake.

No backport is needed.
Willy Tarreau 2019-06-14 18:05:54 +02:00
parent cd67bffd26
commit bd20a9dd4e
2 changed files with 11 additions and 11 deletions


@@ -104,8 +104,6 @@ __decl_hathreads(extern HA_SPINLOCK_T rq_lock); /* spin lock related to run queue */
 __decl_hathreads(extern HA_RWLOCK_T wq_lock); /* RW lock related to the wait queue */
 
-static inline void task_insert_into_tasklet_list(struct task *t);
-
 /* return 0 if task is in run queue, otherwise non-zero */
 static inline int task_in_rq(struct task *t)
 {
@@ -235,19 +233,18 @@ static inline void tasklet_wakeup(struct tasklet *tl)
 }
 
-/* may only be used for real tasks */
-static inline void task_insert_into_tasklet_list(struct task *t)
+/* Insert a tasklet into the tasklet list. If used with a plain task instead,
+ * the caller must update the task_list_size.
+ */
+static inline void tasklet_insert_into_tasklet_list(struct tasklet *tl)
 {
-	struct tasklet *tl;
 	_HA_ATOMIC_ADD(&tasks_run_queue, 1);
-	task_per_thread[tid].task_list_size++;
-	tl = (struct tasklet *)t;
 	LIST_ADDQ(&task_per_thread[tid].task_list, &tl->list);
 }
 
-/* remove the task from the tasklet list. The tasklet MUST already be there. If
- * unsure, use tasklet_remove_from_tasklet_list() instead.
+/* Remove the tasklet from the tasklet list. The tasklet MUST already be there.
+ * If unsure, use tasklet_remove_from_tasklet_list() instead. If used with a
+ * plain task, the caller must update the task_list_size.
  */
 static inline void __tasklet_remove_from_tasklet_list(struct tasklet *t)
 {


@@ -369,7 +369,8 @@ void process_runnable_tasks()
 #endif
 		/* And add it to the local task list */
-		task_insert_into_tasklet_list(t);
+		tasklet_insert_into_tasklet_list((struct tasklet *)t);
+		task_per_thread[tid].task_list_size++;
 		activity[tid].tasksw++;
 	}
@@ -389,6 +390,8 @@ void process_runnable_tasks()
 		state = _HA_ATOMIC_XCHG(&t->state, TASK_RUNNING);
 		__ha_barrier_atomic_store();
 		__tasklet_remove_from_tasklet_list((struct tasklet *)t);
+		if (!TASK_IS_TASKLET(t))
+			task_per_thread[tid].task_list_size--;
 
 		ti->flags &= ~TI_FL_STUCK; // this thread is still running
 		activity[tid].ctxsw++;