workqueue: better define synchronization rule around rescuer->pool updates
Rescuers visit different worker_pools to process work items from pools
under pressure.  Currently, rescuer->pool is updated outside any
locking, and when an outsider looks at a rescuer there's no way to
tell whether or when rescuer->pool is going to change.  While this
doesn't currently cause any problem, it is nasty.

With recent worker_maybe_bind_and_lock() changes, we can move
rescuer->pool updates inside pool locks such that if rescuer->pool
equals a locked pool, it's guaranteed to stay that way until the pool
is unlocked.
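
To illustrate the resulting rule, here is a minimal sketch (the
observer below, rescuer_is_serving(), is hypothetical and not part of
this patch): any path holding pool->lock may test rescuer->pool
against that pool and rely on the answer until it drops the lock.

  /* hypothetical observer, not part of this patch */
  static bool rescuer_is_serving(struct worker *rescuer,
				 struct worker_pool *pool)
  {
	lockdep_assert_held(&pool->lock);

	/*
	 * The rescuer sets and clears its ->pool only while holding
	 * pool->lock, so this comparison stays valid until the lock
	 * is released.
	 */
	return rescuer->pool == pool;
  }
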
Move rescuer->pool updates inside pool->lock.

This patch doesn't introduce any visible behavior difference.

tj: Updated the description.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 09545d4..fd9a28a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2357,8 +2357,8 @@
 		mayday_clear_cpu(cpu, wq->mayday_mask);
 
 		/* migrate to the target cpu if possible */
-		rescuer->pool = pool;
 		worker_maybe_bind_and_lock(pool);
+		rescuer->pool = pool;
 
 		/*
 		 * Slurp in all works issued via this workqueue and
@@ -2379,6 +2379,7 @@
 		if (keep_working(pool))
 			wake_up_worker(pool);
 
+		rescuer->pool = NULL;
 		spin_unlock_irq(&pool->lock);
 	}
diff --git a/kernel/workqueue_internal.h b/kernel/workqueue_internal.h
index 0765026..f9c8877 100644
--- a/kernel/workqueue_internal.h
+++ b/kernel/workqueue_internal.h
@@ -32,6 +32,7 @@
 	struct list_head	scheduled;	/* L: scheduled works */
 	struct task_struct	*task;		/* I: worker task */
 	struct worker_pool	*pool;		/* I: the associated pool */
+						/* L: for rescuers */
 	/* 64 bytes boundary on 64bit, 32 on 32bit */
 	unsigned long		last_active;	/* L: last active timestamp */
 	unsigned int		flags;		/* X: flags */