author     Milton Miller <firstname.lastname@example.org>    2011-03-15 13:27:16 -0600
committer  Eric Miao <email@example.com>                     2011-11-10 07:39:01 +0800
call_function_many: fix list delete vs add race
commit e6cd1e07a185d5f9b0aa75e020df02d3c1c44940 upstream.

Peter pointed out there was nothing preventing the list_del_rcu in
smp_call_function_interrupt from running before the list_add_rcu in
smp_call_function_many.

Fix this by not setting refs until we have gotten the lock for the
list. Take advantage of the wmb in list_add_rcu to save an explicit
additional one.

I tried to force this race with a udelay before the lock & list_add
and by mixing all 64 online cpus with just 3 random cpus in the mask,
but was unsuccessful. Still, inspection shows a valid race, and the
fix is an extension of the existing protection window in the current
code.

Reported-by: Peter Zijlstra <firstname.lastname@example.org>
Signed-off-by: Milton Miller <email@example.com>
Signed-off-by: Linus Torvalds <firstname.lastname@example.org>
Signed-off-by: Greg Kroah-Hartman <email@example.com>
(cherry picked from commit cb8385e61fb736ef6748d305d868b28a9f649ef1)
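To make the ordering concrete, below is a minimal userspace model of the
handshake this patch establishes: the payload is fully written first, and
refs is set last, so a consumer that observes refs != 0 is guaranteed to
also see a complete data block. All names here are invented for
illustration, and C11 release/acquire atomics stand in for the kernel's
wmb() and the interrupt handler's read side; this is a sketch of the
idea, not the kernel code.

    /* Minimal userspace model of the refs-publish handshake
     * (hypothetical names, not kernel/smp.c).
     * Build: cc -std=c11 -pthread model.c */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct call_data {
        void (*func)(void *info);   /* payload, written first       */
        void *info;
        atomic_int refs;            /* 0 = not ready; written last  */
    };

    static struct call_data data;   /* static: refs starts at 0     */

    static void say_hello(void *info)
    {
        printf("%s\n", (const char *)info);
    }

    /* Plays the role of the interrupt handler: wait until refs is
     * filled out before trusting func/info. */
    static void *consumer(void *arg)
    {
        (void)arg;
        while (atomic_load_explicit(&data.refs, memory_order_acquire) == 0)
            ;                       /* spin until published         */
        data.func(data.info);       /* payload writes now visible   */
        atomic_fetch_sub_explicit(&data.refs, 1, memory_order_release);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        if (pthread_create(&t, NULL, consumer, NULL))
            return 1;

        data.func = say_hello;      /* 1. fill in the payload       */
        data.info = "hello from the producer";
        /* 2. publish: this release store is the analogue of the wmb()
         * in list_add_rcu -- no thread may observe refs != 0 without
         * also observing the func/info writes above. */
        atomic_store_explicit(&data.refs, 1, memory_order_release);

        pthread_join(t, NULL);
        return 0;
    }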
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index 9910744f085..aaeee20c563 100644
@@ -491,14 +491,15 @@ void smp_call_function_many(const struct cpumask *mask,
- * To ensure the interrupt handler gets a complete view
- * we order the cpumask and refs writes and order the read
- * of them in the interrupt handler. In addition we may
- * only clear our own cpu bit from the mask.
+ * We reuse the call function data without waiting for any grace
+ * period after some other cpu removes it from the global queue.
+ * This means a cpu might find our data block as it is written.
+ * The interrupt handler waits until it sees refs filled out
+ * while its cpu mask bit is set; here we may only clear our
+ * own cpu mask bit, and must wait to set refs until we are sure
+ * previous writes are complete and we have obtained the lock to
+ * add the element to the queue.
- atomic_set(&data->refs, cpumask_weight(data->cpumask));
@@ -507,6 +508,11 @@ void smp_call_function_many(const struct cpumask *mask,
* will not miss any other list entries:
+ * We rely on the wmb() in list_add_rcu to order the writes
+ * to func, data, and cpumask before this write to refs.
+ atomic_set(&data->refs, cpumask_weight(data->cpumask));
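Condensed, the net ordering change across the two hunks is shown below.
The lock and queue identifiers (call_function.lock, call_function.queue,
data->csd.list) come from the surrounding kernel/smp.c of this era and
are not part of the excerpt above, so treat this as a sketch of the
before/after ordering rather than a verbatim quote of the file.

    /* Before: refs was set before the list lock was taken.  A cpu still
     * traversing the list from the block's previous use could see the
     * new refs value, run the function, and list_del_rcu the element
     * before this cpu's list_add_rcu had run. */
    atomic_set(&data->refs, cpumask_weight(data->cpumask));
    raw_spin_lock_irqsave(&call_function.lock, flags);
    list_add_rcu(&data->csd.list, &call_function.queue);
    raw_spin_unlock_irqrestore(&call_function.lock, flags);

    /* After: the element is queued first, under the lock.  The wmb() in
     * list_add_rcu orders the earlier func/data/cpumask writes, so the
     * interrupt handler seeing refs != 0 implies the block is complete
     * and on the list. */
    raw_spin_lock_irqsave(&call_function.lock, flags);
    list_add_rcu(&data->csd.list, &call_function.queue);
    atomic_set(&data->refs, cpumask_weight(data->cpumask));
    raw_spin_unlock_irqrestore(&call_function.lock, flags);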