|author||Peter Zijlstra (Intel) <email@example.com>||2020-03-27 11:44:56 +0100|
|committer||Thomas Gleixner <firstname.lastname@example.org>||2020-05-12 17:10:48 +0200|
sched: Clean up scheduler_ipi()
The scheduler IPI has grown weird and wonderful over the years; time for spring cleaning.

Move all the non-trivial stuff out of it and into a regular smp function call IPI. This reduces scheduler_ipi() to most of its former NOP glory and keeps the interrupt vector lean and mean.

Aside from that, avoiding the full irq_enter() in the x86 IPI implementation is incorrect, as scheduler_ipi() can be instrumented. To work around that, scheduler_ipi() had an irq_enter/exit() hack for when heavy work was pending. That hack is gone now.

Signed-off-by: Peter Zijlstra (Intel) <email@example.com>
Signed-off-by: Thomas Gleixner <firstname.lastname@example.org>
Reviewed-by: Alexandre Chartre <email@example.com>
Link: https://firstname.lastname@example.org
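For illustration, the smp function call mechanism this work is moved onto follows roughly the pattern below. This is a hedged sketch of the in-kernel API, not standalone-runnable code; the exact initialization helper and field layout used by this series may differ.

	/* At runqueue setup, a per-rq call_single_data entry names the
	 * function to run on the target CPU (sketch; the series uses a
	 * nohz_csd_func() handler in kernel/sched/core.c): */
	rq->nohz_csd.func = nohz_csd_func;   /* executed on the remote CPU */
	rq->nohz_csd.info = rq;

	/* kick_ilb() then raises a plain smp-function-call IPI on the idle
	 * CPU; the target runs nohz_csd_func() from the IPI context and
	 * raises the nohz idle-balance softirq there, so none of this work
	 * has to live inside scheduler_ipi() itself: */
	smp_call_function_single_async(ilb_cpu, &cpu_rq(ilb_cpu)->nohz_csd);

One constraint of this pattern: a csd entry must not be reused while a previous asynchronous call on it is still in flight, which is why each runqueue carries its own pre-initialized nohz_csd.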
Diffstat (limited to 'kernel/sched/fair.c')
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 46b7bd41573f..6b7f1474e2d6 100644
@@ -10000,12 +10000,11 @@ static void kick_ilb(unsigned int flags)
 	/*
-	 * Use smp_send_reschedule() instead of resched_cpu().
-	 * This way we generate a sched IPI on the target CPU which
+	 * This way we generate an IPI on the target CPU which
 	 * is idle. And the softirq performing nohz idle load balance
 	 * will be run before returning from the IPI.
 	 */
-	smp_send_reschedule(ilb_cpu);
+	smp_call_function_single_async(ilb_cpu, &cpu_rq(ilb_cpu)->nohz_csd);