author		Peter Zijlstra <peterz@infradead.org>	2013-09-27 17:30:03 +0200
committer	Robin Randhawa <robin.randhawa@arm.com>	2015-04-09 12:25:52 +0100
commit		33d9314e8156c43c8f9dfbbf7cda3ab3f29a8f59 (patch)
tree		1db32d95c38220c5d93db6a3f3754416bbe53918 /include
parent		7199c4d398487086f1afc2bd77b4ec31e8f222f7 (diff)
sched: Revert need_resched() to look at TIF_NEED_RESCHED
Yuanhan reported a serious throughput regression in his pigz
benchmark. Using the ftrace patch I found that several idle
paths need more TLC before we can switch the generic
need_resched() over to preempt_need_resched.
The preemption paths benefit most from preempt_need_resched and
do indeed use it; all other need_resched() users don't really
care that much so reverting need_resched() back to
tif_need_resched() is the simple and safe solution.
Reported-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: lkp@linux.intel.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20130927153003.GF15690@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 75f93fed50c2abadbab6ef546b265f51ca975b27)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Diffstat (limited to 'include')
-rw-r--r--	include/asm-generic/preempt.h	8
-rw-r--r--	include/linux/sched.h		5
2 files changed, 5 insertions, 8 deletions
diff --git a/include/asm-generic/preempt.h b/include/asm-generic/preempt.h
index 82d958fc3823..bdaaa1d49e26 100644
--- a/include/asm-generic/preempt.h
+++ b/include/asm-generic/preempt.h
@@ -85,14 +85,6 @@ static __always_inline bool __preempt_count_dec_and_test(void)
 }
 
 /*
- * Returns true when we need to resched -- even if we can not.
- */
-static __always_inline bool need_resched(void)
-{
-	return unlikely(test_preempt_need_resched());
-}
-
-/*
  * Returns true when we need to resched and can (barring IRQ state).
  */
 static __always_inline bool should_resched(void)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index ce169ac8489f..a49db1faab44 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2616,6 +2616,11 @@ static inline bool __must_check current_clr_polling_and_test(void)
 }
 #endif
 
+static __always_inline bool need_resched(void)
+{
+	return unlikely(tif_need_resched());
+}
+
 /*
  * Thread group CPU time accounting.
  */