path: root/kernel/sched_rt.c
author    Sripathi Kodi <sripathik@in.ibm.com>    2008-11-05 18:57:14 +0530
committer Ingo Molnar <mingo@elte.hu>    2008-11-06 22:12:09 +0100
commit    cf7f8690e864c6fe11e77202dd847fa60f483418 (patch)
tree      9f0e3cca10a550698c3761c3ee5de6496ecf1e78 /kernel/sched_rt.c
parent    e113a745f693af196c8081b328bf42def086989b (diff)
sched, lockdep: inline double_unlock_balance()
We have a test case which measures the variation in the amount of time needed to perform a fixed amount of work on the preempt_rt kernel. We started seeing deterioration in its performance recently. The test should never take more than 10 microseconds, but we started seeing a 5-10% failure rate.

Using the elimination method, we traced the problem to commit 1b12bbc747560ea68bcc132c3d05699e52271da0 (lockdep: re-annotate scheduler runqueues). When LOCKDEP is disabled, that patch only adds an additional function call to double_unlock_balance(). Hence I inlined double_unlock_balance() and the problem went away. Here is a patch that makes this change.

Signed-off-by: Sripathi Kodi <sripathik@in.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'kernel/sched_rt.c')
-rw-r--r--    kernel/sched_rt.c    3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index c7963d5d062..2bdd4442359 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -910,7 +910,8 @@ static void put_prev_task_rt(struct rq *rq, struct task_struct *p)
#define RT_MAX_TRIES 3
static int double_lock_balance(struct rq *this_rq, struct rq *busiest);
-static void double_unlock_balance(struct rq *this_rq, struct rq *busiest);
+static inline void double_unlock_balance(struct rq *this_rq,
+ struct rq *busiest);
static void deactivate_task(struct rq *rq, struct task_struct *p, int sleep);
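
For context, the hunk above only updates the forward declaration in kernel/sched_rt.c; the definition being inlined lives in kernel/sched.c and is not shown in this diffstat. A minimal sketch of the resulting inline definition follows, with the body reconstructed from kernels of this era as an assumption, not as part of this diff:

static inline void double_unlock_balance(struct rq *this_rq,
					 struct rq *busiest)
	__releases(busiest->lock)
{
	spin_unlock(&busiest->lock);
	/*
	 * Restore the lockdep subclass of this_rq->lock, which
	 * double_lock_balance() may have re-annotated. With
	 * CONFIG_LOCKDEP disabled, lock_set_subclass() compiles away
	 * to nothing, so once the helper is inlined the !LOCKDEP case
	 * reduces to a plain spin_unlock() with no call overhead.
	 */
	lock_set_subclass(&this_rq->lock.dep_map, 0, _RET_IP_);
}

With the definition out of line, every call site paid a function call even when LOCKDEP was disabled; marking it inline lets the compiler fold the helper into its callers, which is what removed the latency spikes described in the commit message.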