author | Jason Low <jason.low2@hp.com> | 2014-04-28 15:45:54 -0700
committer | Alex Shi <alex.shi@linaro.org> | 2014-12-02 22:36:55 +0800
commit | 7bb9eaab7b613654a42be32db0ada61a4073640f (patch)
tree | 0342fb83784a661121a3a4caf75efc687c6adcb4
parent | 2316b9ffaabd4221a7c1b71a7dd8640d234143a4 (diff)
sched: Fix updating rq->max_idle_balance_cost and rq->next_balance in idle_balance()
The following commit:
e5fc66119ec9 ("sched: Fix race in idle_balance()")
can potentially cause rq->max_idle_balance_cost to not be updated,
even when load_balance(NEWLY_IDLE) is attempted and the per-sd
max cost value is updated.
Preeti noticed a similar issue with updating rq->next_balance.
This patch fixes the issue by making sure those values are still checked and
updated even if a task gets enqueued while we are browsing the domains.
Signed-off-by: Jason Low <jason.low2@hp.com>
Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: morten.rasmussen@arm.com
Cc: aswin@hp.com
Cc: daniel.lezcano@linaro.org
Cc: alex.shi@linaro.org
Cc: efault@gmx.de
Cc: vincent.guittot@linaro.org
Link: http://lkml.kernel.org/r/1398725155-7591-2-git-send-email-jason.low2@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0e5b5337f0da073e1f17aec3c322ea7826975d0d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
-rw-r--r-- | kernel/sched/fair.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 59de4d6e87e..c79fbe996cc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5523,6 +5523,7 @@ static int idle_balance(struct rq *this_rq)
 	int this_cpu = this_rq->cpu;
 
 	idle_enter_fair(this_rq);
+
 	/*
 	 * We must set idle_stamp _before_ calling idle_balance(), such that we
 	 * measure the duration of idle_balance() as idle time.
@@ -5575,14 +5576,16 @@ static int idle_balance(struct rq *this_rq)
 
 	raw_spin_lock(&this_rq->lock);
 
+	if (curr_cost > this_rq->max_idle_balance_cost)
+		this_rq->max_idle_balance_cost = curr_cost;
+
 	/*
-	 * While browsing the domains, we released the rq lock.
-	 * A task could have be enqueued in the meantime
+	 * While browsing the domains, we released the rq lock, a task could
+	 * have been enqueued in the meantime. Since we're not going idle,
+	 * pretend we pulled a task.
 	 */
-	if (this_rq->cfs.h_nr_running && !pulled_task) {
+	if (this_rq->cfs.h_nr_running && !pulled_task)
 		pulled_task = 1;
-		goto out;
-	}
 
 	if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
 		/*
@@ -5592,9 +5595,6 @@ static int idle_balance(struct rq *this_rq)
 		this_rq->next_balance = next_balance;
 	}
 
-	if (curr_cost > this_rq->max_idle_balance_cost)
-		this_rq->max_idle_balance_cost = curr_cost;
-
 out:
 	if (pulled_task)
 		this_rq->idle_stamp = 0;