path: root/kernel/sched/fair.c
author    Tao Zhou <ouwen210@hotmail.com>  2020-03-19 11:39:20 +0800
committer Peter Zijlstra <peterz@infradead.org>  2020-03-20 13:06:20 +0100
commit    6c8116c914b65be5e4d6f66d69c8142eb0648c22 (patch)
tree      380520e9cbfbe99435df4f285d5894f0a2dfa840 /kernel/sched/fair.c
parent    e94f80f6c49020008e6fa0f3d4b806b8595d17d8 (diff)
sched/fair: Fix condition of avg_load calculation
In update_sg_wakeup_stats(), the comment says:

    Computing avg_load makes sense only when group is fully busy or
    overloaded.

But the code below this comment does not check that condition. From reading
how avg_load is used in other functions, avg_load should indeed be computed
only in the fully busy or overloaded case: the comment is correct and the
condition is wrong. Change the condition accordingly.

Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Signed-off-by: Tao Zhou <ouwen210@hotmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Link: https://lkml.kernel.org/r/Message-ID:
Diffstat (limited to 'kernel/sched/fair.c')
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 783356f96b7b..d7fb20adabeb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8631,7 +8631,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 	/*
 	 * Computing avg_load makes sense only when group is fully busy or
 	 * overloaded
 	 */
-	if (sgs->group_type < group_fully_busy)
+	if (sgs->group_type == group_fully_busy ||
+	    sgs->group_type == group_overloaded)
 		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
 				sgs->group_capacity;