author     Ke Wang <ke.wang@spreadtrum.com>       2017-10-18 15:49:33 +0800
committer  Amit Pundir <amit.pundir@linaro.org>   2017-11-20 21:15:59 +0530
commit     6f8b7ac2224b4701114295f572a15ac03cbfb560 (patch)
tree       d86b6646a36521cbf8fab3e26f6d2a693b89d35d /include/trace
parent     dd1a6f18874b82c276573176bba40dac2f703511 (diff)
trace: sched: Fix util_avg_walt in sched_load_avg_cpu trace
cumulative_runnable_avg was introduced in commit ee4cebd75ed7 ("sched:
EAS/WALT: use cr_avg instead of prev_runnable_sum") to replace
prev_runnable_sum in cpu_util() for task placement.

Fix util_avg_walt in the sched_load_avg_cpu trace, which still used
prev_runnable_sum and so no longer matched cpu_util().

Moreover, fix a potential overflow: cumulative_runnable_avg is a u64, so
scaling it up by SCHED_LOAD_SHIFT before dividing could wrap. Scale the
divisor down instead.
Change-Id: I1220477bf2ff32a6e34a34b6280b15a8178203a8
Signed-off-by: Ke Wang <ke.wang@spreadtrum.com>
Diffstat (limited to 'include/trace')

 include/trace/events/sched.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index d4173039c599..bf96bf05be82 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -724,9 +724,9 @@ TRACE_EVENT(sched_load_avg_cpu,
 		__entry->util_avg_pelt = cfs_rq->avg.util_avg;
 		__entry->util_avg_walt = 0;
 #ifdef CONFIG_SCHED_WALT
-		__entry->util_avg_walt =
-			cpu_rq(cpu)->prev_runnable_sum << SCHED_LOAD_SHIFT;
-		do_div(__entry->util_avg_walt, walt_ravg_window);
+		__entry->util_avg_walt =
+			div64_u64(cpu_rq(cpu)->cumulative_runnable_avg,
+				  walt_ravg_window >> SCHED_LOAD_SHIFT);
 		if (!walt_disabled && sysctl_sched_use_walt_cpu_util)
 			__entry->util_avg = __entry->util_avg_walt;
 #endif