|
|
|
We use get_task_struct to increment the ref count on a task_struct
so that even if the task dies with a pending migration we are still
able to read the memory without causing a fault.
In the case of non-running tasks, we forgot to decrement the ref
count once we were done with the task.
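The missing release is the usual pinning pattern; a minimal sketch
(the migration plumbing is elided):

  get_task_struct(p);   /* pin: p stays readable even if the task dies */
  /* ... perform or abandon the pending migration ... */
  put_task_struct(p);   /* the release that was missing for non-running tasks */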
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Since TC2 power curves don't really have a utilisation hotspot where
packing makes sense, make packing default to disabled when it is
present on a TC2 system.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The presence of packing permanently changed the idle balance
behaviour. Do not restrict idle balance on the smallest CPUs when
packing is present but disabled.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
If we migrate a sleeping task away from a CPU which has the
tick stopped, then both the clock_task and decay_counter will
be out of date for that CPU and we will not decay load correctly
regardless of how often we update the blocked load.
This is only an issue for tasks which are not on a runqueue (because
otherwise that CPU would be awake) while, at the same time, the CPU
the task previously ran on has the tick stopped.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
If an entity happens to sleep for less than one tick duration
the tracked load associated with that entity can be decayed by an
unexpectedly large amount if it is later migrated to a different
CPU. This can interfere with correct scheduling when entity load
is used for decision making.
The reason for this is that when an entity is dequeued and enqueued
quickly, such that se.avg.decay_count and cfs_rq.decay_counter
do not differ when that entity is enqueued again,
__synchronize_entity_decay skips the calculation step and also skips
clearing the decay_count. At a later time that entity may be
migrated and its load will be decayed incorrectly.
All users of this function expect decay_count to be zeroed after
use.
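A sketch of the fixed helper (3.10-era layout), moving the clear
before the early return:

  static inline u64 __synchronize_entity_decay(struct sched_entity *se)
  {
      struct cfs_rq *cfs_rq = cfs_rq_of(se);
      u64 decays = atomic64_read(&cfs_rq->decay_counter);

      decays -= se->avg.decay_count;
      se->avg.decay_count = 0;    /* zeroed even when decays == 0 */
      if (!decays)
          return 0;

      se->avg.load_avg_contrib =
              decay_load(se->avg.load_avg_contrib, decays);
      return decays;
  }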
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
|
|
commit 0ac9b1c21874d2490331233b3242085f8151e166 upstream.
Currently, group entity load-weights are initialized to zero. This
admits some races with respect to the first time they are re-weighted in
early use. ( Let g[x] denote the se for "g" on cpu "x". )
Suppose that we have root->a and that a enters a throttled state,
immediately followed by a[0]->t1 (the only task running on cpu[0])
blocking:
  put_prev_task(group_cfs_rq(a[0]), t1)
    put_prev_entity(..., t1)
      check_cfs_rq_runtime(group_cfs_rq(a[0]))
        throttle_cfs_rq(group_cfs_rq(a[0]))
Then, before unthrottling occurs, let a[0]->b[0]->t2 wake for the first
time:
  enqueue_task_fair(rq[0], t2)
    enqueue_entity(group_cfs_rq(b[0]), t2)
      enqueue_entity_load_avg(group_cfs_rq(b[0]), t2)
      account_entity_enqueue(group_cfs_rq(b[0]), t2)
      update_cfs_shares(group_cfs_rq(b[0]))
        < skipped because b is part of a throttled hierarchy >
    enqueue_entity(group_cfs_rq(a[0]), b[0])
    ...
...
We now have b[0] enqueued, yet group_cfs_rq(a[0])->load.weight == 0
which violates invariants in several code-paths. Eliminate the
possibility of this by initializing group entity weight.
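The fix itself is one line; a sketch, assuming it lands in
init_tg_cfs_entry() where the group se is set up:

  /* guarantee group entities always have weight */
  update_load_set(&se->load, NICE_0_LOAD);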
Signed-off-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131016181627.22647.47543.stgit@sword-of-the-dawn.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 927b54fccbf04207ec92f669dce6806848cbec7d upstream.
__start_cfs_bandwidth calls hrtimer_cancel while holding rq->lock,
waiting for the hrtimer to finish. However, if sched_cfs_period_timer
runs for another loop iteration, the hrtimer can attempt to take
rq->lock, resulting in deadlock.
Fix this by ensuring that cfs_b->timer_active is cleared only if the
_latest_ call to do_sched_cfs_period_timer is returning as idle. Then
__start_cfs_bandwidth can just call hrtimer_try_to_cancel and wait for
that to succeed or timer_active == 1.
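A sketch of the resulting __start_cfs_bandwidth() loop (comments
condensed; entered with cfs_b->lock held):

  while (unlikely(hrtimer_active(&cfs_b->period_timer)) &&
         hrtimer_try_to_cancel(&cfs_b->period_timer) < 0) {
      /* bounce the lock so do_sched_cfs_period_timer() can finish */
      raw_spin_unlock(&cfs_b->lock);
      cpu_relax();
      raw_spin_lock(&cfs_b->lock);
      /* if someone else restarted the timer we are done */
      if (cfs_b->timer_active)
          return;
  }
  cfs_b->timer_active = 1;
  start_bandwidth_timer(&cfs_b->period_timer, cfs_b->period);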
Signed-off-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: pjt@google.com
Link: http://lkml.kernel.org/r/20131016181622.22647.16643.stgit@sword-of-the-dawn.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit db06e78cc13d70f10877e0557becc88ab3ad2be8 upstream.
hrtimer_expires_remaining does not take internal hrtimer locks and thus
must be guarded against concurrent __hrtimer_start_range_ns (but
returning HRTIMER_RESTART is safe). Use cfs_b->lock to make it safe.
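A sketch of the pattern, assuming the check sits in the slack-timer
path:

  raw_spin_lock(&cfs_b->lock);
  /* runtime_refresh_within() reads hrtimer_expires_remaining() and is
   * now safe against a concurrent __hrtimer_start_range_ns() */
  if (runtime_refresh_within(cfs_b, min_bandwidth_expiration)) {
      raw_spin_unlock(&cfs_b->lock);
      return;
  }
  /* ... gather slack runtime, still under cfs_b->lock ... */
  raw_spin_unlock(&cfs_b->lock);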
Signed-off-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: pjt@google.com
Link: http://lkml.kernel.org/r/20131016181617.22647.73829.stgit@sword-of-the-dawn.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 1ee14e6c8cddeeb8a490d7b54cd9016e4bb900b4 upstream.
When we transition cfs_bandwidth_used to false, any currently
throttled groups will incorrectly return false from cfs_rq_throttled.
While tg_set_cfs_bandwidth will unthrottle them eventually, currently
running code (including at least dequeue_task_fair and
distribute_cfs_runtime) will cause errors.
Fix this by turning off cfs_bandwidth_used only after unthrottling all
cfs_rqs.
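A sketch of the resulting ordering in tg_set_cfs_bandwidth(), using
the cfs_bandwidth_usage_inc()/cfs_bandwidth_usage_dec() toggles the
patch introduces:

  /* off->on must happen before any throttled state can be created */
  if (runtime_enabled && !runtime_was_enabled)
      cfs_bandwidth_usage_inc();

  for_each_possible_cpu(i) {
      struct cfs_rq *cfs_rq = tg->cfs_rq[i];
      /* ... install the new quota, then ... */
      if (cfs_rq->throttled)
          unthrottle_cfs_rq(cfs_rq);
  }

  /* on->off only once every cfs_rq has been unthrottled */
  if (runtime_was_enabled && !runtime_enabled)
      cfs_bandwidth_usage_dec();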
Tested: toggle bandwidth back and forth on a loaded cgroup. Caused
crashes in minutes without the patch, hasn't crashed with it.
Signed-off-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: pjt@google.com
Link: http://lkml.kernel.org/r/20131016181611.22647.80365.stgit@sword-of-the-dawn.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Conflicts:
arch/arm64/kernel/smp.c
Signed-off-by: Alex Shi <alex.shi@linaro.org>
|
|
commit 3c67f474558748b604e247d92b55dfe89654c81d upstream.
Inaccessible VMA should not be trapping NUMA hint faults. Skip them.
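The skip is a flags test in task_numa_work():

  /*
   * Skip inaccessible VMAs to avoid any confusion between
   * PROT_NONE and NUMA hinting ptes
   */
  if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
      continue;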
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Alex Thorlton <athorlton@sgi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
|
|
commit f9f9ffc237dd924f048204e8799da74f9ecf40cf upstream.
throttle_cfs_rq() doesn't check to make sure that period_timer is running,
and while update_curr/assign_cfs_runtime does, a concurrently running
period_timer on another cpu could cancel itself between this cpu's
update_curr and throttle_cfs_rq(). If there are no other cfs_rqs running
in the tg to restart the timer, this causes the cfs_rq to be stranded
forever.
Fix this by calling __start_cfs_bandwidth() in throttle if the timer is
inactive.
(Also add some sched_debug lines for cfs_bandwidth.)
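A sketch of the throttle-side check, under cfs_b->lock in
throttle_cfs_rq():

  raw_spin_lock(&cfs_b->lock);
  list_add_tail_rcu(&cfs_rq->throttled_list, &cfs_b->throttled_cfs_rq);
  /* make sure somebody refreshes quota, or we stay throttled forever */
  if (!cfs_b->timer_active)
      __start_cfs_bandwidth(cfs_b);
  raw_spin_unlock(&cfs_b->lock);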
Tested: make a run/sleep task in a cgroup, loop switching the cgroup
between 1ms/100ms quota and unlimited, checking for timer_active=0 and
throttled=1 as a failure. With the throttle_cfs_rq() change commented out
this fails, with the full patch it passes.
Signed-off-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: pjt@google.com
Link: http://lkml.kernel.org/r/20131016181632.22647.84174.stgit@sword-of-the-dawn.mtv.corp.google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Chris J Arges <chris.j.arges@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
|
|
A woken migrated task will call __synchronize_entity_decay(se) in
migrate_task_rq_fair() and then needs to set
se->avg.last_runnable_update -= (-se->avg.decay_count) << 20 before
update_entity_load_avg(), in order to avoid the sleep time being
accounted twice in se.avg.load_avg_contrib, once in
__synchronize_entity_decay() and again in update_entity_load_avg().
However, if the sleeping task is woken up on the same cpu, it misses
that last_runnable_update adjustment before update_entity_load_avg(se, 0, 1),
so the sleep time is used twice in both functions. Remove the double
sleep time accounting.
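A sketch of the fixed same-cpu wake-up path in
enqueue_entity_load_avg():

  /* same-cpu wake-up: decay_count > 0, migrate_task_rq_fair() did not run */
  se->avg.last_runnable_update += __synchronize_entity_decay(se) << 20;
  update_entity_load_avg(se, 0, 1);   /* no longer re-decays the sleep time */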
Paul also contributed some code comments in this commit.
Signed-off-by: Alex Shi <alex.shi@intel.com>
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1371694737-29336-5-git-send-email-alex.shi@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
hmp_variable_scale_convert was used without guards in
__update_entity_runnable_avg. Guard it.
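A sketch of the guard, assuming the conversion sits at the top of
__update_entity_runnable_avg():

  #ifdef CONFIG_HMP_VARIABLE_SCALE
      delta = hmp_variable_scale_convert(delta);
  #endif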
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
|
|
hmp_variable_scale_convert was used without guards in
__update_entity_runnable_avg. Guard it.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
|
|
|
|
In order to allow userspace to restrict known low-load tasks to
little CPUs, we must export this knowledge from the kernel or
expect userspace to make its own attempts at figuring it out.
Since we now have a userspace requirement for an HMP implementation
to always have at least some sysfs files, change the integration
so that it only depends upon CONFIG_SCHED_HMP rather than
CONFIG_HMP_VARIABLE_SCALE. Fix Kconfig text to match.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
When migrating a runnable task, we use the CPU stopper on
the source CPU to ensure that the task to be moved is not
currently running. Before this patch, all forced migrations
(up, offload, idle pull) use the stopper for every migration.
Using the CPU stopper is mandatory only when a task is currently
running on a CPU. Otherwise tasks can be moved by locking the
source and destination run queues.
This patch checks whether the task to be moved is currently
running. If not, the task is moved directly without using the
stopper thread.
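A sketch of the lock-based path for a non-running task (3.10
helpers; the HMP bookkeeping is elided):

  double_rq_lock(src_rq, dst_rq);
  if (!task_running(src_rq, p)) {
      /* p is not current on the source CPU: move it directly */
      deactivate_task(src_rq, p, 0);
      set_task_cpu(p, dst_cpu);
      activate_task(dst_rq, p, 0);
      check_preempt_curr(dst_rq, p, 0);
  }   /* else fall back to the stopper thread as before */
  double_rq_unlock(src_rq, dst_rq);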
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
|
|
If we wake up a task on a little CPU, fill CPUs rather than
spread. Adds two new files to /sys/kernel/hmp to control packing
behaviour.
packing_enable: task packing enabled (1) or disabled (0)
packing_limit: runqueues will be filled up to this load ratio.
This functionality is disabled by default on TC2 as it lacks per-cpu
power gating, so packing small tasks there doesn't make sense.
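A sketch of the wake-up decision the two knobs control (names other
than the knobs themselves are illustrative):

  if (hmp_packing_enabled) {
      unsigned long ratio = cpu_load_ratio(cpu) + task_load_ratio(p);

      /* fill this little CPU up to packing_limit before spreading */
      if (ratio <= hmp_packing_limit)
          return cpu;
  }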
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Accessing the task_struct can be racy in certain conditions, so
we need to only acquire the data when needed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The previous API for hmp_up_migration reset the destination
CPU every time, regardless of whether a migration was desired. The
code using it assumed that the value would not be changed unless
a migration was required. In one rare circumstance, this could
have led to a task migrating to a little CPU at the wrong time.
Fixing that led to a slight logical tweak to make the surrounding
APIs operate a bit more obviously.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Robin Randhawa <robin.randhawa@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
1. Replace magic numbers in code for migration trace.
Trace points still emit a number as force=<n> field:
force=0 : wakeup migration
force=1 : forced migration
force=2 : offload migration
force=3 : idle pull migration
2. Add trace to expose offload decision-making.
Also add tracing of rq->nr_running so that you can
look back to see what state the runqueue was in at the time.
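For reference, the mapping as named constants (a sketch; the names
are illustrative):

  enum hmp_migrate_reason {
      HMP_MIGRATE_WAKEUP    = 0,
      HMP_MIGRATE_FORCE     = 1,
      HMP_MIGRATE_OFFLOAD   = 2,
      HMP_MIGRATE_IDLE_PULL = 3,
  };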
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
When the up-threshold is at 512 on TC2, behaviour looks OK since
the graphic-related tasks are very heavy due to lack of a GPU.
Increasing the up-threshold does not reduce power consumption.
When a GPU is present, graphic tasks are much less CPU-heavy and
so additional power may be saved by having a higher threshold.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <Liviu.Dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
This is the 3.10.14 stable release
|
|
commit 6c9a27f5da9609fca46cb2b183724531b48f71ad upstream.
There is a small race between copy_process() and cgroup_attach_task()
where child->se.parent,cfs_rq points to invalid (old) ones.
  parent doing fork()           | someone moving the parent to another cgroup
  ------------------------------+---------------------------------------------
  copy_process()
    + dup_task_struct()
        -> parent->se is copied to child->se.
           se.parent,cfs_rq of them point to old ones.

                                  cgroup_attach_task()
                                    + cgroup_task_migrate()
                                        -> parent->cgroup is updated.
                                    + cpu_cgroup_attach()
                                      + sched_move_task()
                                        + task_move_group_fair()
                                          +- set_task_rq()
                                              -> se.parent,cfs_rq of parent
                                                 are updated.

    + cgroup_fork()
        -> parent->cgroup is copied to child->cgroup. (*1)
    + sched_fork()
      + task_fork_fair()
          -> se.parent,cfs_rq of child are accessed
             while they point to old ones. (*2)
In the worst case, this bug can lead to a use-after-free and cause a
panic, because it is the new cgroup's refcount that is incremented at (*1),
so the old cgroup (and related data) can be freed before (*2).
In fact, a panic caused by this bug was originally caught in RHEL6.4.
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff81051e3e>] sched_slice+0x6e/0xa0
[...]
Call Trace:
[<ffffffff81051f25>] place_entity+0x75/0xa0
[<ffffffff81056a3a>] task_fork_fair+0xaa/0x160
[<ffffffff81063c0b>] sched_fork+0x6b/0x140
[<ffffffff8106c3c2>] copy_process+0x5b2/0x1450
[<ffffffff81063b49>] ? wake_up_new_task+0xd9/0x130
[<ffffffff8106d2f4>] do_fork+0x94/0x460
[<ffffffff81072a9e>] ? sys_wait4+0xae/0x100
[<ffffffff81009598>] sys_clone+0x28/0x30
[<ffffffff8100b393>] stub_clone+0x13/0x20
[<ffffffff8100b072>] ? system_call_fastpath+0x16/0x1b
Signed-off-by: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/039601ceae06$733d3130$59b79390$@mxp.nes.nec.co.jp
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Prevents fork-migration from adversely interacting with normal
migration (i.e. runqueues containing forked tasks being
selected as migration targets when there is a better
choice available)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Avoids accesses through cfs_rq going bad when the cpu_rq doesn't
have a cfs member.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
When an A15 goes idle, we should up-migrate anything which is
above the threshold and running on an A7.
Reuses the HMP force-migration spinlock, but adds its own new
cpu stopper client.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
rq->nr_running was better than cfs.nr_running, since it includes
all tasks actually on the CPU. However, it includes RT tasks which
we would rather ignore at this point.
Switching to cfs.h_nr_running includes all the CFS tasks but no
RT tasks.
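For reference, the counter being switched to:

  /* all CFS tasks queued below this rq, including task groups;
   * RT tasks live on rq->rt and are not counted */
  unsigned int nr = cpu_rq(cpu)->cfs.h_nr_running;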
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Experimentally, one of the best policies for HMP migration CPU
selection is to completely ignore part-loaded CPUs and only look
for idle ones. If there are no idle ones, we will choose the one
which was least-recently-disturbed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The original intent here was to track unweighted runqueue load
with less resolution so we could use the least-recently-disturbed
runqueue to choose between 'closely related' load levels.
However, after experimenting with the resolution it turns out
that the following algorithm is highly beneficial for mobile
workloads.
In hmp_domain_min_load:
* If any CPU's load is zero, the reported minimum load is zero
* If no CPU is idle, the domain is reported as 'fully loaded'
Additionally, the time since the last migration is used to
discriminate between idle CPUs.
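A sketch of the resulting policy (local names illustrative; the real
function also reports which idle CPU was least recently disturbed):

  /* in hmp_domain_min_load() */
  if (min_runnable_load == 0)
      return 0;     /* at least one CPU in the domain is idle */
  return 1023;      /* no idle CPU: report 'fully loaded' */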
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Track when migrations were performed to runqueues.
Use this to decide between runqueues as migration targets when run
queues in an hmp domain have equal load.
Intention is to spread migration load amongst CPUs more fairly.
When all CPUs in an hmp domain are fully loaded, the existing code
always selects the last CPU as a migration target - this is unfair
and little better than doing no selection.
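A sketch of the tie-break, with a hypothetical per-rq field name:

  /* stamp the target runqueue on every HMP migration */
  dst_rq->hmp_last_migration = sched_clock();

  /* when candidate runqueues tie on load, prefer the one disturbed
   * longest ago instead of always taking the last CPU scanned */
  if (load == best_load &&
      rq->hmp_last_migration < best_rq->hmp_last_migration)
      best_rq = rq;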
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The hmp_get_{lightest,heaviest}_task() functions need to use
__pick_first_entity() to get a pointer to a sched_entity on the rq.
The current task is not kept in the rb-tree while running, so its
rb-tree node pointers are no longer valid.
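A sketch of the correct access pattern:

  /* cfs_rq->curr is dequeued from the rb-tree while it runs, so its
   * rb node is stale; start from the leftmost queued entity instead */
  struct sched_entity *se = __pick_first_entity(cfs_rq);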
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
When we are looking for a task to migrate up, select the heaviest
one among the first 5 runnable tasks on the runqueue.
Likewise, when looking for a task to offload, select the lightest
one among the first 5 runnable tasks on the runqueue.
Ensure the selected task is runnable in the target domain.
This change is necessary in order to implement idle pull in a
sensible manner, but here is used in up-migration and offload to
select the correct target task.
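A sketch of the bounded scan for the heaviest case (the lightest case
mirrors it with a min comparison; hmp_target_mask stands in for the
target domain's cpumask):

  struct sched_entity *se = __pick_first_entity(cfs_rq);
  struct sched_entity *heaviest = NULL;
  int scanned = 0;

  while (se && scanned++ < 5) {
      /* only consider tasks allowed to run in the target domain */
      if (entity_is_task(se) &&
          cpumask_intersects(&hmp_target_mask,
                             tsk_cpus_allowed(task_of(se))) &&
          (!heaviest ||
           se->avg.load_avg_ratio > heaviest->avg.load_avg_ratio))
              heaviest = se;
      se = __pick_next_entity(se);
  }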
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
This is the 3.10.9 stable release
|
|
commit bf0bd948d1682e3996adc093b43021ed391983e6 upstream.
We typically update a task_group's shares within the dequeue/enqueue
path. However, continuously running tasks sharing a CPU are not
subject to these updates as they are only put/picked. Unfortunately,
when we reverted f269ae046 (in 17bc14b7), we lost the augmenting
periodic update that was supposed to account for this; resulting in a
potential loss of fairness.
To fix this, re-introduce the explicit update in
update_cfs_rq_blocked_load() [called via entity_tick()].
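The re-introduced update is one line in entity_tick():

  update_cfs_rq_blocked_load(cfs_rq, 1);
  update_cfs_shares(cfs_rq);    /* periodic share update for running tasks */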
Reported-by: Max Hailperin <max@gustavus.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/n/tip-9545m3apw5d93ubyrotrj31y@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Prevents fork-migration from adversely interacting with normal
migration (i.e. runqueues containing forked tasks being
selected as migration targets when there is a better
choice available)
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Avoids accesses through cfs_rq going bad when the cpu_rq doesn't
have a cfs member.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
When an A15 goes idle, we should up-migrate anything which is
above the threshold and running on an A7.
Reuses the HMP force-migration spinlock, but adds its own new
cpu stopper client.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
rq->nr_running was better than cfs.nr_running, since it includes
all tasks actually on the CPU. However, it includes RT tasks which
we would rather ignore at this point.
Switching to cfs.h_nr_running includes all the CFS tasks but no
RT tasks.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Experimentally, one of the best policies for HMP migration CPU
selection is to completely ignore part-loaded CPUs and only look
for idle ones. If there are no idle ones, we will choose the one
which was least-recently-disturbed.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The original intent here was to track unweighted runqueue load
with less resolution so we could use the least-recently-disturbed
runqueue to choose between 'closely related' load levels.
However, after experimenting with the resolution it turns out
that the following algorithm is highly beneficial for mobile
workloads.
In hmp_domain_min_load:
* If any CPU's load is zero, the reported minimum load is zero
* If no CPU is idle, the domain is reported as 'fully loaded'
Additionally, the time since the last migration is used to
discriminate between idle CPUs.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
Track when migrations were performed to runqueues.
Use this to decide between runqueues as migration targets when run
queues in an hmp domain have equal load.
Intention is to spread migration load amongst CPUs more fairly.
When all CPUs in an hmp domain are fully loaded, the existing code
always selects the last CPU as a migration target - this is unfair
and little better than doing no selection.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|
|
The hmp_get_{lightest,heaviest}_task() functions need to use
__pick_first_entity() to get a pointer to a sched_entity on the rq.
The current task is not kept in the rb-tree while running, so its
rb-tree node pointers are no longer valid.
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Signed-off-by: Liviu Dudau <liviu.dudau@arm.com>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
|