path: root/arch/powerpc/platforms/cell/spufs/sched.c
2007-08-03  [POWERPC] spufs: Fix affinity after introduction of node_allowed() calls  (Andre Detsch)
This patch fixes affinity reference point placement, which was not being done in some situations after the introduction of the node_allowed() calls. The previously used parameter, 'ctx', is just the iterator of the previous list_for_each_entry_reverse loop, and its value might be invalid at the end of the loop. Also, the right context to consult when defining the reference ctx location _is_ the reference ctx. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-26  [POWERPC] spufs: Fix incorrect initialization of cbe_spu_info.spus  (Masato Noguchi)
We currently initialize cbe_spu_info[].spus in both init_spu_base and spu_sched_init. The initialization in spu_sched_init clears the SPU list, so we end up with no physical SPUs. Because of this, the spu_run syscall will block forever. This change removes the unnecessary initialization in spu_sched_init. Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-20  [CELL] spufs: rework list management and associated locking  (Christoph Hellwig)
This sorts out the various lists and related locks in the spu code. In detail:

- the per-node free_spus and active_list are gone. Instead, struct spu gained an alloc_state member telling whether the spu is free or not
- the per-node spus array is now locked by a per-node mutex, which takes over from the global spu_lock and the per-node active_mutex
- the spu_alloc* and spu_free functions are gone, as the state change is now done inline in the spufs code. This allows some more sharing of code for the affinity vs normal case and more efficient locking
- some small refactoring in the affinity code for this locking scheme

Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
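For illustration, a minimal sketch of the scheme this describes. The SPU_FREE/SPU_USED values and the spu_find_free() helper are assumptions for the sake of the example (the commit says the state change is done inline in spufs), not the actual sched.c code:

    #include <linux/list.h>
    #include <linux/mutex.h>

    enum spu_alloc_state {
        SPU_FREE,   /* may be picked up by the scheduler */
        SPU_USED,   /* currently owned by a context */
    };

    struct spu {
        struct list_head cbe_list;          /* on the per-node spus list */
        enum spu_alloc_state alloc_state;   /* replaces free_spus/active_list */
        /* ... */
    };

    struct cbe_spu_info {
        struct mutex list_mutex;  /* takes over from spu_lock + active_mutex */
        struct list_head spus;    /* every spu of this node, free or busy */
    };

    static struct spu *spu_find_free(struct cbe_spu_info *info)
    {
        struct spu *spu;

        mutex_lock(&info->list_mutex);
        list_for_each_entry(spu, &info->spus, cbe_list) {
            if (spu->alloc_state == SPU_FREE) {
                spu->alloc_state = SPU_USED;
                mutex_unlock(&info->list_mutex);
                return spu;
            }
        }
        mutex_unlock(&info->list_mutex);
        return NULL;
    }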
2007-07-20  [CELL] oprofile: add support to OProfile for profiling CELL BE SPUs  (Bob Nelson)
From: Maynard Johnson <mpjohn@us.ibm.com> This patch updates the existing arch/powerpc/oprofile/op_model_cell.c to add in the SPU profiling capabilities. In addition, a 'cell' subdirectory was added to arch/powerpc/oprofile to hold Cell-specific SPU profiling code. Exports spu_set_profile_private_kref and spu_get_profile_private_kref, which are used by OProfile to store private profile information in spufs data structures. Also incorporated several fixes from other patches (rrn):

- check the pointer returned from kzalloc;
- eliminated an unnecessary cast;
- better error handling and cleanup in the related area;
- a 64-bit unsigned long parameter was being demoted to a 32-bit unsigned int and eventually promoted back to unsigned long.

Signed-off-by: Carl Love <carll@us.ibm.com> Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com> Signed-off-by: Bob Nelson <rrnelson@us.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20  [CELL] oprofile: enable SPU switch notification to detect currently active SPU tasks  (Bob Nelson)
From: Maynard Johnson <mpjohn@us.ibm.com> This patch adds to the capability of spu_switch_event_register so that the caller is also notified of currently active SPU tasks. Exports spu_switch_event_register and spu_switch_event_unregister so that OProfile can get access to the notifications provided. Signed-off-by: Maynard Johnson <mpjohn@us.ibm.com> Signed-off-by: Carl Love <carll@us.ibm.com> Signed-off-by: Bob Nelson <rrnelson@us.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org>
2007-07-20  [CELL] spufs: integration of SPE affinity with the scheduler  (Arnd Bergmann)
This patch makes the scheduler honor affinity information for each context being scheduled. If the context has no affinity information, behaviour is unchanged. If there is affinity information, the context is scheduled to run on the exact spu recommended by the affinity placement algorithm. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20  [CELL] cell: add placement computation for scheduling of affinity contexts  (Arnd Bergmann)
This patch provides the spu affinity placement logic for the spufs scheduler. Each time a gang is going to be scheduled, the placement of a reference context is defined. The placement of all other contexts with affinity from the gang is defined based on this reference context location and on a precomputed displacement offset. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20  [CELL] cell: add per BE structure with info about its SPUs  (Arnd Bergmann)
Addition of a spufs-global "cbe_info" array. Each entry contains information about one Cell/B.E. node, namely:

- list of spus (both free and busy spus are in this list);
- list of free spus (replacing the static spu_list from spu_base.c);
- number of spus;
- number of reserved (non-schedulable) spus.

The SPE affinity implementation actually requires only access to one spu per BE node (since it implements its own pointer to walk through the other spus of the ring) and the number of schedulable spus (n_spus - non_sched_spus). However, having this more general structure can be useful for other functionality, concentrating per-cbe statistics / data. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20  [CELL] spufs: use find_first_bit() instead of sched_find_first_bit()  (Masato Noguchi)
spu_sched->bitmap has MAX_PRIO (=140) width in bits. However, since ff80a77f20f811c0cc5b251d0f657cbc6f788385, sched_find_first_bit() only supports 100-bit bitmaps. Thus, spu_sched->bitmap should be searched with the generic find_first_bit(). Signed-off-by: Masato Noguchi <Masato.Noguchi@jp.sony.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
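For illustration, a minimal sketch of the difference; the spu_prio_array layout here is an assumption based on the other entries in this log:

    #include <linux/bitops.h>
    #include <linux/sched.h>    /* MAX_PRIO (140) */

    struct spu_prio_array {
        unsigned long bitmap[BITS_TO_LONGS(MAX_PRIO)];
        /* ... one runqueue list per priority level ... */
    };

    static int spu_highest_prio(struct spu_prio_array *p)
    {
        /*
         * sched_find_first_bit() assumes a bitmap of at most 100
         * bits; the full MAX_PRIO-wide bitmap needs the generic
         * helper, which takes an explicit size:
         */
        return find_first_bit(p->bitmap, MAX_PRIO);
    }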
2007-07-20  [CELL] spufs: add spu stats in sysfs and ctx stat file in spufs  (Andre Detsch)
This patch exports per-context statistics in spufs, as well as spu statistics in sysfs. It was formed by merging:

- "spufs: add spu stats in sysfs" From: Christoph Hellwig
- "spufs: add stat file to spufs" From: Christoph Hellwig
- "spufs: fix libassist accounting" From: Jeremy Kerr
- "spusched: fix spu utilization statistics" From: Luke Browning

and some adjustments by myself, after suggestions on cbe-oss-dev. Having separate patches was making the review process harder than it should be, as we ended up integrating spu and ctx statistics accounting much more than in the first implementation. Signed-off-by: Andre Detsch <adetsch@br.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20  [CELL] spufs: Remove spurious WARN_ON for spu_deactivate for NOSCHED contexts  (Jeremy Kerr)
In 6cbf93960e64f313f6e247cbca7afaa50e3ee2c we added a WARN_ON for calling spu_deactivate on contexts created with the SPU_CREATE_NOSCHED flag. However, all NOSCHED contexts will need to be deactivated when the context is destroyed, so this gives a spurious warning when any NOSCHED context is closed. This change removes the WARN_ON. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-20  [CELL] spufs: remove section mismatch warning  (Sebastian Siewior)
The warning

    WARNING: arch/powerpc/platforms/cell/spufs/spufs.o(.init.text+0x158): Section mismatch: reference to .exit.text:.spu_sched_exit (between '.init_module' and '.spu_sched_init')

was introduced by c99c1994a2bb9493b4ac372b2b6ee2606d291171. This patch removes the warning. Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-07-03  [POWERPC] spufs: Add spu stats in sysfs  (Christoph Hellwig)
Export spu statistics in sysfs. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: Fix runqueue corruption  (Christoph Hellwig)
spu_activate can be called from multiple threads at the same time on behalf of the same spu context. We need to make sure to only add it once to avoid runqueue corruption. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
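A minimal, self-contained sketch of the kind of once-only enqueue this fix implies; spu_add_to_rq_once() and its signature are assumptions for illustration, not the actual sched.c code:

    #include <linux/list.h>

    struct spu_context {
        struct list_head rq;    /* runqueue link; list_del_init on dequeue */
        /* ... */
    };

    /* called with the runqueue lock held */
    static void spu_add_to_rq_once(struct spu_context *ctx,
                                   struct list_head *runq)
    {
        /*
         * spu_activate may run concurrently for the same context;
         * only the first caller may enqueue it, or the runqueue
         * list is corrupted. Dequeueing must use list_del_init()
         * so that list_empty() stays meaningful here.
         */
        if (list_empty(&ctx->rq))
            list_add_tail(&ctx->rq, runq);
    }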
2007-07-03  [POWERPC] spusched: Disable tick when not needed  (Christoph Hellwig)
Only enable the scheduler tick if we have any context waiting to be scheduled. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spufs: Add stat file to spufs  (Christoph Hellwig)
Export per-context statistics in spufs. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spufs: Implement /proc/spu_loadavg  (Christoph Hellwig)
Provide load average information for spu contexts. The format is identical to /proc/loadavg, which is also where a lot of the code and concepts are borrowed from. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
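A minimal sketch of the fixed-point averaging that /proc/loadavg is built on, which per the text above is what spu_loadavg borrows; the spu_avenrun/spu_calc_load names are assumptions for illustration:

    #include <linux/sched.h>  /* FIXED_1, EXP_1, EXP_5, EXP_15, CALC_LOAD */

    static unsigned long spu_avenrun[3];  /* 1, 5 and 15 min, fixed-point */

    /* fed periodically with the number of runnable + running contexts */
    static void spu_calc_load(unsigned long active)
    {
        active *= FIXED_1;  /* scale the count to fixed-point */
        CALC_LOAD(spu_avenrun[0], EXP_1, active);
        CALC_LOAD(spu_avenrun[1], EXP_5, active);
        CALC_LOAD(spu_avenrun[2], EXP_15, active);
    }

The proc file then splits each entry into integer and fractional parts, in the same way the LOAD_INT()/LOAD_FRAC() helpers used for /proc/loadavg do.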
2007-07-03  [POWERPC] spufs: Add tid file  (Christoph Hellwig)
The new tid file contains the ID of the thread currently running the context, if any. This is used so that the new spu-top and spu-ps tools can find the thread in /proc. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spufs: Trivial whitespace fixes  (Jeremy Kerr)
Remove redundant whitespace in arch/powerpc/platforms/cell/spufs/ Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: No preemption for nosched contexts  (Christoph Hellwig)
And last but not least we need to make sure the scheduler tick never preempts a nosched context. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: Catch nosched contexts in spu_deactivate  (Christoph Hellwig)
spu_deactivate should never be called for nosched contexts. Put in a check so we can print a stacktrace and exit early in case it happens erroneously. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: fix cpu/node binding  (Christoph Hellwig)
Add a cpus_allowed field to struct spu_context so that we always use the cpu mask of the owning thread instead of the mask of whichever thread happens to call into the scheduler. Also use this information in grab_runnable_context to avoid spurious wakeups. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
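A minimal sketch of the node check this enables; node_allowed() is named in the 2007-08-03 entry at the top of this log, but its body here is an assumption using era-appropriate cpumask helpers:

    #include <linux/cpumask.h>
    #include <linux/topology.h>     /* node_to_cpumask() */

    struct spu_context {
        cpumask_t cpus_allowed;     /* copied from the owning thread */
        /* ... */
    };

    static int node_allowed(struct spu_context *ctx, int node)
    {
        /*
         * Use the owning thread's cpu mask rather than current's:
         * current may be a different thread that merely called
         * into the scheduler on the context's behalf.
         */
        cpumask_t mask = node_to_cpumask(node);

        return cpus_intersects(mask, ctx->cpus_allowed);
    }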
2007-07-03  [POWERPC] spusched: Update scheduling parameters on every spu_run  (Christoph Hellwig)
Update scheduling information on every spu_run to allow for setting threads to realtime priority just before running them. This requires some slightly ugly code in spufs_run_spu, because we can just update the information unlocked if the spu is not runnable, but we need to acquire the active_mutex when it is runnable to protect against find_victim. This locking scheme requires opencoding spu_acquire_runnable in spufs_run_spu, which actually is a nice cleanup all by itself. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: Print out scheduling tunables with DEBUG  (Jeremy Kerr)
Print out a few scheduler tuning parameters when we've compiled with DEBUG defined. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: Fix timeslice calculations  (Jeremy Kerr)
The current timeslice code mixes 'jiffies' up with 'spesched ticks'. This change correctly defines the number of time slices each SPE context is given, and clarifies the comment. This brings the default timeslice for SPE contexts into a reasonable range. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-07-03  [POWERPC] spusched: Dynamic timeslicing for SCHED_OTHER  (Christoph Hellwig)
Enable preemptive scheduling for non-RT contexts. We use the same algorithms as the CPU scheduler to calculate the time slice length, and for now we also use the same timeslice length as the CPU scheduler. This might not be enough for good performance and can be changed after some benchmarking. Note that currently we do not boost the priority of contexts waiting on the runqueue for a long time, so contexts with a higher nice value could starve lower-priority ones. This could easily be fixed once the rework of the spu lists that Luke and I discussed is done. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
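A sketch of the kind of nice-value scaling this describes, modelled on the O(1) CPU scheduler's task_timeslice(); the constants, field names and the 4x boost below nice 0 are illustrative assumptions, not the exact sched.c values:

    #include <linux/kernel.h>   /* max() */
    #include <linux/sched.h>    /* MAX_PRIO, MAX_RT_PRIO */

    #define NORMAL_PRIO         (MAX_RT_PRIO + 20)      /* nice 0 */
    #define MAX_USER_PRIO       (MAX_PRIO - MAX_RT_PRIO)
    #define DEF_SPU_TIMESLICE   10  /* in spu scheduler ticks */
    #define MIN_SPU_TIMESLICE   1

    struct spu_context { int prio; unsigned int time_slice; /* ... */ };

    /* scale the default slice linearly by priority, like the cpu sched */
    #define SCALE_PRIO(x, prio) \
        max((x) * (MAX_PRIO - (prio)) / (MAX_USER_PRIO / 2), \
            MIN_SPU_TIMESLICE)

    static void spu_set_timeslice(struct spu_context *ctx)
    {
        if (ctx->prio < NORMAL_PRIO)  /* negative nice: up to 4x longer */
            ctx->time_slice = SCALE_PRIO(DEF_SPU_TIMESLICE * 4, ctx->prio);
        else
            ctx->time_slice = SCALE_PRIO(DEF_SPU_TIMESLICE, ctx->prio);
    }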
2007-07-03  [POWERPC] spusched: Switch from workqueues to kthread + timer tick  (Christoph Hellwig)
Get rid of the scheduler workqueues, which complicated things a lot, in favour of a dedicated spu scheduler thread that gets woken by a traditional scheduler tick. By default this scheduler tick runs at HZ / 10, i.e. one spu scheduler tick for every 10 cpu ticks. Currently the tick is not disabled when we have fewer contexts than available spus, but I will implement this later. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
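A minimal sketch of the mechanism described, assuming a tick period of 10 jiffies and the timer API of that era; setup/teardown and the per-context tick work are elided:

    #include <linux/kthread.h>
    #include <linux/timer.h>
    #include <linux/jiffies.h>
    #include <linux/sched.h>

    #define SPUSCHED_TICK   10  /* one spu tick every 10 cpu ticks */

    static struct task_struct *spusched_task;
    static struct timer_list spusched_timer;

    static void spusched_wake(unsigned long data)
    {
        /* rearm the tick, then kick the scheduler thread */
        mod_timer(&spusched_timer, jiffies + SPUSCHED_TICK);
        wake_up_process(spusched_task);
    }

    static int spusched_thread(void *unused)
    {
        while (!kthread_should_stop()) {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule();     /* sleep until the timer fires */
            /* ... run the scheduler tick over all active contexts ... */
        }
        return 0;
    }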
2007-06-07  [POWERPC] spufs: Don't yield nosched context  (Christoph Hellwig)
Nosched contexts should never be scheduled out, thus we must not ever deactivate them in spu_yield. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-06-07  [POWERPC] spufs scheduler: Fix wakeup races  (Christoph Hellwig)
Fix the race between checking for contexts on the runqueue and actually waking them in spu_deactivate and spu_yield. The guts of spu_reschedule are split into a new helper called grab_runnable_context, which checks whether there is a runnable thread below a specified priority and, if so, removes it from the runqueue and uses it. This function is used by the new __spu_deactivate helper shared by preemption and spu_yield to grab a new context before deactivating the old one. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
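A minimal sketch of the helper as described; the runqueue layout is a stand-in following the "runqueue simplification" entry further down this log, and the bitmap bookkeeping here is an assumption:

    #include <linux/bitops.h>
    #include <linux/list.h>
    #include <linux/sched.h>    /* MAX_PRIO */
    #include <linux/spinlock.h>

    struct spu_context { int prio; struct list_head rq; /* ... */ };

    static struct {
        unsigned long bitmap[BITS_TO_LONGS(MAX_PRIO)];
        struct list_head runq[MAX_PRIO];
        spinlock_t runq_lock;
    } spu_prio;

    static struct spu_context *grab_runnable_context(int prio)
    {
        struct spu_context *ctx = NULL;
        int best;

        spin_lock(&spu_prio.runq_lock);
        /* only consider priorities better (lower) than the given one */
        best = find_first_bit(spu_prio.bitmap, prio);
        if (best < prio) {
            /*
             * Check and dequeue under a single lock, so the same
             * context can never be seen runnable by one caller and
             * woken again by another.
             */
            ctx = list_entry(spu_prio.runq[best].next,
                             struct spu_context, rq);
            list_del_init(&ctx->rq);
            if (list_empty(&spu_prio.runq[best]))
                clear_bit(best, spu_prio.bitmap);
        }
        spin_unlock(&spu_prio.runq_lock);
        return ctx;
    }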
2007-05-08  header cleaning: don't include smp_lock.h when not used  (Randy Dunlap)
Remove includes of <linux/smp_lock.h> where it is not used/needed. Suggested by Al Viro. Builds cleanly on x86_64, i386, alpha, ia64, powerpc, sparc, sparc64, and arm (all 59 defconfigs). Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2007-04-23  [POWERPC] spu sched: make addition to stop_wq and runqueue atomic vs wakeup  (Luke Browning)
Addition to stop_wq needs to happen before adding to the runqueue, and under the same lock, so that we don't have a race window for a lost wakeup in the spu scheduler. Signed-off-by: Luke Browning <lukebrowning@us.ibm.com> Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23  [POWERPC] spufs: remove woken threads from the runqueue early  (Christoph Hellwig)
A single context should only be woken once, and we should not have more wakeups for a given priority than the number of contexts on that runqueue position. Also add some asserts to trap future problems in this area more easily. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23  [POWERPC] spufs: add memory barriers after set_bit  (Arnd Bergmann)
set_bit does not guarantee ordering on powerpc, so using it for communication between threads requires explicit mb() calls. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
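A minimal sketch of the pattern in question; the flag bit and waitqueue here are illustrative, not the actual sched.c names:

    #include <linux/bitops.h>
    #include <linux/wait.h>

    static unsigned long pending;
    static DECLARE_WAIT_QUEUE_HEAD(wq);

    static void notify_waiter(void)
    {
        set_bit(0, &pending);
        /*
         * On powerpc, set_bit() is atomic but is not a memory
         * barrier: without the explicit mb(), the waiting thread
         * can test the bit, see it clear, and go to sleep even
         * though we have already "woken" it.
         */
        mb();
        wake_up(&wq);
    }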
2007-04-23  [POWERPC] spu sched: ensure preempted threads are put back on the runqueue, part2  (Christoph Hellwig)
To not lose a spu thread we need to make sure it always gets put back on the runqueue, in find_victim as well as in the scheduler tick, as done in the previous patch. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23  [POWERPC] spu sched: ensure preempted threads are put back on the runqueue  (Christoph Hellwig)
To not lose a spu thread we need to make sure it always gets put back on the runqueue. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-23  [POWERPC] spufs: use cancel_rearming_delayed_workqueue when stopping spu contexts  (Christoph Hellwig)
The scheduler workqueue may rearm itself and deadlock when we try to stop it. Put a flag in place to skip the work if we're tearing down the context. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-04-13  [POWERPC] spufs: don't yield CPU in spu_yield  (Christoph Hellwig)
There is no reason to yield the CPU in spu_yield - if the backing thread reenters spu_run it gets added to the end of the runqueue for its priority. So the yield is just a slowdown for the case where we have higher-priority contexts waiting. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
2007-03-10  [POWERPC] Fix spu SLB invalidations  (Benjamin Herrenschmidt)
The SPU code doesn't properly invalidate the SPUs' SLBs when necessary, for example when changing a segment size from the hugetlbfs code. In addition, it saves and restores the SLB content on context switches, which makes it harder to properly handle those invalidations. This patch removes the saving & restoring for now; something more efficient might be found later on. It also adds a spu_flush_all_slbs(mm) that can be used by the core mm code to flush the SLBs of all SPEs that are running a given mm at the time of the flush. In order to do that, it adds a spinlock to the list of all SPEs and moves some bits & pieces from spufs to spu_base.c. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2007-03-10  [POWERPC] avoid SPU_ACTIVATE_NOWAKE optimization  (Christoph Hellwig)
This optimization was added recently but is still buggy, so back it out for now. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spu sched: static timeslicing for SCHED_RR contexts  (Christoph Hellwig)
For SCHED_RR tasks we can do some really trivial timeslicing. Basically we fire up a timer for every scheduler tick that searches for a higher- or same-priority thread on the runqueue and, if there is one, context switches to it. Because we can't lock spus from timer context, we actually run this from a delayed workqueue instead of a timer. A nice optimization would be to skip the actual priority bitmap search when there are fewer contexts than physical spus available. To implement this I need a so far unpublished patch from Andre, and it will be added after we have that patch in. Note that right now we only do the time slicing for SCHED_RR tasks. The code would work for SCHED_OTHER tasks as well, but their prio value is derived from the one the PPU thread has at the time of spu_run, and using this for spu scheduling decisions would make the code very unfair. SCHED_OTHER support will be enabled once the spu scheduler knows how to calculate cpu_context.prio (very soon). Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spu sched: use DECLARE_BITMAP  (Christoph Hellwig)
Use DECLARE_BITMAP in the spu scheduler instead of reimplementing it. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spu sched: forced preemption at execution  (Christoph Hellwig)
If we start a spu context with realtime priority we want it to run immediately and not wait until some other lower-priority thread has finished. Try to find a suitable victim and use its spu in this case. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spu sched: update some comments  (Christoph Hellwig)
Give spu_yield a kerneldoc comment and remove the old comment documenting spu_activate, spu_deactivate and spu_yield, as all of them now have descriptive kerneldoc comments of their own. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spu sched: simplify spu_remove_from_active_list  (Christoph Hellwig)
When we call spu_remove_from_active_list, the spu is always guaranteed to be on the active list and in runnable state, so we can simply do a list_del to remove it and unconditionally take the was_active codepath. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spufs: optimize spu_run  (Christoph Hellwig)
There is no need to directly wake up contexts in spu_activate when called from spu_run, so add a flag to suppress this wakeup. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spufs: runqueue simplification  (Christoph Hellwig)
This is the biggest patch in this series, and it reworks the guts of the spu scheduler runqueue mechanism:

- instead of embedding a waitqueue in the runqueue there is now a simple doubly-linked list, the actual wakeups happen by reusing the stop_wq in the spu context (maybe we should rename it one day)
- spu_free and spu_prio_wakeup are merged into a single spu_reschedule function
- various functionality is split out into small helpers, and kerneldoc comments are added in various places to document what's going on
- spu_activate is rewritten into a tight loop by removing tests for various impossible conditions and using the infrastructure in this patch

Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
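A minimal sketch of the runqueue layout after this rework, with field names assumed from the text above:

    #include <linux/bitops.h>
    #include <linux/list.h>
    #include <linux/sched.h>    /* MAX_PRIO */
    #include <linux/spinlock.h>
    #include <linux/wait.h>

    struct spu_context {
        int prio;
        struct list_head rq;        /* link in the runqueue list */
        wait_queue_head_t stop_wq;  /* reused for runqueue wakeups */
        /* ... */
    };

    struct spu_prio_array {
        unsigned long bitmap[BITS_TO_LONGS(MAX_PRIO)]; /* non-empty levels */
        struct list_head runq[MAX_PRIO];  /* plain lists, no waitqueues */
        spinlock_t runq_lock;
    };

    static struct spu_prio_array spu_prio;

    static void spu_add_to_rq(struct spu_context *ctx)
    {
        spin_lock(&spu_prio.runq_lock);
        list_add_tail(&ctx->rq, &spu_prio.runq[ctx->prio]);
        set_bit(ctx->prio, spu_prio.bitmap);
        spin_unlock(&spu_prio.runq_lock);
    }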
2007-02-13  [POWERPC] spufs: move prio to spu_context  (Christoph Hellwig)
It doesn't make any sense to have a priority field in the physical spu structure. Move it into the spu context instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spufs: simplify state_mutex  (Christoph Hellwig)
The r/w semaphore to lock the spus was overkill and can be replaced with a mutex to make it faster, simpler and easier to debug. It also helps to allow making most spufs interruptible in future patches. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spufs: sched.c cleanups  (Christoph Hellwig)
Various cleanups to sched.c that don't change the global control flow:

- add kerneldoc comments to various functions
- add spu_ prefixes to various functions
- add/remove the context to/from the runqueue in bind/unbind_context, as it's part of the logical operation
- add a call to put_active_spu to spu_unbind_context, as it's logically part of the unbind operation

Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>
2007-02-13  [POWERPC] spufs: bind_context sets SPU_STATE_RUNNABLE  (Christoph Hellwig)
Only bind_context/unbind_context change the spu context state. Thus we can move all assignments of SPU_STATE_RUNNABLE into bind_context, which parallels the unbind side as well. Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com>