path: root/arch/sh/mm
2010-03-23  sh: Fix build after dynamic PMB rework  (Matt Fleming)

set_pmb_entry() is now only used by a function that is wrapped in #ifdef CONFIG_PM, so wrap set_pmb_entry() in CONFIG_PM too.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
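A minimal sketch of the resulting shape (function body elided; the scope of the conditional is taken from the commit above, not verified against the tree):

    #ifdef CONFIG_PM
    static void set_pmb_entry(struct pmb_entry *pmbe)
    {
            /* program the PMB address/data array slots for this entry */
    }
    #endif /* CONFIG_PM */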
2010-03-23  sh: Replace unsafe manipulation of MMUCR  (Matt Fleming)

Setting the TI bit in MMUCR causes all the TLB bits in MMUCR to be cleared. Unfortunately, the TLB wired bits are also cleared when setting the TI bit, causing any wired TLB entries to become unwired. Use local_flush_tlb_all(), which implements TLB flushing in a safer manner by using the memory-mapped TLB registers. As each CPU has its own PMB, the modifications in pmb_init() only affect the local CPU, so only flush the local CPU's TLB.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
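Roughly, the difference (a sketch; MMUCR and MMUCR_TI are the SH-4 MMU control register and its TLB-invalidate bit):

    /* Unsafe: writing MMUCR.TI also zeroes the wired-entry (URB) state. */
    __raw_writel(__raw_readl(MMUCR) | MMUCR_TI, MMUCR);

    /* Safer: invalidate through the memory-mapped TLB arrays instead,
     * leaving the wired bits in MMUCR untouched. */
    local_flush_tlb_all();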
2010-03-23  sh: Flush ITLB too in PTEAEX's flush_tlb_page()  (Matt Fleming)

flush_tlb_page() can be used to flush TLB entries that map executable pages. Therefore, we need to ensure that the ITLB is also flushed in local_flush_tlb_page().

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-10  sh: Export uncached helper symbols.  (Paul Mundt)

oprofile and others need to get at these, so provide symbol exports.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
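For illustration, the usual export pattern (the specific symbol names here are an assumption based on the uncached mapping helpers introduced earlier in this series):

    EXPORT_SYMBOL(uncached_start);
    EXPORT_SYMBOL(uncached_end);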
2010-03-08  sh: Fix up uncached offset for legacy 29-bit mode.  (Paul Mundt)

The uncached_start was being set up properly for 32-bit but managed to break 29-bit in the process; fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-05  sh: Move PMB debugfs entry initialization to later stage  (Pawel Moll)

... so the "sh_debugfs_root" is already available. Previously it wasn't, and as a result its path was "/sys/kernel/debug/pmb" instead of "/sys/kernel/debug/sh/pmb".

Signed-off-by: Pawel Moll <pawel.moll@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
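A sketch of the idea: register from an initcall level that runs after sh_debugfs_root has been created (the initcall level and fops name here are illustrative, not taken from the patch):

    static int __init pmb_debugfs_init(void)
    {
            debugfs_create_file("pmb", S_IFREG | S_IRUGO,
                                sh_debugfs_root, NULL, &pmb_debugfs_fops);
            return 0;
    }
    subsys_initcall(pmb_debugfs_init);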
2010-03-04  sh: fix up MMU reset with variable PMB mapping sizes.  (Paul Mundt)

Presently we run into issues with the MMU resetting the CPU when variable sized mappings are employed. This takes a slightly more aggressive approach to keeping the TLB and cache state sane before establishing the mappings in order to cut down on races observed on SMP configurations. At the same time, we bump the VMA range up to the 0xb000...0xc000 range, as there still seems to be some undocumented behaviour in setting up variable mappings in the 0xa000...0xb000 range, resulting in reset by the TLB.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03  sh: establish PMB mappings for NUMA nodes.  (Paul Mundt)

In the case of NUMA emulation when in-range PPNs are being used for secondary nodes, we need to make sure that the PMB has a mapping for it before setting up the pgdat. This prevents the MMU from resetting.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-03  sh: check for existing mappings for bolted PMB entries.  (Paul Mundt)

When entries are being bolted unconditionally, it's possible that the boot loader has established in-range mappings that we don't want to clobber. Perform some basic validation to ensure that the new mapping is out of range before allowing the entry setup to take place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  sh: fixed virt/phys mapping helpers for PMB.  (Paul Mundt)

This moves the pmb_remap_caller() mapping logic out into pmb_bolt_mapping(), which enables us to establish fixed mappings in places such as the NUMA code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-03-02  sh: make pmb iomapping configurable.  (Paul Mundt)

This plugs in an early_param for permitting transparent PMB-backed ioremapping to be enabled/disabled. For the time being, we use a default-disabled policy.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
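The mechanism looks roughly like this (the parameter name and the variable are assumptions for illustration, not taken from the patch):

    static bool pmb_iomapping_enabled;      /* default-disabled policy */

    static int __init pmb_iomapping_setup(char *str)
    {
            pmb_iomapping_enabled = true;
            return 0;
    }
    early_param("pmb_iomapping", pmb_iomapping_setup);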
2010-03-02  sh: reworked dynamic PMB mapping.  (Paul Mundt)

This implements a fairly significant overhaul of the dynamic PMB mapping code. The primary change here is that the PMB gets its own VMA that follows the uncached mapping and we attempt to be a bit more intelligent with dynamic sizing, multi-entry mapping, and so forth.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
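The multi-entry sizing can be pictured as greedily covering a request with the largest hardware page size that fits; a simplified sketch (the PMB supports 16MB/64MB/128MB/512MB pages, and this assumes phys and size are at least 16MB-aligned so the loop always terminates):

    static const unsigned long pmb_sizes[] = { SZ_512M, SZ_128M, SZ_64M, SZ_16M };

    while (size) {
            int i;

            /* Pick the largest page that fits and matches the alignment. */
            for (i = 0; i < ARRAY_SIZE(pmb_sizes); i++)
                    if (pmb_sizes[i] <= size && !(phys & (pmb_sizes[i] - 1)))
                            break;

            /* ... program one PMB entry of pmb_sizes[i] bytes here ... */
            phys += pmb_sizes[i];
            size -= pmb_sizes[i];
    }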
2010-03-02  Merge branches 'sh/dmaengine', 'sh/hw-breakpoints' and 'sh/trivial'  (Paul Mundt)
2010-03-01  Merge branch 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm  (Linus Torvalds)

* 'for-linus' of master.kernel.org:/home/rmk/linux-2.6-arm: (100 commits)
  ARM: Eliminate decompressor -Dstatic= PIC hack
  ARM: 5958/1: ARM: U300: fix inverted clk round rate
  ARM: 5956/1: misplaced parentheses
  ARM: 5955/1: ep93xx: move timer defines into core.c and document
  ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
  ARM: 5953/1: ep93xx: fix broken build of clock.c
  ARM: 5952/1: ARM: MM: Add ARM_L1_CACHE_SHIFT_6 for handle inside each ARCH Kconfig
  ARM: 5949/1: NUC900 add gpio virtual memory map
  ARM: 5948/1: Enable timer0 to time4 clock support for nuc910
  ARM: 5940/2: ARM: MMCI: remove custom DBG macro and printk
  ARM: make_coherent(): fix problems with highpte, part 2
  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself
  ARM: 5945/1: ep93xx: include correct irq.h in core.c
  ARM: 5933/1: amba-pl011: support hardware flow control
  ARM: 5930/1: Add PKMAP area description to memory.txt.
  ARM: 5929/1: Add checks to detect overlap of memory regions.
  ARM: 5928/1: Change type of VMALLOC_END to unsigned long.
  ARM: 5927/1: Make delimiters of DMA area globally visibly.
  ARM: 5926/1: Add "Virtual kernel memory..." printout.
  ARM: 5920/1: OMAP4: Enable L2 Cache
  ...

Fix up trivial conflict in arch/arm/mach-mx25/clock.c
2010-03-01  sh: No need to explicitly include <linux/rwlock.h>.  (Robert P. J. Day)

Since <linux/spinlock.h> already includes <linux/rwlock.h>, and the latter file will warn about not having included the former file anyway, there is no value in including rwlock.h explicitly.

Signed-off-by: Robert P. J. Day <rpjday@crashcourse.ca>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-23  sh: wire up SET/GET_UNALIGN_CTL.  (Paul Mundt)

This hooks up the SET/GET_UNALIGN_CTL knobs, cribbing the bulk of it from the PPC and ia64 implementations. The thread flags happen to be the logical inverse of what the global fault mode is set to, so this works out pretty cleanly. By default the global fault mode is used, with tasks now being able to override their own settings via prctl().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
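From userspace, the per-task override then works through the standard prctl(2) interface, e.g.:

    #include <sys/prctl.h>

    /* Deliver SIGBUS on unaligned access for this task instead of fixing up. */
    prctl(PR_SET_UNALIGN, PR_UNALIGN_SIGBUS);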
2010-02-23  sh: allow alignment fault mode to be configured at kernel boot.  (Paul Mundt)

Follow the ARM change, which is what our alignment helpers are based on in the first place.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-20  MM: Pass a PTE pointer to update_mmu_cache() rather than the PTE itself  (Russell King)

On VIVT ARM, when we have multiple shared mappings of the same file in the same MM, we need to ensure that we have coherency across all copies. We do this via make_coherent() by making the pages uncacheable. This used to work fine, until we allowed highmem with highpte - we now have a page table which is mapped as required, and is not available for modification via update_mmu_cache().

Ralf Baechle suggested getting rid of the PTE value passed to update_mmu_cache():

  On MIPS update_mmu_cache() calls __update_tlb() which walks pagetables
  to construct a pointer to the pte again. Passing a pte_t * is much more
  elegant. Maybe we might even replace the pte argument with the pte_t?

Ben Herrenschmidt would also like the pte pointer for PowerPC:

  Passing the ptep in there is exactly what I want. I want that -instead-
  of the PTE value, because I have issue on some ppc cases, for I$/D$
  coherency, where set_pte_at() may decide to mask out the _PAGE_EXEC.

So, pass in the mapped page table pointer into update_mmu_cache(), and remove the PTE value, updating all implementations and call sites to suit.

Includes a fix from Stephen Rothwell:

  sparc: fix fallout from update_mmu_cache API change

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
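The API change itself is just the final argument's type, as described above:

    /* before */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t pte);

    /* after */
    void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep);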
2010-02-18  sh: Merge legacy and dynamic PMB modes.  (Paul Mundt)

This implements a bit of rework for the PMB code, which permits us to kill off the legacy PMB mode completely. Rather than trusting the boot loader to do the right thing, we do a quick verification of the PMB contents to determine whether to have the kernel set up the initial mappings or whether it needs to mangle them later on instead.

If we're booting from legacy mappings, the kernel will now take control of them and make them match the kernel's initial mapping configuration. This is accomplished by breaking the initialization phase out into multiple steps: synchronization, merging, and resizing.

With the recent rework, the synchronization code establishes page links for compound mappings already, so we build on top of this for promoting mappings and reclaiming unused slots.

At the same time, the changes introduced for the uncached helpers also permit us to dynamically resize the uncached mapping without any particular headaches. The smallest page size is more than sufficient for mapping all of kernel text, and as we're careful not to jump to any far-off locations in the setup code the mapping can safely be resized regardless of whether we are executing from it or not.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-18  sh: Use uncached I/O helpers in PMB setup.  (Paul Mundt)

The PMB code is an example of something that spends an absurd amount of time running uncached when only a couple of operations really need to be. This switches over to the shiny new uncached helpers, permitting us to spend far more time running cached. Additionally, MMUCR twiddling is perfectly safe from cached space given that it's paired with a control register barrier, so fix that up, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
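The resulting pattern confines uncached execution to the few register writes that need it; schematically (mk_pmb_addr()/mk_pmb_data() per the existing PMB code, a sketch rather than the exact patch):

    jump_to_uncached();
    __raw_writel(vpn | PMB_V, mk_pmb_addr(entry));
    __raw_writel(ppn | flags | PMB_V, mk_pmb_data(entry));
    back_to_cached();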
2010-02-17  sh: PMB locking overhaul.  (Paul Mundt)

This implements some locking for the PMB code. A high level rwlock is added for dealing with rw accesses on the entry map while a per-entry data structure spinlock is added to deal with the PMB entry changing out from underneath us.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
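Schematically (field layout assumed for illustration):

    static DEFINE_RWLOCK(pmb_rwlock);       /* guards the entry map as a whole */

    struct pmb_entry {
            unsigned long vpn, ppn, flags;
            spinlock_t lock;                /* guards this entry's own state */
    };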
2010-02-17  sh: Fix up dynamically created write-through PMB mappings.  (Paul Mundt)

Write-through PMB mappings still require the cache bit to be set, even if they're to be flagged with a different cache policy and bufferability bit. To reduce some of the confusion surrounding the flag encoding we centralize the cache mask based on the system cache policy while we're at it.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: Build PMB entry links for existing contiguous multi-page mappings.  (Paul Mundt)

This plugs in entry sizing support for existing mappings and then builds on top of that for linking together entries that are mapping contiguous areas. This will ultimately permit us to coalesce mappings and promote head pages while reclaiming PMB slots for dynamic remapping.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: uncached mapping helpers.  (Paul Mundt)

This adds some helper routines for uncached mapping support. This simplifies some of the cases where we need to check the uncached mapping boundaries in addition to giving us a centralized location for building more complex manipulation on top of.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: PMB tidying.  (Paul Mundt)

Some overdue cleanup of the PMB code, killing off unused functionality and duplication sprinkled about the tree.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-17  sh: Fix up more 64-bit pgprot truncation on SH-X2 TLB.  (Paul Mundt)

Both the store queue API and the PMB remapping take unsigned long for their pgprot flags, which cuts off the extended protection bits. In the case of the PMB this isn't really a problem since the cache attribute bits that we care about are all in the lower 32-bits, but we do it just to be safe. The store queue remapping on the other hand depends on the extended prot bits for enabling userspace access to the mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16  sh: Merge the legacy PMB mapping and entry synchronization code.  (Paul Mundt)

This merges the code for iterating over the legacy PMB mappings and the code for synchronizing software state with the hardware mappings. There's really no reason to do the same iteration twice, and this also buys us the legacy entry logging facility for the dynamic PMB case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-02-16  sh: Prevent fixed slot PMB remapping from clobbering boot entries.  (Paul Mundt)

The PMB initialization code walks the entries and synchronizes the software PMB state with the hardware mappings, preserving the slot index. Unfortunately pmb_alloc() only tested the bit position in the entry map and failed to set it, resulting in subsequent remaps being dynamically assigned a slot that trampled an existing boot mapping, with general badness ensuing.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
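The shape of the bug and fix, sketched with the bitmap API (pmb_map as the allocation bitmap; whether the actual patch uses test_and_set_bit() or a separate set_bit() is an assumption):

    /* before: the slot is only tested, never claimed */
    if (test_bit(pos, pmb_map))
            return ERR_PTR(-ENOSPC);

    /* after: test and claim atomically, so slots synchronized from the
     * boot mappings can no longer be handed out again by pmb_alloc() */
    if (test_and_set_bit(pos, pmb_map))
            return ERR_PTR(-ENOSPC);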
2010-02-12  sh: Isolate uncached mapping support.  (Paul Mundt)

This splits out the uncached mapping support under its own config option, presently only used by 29-bit mode and 32-bit + PMB. This will make it possible to optionally add an uncached mapping on sh64 as well as booting without an uncached mapping for 32-bit.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-29  sh: Kill off deprecated fixed PCI memory window accessors.  (Paul Mundt)

This kills off the deprecated fixed memory range accessors for the cases of non-translatable ioremapping.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-26  sh: Mass ctrl_in/outX to __raw_read/writeX conversion.  (Paul Mundt)

The old ctrl in/out routines are non-portable and unsuitable for cross-platform use. While drivers/sh has already been sanitized, there is still quite a lot of code that is not. This converts the arch/sh/ bits over, which permits us to flag the routines as deprecated whilst still building with -Werror for the architecture code, and to ensure that future users are not added.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
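The conversion is mechanical, e.g.:

    /* before: SH-specific, non-portable accessors */
    unsigned long cr;
    cr = ctrl_inl(MMUCR);
    ctrl_outl(cr, MMUCR);

    /* after: the standard raw MMIO accessors */
    cr = __raw_readl(MMUCR);
    __raw_writel(cr, MMUCR);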
2010-01-21  sh: Kill off the special uncached section and fixmap.  (Paul Mundt)

Now that cached_to_uncached works as advertised in 32-bit mode and we're never going to be able to map < 16MB anyways, there's no need for the special uncached section. Kill it off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-21  sh: Track the uncached mapping size.  (Paul Mundt)

This provides a variable for tracking the uncached mapping size, and uses it for pretty printing the uncached lowmem range. Beyond this, we'll also build on top of it for figuring out where the remainder of P2 becomes usable when constructing unrelated mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: pretty print virtual memory map on boot.  (Paul Mundt)

This cribs the pretty printing from arch/x86/mm/init_32.c to dump the virtual memory layout on boot. This is primarily intended as a debugging aid, given that the newer CPUs have full control over their address space and as such have little to nothing in common with the legacy layout.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: Correct iounmap fixmap teardown.  (Paul Mundt)

iounmap_fixed() had a couple of bugs in it that caused it to effectively fail at life. The total number of pages to unmap factored in the mapping offset and aligned up to the next page boundary, which doesn't match the ioremap_fixed() behaviour. When ioremap_fixed() pegs a slot, the address in the mapping data already contains the offset displacement, and the size is recorded verbatim given that we're only interested in the total number of pages required. As such, we need to calculate the total number from the original size in the unmap path as well.

At the same time, there was also an off-by-1 problem in the fixmap index calculation which has also been corrected. Previously subsequent remaps of an identical fixmap index would trigger the pte_ERROR() in set_pte_phys():

    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).
    arch/sh/mm/init.c:77: bad pte 8053ffb0(0000781003fff506).

With this patch in place, the iounmap-driven fixmap teardown actually does what it's supposed to do.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-20  sh: Make 29/32-bit mode check helper generally available.  (Paul Mundt)

Presently __in_29bit_mode() is only defined for the PMB case, but it's also easily derived from the CONFIG_29BIT and CONFIG_32BIT && CONFIG_PMB=n cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19  sh64: Fix up PC casting in unaligned fixup notifier with 32bit ABI.  (Paul Mundt)

Presently the build bails with the following:

    CC      arch/sh/mm/alignment.o
    cc1: warnings being treated as errors
    arch/sh/mm/alignment.c: In function 'unaligned_fixups_notify':
    arch/sh/mm/alignment.c:69: warning: cast to pointer from integer of different size
    arch/sh/mm/alignment.c:74: warning: cast to pointer from integer of different size
    make[2]: *** [arch/sh/mm/alignment.o] Error 1

This is due to the fact that regs->pc is always 64-bit, while the pointer size depends on the ABI. Wrapping through instruction_pointer() takes care of the appropriate casting for both configurations.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
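The fix in miniature (printk usage illustrative):

    /* before: regs->pc is 64-bit, so this cast warns on the 32-bit ABI */
    printk("PC: %p\n", (void *)regs->pc);

    /* after: instruction_pointer() returns an unsigned long for either ABI */
    printk("PC: %p\n", (void *)instruction_pointer(regs));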
2010-01-19  sh: Kill off now bogus fixmap/page wiring documentation.  (Paul Mundt)

The plans for _PAGE_WIRED were detailed in a comment with the fixmap code, but as it's now all taken care of, we no longer have any reason for keeping it around, particularly since it's no longer accurate. Kill it off.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19  sh: Split out MMUCR.URB based entry wiring into shared helper.  (Paul Mundt)

Presently this is duplicated between tlb-sh4 and tlb-pteaex. Split the helpers out into a generic tlb-urb that can be used by any parts equipped with MMUCR.URB. At the same time, move the SH-5 code out-of-line, as we require single global state for DTLB entry wiring.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19  sh: Kill off duplicate address alignment in ioremap_fixed().  (Paul Mundt)

This is already taken care of in the top-level ioremap, and now that no one should be calling ioremap_fixed() directly we can simply throw the mapping displacement in as an additional argument.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-19  sh: Prevent 64-bit pgprot clobbering across ioremap implementations.  (Paul Mundt)

Presently 'flags' gets passed around a lot between the various ioremap helpers and implementations, and is only 32 bits. In the X2TLB case we use 64-bit pgprots, which presently results in the upper 32 bits being chopped off (which handily include our read/write/exec permissions). As such, we convert everything internally to using pgprot_t directly and simply convert over with pgprot_val() where needed. With this in place, transparent fixmap utilization for early ioremap works as expected.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18  sh: Flag __ioremap_caller() __init_refok.  (Paul Mundt)

The mem_init_done test makes sure that this path is only entered in __init cases, so leaving ioremap_fixed() as __init and flagging the caller __init_refok is sufficient.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18  sh: Handle unmapping of fixed slots transparently in iounmap().  (Paul Mundt)

iounmap() should balance whatever is done by ioremap(). Presently ioremap() can do any of fixed mappings, PMB mappings, or page table mappings. Presently only the latter two are handled through the standard unmap path, so tie in the fixed unmapping, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18  sh: Make iounmap_fixed() return success/failure for iounmap() path.  (Paul Mundt)

This converts iounmap_fixed() to return success/error if it handled the unmap request or not. At the same time, drop the __init label, as this can be called into later.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
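This lets the generic unmap path simply try the fixed slots first; roughly (a sketch, assuming 0 on success):

    void __iounmap(void __iomem *addr)
    {
            /* Was this one of the fixed slots? If so, we're done. */
            if (iounmap_fixed(addr) == 0)
                    return;

            /* ... otherwise fall through to PMB/page-table teardown ... */
    }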
2010-01-18  sh: Merge _32/_64 ioremap implementations.  (Paul Mundt)

There is nothing of interest in the _64 version anymore, so the _32 one can be renamed and used unconditionally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-18  sh: Handle early ioremaps through fixed mappings.  (Paul Mundt)

This adds a mem_init_done test to work out when a standard ioremap() is possible, falling back to the fixmap-based ioremap otherwise.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
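The dispatch is essentially a one-liner inside __ioremap_caller(); a sketch with the signature simplified:

    if (!mem_init_done)
            return ioremap_fixed(phys_addr, size, prot);

    /* otherwise fall through to the normal page-table based ioremap */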
2010-01-18  Merge branch 'sh/ioremap-fixed'  (Paul Mundt)
2010-01-18  sh: Setup early PMB mappings.  (Matt Fleming)

More and more boards are going to start shipping that boot with the MMU in 32BIT mode by default. Previously we relied on the bootloader to set up PMB mappings for use by the kernel but we also need to cater for boards whose bootloaders don't set them up.

If CONFIG_PMB_LEGACY is not enabled we have full control over our PMB mappings and can compress our address space. Usually, the distance between the cached and uncached mappings of RAM is always 512MB; however, we can compress the distance to be the amount of RAM on the board.

pmb_init() now becomes much simpler. It no longer has to calculate any mappings, it just has to synchronise the software PMB table with the hardware.

Tested on SDK7786 and SH7785LCR.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-17  sh: Tidy up non-translatable checks in iounmap path.  (Paul Mundt)

This tidies up the iounmap path with consolidated checks for non-translatable mappings. This is in preparation for unifying the implementations.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
2010-01-16  sh: Use ioremap_fixed() to implement SH-5 ioremap()  (Matt Fleming)

Use the fixmap-based memory mapping implementation for SH-5's ioremap() functions and delete the old static allocator that was borrowed from sparc.

Signed-off-by: Matt Fleming <matt@console-pimps.org>