path: root/arch/sparc/lib
Age	Commit message	Author
2016-11-21sparc64: Delete now unused user copy fixup functions.David S. Miller
[ Upstream commit 0fd0ff01d4c3c01e7fe69b762ee1a13236639acc ] Now that all of the user copy routines are converted to return accurate residual lengths when an exception occurs, we no longer need the broken fixup routines. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert U3copy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit ee841d0aff649164080e445e84885015958d8ff4 ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
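All of these conversions share one contract: the return value is the number of bytes NOT copied, zero meaning full success. A minimal user-space C sketch of that contract (illustrative only, not the kernel routines; the fault is simulated by a limit on readable source bytes):

    #include <stddef.h>
    #include <string.h>

    /* Sketch of the residual contract: return the number of bytes NOT
     * copied (0 on full success). A fault is simulated here by an
     * artificial limit on how many source bytes are readable. */
    static size_t copy_with_residual(char *dst, const char *src,
                                     size_t len, size_t readable)
    {
            size_t done = len < readable ? len : readable;

            memcpy(dst, src, done);
            return len - done;  /* exact residual, as the converted routines report */
    }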
2016-11-21sparc64: Convert NG2copy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit e93704e4464fdc191f73fce35129c18de2ebf95d ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert NGcopy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit 7ae3aaf53f1695877ccd5ebbc49ea65991e41f1e ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert NG4copy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit 95707704800988093a9b9a27e0f2f67f5b4bf2fa ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert U1copy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit cb736fdbb208eb3420f1a2eb2bfc024a6e9dcada ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert GENcopy_{from,to}_user to accurate exception reporting.David S. Miller
[ Upstream commit d0796b555ba60c22eb41ae39a8362156cb08eee9 ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Convert copy_in_user to accurate exception reporting.David S. Miller
[ Upstream commit 0096ac9f47b1a2e851b3165d44065d18e5f13d58 ] Report the exact number of bytes which have not been successfully copied when an exception occurs, using the running remaining length. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-21sparc64: Prepare to move to saner user copy exception handling.David S. Miller
[ Upstream commit 83a17d2661674d8c198adc0e183418f72aabab79 ] The fixup helper function mechanism for handling user copy faults is not 100% accurate, and can never be made so. We are going to transition the code to return the running remaining length, which is always tracked in one or more registers of each of these routines. In order to convert them one by one, we have to allow the existing behavior to continue functioning. Therefore make all the copy code that wants the fixup helper to be used return negative one. After all of the user copy routines have been converted, this logic and the fixup helpers themselves can be removed completely. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
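The transition logic described above, as a schematic C sketch (the function names here are hypothetical, not actual sparc64 symbols):

    /* Unconverted routines return -1 to request the old fixup
     * computation; converted ones return the exact residual. */
    static unsigned long legacy_fixup_residual(unsigned long orig_len)
    {
            return orig_len;        /* stub standing in for the old, inexact helpers */
    }

    static unsigned long user_copy_fault(long raw_ret, unsigned long orig_len)
    {
            if (raw_ret == -1)                      /* unconverted routine */
                    return legacy_fixup_residual(orig_len);
            return (unsigned long)raw_ret;          /* accurate residual */
    }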
2015-12-24sparc64: fix FP corruption in user copy functionsRob Gardner
Short story: Exception handlers used by some copy_to_user() and copy_from_user() functions do not diligently clean up floating point register usage, and this can result in a user process seeing invalid values in floating point registers. This sometimes makes the process fail.

Long story: Several cpu-specific (NG4, NG2, U1, U3) memcpy functions use floating point registers and VIS alignaddr/faligndata to accelerate data copying when source and dest addresses don't align well. Linux uses a lazy scheme for saving floating point registers; it is not done upon entering the kernel since it's a very expensive operation. Rather, it is done only when needed. If the kernel ends up not using FP regs during the course of some trap or system call, then it can return to user space without saving or restoring them.

The various memcpy functions begin their FP code with VISEntry (or a variation thereof), which saves the FP regs. They conclude their FP code with VISExit (or a variation) which essentially marks the FP regs "clean", i.e., they contain no unsaved values. fprs.FPRS_FEF is turned off so that a lazy restore will be triggered when/if the user process accesses floating point regs again.

The bug is that the user copy variants of memcpy, copy_from_user() and copy_to_user(), employ an exception handling mechanism to detect faults when accessing user space addresses, and when this handler is invoked, an immediate return from the function is forced, and VISExit is not executed. This leaves the fprs register in an indeterminate state, often with fprs.FPRS_FEF set and one or more dirty bits. The result is a return to user space with invalid values in the FP regs, and since fprs.FPRS_FEF is on, no lazy restore occurs.

This bug affects copy_to_user() and copy_from_user() for NG4, NG2, U3, and U1. All are fixed by using a new exception handler for those loads and stores that are done during the time between VISEntry and VISExit.

n.b. In NG4memcpy, the problematic code can be triggered by a copy size greater than 128 bytes and an unaligned source address.

This bug is known to be the cause of random user process memory corruptions while perf is running with the callgraph option (i.e., perf record -g). This occurs because perf uses copy_from_user() to read user stacks, and may fault when it follows a stack frame pointer off to an invalid page. Validation checks on the stack address just obscure the underlying problem.

Signed-off-by: Rob Gardner <rob.gardner@oracle.com> Signed-off-by: Dave Aldridge <david.j.aldridge@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
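A schematic C analog of the control-flow bug and its fix (plain user-space code; the fprs_fef flag stands in for the hardware fprs.FPRS_FEF bit):

    #include <stdio.h>

    static int fprs_fef;                    /* stands in for fprs.FPRS_FEF */

    static int copy_sketch(char *dst, const char *src, int len, int fault_at)
    {
            fprs_fef = 1;                   /* VISEntry: FP regs in use */
            for (int i = 0; i < len; i++) {
                    if (i == fault_at) {
                            fprs_fef = 0;   /* the fix: run the VISExit-style
                                               cleanup on the fault path too */
                            return len - i; /* residual */
                    }
                    dst[i] = src[i];
            }
            fprs_fef = 0;                   /* VISExit on the normal path */
            return 0;
    }

    int main(void)
    {
            char dst[8];

            copy_sketch(dst, "abcdefg", 8, 4);
            printf("FPRS_FEF after fault: %d\n", fprs_fef);  /* 0 with the fix */
            return 0;
    }

The buggy handlers were the equivalent of returning from inside the loop without clearing the flag, leaving dirty FP state visible to user space.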
2015-11-05Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparcLinus Torvalds
Pull sparc updates from David Miller: "Just a couple of fixes/cleanups:

 - Correct NUMA latency calculations on sparc64, from Nitin Gupta.
 - ASI_ST_BLKINIT_MRU_S value was wrong, from Rob Gardner.
 - Fix non-faulting load handling of non-quad values, also from Rob Gardner.
 - Cleanup VISsave assembler, from Sam Ravnborg.
 - Fix iommu-common code so it doesn't emit ridiculous warnings on some architectures, particularly ARM"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc:
  sparc64: Fix numa distance values
  sparc64: Don't restrict fp regs for no-fault loads
  iommu-common: Fix error code used in iommu_tbl_range_{alloc,free}().
  sparc64: use ENTRY/ENDPROC in VISsave
  sparc64: Fix incorrect ASI_ST_BLKINIT_MRU_S value
2015-09-03Merge branch 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull locking and atomic updates from Ingo Molnar: "Main changes in this cycle are:

 - Extend atomic primitives with coherent logic op primitives (atomic_{or,and,xor}()) and deprecate the old partial APIs (atomic_{set,clear}_mask()). The old ops were incoherent with incompatible signatures across architectures and with incomplete support. Now every architecture supports the primitives consistently (by Peter Zijlstra)

 - Generic support for 'relaxed atomics': _acquire/release/relaxed() flavours of xchg(), cmpxchg() and {add,sub}_return(), plus atomic_read_acquire() and atomic_set_release(). This came out of porting qrwlock code to arm64 (by Will Deacon)

 - Clean up the fragile static_key APIs that were causing repeat bugs, by introducing a new one:

       DEFINE_STATIC_KEY_TRUE(name);
       DEFINE_STATIC_KEY_FALSE(name);

   which define a key of different types with an initial true/false value. Then allow:

       static_branch_likely()
       static_branch_unlikely()

   to take a key of either type and emit the right instruction for the case. To be able to know the 'type' of the static key we encode it in the jump entry (by Peter Zijlstra)

 - Static key self-tests (by Jason Baron)
 - qrwlock optimizations (by Waiman Long)
 - small futex enhancements (by Davidlohr Bueso)
 - ... and misc other changes"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (63 commits)
  jump_label/x86: Work around asm build bug on older/backported GCCs
  locking, ARM, atomics: Define our SMP atomics in terms of _relaxed() operations
  locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h
  locking/qrwlock: Make use of _{acquire|release|relaxed}() atomics
  locking/qrwlock: Implement queue_write_unlock() using smp_store_release()
  locking/lockref: Remove homebrew cmpxchg64_relaxed() macro definition
  locking, asm-generic: Add _{relaxed|acquire|release}() variants for 'atomic_long_t'
  locking, asm-generic: Rework atomic-long.h to avoid bulk code duplication
  locking/atomics: Add _{acquire|release|relaxed}() variants of some atomic operations
  locking, compiler.h: Cast away attributes in the WRITE_ONCE() magic
  locking/static_keys: Make verify_keys() static
  jump label, locking/static_keys: Update docs
  locking/static_keys: Provide a selftest
  jump_label: Provide a self-test
  s390/uaccess, locking/static_keys: employ static_branch_likely()
  x86, tsc, locking/static_keys: Employ static_branch_likely()
  locking/static_keys: Add selftest
  locking/static_keys: Add a new static_key interface
  locking/static_keys: Rework update logic
  locking/static_keys: Add static_key_{en,dis}able() helpers
  ...
2015-08-07sparc64: use ENTRY/ENDPROC in VISsaveSam Ravnborg
Commit 44922150d87cef616fd183220d43d8fde4d41390 ("sparc64: Fix userspace FPU register corruptions") left a stale globl symbol which was not used. Fix this and introduce the use of ENTRY/ENDPROC. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-08-06sparc64: Fix userspace FPU register corruptions.David S. Miller
If we have a series of events from userspace, with %fprs=FPRS_FEF, like follows:

    ETRAP
    ETRAP
    VIS_ENTRY(fprs=0x4)
    VIS_EXIT
    RTRAP (kernel FPU restore with fpu_saved=0x4)
    RTRAP

We will not restore the user registers that were clobbered by the FPU using kernel code in the inner-most trap.

Traps allocate FPU save slots in the thread struct, and FPU using sequences save the "dirty" FPU registers only. This works at the initial trap level because all of the registers get recorded into the top-level FPU save area, and we'll return to userspace with the FPU disabled so that any FPU use by the user will take an FPU disabled trap wherein we'll load the registers back up properly. But this is not how trap returns from kernel to kernel operate.

The simplest fix for this bug is to always save all FPU register state for anything other than the top-most FPU save area. Getting rid of the optimized inner-slot FPU saving code ends up making VISEntryHalf degenerate into plain VISEntry.

Longer term we need to do something smarter to reinstate the partial save optimizations. Perhaps the fundamental error is having trap entry and exit allocate FPU save slots and restore register state. Instead, the VISEntry et al. calls should be doing that work.

This bug is about two decades old.

Reported-by: James Y Knight <jyknight@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2015-07-27sparc: Provide atomic_{or,xor,and}Peter Zijlstra
Implement atomic logic ops -- atomic_{or,xor,and}. These will replace the atomic_{set,clear}_mask functions that are available on some archs. Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2015-03-23sparc64: Fix several bugs in memmove().David S. Miller
Firstly, handle zero length calls properly. Believe it or not there are a few of these happening during early boot.

Next, we can't just drop to a memcpy() call in the forward copy case where dst <= src. The reason is that the cache initializing stores used in the Niagara memcpy() implementations can end up clearing out cache lines before we've sourced their original contents completely.

For example, considering NG4memcpy, the main unrolled loop begins like this:

    load   src + 0x00
    load   src + 0x08
    load   src + 0x10
    load   src + 0x18
    load   src + 0x20
    store  dst + 0x00

Assume dst is 64 byte aligned and let's say that dst is src - 8 for this memcpy() call. That store at the end there is the one to the first word in the cache line, thus clearing the whole line, which clobbers "src + 0x28" before it even gets loaded.

To avoid this, just fall through to a simple copy only mildly optimized for the case where src and dst are 8 byte aligned and the length is a multiple of 8 as well. We could get fancy and call GENmemcpy() but this is good enough for how this thing is actually used.

Reported-by: David Ahern <david.ahern@oracle.com> Reported-by: Bob Picco <bpicco@meloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
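The clobber scenario can be modeled in plain C, with a line-sized memset() playing the role of the cache-initializing store (illustrative only, not the NG4 code):

    #include <stdio.h>
    #include <string.h>

    /* Model of the hazard: the cache-initializing store zeroes its whole
     * 64-byte line before the 8 data bytes land. With dst = src - 8,
     * that line covers src + 0x00 .. src + 0x37, wiping src + 0x28..
     * before those bytes have been loaded. */
    int main(void)
    {
            _Alignas(64) char mem[256];
            memset(mem, 'A', sizeof(mem));

            char *dst = mem + 64;           /* 64-byte aligned */
            char *src = dst + 8;

            char loaded[40];
            memcpy(loaded, src, 40);        /* loads src + 0x00 .. src + 0x27 */

            memset(dst, 0, 64);             /* line-clearing store at dst + 0x00 ... */
            memcpy(dst, loaded, 8);         /* ...then its 8 data bytes land */

            printf("src[0x28] = %d (was 'A')\n", src[0x28]);   /* 0: clobbered */
            return 0;
    }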
2014-11-07sparc32: Implement xchg and atomic_xchg using ATOMIC_HASH locksAndreas Larsson
Atomicity between xchg and cmpxchg cannot be guaranteed when xchg is implemented with a swap and cmpxchg is implemented with locks. Without this, e.g. mcs_spin_lock and mcs_spin_unlock are broken. Signed-off-by: Andreas Larsson <andreas@gaisler.com> Signed-off-by: David S. Miller <davem@davemloft.net>
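A user-space sketch of the hashed-lock idea, with pthread mutexes standing in for the kernel's ATOMIC_HASH spinlocks; the point is that xchg and cmpxchg on the same address take the same lock, so they serialize against each other (compile with -pthread):

    #include <pthread.h>
    #include <stdint.h>

    #define ATOMIC_HASH_SIZE 4
    static pthread_mutex_t atomic_locks[ATOMIC_HASH_SIZE] = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    };
    #define ATOMIC_HASH(p) \
            (&atomic_locks[((uintptr_t)(p) >> 4) % ATOMIC_HASH_SIZE])

    static int locked_xchg(int *p, int val)
    {
            pthread_mutex_lock(ATOMIC_HASH(p));
            int old = *p;
            *p = val;
            pthread_mutex_unlock(ATOMIC_HASH(p));
            return old;
    }

    static int locked_cmpxchg(int *p, int expected, int val)
    {
            pthread_mutex_lock(ATOMIC_HASH(p));   /* same lock as xchg */
            int old = *p;
            if (old == expected)
                    *p = val;
            pthread_mutex_unlock(ATOMIC_HASH(p));
            return old;
    }

A hardware swap instruction, by contrast, would bypass the lock entirely, which is exactly the incoherence the commit fixes.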
2014-10-14sparc64: Fix FPU register corruption with AES crypto offload.David S. Miller
The AES loops in arch/sparc/crypto/aes_glue.c use a scheme where the key material is preloaded into the FPU registers, and then we loop over and over doing the crypt operation, reusing those pre-cooked key registers.

There are intervening blkcipher*() calls between the crypt operation calls. And those might perform memcpy() and thus also try to use the FPU.

The sparc64 kernel FPU usage mechanism is designed to allow such recursive uses, but with a catch. There has to be a trap between the two FPU using threads of control. The mechanism works by, when the FPU is already in use by the kernel, allocating a slot for FPU saving at trap time. Then if, within the trap handler, we try to use the FPU registers, the pre-trap FPU register state is saved into the slot. Then at trap return time we notice this and restore the pre-trap FPU state.

Over the long term there are various more involved ways we can make this work, but for a quick fix let's take advantage of the fact that the situation where this happens is very limited. All sparc64 chips that support the crypto instructions also use the Niagara4 memcpy routine, and that routine only uses the FPU for large copies where we can't get the source aligned properly to a multiple of 8 bytes.

We look to see if the FPU is already in use in this context, and if so we use the non-large copy path which only uses integer registers. Furthermore, we also limit this special logic to when we are doing a kernel copy, rather than a user copy.

Signed-off-by: David S. Miller <davem@davemloft.net>
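Schematically, the quick fix amounts to the following (the helper names are hypothetical stubs, not the real sparc64 symbols):

    #include <string.h>

    static int kernel_fpu_busy(void) { return 0; }  /* stub for the real check */
    static void *fpu_block_copy(void *d, const void *s, size_t n)
    { return memcpy(d, s, n); }                     /* stub: VIS/FPU path */
    static void *integer_copy(void *d, const void *s, size_t n)
    { return memcpy(d, s, n); }                     /* stub: integer path */

    static void *ng4_style_memcpy(void *dst, const void *src, size_t len)
    {
            /* only large, hard-to-align copies want the FPU; fall back
             * to integer registers if the FPU is already in use */
            if (len > 128 && !kernel_fpu_busy())
                    return fpu_block_copy(dst, src, len);
            return integer_copy(dst, src, len);
    }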
2014-10-13Merge branch 'locking-arch-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tipLinus Torvalds
Pull arch atomic cleanups from Ingo Molnar: "This is a series kept separate from the main locking tree, which cleans up and improves various details in the atomics type handling:

 - Remove the unused atomic_or_long() method

 - Consolidate and compress atomic ops implementations between architectures, to reduce linecount and to make it easier to add new ops.

 - Rewrite generic atomic support to only require cmpxchg() from an architecture - generate all other methods from that"

* 'locking-arch-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (23 commits)
  locking,arch: Use ACCESS_ONCE() instead of cast to volatile in atomic_read()
  locking, mips: Fix atomics
  locking, sparc64: Fix atomics
  locking,arch: Rewrite generic atomic support
  locking,arch,xtensa: Fold atomic_ops
  locking,arch,sparc: Fold atomic_ops
  locking,arch,sh: Fold atomic_ops
  locking,arch,powerpc: Fold atomic_ops
  locking,arch,parisc: Fold atomic_ops
  locking,arch,mn10300: Fold atomic_ops
  locking,arch,mips: Fold atomic_ops
  locking,arch,metag: Fold atomic_ops
  locking,arch,m68k: Fold atomic_ops
  locking,arch,m32r: Fold atomic_ops
  locking,arch,ia64: Fold atomic_ops
  locking,arch,hexagon: Fold atomic_ops
  locking,arch,cris: Fold atomic_ops
  locking,arch,avr32: Fold atomic_ops
  locking,arch,arm64: Fold atomic_ops
  locking,arch,arm: Fold atomic_ops
  ...
2014-09-10locking, sparc64: Fix atomicsPeter Zijlstra
The patch folding the atomic ops had a silly fail in the _return primitives. Fixes: 4f3316c2b5fe ("locking,arch,sparc: Fold atomic_ops") Reported-by: Guenter Roeck <linux@roeck-us.net> Tested-by: Guenter Roeck <linux@roeck-us.net> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: "David S. Miller" <davem@davemloft.net> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: David S. Miller <davem@davemloft.net> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: sparclinux@vger.kernel.org Link: http://lkml.kernel.org/r/20140902094016.GD31157@worktop.ger.corp.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-09-09sparc: Let memset return the address argumentAndreas Larsson
This makes memset follow the standard (instead of returning 0 on success). This is needed when certain versions of gcc optimize around memset calls and assume that the address argument is preserved in %o0. Signed-off-by: Andreas Larsson <andreas@gaisler.com> Signed-off-by: David S. Miller <davem@davemloft.net>
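What the standard behavior buys callers, as a small self-contained example:

    #include <assert.h>
    #include <string.h>

    int main(void)
    {
            char buf[16];

            /* The C standard requires memset() to return its first
             * argument, so gcc may assume %o0 is preserved across the
             * call and reuse it, as in chained expressions like this: */
            char *p = memset(buf, 0, sizeof(buf));
            assert(p == buf);
            return 0;
    }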
2014-08-14locking,arch,sparc: Fold atomic_opsPeter Zijlstra
Many of the atomic op implementations are the same except for one instruction; fold the lot into a few CPP macros and reduce LoC. This also prepares for easy addition of new ops. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: David S. Miller <davem@davemloft.net> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Kirill Tkhai <tkhai@yandex.ru> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Sam Ravnborg <sam@ravnborg.org> Cc: sparclinux@vger.kernel.org Link: http://lkml.kernel.org/r/20140508135852.825281379@infradead.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
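A simplified analog of the fold (GCC __atomic builtins stand in for the sparc ll/sc and lock-based implementations; the kernel macros differ in detail):

    typedef struct { int counter; } atomic_t;

    /* One template generates each op, so adding a new op is a one-line
     * change instead of a new hand-written function per architecture. */
    #define ATOMIC_OP(op)                                                   \
    static inline void atomic_##op(int i, atomic_t *v)                     \
    {                                                                       \
            __atomic_fetch_##op(&v->counter, i, __ATOMIC_SEQ_CST);         \
    }

    ATOMIC_OP(add)
    ATOMIC_OP(sub)
    ATOMIC_OP(and)
    ATOMIC_OP(or)
    ATOMIC_OP(xor)
    #undef ATOMIC_OP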
2014-08-06Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-nextLinus Torvalds
Pull sparc updates from David Miller:

 1) Add sparc RAM output to /proc/iomem, from Bob Picco.

 2) Allow seeks on /dev/mdesc, from Khalid Aziz.

 3) Cleanup sparc64 I/O accessors, from Sam Ravnborg.

 4) If update_mmu_cache{,_pmd}() is called with a not-valid mapping, do not insert it into the TLB miss hash tables, otherwise we'll livelock. Based upon work by Christopher Alexander Tobias Schulze.

 5) Fix BREAK detection in sunsab driver when no actual characters are pending, from Christopher Alexander Tobias Schulze.

 6) Because we have modules --> openfirmware --> vmalloc ordering of virtual memory, the lazy VMAP TLB flusher can cons up an invocation of flush_tlb_kernel_range() that covers the openfirmware address range. Unfortunately this will flush out the firmware's locked TLB mapping, which causes all kinds of trouble. Just split up the flush request if this happens, but in the long term the lazy VMAP flusher should probably be made a little bit smarter. Based upon work by Christopher Alexander Tobias Schulze.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next:
  sparc64: Fix up merge thinko.
  sparc: Add "install" target
  arch/sparc/math-emu/math_32.c: drop stray break operator
  sparc64: ldc_connect() should not return EINVAL when handshake is in progress.
  sparc64: Guard against flushing openfirmware mappings.
  sunsab: Fix detection of BREAK on sunsab serial console
  bbc-i2c: Fix BBC I2C envctrl on SunBlade 2000
  sparc64: Do not insert non-valid PTEs into the TSB hash table.
  sparc64: avoid code duplication in io_64.h
  sparc64: reorder functions in io_64.h
  sparc64: drop unused SLOW_DOWN_IO definitions
  sparc64: remove macro indirection in io_64.h
  sparc64: update IO access functions in PeeCeeI
  sparcspkr: use sbus_*() primitives for IO
  sparc: Add support for seek and shorter read to /dev/mdesc
  sparc: use %s for unaligned panic
  drivers/sbus/char: Micro-optimization in display7seg.c
  display7seg: Introduce the use of the managed version of kzalloc
  sparc64 - add mem to iomem resource
2014-07-21sparc64: update IO access functions in PeeCeeISam Ravnborg
The PeeCeeI.c code used in*() + out*() for IO access. But these are little-endian, and the native (big) endian result was required, which resulted in some bit-shifting. Shift the code over to use the __raw_*() variants all over. This simplifies the code as we can drop the calls to le16_to_cpu() and le32_to_cpu(). And it should be a little faster too. With this change we now use the same type of IO access functions throughout the file. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2014-07-18sparc64,ftrace: Remove check of obsolete variable function_trace_stopSteven Rostedt (Red Hat)
Nothing sets function_trace_stop to disable function tracing anymore. Remove the check for it in the arch code. Link: http://lkml.kernel.org/r/20140703.211820.1674895115102216877.davem@davemloft.net Cc: David S. Miller <davem@davemloft.net> OKed-to-go-through-tracing-tree-by: David S. Miller <davem@davemloft.net> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-06-19Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-nextLinus Torvalds
Pull sparc fixes from David Miller: "Sparc sparse fixes from Sam Ravnborg"

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next: (67 commits)
  sparc64: fix sparse warnings in int_64.c
  sparc64: fix sparse warning in ftrace.c
  sparc64: fix sparse warning in kprobes.c
  sparc64: fix sparse warning in kgdb_64.c
  sparc64: fix sparse warnings in compat_audit.c
  sparc64: fix sparse warnings in init_64.c
  sparc64: fix sparse warnings in aes_glue.c
  sparc: fix sparse warnings in smp_32.c + smp_64.c
  sparc64: fix sparse warnings in perf_event.c
  sparc64: fix sparse warnings in kprobes.c
  sparc64: fix sparse warning in tsb.c
  sparc64: clean up compat_sigset_t.seta handling
  sparc64: fix sparse "Should it be static?" warnings in signal32.c
  sparc64: fix sparse warnings in sys_sparc32.c
  sparc64: fix sparse warning in pci.c
  sparc64: fix sparse warnings in smp_64.c
  sparc64: fix sparse warning in prom_64.c
  sparc64: fix sparse warning in btext.c
  sparc64: fix sparse warnings in sys_sparc_64.c + unaligned_64.c
  sparc64: fix sparse warning in process_64.c
  ...

Conflicts:
	arch/sparc/include/asm/pgtable_64.h
2014-05-17sparc64: Add membar to Niagara2 memcpy code.David S. Miller
This is to prevent previous stores from overlapping the block stores done by the memcpy loop. Based upon a glibc patch by Jose E. Marchesi. Signed-off-by: David S. Miller <davem@davemloft.net>
2014-05-02sparc32: introduce asm-generic/io.hSam Ravnborg
Use asm-generic/io.h definitions where applicable. The inxx() and outxx() methods which were duplicated in pcic.c + leon_pci.c are replaced by a set of static inlines from asm-generic/io.h. iomap.c is replaced by the generic version, but is still present to support sparc64. Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Cc: Daniel Hellstrom <daniel@gaisler.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-11-12sparc64: Make PAGE_OFFSET variable.David S. Miller
Choose PAGE_OFFSET dynamically based upon cpu type. Original UltraSPARC-I (spitfire) chips only supported a 44-bit virtual address space. Newer chips (T4 and later) support 52-bit virtual addresses and up to 47-bits of physical memory space. Therefore we have to adjust PAGE_OFFSET dynamically based upon the capabilities of the chip.

Note that this change alone does not allow us to support > 43-bit physical memory; to do that we need to re-arrange our page table support. The current encodings of the pmd_t and pgd_t pointers restrict us to "32 + 11" == 43 bits.

This change can waste quite a bit of memory for the various tables. In particular, a future change should work to size and allocate kern_linear_bitmap[] and sparc64_valid_addr_bitmap[] dynamically. This isn't easy as we really cannot take a TLB miss when accessing kern_linear_bitmap[]. We'd have to lock it into the TLB or similar.

Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Bob Picco <bob.picco@oracle.com>
2013-09-05sparc64: Remove RWSEM export leftoversKirill Tkhai
The functions

    __down_read
    __down_read_trylock
    __down_write
    __down_write_trylock
    __up_read
    __up_write
    __downgrade_write

are implemented inline, so remove the corresponding EXPORT_SYMBOLs (they lead to compile errors on the RT kernel). Signed-off-by: Kirill Tkhai <tkhai@yandex.ru> CC: David Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-30Kconfig: consolidate CONFIG_DEBUG_STRICT_USER_COPY_CHECKSStephen Boyd
The help text for this config is duplicated across the x86, parisc, and s390 Kconfig.debug files. Arnd Bergmann noted that the help text was slightly misleading and should be fixed to state that enabling this option isn't a problem when using pre-4.4 gcc.

To simplify the rewording, consolidate the text into lib/Kconfig.debug and modify it there to be more explicit about when you should say N to this config. Also, make the text a bit more generic by stating that this option enables compile time checks so we can cover architectures which emit warnings vs. ones which emit errors. The details of how an architecture decided to implement the checks aren't as important as the concept of compile time checking of copy_from_user() calls.

While we're doing this, remove all the copy_from_user_overflow() code that's duplicated many times and place it into lib/ so that any architecture supporting this option can get the function for free.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Arjan van de Ven <arjan@linux.intel.com> Acked-by: Helge Deller <deller@gmx.de> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Chris Metcalf <cmetcalf@tilera.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2013-03-31sparc/srmmu: clear trailing edge of bitmap properlyAkinobu Mita
srmmu_nocache_bitmap is cleared by bit_map_init(). But bit_map_init() attempts to clear by memset(), so it can't clear the trailing edge of the bitmap properly on a big-endian architecture if the number of bits is not a multiple of BITS_PER_LONG.

Actually, the number of bits in srmmu_nocache_bitmap is not always a multiple of BITS_PER_LONG. It is calculated as below:

    bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT;

srmmu_nocache_size is decided proportionally by the amount of system RAM and it is rounded to a multiple of PAGE_SIZE. SRMMU_NOCACHE_BITMAP_SHIFT is defined as (PAGE_SHIFT - 4). So it can only be said that bitmap_bits is a multiple of 16.

This fixes the problem by using bitmap_clear() instead of memset() in bit_map_init() and this also uses BITS_TO_LONGS() to calculate the correct size at bitmap allocation time.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: sparclinux@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
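The fix's two ingredients, sketched in portable C (bitmap_clear_bits() is a simplified stand-in for the kernel's bitmap_clear()):

    #include <limits.h>
    #include <stdlib.h>

    #define BITS_PER_LONG (CHAR_BIT * sizeof(long))
    #define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

    /* memset() works on whole bytes, so when the bit count is not a
     * multiple of BITS_PER_LONG it cannot clear the trailing bits of
     * the last long correctly on big-endian; clearing bit-by-bit can. */
    static void bitmap_clear_bits(unsigned long *map, unsigned int start,
                                  unsigned int nbits)
    {
            for (unsigned int i = start; i < start + nbits; i++)
                    map[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
    }

    /* allocate BITS_TO_LONGS(nbits) longs, as the fixed allocation does */
    static unsigned long *alloc_bitmap(unsigned int nbits)
    {
            return calloc(BITS_TO_LONGS(nbits), sizeof(unsigned long));
    }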
2012-11-09sparc: Support atomic64_dec_if_positive properly.David S. Miller
Sparc32 already supported it, as a consequence of using the generic atomic64 implementation. And the sparc64 implementation is rather trivial. This allows us to set ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE for all of sparc, and avoid the annoying warning from lib/atomic64_test.c Signed-off-by: David S. Miller <davem@davemloft.net>
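A generic C11 sketch of the primitive (the sparc64 version is a similar compare-and-swap loop in assembler):

    #include <stdatomic.h>

    /* Decrement only if the result stays non-negative; return the
     * decremented value, or the negative result that prevented the
     * decrement. */
    static long long atomic64_dec_if_positive_sketch(_Atomic long long *v)
    {
            long long old = atomic_load(v);
            long long dec;

            do {
                    dec = old - 1;
                    if (dec < 0)
                            break;          /* already <= 0: no store */
            } while (!atomic_compare_exchange_weak(v, &old, dec));
            return dec;
    }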
2012-10-05sparc64: Niagara-4 bzero/memset, plus use MRU stores in page copy.David S. Miller
This adds optimized memset/bzero/page-clear routines for Niagara-4. We basically can do what powerpc has been able to do for a decade (via the "dcbz" instruction), which is use cache line clearing stores for bzero and memsets with a 'c' argument of zero. As long as we make the cache initializing store to each 32-byte subblock of the L2 cache line, it works.

As with other Niagara-4 optimized routines, the key is to make sure to avoid any usage of the %asi register, as reads and writes to it cost at least 50 cycles.

For the user clear cases, we don't use these new routines, we use the Niagara-1 variants instead. Those have to use %asi in an unavoidable way. A Niagara-4 8K page clear costs just under 600 cycles.

Add definitions of the MRU variants of the cache initializing store ASIs. By default, cache initializing stores install the line as Least Recently Used. If we know we're going to use the data immediately (which is true for page copies and clears) we can use the Most Recently Used variant, to decrease the likelihood of the lines being evicted before they get used.

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-10-02Merge git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linuxDavid S. Miller
There's a Niagara 2 memcpy fix in this tree and I have a Kconfig fix from Dave Jones which requires the sparc-next changes which went upstream yesterday. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-28sparc64: Fix trailing whitespace in NG4 memcpy.David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27sparc64: Fix comment typo in NG4 copy from user.David S. Miller
Noticed by Greg Onufer. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27sparc64: Fix return value of Niagara-2 memcpy.David S. Miller
It gets clobbered by the kernel's VISEntryHalf, so we have to save it in a different register than the set clobbered by that macro. The instance in glibc is OK and doesn't have this problem. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-09-27sparc64: Add SPARC-T4 optimized memcpy.David S. Miller
                    Before              After
    bw_tcp:         1288.53 MB/sec      1637.77 MB/sec
    bw_pipe:        1517.18 MB/sec      2107.61 MB/sec
    bw_unix:        1838.38 MB/sec      2640.91 MB/sec

    make -s -j128 allmodconfig
                    5min 49sec          5min 31sec

Signed-off-by: David S. Miller <davem@davemloft.net>
2012-08-20sparc64: Add SHA1 driver making use of the 'sha1' instruction.David S. Miller
Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
2012-06-27sparc64: Consistently use fsrc2 rather than fmovd in optimized asm.David S. Miller
Because fsrc2, unlike fmovd, does not update the %fsr register. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-26sparc: use the new generic strnlen_user() functionDavid Miller
This throws away the sparc-specific functions in favor of the generic optimized version. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2012-05-24lib: Sparc's strncpy_from_user is generic enough, move under lib/David S. Miller
To use this, an architecture simply needs to:

 1) Provide a user_addr_max() implementation via asm/uaccess.h

 2) Add "select GENERIC_STRNCPY_FROM_USER" to their arch Kconfig

 3) Remove the existing strncpy_from_user() implementation and symbol exports their architecture had.

Signed-off-by: David S. Miller <davem@davemloft.net> Acked-by: David Howells <dhowells@redhat.com>
2012-05-24kernel: Move REPEAT_BYTE definition into linux/kernel.hDavid S. Miller
And make sure that everything using it explicitly includes that header file. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-24sparc: Increase portability of strncpy_from_user() implementation.David S. Miller
Hide details of maximum user address calculation in a new asm/uaccess.h interface named user_addr_max(). Provide a little-endian implementation of find_zero(), which should work but can probably be improved. Abstract the alignment check behind an IS_UNALIGNED() macro. Kill a double semicolon, noticed by David Howells. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-23sparc: Optimize strncpy_from_user() zero byte search.David S. Miller
Compute a mask that will only have 0x80 in the bytes which had a zero in them. The formula is:

    ~(((x & 0x7f7f7f7f) + 0x7f7f7f7f) | x | 0x7f7f7f7f)

In the inner word iteration, we have to compute the "x | 0x7f7f7f7f" part, so we can reuse that in the above calculation. Once we have this mask, we perform divide and conquer to find the highest 0x80 location.

Signed-off-by: David S. Miller <davem@davemloft.net>
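The formula can be checked with a few lines of C:

    #include <stdint.h>
    #include <stdio.h>

    /* 0x80 in exactly the bytes of x that were zero; the (x | 0x7f7f7f7f)
     * term is the part shared with the word-iteration loop. */
    static uint32_t zero_byte_mask(uint32_t x)
    {
            uint32_t y = x | 0x7f7f7f7fu;
            return ~(((x & 0x7f7f7f7fu) + 0x7f7f7f7fu) | y);
    }

    int main(void)
    {
            /* "ab\0d" as a big-endian word: only the third byte is zero */
            printf("%08x\n", zero_byte_mask(0x61620064));  /* prints 00008000 */
            return 0;
    }

Per-byte carries cannot propagate: a byte's low 7 bits plus 0x7f is at most 0xfe, so each byte's high bit in the sum reflects only that byte's contents.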
2012-05-22sparc: Add full proper error handling to strncpy_from_user().David S. Miller
Linus removed the end-of-address-space hackery from fs/namei.c:do_getname() so we really have to validate these edge conditions and cannot cheat any more (as x86 used to as well). Move to a common C implementation like x86 did. And if both src and dst are sufficiently aligned we'll do word at a time copies and checks as well. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-19sparc32: Add ucmpdi2.o to obj-y instead of lib-y.David S. Miller
Otherwise if no references exist in the static kernel image, we won't export the symbol properly to modules. Signed-off-by: David S. Miller <davem@davemloft.net>
2012-05-19sparc32: add ucmpdi2Sam Ravnborg
Based on a copy from microblaze, add a ucmpdi2 implementation. This fixes the build of the niu driver, which failed with:

    drivers/built-in.o: In function `niu_get_nfc':
    niu.c:(.text+0x91494): undefined reference to `__ucmpdi2'

This driver will never be used on a sparc32 system, but the patch is added to fix build breakage with all*config builds.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org> Signed-off-by: David S. Miller <davem@davemloft.net>
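For reference, the libgcc contract such an implementation must satisfy, as a plain C sketch:

    /* __ucmpdi2: compare two unsigned 64-bit values and return
     * 0 if a < b, 1 if a == b, 2 if a > b. */
    int __ucmpdi2(unsigned long long a, unsigned long long b)
    {
            if (a < b)
                    return 0;
            return (a > b) ? 2 : 1;
    }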
2012-05-15sparc32: Kill off software 32-bit multiply/divide routines.David S. Miller
For the explicit calls to .udiv/.umul in assembler, I made a mechanical (read as: safe) transformation. I didn't attempt to make any simplifications. In particular, __ndelay and __udelay can be simplified significantly. Some of the %y reads are unnecessary and these routines have no need any longer for allocating a register window, they can be leaf functions. Signed-off-by: David S. Miller <davem@davemloft.net>