path: root/arch/arm64
2016-08-10  Merge remote-tracking branch 'lsk/linux-linaro-lsk-v3.18' into linux-linaro-lsk-v3.18-rt (Anders Roxell)  [lsk-v3.18-16.09-rt, linux-linaro-lsk-v3.18-rt-test]

Signed-off-by: Anders Roxell <anders.roxell@linaro.org>

Conflicts:
    kernel/futex.c
    kernel/printk/printk.c
    kernel/softirq.c
    mm/slub.c
    mm/swap.c
2016-07-17  Merge tag 'v3.18.37' into linux-linaro-lsk-v3.18 (Alex Shi)  [lsk-v3.18-16.07]

This is the 3.18.37 stable release.
2016-07-12  PCI: Move domain assignment from arm64 to generic code (Lorenzo Pieralisi)

[ Upstream commit 7c674700098c87b305b99652e3c694c4ef195866 ]

The current logic in arm64 pci_bus_assign_domain_nr() is flawed in that, depending on the host controller configuration for a platform and the initialization sequence, core code may end up allocating PCI domain numbers from both DT and the generic domain counter, which would result in PCI domain allocation aliases/errors.

Fix the logic behind the PCI domain number assignment and move the resulting code to the PCI core, so the same domain allocation logic is used on all platforms that select CONFIG_PCI_DOMAINS_GENERIC.

[bhelgaas: tidy changelog]

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Liviu Dudau <Liviu.Dudau@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
CC: Rob Herring <robh+dt@kernel.org>
CC: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-07-12  arm64: mm: remove page_mapping check in __sync_icache_dcache (Shaokun Zhang)

[ Upstream commit 20c27a4270c775d7ed661491af8ac03264d60fc6 ]

__sync_icache_dcache unconditionally skips the cache maintenance for anonymous pages, under the assumption that flushing is only required in the presence of D-side aliases [see 7249b79f6b4cc ("arm64: Do not flush the D-cache for anonymous pages")].

Unfortunately, this breaks migration of anonymous pages holding self-modifying code, where userspace cannot be reasonably expected to reissue maintenance instructions in response to a migration.

This patch fixes the problem by removing the broken page_mapping(page) check from the cache syncing code; otherwise we may end up fetching and executing stale instructions from the PoU.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-06-14  Merge tag 'v3.18.35' into linux-linaro-lsk-v3.18 (Alex Shi)

This is the 3.18.35 stable release.
2016-06-03  arm64: Ensure pmd_present() returns false after pmd_mknotpresent() (Catalin Marinas)

[ Upstream commit 5bb1cc0ff9a6b68871970737e6c4c16919928d8b ]

Currently, pmd_present() only checks for a non-zero value, returning true even after pmd_mknotpresent() (which only clears the type bits). This patch converts pmd_present() to using pte_present(), similar to the other pmd_*() checks. As a side effect, it will return true for PROT_NONE mappings, though they are not yet used by the kernel with transparent huge pages.

For consistency, also change pmd_mknotpresent() to only clear the PMD_SECT_VALID bit, even though the PMD_TABLE_BIT is already 0 for block mappings (no functional change). The unused PMD_SECT_PROT_NONE definition is removed as transparent huge pages use the pte page prot values.

Fixes: 9c7e535fcc17 ("arm64: mm: Route pmd thp functions through pte equivalents")
Cc: <stable@vger.kernel.org> # 3.15+
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
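The resulting helpers boil down to something like the following sketch, modelled on the description above (the exact macro layout in arch/arm64/include/asm/pgtable.h may differ):

    /* a pmd is present iff the underlying pte is present, not merely non-zero */
    #define pmd_present(pmd)	pte_present(pmd_pte(pmd))

    /* clear only the valid bit, so pmd_present() now returns false */
    #define pmd_mknotpresent(pmd)	(__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))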
2016-06-03  mm: fix huge zero page accounting in smaps report (Kirill A. Shutemov)

[ Upstream commit c164e038eee805147e95789dddb88ae3b3aca11c ]

Like the small zero page, the huge zero page should not be accounted as a normal page in the smaps report. For small pages we rely on vm_normal_page() to filter out the zero page, but vm_normal_page() is not designed to handle pmds. We only get there due to a hackish cast from pmd to pte in smaps_pte_range() -- the pte and pmd formats are not necessarily compatible on each and every architecture.

Let's add a separate codepath to handle pmds: follow_trans_huge_pmd() will detect the huge zero page for us. We need a pmd_dirty() helper to do this properly; the patch adds it to the THP-enabled architectures which don't yet have one.

[akpm@linux-foundation.org: use do_div to fix 32-bit build]

Signed-off-by: "Kirill A. Shutemov" <kirill@shutemov.name>
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Tested-by: Fengwei Yin <yfw.kernel@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-04-27  Merge branch 'v3.18/topic/KASAN' into linux-linaro-lsk-v3.18 (Alex Shi)  [lsk-v3.18-16.04]
2016-04-27  arm64: Don't relocate non-existent initrd (Mark Rutland)

When booting a kernel without an initrd, the kernel reports that it moves -1 bytes worth, having gone through the motions with initrd_start equal to initrd_end:

    Moving initrd from [4080000000-407fffffff] to [9fff49000-9fff48fff]

Prevent this by bailing out early when the initrd size is zero (i.e. we have no initrd), avoiding the confusing message and other associated work.

Fixes: 1570f0d7ab425c1e ("arm64: support initrd outside kernel linear map")
Cc: Mark Salter <msalter@redhat.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 4ca3bc86bea23f38596ce7508f75e072839bde44)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
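The guard amounts to roughly the following sketch of the relocation path in arch/arm64/mm/init.c (the surrounding logic is elided):

    /* sketch: bail out of initrd relocation when there is no initrd at all */
    static void __init relocate_initrd(void)
    {
    	/* initrd_start == initrd_end means a zero-size (absent) initrd */
    	if (initrd_start == initrd_end)
    		return;

    	/* ... existing relocation logic runs only for a real initrd ... */
    }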
2016-04-27  ARM64: kasan: print memory assignment (Linus Walleij)

This prints out the virtual memory assigned to KASan in the boot crawl along with other memory assignments, if and only if KASan is activated. Example dmesg from the Juno Development board:

    Memory: 1691156K/2080768K available (5465K kernel code, 444K rwdata, 2160K rodata, 340K init, 217K bss, 373228K reserved, 16384K cma-reserved)
    Virtual kernel memory layout:
        kasan   : 0xffffff8000000000 - 0xffffff9000000000   (    64 GB)
        vmalloc : 0xffffff9000000000 - 0xffffffbdbfff0000   (   182 GB)
        vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
                  0xffffffbdc2000000 - 0xffffffbdc3fc0000   (    31 MB actual)
        fixed   : 0xffffffbffabfd000 - 0xffffffbffac00000   (    12 KB)
        PCI I/O : 0xffffffbffae00000 - 0xffffffbffbe00000   (    16 MB)
        modules : 0xffffffbffc000000 - 0xffffffc000000000   (    64 MB)
        memory  : 0xffffffc000000000 - 0xffffffc07f000000   (  2032 MB)
          .init : 0xffffffc0007f5000 - 0xffffffc00084a000   (   340 KB)
          .text : 0xffffffc000080000 - 0xffffffc0007f45b4   (  7634 KB)
          .data : 0xffffffc000850000 - 0xffffffc0008bf200   (   445 KB)

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ee7f881b59de4e0e0be250fd0c5d4ade3e30ec34)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-21  Merge tag 'v3.18.31' into linux-linaro-lsk-v3.18 (Alex Shi)

This is the 3.18.31 stable release.
2016-04-18  arm64: errata: Add -mpc-relative-literal-loads to build flags (dann frazier)

[ Upstream commit 67dfa1751ce71e629aad7c438e1678ad41054677 ]

GCC6 (and Linaro's 2015.12 snapshot of GCC5) has a new default that uses adrp/ldr or adrp/add to address literal pools. When CONFIG_ARM64_ERRATUM_843419 is enabled, modules built with this toolchain fail to load:

    module libahci: unsupported RELA relocation: 275

This patch fixes the problem by passing '-mpc-relative-literal-loads' to the compiler.

Cc: stable@vger.kernel.org
Fixes: df057cc7b4fa ("arm64: errata: add module build workaround for erratum #843419")
BugLink: http://bugs.launchpad.net/bugs/1533009
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Christophe Lyon <christophe.lyon@linaro.org>
Signed-off-by: Dann Frazier <dann.frazier@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
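In the arm64 Makefile the change amounts to roughly this sketch (cc-option is assumed here so that older compilers lacking the flag still build):

    # arch/arm64/Makefile (sketch of the erratum 843419 workaround flags)
    ifeq ($(CONFIG_ARM64_ERRATUM_843419),y)
    # avoid ADRP-based literal addressing that the module loader cannot relocate
    KBUILD_CFLAGS_MODULE	+= $(call cc-option, -mpc-relative-literal-loads)
    endif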
2016-04-13  arm64: kill off the libgcc dependency (Kevin Hao)  [v3.18/topic/KASAN]

The arm64 kernel builds fine without libgcc. Actually it should not be used at all in the kernel. The following are the reasons indicated by Russell King:

  Although libgcc is part of the compiler, libgcc is built with the expectation that it will be running in userland - it expects to link to a libc. That's why you can't build libgcc without having the glibc headers around.

  [...]

  Meanwhile, having the kernel build the compiler support functions that it needs ensures that (a) we know what compiler support functions are being used, (b) we know the implementation of those support functions are sane for use in the kernel, (c) we can build them with appropriate compiler flags for best performance, and (d) we remove an unnecessary dependency on the build toolchain.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit d67703a8a69eecc8367bb9f848640b9efd430337)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-12  arm64: account for sparsemem section alignment when choosing vmemmap offset (Ard Biesheuvel)

[ Upstream commit 36e5cd6b897e17d03008f81e075625d8e43e52d0 ]

Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region") fixed an issue where the struct page array would overflow into the adjacent virtual memory region if system RAM was placed so high up in physical memory that its addresses were not representable in the build time configured virtual address size.

However, the fix failed to take into account that the vmemmap region needs to be relatively aligned with respect to the sparsemem section size, so that a sequence of page structs corresponding with a sparsemem section in the linear region appears naturally aligned in the vmemmap region.

So round up vmemmap to sparsemem section size. Since this essentially moves the projection of the linear region up in memory, also revert the reduction of the size of the vmemmap region.

Cc: <stable@vger.kernel.org>
Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: David Daney <david.daney@cavium.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-04-12  arm64: vmemmap: use virtual projection of linear region (Ard Biesheuvel)

[ Upstream commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e ]

Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made some changes to the memory mapping code to allow physical memory to reside at an offset that exceeds the size of the virtual mapping.

However, since the size of the vmemmap area is proportional to the size of the VA area, but it is populated relative to the physical space, we may end up with the struct page array being mapped outside of the vmemmap region. For instance, on my Seattle A0 box, I can see the following output in the dmesg log:

    vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
              0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)

We can fix this by deciding that the vmemmap region is not a projection of the physical space, but of the virtual space above PAGE_OFFSET, i.e., the linear region. This way, we are guaranteed that the vmemmap region is of sufficient size, and we can even reduce the size by half.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-04-11  arm64: KASAN depends on !(ARM64_16K_PAGES && ARM64_VA_BITS_48) (Andrey Ryabinin)

On KASAN + 16K_PAGES + 48BIT_VA:

    arch/arm64/mm/kasan_init.c: In function ‘kasan_early_init’:
    include/linux/compiler.h:484:38: error: call to ‘__compiletime_assert_95’ declared with attribute error: BUILD_BUG_ON failed: !IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE)
      _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)

Currently KASAN will not work on 16K_PAGES and 48BIT_VA, so forbid such a configuration to avoid the above build failure.

Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reported-by: Suzuki K. Poulose <Suzuki.Poulose@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit f1b9032f61c0412082a240cb7245f8b79e09ae8d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
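In Kconfig terms the restriction looks roughly like this (a sketch of how the dependency is expressed under the arm64 entry; the exact select line in this tree may differ):

    # arch/arm64/Kconfig (sketch)
    select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)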
2016-04-11  arm64: kasan: fix issues reported by sparse (Will Deacon)

Sparse reports some new issues introduced by the kasan patches:

    arch/arm64/mm/kasan_init.c:91:13: warning: no previous prototype for 'kasan_early_init' [-Wmissing-prototypes]
     void __init kasan_early_init(void)
                 ^
    arch/arm64/mm/kasan_init.c:91:13: warning: symbol 'kasan_early_init' was not declared. Should it be static? [sparse]

This patch resolves the problem by adding a prototype for kasan_early_init and marking the function as asmlinkage, since it's only called from head.S.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 83040123fde42ec532d3b632efb5f7f84024e61d)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
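Per the description above, the fix is essentially a declaration like the following sketch, in a header visible to the C code (the header location is an assumption):

    /* arch/arm64/include/asm/kasan.h (sketch) */
    #ifdef CONFIG_KASAN
    /* asmlinkage: called directly from head.S during early boot */
    asmlinkage void kasan_early_init(void);
    #endif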
2016-04-11  arm64: add KASAN support (Andrey Ryabinin)

This patch adds arch-specific code for the kernel address sanitizer (see Documentation/kasan.txt).

1/8 of the kernel address space is reserved for shadow memory. There was no hole big enough for this, so the virtual addresses for the shadow were stolen from the vmalloc area.

At the early boot stage the whole shadow region is populated with just one physical page (kasan_zero_page). Later, this page is reused as a readonly zero shadow for some memory that KASan doesn't currently track (vmalloc). After mapping the physical memory, pages for shadow memory are allocated and mapped.

Functions like memset/memmove/memcpy do a lot of memory accesses. If a bad pointer is passed to one of these functions it is important to catch this. Compiler instrumentation cannot do this since these functions are written in assembly, so KASan replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them; they also have aliases with a '__' prefix in the name, so the non-instrumented variant can be called when needed.

Some files are built without kasan instrumentation (e.g. mm/slub.c). For such files the original mem* functions are replaced (via #define) with the prefixed variants to disable memory access checks.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 39d114ddc68223022c12ae3a1573912bc4b585e5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>

[Backport note: commit 9f25e6ad5 ("arm64: expose number of page table levels on Kconfig level") renamed CONFIG_ARM64_PGTABLE_LEVELS to CONFIG_PGTABLE_LEVELS, which would introduce many conflicts; since only arch/arm64/mm/kasan_init.c uses two instances of this config, they were simply changed back.]
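The weak-symbol arrangement for the string routines looks roughly like this assembly sketch, modelled on the description above (details of the real arch/arm64/lib/memcpy.S may differ):

    	/* sketch: an instrumented kernel overrides the weak memcpy,
    	 * while __memcpy always names this non-instrumented copy */
    	.weak	memcpy
    ENTRY(__memcpy)
    ENTRY(memcpy)
    	/* ... copy loop ... */
    	ret
    ENDPROC(memcpy)
    ENDPROC(__memcpy)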
2016-04-11  mm: vmalloc: pass additional vm_flags to __vmalloc_node_range() (Andrey Ryabinin)

For instrumenting global variables, KASan needs to shadow the memory backing modules. So on module loading we will need to allocate memory for the shadow and map it at the address in the shadow region that corresponds to the address allocated in module_alloc().

__vmalloc_node_range() could be used for this purpose, except that it puts a guard hole after the allocated area. A guard hole in shadow memory is a problem because at some future point we might need shadow memory at the address occupied by the guard hole, so we could fail to allocate shadow for module_alloc().

We now have the VM_NO_GUARD flag for disabling the guard page, so we need to pass it into __vmalloc_node_range(). Add a new parameter 'vm_flags' to the __vmalloc_node_range() function.

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Konstantin Serebryany <kcc@google.com>
Cc: Dmitry Chernenkov <dmitryc@google.com>
Signed-off-by: Andrey Konovalov <adech.fo@gmail.com>
Cc: Yuri Gribov <tetra2005@gmail.com>
Cc: Konstantin Khlebnikov <koct9i@gmail.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit cb9e3c292d0115499c660028ad35ac5501d722b5)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
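After the change, the prototype reads roughly as follows (a sketch matching the upstream declaration in include/linux/vmalloc.h):

    /* sketch: the new 'vm_flags' parameter lets callers pass VM_NO_GUARD */
    extern void *__vmalloc_node_range(unsigned long size, unsigned long align,
    			unsigned long start, unsigned long end, gfp_t gfp_mask,
    			pgprot_t prot, unsigned long vm_flags, int node,
    			const void *caller);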
2016-04-11  arm64: Move some head.text functions to executable section (Laura Abbott)

The head.text section is intended to be run at early bootup before any of the regular kernel mappings have been setup. Parts of head.text may be freed back into the buddy allocator due to TEXT_OFFSET, so for security requirements this memory must not be executable. The suspend/resume/hotplug code path requires some of these head.S functions to run, however, which means they need to be executable.

Support these conflicting requirements by moving the few head.text functions that need to be executable to the text section, which has the appropriate page table permissions.

Tested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 034edabe6cf1d0dea49d4c836ba128cec90fad04)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: introduce VA_START macro - the first kernel virtual address (Andrey Ryabinin)

In order to not use the lengthy (UL(0xffffffffffffffff) << VA_BITS) everywhere, replace it with VA_START.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 127db024a7baee9874014dac33628253f438b4da)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
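The macro is just the expression named by the commit text (the header it lands in, likely arch/arm64/include/asm/memory.h, is an assumption):

    /* the first kernel virtual address: everything above is kernel VA space */
    #define VA_START	(UL(0xffffffffffffffff) << VA_BITS)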
2016-04-11  arm64: support initrd outside kernel linear map (Mark Salter)

The use of mem= could leave part or all of the initrd outside of the kernel linear map. This will lead to an error when unpacking the initrd and a probable failure to boot. This patch catches that situation and relocates the initrd to be fully within the linear map.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 1570f0d7ab425c1e0905715bf9cc98b2a82e723f)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: Change memcpy in kernel to use the copy template file (Feng Kan)

This converts memcpy.S to use the copy template file. The copy template file was originally based on memcpy.S.

Signed-off-by: Feng Kan <fkan@apm.com>
Signed-off-by: Balamurugan Shanmugam <bshanmugam@apm.com>
[catalin.marinas@arm.com: removed tmp3(w) .req statements as they are not used]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit e5c88e3f2fb35dca5f3e46d65095bf5d008595b7)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: move PGD_SIZE definition to pgalloc.h (Andrey Ryabinin)

This will be used by KASAN later.

Signed-off-by: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit fd2203dd3556f6553231fa026060793e67a25ce6)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: pgalloc: consistently use PGALLOC_GFP (Mark Rutland)

We currently allocate different levels of page tables with a variety of differing flags, and the PGALLOC_GFP flags, intended for use when allocating any level of page table, are only used for ptes in pte_alloc_one. On x86, PGALLOC_GFP is used for all page table allocations.

Currently the major differences are:

* __GFP_NOTRACK -- Needed to ensure page tables are always accessible in the presence of kmemcheck to prevent recursive faults. Currently kmemcheck cannot be selected for arm64.

* __GFP_REPEAT -- Causes the allocator to try to reclaim pages and retry upon a failure to allocate.

* __GFP_ZERO -- Sometimes passed explicitly, sometimes zalloc variants are used.

While we've not encountered issues so far, it would be preferable to be consistent. This patch ensures all levels of table are allocated in the same manner, with PGALLOC_GFP.

Cc: Steve Capper <steve.capper@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit 15670ef1eac9817cf48da12c885aabcdd88e9add)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: use ENDPIPROC() to annotate position independent assembler routines (Ard Biesheuvel)

For more control over which functions are called with the MMU off or with the UEFI 1:1 mapping active, annotate some assembler routines as position independent. This is done by introducing ENDPIPROC(), which replaces the ENDPROC() declaration of those routines.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 207918461eb0aca720fddec5da79bc71c133b9f1)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2016-04-11  arm64: add macros for common adrp usages (Ard Biesheuvel)

The adrp instruction is mostly used in combination with either an add, an ldr or a str instruction, with the low bits of the referenced symbol in the 12-bit immediate of the follow-up instruction. Introduce the macros adr_l, ldr_l and str_l that encapsulate these common patterns.

Tested-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit b784a5d97d0af4835dd0125a3e0e5d0fd48128d6)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
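The basic pattern the macros encapsulate looks like this assembly sketch of adr_l in its simplest form (the real macros also handle an optional scratch register):

    	/* sketch: load the absolute address of \sym into \dst */
    	.macro	adr_l, dst, sym
    	adrp	\dst, \sym			// 4KB page containing sym
    	add	\dst, \dst, :lo12:\sym		// plus the low 12 bits
    	.endm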
2016-04-11  arm64: guard asm/assembler.h against multiple inclusions (Marc Zyngier)

asm/assembler.h lacks the usual guard against multiple inclusion, leading to a compilation failure if it is accidentally included twice. Using the classic #ifndef/#define/#endif construct solves the issue.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit f3e39273e0a9a5c9dc78cd667ec3663e97e0e989)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
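For reference, the classic construct reads as follows (a sketch; the guard symbol name follows the usual arm64 convention and is an assumption):

    /* arch/arm64/include/asm/assembler.h (sketch) */
    #ifndef __ASM_ASSEMBLER_H
    #define __ASM_ASSEMBLER_H

    /* ... macro definitions ... */

    #endif	/* __ASM_ASSEMBLER_H */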
2016-03-29  arm64: psci: move psci firmware calls out of line (Will Deacon)

[ Upstream commit f5e0a12ca2d939e47995f73428d9bf1ad372b289 ]

An arm64 allmodconfig fails to build with GCC 5 due to __asmeq assertions in the PSCI firmware calling code firing due to mcount preambles breaking our assumptions about register allocation of function arguments:

    /tmp/ccDqJsJ6.s: Assembler messages:
    /tmp/ccDqJsJ6.s:60: Error: .err encountered
    /tmp/ccDqJsJ6.s:61: Error: .err encountered
    /tmp/ccDqJsJ6.s:62: Error: .err encountered
    /tmp/ccDqJsJ6.s:99: Error: .err encountered
    /tmp/ccDqJsJ6.s:100: Error: .err encountered
    /tmp/ccDqJsJ6.s:101: Error: .err encountered

This patch fixes the issue by moving the PSCI calls out-of-line into their own assembly files, which are safe from the compiler's meddling fingers.

Reported-by: Andy Whitcroft <apw@canonical.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-03-18  Merge tag 'v3.18.29' into linux-linaro-lsk-v3.18 (Alex Shi)  [lsk-v3.18-16.03]

This is the 3.18.29 stable release.
2016-03-15  arm64: kernel: pause/unpause function graph tracer in cpu_suspend() (Lorenzo Pieralisi)

[ Upstream commit de818bd4522c40ea02a81b387d2fa86f989c9623 ]

The function graph tracer adds instrumentation that is required to trace both entry and exit of a function. In particular, the function graph tracer updates the "return address" of a function in order to insert a trace callback on function exit.

Kernel power management functions like cpu_suspend() are called upon power-down entry with functions called "finishers" that are in turn called to trigger the power-down sequence, but they may not return to the kernel through the normal return path. When the core resumes from low power it returns to the cpu_suspend() function through the cpu_resume path, which leaves the trace stack frame set up by the function tracer in an inconsistent state upon return to the kernel when tracing is enabled.

This patch fixes the issue by pausing/resuming the function graph tracer on the thread executing cpu_suspend() (i.e. the function call that subsequently triggers the "suspend finishers"), so that the function graph tracer state is kept consistent across functions that enter power-down states and never return, by effectively disabling the graph tracer while they are executing.

Fixes: 819e50e25d0c ("arm64: Add ftrace support")
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Suggested-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.16+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
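The fix reduces to bracketing the suspend entry with the ftrace pause helpers, roughly as in the sketch below (pause_graph_tracing()/unpause_graph_tracing() come from linux/ftrace.h; the name of the suspend entry call is an assumption, the exact callee differs across kernel versions):

    	/*
    	 * Sketch: the finisher may never return through the normal path,
    	 * so keep the graph tracer's shadow return stack out of the way.
    	 */
    	pause_graph_tracing();
    	ret = __cpu_suspend_enter(arg, fn);	/* may come back via cpu_resume */
    	unpause_graph_tracing();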
2016-03-11  arm/arm64: KVM: Fix ioctl error handling (Michael S. Tsirkin)

[ Upstream commit 4cad67fca3fc952d6f2ed9e799621f07666a560f ]

Calling "return copy_to_user(...)" in an ioctl will not do the right thing if there's a pagefault: copy_to_user returns the number of bytes not copied in this case. Fix up kvm to do

    return copy_to_user(...) ? -EFAULT : 0;

everywhere.

Cc: stable@vger.kernel.org
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-02-26  Merge tag 'v3.18.27' into linux-linaro-lsk-v3.18 (Alex Shi)

This is the 3.18.27 stable release.
2016-02-15  arm64: restore bogomips information in /proc/cpuinfo (Yang Shi)

[ Upstream commit 92e788b749862ebe9920360513a718e5dd4da7a9 ]

As previously reported, some userspace applications depend on the bogomips value shown in /proc/cpuinfo. Although there is much less legacy impact on aarch64 than arm, it does break libvirt.

This patch reverts commit 326b16db9f69 ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo"), but with some tweaks due to context change and without the pr_info().

Fixes: 326b16db9f69 ("arm64: delay: don't bother reporting bogomips in /proc/cpuinfo")
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.12+
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
2016-02-15  arm64: mm: avoid calling apply_to_page_range on empty range (Mika Penttilä)

[ Upstream commit 57adec866c0440976c96a4b8f5b59fb411b1cacb ]

Calling apply_to_page_range with an empty range results in a BUG_ON from the core code. This can be triggered by trying to load the st_drv module with CONFIG_DEBUG_SET_MODULE_RONX enabled:

    kernel BUG at mm/memory.c:1874!
    Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
    Modules linked in:
    CPU: 3 PID: 1764 Comm: insmod Not tainted 4.5.0-rc1+ #2
    Hardware name: ARM Juno development board (r0) (DT)
    task: ffffffc9763b8000 ti: ffffffc975af8000 task.ti: ffffffc975af8000
    PC is at apply_to_page_range+0x2cc/0x2d0
    LR is at change_memory_common+0x80/0x108

This patch fixes the issue by making change_memory_common (called by the set_memory_* functions) a NOP when numpages == 0, therefore avoiding the erroneous call to apply_to_page_range and bringing us into line with x86 and s390.

Cc: <stable@vger.kernel.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
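The fix itself is a one-line early return, roughly as in this sketch of the change in arch/arm64/mm/pageattr.c (surrounding logic elided):

    static int change_memory_common(unsigned long addr, int numpages,
    				pgprot_t set_mask, pgprot_t clear_mask)
    {
    	/* ... address and range checks ... */

    	/* empty range: nothing to do, and apply_to_page_range would BUG */
    	if (!numpages)
    		return 0;

    	/* ... apply_to_page_range() on the non-empty range ... */
    }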
2016-01-14  Merge remote-tracking branch 'lts/linux-3.18.y' into linux-linaro-lsk-v3.18 (Alex Shi)  [lsk-v3.18-16.01]
2016-01-14  arm64: Add dtb files to archclean rule (Jungseok Lee)

As dts files have been reorganised under vendor subdirs, dtb files cannot be removed with "make distclean" now. Thus, this patch moves dtb files under the archclean rule and removes unnecessary entries.

Cc: Robert Richter <rrichter@cavium.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jungseok Lee <jungseoklee85@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit c7c52e482975cb9c390471df35ab85e86dbc5916)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
2015-12-08  Merge branch 'pan-3.18' of https://git.linaro.org/people/david.brown/linux-lsk into linux-linaro-lsk-v3.18 (Kevin Hilman)

* 'pan-3.18' of https://git.linaro.org/people/david.brown/linux-lsk: (27 commits)
  arm64: kernel: Add support for Privileged Access Never
  arm64: Generalise msr_s/mrs_s operations
  arm64: kernel: Add cpufeature 'enable' callback
  arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension
  arm64: kernel: Add min_field_value and use '>=' for feature detection
  arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE()
  arm64: alternative: Provide if/else/endif assembler macros
  arm64: alternative: Work around .inst assembler bugs
  arm64: alternative: Merge alternative-asm.h into alternative.h
  arm64: Add AArch32 instruction set condition code checks
  arm64: lib: use pair accessors for copy_*_user routines
  arm64/uaccess: fix sparse errors
  arm64: kernel: Move config_sctlr_el1
  arm64: Emulate SETEND for AArch32 tasks
  arm64: kconfig: move emulation option under kernel features
  arm64: Consolidate hotplug notifier for instruction emulation
  arm64: fix return code check when changing emulation handler
  arm64: Trace emulation of AArch32 legacy instructions
  arm64: Emulate CP15 Barrier instructions
  arm64: Port SWP/SWPB emulation support from arm
  ...

Conflicts:
    arch/arm64/kernel/Makefile
2015-12-03  arm64: kernel: Add support for Privileged Access Never (James Morse)  [v3.18/topic/PAN]

commit 338d4f49d6f7114a017d294ccf7374df4f998edc upstream.

'Privileged Access Never' is a new ARMv8.1 feature which prevents privileged code from accessing any virtual address where read or write access is also permitted at EL0.

This patch enables the PAN feature on all CPUs, and modifies the {get,put}_user helpers temporarily to permit access. This will catch kernel bugs where user memory is accessed directly. 'Unprivileged loads and stores' using ldtrb et al are unaffected by PAN.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
[will: use ALTERNATIVE in asm and tidy up pan_enable check]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
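The uaccess-helper modification follows this pattern (a sketch of toggling PSTATE.PAN around an explicit user access, using the SET_PSTATE_PAN and ALTERNATIVE macros this series introduces; exact call sites differ):

    	/* clear PSTATE.PAN so the intended user access below is allowed */
    	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
    			CONFIG_ARM64_PAN));

    	/* ... the actual get_user/put_user access ... */

    	/* re-enable PAN: stray kernel dereferences of user pointers fault */
    	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
    			CONFIG_ARM64_PAN));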
2015-12-03  arm64: Generalise msr_s/mrs_s operations (Suzuki K. Poulose)

commit 9ded63aaf83eba76e1a54ac02581c2badc497f1a upstream.

The system register encoding generated by sys_reg() works only for MRS/MSR(Register) operations, as we hardcode Bit20 to 1 in the mrs_s/msr_s mask. This makes it unusable for generating instructions accessing registers with Op0 < 2 (e.g., PSTATE.x with Op0=0).

As per the ARMv8 ARM (Ref: ARMv8 ARM, Section: "System instruction class encoding overview", C5.2, version: ARM DDI 0487A.f), the instruction encoding reserves bits [20-19] for Op0.

This patch generalises the sys_reg, mrs_s and msr_s macros, so that we can use them to access any of the supported system registers.

Cc: James Morse <james.morse@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: kernel: Add cpufeature 'enable' callback (James Morse)

commit 1c0763037f1e1caef739e36e09c6d41ed7b61b2d upstream.

This patch adds an 'enable()' callback to cpu capability/feature detection, allowing features that require some setup or configuration to get this opportunity once the feature has been detected.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: kernel: Add cpuid_feature_extract_field() for 4bit sign extension (James Morse)

commit 79b0e09a3c9bd74ee54582efdb351179d7c00351 upstream.

Based on arch/arm/include/asm/cputype.h, this function does the shifting and sign extension necessary when accessing cpu feature fields.

Signed-off-by: James Morse <james.morse@arm.com>
Suggested-by: Russell King <linux@arm.linux.org.uk>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
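The helper is a small shift-and-sign-extend, roughly as below (a sketch; ID register fields are 4 bits wide, starting at bit position 'field'):

    /* extract a signed 4-bit cpu feature field from an ID register value */
    static inline int __attribute_const__
    cpuid_feature_extract_field(u64 features, int field)
    {
    	/* shift the field to the top, then arithmetic-shift back down */
    	return (s64)(features << (64 - 4 - field)) >> (64 - 4);
    }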
2015-12-03  arm64: kernel: Add min_field_value and use '>=' for feature detection (James Morse)

commit 18ffa046c509d0cd011eeea2c0418f2d014771fc upstream.

When a new cpu feature is available, the cpu feature bits will have some initial value, which is incremented when the feature is updated. This patch changes 'register_value' to be 'min_field_value', and checks that the feature bits value (interpreted as a signed int) is greater than or equal to this minimum.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
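Combined with the extractor above, detection becomes a signed '>=' comparison, roughly (a sketch; the field_pos and min_field_value member names of the capability descriptor are assumptions based on the commit text):

    static bool
    feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
    {
    	int val = cpuid_feature_extract_field(reg, entry->field_pos);

    	/* any value at or above the initial feature value counts as present */
    	return val >= entry->min_field_value;
    }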
2015-12-03  arm64: kernel: Add optional CONFIG_ parameter to ALTERNATIVE() (James Morse)

commit 91a5cefa2f98bdd3404c2fba57048c4fa225cc37 upstream.

Some uses of ALTERNATIVE() may depend on a feature that is disabled at compile time by a Kconfig option. In this case the unused alternative instructions waste space, and if the original instruction is a nop, it wastes time and space.

This patch adds an optional 'config' option to ALTERNATIVE() and alternative_insn that allows the compiler to remove both the original and alternative instructions if the config option is not defined.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: alternative: Provide if/else/endif assembler macros (Daniel Thompson)

commit 63e40815f02584ba8174e0f6af40924b2b335cae upstream.

The existing alternative_insn macro has some limitations that make it hard to work with. In particular, the fact it takes instructions from its own macro arguments means it doesn't play very nicely with C pre-processor macros, because the macro arguments look like a string to the C pre-processor. Workarounds are (probably) possible but things start to look ugly.

Introduce an alternative set of macros that allows instructions to be presented to the assembler as normal, and switch everything over to the new macros.

Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: alternative: Work around .inst assembler bugs (Marc Zyngier)

commit eb7c11ee3c5ce6c45ac28a5015a8e60ed458b412 upstream.

AArch64 toolchains suffer from the following bug:

    $ cat blah.S
    1:	.inst	0x01020304
    	.if ((. - 1b) != 4)
    	.error "blah"
    	.endif
    $ aarch64-linux-gnu-gcc -c blah.S
    blah.S: Assembler messages:
    blah.S:3: Error: non-constant expression in ".if" statement

which precludes the use of msr_s and co as part of alternatives.

We work around this issue by not directly testing the labels themselves, but by moving the current output pointer by a value that should always be zero. If this value is not null, then we will trigger a backward move, which is explicitly forbidden. This triggers the error we're after:

    AS      arch/arm64/kvm/hyp.o
    arch/arm64/kvm/hyp.S: Assembler messages:
    arch/arm64/kvm/hyp.S:1377: Error: attempt to move .org backwards
    scripts/Makefile.build:294: recipe for target 'arch/arm64/kvm/hyp.o' failed
    make[1]: *** [arch/arm64/kvm/hyp.o] Error 1
    Makefile:946: recipe for target 'arch/arm64/kvm' failed

Not pretty, but at least it works on the current toolchains.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: alternative: Merge alternative-asm.h into alternative.h (Marc Zyngier)

commit 8d883b23aed73cad844ba48051c7e96eddf0f51c upstream.

asm/alternative-asm.h and asm/alternative.h are extremely similar, and really deserve to live in the same file (as this makes further modifications a bit easier). Fold the content of alternative-asm.h into alternative.h, and update the few users.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: Add AArch32 instruction set condition code checks (Punit Agrawal)

commit 0be0e44c182c4f13df13903fd1377671d157d7b7 upstream.

Port support for AArch32 instruction condition code checking from arm to arm64.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
2015-12-03  arm64: lib: use pair accessors for copy_*_user routines (Will Deacon)

commit 23e94994464a7281838785675e09c8ed1055f62f upstream.

The AArch64 instruction set contains load/store pair memory accessors, so use these in our copy_*_user routines to transfer 16 bytes per iteration.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
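The inner loop pattern looks roughly like this assembly sketch (assuming the remaining byte count in x2 is a positive multiple of 16; the real routines also handle tails, alignment, and fault fixups):

    1:	ldp	x3, x4, [x1], #16	// load a 16-byte pair from src (x1)
    	stp	x3, x4, [x0], #16	// store the pair to dst (x0)
    	subs	x2, x2, #16		// 16 fewer bytes remaining
    	b.gt	1b			// loop while bytes remain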
2015-12-03  arm64/uaccess: fix sparse errors (Michael S. Tsirkin)

commit 58fff51784cb5e1bcc06a1417be26eec4288507c upstream.

virtio wants to read bitwise types from userspace using get_user. At the moment this triggers sparse errors, since the value is passed through an integer. Fix that up using __force.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: David Brown <david.brown@linaro.org>
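The essence of the fix is a __force cast where get_user's internal integer is assigned back to the user-visible lvalue, roughly (a sketch of the pattern inside the arm64 __get_user machinery; macro internals abbreviated):

    	/* __gu_val is a plain unsigned long filled by the access insn;
    	 * the __force cast lets it land in a bitwise type (e.g. __le32)
    	 * without a sparse warning */
    	(x) = (__force __typeof__(*(ptr)))__gu_val;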