path: root/arch/arm64/kernel
2015-01-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-efi' into linux-linaro-lsk-v3.14  [lsk-v3.14-15.01]  (Mark Brown)

Conflicts:
    arch/arm64/kernel/Makefile
    drivers/firmware/efi/efi-stub-helper.c
2015-01-16  Merge tag 'v3.14.29' into linux-linaro-lsk-v3.14  (Mark Brown)

This is the 3.14.29 stable release.
2015-01-16  arm64: kernel: fix __cpu_suspend mm switch on warm-boot  (Lorenzo Pieralisi)

commit f43c27188a49111b58e9611afa2f0365b0b55625 upstream.

On arm64 the TTBR0_EL1 register is set either to the reserved TTBR0 page tables on boot or to the active_mm mappings belonging to user space processes; it must never be set to the swapper_pg_dir page table mappings.

When a CPU is booted its active_mm is set to init_mm even though its TTBR0_EL1 points at the reserved TTBR0 page mappings. This implies that when __cpu_suspend is triggered the active_mm can point at init_mm even if the current TTBR0_EL1 register contains the reserved TTBR0_EL1 mappings.

Therefore, the mm save and restore executed in __cpu_suspend might turn out to be erroneous in that, if current->active_mm corresponds to init_mm, on resume from low power it ends up restoring into TTBR0_EL1 the init_mm mappings, which are global and can cause speculation of TLB entries that end up being propagated to user space.

This patch fixes the issue by checking the active_mm pointer before restoring the TTBR0 mappings: if current->active_mm == &init_mm, the code sets TTBR0_EL1 to the reserved TTBR0 mapping instead of switching back to the active_mm, which is the expected behaviour corresponding to the TTBR0_EL1 settings when __cpu_suspend was entered.

Fixes: 95322526ef62 ("arm64: kernel: cpu_{suspend/resume} implementation")
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
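The core of the fix is a small guard in the resume path; a minimal sketch of its shape (simplified from the actual arch/arm64/kernel/suspend.c code):

    struct mm_struct *mm = current->active_mm;

    /*
     * Never point TTBR0_EL1 at the global init_mm (swapper) tables on
     * resume; use the reserved TTBR0 tables instead, matching the
     * state the CPU had when __cpu_suspend was entered.
     */
    if (mm == &init_mm)
        cpu_set_reserved_ttbr0();
    else
        cpu_switch_mm(mm->pgd, mm);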
2015-01-16  arm64: Move cpu_resume into the text section  (Laura Abbott)

commit c3684fbb446501b48dec6677a6a9f61c215053de upstream.

The function cpu_resume currently lives in the .data section. There's no reason for it to be there since we can use relative instructions without a problem. Move a few cpu_resume data structures out of the assembly file so the .data annotation can be dropped completely and cpu_resume ends up in the read only text section.

Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Kees Cook <keescook@chromium.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2015-01-16  arm64: kernel: refactor the CPU suspend API for retention states  (Lorenzo Pieralisi)

commit 714f59925595b9c2ea9c22b107b340d38e3b3bc9 upstream.

CPU suspend is the standard kernel interface to be used to enter low-power states on ARM64 systems. The current cpu_suspend implementation by default assumes that all low power states lose the CPU context, so the CPU registers must be saved and cleaned to DRAM upon state entry. Furthermore, the current cpu_suspend() implementation assumes that if the CPU suspend back-end method returns when called, this has to be considered an error regardless of the return code (which can be successful), since the CPU was not expected to return from a code path that is different from the cpu_resume code path - eg returning from the reset vector.

All in all this means that the current API does not cope well with low-power states that preserve the CPU context when entered (ie retention states): first of all, the context is saved for nothing on state entry for those states, and a successful state entry can return as a normal function return, which is considered an error by the current CPU suspend implementation.

This patch refactors the cpu_suspend() API so that it can be split into two separate functionalities. The arm64 cpu_suspend API just provides a wrapper around the CPU operations suspend hook. A new function is introduced (for architecture code use only) for states that require context saving upon entry:

    __cpu_suspend(unsigned long arg, int (*fn)(unsigned long))

__cpu_suspend() saves the context on function entry and calls the so-called suspend finisher (ie fn) to complete the suspend operation. The finisher is not expected to return, unless it fails, in which case the error is propagated back to the __cpu_suspend caller.

The API refactoring results in the following pseudo code call sequence for a suspending CPU, when triggered from a kernel subsystem:

    /*
     * int cpu_suspend(unsigned long idx)
     * @idx: idle state index
     */
    {
    -> cpu_suspend(idx)
        |---> CPU operations suspend hook called, if present
            |--> if (retention_state)
                |--> direct suspend back-end call (eg PSCI suspend)
                 else
                |--> __cpu_suspend(idx, &back_end_finisher);
    }

By refactoring the cpu_suspend API this way, the CPU operations back-end has a chance to detect whether idle states require state saving or not and can call the required suspend operations accordingly, either through a simple function call or indirectly through __cpu_suspend(), which carries out state saving and suspend finisher dispatching to complete idle state entry.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
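As a concrete illustration, a PSCI-style back-end under the new API might dispatch roughly like this (a sketch following the commit's description; this_cpu_psci_states() is a hypothetical lookup helper, and the retention-state test is simplified):

    static int cpu_psci_cpu_suspend(unsigned long index)
    {
        /* hypothetical helper returning this CPU's idle state table */
        struct psci_power_state *state = this_cpu_psci_states();

        if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY)
            /* retention state: no context save, a plain call returns on success */
            return psci_ops.cpu_suspend(state[index - 1], 0);

        /* power-down state: save context, then run the finisher */
        return __cpu_suspend(index, psci_suspend_finisher);
    }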
2015-01-16  arm64: kernel: add missing __init section marker to cpu_suspend_init  (Lorenzo Pieralisi)

commit 18ab7db6b749ac27aac08d572afbbd2f4d937934 upstream.

Suspend init function must be marked as __init, since it is not needed after the kernel has booted. This patch moves the cpu_suspend_init() function to the __init section.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
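The change amounts to adding the __init annotation, which places the function's text in .init.text so it is freed once boot completes; schematically (a sketch, not the full function, and the initcall registration shown is an assumption):

    static int __init cpu_suspend_init(void)
    {
        /* one-time allocation of per-cpu suspend context */
        return 0;
    }
    early_initcall(cpu_suspend_init);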
2015-01-08  Merge tag 'v3.14.28' into linux-linaro-lsk-v3.14  (Mark Brown)

This is the 3.14.28 stable release.
2015-01-08  arm64: Add COMPAT_HWCAP_LPAE  (Catalin Marinas)

commit 7d57511d2dba03a8046c8b428dd9192a4bfc1e73 upstream.

Commit a469abd0f868 ("ARM: elf: add new hwcap for identifying atomic ldrd/strd instructions") introduces HWCAP_LPAE for 32-bit ARM applications. As LPAE is always present on arm64, report the corresponding compat HWCAP to user space.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
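From userspace, a 32-bit compat binary can observe the new bit via the auxiliary vector; a quick check might look like this (the HWCAP_LPAE value of bit 20 follows arm32's <asm/hwcap.h> and is an assumption here):

    #include <stdio.h>
    #include <sys/auxv.h>

    #ifndef HWCAP_LPAE
    #define HWCAP_LPAE  (1 << 20)   /* assumed value, per arm32 hwcap.h */
    #endif

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        printf("LPAE (atomic ldrd/strd): %s\n",
               (hwcap & HWCAP_LPAE) ? "yes" : "no");
        return 0;
    }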
2014-11-21  Merge tag 'v3.14.25' into linux-linaro-lsk-v3.14  (Mark Brown)

This is the 3.14.25 stable release.
2014-11-21  Correct the race condition in aarch64_insn_patch_text_sync()  (William Cohen)

commit 899d5933b2dd2720f2b20b01eaa07871aa6ad096 upstream.

When experimenting with patches to provide kprobes support for aarch64, smp machines would hang when inserting breakpoints into kernel code. The hangs were caused by a race condition in the code called by aarch64_insn_patch_text_sync(). The first processor in the aarch64_insn_patch_text_cb() function would patch the code while other processors were still entering the function and incrementing the cpu_count field. This resulted in some processors never observing the exit condition and therefore never exiting the function. Thus, processors in the system hung.

The first processor to enter the patching function performs the patching and signals that the patching is complete with an increment of the cpu_count field. When all the processors have incremented the cpu_count field, cpu_count will be num_cpus_online()+1 and they will return to normal execution.

Fixes: ae16480785de ("arm64: introduce interfaces to hotpatch kernel and module code")
Signed-off-by: William Cohen <wcohen@redhat.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
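The rendezvous described above can be sketched as follows (simplified; the struct fields and the counting protocol follow the commit text):

    static int aarch64_insn_patch_text_cb(void *arg)
    {
        int i, ret = 0;
        struct aarch64_insn_patch *pp = arg;

        /* The first CPU in becomes the patcher... */
        if (atomic_inc_return(&pp->cpu_count) == 1) {
            for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
                ret = aarch64_insn_patch_text_nosync(pp->text_addrs[i],
                                                     pp->new_insns[i]);
            /*
             * ...and only signals completion with a second increment,
             * so cpu_count reaches num_online_cpus() + 1 strictly
             * after the patching is done.
             */
            atomic_inc(&pp->cpu_count);
        } else {
            /* Everyone else spins until that final increment. */
            while (atomic_read(&pp->cpu_count) <= num_online_cpus())
                cpu_relax();
            isb();
        }

        return ret;
    }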
2014-11-18  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-fpsimd' into linux-linaro-lsk-v3.14  (Mark Brown)

Conflicts:
    arch/arm64/include/asm/thread_info.h
2014-11-18  arm64: add support for kernel mode NEON in interrupt context  [v3.14/topic/arm64-fpsimd]  (Ard Biesheuvel)

This patch modifies kernel_neon_begin() and kernel_neon_end(), so they may be called from any context. To address the case where only a couple of registers are needed, kernel_neon_begin_partial(u32) is introduced which takes as a parameter the number of bottom 'n' NEON q-registers required. To mark the end of such a partial section, the regular kernel_neon_end() should be used.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
(cherry picked from commit 190f1ca85d071114930dd7abe6b5d103e9d5572f)
Signed-off-by: Mark Brown <broonie@kernel.org>
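A hedged usage sketch: a driver doing a small NEON operation from softirq context, touching only the bottom four q-registers (example_neon_xor is a made-up example, not kernel code):

    #include <asm/neon.h>

    static void example_neon_xor(void *dst, const void *src, size_t len)
    {
        kernel_neon_begin_partial(4);   /* preserve/restore only q0-q3 */
        /* ... NEON-accelerated work confined to q0-q3 ... */
        kernel_neon_end();              /* the regular end call closes a partial section */
    }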
2014-11-18  arm64: defer reloading a task's FPSIMD state to userland resume  (Ard Biesheuvel)

If a task gets scheduled out and back in again and nothing has touched its FPSIMD state in the mean time, there is really no reason to reload it from memory. Similarly, repeated calls to kernel_neon_begin() and kernel_neon_end() will preserve and restore the FPSIMD state every time.

This patch defers the FPSIMD state restore to the last possible moment, i.e., right before the task returns to userland. If a task does not return to userland at all (for any reason), the existing FPSIMD state is preserved and may be reused by the owning task if it gets scheduled in again on the same CPU.

This patch adds two more functions to abstract away from straight FPSIMD register file saves and restores:

- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
(cherry picked from commit 005f78cd88494457ed38ce817f4e3fe5d372f0cb)
Signed-off-by: Mark Brown <broonie@kernel.org>
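The deferral hinges on a per-thread "register file is stale" flag; a sketch of the restore-on-resume path (an approximation of the logic, with TIF_FOREIGN_FPSTATE and fpsimd_last_state taken from the upstream commit):

    void fpsimd_restore_current_state(void)
    {
        preempt_disable();
        if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
            struct fpsimd_state *st = &current->thread.fpsimd_state;

            /* registers don't hold current's state: reload once, here */
            fpsimd_load_state(st);
            this_cpu_write(fpsimd_last_state, st);
            st->cpu = smp_processor_id();
        }
        preempt_enable();
    }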
2014-11-18  arm64: add abstractions for FPSIMD state manipulation  (Ard Biesheuvel)

There are two tacit assumptions in the FPSIMD handling code that will no longer hold after the next patch that optimizes away some FPSIMD state restores:

. the FPSIMD registers of this CPU contain the userland FPSIMD state of task 'current';
. when switching to a task, its FPSIMD state will always be restored from memory.

This patch adds the following functions to abstract away from straight FPSIMD register file saves and restores:

- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state

Where necessary, the signal handling and fork code are updated to use the above wrappers instead of poking into the FPSIMD registers directly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
(cherry picked from commit c51f92693c35c141cf7d9b7e2fcbb81128324eb4)
Signed-off-by: Mark Brown <broonie@kernel.org>
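For instance, a signal-frame save path now goes through the wrapper rather than saving the register file directly; a minimal sketch (preserve_fpsimd_context is illustrative and the field copies are simplified, not the exact signal code):

    static int preserve_fpsimd_context(struct fpsimd_context __user *ctx)
    {
        struct fpsimd_state *st = &current->thread.fpsimd_state;
        int err;

        /* make the in-memory copy authoritative before copying out */
        fpsimd_preserve_current_state();

        err = __copy_to_user(ctx->vregs, st->vregs, sizeof(st->vregs));
        err |= __put_user(st->fpsr, &ctx->fpsr);
        err |= __put_user(st->fpcr, &ctx->fpcr);
        return err ? -EFAULT : 0;
    }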
2014-10-15  arm64: enable context tracking  (Larry Bassel)

Backport of the following patch to 3.14 LSK:

    commit 6c81fe7925cc4c42de49e17be21eb86d1173c3a7
    Author: Larry Bassel <larry.bassel@linaro.org>
    Date:   Fri May 30 12:34:15 2014 -0700

        arm64: enable context tracking

        Make calls to ct_user_enter when the kernel is exited and ct_user_exit when the kernel is entered (in el0_da, el0_ia, el0_svc, el0_irq and all of the "error" paths). These macros expand to function calls which will only work properly if el0_sync and related code has been rearranged (in a previous patch of this series).

        The calls to ct_user_exit are made after hw debugging has been enabled (enable_dbg_and_irq). The call to ct_user_enter is made at the beginning of the kernel_exit macro.

        This patch is based on earlier work by Kevin Hilman. Save/restore optimizations were also done by Kevin.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Kevin Hilman <khilman@linaro.org>
    Tested-by: Kevin Hilman <khilman@linaro.org>
    Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
    Signed-off-by: Kevin Hilman <khilman@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
2014-10-15  arm64: adjust el0_sync so that a function can be called  (Larry Bassel)

Backport of the following patch to 3.14 LSK:

    commit 6ab6463aeb5fbc75fa3227befb508fc33b34dbf1
    Author: Larry Bassel <larry.bassel@linaro.org>
    Date:   Fri May 30 20:34:14 2014 +0100

        arm64: adjust el0_sync so that a function can be called

        To implement the context tracker properly on arm64, a function call needs to be made after debugging and interrupts are turned on, but before the lr is changed to point to ret_to_user(). If the function call is made after the lr is changed the function will not return to the correct place.

        For similar reasons, defer the setting of x0 so that it doesn't need to be saved around the function call (save far_el1 in x26 temporarily instead).

    Acked-by: Will Deacon <will.deacon@arm.com>
    Reviewed-by: Kevin Hilman <khilman@linaro.org>
    Tested-by: Kevin Hilman <khilman@linaro.org>
    Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
2014-10-15  arm64: Support arch_irq_work_raise() via self IPIs  (Larry Bassel)

Backport of the following patch to LSK 3.14:

    commit eb631bb5bf5b042202aaaee4a8dd8f863ba2a900
    Author: Larry Bassel <larry.bassel@linaro.org>
    Date:   Mon May 12 16:48:51 2014 +0100

        arm64: Support arch_irq_work_raise() via self IPIs

        Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1:

            commit bf18525fd793101df42a1344ecc48b49b62e48c9
            Author: Stephen Boyd <sboyd@codeaurora.org>
            Date:   Tue Oct 29 20:32:56 2013 +0100

                ARM: 7872/1: Support arch_irq_work_raise() via self IPIs

                By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again.

                Note that we implement this via IPIs which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway.

                Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
                Reviewed-by: Kevin Hilman <khilman@linaro.org>
                Cc: Frederic Weisbecker <fweisbec@gmail.com>
                Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

        Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
        Reviewed-by: Kevin Hilman <khilman@linaro.org>
        Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Signed-off-by: Larry Bassel <larry.bassel@linaro.org>
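The arm64 hook itself is small; a sketch mirroring the shape described above (IPI_IRQ_WORK and smp_cross_call are arm64 SMP internals, shown as an approximation):

    #ifdef CONFIG_IRQ_WORK
    void arch_irq_work_raise(void)
    {
        /*
         * IPI this CPU so irq_work_run() is entered from an interrupt,
         * breaking the tick-restart dependency cycle.
         */
        if (__smp_cross_call)
            smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
    }
    #endif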
2014-10-09  Merge remote-tracking branch 'lsk/v3.14/topic/kvm' into linux-linaro-lsk-v3.14  (Mark Brown)

2014-10-08  Merge remote-tracking branch 'lsk/v3.14/topic/gicv3' into linux-linaro-lsk-v3.14  (Mark Brown)

2014-10-08  Merge branch 'lsk/v3.14/topic/kvm' into lsk/lsk-with-kvm-v3.14  (Christoffer Dall)

Conflicts:
    arch/arm64/include/asm/debug-monitors.h

2014-10-08  Merge branch 'lsk/v3.14/topic/gic-v3' into lsk/lsk-with-kvm-v3.14  (Christoffer Dall)

2014-10-06  Merge tag 'v3.14.20' into linux-linaro-lsk-v3.14  (Mark Brown)

This is the 3.14.20 stable release.
2014-10-05  arm64: ptrace: fix compat hardware watchpoint reporting  (Will Deacon)

commit 27d7ff273c2aad37b28f6ff0cab2cfa35b51e648 upstream.

I'm not sure what I was on when I wrote this, but when iterating over the hardware watchpoint array (hbp_watch_array), our index is off by ARM_MAX_BRP, so we walk off the end of our thread_struct...

...except, a dodgy condition in the loop means that it never executes at all (bp cannot be NULL).

This patch fixes the code so that we remove the bp check and use the correct index for accessing the watchpoint structures.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
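The indexing bug is easiest to see in sketch form: compat ptrace numbers watchpoints after the breakpoints, but the watchpoint array itself starts at zero (a simplified illustration, with the array name taken from the commit text, not the exact ptrace code):

    /* num runs from ARM_MAX_BRP to ARM_MAX_BRP + ARM_MAX_WRP - 1 */
    static struct perf_event *compat_wp_slot(struct debug_info *d, int num)
    {
        if (num < ARM_MAX_BRP || num >= ARM_MAX_BRP + ARM_MAX_WRP)
            return NULL;    /* not a watchpoint slot */

        /* the buggy code indexed hbp_watch_array[num], walking off the end */
        return d->hbp_watch_array[num - ARM_MAX_BRP];
    }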
2014-10-05  arm64: use irq_set_affinity with force=false when migrating irqs  (Sudeep Holla)

commit 3d8afe3099ebc602848aa7f09235cce3a9a023ce upstream.

The arm64 interrupt migration code on cpu offline calls irqchip.irq_set_affinity() with the argument force=true. Originally this argument had no effect because it was not used by any interrupt chip driver and there was no semantics defined.

This changed with commit 01f8fa4f01d8 ("genirq: Allow forcing cpu affinity of interrupts"), which made the force argument useful to route interrupts to not yet online cpus without checking the target cpu against the cpu online mask. The following commit ffde1de64012 ("irqchip: gic: Support forced affinity setting") implemented this for the GIC interrupt controller.

As a consequence the cpu offline irq migration fails if CPU0 is offlined, because CPU0 is still set in the affinity mask and the validation against the cpu online mask is skipped due to the force argument being true. The following first_cpu(mask) selection always selects CPU0 as the target.

Commit 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity") intended to fix the above mentioned issue but introduced another issue where affinity can be migrated to a wrong CPU due to the unconditional copy of cpu_online_mask.

As was done for arm, solve the issue by calling irq_set_affinity() with force=false from the CPU offline irq migration code, so the GIC driver validates the affinity mask against the CPU online mask and therefore removes CPU0 from the possible target candidates. Also revert the changes done in commit 601c942176d8 as they are no longer needed. Tested on the Juno platform.

Fixes: 601c942176d8 ("arm64: use cpu_online_mask when using forced irq_set_affinity")
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
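In sketch form, the migration path now passes force=false so the irqchip can reject offline CPUs itself (simplified from the arm/arm64 migrate_one_irq() pattern of that era):

    static bool migrate_one_irq(struct irq_desc *desc)
    {
        struct irq_data *d = irq_desc_get_irq_data(desc);
        const struct cpumask *affinity = d->affinity;
        struct irq_chip *c = irq_data_get_irq_chip(d);
        bool ret = false;

        if (!cpumask_intersects(affinity, cpu_online_mask)) {
            affinity = cpu_online_mask;     /* need a fresh target set */
            ret = true;
        }

        if (c->irq_set_affinity)
            /* force=false: the GIC validates against cpu_online_mask */
            c->irq_set_affinity(d, affinity, false);

        return ret;
    }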
2014-10-05  arm64: flush TLS registers during exec  (Will Deacon)

commit eb35bdd7bca29a13c8ecd44e6fd747a84ce675db upstream.

Nathan reports that we leak TLS information from the parent context during an exec, as we don't clear the TLS registers when flushing the thread state.

This patch updates the flushing code so that we:

(1) Unconditionally zero the tpidr_el0 register (since this is fully context switched for native tasks and zeroed for compat tasks)

(2) Zero the tp_value state in thread_info before clearing the tpidrro_el0 register for compat tasks (since this is only writable by the set_tls compat syscall and therefore not fully switched).

A missing compiler barrier is also added to the compat set_tls syscall.

Acked-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Reported-by: Nathan Lynch <Nathan_Lynch@mentor.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
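The flush logic itself is short; a sketch matching the description above (close to the upstream tls_thread_flush(), shown as an approximation):

    static void tls_thread_flush(void)
    {
        asm ("msr tpidr_el0, xzr");             /* (1) always zero native TLS */

        if (is_compat_task()) {
            current->thread.tp_value = 0;       /* (2) clear the saved value... */

            /*
             * ...with a barrier so a preempting context switch cannot
             * write the stale tp_value back into tpidrro_el0 after we
             * zero the register below.
             */
            barrier();
            asm ("msr tpidrro_el0, xzr");
        }
    }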
2014-10-02  arm64: KVM: implement lazy world switch for debug registers  (Marc Zyngier)

Implement switching of the debug registers. While the number of registers is massive, CPUs usually don't implement them all (A57 has 6 breakpoints and 4 watchpoints, which gives us a total of 22 registers "only").

Also, we only save/restore them when MDSCR_EL1 has debug enabled, or when we've flagged the debug registers as dirty. It means that most of the time, we only save/restore MDSCR_EL1.

Reviewed-by: Anup Patel <anup.patel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit b0e626b380872b663918230fafdac128c34fea56)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
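The gating test can be expressed at the C level like this (a sketch of the policy only; the real world switch lives in hyp assembly, and the dirty flag is tracked per vcpu):

    static bool debug_switch_needed(u64 mdscr_el1, bool regs_dirty)
    {
        /* KDE/MDE: kernel debug enable and monitor debug enable bits */
        return regs_dirty || (mdscr_el1 & (DBG_MDSCR_KDE | DBG_MDSCR_MDE));
    }

When this returns false, only MDSCR_EL1 itself is saved and restored on the switch.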
2014-10-02  arm64: move DBG_MDSCR_* to asm/debug-monitors.h  (Christoffer Dall)

In order to be able to use the DBG_MDSCR_* macros from the KVM code, move the relevant definitions to the obvious include file. Also move the debug_el enum to a portion of the file that is guarded by #ifndef __ASSEMBLY__ in order to use that file from assembly code.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Anup Patel <anup.patel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 51ba248164d0eeb8b4f94d405430c18a56c6ac9a)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  arm64: KVM: vgic: add GICv3 world switch  (Marc Zyngier)

Introduce the GICv3 world switch code used to save/restore the GICv3 context.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 754d37726010d872f1f714a8ce8920acdfa4978c)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  arm64: KVM: split GICv2 world switch from hyp code  (Marc Zyngier)

Move the GICv2 world switch code into its own file, and add the necessary indirection to the arm64 switch code. Also introduce a new type field to the vgic_params structure.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 1a9b13056dde7e3092304d6041ccc60a913042ea)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  KVM: arm/arm64: vgic: move GICv2 registers to their own structure  (Marc Zyngier)

In order to make way for the GICv3 registers, move the v2-specific registers to their own structure.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit eede821dbfd58df89edb072da64e006321eaef58)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  arm64: gicv3: Allow GICv3 compilation with older binutils  (Catalin Marinas)

GICv3 introduces new system registers accessible with the full msr/mrs syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent binutils understand the new syntax. This patch introduces msr_s/mrs_s assembly macros which generate the equivalent instructions above and converts the existing GICv3 code (both drivers/irqchip/ and arch/arm64/kernel/).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
(cherry picked from commit 72c5839515260dce966cd24f54436e6583288e6c)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-10-02  irqchip: gic-v3: Initial support for GICv3  (Marc Zyngier)

The Generic Interrupt Controller (version 3) offers services that are similar to GICv2, with a number of additional features:

- Affinity routing based on the CPU MPIDR (ARE)
- System register for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)

This patch adds preliminary support for GICv3 with ARE and SRE, non-secure mode only. It relies on higher exception levels to grant ARE and SRE access. Support for LPI and ITS will be added at a later time.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
(cherry picked from commit 021f653791ad17e03f98aaa7fb933816ae16f161)
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
2014-08-11  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  (Mark Brown)

Conflicts:
    arch/arm64/Kconfig
    arch/arm64/boot/dts/apm-storm.dtsi
    arch/arm64/include/asm/pgtable.h
    mm/Kconfig
2014-08-11  arm64: head: create a new function for setting the boot_cpu_mode flag  (Matthew Leach)

Currently, the code for setting the __cpu_boot_mode flag is munged in with el2_setup. This makes things difficult on a BE bringup, as a memory access has to have occurred before el2_setup, which is the place that we'd like to set the endianness on the current EL.

Create a new function for setting __cpu_boot_mode and have el2_setup return the mode the CPU booted in. Also define a new constant in virt.h, BOOT_CPU_MODE_EL1, for readability.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 828e9834e9a5b7e61046aa3c5f603a4fecba2fb4)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts (restoring a previous mismerge):
    arch/arm64/kernel/head.S
2014-08-11  arm64: Clean up the default pgprot setting  (Mark Brown)

The primary aim of this patchset is to remove the pgprot_default and prot_sect_default global variables and rely strictly on predefined values. The original goal was to be able to run SMP kernels on UP hardware by not setting the Shareability bit. However, it is unlikely to see UP ARMv8 hardware and even if we do, the Shareability bit is no longer assumed to disable cacheable accesses.

A side effect is that the device mappings now have the Shareability attribute set. The hardware, however, should ignore it since Device accesses are always Outer Shareable.

Following the removal of the two global variables, there is some PROT_* macro reshuffling and cleanup, including the __PAGE_* macros (replaced by PAGE_*).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
(cherry picked from commit a501e32430d4232012ab708b8f0ce841f29e0f02)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    arch/arm64/mm/mmu.c
2014-08-11  arm64: add early_ioremap support  (Mark Salter)

Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit bf4b558eba920a38f91beb5ee62a8ce2628c92f7)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    Documentation/arm64/memory.txt
    arch/arm64/Kconfig
    arch/arm64/kernel/head.S
    arch/arm64/mm/mmu.c
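A hedged usage sketch: mapping a device register block before paging_init()/ioremap() are available (the physical address, register offsets, and function are made up for illustration):

    void __init example_early_uart_setup(void)
    {
        void __iomem *base;

        base = early_ioremap(0x9000000, 0x1000);    /* hypothetical UART */
        if (!base)
            return;

        writel(0x1, base + 0x30);                   /* hypothetical enable bit */
        early_iounmap(base, 0x1000);
    }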
2014-08-11  arm64: barriers: make use of barrier options with explicit barriers  (Will Deacon)

When calling our low-level barrier macros directly, we can often suffice with more relaxed behaviour than the default "all accesses, full system" option. This patch updates the users of dsb() to specify the option which they actually require.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 98f7685ee69f871ba991089cb9685f0da07517ea)
Signed-off-by: Mark Brown <broonie@linaro.org>
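After this change, call sites name the barrier scope and access type they actually need; a made-up page-table update path illustrating the dsb() option argument (a sketch, not a patch hunk):

    static void example_set_pte_and_invalidate(pte_t *ptep, pte_t pte,
                                               unsigned long addr)
    {
        *ptep = pte;
        dsb(ishst);     /* publish the store: stores only, inner shareable */
        asm volatile("tlbi vaae1is, %0" : : "r" (addr >> 12));
        dsb(ish);       /* complete the invalidate: all accesses, inner shareable */
        isb();
    }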
2014-08-11  arm64: Remove the aux_context structure  (Catalin Marinas)

This patch removes the aux_context structure (and the containing file) to allow the placement of the _aarch64_ctx end magic based on the context stored on the signal stack.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 0e0276d1e1dd063cd14ce377707970d0417a0792)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: Remove boot thread synchronisation for spin-table release method  (Catalin Marinas)

The synchronisation with the boot thread already happens in __cpu_up() via wait_for_completion_timeout(). In addition, __cpu_up() calls are protected by the cpu_add_remove_lock mutex and already serialised.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 6400111399e16a535231ebd76389c894ea1837ff)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: smp: make local symbol static  (Jingoo Han)

Make smp_spin_table_cpu_postboot() static, because this function is used only in this file.

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 7184659bed3090248e382d98a49a3c1bcfe11174)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-11  arm64: initialize pgprot info earlier in boot  (Mark Salter)

Presently, paging_init() calls init_mem_pgprot() to initialize pgprot values used by macros such as PAGE_KERNEL, PAGE_KERNEL_EXEC, etc. The new fixmap and early_ioremap support also needs to use these macros before paging_init() is called. This patch moves the init_mem_pgprot() call out of paging_init() and into setup_arch() so that pgprot_default gets initialized in time for fixmap and early_ioremap.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0bf757c73d6612d3d279de3f61b35062aa9c8b1d)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    arch/arm64/include/asm/mmu.h
2014-08-08  arm64: efi: only attempt efi map setup if booting via EFI  [v3.14/topic/arm64-efi]  (Leif Lindholm)

Booting a kernel with CONFIG_EFI enabled on a non-EFI system caused an oops with the current UEFI support code. Add the required test to prevent this.

Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit 74bcc2499291d38b6253f9dbd6af33a195222208)
Signed-off-by: Mark Brown <broonie@linaro.org>
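The required test amounts to an early bail-out when the firmware did not hand over via EFI; in sketch form (the init-hook name and registration are assumptions about the era's arm64 EFI code, simplified here):

    static int __init arm64_enter_virtual_mode(void)
    {
        if (!efi_enabled(EFI_BOOT)) {
            pr_info("EFI services will not be available.\n");
            return -1;
        }

        /* ... map the EFI memory map and enable runtime services ... */
        return 0;
    }
    early_initcall(arm64_enter_virtual_mode);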
2014-08-08  arm64: add EFI runtime services  (Mark Salter)

This patch adds EFI runtime support for arm64. This runtime support allows the kernel to access various EFI runtime services provided by EFI firmware: things like reboot, the real time clock, EFI boot variables, and others.

This functionality is supported for little endian kernels only. The UEFI firmware standard specifies that the firmware be little endian. A future patch is expected to add support for big endian kernels running with little endian firmware.

Signed-off-by: Mark Salter <msalter@redhat.com>
[ Remove unnecessary cache/tlb maintenance. ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit f84d02755f5a9f3b88e8d15d6384da25ad6dcf5e)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    arch/arm64/Kconfig
    arch/arm64/kernel/Makefile
2014-08-08  efi/arm64: efistub: remove local copy of linux_banner  (Ard Biesheuvel)

The shared efistub code for ARM and arm64 contains a local copy of linux_banner, allowing it to be referenced from separate executables such as the ARM decompressor. However, this introduces a dependency on generated header files, causing unnecessary rebuilds of the stub itself and, in the case of arm64, of vmlinux, which contains it.

On arm64, the copy is not actually needed since we can reference the original symbol directly, and as it turns out, there may be better ways to deal with this for ARM as well, so let's remove it from the shared code. If it still needs to be reintroduced for ARM later, it should live under arch/arm anyway and not in shared code.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit a55c072dfe520f8fa03cf11b07b9268a8a17820a)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-08-08  arm64: efi: add EFI stub  (Mark Salter)

This patch adds PE/COFF header fields to the start of the kernel Image so that it appears as an EFI application to UEFI firmware. An EFI stub is included to allow direct booting of the kernel Image.

Signed-off-by: Mark Salter <msalter@redhat.com>
[Add support in PE/COFF header for signed images]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
(cherry picked from commit 3c7f255039a2ad6ee1e3890505caf0d029b22e29)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    arch/arm64/Kconfig
    arch/arm64/kernel/Makefile
2014-07-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-misc' into linux-linaro-lsk-v3.14  [lsk-v3.14-preview-14.07]  (Mark Brown)
2014-07-23  arm64: place initial page tables above the kernel  (Mark Rutland)

Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel, and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved.

To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbounded requirement for memory at the end of the kernel image for .bss, we can place the page tables here.

This patch moves the initial page tables to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit bd00cd5f8c8c3c282bb1e1eac6a6679a4f808091)
Signed-off-by: Mark Brown <broonie@linaro.org>

Conflicts:
    arch/arm64/mm/init.c
2014-07-23  arm64: Relax the kernel cache requirements for boot  (Catalin Marinas)

With system caches for the host OS or architected caches for guest OS we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (therefore non-cacheable accesses).

This patch adds the necessary cache maintenance during boot and relaxes the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit c218bca74eeafa2f8528b6bbb34d112075fcf40a)
Signed-off-by: Mark Brown <broonie@linaro.org>
2014-07-23  Merge remote-tracking branch 'lsk/v3.14/topic/arm64-cmadma' into lsk-v3.14-arm64-misc  (Mark Brown)
2014-07-23  arm64: Extend the idmap to the whole kernel image  (Catalin Marinas)

This patch changes the idmap page table creation during boot to cover the whole kernel image, allowing functions like cpu_reset() to be safely called with the physical address.

This patch also simplifies the create_block_map asm macro to no longer take an idmap argument and always use the phys/virt/end parameters. For the idmap case, phys == virt.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit ea8c2e1124457f266f82effc3e6558552527943a)
Signed-off-by: Mark Brown <broonie@linaro.org>