path: root/arch/arm64/kernel/smp.c
Age  Commit message  Author
2016-03-15arm64: Add runtime NMI disable modeDaniel Thompson
For benchmarking... Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
2016-03-15arm64: Implement IPI_CPU_BACKTRACE using pseudo-NMIsDaniel Thompson
Recently arm64 gained the capability to (optionally) mask interrupts using the GIC PMR rather than the CPU PSR. That allows us to introduce an NMI-like means to handle backtrace requests. This provides a useful debug aid by allowing the kernel to robustly show a backtrace for every processor in the system when, for example, we hang trying to acquire a spin lock. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
2016-03-15arm64: irqflags: Use ICC sysregs to implement IRQ maskingDaniel Thompson
Currently irqflags is implemented using the PSR's I bit. It is possible to implement irqflags by using the co-processor interface to the GIC. Using the co-processor interface makes it feasible to simulate NMIs using GIC interrupt prioritization. This patch changes the irqflags macros to modify, save and restore ICC_PMR_EL1. This has a substantial knock-on effect for the rest of the kernel. There are four reasons for this: 1. The state of the ICC_PMR_EL1_G_BIT becomes part of the CPU context and must be saved and restored during traps. To simplify the additional context management the ICC_PMR_EL1_G_BIT is converted into a fake (reserved) bit within the PSR (PSR_G_BIT). Naturally this approach will need to be changed if future ARM architecture extensions make use of this bit. 2. The hardware automatically masks the I bit (at boot, during traps, etc). When the I bit is set by hardware we must add code to switch from I bit masking to PMR masking. 3. Some instructions, notably wfi, require that the PMR not be used for interrupt masking. Before calling these instructions we must switch from PMR masking to I bit masking. 4. We use the alternatives system to allow a single kernel to boot and switch between the different masking approaches. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
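A minimal sketch of what a PMR-based irqflags helper could look like; the constant ICC_PMR_EL1_MASKED and the use of the architectural register name in the assembly are illustrative assumptions, not taken from the patch:

static inline unsigned long arch_local_irq_save(void)
{
        unsigned long flags;

        /*
         * Mask interrupts by lowering the GIC priority mask instead of
         * setting PSTATE.I; the old PMR value becomes the saved flags.
         * ICC_PMR_EL1_MASKED is an assumed name for the masked priority.
         */
        asm volatile(
                "mrs    %0, ICC_PMR_EL1\n"
                "msr    ICC_PMR_EL1, %1"
                : "=&r" (flags)
                : "r" (ICC_PMR_EL1_MASKED)
                : "memory");

        return flags;
}

Restoring interrupts then becomes a matter of writing the recorded priority back to ICC_PMR_EL1 rather than touching the PSR.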
2016-03-15arm64: alternative: Apply alternatives early in boot processDaniel Thompson
Currently alternatives are applied very late in the boot process (and a long time after we enable scheduling). Some alternative sequences, such as those that alter the way CPU context is stored, must be applied much earlier in the boot sequence. Introduce apply_alternatives_early() to allow some alternatives to be applied immediately after we detect the CPU features of the boot CPU. Currently apply_alternatives_all() is not optimized and will re-patch code that has already been updated. This is harmless but could be removed by adding extra flags to the alternatives store. Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
2016-03-15arm64: Add support for on-demand backtrace of other CPUsDaniel Thompson
Currently arm64 has no implementation of arch_trigger_all_cpu_backtrace. This patch provides one, using library code recently added by Russell King for the majority of the implementation. Currently this is realized using regular irqs but could, in the future, be implemented using NMI-like mechanisms. Note: There is a small (and nasty) change to the generic code to ensure good stack traces. The generic code currently assumes that show_regs() will include a stack trace but arch/arm64 does not do this, so we must add extra code here. Ideas on a better approach here would be very welcome (is there any appetite to change arm64 show_regs() or should we just tease out the dump code into a callback?). Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org> Cc: Russell King <rmk+kernel@arm.linux.org.uk>
2015-11-12arm64: smp: make of_parse_and_init_cpus staticJisheng Zhang
of_parse_and_init_cpus is only called from within smp.c, so it can be declared static. Signed-off-by: Jisheng Zhang <jszhang@marvell.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21arm64: Delay cpu feature capability checksSuzuki K. Poulose
At the moment we run through the arm64_features capability list for each CPU and set the capability if one of the CPUs supports it. This could be problematic in a heterogeneous system with differing capabilities. Delay the CPU feature checks until all the enabled CPUs are up (i.e. smp_cpus_done()), so that we can make better decisions based on the overall system capability. Once we decide and advertise the capabilities, the alternatives can be applied. From this state, we cannot roll back a feature to disabled based on the values from a newly hotplugged CPU, due to the runtime patching and other reasons. So, for all new CPUs, we need to make sure that they have the established system capabilities; failing that, we bring the CPU down, preventing it from turning online. Once the capabilities are decided, any new CPU booting up goes through verification to ensure that it has all the enabled capabilities, and also invokes the respective enable() method on the CPU. The CPU errata checks are not delayed and are still executed per-CPU to detect the respective capabilities. If we ever come across a non-errata capability that needs to be checked on each CPU, we could introduce it via a new capability table (or introduce a flag), which can be processed per CPU. The next patch will make the feature checks use the system-wide safe value of a feature register. NOTE: The enable() method associated with a capability is scheduled on all the CPUs (which is the only use case at the moment). If we need a different type of 'enable()' which only needs to be run once on any CPU, we should be able to handle that when needed. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> [catalin.marinas@arm.com: static variable and coding style fixes] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21arm64: Delay cpuinfo_store_boot_cpuSuzuki K. Poulose
At the moment the boot CPU stores the cpuinfo long before the PERCPU areas are initialised by the kernel. This could be problematic as the non-boot CPU data structures might get copied with the data from the boot CPU, giving us no chance to detect if a particular CPU updated its cpuinfo. This patch delays the boot cpu store to smp_prepare_boot_cpu(). Also kills the setup_processor() which no longer does meaningful work. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21arm64: Delay ELF HWCAP initialisation until all CPUs are upSuzuki K. Poulose
Delay the ELF HWCAP initialisation until all the (enabled) CPUs are up, i.e, smp_cpus_done(). This is in preparation for detecting the common features across the CPUS and creating a consistent ELF HWCAP for the system. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-21arm64: Make the CPU information more clearSuzuki K. Poulose
At early boot, we print the CPU version/revision. On a heterogeneous system, we could have different types of CPUs. Print the CPU info for all active cpus. Also, have the secondary CPUs print the message only when they turn online. Also, remove the redundant 'revision' information which doesn't make any sense without the 'variant' field. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Tested-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-09arm64: fix a migrating irq bug when hotplug cpuYang Yingliang
When a cpu is disabled, all irqs will be migrated to another cpu. In some cases the new affinity is different and the old affinity needs to be updated, but if irq_set_affinity's return value is IRQ_SET_MASK_OK_DONE the old affinity cannot be updated. Fix it by using irq_do_set_affinity. And since migrating interrupts is a core code matter, use the generic function irq_migrate_all_off_this_cpu() from kernel/irq/migration.c to migrate interrupts. Cc: Jiang Liu <jiang.liu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Russell King - ARM Linux <linux@arm.linux.org.uk> Cc: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
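As a rough sketch (not the patch itself), the arm64 CPU-disable path can simply hand the migration work to the generic helper:

int __cpu_disable(void)
{
        unsigned int cpu = smp_processor_id();

        /* Stop new interrupts and work from being directed at this CPU. */
        set_cpu_online(cpu, false);

        /*
         * Let the core IRQ code move every interrupt targeting this CPU to
         * another online CPU, keeping the affinity bookkeeping in one place.
         */
        irq_migrate_all_off_this_cpu();

        return 0;
}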
2015-10-07arm64: mm: kill mm_cpumask usageWill Deacon
mm_cpumask isn't actually used for anything on arm64, so remove all the code trying to keep it up-to-date. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-10-07arm64: flush: use local TLB and I-cache invalidationWill Deacon
There are a number of places where a single CPU is running with a private page-table and we need to perform maintenance on the TLB and I-cache in order to ensure correctness, but do not require the operation to be broadcast to other CPUs. This patch adds local variants of tlb_flush_all and __flush_icache_all to support these use-cases and updates the callers respectively. __local_flush_icache_all also implies an isb, since it is intended to be used synchronously. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: David Daney <david.daney@cavium.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-07-29arm64: remove dead-code depending on CONFIG_UP_LATE_INITJonas Rabenstein
Commit 4b3dc9679cf7 ("arm64: force CONFIG_SMP=y and remove redundant #ifdefs") made SMP mandatory, so UP_LATE_INIT can therefore not be selected anymore. Remove the dead #ifdef-block depending on UP_LATE_INIT in arch/arm64/kernel/setup.c. Signed-off-by: Jonas Rabenstein <jonas.rabenstein@studium.uni-erlangen.de> [will: kill do_post_cpus_up_work altogether] Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-07-07ACPI / ARM64 : use the new BAD_MADT_GICC_ENTRY macroAl Stone
For those parts of the arm64 ACPI code that need to check GICC subtables in the MADT, use the new BAD_MADT_GICC_ENTRY macro instead of the previous BAD_MADT_ENTRY. The new macro takes into account differences in the size of the GICC subtable that the old macro did not; this caused failures even though the subtable entries are valid. Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.") Signed-off-by: Al Stone <al.stone@linaro.org> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-07-03ARM64 / SMP: Switch pr_err() to pr_debug() for disabled GICC entryHanjun Guo
It is normal for firmware to present GICC entries (processors) with the disabled flag set in the ACPI MADT. Taking a system of 16 cpus as an example, the ACPI firmware may present 8 cpus as enabled and the other 8 as disabled in the MADT; the disabled cpus can be hot-added later. Firmware may also present more cpus than the hardware actually has, but with the unused ones disabled, so that they can easily be enabled when the hardware gains such cpus, keeping the firmware code scalable. So disabled cpus in the MADT are not an error, and we can switch pr_err() to pr_debug() to make the boot a little quieter by default. Since the hwid for disabled cpus is often invalid, and we check for an invalid hwid first in the code, cpus meant to be hot-added later would be filtered out and not counted as possible cpus; so move this check before the hwid one, to prepare the code to count disabled cpus when cpu hot-plug is introduced. Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Reviewed-by: Al Stone <ahs3@redhat.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-06-25ARM64: smp: Fix suspicious RCU usage with ipi tracepointsStephen Boyd
John Stultz reported an RCU splat on ARM with ipi trace events enabled. It looks like the same problem exists on ARM64. At this point in the IPI handling path we haven't called irq_enter() yet, so RCU doesn't know that we're about to exit idle and properly warns that we're using RCU from an idle CPU. Use trace_ipi_entry_rcuidle() instead of trace_ipi_entry() so that RCU is informed about our exit from idle. Cc: John Stultz <john.stultz@linaro.org> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Acked-by: Steven Rostedt <rostedt@goodmis.org> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: <stable@vger.kernel.org> # 3.17+ Fixes: 45ed695ac10a ("ARM64: add IPI tracepoints") Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
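The essence of the fix, as a fragment of the IPI handler (a sketch following the surrounding code's naming, not the verbatim diff):

        /*
         * irq_enter() has not run yet, so RCU still considers this CPU idle;
         * the _rcuidle tracepoint variant tells RCU we are momentarily
         * non-idle instead of triggering a suspicious-RCU-usage splat.
         */
        if ((unsigned int)ipinr < NR_IPI)
                trace_ipi_entry_rcuidle(ipi_types[ipinr]);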
2015-06-05Merge branch 'arm64/psci-rework' of ↵Catalin Marinas
git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux * 'arm64/psci-rework' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux: arm64: psci: remove ACPI coupling arm64: psci: kill psci_power_state arm64: psci: account for Trusted OS instances arm64: psci: support unsigned return values arm64: psci: remove unnecessary id indirection arm64: smp: consistently use error codes arm64: smp_plat: add get_logical_index arm/arm64: kvm: add missing PSCI include Conflicts: arch/arm64/kernel/smp.c
2015-05-27arm64: smp: consistently use error codesMark Rutland
cpu_kill currently returns one for success and zero for failure, which is unlike all the other cpu_operations, which return zero for success and an error code upon failure. This difference is unnecessarily confusing. Make cpu_kill consistent with the other cpu_operations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Reviewed-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
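In other words, after this change a cpu_kill implementation follows the usual kernel convention; a hedged sketch (the cpu_has_quiesced() helper is hypothetical, used only to illustrate the return values):

static int example_cpu_kill(unsigned int cpu)
{
        /* Hypothetical check that the CPU has actually quiesced. */
        if (!cpu_has_quiesced(cpu))
                return -ETIMEDOUT;      /* negative error code on failure */

        return 0;                       /* zero on success */
}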
2015-05-21arm64: Use common outgoing-CPU-notification codePaul E. McKenney
This commit replaces the open-coded CPU-offline notification with new common code. In particular, this change avoids calling scheduler code using RCU from an offline CPU that RCU is ignoring. This is a minimal change. A more intrusive change might invoke the cpu_check_up_prepare() and cpu_set_state_online() functions at CPU-online time, which would allow onlining to throw an error if the CPU did not go offline properly. Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
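A compact sketch of the resulting shape, assuming the common cpu_wait_death()/cpu_report_death() helpers from the generic code (details differ from the real patch):

void __cpu_die(unsigned int cpu)
{
        /* The surviving CPU waits (here up to 5 s) for the death report. */
        if (!cpu_wait_death(cpu, 5))
                pr_crit("CPU%u: cpu didn't die\n", cpu);
}

void cpu_die(void)              /* runs on the CPU that is going down */
{
        unsigned int cpu = smp_processor_id();

        idle_task_exit();
        local_irq_disable();

        /* Replaces the open-coded completion; no RCU use from an offline CPU. */
        (void)cpu_report_death();

        cpu_ops[cpu]->cpu_die(cpu);
}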
2015-05-19ARM64: kernel: unify ACPI and DT cpus initializationLorenzo Pieralisi
The code that initializes cpus on arm64 is currently split in two different code paths that carry out DT and ACPI cpus initialization. Most of the code executing SMP initialization is common and should be merged to reduce discrepancies between ACPI and DT initialization and to have code initializing cpus in a single common place in the kernel. This patch refactors arm64 SMP cpus initialization code to merge ACPI and DT boot paths in a common file and to create sanity checks that can be reused by both boot methods. Current code assumes PSCI is the only available boot method when arm64 boots with ACPI; this can be easily extended if/when the ACPI parking protocol is merged into the kernel. Signed-off-by: Sudeep Holla <sudeep.holla@arm.com> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-05-19ARM64: kernel: make cpu_ops hooks DT agnosticLorenzo Pieralisi
ARM64 CPU operations such as cpu_init and cpu_init_idle take a struct device_node pointer as a parameter, which corresponds to the device tree node of the logical cpu on which the operation has to be applied. With the advent of ACPI on arm64, where MADT static table entries are used to initialize cpus, the device tree node parameter in cpu_ops hooks become useless when booting with ACPI, since in that case cpu device tree nodes are not present and can not be used for cpu initialization. The current cpu_init hook requires a struct device_node pointer parameter because it is called while parsing the device tree to initialize CPUs, when the cpu_logical_map (that is used to match a cpu node reg property to a device tree node) for a given logical cpu id is not set up yet. This means that the cpu_init hook cannot rely on the of_get_cpu_node function to retrieve the device tree node corresponding to the logical cpu id passed in as parameter, so the cpu device tree node must be passed in as a parameter to fix this catch-22 dependency cycle. This patch reshuffles the cpu_logical_map initialization code so that the cpu_init cpu_ops hook can safely use the of_get_cpu_node function to retrieve the cpu device tree node, removing the need for the device tree node pointer parameter. In the process, the patch removes device tree node parameters from all cpu_ops hooks, in preparation for SMP DT/ACPI cpus initialization consolidation. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Acked-by: Hanjun Guo <hanjun.guo@linaro.org> Acked-by: Sudeep Holla <sudeep.holla@arm.com> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Hanjun Guo <hanjun.guo@linaro.org> Tested-by: Mark Rutland <mark.rutland@arm.com> [DT] Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-04-24Merge tag 'arm64-upstream' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux Pull initial ACPI support for arm64 from Will Deacon: "This series introduces preliminary ACPI 5.1 support to the arm64 kernel using the "hardware reduced" profile. We don't support any peripherals yet, so it's fairly limited in scope: - MEMORY init (UEFI) - ACPI discovery (RSDP via UEFI) - CPU init (FADT) - GIC init (MADT) - SMP boot (MADT + PSCI) - ACPI Kconfig options (dependent on EXPERT) ACPI for arm64 has been in development for a while now and hardware has been available that can boot with either FDT or ACPI tables. This has been made possible by both changes to the ACPI spec to cater for ARM-based machines (known as "hardware-reduced" in ACPI parlance) but also a Linaro-driven effort to get this supported on top of the Linux kernel. This pull request is the result of that work. These changes allow us to initialise the CPUs, interrupt controller, and timers via ACPI tables, with memory information and cmdline coming from EFI. We don't support a hybrid ACPI/FDT scheme. Of course, there is still plenty of work to do (a serial console would be nice!) but I expect that to happen on a per-driver basis after this core series has been merged. Anyway, the diff stat here is fairly horrible, but splitting this up and merging it via all the different subsystems would have been extremely painful. Instead, we've got all the relevant Acks in place and I've not seen anything other than trivial (Kconfig) conflicts in -next (for completeness, I've included my resolution below). Nearly half of the insertions fall under Documentation/. So, we'll see how this goes. Right now, it all depends on EXPERT and I fully expect people to use FDT by default for the immediate future" * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (31 commits) ARM64 / ACPI: make acpi_map_gic_cpu_interface() as void function ARM64 / ACPI: Ignore the return error value of acpi_map_gic_cpu_interface() ARM64 / ACPI: fix usage of acpi_map_gic_cpu_interface ARM64: kernel: acpi: honour acpi=force command line parameter ARM64: kernel: acpi: refactor ACPI tables init and checks ARM64: kernel: psci: let ACPI probe PSCI version ARM64: kernel: psci: factor out probe function ACPI: move arm64 GSI IRQ model to generic GSI IRQ layer ARM64 / ACPI: Don't unflatten device tree if acpi=force is passed ARM64 / ACPI: additions of ACPI documentation for arm64 Documentation: ACPI for ARM64 ARM64 / ACPI: Enable ARM64 in Kconfig XEN / ACPI: Make XEN ACPI depend on X86 ARM64 / ACPI: Select ACPI_REDUCED_HARDWARE_ONLY if ACPI is enabled on ARM64 clocksource / arch_timer: Parse GTDT to initialize arch timer irqchip: Add GICv2 specific ACPI boot support ARM64 / ACPI: Introduce ACPI_IRQ_MODEL_GIC and register device's gsi ACPI / processor: Make it possible to get CPU hardware ID via GICC ACPI / processor: Introduce phys_cpuid_t for CPU hardware ID ARM64 / ACPI: Parse MADT for SMP initialization ...
2015-04-20Merge tag 'cpumask-next-for-linus' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux Pull final removal of deprecated cpus_* cpumask functions from Rusty Russell: "This is the final removal (after several years!) of the obsolete cpus_* functions, prompted by their mis-use in staging. With these function removed, all cpu functions should only iterate to nr_cpu_ids, so we finally only allocate that many bits when cpumasks are allocated offstack" * tag 'cpumask-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux: (25 commits) cpumask: remove __first_cpu / __next_cpu cpumask: resurrect CPU_MASK_CPU0 linux/cpumask.h: add typechecking to cpumask_test_cpu cpumask: only allocate nr_cpumask_bits. Fix weird uses of num_online_cpus(). cpumask: remove deprecated functions. mips: fix obsolete cpumask_of_cpu usage. x86: fix more deprecated cpu function usage. ia64: remove deprecated cpus_ usage. powerpc: fix deprecated CPU_MASK_CPU0 usage. CPU_MASK_ALL/CPU_MASK_NONE: remove from deprecated region. staging/lustre/o2iblnd: Don't use cpus_weight staging/lustre/libcfs: replace deprecated cpus_ calls with cpumask_ staging/lustre/ptlrpc: Do not use deprecated cpus_* functions blackfin: fix up obsolete cpu function usage. parisc: fix up obsolete cpu function usage. tile: fix up obsolete cpu function usage. arm64: fix up obsolete cpu function usage. mips: fix up obsolete cpu function usage. x86: fix up obsolete cpu function usage. ...
2015-03-25ARM64 / ACPI: Parse MADT for SMP initializationHanjun Guo
MADT contains the MPIDR information which is essential for SMP initialization. Parse the GIC cpu interface structures to get the MPIDR value, map it to cpu_logical_map(), and add each enabled cpu with a valid MPIDR into cpu_possible_map. ACPI 5.1 only has two explicit methods to boot up SMP, PSCI and the Parking protocol, but the Parking protocol is currently only specified for ARMv7, so make PSCI the only SMP boot protocol until the ACPI spec or the Parking protocol spec is updated. Parking protocol patches for SMP boot will be sent upstream when the new version of the Parking protocol is ready. CC: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> CC: Catalin Marinas <catalin.marinas@arm.com> CC: Will Deacon <will.deacon@arm.com> CC: Mark Rutland <mark.rutland@arm.com> Tested-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com> Tested-by: Yijing Wang <wangyijing@huawei.com> Tested-by: Mark Langsdorf <mlangsdo@redhat.com> Tested-by: Jon Masters <jcm@redhat.com> Tested-by: Timur Tabi <timur@codeaurora.org> Tested-by: Robert Richter <rrichter@cavium.com> Acked-by: Robert Richter <rrichter@cavium.com> Acked-by: Olof Johansson <olof@lixom.net> Reviewed-by: Grant Likely <grant.likely@linaro.org> Signed-off-by: Hanjun Guo <hanjun.guo@linaro.org> Signed-off-by: Tomasz Nowicki <tomasz.nowicki@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-23arm64: mm: increase VA range of identity mapArd Biesheuvel
The page size and the number of translation levels, and hence the supported virtual address range, are build-time configurables on arm64 whose optimal values are use case dependent. However, in the current implementation, if the system's RAM is located at a very high offset, the virtual address range needs to reflect that merely because the identity mapping, which is only used to enable or disable the MMU, requires the extended virtual range to map the physical memory at an equal virtual offset. This patch relaxes that requirement, by increasing the number of translation levels for the identity mapping only, and only when actually needed, i.e., when system RAM's offset is found to be out of reach at runtime. Tested-by: Laura Abbott <lauraa@codeaurora.org> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-03-17arm64: apply alternatives for !SMP kernelsMark Rutland
Currently we only perform alternative patching for kernels built with CONFIG_SMP, as we call apply_alternatives_all() in smp.c, which is only built for CONFIG_SMP. Thus !SMP kernels may not have necessary alternatives patched in. This patch ensures that we call apply_alternatives_all() once all CPUs are booted, even for !SMP kernels, by having the smp_init_cpus() stub call this for !SMP kernels via up_late_init. A new wrapper, do_post_cpus_up_work, is added so we can hook other calls here later (e.g. boot mode logging). Cc: Andre Przywara <andre.przywara@arm.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Fixes: e039ee4ee3fcf174 ("arm64: add alternative runtime patching") Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
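The wrapper is tiny; roughly (a sketch, not the exact patch):

void __init do_post_cpus_up_work(void)
{
        /* Patch in any alternatives once every booted CPU is known. */
        apply_alternatives_all();
}

#ifndef CONFIG_SMP
void __init up_late_init(void)
{
        /* !SMP kernels never reach smp_cpus_done(); hook the patching here. */
        do_post_cpus_up_work();
}
#endif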
2015-03-05arm64: fix up obsolete cpu function usage.Rusty Russell
Thanks to spatch. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org
2015-01-23smp, ARM64: Kill SMP single function call interruptJiang Liu
Commit 9a46ad6d6df3b54 "smp: make smp_call_function_many() use logic similar to smp_call_function_single()" has unified the way to handle single and multiple cross-CPU function calls. Now only one interrupt is needed for architecture specific code to support generic SMP function call interfaces, so kill the redundant single function call interrupt. Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: linux-arm-kernel@lists.infradead.org Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
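The relevant fragment of the arm64 IPI demultiplexer then reduces to a single case, roughly:

        case IPI_CALL_FUNC:
                irq_enter();
                /* One generic handler now covers single and multi CPU calls. */
                generic_smp_call_function_interrupt();
                irq_exit();
                break;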
2014-12-04arm64: add module support for alternatives fixupsAndre Przywara
Currently the kernel patches all necessary instructions once at boot time, so modules are not covered by this. Change the apply_alternatives() function to take a beginning and an end pointer and introduce a new variant (apply_alternatives_all()) to cover the existing use case for the static kernel image section. Add a module_finalize() function to arm64 to check for an alternatives section in a module and patch only the instructions from that specific area. Since that module code is not touched before the module initialization has ended, we don't need to halt the machine before doing the patching in the module's code. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25arm64: add alternative runtime patchingAndre Przywara
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-09-13arm64: Tell irq work about self IPI supportFrederic Weisbecker
ARM64 irq work self-IPI support depends on __smp_cross_call to point to some relevant IRQ controller operations. This information should be available after the call to init_IRQ(). Lets implement arch_irq_work_has_interrupt() accordingly. Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
2014-08-09Merge tag 'trace-ipi-tracepoints' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace Pull IPI tracepoints for ARM from Steven Rostedt: "Nicolas Pitre added generic tracepoints for tracing IPIs and updated the arm and arm64 architectures. It required some minor updates to the generic tracepoint system, so it had to wait for me to implement them" * tag 'trace-ipi-tracepoints' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: ARM64: add IPI tracepoints ARM: add IPI tracepoints tracepoint: add generic tracepoint definitions for IPI tracing tracing: Do not do anything special with tracepoint_string when tracing is disabled
2014-08-07ARM64: add IPI tracepointsNicolas Pitre
The strings used to list IPIs in /proc/interrupts are reused for tracing purposes. While at it, the code is slightly cleaned up so the ipi_types array indices are no longer offset by IPI_RESCHEDULE whose value is 0 anyway. Link: http://lkml.kernel.org/p/1406318733-26754-5-git-send-email-nicolas.pitre@linaro.org Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
2014-07-18arm64: cpuinfo: record cpu system register valuesMark Rutland
Several kernel subsystems need to know details about CPU system register values, sometimes for CPUs other than that they are executing on. Rather than hard-coding system register accesses and cross-calls for these cases, this patch adds logic to record various system register values at boot-time. This may be used for feature reporting, firmware bug detection, etc. Separate hooks are added for the boot and hotplug paths to enable one-time intialisation and cold/warm boot value mismatch detection in later patches. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-06-06Merge tag 'arm64-upstream' of ↵Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux into next Pull arm64 updates from Catalin Marinas: - Optimised assembly string/memory routines (based on the AArch64 Cortex Strings library contributed to glibc but re-licensed under GPLv2) - Optimised crypto algorithms making use of the ARMv8 crypto extensions (together with kernel API for using FPSIMD instructions in interrupt context) - Ftrace support - CPU topology parsing from DT - ESR_EL1 (Exception Syndrome Register) exposed to user space signal handlers for SIGSEGV/SIGBUS (useful to emulation tools like Qemu) - 1GB section linear mapping if applicable - Barriers usage clean-up - Default pgprot clean-up Conflicts as per Catalin. * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (57 commits) arm64: kernel: initialize broadcast hrtimer based clock event device arm64: ftrace: Add system call tracepoint arm64: ftrace: Add CALLER_ADDRx macros arm64: ftrace: Add dynamic ftrace support arm64: Add ftrace support ftrace: Add arm64 support to recordmcount arm64: Add 'notrace' attribute to unwind_frame() for ftrace arm64: add __ASSEMBLY__ in asm/insn.h arm64: Fix linker script entry point arm64: lib: Implement optimized string length routines arm64: lib: Implement optimized string compare routines arm64: lib: Implement optimized memcmp routine arm64: lib: Implement optimized memset routine arm64: lib: Implement optimized memmove routine arm64: lib: Implement optimized memcpy routine arm64: defconfig: enable a few more common/useful options in defconfig ftrace: Make CALLER_ADDRx macros more generic arm64: Fix deadlock scenario with smp_send_stop() arm64: Fix machine_shutdown() definition arm64: Support arch_irq_work_raise() via self IPIs ...
2014-05-16arm64: Support arch_irq_work_raise() via self IPIsLarry Bassel
Support for arch_irq_work_raise() was missing from arm64 (a prerequisite for FULL_NOHZ). This patch is based on the arm32 patch ARM 7872/1. commit bf18525fd793101df42a1344ecc48b49b62e48c9 Author: Stephen Boyd <sboyd@codeaurora.org> Date: Tue Oct 29 20:32:56 2013 +0100 ARM: 7872/1: Support arch_irq_work_raise() via self IPIs By default, IRQ work is run from the tick interrupt (see irq_work_run() in update_process_times()). When we're in full NOHZ mode, restarting the tick requires the use of IRQ work and if the only place we run IRQ work is in the tick interrupt we have an unbreakable cycle. Implement arch_irq_work_raise() via self IPIs to break this cycle and get the tick started again. Note that we implement this via IPIs which are only available on SMP builds. This shouldn't be a problem because full NOHZ is only supported on SMP builds anyway. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Larry Bassel <larry.bassel@linaro.org> Reviewed-by: Kevin Hilman <khilman@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
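The arm64 implementation is essentially a self-IPI, along the lines of the arm32 original (a sketch):

#ifdef CONFIG_IRQ_WORK
void arch_irq_work_raise(void)
{
        /*
         * Kick ourselves so pending irq_work runs from the IPI handler even
         * when the tick is stopped (full NOHZ); only valid once the IPI
         * machinery has been wired up.
         */
        if (__smp_cross_call)
                smp_cross_call(cpumask_of(smp_processor_id()), IPI_IRQ_WORK);
}
#endif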
2014-05-15ARM: Check if a CPU has gone offlineAshwin Chaugule
PSCIv0.2 adds a new function called AFFINITY_INFO, which can be used to query if a specified CPU has actually gone offline. Calling this function via cpu_kill ensures that a CPU has quiesced after a call to cpu_die. This helps prevent the CPU from doing arbitrary bad things when data or instructions are clobbered (as happens with kexec) in the window between a CPU announcing that it is dead and said CPU leaving the kernel. Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Rob Herring <robh@kernel.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
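For arm64's PSCI backend this might look roughly like the following; the polling loop, timings and constant names are illustrative assumptions rather than the actual patch:

static int cpu_psci_cpu_kill(unsigned int cpu)
{
        int i;

        if (!psci_ops.affinity_info)
                return 0;

        /*
         * Poll AFFINITY_INFO until the firmware reports the CPU as OFF,
         * i.e. fully quiesced, before anyone (e.g. kexec) reuses its memory.
         */
        for (i = 0; i < 10; i++) {
                if (psci_ops.affinity_info(cpu_logical_map(cpu), 0) ==
                    PSCI_0_2_AFFINITY_LEVEL_OFF)
                        return 1;       /* success, pre error-code cleanup */
                msleep(10);
        }

        return 0;                       /* CPU never reported as off */
}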
2014-03-04arm64: topology: Implement basic CPU topology supportMark Brown
Add basic CPU topology support to arm64, based on the existing pre-v8 code and some work done by Mark Hambleton. This patch does not implement any topology discovery support since that should be based on information from firmware, it merely implements the scaffolding for integration of topology support in the architecture. No locking of the topology data is done since it is only modified during CPU bringup with external serialisation from the SMP code. The goal is to separate the architecture hookup for providing topology information from the DT parsing in order to ease review and avoid blocking the architecture code (which will be built on by other work) with the DT code review by providing something simple and basic. Following patches will implement support for interpreting topology information from MPIDR and for parsing the DT topology bindings for ARM, similar patches will be needed for ACPI. Signed-off-by: Mark Brown <broonie@linaro.org> Acked-by: Mark Rutland <mark.rutland@arm.com> [catalin.marinas@arm.com: removed CONFIG_CPU_TOPOLOGY, always on if SMP] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-02-26arm64: enable processor debug state for secondary cpusVijaya Kumar K
Processor debug state PSTATE.D is unmasked only in the smp call clear_os_lock for secondary cpus, so debug state is still masked in normal kernel context. With this patch, debug state is unmasked on secondary boot so that those cpus run with it unmasked in normal kernel context. Now the kgdb tests pass with multiple cores. Signed-off-by: Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-01-30arm64: FIQs are unusedNicolas Pitre
So any FIQ handling is superfluous at the moment. The functions to disable/enable FIQs are kept around in case someone ever needs them in the future, but existing calling sites, including arch_cpu_idle_prepare(), may go for now. Signed-off-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-12-19Merge tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp into upstreamCatalin Marinas
* tag 'arm64-suspend' of git://linux-arm.org/linux-2.6-lp: arm64: add CPU power management menu/entries arm64: kernel: add PM build infrastructure arm64: kernel: add CPU idle call arm64: enable generic clockevent broadcast arm64: kernel: implement HW breakpoints CPU PM notifier arm64: kernel: refactor code to install/uninstall breakpoints arm: kvm: implement CPU PM notifier arm64: kernel: implement fpsimd CPU PM notifier arm64: kernel: cpu_{suspend/resume} implementation arm64: kernel: suspend/resume registers save/restore arm64: kernel: build MPIDR_EL1 hash function data structure arm64: kernel: add MPIDR_EL1 accessors macros Conflicts: arch/arm64/Kconfig
2013-12-19arm64: percpu: implement optimised pcpu access using tpidr_el1Will Deacon
This patch implements optimised percpu variable accesses using the el1 r/w thread register (tpidr_el1) along the same lines as arch/arm/. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
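The core of the optimisation is that the per-cpu offset lives in the EL1 software thread ID register; a sketch:

static inline void set_my_cpu_offset(unsigned long off)
{
        asm volatile("msr tpidr_el1, %0" :: "r" (off) : "memory");
}

static inline unsigned long __my_cpu_offset(void)
{
        unsigned long off;

        /* Reading a system register beats a memory load on every access. */
        asm("mrs %0, tpidr_el1" : "=r" (off));

        return off;
}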
2013-12-16arm64: enable generic clockevent broadcastLorenzo Pieralisi
On platforms with power management capabilities, timers that are shut down when a CPU enters deep C-states must be emulated using an always-on timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP system. This patch enables the generic clockevents broadcast infrastructure for arm64, by providing the required Kconfig entries and adding the timer IPI infrastructure. Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org> Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
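The arm64 glue for the broadcast mechanism is essentially one IPI relay; roughly:

#ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST
void tick_broadcast(const struct cpumask *mask)
{
        /*
         * Relay the always-on broadcast timer's interrupt to the CPUs whose
         * local timers are shut down in deep C-states.
         */
        smp_cross_call(mask, IPI_TIMER);
}
#endif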
2013-11-25arm64: Unmask asynchronous aborts when in kernel modeCatalin Marinas
The asynchronous aborts are generally fatal for the kernel but they can be masked via the pstate A bit. If a system error happens while in kernel mode, it won't be visible until returning to user space. This patch enables this kind of abort early to help identifying the cause. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-07ARM64: /proc/interrupts: display IPIs of online CPUs onlySudeep KarkadaNagesha
The non-IPI interrupts are displayed only for the online cpus from show_interrupts in kernel/irq/proc.c before calling arch_show_interrupts(). As a result, the column headers and the IPI count don't match if any CPU is offline. This patch fixes show_ipi_list to display IPIs for online CPUs only. Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-11-04arm64: move enabling of GIC before CPUs are set onlineMarc Zyngier
Commit 53ae3acd (arm64: Only enable local interrupts after the CPU is marked online) moved the enabling of the GIC after the CPUs are marked online. This has some interesting effect: [...] [<ffffffc0002eefd8>] gic_raise_softirq+0xf8/0x160 [<ffffffc000088f58>] smp_send_reschedule+0x38/0x40 [<ffffffc0000c8728>] resched_task+0x84/0xc0 [<ffffffc0000c8cdc>] check_preempt_curr+0x58/0x98 [<ffffffc0000c8d38>] ttwu_do_wakeup+0x1c/0xf4 [<ffffffc0000c8f90>] ttwu_do_activate.constprop.84+0x64/0x70 [<ffffffc0000cad30>] try_to_wake_up+0x1d4/0x2b4 [<ffffffc0000cae6c>] default_wake_function+0x10/0x18 [<ffffffc0000c5ca4>] __wake_up_common+0x60/0xa0 [<ffffffc0000c7784>] complete+0x48/0x64 [<ffffffc000088bec>] secondary_start_kernel+0xe8/0x110 [...] Here, we end-up calling gic_raise_softirq without having initialized the interrupt controller for this CPU. While this goes unnoticed with GICv2 (the distributor is always accessible), it explodes with GICv3. The fix is to move the call to notify_cpu_starting before we set the secondary CPU online. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
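Schematically, the fixed ordering in secondary_start_kernel() is (a sketch, not the exact diff):

        /*
         * Run the CPU_STARTING notifiers, which initialise this CPU's GIC
         * interface, *before* marking the CPU online; once online, other
         * CPUs may target it with IPIs such as smp_send_reschedule().
         */
        notify_cpu_starting(cpu);

        set_cpu_online(cpu, true);
        complete(&cpu_running);

        local_irq_enable();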
2013-10-25arm64: add CPU_HOTPLUG infrastructureMark Rutland
This patch adds the basic infrastructure necessary to support CPU_HOTPLUG on arm64, based on the arm implementation. Actual hotplug support will depend on an implementation's cpu_operations (e.g. PSCI). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2013-10-25arm64: read enable-method for CPU0Mark Rutland
With the advent of CPU_HOTPLUG, the enable-method property for CPU0 may tell us something useful (i.e. how to hotplug it back on), so we must read it along with the enable-method for all the other CPUs. Even on UP the enable-method may tell us useful information (e.g. if a core has some mechanism that might be usable for cpuidle), so we should always read it. This patch factors out the reading of the enable method, and ensures that CPU0's enable method is read regardless of whether the kernel is built with SMP support. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
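A hedged sketch of how such a factored-out reader might look; the helper name is made up for illustration:

/* Hypothetical helper: returns the cpu node's enable-method, or NULL. */
static const char * __init cpu_read_enable_method(struct device_node *dn)
{
        const char *method = NULL;

        /*
         * Read this even for CPU0 and on !SMP: it may describe how to
         * hotplug the core back on, or a mechanism useful to cpuidle.
         */
        if (of_property_read_string(dn, "enable-method", &method))
                return NULL;

        return method;
}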
2013-10-25arm64: factor out spin-table boot methodMark Rutland
The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper. With PSCI (and possibly other future boot methods), we can bring CPUs online individually, and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen. A new entry point for secondaries, secondary_entry is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem. The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a cpu into the kernel and performing any post-boot cleanup required by a bootmethod (e.g. resetting the secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>