author	Nicolas Pitre <nicolas.pitre@linaro.org>	2013-03-29 17:30:10 -0400
committer	Andrey Konovalov <andrey.konovalov@linaro.org>	2013-05-25 13:24:33 +0400
commit	561a989c0884475cafa5b325663b02c72e214942 (patch)
tree	b0bb249a26a547f8d91d5eb11ac1dacc4d07bda9
parent	fcdca1fe734d6d58771636873c1b292e6de13383 (diff)
ARM: perf_event_cpu.c: fix memory corruption causing unpleasant effects
1) The memory obtained via alloc_percpu() is defined (and zeroed) only
   for those CPUs in cpu_possible_mask. For example, it is wrong to
   iterate using:

	for (i = 0; i < NR_CPUS; i++)
		per_cpu_ptr(cpu_pmus, i)->mpidr = -1;

   This is guaranteed to corrupt memory for those CPU numbers not marked
   possible during CPU enumeration.
2) In cpu_pmu_free_irq(), an occasional cpu_pmu->mpidr of -1 (meaning
   uninitialized) was nevertheless passed to find_logical_cpu(), which
   ended up returning very creative CPU numbers. This was then used
   with this line:

	if (!cpumask_test_and_clear_cpu(cpu, &pmu->active_irqs))

   This corrupted memory due to the pmu->active_irqs overflow, and
   provided rather random condition results.
What made this bug even nastier is that a slight change in code
placement due to compiler version, kernel config options, or even added
debugging traces could totally change the bug's symptoms.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
-rw-r--r--	arch/arm/kernel/perf_event_cpu.c	13
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/arch/arm/kernel/perf_event_cpu.c b/arch/arm/kernel/perf_event_cpu.c
index 22459d545c7..b3ae24f6afa 100644
--- a/arch/arm/kernel/perf_event_cpu.c
+++ b/arch/arm/kernel/perf_event_cpu.c
@@ -105,10 +105,13 @@ static void cpu_pmu_free_irq(struct arm_pmu *pmu)
 	int cpu;
 	struct arm_cpu_pmu *cpu_pmu;
 
-	for (i = 0; i < NR_CPUS; ++i) {
+	for_each_possible_cpu(i) {
 		if (!(cpu_pmu = per_cpu_ptr(pmu->cpu_pmus, i)))
 			continue;
 
+		if (cpu_pmu->mpidr == -1)
+			continue;
+
 		cpu = find_logical_cpu(cpu_pmu->mpidr);
 		if (cpu < 0)
 			continue;
@@ -127,7 +130,7 @@ static int cpu_pmu_request_irq(struct arm_pmu *pmu, irq_handler_t handler)
 	struct arm_cpu_pmu *cpu_pmu;
 
 	irqs = 0;
-	for (i = 0; i < NR_CPUS; i++)
+	for_each_possible_cpu(i)
 		if (per_cpu_ptr(pmu->cpu_pmus, i))
 			++irqs;
 
@@ -136,7 +139,7 @@ static int cpu_pmu_request_irq(struct arm_pmu *pmu, irq_handler_t handler)
 		return -ENODEV;
 	}
 
-	for (i = 0; i < NR_CPUS; i++) {
+	for_each_possible_cpu(i) {
 		if (!(cpu_pmu = per_cpu_ptr(pmu->cpu_pmus, i)))
 			continue;
@@ -355,7 +358,7 @@ static int bL_get_partner(int cpu, int cluster)
 	unsigned int i;
 
-	for (i = 0; i < NR_CPUS; i++) {
+	for_each_possible_cpu(i) {
 		if (cpu_topology[i].thread_id == cpu_topology[cpu].thread_id &&
 		    cpu_topology[i].core_id == cpu_topology[cpu].core_id &&
 		    cpu_topology[i].socket_id == cluster)
@@ -463,7 +466,7 @@ static int cpu_pmu_device_probe(struct platform_device *pdev)
 	 * make sense when the switcher is disabled. Ideally, this
 	 * knowledge should come from the swithcer somehow.
 	 */
-	for (i = 0; i < NR_CPUS; i++) {
+	for_each_possible_cpu(i) {
 		int cpu = i;
 
 		per_cpu_ptr(cpu_pmus, i)->mpidr = -1;