path: root/arch/arm64/kernel
author    Mark Rutland <mark.rutland@arm.com>    2014-06-24 16:51:34 +0100
committer Mark Brown <broonie@linaro.org>        2014-07-23 00:34:26 +0100
commit    22a137413e7530afbf2305465fd4021c72618305 (patch)
tree      3abcb5bd1df0663d09db96d3ec6512c1aa8ff2ee /arch/arm64/kernel
parent    8dcf10ec65597f9b5fd4066787a746cccf58eea4 (diff)
arm64: head.S: remove unnecessary function alignment
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't span any page boundary, which simplifies the idmap and spares us requiring an additional page table to map half of the function. In keeping with other important requirements in architecture code, this fact is undocumented.

Additionally, as the function consists of three instructions totalling 12 bytes with no literal pool data, a smaller alignment of 16 bytes would be sufficient.

This patch reduces the alignment to 16 bytes and documents the underlying reason for the alignment. This reduces the required alignment of the entire .head.text section from 64 bytes to 16 bytes, though it may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit 909a4069da65a5cfca8c968edf9f0d99f694d2f3)
Signed-off-by: Mark Brown <broonie@linaro.org>
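For context, a minimal sketch of what the directive change means in practice: with the GNU assembler targeting AArch64, .align takes a power-of-two exponent, so the old ".align 6" requested 64-byte alignment while the new ".align 4" requests 16 bytes, the smallest power of two covering the three 4-byte instructions described above. The function body here is reconstructed from the commit message and the x27 comment in the hunk below, and is illustrative rather than a verbatim copy of head.S:

	/*
	 * Illustrative sketch: .align N aligns to 2^N bytes for AArch64 gas,
	 * so 16-byte alignment is enough for a 12-byte (3-instruction)
	 * function and keeps it within a single block map entry.
	 */
	.align	4			// 2^4 = 16-byte alignment
__turn_mmu_on:				// 3 instructions, 12 bytes total
	msr	sctlr_el1, x0		// write SCTLR_EL1 to turn the MMU on
	isb				// synchronise the context change
	br	x27			// jump to the virtual address in x27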
Diffstat (limited to 'arch/arm64/kernel')
-rw-r--r--  arch/arm64/kernel/head.S | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 0b281fffda51..5627d9e69b3c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -340,8 +340,13 @@ ENDPROC(__enable_mmu)
* x27 = *virtual* address to jump to upon completion
*
* other registers depend on the function called upon completion
+ *
+ * We align the entire function to the smallest power of two larger than it to
+ * ensure it fits within a single block map entry. Otherwise were PHYS_OFFSET
+ * close to the end of a 512MB or 1GB block we might require an additional
+ * table to map the entire function.
*/
- .align 6
+ .align 4
__turn_mmu_on:
msr sctlr_el1, x0
isb