author     Peter Maydell <peter.maydell@linaro.org>  2023-11-10 16:25:46 +0000
committer  Peter Maydell <peter.maydell@linaro.org>  2023-11-13 13:15:50 +0000
commit     4d044472ab7666adf99d4daa0cc90b7502b90109 (patch)
tree       4b82e3123d570af7db4f829417cfe42dbfcd6fe7
parent     fc58891d0422607d172a3d6b3158798f2556aef1 (diff)
target/arm: Correct MTE tag checking for reverse-copy MOPS
When we are doing a FEAT_MOPS copy that must be performed backwards, we call
mte_mops_probe_rev(), passing it the address of the last byte in the region
we are probing. However, allocation_tag_mem_probe() wants the address of the
first byte to get the tag memory for.

Because we passed it (ptr, size) we could incorrectly trip the
allocation_tag_mem_probe() check for "does this access run across to the
following page", and if that following page happened not to be valid then
we would assert.

We know we will always be only dealing with a single page because the code
that calls mte_mops_probe_rev() ensures that. We could make
mte_mops_probe_rev() pass 'ptr - (size - 1)' to allocation_tag_mem_probe(),
but then we would have to adjust the returned 'mem' pointer to get back to
the tag RAM for the last byte of the region. It's simpler to just pass in a
size of 1 byte, because we know that allocation_tag_mem_probe() in
pure-probe single-page mode doesn't care about the size.

Fixes: 69c51dc3723b ("target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20231110162546.2192512-1-peter.maydell@linaro.org
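To illustrate the failure mode, here is a minimal, self-contained sketch. It is
not the QEMU code: the helper name crosses_following_page(), the fixed 4 KiB
page size, and the example addresses are assumptions made only for
illustration of how a cross-page check keyed on (ptr, size) misfires when
ptr is the last byte of a reverse-copy region:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u  /* illustrative page size, not QEMU's TARGET_PAGE_SIZE */

/*
 * Hypothetical stand-in for the "does this access run across to the
 * following page" test: true if an access of 'size' bytes starting at
 * 'ptr' would spill onto the next page.
 */
static bool crosses_following_page(uint64_t ptr, uint64_t size)
{
    uint64_t bytes_left_in_page = PAGE_SIZE - (ptr & (PAGE_SIZE - 1));
    return size > bytes_left_in_page;
}

int main(void)
{
    /* A 256-byte region ending on the last byte of a page. */
    uint64_t size = 256;
    uint64_t last = 0x10000fffULL;        /* last byte of the region */
    uint64_t first = last - (size - 1);   /* first byte of the region */

    /*
     * Probing with (last, size) looks like it runs onto the next page,
     * even though [first, last] lies entirely within one page; probing
     * with (first, size) or with a size of 1 does not.
     */
    printf("(last, size):  crosses=%d\n", crosses_following_page(last, size));
    printf("(first, size): crosses=%d\n", crosses_following_page(first, size));
    printf("(last, 1):     crosses=%d\n", crosses_following_page(last, 1));
    return 0;
}

Since the caller has already confined the region to a single page, probing a
single byte at the last address answers the same question as probing the whole
region from its first byte, without having to adjust the returned tag-memory
pointer.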
-rw-r--r--  target/arm/tcg/mte_helper.c  12
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index 70ac876105..ffb8ea1c34 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -1101,10 +1101,18 @@ uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size,
     uint32_t n;
 
     mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX);
-    /* True probe; this will never fault */
+    /*
+     * True probe; this will never fault. Note that our caller passes
+     * us a pointer to the end of the region, but allocation_tag_mem_probe()
+     * wants a pointer to the start. Because we know we don't span a page
+     * boundary and that allocation_tag_mem_probe() doesn't otherwise care
+     * about the size, pass in a size of 1 byte. This is simpler than
+     * adjusting the ptr to point to the start of the region and then having
+     * to adjust the returned 'mem' to get the end of the tag memory.
+     */
     mem = allocation_tag_mem_probe(env, mmu_idx, ptr,
                                    w ? MMU_DATA_STORE : MMU_DATA_LOAD,
-                                   size, MMU_DATA_LOAD, true, 0);
+                                   1, MMU_DATA_LOAD, true, 0);
     if (!mem) {
         return size;
     }