author	Anil Keshavamurthy <anil.s.keshavamurthy@intel.com>	2006-07-14 00:23:57 -0700
committer	Linus Torvalds <torvalds@g5.osdl.org>	2006-07-14 21:53:51 -0700
commit	c38c8db7225465c8d124f38b24d3024decc26bbd (patch)
tree	79a7e7a99e0d67ac015c4fad689cdefb55a5c10f /mm
parent	8757d5fa6b75e8ea906baf0309d49b980e7f9bc9 (diff)
[PATCH] ia64: race flushing icache in COW path
There is a race condition that showed up in a threaded JIT environment.
The situation is that a process with a JIT code page forks, so the page
is marked read-only, then some threads are created in the child.  One of
the threads attempts to add a new code block to the JIT page, so a
copy-on-write fault is taken, and the kernel allocates a new page, copies
the data, installs the new pte, and then calls lazy_mmu_prot_update() to
flush caches to make sure that the icache and dcache are in sync.
Unfortunately, the other thread runs right after the new pte is
installed, but before the caches have been flushed.  It tries to execute
some old JIT code that was already in this page, but it sees some garbage
in the i-cache from the previous users of the new physical page.

Fix: we must make the caches consistent before installing the pte.  This
is an ia64 only fix because lazy_mmu_prot_update() is a no-op on all
other architectures.

Signed-off-by: Anil Keshavamurthy <anil.s.keshavamurthy@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
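The ordering rule behind the fix generalizes: never publish a mapping (or a pointer) before the data reachable through it is consistent.  The following is only a minimal user-space sketch of that rule, not the kernel's code; the producer/consumer threads, the `published` flag, and the C11 release/acquire pair are invented for illustration and merely stand in for ptep_establish() and lazy_mmu_prot_update() in the COW path.

/*
 * User-space analogy of the ordering bug (NOT kernel code): the
 * producer must make the payload consistent BEFORE publishing the
 * flag, just as the COW path must flush the caches before installing
 * the new pte.  Publishing first leaves a window in which the
 * consumer sees stale contents.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

static char page[64];                 /* stands in for the new physical page */
static atomic_int published;          /* stands in for the installed pte */

static void *producer(void *arg)
{
	(void)arg;
	strcpy(page, "new JIT code");          /* copy the data into the page */
	/* Analogue of doing lazy_mmu_prot_update() before ptep_establish():
	 * the release store publishes the flag only after the payload is
	 * fully written. */
	atomic_store_explicit(&published, 1, memory_order_release);
	return NULL;
}

static void *consumer(void *arg)
{
	(void)arg;
	/* Wait until the "pte" is visible, then read the payload.  The
	 * acquire load pairs with the release store, so the payload read
	 * below cannot observe a half-written page. */
	while (!atomic_load_explicit(&published, memory_order_acquire))
		;
	printf("consumer sees: %s\n", page);
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}

Build with `cc -pthread race-sketch.c` (file name is arbitrary).  The analogy is about ordering of publish versus synchronize; the actual ia64 problem is icache/dcache coherence, which plain memory ordering does not address and which is exactly what lazy_mmu_prot_update() handles.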
Diffstat (limited to 'mm')
-rw-r--r--	mm/memory.c	2
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index dc0d82cf2a1..de8bc85dc8f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1549,9 +1549,9 @@ gotten:
 		flush_cache_page(vma, address, pte_pfn(orig_pte));
 		entry = mk_pte(new_page, vma->vm_page_prot);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		lazy_mmu_prot_update(entry);
 		ptep_establish(vma, address, page_table, entry);
 		update_mmu_cache(vma, address, entry);
-		lazy_mmu_prot_update(entry);
 		lru_cache_add_active(new_page);
 		page_add_new_anon_rmap(new_page, vma, address);