author	Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2014-04-18 15:07:25 -0700
committer	Jiri Slaby <jslaby@suse.cz>	2015-01-13 08:47:49 +0100
commit	198717b635ef0d51525f7b2c8282a05a00a42c51 (patch)
tree	fc6a655b333790c29270db704fd024b1ee87f279 /mm
parent	05d75f8aa3a9227b5180354671dff77c9a72d0bf (diff)
thp: close race between split and zap huge pages
commit b5a8cad376eebbd8598642697e92a27983aee802 upstream.

[stable 3.12 note]

This commit was supposed to fix a completely different issue. But in 3.12,
with commit f72e7dcdd25229446b102e587ef2f826f76bff28 ("mm: let mm_find_pmd
fix buggy race with THP fault"), we need this commit as well (it fixes the
issue as a by-product).

Hugh Dickins writes:
<== citation starts here>
Fine for this to go in, but there is one catch, which I discovered when
backporting to v3.11: it needed one more hunk. I haven't checked your base
tree, but if this applies then I believe you need it - most of the time no
problem, but it can cause page migration to fail to find a migration entry
it inserted earlier, then hit BUG_ON(!PageLocked(p)) in
migration_entry_to_page() soon after.

Here's what I wrote back then:

Note on rebase to v3.11: added a hunk to replace the use of mm_find_pmd()
in page_check_address_pmd(). This call had been similarly replaced by the
time of my v3.16 commit, in Kirill Shutemov's v3.15 b5a8cad376ee ("thp:
close race between split and zap huge pages"): which we do not need as
such, since it's fixing v3.13 117b0791ac42 ("mm, thp: move ptl taking
inside page_check_address_pmd()"), from a split page-table-lock series we
are not backporting. But without this additional hunk, rmap sometimes broke
when the new semantic for mm_find_pmd() was used here.
<== end of citation>

But instead of appending hunks to commits, I am taking a full, backported
version of commit b5a8cad376ee with this note prepended. So the changelog
of b5a8cad376ee is left below, but does not apply to 3.12 yet.

[=== stable 3.12 note ends here]

Sasha Levin has reported two THP BUGs[1][2]. I believe both of them have
the same root cause. Let's look at them one by one.

The first bug[1] is "kernel BUG at mm/huge_memory.c:1829!". It's
BUG_ON(mapcount != page_mapcount(page)) in __split_huge_page(). From my
testing I see that page_mapcount() is higher than mapcount here.

I think it happens due to a race between zap_huge_pmd() and
page_check_address_pmd(). page_check_address_pmd() misses the PMD which is
under zap:

CPU0                                            CPU1
                                                zap_huge_pmd()
                                                  pmdp_get_and_clear()
__split_huge_page()
  anon_vma_interval_tree_foreach()
    __split_huge_page_splitting()
      page_check_address_pmd()
        mm_find_pmd()
          /*
           * We check if PMD present without taking ptl: no
           * serialization against zap_huge_pmd(). We miss this PMD,
           * it's not accounted to 'mapcount' in __split_huge_page().
           */
          pmd_present(pmd) == 0

  BUG_ON(mapcount != page_mapcount(page)) // CRASH!!!

                                                  page_remove_rmap(page)
                                                    atomic_add_negative(-1, &page->_mapcount)

The second bug[2] is "kernel BUG at mm/huge_memory.c:1371!". It's
VM_BUG_ON_PAGE(!PageHead(page), page) in zap_huge_pmd(). This happens in a
similar way:

CPU0                                            CPU1
                                                zap_huge_pmd()
                                                  pmdp_get_and_clear()
                                                  page_remove_rmap(page)
                                                    atomic_add_negative(-1, &page->_mapcount)
__split_huge_page()
  anon_vma_interval_tree_foreach()
    __split_huge_page_splitting()
      page_check_address_pmd()
        mm_find_pmd()
          pmd_present(pmd) == 0 /* The same comment as above */

  /*
   * No crash this time since we already decremented page->_mapcount in
   * zap_huge_pmd().
   */
  BUG_ON(mapcount != page_mapcount(page))

  /*
   * We split the compound page here into small pages without
   * serialization against zap_huge_pmd()
   */
  __split_huge_page_refcount()

                                                VM_BUG_ON_PAGE(!PageHead(page), page); // CRASH!!!

So my understanding is that the problem is the pmd_present() check in
mm_find_pmd() done without taking the page table lock. The bug was
introduced by me with commit 117b0791ac42. Sorry for that. :(
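To make the racy window concrete, here is a minimal, illustrative sketch of
an mm_find_pmd()-style walk whose present check runs with no page table lock
held. The helper name find_pmd_lockless() is made up for illustration;
pgd_offset()/pud_offset()/pmd_offset() and pmd_present() are the real page
table accessors of that era:

	/* Illustration of the pre-patch pattern, not actual mm/ code. */
	static pmd_t *find_pmd_lockless(struct mm_struct *mm, unsigned long address)
	{
		pgd_t *pgd;
		pud_t *pud;
		pmd_t *pmd;

		pgd = pgd_offset(mm, address);
		if (!pgd_present(*pgd))
			return NULL;
		pud = pud_offset(pgd, address);
		if (!pud_present(*pud))
			return NULL;
		pmd = pmd_offset(pud, address);
		/*
		 * Racy: zap_huge_pmd() can run pmdp_get_and_clear() around this
		 * test, so a mapping that is being zapped is silently skipped and
		 * never accounted in 'mapcount' by __split_huge_page().
		 */
		if (!pmd_present(*pmd))
			return NULL;
		return pmd;
	}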
Let's open code mm_find_pmd() in page_check_address_pmd() and do the check
under the page table lock. Note that __page_check_address() does the same
for PTE entries if sync != 0.

I've stress tested the split and zap code paths for 36+ hours by now and
don't see crashes with the patch applied. Before the patch it took <20 min
to trigger the first bug and a few hours for the second one (if we ignore
the first).

[1] https://lkml.kernel.org/g/<53440991.9090001@oracle.com>
[2] https://lkml.kernel.org/g/<5310C56C.60709@oracle.com>

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Bob Liu <lliubbo@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michel Lespinasse <walken@google.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
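Conceptually, the fix above moves the pmd_present() test under the lock that
zap_huge_pmd() clears PMDs under, which in this pre-split-ptl 3.12 tree is
assumed to be mm->page_table_lock. A rough sketch of that pattern, with a
made-up helper name and a simplified bool return (the real patch open-codes
the walk inside page_check_address_pmd() and the caller keeps the lock while
using the PMD), looks like this:

	/* Illustration only: the fixed pattern, checked under the page table lock. */
	static bool thp_pmd_maps_address(struct mm_struct *mm, unsigned long address)
	{
		pgd_t *pgd;
		pud_t *pud;
		pmd_t *pmd;
		bool mapped = false;

		spin_lock(&mm->page_table_lock);
		pgd = pgd_offset(mm, address);
		if (!pgd_present(*pgd))
			goto out;
		pud = pud_offset(pgd, address);
		if (!pud_present(*pud))
			goto out;
		pmd = pmd_offset(pud, address);
		/* Cannot race with zap_huge_pmd(): it clears PMDs under this lock. */
		mapped = pmd_present(*pmd);
	out:
		spin_unlock(&mm->page_table_lock);
		return mapped;
	}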
Diffstat (limited to 'mm')
-rw-r--r--	mm/huge_memory.c	13
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 04d17ba00893..04535b64119c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1541,15 +1541,22 @@ pmd_t *page_check_address_pmd(struct page *page,
 			      unsigned long address,
 			      enum page_check_address_pmd_flag flag)
 {
+	pgd_t *pgd;
+	pud_t *pud;
 	pmd_t *pmd, *ret = NULL;
 
 	if (address & ~HPAGE_PMD_MASK)
 		goto out;
 
-	pmd = mm_find_pmd(mm, address);
-	if (!pmd)
+	pgd = pgd_offset(mm, address);
+	if (!pgd_present(*pgd))
 		goto out;
-	if (pmd_none(*pmd))
+	pud = pud_offset(pgd, address);
+	if (!pud_present(*pud))
+		goto out;
+	pmd = pmd_offset(pud, address);
+
+	if (!pmd_present(*pmd))
 		goto out;
 	if (pmd_page(*pmd) != page)
 		goto out;
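For readability, this is approximately how the start of
page_check_address_pmd() reads once the hunk above is applied. The
struct mm_struct *mm parameter and the trailing elision comment are
reconstructed from context rather than shown in the hunk itself:

	pmd_t *page_check_address_pmd(struct page *page,
				      struct mm_struct *mm,
				      unsigned long address,
				      enum page_check_address_pmd_flag flag)
	{
		pgd_t *pgd;
		pud_t *pud;
		pmd_t *pmd, *ret = NULL;

		if (address & ~HPAGE_PMD_MASK)
			goto out;

		pgd = pgd_offset(mm, address);
		if (!pgd_present(*pgd))
			goto out;
		pud = pud_offset(pgd, address);
		if (!pud_present(*pud))
			goto out;
		pmd = pmd_offset(pud, address);

		/* The check is now done under the page table lock held by the caller. */
		if (!pmd_present(*pmd))
			goto out;
		if (pmd_page(*pmd) != page)
			goto out;
		/* ... remainder of the function is unchanged by this patch ... */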