|author||Michal Hocko <firstname.lastname@example.org>||2015-08-21 14:11:51 -0700|
|committer||Linus Torvalds <email@example.com>||2015-08-21 14:30:10 -0700|
mm: make page pfmemalloc check more robust
Commit c48a11c7ad26 ("netvm: propagate page->pfmemalloc to skb") added checks for page->pfmemalloc to __skb_fill_page_desc():

	if (page->pfmemalloc && !page->mapping)
		skb->pfmemalloc = true;

It assumes that page->mapping == NULL implies that page->pfmemalloc can be trusted. However, __delete_from_page_cache() can set page->mapping to NULL while leaving the page->index value alone. Because the two fields share a union, a non-zero page->index will be interpreted as a true page->pfmemalloc.

So the assumption is invalid if the networking code can see such a page, and it seems it can. We have encountered this with an NFS-over-loopback setup when such a page is attached to a new skbuf. There is no copying going on in this case, so the page confuses __skb_fill_page_desc, which interprets the index as the pfmemalloc flag. The network stack then drops packets that have been allocated using the reserves unless they are to be queued on sockets handling the swapping, which is the case here. That leads to hangs: the NFS client waits for a response from the server which has been dropped and thus never arrives.

The struct page is already heavily packed, so rather than finding another hole to put the flag in, let's do a trick instead. We can reuse the index again, but define it to an impossible value (-1UL). This is a page index, so it should never see a value that large. Replace all direct users of page->pfmemalloc by page_is_pfmemalloc(), which will hide this nastiness from unspoiled eyes.

The information will obviously get lost if somebody wants to use page->index, but that was the case before, and the original code expected that the information should be persisted somewhere else if it is really needed (e.g. what SLAB and SLUB do).
[firstname.lastname@example.org: fix blooper in slub]
Fixes: c48a11c7ad26 ("netvm: propagate page->pfmemalloc to skb")
Signed-off-by: Michal Hocko <email@example.com>
Debugged-by: Vlastimil Babka <firstname.lastname@example.org>
Debugged-by: Jiri Bohac <email@example.com>
Cc: Eric Dumazet <firstname.lastname@example.org>
Cc: David Miller <email@example.com>
Acked-by: Mel Gorman <firstname.lastname@example.org>
Cc: <email@example.com> [3.6+]
Signed-off-by: Andrew Morton <firstname.lastname@example.org>
Signed-off-by: Linus Torvalds <email@example.com>
Diffstat (limited to 'include/linux/mm.h')
1 file changed, 28 insertions, 0 deletions
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 2e872f92dbac..bf6f117fcf4d 100644
@@ -1003,6 +1003,34 @@ static inline int page_mapped(struct page *page)
 	return atomic_read(&(page)->_mapcount) >= 0;
 }
 
+/*
+ * Return true only if the page has been allocated with
+ * ALLOC_NO_WATERMARKS and the low watermark was not
+ * met implying that the system is under some pressure.
+ */
+static inline bool page_is_pfmemalloc(struct page *page)
+{
+	/*
+	 * Page index cannot be this large so this must be
+	 * a pfmemalloc page.
+	 */
+	return page->index == -1UL;
+}
+
+/*
+ * Only to be called by the page allocator on a freshly allocated
+ * page.
+ */
+static inline void set_page_pfmemalloc(struct page *page)
+{
+	page->index = -1UL;
+}
+
+static inline void clear_page_pfmemalloc(struct page *page)
+{
+	page->index = 0;
+}
+
 /*
  * Different kinds of faults, as returned by handle_mm_fault().
  * Used to decide whether a process gets delivered SIGBUS or
  * just gets major/minor fault counters bumped up.