author    Jaewon Kim <jaewon31.kim@samsung.com>  2015-09-08 15:02:21 -0700
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2015-10-01 12:07:32 +0200
commit    de047ce49582e2fc3303efc58b40f4dbe7a4519f (patch)
tree      185b0387485a2820994ab17cb158c24ee6b65c12
parent    706ad8dcb5a2db85e6c2b2da30449c9a9918fa9d (diff)
download  linux-linaro-stable-de047ce49582e2fc3303efc58b40f4dbe7a4519f.tar.gz
vmscan: fix increasing nr_isolated incurred by putback unevictable pages
commit c54839a722a02818677bcabe57e957f0ce4f841d upstream.

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns the
number of pages removed from the candidate list.  But shrink_page_list()
puts back mlocked pages without passing them to the caller and without
counting them in nr_reclaimed.  This leaks the NR_ISOLATED count.

To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller.  The caller will take care of those pages.

Minchan said:

  It fixes two issues.

  1. With an unevictable page, cma_alloc will be successful.

     Exactly speaking, cma_alloc of the current kernel will fail due to
     unevictable pages.

  2. Fix the leak of the NR_ISOLATED counter of vmstat.

     With it, too_many_isolated works.  Otherwise, it could cause a hang
     until the process gets SIGKILL.

Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-rw-r--r--  mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 233f0011f768..a1e3becef05e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -925,7 +925,7 @@ cull_mlocked:
 		if (PageSwapCache(page))
 			try_to_free_swap(page);
 		unlock_page(page);
-		putback_lru_page(page);
+		list_add(&page->lru, &ret_pages);
 		continue;
 activate_locked: