mm: avoid swapping out with swappiness==0
author	Satoru Moriya <satoru.moriya@hds.com>
	Tue, 29 May 2012 22:06:47 +0000 (15:06 -0700)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Tue, 2 Oct 2012 17:30:38 +0000 (10:30 -0700)
commit	49194d4e7d8c14fb83f213c54a476685f8389c70
tree	a773acfde98ebc23d3ecd56067e2e31a5f0bcbb3
parent	642550436c8b3a933550468996ef547630344a0e
mm: avoid swapping out with swappiness==0

commit fe35004fbf9eaf67482b074a2e032abb9c89b1dd upstream.

Sometimes we'd like to avoid swapping out anonymous memory.  In
particular, avoid swapping out pages of important processes or process
groups while there is a reasonable amount of page cache in RAM, so that
we can satisfy our customers' requirements.

OTOH, we can control how aggressively the kernel swaps memory pages with
/proc/sys/vm/swappiness for global reclaim and
/sys/fs/cgroup/memory/memory.swappiness for each memcg.
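
The knobs are usually set by simply writing a value between 0 and 100
into those files.  As a minimal userspace sketch (illustrative only;
assumes root and a mounted procfs):

    /* Illustrative only: write 0 to the global swappiness knob. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/vm/swappiness", "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fprintf(f, "0\n");
        return fclose(f) == 0 ? 0 : 1;
    }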

But with the current reclaim implementation, the kernel may swap out
anonymous pages even if we set swappiness=0 and there is page cache in
RAM.

This patch changes the behavior with swappiness==0.  If we set
swappiness==0, the kernel does not swap out at all; for global reclaim
it only starts swapping again once the amount of free pages plus
file-backed pages in a zone has dropped very low (nr_free +
nr_filebacked < high watermark).
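
As a rough illustration of the intended behavior (a toy userspace model,
not the actual change to mm/vmscan.c; names such as scan_fractions,
nr_free, nr_file and high_wmark are made up for this sketch): reclaim
derives anon/file scan shares from swappiness, so swappiness==0 yields a
zero anon share unless free plus file-backed pages have fallen below the
high watermark, in which case anon pages are force-scanned:

    /*
     * Toy model of the decision described above; the names and exact
     * arithmetic are illustrative, not the kernel's code.
     */
    #include <stdio.h>

    struct fractions { unsigned long anon, file; };

    static struct fractions scan_fractions(int swappiness,
                                           unsigned long nr_free,
                                           unsigned long nr_file,
                                           unsigned long high_wmark)
    {
        /* Safety valve: almost no page cache left, scan only anon. */
        if (nr_free + nr_file <= high_wmark)
            return (struct fractions){ .anon = 1, .file = 0 };

        /*
         * With swappiness==0 the anon share is exactly 0, so anon
         * pages are not scanned and nothing gets swapped out.
         */
        return (struct fractions){
            .anon = (unsigned long)swappiness,
            .file = (unsigned long)(200 - swappiness),
        };
    }

    int main(void)
    {
        struct fractions f = scan_fractions(0, 1000, 5000, 500);

        printf("plenty of cache, swappiness 0: anon=%lu file=%lu\n",
               f.anon, f.file);

        f = scan_fractions(0, 100, 300, 500);
        printf("cache nearly gone, swappiness 0: anon=%lu file=%lu\n",
               f.anon, f.file);
        return 0;
    }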

Signed-off-by: Satoru Moriya <satoru.moriya@hds.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
mm/vmscan.c