Message ID | 20210721063926.3024591-6-ying.huang@intel.com (mailing list archive)
---|---
State | New
Series | [-V11,1/9] mm/numa: automatically generate node migration order
On 21 Jul 2021, at 2:39, Huang Ying wrote:

> From: Dave Hansen <dave.hansen@linux.intel.com>
>
> Anonymous pages are kept on their own LRU(s). These lists could
> theoretically always be scanned and maintained. But, without swap, there
> is currently nothing the kernel can *do* with the results of a scanned,
> sorted LRU for anonymous pages.
>
> A check for '!total_swap_pages' currently serves as a valid check as to
> whether anonymous LRUs should be maintained. However, another method will
> be added shortly: page demotion.
>
> Abstract out the 'total_swap_pages' checks into a helper, give it a
> logically significant name, and check for the possibility of page
> demotion.
>
> Link: https://lkml.kernel.org/r/20210715055145.195411-7-ying.huang@intel.com
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Reviewed-by: Yang Shi <shy828301@gmail.com>
> Reviewed-by: Greg Thelen <gthelen@google.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Keith Busch <kbusch@kernel.org>
> Cc: Yang Shi <yang.shi@linux.alibaba.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>  mm/vmscan.c | 20 ++++++++++++++++++--
>  1 file changed, 18 insertions(+), 2 deletions(-)

LGTM. Reviewed-by: Zi Yan <ziy@nvidia.com>

—
Best Regards,
Yan, Zi
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 90fa026cfa29..d79bf91700de 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2729,6 +2729,21 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	}
 }
 
+/*
+ * Anonymous LRU management is a waste if there is
+ * ultimately no way to reclaim the memory.
+ */
+static bool can_age_anon_pages(struct pglist_data *pgdat,
+			       struct scan_control *sc)
+{
+	/* Aging the anon LRU is valuable if swap is present: */
+	if (total_swap_pages > 0)
+		return true;
+
+	/* Also valuable if anon pages can be demoted: */
+	return can_demote(pgdat->node_id, sc);
+}
+
 static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
 	unsigned long nr[NR_LRU_LISTS];
@@ -2838,7 +2853,8 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+	if (can_age_anon_pages(lruvec_pgdat(lruvec), sc) &&
+	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }
@@ -3669,7 +3685,7 @@ static void age_active_anon(struct pglist_data *pgdat,
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	if (!total_swap_pages)
+	if (!can_age_anon_pages(pgdat, sc))
 		return;
 
 	lruvec = mem_cgroup_lruvec(NULL, pgdat);
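For readers landing on this patch without the rest of the series: the diff calls can_demote(), which is introduced by an earlier patch in this set and is not shown above. As a minimal sketch of what that helper means, assuming the demotion plumbing posted earlier in the series (numa_demotion_enabled, next_demotion_node(), and scan_control's no_demotion flag), it behaves roughly like this; treat the details as illustrative rather than as the committed code:

```c
/*
 * Illustrative sketch only -- can_demote() comes from an earlier
 * patch in this series, not from the diff above.  Roughly: anon
 * pages can be demoted when the global knob is enabled, this
 * particular reclaim pass permits demotion, and the node has a
 * slower memory tier to demote into.
 */
static bool can_demote(int nid, struct scan_control *sc)
{
	if (!numa_demotion_enabled)
		return false;
	if (sc && sc->no_demotion)
		return false;
	if (next_demotion_node(nid) == NUMA_NO_NODE)
		return false;

	return true;
}
```

With that in place, can_age_anon_pages() reduces both call sites to a single question, "could reclaim do anything useful with an aged anon page?", so any future way of disposing of anon pages beyond swap and demotion only needs to be taught to this one helper.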