From patchwork Thu Nov 5 13:10:12 2020
X-Patchwork-Submitter: Yafang Shao
X-Patchwork-Id: 11884393
From: Yafang Shao <laoar.shao@gmail.com>
To: akpm@linux-foundation.org, mhocko@suse.com, minchan@kernel.org, hannes@cmpxchg.org
Cc: linux-mm@kvack.org, Yafang Shao <laoar.shao@gmail.com>
Subject: [PATCH] mm: account lazily freed anon pages in NR_FILE_PAGES
Date: Thu, 5 Nov 2020 21:10:12 +0800
Message-Id: <20201105131012.82457-1-laoar.shao@gmail.com>

We use memory utilization (Used / Total) to monitor memory pressure. If
it grows too high, the system may sooner or later hit OOM when swap is
off, and we then make adjustments on that system. However, this method
has been broken since MADV_FREE was introduced, because lazily freed
anonymous pages can be reclaimed under memory pressure while they are
still accounted in NR_ANON_MAPPED.

Furthermore, since commit f7ad2a6cb9f7 ("mm: move MADV_FREE pages into
LRU_INACTIVE_FILE list"), these lazily freed anonymous pages are moved
from the anon LRU list to the file LRU list. As a result,
(Inactive(file) + Active(file)) may be much larger than Cached in
/proc/meminfo, which confuses our users. So account the lazily freed
anonymous pages in NR_FILE_PAGES as well.
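For reference, the mismatch can be observed with a minimal userspace
program along the following lines (an illustrative sketch, not part of
this patch; the 64 MiB mapping size and the printed fields are arbitrary
choices, and MADV_FREE needs Linux >= 4.5):

/*
 * Sketch: fault in an anonymous mapping, mark it lazily freeable with
 * MADV_FREE, and print the /proc/meminfo fields this patch touches.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define SIZE (64UL << 20)	/* 64 MiB, an arbitrary test size */

static void print_meminfo(const char *tag)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f) {
		perror("fopen");
		exit(1);
	}
	printf("--- %s ---\n", tag);
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Cached:", 7) ||
		    !strncmp(line, "AnonPages:", 10) ||
		    !strncmp(line, "Active(file):", 13) ||
		    !strncmp(line, "Inactive(file):", 15))
			fputs(line, stdout);
	}
	fclose(f);
}

int main(void)
{
	char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 1, SIZE);	/* fault the pages in */
	print_meminfo("before MADV_FREE");

	if (madvise(p, SIZE, MADV_FREE)) {
		perror("madvise");
		return 1;
	}
	/*
	 * The lazily freed pages now sit on the inactive file LRU, so
	 * Inactive(file) grows by roughly 64 MiB while Cached does not
	 * (exact deltas depend on per-CPU pagevec draining): the
	 * mismatch described above.
	 */
	print_meminfo("after MADV_FREE");

	munmap(p, SIZE);
	return 0;
}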
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
---
 mm/memcontrol.c | 11 +++++++++--
 mm/rmap.c       | 26 ++++++++++++++++++--------
 mm/swap.c       |  2 ++
 mm/vmscan.c     |  2 ++
 4 files changed, 31 insertions(+), 10 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3dcbf24d2227..217a6f10fa8d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5659,8 +5659,15 @@ static int mem_cgroup_move_account(struct page *page,
 
 	if (PageAnon(page)) {
 		if (page_mapped(page)) {
-			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
-			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+			if (!PageSwapBacked(page) && !PageSwapCache(page) &&
+			    !PageUnevictable(page)) {
+				__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
+				__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
+			} else {
+				__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
+				__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+			}
+
 			if (PageTransHuge(page)) {
 				__mod_lruvec_state(from_vec, NR_ANON_THPS, -nr_pages);
diff --git a/mm/rmap.c b/mm/rmap.c
index 1b84945d655c..690ca7ff2392 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1312,8 +1312,13 @@ static void page_remove_anon_compound_rmap(struct page *page)
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
 
-	if (nr)
-		__mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr);
+	if (nr) {
+		if (PageLRU(page) && PageAnon(page) && !PageSwapBacked(page) &&
+		    !PageSwapCache(page) && !PageUnevictable(page))
+			__mod_lruvec_page_state(page, NR_FILE_PAGES, -nr);
+		else
+			__mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr);
+	}
 }
 
 /**
@@ -1341,12 +1346,17 @@ void page_remove_rmap(struct page *page, bool compound)
 	if (!atomic_add_negative(-1, &page->_mapcount))
 		goto out;
 
-	/*
-	 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
-	 * these counters are not modified in interrupt context, and
-	 * pte lock(a spinlock) is held, which implies preemption disabled.
-	 */
-	__dec_lruvec_page_state(page, NR_ANON_MAPPED);
+	if (PageLRU(page) && PageAnon(page) && !PageSwapBacked(page) &&
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
+		__dec_lruvec_page_state(page, NR_FILE_PAGES);
+	} else {
+		/*
+		 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
+		 * these counters are not modified in interrupt context, and
+		 * pte lock(a spinlock) is held, which implies preemption disabled.
+		 */
+		__dec_lruvec_page_state(page, NR_ANON_MAPPED);
+	}
 
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
diff --git a/mm/swap.c b/mm/swap.c
index 47a47681c86b..340c5276a0f3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -601,6 +601,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, LRU_INACTIVE_ANON + active);
+		__mod_lruvec_state(lruvec, NR_ANON_MAPPED, -nr_pages);
 		ClearPageActive(page);
 		ClearPageReferenced(page);
 		/*
@@ -610,6 +611,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 		 */
 		ClearPageSwapBacked(page);
 		add_page_to_lru_list(page, lruvec, LRU_INACTIVE_FILE);
+		__mod_lruvec_state(lruvec, NR_FILE_PAGES, nr_pages);
 
 		__count_vm_events(PGLAZYFREE, nr_pages);
 		__count_memcg_events(lruvec_memcg(lruvec), PGLAZYFREE,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 1b8f0e059767..4821124c70f7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1428,6 +1428,8 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 				goto keep_locked;
 			}
 
+			mod_lruvec_page_state(page, NR_ANON_MAPPED, nr_pages);
+			mod_lruvec_page_state(page, NR_FILE_PAGES, -nr_pages);
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,
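Note: the lazyfree test above is open-coded in three places (twice with
an extra PageLRU() check). If the approach is acceptable, it could be
factored into a helper along the lines of the untested sketch below,
where the name page_is_lazyfree() is made up for illustration:

static inline bool page_is_lazyfree(struct page *page)
{
	/* MADV_FREE pages: clean anon pages with SwapBacked cleared */
	return PageAnon(page) && !PageSwapBacked(page) &&
	       !PageSwapCache(page) && !PageUnevictable(page);
}

The rmap callers would then test "PageLRU(page) && page_is_lazyfree(page)",
while mem_cgroup_move_account() would use page_is_lazyfree() alone.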