From patchwork Mon Dec 28 16:41:04 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 1/7] mm: memcontrol: fix NR_ANON_THPS accounting in charge moving
Date: Tue, 29 Dec 2020 00:41:04 +0800
Message-Id: <20201228164110.2838-2-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

The unit of NR_ANON_THPS is already HPAGE_PMD_NR, so the counter should be incremented and decremented by one rather than by nr_pages.

Fixes: 468c398233da ("mm: memcontrol: switch to native NR_ANON_THPS counter")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
Acked-by: Pankaj Gupta
Reviewed-by: Roman Gushchin
Reviewed-by: Shakeel Butt
---
 mm/memcontrol.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b80328f52fb4..8818bf64d6fe 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5653,10 +5653,8 @@ static int mem_cgroup_move_account(struct page *page,
 		__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
 		if (PageTransHuge(page)) {
-			__mod_lruvec_state(from_vec, NR_ANON_THPS,
-					   -nr_pages);
-			__mod_lruvec_state(to_vec, NR_ANON_THPS,
-					   nr_pages);
+			__dec_lruvec_state(from_vec, NR_ANON_THPS);
+			__inc_lruvec_state(to_vec, NR_ANON_THPS);
 		}
 	}
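To see the size of the skew the fix above removes, here is a standalone userspace sketch (not kernel code; it assumes 4 KiB base pages and HPAGE_PMD_NR == 512, the x86_64 defaults). Before the fix, moving one THP between cgroups changed the THP-unit counter by nr_pages instead of by 1, and the meminfo path then multiplied by HPAGE_PMD_NR again when printing AnonHugePages:

	#include <stdio.h>

	int main(void)
	{
		const long hpage_pmd_nr = 512;	/* base pages per PMD-mapped THP */
		const long page_kb = 4;		/* kB per base page */

		/* Pre-fix: one moved THP skewed the THP-unit counter by
		 * 512 - 1; meminfo scaled the counter by HPAGE_PMD_NR again. */
		long skew_kb = (hpage_pmd_nr - 1) * hpage_pmd_nr * page_kb;

		printf("AnonHugePages skew per moved THP: %ld kB\n", skew_kb);
		return 0;
	}

This prints 1046528 kB, i.e. roughly 1 GB of phantom AnonHugePages per moved THP.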
From patchwork Mon Dec 28 16:41:05 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 2/7] mm: memcontrol: convert NR_ANON_THPS account to pages
Date: Tue, 29 Dec 2020 00:41:05 +0800
Message-Id: <20201228164110.2838-3-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the cached error can add up to GBs of memory: on a 96-CPU system, for example, the per-cpu stat threshold reaches its maximum of 125, and the per-cpu counters can then hide up to 23.4375 GB in total.

A THP is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the per-cpu batching for it seems sensible. Every THP stats update will then overflow the per-cpu counter and resort to an atomic global update, but this makes the THP vmstat counters accurate.

So convert the NR_ANON_THPS accounting to pages. This is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). It also makes the units of the vmstat counters more uniform: they are now pages, kB, or bytes. A B or KB suffix means bytes or kB; counters without a suffix are in pages.
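For reference, the 23.4375 GB figure works out as follows, assuming 2 MiB THPs (HPAGE_PMD_NR = 512 with 4 KiB base pages): each per-cpu counter may drift by up to the threshold of 125 THP units before folding into the global counter, so the worst case across 96 CPUs is

	96 CPUs * 125 * 512 pages * 4 KiB = 96 * 125 * 2 MiB = 24000 MiB = 23.4375 GB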
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c    | 15 +++++++++------
 fs/proc/meminfo.c      |  2 +-
 include/linux/mmzone.h | 13 +++++++++++++
 mm/huge_memory.c       |  3 ++-
 mm/memcontrol.c        | 20 ++++++--------------
 mm/page_alloc.c        |  2 +-
 mm/rmap.c              |  6 +++---
 mm/vmstat.c            | 11 +++++++++--
 8 files changed, 44 insertions(+), 28 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 04f71c7bc3f8..6da0c3508bc9 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -461,8 +461,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 			     nid, K(sunreclaimable)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			     ,
-			     nid, K(node_page_state(pgdat, NR_ANON_THPS) *
-				    HPAGE_PMD_NR),
+			     nid, K(node_page_state(pgdat, NR_ANON_THPS)),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS) *
 				    HPAGE_PMD_NR),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
@@ -519,10 +518,14 @@ static ssize_t node_read_vmstat(struct device *dev,
 			     sum_zone_numa_state(nid, i));
 #endif

-	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-		len += sysfs_emit_at(buf, len, "%s %lu\n",
-				     node_stat_name(i),
-				     node_page_state_pages(pgdat, i));
+	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
+		unsigned long pages = node_page_state_pages(pgdat, i);
+
+		if (vmstat_item_print_in_thp(i))
+			pages /= HPAGE_PMD_NR;
+		len += sysfs_emit_at(buf, len, "%s %lu\n", node_stat_name(i),
+				     pages);
+	}

 	return len;
 }

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index d6fc74619625..a635c8a84ddf 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -129,7 +129,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	show_val_kb(m, "AnonHugePages:  ",
-		    global_node_page_state(NR_ANON_THPS) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_ANON_THPS));
 	show_val_kb(m, "ShmemHugePages: ",
 		    global_node_page_state(NR_SHMEM_THPS) * HPAGE_PMD_NR);
 	show_val_kb(m, "ShmemPmdMapped: ",

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b593316bff3d..67d50ef5dd20 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -210,6 +210,19 @@ enum node_stat_item {
 };

 /*
+ * Returns true if the item should be printed in THPs (/proc/vmstat
+ * currently prints number of anon, file and shmem THPs. But the item
+ * is charged in pages).
+ */
+static __always_inline bool vmstat_item_print_in_thp(enum node_stat_item item)
+{
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
+		return false;
+
+	return item == NR_ANON_THPS;
+}
+
+/*
  * Returns true if the value is measured in bytes (most vmstat values are
  * measured in pages). This defines the API part, the internal representation
  * might be different.

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 10dd3cae5f53..66ec454120de 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2178,7 +2178,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		lock_page_memcg(page);
 		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
 			/* Last compound_mapcount is gone. */
-			__dec_lruvec_page_state(page, NR_ANON_THPS);
+			__mod_lruvec_page_state(page, NR_ANON_THPS,
+						-HPAGE_PMD_NR);
 			if (TestClearPageDoubleMap(page)) {
 				/* No need in mapcount reference anymore */
 				for (i = 0; i < HPAGE_PMD_NR; i++)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8818bf64d6fe..b18e25a5cdf3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1532,7 +1532,7 @@ static struct memory_stat memory_stats[] = {
 	 * on some architectures, the macro of HPAGE_PMD_SIZE is not
 	 * constant(e.g. powerpc).
 	 */
-	{ "anon_thp", 0, NR_ANON_THPS },
+	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
 	{ "file_thp", 0, NR_FILE_THPS },
 	{ "shmem_thp", 0, NR_SHMEM_THPS },
 #endif
@@ -1565,8 +1565,7 @@ static int __init memory_stats_init(void)

 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memory_stats[i].idx == NR_ANON_THPS ||
-		    memory_stats[i].idx == NR_FILE_THPS ||
+		if (memory_stats[i].idx == NR_FILE_THPS ||
 		    memory_stats[i].idx == NR_SHMEM_THPS)
 			memory_stats[i].ratio = HPAGE_PMD_SIZE;
 #endif
@@ -4088,10 +4087,6 @@ static int memcg_stat_show(struct seq_file *m, void *v)
 		if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account())
 			continue;
 		nr = memcg_page_state_local(memcg, memcg1_stats[i]);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memcg1_stats[i] == NR_ANON_THPS)
-			nr *= HPAGE_PMD_NR;
-#endif
 		seq_printf(m, "%s %lu\n", memcg1_stat_names[i],
 			   nr * PAGE_SIZE);
 	}
@@ -4122,10 +4117,6 @@ static int memcg_stat_show(struct seq_file *m, void *v)
 		if (memcg1_stats[i] == MEMCG_SWAP && !do_memsw_account())
 			continue;
 		nr = memcg_page_state(memcg, memcg1_stats[i]);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memcg1_stats[i] == NR_ANON_THPS)
-			nr *= HPAGE_PMD_NR;
-#endif
 		seq_printf(m, "total_%s %llu\n", memcg1_stat_names[i],
 			   (u64)nr * PAGE_SIZE);
 	}
@@ -5653,10 +5644,11 @@ static int mem_cgroup_move_account(struct page *page,
 			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
 			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
 			if (PageTransHuge(page)) {
-				__dec_lruvec_state(from_vec, NR_ANON_THPS);
-				__inc_lruvec_state(to_vec, NR_ANON_THPS);
+				__mod_lruvec_state(from_vec, NR_ANON_THPS,
+						   -nr_pages);
+				__mod_lruvec_state(to_vec, NR_ANON_THPS,
+						   nr_pages);
 			}
-
 		}
 	} else {
 		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 469e28f95ce7..1700f52b7869 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5580,7 +5580,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			K(node_page_state(pgdat, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
 					HPAGE_PMD_NR),
-			K(node_page_state(pgdat, NR_ANON_THPS) * HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_ANON_THPS)),
 #endif
 			K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
 			node_page_state(pgdat, NR_KERNEL_STACK_KB),

diff --git a/mm/rmap.c b/mm/rmap.c
index 08c56aaf72eb..c4d5c63cfd29 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1144,7 +1144,7 @@ void do_page_add_anon_rmap(struct page *page,
 	 * disabled.
 	 */
 	if (compound)
-		__inc_lruvec_page_state(page, NR_ANON_THPS);
+		__mod_lruvec_page_state(page, NR_ANON_THPS, nr);
 	__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 }
@@ -1186,7 +1186,7 @@ void page_add_new_anon_rmap(struct page *page,
 		if (hpage_pincount_available(page))
 			atomic_set(compound_pincount_ptr(page), 0);

-		__inc_lruvec_page_state(page, NR_ANON_THPS);
+		__mod_lruvec_page_state(page, NR_ANON_THPS, nr);
 	} else {
 		/* Anon THP always mapped first with PMD */
 		VM_BUG_ON_PAGE(PageTransCompound(page), page);
@@ -1292,7 +1292,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return;

-	__dec_lruvec_page_state(page, NR_ANON_THPS);
+	__mod_lruvec_page_state(page, NR_ANON_THPS, -thp_nr_pages(page));

 	if (TestClearPageDoubleMap(page)) {
 		/*

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 663f49ed1fd6..37c9e7b21e1e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1624,8 +1624,12 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
 	if (is_zone_first_populated(pgdat, zone)) {
 		seq_printf(m, "\n  per-node stats");
 		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
+			unsigned long pages = node_page_state_pages(pgdat, i);
+
+			if (vmstat_item_print_in_thp(i))
+				pages /= HPAGE_PMD_NR;
 			seq_printf(m, "\n      %-12s %lu", node_stat_name(i),
-				   node_page_state_pages(pgdat, i));
+				   pages);
 		}
 	}
 	seq_printf(m,
@@ -1745,8 +1749,11 @@ static void *vmstat_start(struct seq_file *m, loff_t *pos)
 	v += NR_VM_NUMA_STAT_ITEMS;
 #endif

-	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+	for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
 		v[i] = global_node_page_state_pages(i);
+		if (vmstat_item_print_in_thp(i))
+			v[i] /= HPAGE_PMD_NR;
+	}
 	v += NR_VM_NODE_STAT_ITEMS;

 	global_dirty_limits(v + NR_DIRTY_BG_THRESHOLD,
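To make the user-visible behaviour after the patch above concrete, here is a standalone sketch (plain userspace C, assuming 4 KiB pages and HPAGE_PMD_NR == 512): the counter is now stored in pages; /proc/vmstat divides it back to a THP count via vmstat_item_print_in_thp(), while the K() used by /proc/meminfo converts pages straight to kB:

	#include <stdio.h>

	int main(void)
	{
		const unsigned long hpage_pmd_nr = 512;
		unsigned long nr_anon_thps = 3 * hpage_pmd_nr; /* 3 THPs, in pages */

		/* /proc/vmstat still reports a THP count: */
		printf("nr_anon_transparent_hugepages %lu\n",
		       nr_anon_thps / hpage_pmd_nr);		/* 3 */
		/* /proc/meminfo still reports kB: */
		printf("AnonHugePages: %lu kB\n", nr_anon_thps * 4);	/* 6144 */
		return 0;
	}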
From patchwork Mon Dec 28 16:41:06 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 3/7] mm: memcontrol: convert NR_FILE_THPS account to pages
Date: Tue, 29 Dec 2020 00:41:06 +0800
Message-Id: <20201228164110.2838-4-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the cached error can add up to GBs of memory: on a 96-CPU system, for example, the per-cpu stat threshold reaches its maximum of 125, and the per-cpu counters can then hide up to 23.4375 GB in total.

A THP is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the per-cpu batching for it seems sensible. Every THP stats update will then overflow the per-cpu counter and resort to an atomic global update, but this makes the THP vmstat counters accurate.

So convert the NR_FILE_THPS accounting to pages. This is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). It also makes the units of the vmstat counters more uniform: they are now pages, kB, or bytes. A B or KB suffix means bytes or kB; counters without a suffix are in pages.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c    | 3 +--
 fs/proc/meminfo.c      | 2 +-
 include/linux/mmzone.h | 3 ++-
 mm/filemap.c           | 2 +-
 mm/huge_memory.c       | 5 ++++-
 mm/khugepaged.c        | 4 +++-
 mm/memcontrol.c        | 5 ++---
 7 files changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 6da0c3508bc9..d5952f754911 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -466,8 +466,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 				    HPAGE_PMD_NR),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
 				    HPAGE_PMD_NR),
-			     nid, K(node_page_state(pgdat, NR_FILE_THPS) *
-				    HPAGE_PMD_NR),
+			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),
 			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED) *
 				    HPAGE_PMD_NR)
 #endif

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index a635c8a84ddf..7ea4679880c8 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -135,7 +135,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "ShmemPmdMapped: ",
 		    global_node_page_state(NR_SHMEM_PMDMAPPED) * HPAGE_PMD_NR);
 	show_val_kb(m, "FileHugePages:  ",
-		    global_node_page_state(NR_FILE_THPS) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_FILE_THPS));
 	show_val_kb(m, "FilePmdMapped: ",
 		    global_node_page_state(NR_FILE_PMDMAPPED) * HPAGE_PMD_NR);
 #endif

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 67d50ef5dd20..b751a9898bb6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -219,7 +219,8 @@ static __always_inline bool vmstat_item_print_in_thp(enum node_stat_item item)
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return false;

-	return item == NR_ANON_THPS;
+	return item == NR_ANON_THPS ||
+	       item == NR_FILE_THPS;
 }

diff --git a/mm/filemap.c b/mm/filemap.c
index 78090ee08ac2..c5e6f5202476 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -207,7 +207,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 		if (PageTransHuge(page))
 			__dec_lruvec_page_state(page, NR_SHMEM_THPS);
 	} else if (PageTransHuge(page)) {
-		__dec_lruvec_page_state(page, NR_FILE_THPS);
+		__mod_lruvec_page_state(page, NR_FILE_THPS, -nr);
 		filemap_nr_thps_dec(mapping);
 	}

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 66ec454120de..cdf61596ef76 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2745,10 +2745,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
+			int nr = thp_nr_pages(head);
+
 			if (PageSwapBacked(head))
 				__dec_lruvec_page_state(head, NR_SHMEM_THPS);
 			else
-				__dec_lruvec_page_state(head, NR_FILE_THPS);
+				__mod_lruvec_page_state(head, NR_FILE_THPS,
+							-nr);
 		}

 		__split_huge_page(page, list, end);

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 494d3cb0b58a..23f93a3e2e69 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1654,6 +1654,7 @@ static void collapse_file(struct mm_struct *mm,
 	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
+	int nr;

 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
 	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
@@ -1865,11 +1866,12 @@ static void collapse_file(struct mm_struct *mm,
 		put_page(page);
 		goto xa_unlocked;
 	}
+	nr = thp_nr_pages(new_page);

 	if (is_shmem)
 		__inc_lruvec_page_state(new_page, NR_SHMEM_THPS);
 	else {
-		__inc_lruvec_page_state(new_page, NR_FILE_THPS);
+		__mod_lruvec_page_state(new_page, NR_FILE_THPS, nr);
 		filemap_nr_thps_inc(mapping);
 	}

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b18e25a5cdf3..04985c8c6a0a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1533,7 +1533,7 @@ static struct memory_stat memory_stats[] = {
 	 * constant(e.g. powerpc).
 	 */
 	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
-	{ "file_thp", 0, NR_FILE_THPS },
+	{ "file_thp", PAGE_SIZE, NR_FILE_THPS },
 	{ "shmem_thp", 0, NR_SHMEM_THPS },
 #endif
@@ -1565,8 +1565,7 @@ static int __init memory_stats_init(void)

 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memory_stats[i].idx == NR_FILE_THPS ||
-		    memory_stats[i].idx == NR_SHMEM_THPS)
+		if (memory_stats[i].idx == NR_SHMEM_THPS)
 			memory_stats[i].ratio = HPAGE_PMD_SIZE;
 #endif
 		VM_BUG_ON(!memory_stats[i].ratio);
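Patches 3-6 lean on thp_nr_pages() for the page counts used above; the contract assumed throughout (mainline defines the helper in include/linux/huge_mm.h) is simply "number of base pages backing the page", so the new -nr updates equal the old one-THP decrements scaled to pages. A minimal sketch of that contract, not the kernel source:

	/* For a compound page of the given order this evaluates to
	 * 1 << order; for a base (order-0) page it is 1. */
	static inline unsigned long thp_nr_pages_sketch(unsigned int order)
	{
		return 1UL << order;	/* 512 for a 2 MiB THP on x86_64 */
	}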
From patchwork Mon Dec 28 16:41:07 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 4/7] mm: memcontrol: convert NR_SHMEM_THPS account to pages
Date: Tue, 29 Dec 2020 00:41:07 +0800
Message-Id: <20201228164110.2838-5-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the cached error can add up to GBs of memory: on a 96-CPU system, for example, the per-cpu stat threshold reaches its maximum of 125, and the per-cpu counters can then hide up to 23.4375 GB in total.

A THP is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the per-cpu batching for it seems sensible. Every THP stats update will then overflow the per-cpu counter and resort to an atomic global update, but this makes the THP vmstat counters accurate.

So convert the NR_SHMEM_THPS accounting to pages. This is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). It also makes the units of the vmstat counters more uniform: they are now pages, kB, or bytes. A B or KB suffix means bytes or kB; counters without a suffix are in pages.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c    |  3 +--
 fs/proc/meminfo.c      |  2 +-
 include/linux/mmzone.h |  3 ++-
 mm/filemap.c           |  2 +-
 mm/huge_memory.c       |  3 ++-
 mm/khugepaged.c        |  2 +-
 mm/memcontrol.c        | 26 ++------------------------
 mm/page_alloc.c        |  2 +-
 mm/shmem.c             |  2 +-
 9 files changed, 12 insertions(+), 33 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index d5952f754911..6d5ac6ffb6e1 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -462,8 +462,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			     ,
 			     nid, K(node_page_state(pgdat, NR_ANON_THPS)),
-			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS) *
-				    HPAGE_PMD_NR),
+			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS)),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
 				    HPAGE_PMD_NR),
 			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 7ea4679880c8..cfb107eaa3e6 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -131,7 +131,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "AnonHugePages:  ",
 		    global_node_page_state(NR_ANON_THPS));
 	show_val_kb(m, "ShmemHugePages: ",
-		    global_node_page_state(NR_SHMEM_THPS) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_SHMEM_THPS));
 	show_val_kb(m, "ShmemPmdMapped: ",
 		    global_node_page_state(NR_SHMEM_PMDMAPPED) * HPAGE_PMD_NR);
 	show_val_kb(m, "FileHugePages:  ",

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b751a9898bb6..788837f40b38 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -220,7 +220,8 @@ static __always_inline bool vmstat_item_print_in_thp(enum node_stat_item item)
 		return false;

 	return item == NR_ANON_THPS ||
-	       item == NR_FILE_THPS;
+	       item == NR_FILE_THPS ||
+	       item == NR_SHMEM_THPS;
 }

diff --git a/mm/filemap.c b/mm/filemap.c
index c5e6f5202476..1952e923cc2e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -205,7 +205,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 	if (PageSwapBacked(page)) {
 		__mod_lruvec_page_state(page, NR_SHMEM, -nr);
 		if (PageTransHuge(page))
-			__dec_lruvec_page_state(page, NR_SHMEM_THPS);
+			__mod_lruvec_page_state(page, NR_SHMEM_THPS, -nr);
 	} else if (PageTransHuge(page)) {
 		__mod_lruvec_page_state(page, NR_FILE_THPS, -nr);
 		filemap_nr_thps_dec(mapping);

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cdf61596ef76..5aa045e3b5dc 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2748,7 +2748,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			int nr = thp_nr_pages(head);

 			if (PageSwapBacked(head))
-				__dec_lruvec_page_state(head, NR_SHMEM_THPS);
+				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+							-nr);
 			else
 				__mod_lruvec_page_state(head, NR_FILE_THPS,
 							-nr);

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 23f93a3e2e69..8369d9620f6d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1869,7 +1869,7 @@ static void collapse_file(struct mm_struct *mm,
 	nr = thp_nr_pages(new_page);

 	if (is_shmem)
-		__inc_lruvec_page_state(new_page, NR_SHMEM_THPS);
+		__mod_lruvec_page_state(new_page, NR_SHMEM_THPS, nr);
 	else {
 		__mod_lruvec_page_state(new_page, NR_FILE_THPS, nr);
 		filemap_nr_thps_inc(mapping);

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 04985c8c6a0a..a40797a27f87 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1515,7 +1515,7 @@ struct memory_stat {
 	unsigned int idx;
 };

-static struct memory_stat memory_stats[] = {
+static const struct memory_stat memory_stats[] = {
 	{ "anon", PAGE_SIZE, NR_ANON_MAPPED },
 	{ "file", PAGE_SIZE, NR_FILE_PAGES },
 	{ "kernel_stack", 1024, NR_KERNEL_STACK_KB },
@@ -1527,14 +1527,9 @@ static struct memory_stat memory_stats[] = {
 	{ "file_dirty", PAGE_SIZE, NR_FILE_DIRTY },
 	{ "file_writeback", PAGE_SIZE, NR_WRITEBACK },
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	/*
-	 * The ratio will be initialized in memory_stats_init(). Because
-	 * on some architectures, the macro of HPAGE_PMD_SIZE is not
-	 * constant(e.g. powerpc).
-	 */
 	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
 	{ "file_thp", PAGE_SIZE, NR_FILE_THPS },
-	{ "shmem_thp", 0, NR_SHMEM_THPS },
+	{ "shmem_thp", PAGE_SIZE, NR_SHMEM_THPS },
 #endif
 	{ "inactive_anon", PAGE_SIZE, NR_INACTIVE_ANON },
 	{ "active_anon", PAGE_SIZE, NR_ACTIVE_ANON },
@@ -1559,23 +1554,6 @@ static struct memory_stat memory_stats[] = {
 	{ "workingset_nodereclaim", 1, WORKINGSET_NODERECLAIM },
 };

-static int __init memory_stats_init(void)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memory_stats[i].idx == NR_SHMEM_THPS)
-			memory_stats[i].ratio = HPAGE_PMD_SIZE;
-#endif
-		VM_BUG_ON(!memory_stats[i].ratio);
-		VM_BUG_ON(memory_stats[i].idx >= MEMCG_NR_STAT);
-	}
-
-	return 0;
-}
-pure_initcall(memory_stats_init);
-
 static char *memory_stat_format(struct mem_cgroup *memcg)
 {
 	struct seq_buf s;

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1700f52b7869..720fb5a220b6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5577,7 +5577,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			K(node_page_state(pgdat, NR_WRITEBACK)),
 			K(node_page_state(pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			K(node_page_state(pgdat, NR_SHMEM_THPS) * HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_SHMEM_THPS)),
 			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)
 					* HPAGE_PMD_NR),
 			K(node_page_state(pgdat, NR_ANON_THPS)),

diff --git a/mm/shmem.c b/mm/shmem.c
index 53d84d2c9fe5..de261cfbf987 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -713,7 +713,7 @@ static int shmem_add_to_page_cache(struct page *page,
 	}
 	if (PageTransHuge(page)) {
 		count_vm_event(THP_FILE_ALLOC);
-		__inc_lruvec_page_state(page, NR_SHMEM_THPS);
+		__mod_lruvec_page_state(page, NR_SHMEM_THPS, nr);
 	}
 	mapping->nrpages += nr;
 	__mod_lruvec_page_state(page, NR_FILE_PAGES, nr);
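A side effect worth spelling out (my reading of the diff above, not text from the commit message): once shmem_thp's ratio becomes PAGE_SIZE, every ratio left in memory_stats[] is a compile-time constant, which is what allows patch 4 to mark the array const and delete the memory_stats_init() initcall. That initcall existed only because HPAGE_PMD_SIZE is not a constant expression on some architectures (e.g. powerpc), so the THP ratios could not be written as static initializers.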
From patchwork Mon Dec 28 16:41:08 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 5/7] mm: memcontrol: convert NR_SHMEM_PMDMAPPED account to pages
Date: Tue, 29 Dec 2020 00:41:08 +0800
Message-Id: <20201228164110.2838-6-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the cached error can add up to GBs of memory: on a 96-CPU system, for example, the per-cpu stat threshold reaches its maximum of 125, and the per-cpu counters can then hide up to 23.4375 GB in total.

A THP is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the per-cpu batching for it seems sensible.
Every THP stats update will then overflow the per-cpu counter and resort to an atomic global update, but this makes the THP vmstat counters accurate.

So convert the NR_SHMEM_PMDMAPPED accounting to pages. This is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). It also makes the units of the vmstat counters more uniform: they are now pages, kB, or bytes. A B or KB suffix means bytes or kB; counters without a suffix are in pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c    |  3 +--
 fs/proc/meminfo.c      |  2 +-
 include/linux/mmzone.h |  3 ++-
 mm/page_alloc.c        |  3 +--
 mm/rmap.c              | 14 ++++++++++----
 5 files changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 6d5ac6ffb6e1..7a66aefe4e46 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -463,8 +463,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 			     ,
 			     nid, K(node_page_state(pgdat, NR_ANON_THPS)),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS)),
-			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
-				    HPAGE_PMD_NR),
+			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
 			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),
 			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED) *
 				    HPAGE_PMD_NR)

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index cfb107eaa3e6..c61f440570f9 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -133,7 +133,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "ShmemHugePages: ",
 		    global_node_page_state(NR_SHMEM_THPS));
 	show_val_kb(m, "ShmemPmdMapped: ",
-		    global_node_page_state(NR_SHMEM_PMDMAPPED) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_SHMEM_PMDMAPPED));
 	show_val_kb(m, "FileHugePages:  ",
 		    global_node_page_state(NR_FILE_THPS));
 	show_val_kb(m, "FilePmdMapped: ",

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 788837f40b38..7bdbfeeb5c8c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -221,7 +221,8 @@ static __always_inline bool vmstat_item_print_in_thp(enum node_stat_item item)

 	return item == NR_ANON_THPS ||
 	       item == NR_FILE_THPS ||
-	       item == NR_SHMEM_THPS;
+	       item == NR_SHMEM_THPS ||
+	       item == NR_SHMEM_PMDMAPPED;
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 720fb5a220b6..575fbfeea4b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5578,8 +5578,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 			K(node_page_state(pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(node_page_state(pgdat, NR_SHMEM_THPS)),
-			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)
-					* HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
 			K(node_page_state(pgdat, NR_ANON_THPS)),
 #endif
 			K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),

diff --git a/mm/rmap.c b/mm/rmap.c
index c4d5c63cfd29..1c1b576c0627 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1211,14 +1211,17 @@ void page_add_file_rmap(struct page *page, bool compound)
 	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
 	lock_page_memcg(page);
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
+		int nr_pages = thp_nr_pages(page);
+
+		for (i = 0, nr = 0; i < nr_pages; i++) {
 			if (atomic_inc_and_test(&page[i]._mapcount))
 				nr++;
 		}
 		if (!atomic_inc_and_test(compound_mapcount_ptr(page)))
 			goto out;
 		if (PageSwapBacked(page))
-			__inc_node_page_state(page, NR_SHMEM_PMDMAPPED);
+			__mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
+						nr_pages);
 		else
 			__inc_node_page_state(page, NR_FILE_PMDMAPPED);
 	} else {
@@ -1252,14 +1255,17 @@ static void page_remove_file_rmap(struct page *page, bool compound)

 	/* page still mapped by someone else? */
 	if (compound && PageTransHuge(page)) {
-		for (i = 0, nr = 0; i < thp_nr_pages(page); i++) {
+		int nr_pages = thp_nr_pages(page);
+
+		for (i = 0, nr = 0; i < nr_pages; i++) {
 			if (atomic_add_negative(-1, &page[i]._mapcount))
 				nr++;
 		}
 		if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
 			return;
 		if (PageSwapBacked(page))
-			__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
+			__mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
+						-nr_pages);
 		else
 			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
 	} else {
From patchwork Mon Dec 28 16:41:09 2020
From: Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v6 6/7] mm: memcontrol: convert NR_FILE_PMDMAPPED account to pages
Date: Tue, 29 Dec 2020 00:41:09 +0800
Message-Id: <20201228164110.2838-7-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>

Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the cached error can add up to GBs of memory: on a 96-CPU system, for example, the per-cpu stat threshold reaches its maximum of 125, and the per-cpu counters can then hide up to 23.4375 GB in total.

A THP is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the per-cpu batching for it seems sensible. Every THP stats update will then overflow the per-cpu counter and resort to an atomic global update, but this makes the THP vmstat counters accurate.

So convert the NR_FILE_PMDMAPPED accounting to pages. This is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). It also makes the units of the vmstat counters more uniform: they are now pages, kB, or bytes. A B or KB suffix means bytes or kB; counters without a suffix are in pages.
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c    | 3 +--
 fs/proc/meminfo.c      | 2 +-
 include/linux/mmzone.h | 3 ++-
 mm/rmap.c              | 6 ++++--
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 7a66aefe4e46..d02d86aec19f 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -465,8 +465,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS)),
 			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)),
 			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),
-			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED) *
-				    HPAGE_PMD_NR)
+			     nid, K(node_page_state(pgdat, NR_FILE_PMDMAPPED))
 #endif
 			    );
 	len += hugetlb_report_node_meminfo(buf, len, nid);

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index c61f440570f9..6fa761c9cc78 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -137,7 +137,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "FileHugePages:  ",
 		    global_node_page_state(NR_FILE_THPS));
 	show_val_kb(m, "FilePmdMapped: ",
-		    global_node_page_state(NR_FILE_PMDMAPPED) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_FILE_PMDMAPPED));
 #endif

 #ifdef CONFIG_CMA

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 7bdbfeeb5c8c..66d68e5d5a0f 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -222,7 +222,8 @@ static __always_inline bool vmstat_item_print_in_thp(enum node_stat_item item)
 	return item == NR_ANON_THPS ||
 	       item == NR_FILE_THPS ||
 	       item == NR_SHMEM_THPS ||
-	       item == NR_SHMEM_PMDMAPPED;
+	       item == NR_SHMEM_PMDMAPPED ||
+	       item == NR_FILE_PMDMAPPED;
 }

diff --git a/mm/rmap.c b/mm/rmap.c
index 1c1b576c0627..5ebf16fae4b9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1223,7 +1223,8 @@ void page_add_file_rmap(struct page *page, bool compound)
 			__mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
 						nr_pages);
 		else
-			__inc_node_page_state(page, NR_FILE_PMDMAPPED);
+			__mod_lruvec_page_state(page, NR_FILE_PMDMAPPED,
+						nr_pages);
 	} else {
 		if (PageTransCompound(page) && page_mapping(page)) {
 			VM_WARN_ON_ONCE(!PageLocked(page));
@@ -1267,7 +1268,8 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 			__mod_lruvec_page_state(page, NR_SHMEM_PMDMAPPED,
 						-nr_pages);
 		else
-			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
+			__mod_lruvec_page_state(page, NR_FILE_PMDMAPPED,
+						-nr_pages);
 	} else {
 		if (!atomic_add_negative(-1, &page->_mapcount))
 			return;
From patchwork Mon Dec 28 16:41:10 2020
X-Patchwork-Id: 11991549
From: Muchun Song
To: gregkh@linuxfoundation.org, rafael@kernel.org, adobriyan@gmail.com, akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org, vdavydov.dev@gmail.com, hughd@google.com, shakeelb@google.com, guro@fb.com, samitolvanen@google.com, feng.tang@intel.com, neilb@suse.de, iamjoonsoo.kim@lge.com, rdunlap@infradead.org
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org, Muchun Song
Subject: [PATCH v6 7/7] mm: memcontrol: make the slab calculation consistent
Date: Tue, 29 Dec 2020 00:41:10 +0800
Message-Id: <20201228164110.2838-8-songmuchun@bytedance.com>
In-Reply-To: <20201228164110.2838-1-songmuchun@bytedance.com>
References: <20201228164110.2838-1-songmuchun@bytedance.com>

Although the slab ratio is one, we should still read the ratio from the
corresponding memory_stats entry rather than hard-coding it. Also, the
local variable size already holds the slab_unreclaimable value, so
there is no need to read it again.
Doing that directly would need code like the following:

	if (unlikely(memory_stats[i].idx == NR_SLAB_UNRECLAIMABLE_B)) {
-		size = memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
-		       memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B);
+		VM_BUG_ON(i < 1);
+		VM_BUG_ON(memory_stats[i - 1].idx != NR_SLAB_RECLAIMABLE_B);
+		size += memcg_page_state(memcg, memory_stats[i - 1].idx) *
+			memory_stats[i - 1].ratio;

which requires a series of VM_BUG_ONs (or comments) to guarantee that
the two slab items stay adjacent and in the right order. So it is
easier to implement this with a wrapper that does the unit conversion
in a single switch(). For more details of this discussion, see:

  https://lore.kernel.org/patchwork/patch/1348611/

This fixes the ratio inconsistency and gets rid of the ordering
guarantee.

Signed-off-by: Muchun Song
---
 mm/memcontrol.c | 105 +++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 66 insertions(+), 39 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index a40797a27f87..eec44918d373 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1511,49 +1511,71 @@ static bool mem_cgroup_wait_acct_move(struct mem_cgroup *memcg)

 struct memory_stat {
 	const char *name;
-	unsigned int ratio;
 	unsigned int idx;
 };

 static const struct memory_stat memory_stats[] = {
-	{ "anon", PAGE_SIZE, NR_ANON_MAPPED },
-	{ "file", PAGE_SIZE, NR_FILE_PAGES },
-	{ "kernel_stack", 1024, NR_KERNEL_STACK_KB },
-	{ "pagetables", PAGE_SIZE, NR_PAGETABLE },
-	{ "percpu", 1, MEMCG_PERCPU_B },
-	{ "sock", PAGE_SIZE, MEMCG_SOCK },
-	{ "shmem", PAGE_SIZE, NR_SHMEM },
-	{ "file_mapped", PAGE_SIZE, NR_FILE_MAPPED },
-	{ "file_dirty", PAGE_SIZE, NR_FILE_DIRTY },
-	{ "file_writeback", PAGE_SIZE, NR_WRITEBACK },
+	{ "anon", NR_ANON_MAPPED },
+	{ "file", NR_FILE_PAGES },
+	{ "kernel_stack", NR_KERNEL_STACK_KB },
+	{ "pagetables", NR_PAGETABLE },
+	{ "percpu", MEMCG_PERCPU_B },
+	{ "sock", MEMCG_SOCK },
+	{ "shmem", NR_SHMEM },
+	{ "file_mapped", NR_FILE_MAPPED },
+	{ "file_dirty", NR_FILE_DIRTY },
+	{ "file_writeback", NR_WRITEBACK },
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
-	{ "file_thp", PAGE_SIZE, NR_FILE_THPS },
-	{ "shmem_thp", PAGE_SIZE, NR_SHMEM_THPS },
+	{ "anon_thp", NR_ANON_THPS },
+	{ "file_thp", NR_FILE_THPS },
+	{ "shmem_thp", NR_SHMEM_THPS },
 #endif
-	{ "inactive_anon", PAGE_SIZE, NR_INACTIVE_ANON },
-	{ "active_anon", PAGE_SIZE, NR_ACTIVE_ANON },
-	{ "inactive_file", PAGE_SIZE, NR_INACTIVE_FILE },
-	{ "active_file", PAGE_SIZE, NR_ACTIVE_FILE },
-	{ "unevictable", PAGE_SIZE, NR_UNEVICTABLE },
-
-	/*
-	 * Note: The slab_reclaimable and slab_unreclaimable must be
-	 * together and slab_reclaimable must be in front.
-	 */
-	{ "slab_reclaimable", 1, NR_SLAB_RECLAIMABLE_B },
-	{ "slab_unreclaimable", 1, NR_SLAB_UNRECLAIMABLE_B },
+	{ "inactive_anon", NR_INACTIVE_ANON },
+	{ "active_anon", NR_ACTIVE_ANON },
+	{ "inactive_file", NR_INACTIVE_FILE },
+	{ "active_file", NR_ACTIVE_FILE },
+	{ "unevictable", NR_UNEVICTABLE },
+	{ "slab_reclaimable", NR_SLAB_RECLAIMABLE_B },
+	{ "slab_unreclaimable", NR_SLAB_UNRECLAIMABLE_B },

 	/* The memory events */
-	{ "workingset_refault_anon", 1, WORKINGSET_REFAULT_ANON },
-	{ "workingset_refault_file", 1, WORKINGSET_REFAULT_FILE },
-	{ "workingset_activate_anon", 1, WORKINGSET_ACTIVATE_ANON },
-	{ "workingset_activate_file", 1, WORKINGSET_ACTIVATE_FILE },
-	{ "workingset_restore_anon", 1, WORKINGSET_RESTORE_ANON },
-	{ "workingset_restore_file", 1, WORKINGSET_RESTORE_FILE },
-	{ "workingset_nodereclaim", 1, WORKINGSET_NODERECLAIM },
+	{ "workingset_refault_anon", WORKINGSET_REFAULT_ANON },
+	{ "workingset_refault_file", WORKINGSET_REFAULT_FILE },
+	{ "workingset_activate_anon", WORKINGSET_ACTIVATE_ANON },
+	{ "workingset_activate_file", WORKINGSET_ACTIVATE_FILE },
+	{ "workingset_restore_anon", WORKINGSET_RESTORE_ANON },
+	{ "workingset_restore_file", WORKINGSET_RESTORE_FILE },
+	{ "workingset_nodereclaim", WORKINGSET_NODERECLAIM },
 };

+/* Translate stat items to the correct unit for memory.stat output */
+static int memcg_page_state_unit(int item)
+{
+	switch (item) {
+	case MEMCG_PERCPU_B:
+	case NR_SLAB_RECLAIMABLE_B:
+	case NR_SLAB_UNRECLAIMABLE_B:
+	case WORKINGSET_REFAULT_ANON:
+	case WORKINGSET_REFAULT_FILE:
+	case WORKINGSET_ACTIVATE_ANON:
+	case WORKINGSET_ACTIVATE_FILE:
+	case WORKINGSET_RESTORE_ANON:
+	case WORKINGSET_RESTORE_FILE:
+	case WORKINGSET_NODERECLAIM:
+		return 1;
+	case NR_KERNEL_STACK_KB:
+		return SZ_1K;
+	default:
+		return PAGE_SIZE;
+	}
+}
+
+static inline unsigned long memcg_page_state_output(struct mem_cgroup *memcg,
+						    int item)
+{
+	return memcg_page_state(memcg, item) * memcg_page_state_unit(item);
+}
+
 static char *memory_stat_format(struct mem_cgroup *memcg)
 {
 	struct seq_buf s;
@@ -1577,13 +1599,12 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
 		u64 size;

-		size = memcg_page_state(memcg, memory_stats[i].idx);
-		size *= memory_stats[i].ratio;
+		size = memcg_page_state_output(memcg, memory_stats[i].idx);
 		seq_buf_printf(&s, "%s %llu\n", memory_stats[i].name, size);

 		if (unlikely(memory_stats[i].idx == NR_SLAB_UNRECLAIMABLE_B)) {
-			size = memcg_page_state(memcg, NR_SLAB_RECLAIMABLE_B) +
-			       memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE_B);
+			size += memcg_page_state_output(memcg,
+							NR_SLAB_RECLAIMABLE_B);
 			seq_buf_printf(&s, "slab %llu\n", size);
 		}
 	}
@@ -6377,6 +6398,12 @@ static int memory_stat_show(struct seq_file *m, void *v)
 }

 #ifdef CONFIG_NUMA
+static inline unsigned long lruvec_page_state_output(struct lruvec *lruvec,
+						     int item)
+{
+	return lruvec_page_state(lruvec, item) * memcg_page_state_unit(item);
+}
+
 static int memory_numa_stat_show(struct seq_file *m, void *v)
 {
 	int i;
@@ -6394,8 +6421,8 @@ static int memory_numa_stat_show(struct seq_file *m, void *v)
 			struct lruvec *lruvec;

 			lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
-			size = lruvec_page_state(lruvec, memory_stats[i].idx);
-			size *= memory_stats[i].ratio;
+			size = lruvec_page_state_output(lruvec,
+							memory_stats[i].idx);
 			seq_printf(m, " N%d=%llu", nid, size);
 		}
 		seq_putc(m, '\n');
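The shape of the fix can also be seen in isolation with a small
standalone sketch (plain C; the item names, raw values, and unit sizes
are simplified stand-ins for memcg_page_state_unit() and
memcg_page_state_output() above):

	#include <stdio.h>

	enum item { ANON, KERNEL_STACK_KB, SLAB_RECLAIMABLE_B, SLAB_UNRECLAIMABLE_B };

	/* One switch owns all unit knowledge, as memcg_page_state_unit() does. */
	static unsigned long unit_of(enum item i)
	{
		switch (i) {
		case SLAB_RECLAIMABLE_B:
		case SLAB_UNRECLAIMABLE_B:
			return 1;       /* already in bytes */
		case KERNEL_STACK_KB:
			return 1024;    /* kB -> bytes */
		default:
			return 4096;    /* pages -> bytes (4 KB pages assumed) */
		}
	}

	/* Raw counter values in each item's native unit (made-up numbers). */
	static unsigned long raw(enum item i)
	{
		static const unsigned long v[] = { 100, 16, 8192, 4096 };
		return v[i];
	}

	/* memcg_page_state_output() analog: convert at the point of output. */
	static unsigned long output(enum item i)
	{
		return raw(i) * unit_of(i);
	}

	int main(void)
	{
		/* "slab" is derived from two already-converted sizes, so the
		 * two slab entries no longer need to be adjacent or ordered. */
		unsigned long slab = output(SLAB_RECLAIMABLE_B) +
				     output(SLAB_UNRECLAIMABLE_B);
		printf("anon %lu\nslab %lu\n", output(ANON), slab);
		return 0;
	}

In the kernel the same derivation happens in memory_stat_format():
size, which already holds the printed slab_unreclaimable value, just
accumulates memcg_page_state_output(memcg, NR_SLAB_RECLAIMABLE_B).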