From patchwork Sun Dec 6 08:23:05 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Muchun Song
X-Patchwork-Id: 11953803
From: Muchun Song <songmuchun@bytedance.com>
To: gregkh@linuxfoundation.org, rafael@kernel.org, adobriyan@gmail.com,
	akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@kernel.org,
	vdavydov.dev@gmail.com, hughd@google.com, will@kernel.org, guro@fb.com,
	rppt@kernel.org, tglx@linutronix.de, esyr@redhat.com, peterx@redhat.com,
	krisman@collabora.com, surenb@google.com, avagin@openvz.org,
	elver@google.com, rdunlap@infradead.org, iamjoonsoo.kim@lge.com
Cc: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v2 04/12] mm: memcontrol: convert NR_SHMEM_THPS account to pages
Date: Sun, 6 Dec 2020 16:23:05 +0800
Message-Id: <20201206082318.11532-9-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20201206082318.11532-1-songmuchun@bytedance.com>
References: <20201206082318.11532-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Convert NR_SHMEM_THPS account to pages

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 drivers/base/node.c |  3 +--
 fs/proc/meminfo.c   |  2 +-
 mm/filemap.c        |  2 +-
 mm/huge_memory.c    |  3 ++-
 mm/khugepaged.c     |  2 +-
 mm/memcontrol.c     | 26 ++------------------------
 mm/page_alloc.c     |  2 +-
 mm/shmem.c          |  3 ++-
 8 files changed, 11 insertions(+), 32 deletions(-)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 2db28acdaa4f..3e1094717e40 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -462,8 +462,7 @@ static ssize_t node_read_meminfo(struct device *dev,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
			     ,
			     nid, K(node_page_state(pgdat, NR_ANON_THPS)),
-			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS) *
-				    HPAGE_PMD_NR),
+			     nid, K(node_page_state(pgdat, NR_SHMEM_THPS)),
			     nid, K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
				    HPAGE_PMD_NR),
			     nid, K(node_page_state(pgdat, NR_FILE_THPS)),
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index f4157f26cbf5..b4d8a6ee822d 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -131,7 +131,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "AnonHugePages:  ",
		    global_node_page_state(NR_ANON_THPS));
 	show_val_kb(m, "ShmemHugePages: ",
-		    global_node_page_state(NR_SHMEM_THPS) * HPAGE_PMD_NR);
+		    global_node_page_state(NR_SHMEM_THPS));
 	show_val_kb(m, "ShmemPmdMapped: ",
		    global_node_page_state(NR_SHMEM_PMDMAPPED) * HPAGE_PMD_NR);
 	show_val_kb(m, "FileHugePages:  ",
diff --git a/mm/filemap.c b/mm/filemap.c
index c4dcb1144883..5fdefbbc1bc2 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -204,7 +204,7 @@ static void unaccount_page_cache_page(struct address_space *mapping,
 	if (PageSwapBacked(page)) {
 		__mod_lruvec_page_state(page, NR_SHMEM, -nr);
 		if (PageTransHuge(page))
-			__dec_lruvec_page_state(page, NR_SHMEM_THPS);
+			__mod_lruvec_page_state(page, NR_SHMEM_THPS, -HPAGE_PMD_NR);
 	} else if (PageTransHuge(page)) {
 		__mod_lruvec_page_state(page, NR_FILE_THPS, -HPAGE_PMD_NR);
 		filemap_nr_thps_dec(mapping);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 37840bdeaad0..0e8541bd9f50 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2746,7 +2746,8 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
 			if (PageSwapBacked(head))
-				__dec_lruvec_page_state(head, NR_SHMEM_THPS);
+				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+							-HPAGE_PMD_NR);
 			else
 				__mod_lruvec_page_state(head, NR_FILE_THPS,
							-HPAGE_PMD_NR);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1e1ced2208d0..4fe79ccfc312 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1857,7 +1857,7 @@ static void collapse_file(struct mm_struct *mm,
 	}
 
 	if (is_shmem)
-		__inc_lruvec_page_state(new_page, NR_SHMEM_THPS);
+		__mod_lruvec_page_state(new_page, NR_SHMEM_THPS, HPAGE_PMD_NR);
 	else {
 		__mod_lruvec_page_state(new_page, NR_FILE_THPS, HPAGE_PMD_NR);
 		filemap_nr_thps_inc(mapping);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c4557de2b211..6d4365d2fd1c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1497,7 +1497,7 @@ struct memory_stat {
 	unsigned int idx;
 };
 
-static struct memory_stat memory_stats[] = {
+static const struct memory_stat memory_stats[] = {
 	{ "anon", PAGE_SIZE, NR_ANON_MAPPED },
 	{ "file", PAGE_SIZE, NR_FILE_PAGES },
 	{ "kernel_stack", 1024, NR_KERNEL_STACK_KB },
@@ -1508,14 +1508,9 @@ static struct memory_stat memory_stats[] = {
 	{ "file_dirty", PAGE_SIZE, NR_FILE_DIRTY },
 	{ "file_writeback", PAGE_SIZE, NR_WRITEBACK },
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	/*
-	 * The ratio will be initialized in memory_stats_init(). Because
-	 * on some architectures, the macro of HPAGE_PMD_SIZE is not
-	 * constant(e.g. powerpc).
-	 */
 	{ "anon_thp", PAGE_SIZE, NR_ANON_THPS },
 	{ "file_thp", PAGE_SIZE, NR_FILE_THPS },
-	{ "shmem_thp", 0, NR_SHMEM_THPS },
+	{ "shmem_thp", PAGE_SIZE, NR_SHMEM_THPS },
 #endif
 	{ "inactive_anon", PAGE_SIZE, NR_INACTIVE_ANON },
 	{ "active_anon", PAGE_SIZE, NR_ACTIVE_ANON },
@@ -1540,23 +1535,6 @@ static struct memory_stat memory_stats[] = {
 	{ "workingset_nodereclaim", 1, WORKINGSET_NODERECLAIM },
 };
 
-static int __init memory_stats_init(void)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(memory_stats); i++) {
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-		if (memory_stats[i].idx == NR_SHMEM_THPS)
-			memory_stats[i].ratio = HPAGE_PMD_SIZE;
-#endif
-		VM_BUG_ON(!memory_stats[i].ratio);
-		VM_BUG_ON(memory_stats[i].idx >= MEMCG_NR_STAT);
-	}
-
-	return 0;
-}
-pure_initcall(memory_stats_init);
-
 static char *memory_stat_format(struct mem_cgroup *memcg)
 {
 	struct seq_buf s;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f97ca98d361f..b6a79196e870 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5567,7 +5567,7 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
			K(node_page_state(pgdat, NR_WRITEBACK)),
			K(node_page_state(pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			K(node_page_state(pgdat, NR_SHMEM_THPS) * HPAGE_PMD_NR),
+			K(node_page_state(pgdat, NR_SHMEM_THPS)),
			K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED) *
					HPAGE_PMD_NR),
			K(node_page_state(pgdat, NR_ANON_THPS)),
diff --git a/mm/shmem.c b/mm/shmem.c
index 5da4f1a3e663..ea5d8c9ccb5b 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -713,7 +713,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		}
 		if (PageTransHuge(page)) {
 			count_vm_event(THP_FILE_ALLOC);
-			__inc_lruvec_page_state(page, NR_SHMEM_THPS);
+			__mod_lruvec_page_state(page, NR_SHMEM_THPS,
+						HPAGE_PMD_NR);
 		}
 		mapping->nrpages += nr;
 		__mod_lruvec_page_state(page, NR_FILE_PAGES, nr);
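
Aside for readers of this archive (illustrative only, not part of the patch):
with this change NR_SHMEM_THPS is kept in units of base pages rather than in
units of THPs, so writers add or subtract HPAGE_PMD_NR per huge page and
readers stop multiplying by HPAGE_PMD_NR. Keeping the counter in base pages
also gives memory_stats[] a constant PAGE_SIZE ratio, which is why the patch
can drop memory_stats_init() and mark the array const. The standalone C
sketch below only models the unit change; the 4 KiB page size and the
HPAGE_PMD_NR value of 512 are assumptions (x86-64 defaults), and none of it
is kernel code.

#include <stdio.h>

/* Assumed x86-64 defaults; not taken from the patch. */
#define PAGE_SIZE_KB	4UL	/* 4 KiB base pages */
#define HPAGE_PMD_NR	512UL	/* base pages per PMD-sized (2 MiB) THP */

int main(void)
{
	unsigned long nr_thps = 3;	/* example: three shmem THPs */

	/* Old convention: the counter stored THPs; readers scaled it. */
	unsigned long old_counter = nr_thps;
	unsigned long old_kb = old_counter * HPAGE_PMD_NR * PAGE_SIZE_KB;

	/* New convention: the counter already stores base pages. */
	unsigned long new_counter = nr_thps * HPAGE_PMD_NR;
	unsigned long new_kb = new_counter * PAGE_SIZE_KB;

	/* Both conventions report the same 6144 kB of ShmemHugePages. */
	printf("old: %lu kB, new: %lu kB\n", old_kb, new_kb);

	return old_kb == new_kb ? 0 : 1;
}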