From patchwork Thu Aug 22 22:40:14 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13774301
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
    hanchuanhua@oppo.com, ioworker0@gmail.com, kaleshsingh@google.com,
    kasong@tencent.com, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
    v-songbaohua@oppo.com, yuanshuai@oppo.com, ziy@nvidia.com,
    usamaarif642@gmail.com
Subject: [PATCH v3 1/2] mm: collect the number of anon large folios
Date: Fri, 23 Aug 2024 10:40:14 +1200
Message-Id: <20240822224015.93186-2-21cnbao@gmail.com>
X-Mailer: git-send-email 2.39.3 (Apple Git-146)
In-Reply-To: <20240822224015.93186-1-21cnbao@gmail.com>
References: <20240822224015.93186-1-21cnbao@gmail.com>
From: Barry Song

Anon large folios come from three places:

1. new large folios allocated in page faults, which call
   folio_add_new_anon_rmap() for rmap;
2. a large folio being split into multiple lower-order large folios;
3. a large folio being migrated to a new large folio.

In all three cases above, we increase nr_anon by 1. Anon large folios
go away either by being split or by being freed; in these cases, we
reduce the count by 1.

Folios added to the swap cache without an anonymous mapping won't be
counted. This aligns with the AnonPages statistics in /proc/meminfo.
However, folios that have been fully unmapped but not yet freed are
counted. Unlike AnonPages, this can help identify anonymous memory
leaks, such as when an anon folio is still pinned after being unmapped.

Signed-off-by: Barry Song
Acked-by: David Hildenbrand
---
 Documentation/admin-guide/mm/transhuge.rst |  5 +++++
 include/linux/huge_mm.h                    | 15 +++++++++++++--
 mm/huge_memory.c                           | 13 ++++++++++---
 mm/migrate.c                               |  4 ++++
 mm/page_alloc.c                            |  5 ++++-
 mm/rmap.c                                  |  1 +
 6 files changed, 37 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 79435c537e21..b78f2148b242 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -551,6 +551,11 @@ split_deferred
         it would free up some memory. Pages on split queue are going to
         be split under memory pressure, if splitting is possible.
 
+nr_anon
+       the number of transparent anon huge pages we have in the whole system.
+       These huge pages could be entirely mapped or have partially
+       unmapped/unused subpages.
+
 As the system ages, allocating huge pages may be expensive as the
 system uses memory compaction to copy data around memory to free a
 huge page for use. There are some counters in ``/proc/vmstat`` to help
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4c32058cacfe..2ee2971e4e10 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -126,6 +126,7 @@ enum mthp_stat_item {
 	MTHP_STAT_SPLIT,
 	MTHP_STAT_SPLIT_FAILED,
 	MTHP_STAT_SPLIT_DEFERRED,
+	MTHP_STAT_NR_ANON,
 	__MTHP_STAT_COUNT
 };
 
@@ -136,14 +137,24 @@ struct mthp_stat {
 
 DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
 
-static inline void count_mthp_stat(int order, enum mthp_stat_item item)
+static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
 {
 	if (order <= 0 || order > PMD_ORDER)
 		return;
 
-	this_cpu_inc(mthp_stats.stats[order][item]);
+	this_cpu_add(mthp_stats.stats[order][item], delta);
+}
+
+static inline void count_mthp_stat(int order, enum mthp_stat_item item)
+{
+	mod_mthp_stat(order, item, 1);
 }
+
 #else
+static inline void mod_mthp_stat(int order, enum mthp_stat_item item, int delta)
+{
+}
+
 static inline void count_mthp_stat(int order, enum mthp_stat_item item)
 {
 }
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 513e7c87efee..26ad75fcda62 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ DEFINE_MTHP_STAT_ATTR(shmem_fallback_charge, MTHP_STAT_SHMEM_FALLBACK_CHARGE);
 DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
 DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
 DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
+DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
 
 static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
@@ -609,6 +610,7 @@ static struct attribute *anon_stats_attrs[] = {
 	&split_attr.attr,
 	&split_failed_attr.attr,
 	&split_deferred_attr.attr,
+	&nr_anon_attr.attr,
 	NULL,
 };
 
@@ -3314,8 +3316,9 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	struct deferred_split *ds_queue = get_deferred_split_queue(folio);
 	/* reset xarray order to new order after split */
 	XA_STATE_ORDER(xas, &folio->mapping->i_pages, folio->index, new_order);
-	struct anon_vma *anon_vma = NULL;
+	bool is_anon = folio_test_anon(folio);
 	struct address_space *mapping = NULL;
+	struct anon_vma *anon_vma = NULL;
 	int order = folio_order(folio);
 	int extra_pins, ret;
 	pgoff_t end;
@@ -3327,7 +3330,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	if (new_order >= folio_order(folio))
 		return -EINVAL;
 
-	if (folio_test_anon(folio)) {
+	if (is_anon) {
 		/* order-1 is not supported for anonymous THP. */
 		if (new_order == 1) {
 			VM_WARN_ONCE(1, "Cannot split to order-1 folio");
@@ -3367,7 +3370,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	if (folio_test_writeback(folio))
 		return -EBUSY;
 
-	if (folio_test_anon(folio)) {
+	if (is_anon) {
 		/*
 		 * The caller does not necessarily hold an mmap_lock that would
 		 * prevent the anon_vma disappearing so we first we take a
@@ -3480,6 +3483,10 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 			}
 		}
 
+		if (is_anon) {
+			mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
+			mod_mthp_stat(new_order, MTHP_STAT_NR_ANON, 1 << (order - new_order));
+		}
 		__split_huge_page(page, list, end, new_order);
 		ret = 0;
 	} else {
diff --git a/mm/migrate.c b/mm/migrate.c
index 4f55f4930fe8..3cc8555de6d6 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -450,6 +450,8 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 		/* No turning back from here */
 		newfolio->index = folio->index;
 		newfolio->mapping = folio->mapping;
+		if (folio_test_anon(folio) && folio_test_large(folio))
+			mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
 		if (folio_test_swapbacked(folio))
 			__folio_set_swapbacked(newfolio);
 
@@ -474,6 +476,8 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 	 */
 	newfolio->index = folio->index;
 	newfolio->mapping = folio->mapping;
+	if (folio_test_anon(folio) && folio_test_large(folio))
+		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
 	folio_ref_add(newfolio, nr);	/* add cache reference */
 	if (folio_test_swapbacked(folio)) {
 		__folio_set_swapbacked(newfolio);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8a67d760b71a..7dcb0713eb57 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1084,8 +1084,11 @@ __always_inline bool free_pages_prepare(struct page *page,
 			(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 		}
 	}
-	if (PageMappingFlags(page))
+	if (PageMappingFlags(page)) {
+		if (PageAnon(page))
+			mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
 		page->mapping = NULL;
+	}
 	if (is_check_pages_enabled()) {
 		if (free_page_is_bad(page))
 			bad++;
diff --git a/mm/rmap.c b/mm/rmap.c
index 1103a536e474..78529cf0fd66 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1467,6 +1467,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	}
 
 	__folio_mod_stat(folio, nr, nr_pmdmapped);
+	mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
 }
 
 static __always_inline void __folio_add_file_rmap(struct folio *folio,
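
To illustrate the split accounting in the huge_memory.c hunk above:
splitting one order-`order` folio yields 1 << (order - new_order)
folios of order `new_order`, so the patch subtracts 1 at the old order
and adds 1 << (order - new_order) at the new order, leaving the total
number of base pages covered by counted folios unchanged. The following
is a minimal userspace sketch of that bookkeeping, not kernel code;
nr_anon[], mod_stat() and pages_counted() are stand-ins for mthp_stats
and mod_mthp_stat() (note the real mod_mthp_stat() also skips order 0
and orders above PMD_ORDER, which this sketch glosses over):

#include <assert.h>
#include <stdio.h>

#define MAX_ORDER 10

static long nr_anon[MAX_ORDER + 1];	/* stands in for mthp_stats */

/* models mod_mthp_stat(order, MTHP_STAT_NR_ANON, delta) */
static void mod_stat(int order, int delta)
{
	nr_anon[order] += delta;
}

/* total base pages covered by all counted folios */
static long pages_counted(void)
{
	long pages = 0;

	for (int order = 0; order <= MAX_ORDER; order++)
		pages += nr_anon[order] << order;
	return pages;
}

int main(void)
{
	/* fault in one 2MB (order-9) anon folio: +1 at order 9 */
	mod_stat(9, 1);

	/* split it to order-2 folios: -1 at 9, +(1 << (9 - 2)) at 2 */
	mod_stat(9, -1);
	mod_stat(2, 1 << (9 - 2));

	/* 512 base pages were counted before the split, still 512 after */
	assert(pages_counted() == 512);
	printf("order-2 folios after split: %ld\n", nr_anon[2]);	/* 128 */
	return 0;
}

The same conservation argument explains the migration hunks: the old
folio's count is dropped when it is eventually freed via
free_pages_prepare(), so __folio_migrate_mapping() must add 1 for the
new folio to keep the totals stable.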
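
Since nr_anon_attr is added to anon_stats_attrs, the counter should be
exposed alongside the other per-size mTHP stats. Assuming the existing
sysfs layout of /sys/kernel/mm/transparent_hugepage/hugepages-<size>kB/stats/
(the path below hard-codes 2048kB, i.e. PMD-sized folios on a 4K-page
system, purely as an example), a minimal reader could look like this:

#include <stdio.h>

int main(void)
{
	/* assumed path; substitute the hugepages-<size>kB directory of interest */
	const char *path =
		"/sys/kernel/mm/transparent_hugepage/hugepages-2048kB/stats/nr_anon";
	char buf[64];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(buf, sizeof(buf), f))
		printf("2MB anon folios in the system: %s", buf);
	fclose(f);
	return 0;
}

Comparing this count against AnonPages in /proc/meminfo is what makes
unmapped-but-still-pinned anon folios visible, per the commit message.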