From patchwork Thu Aug 22 22:40:15 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13774302
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
 hanchuanhua@oppo.com, ioworker0@gmail.com, kaleshsingh@google.com,
 kasong@tencent.com, linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
 v-songbaohua@oppo.com, yuanshuai@oppo.com, ziy@nvidia.com,
 usamaarif642@gmail.com
Subject: [PATCH v3 2/2] mm: collect the number of anon large folios partially
 mapped
Date: Fri, 23 Aug 2024 10:40:15 +1200
Message-Id: <20240822224015.93186-3-21cnbao@gmail.com>
In-Reply-To: <20240822224015.93186-1-21cnbao@gmail.com>
References: <20240822224015.93186-1-21cnbao@gmail.com>

From: Barry Song

When an mTHP is added to the deferred_list because it is partially
mapped, its unmapped subpages are unused, wasting memory and
potentially increasing memory reclamation pressure. Detailing exactly
how the unmapping occurred is difficult and not very useful, so we
adopt a simple approach: each time an mTHP enters the deferred_list,
we increment the count by 1; whenever it leaves for any reason, we
decrement the count by 1.

Signed-off-by: Barry Song
Acked-by: David Hildenbrand
---
 Documentation/admin-guide/mm/transhuge.rst | 5 +++++
 include/linux/huge_mm.h                    | 1 +
 mm/huge_memory.c                           | 6 ++++++
 3 files changed, 12 insertions(+)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index b78f2148b242..b1c948c7de9d 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -556,6 +556,11 @@ nr_anon
        These huge pages could be entirely mapped or have partially
        unmapped/unused subpages.
 
+nr_anon_partially_mapped
+       the number of transparent anon huge pages which have been partially
+       unmapped and put onto the split queue. Those unmapped subpages are
+       also unused and temporarily wasting memory.
+
 As the system ages, allocating huge pages may be expensive as the
 system uses memory compaction to copy data around memory to free a
 huge page for use. There are some counters in ``/proc/vmstat`` to help

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2ee2971e4e10..4ff4e7fedc95 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -127,6 +127,7 @@ enum mthp_stat_item {
 	MTHP_STAT_SPLIT_FAILED,
 	MTHP_STAT_SPLIT_DEFERRED,
 	MTHP_STAT_NR_ANON,
+	MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
 	__MTHP_STAT_COUNT
 };
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 26ad75fcda62..b5ee950df524 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -598,6 +598,7 @@ DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
 DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
 DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
 DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
+DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
 
 static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
@@ -611,6 +612,7 @@ static struct attribute *anon_stats_attrs[] = {
 	&split_failed_attr.attr,
 	&split_deferred_attr.attr,
 	&nr_anon_attr.attr,
+	&nr_anon_partially_mapped_attr.attr,
 	NULL,
 };
 
@@ -3457,6 +3459,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
 	if (folio_order(folio) > 1 &&
 	    !list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
+		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 		/*
 		 * Reinitialize page_deferred_list after removing the
 		 * page from the split_queue, otherwise a subsequent
@@ -3523,6 +3526,7 @@ void __folio_undo_large_rmappable(struct folio *folio)
 	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
 	if (!list_empty(&folio->_deferred_list)) {
 		ds_queue->split_queue_len--;
+		mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 		list_del_init(&folio->_deferred_list);
 	}
 	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
@@ -3564,6 +3568,7 @@ void deferred_split_folio(struct folio *folio)
 	if (folio_test_pmd_mappable(folio))
 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 	count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+	mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, 1);
 	list_add_tail(&folio->_deferred_list, &ds_queue->split_queue);
 	ds_queue->split_queue_len++;
 #ifdef CONFIG_MEMCG
@@ -3611,6 +3616,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
 			list_move(&folio->_deferred_list, &list);
 		} else {
 			/* We lost race with folio_put() */
+			mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
 			list_del_init(&folio->_deferred_list);
 			ds_queue->split_queue_len--;
 		}