From patchwork Tue Jul 16 13:59:06 2024
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13734541
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Hugh Dickins, Jonathan Corbet,
 "Matthew Wilcox (Oracle)", David Hildenbrand, Barry Song, Lance Yang,
 Baolin Wang, Gavin Shan
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/3] mm: mTHP stats for pagecache folio allocations
Date: Tue, 16 Jul 2024 14:59:06 +0100
Message-ID: <20240716135907.4047689-4-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240716135907.4047689-1-ryan.roberts@arm.com>
References: <20240716135907.4047689-1-ryan.roberts@arm.com>
MIME-Version: 1.0

Expose 3 new mTHP stats for file (pagecache) folio allocations:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_alloc
  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback
  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback_charge

This will provide some insight into the sizes of large folios being
allocated for file-backed memory, and how often allocation fails.

All non-order-0 (and most order-0) folio allocations are currently done
through filemap_alloc_folio(), and folios are charged in a subsequent
call to filemap_add_folio(). So count file_fallback when allocation
fails in filemap_alloc_folio(), and count file_alloc or
file_fallback_charge in filemap_add_folio(), based on whether charging
succeeded. Some users of filemap_add_folio() allocate their own order-0
folio by other means, so an allocation failure would not be counted in
those cases; but we also don't care about order-0 allocations. This
approach feels like it should be good enough and doesn't require any
(impractically large) refactoring.

The existing mTHP stats interface is reused to provide consistency to
users, and because the interface is the same, the same infrastructure
can be reused on the kernel side.
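
As an illustration of how the new counters might be consumed from
userspace (a hypothetical sketch, not part of this patch; the program
structure and error handling are arbitrary):

	/*
	 * Print every per-size file_alloc counter exposed by this
	 * patch; the same pattern works for file_fallback and
	 * file_fallback_charge.
	 */
	#include <glob.h>
	#include <stdio.h>

	int main(void)
	{
		const char *pat = "/sys/kernel/mm/transparent_hugepage/"
				  "hugepages-*kB/stats/file_alloc";
		glob_t g;
		size_t i;

		if (glob(pat, 0, NULL, &g))
			return 1;

		for (i = 0; i < g.gl_pathc; i++) {
			FILE *f = fopen(g.gl_pathv[i], "r");
			unsigned long long val;

			if (f && fscanf(f, "%llu", &val) == 1)
				printf("%s: %llu\n", g.gl_pathv[i], val);
			if (f)
				fclose(f);
		}

		globfree(&g);
		return 0;
	}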
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 Documentation/admin-guide/mm/transhuge.rst | 13 +++++++++++++
 include/linux/huge_mm.h                    |  3 +++
 include/linux/pagemap.h                    | 16 ++++++++++++++--
 mm/filemap.c                               |  6 ++++--
 mm/huge_memory.c                           |  7 +++++++
 5 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 058485daf186..d4857e457add 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -512,6 +512,19 @@ shmem_fallback_charge
 	falls back to using small pages even though the allocation was
 	successful.
 
+file_alloc
+	is incremented every time a file huge page is successfully
+	allocated.
+
+file_fallback
+	is incremented if a file huge page is attempted to be allocated
+	but fails and instead falls back to using small pages.
+
+file_fallback_charge
+	is incremented if a file huge page cannot be charged and instead
+	falls back to using small pages even though the allocation was
+	successful.
+
 split
 	is incremented every time a huge page is successfully split into
 	smaller orders. This can happen for a variety of reasons but a
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b8c63c3e967f..4f9109fcdded 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -123,6 +123,9 @@ enum mthp_stat_item {
 	MTHP_STAT_SHMEM_ALLOC,
 	MTHP_STAT_SHMEM_FALLBACK,
 	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
+	MTHP_STAT_FILE_ALLOC,
+	MTHP_STAT_FILE_FALLBACK,
+	MTHP_STAT_FILE_FALLBACK_CHARGE,
 	MTHP_STAT_SPLIT,
 	MTHP_STAT_SPLIT_FAILED,
 	MTHP_STAT_SPLIT_DEFERRED,
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 6e2f72d03176..95a147b5d117 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -562,14 +562,26 @@ static inline void *detach_page_private(struct page *page)
 }
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
 #else
-static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+static inline struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	return folio_alloc_noprof(gfp, order);
 }
 #endif
 
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+{
+	struct folio *folio;
+
+	folio = __filemap_alloc_folio_noprof(gfp, order);
+
+	if (!folio)
+		count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
+
+	return folio;
+}
+
 #define filemap_alloc_folio(...)				\
 	alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))
diff --git a/mm/filemap.c b/mm/filemap.c
index 53d5d0410b51..131d514fca29 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -963,6 +963,8 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 	int ret;
 
 	ret = mem_cgroup_charge(folio, NULL, gfp);
+	count_mthp_stat(folio_order(folio),
+			ret ? MTHP_STAT_FILE_FALLBACK_CHARGE : MTHP_STAT_FILE_ALLOC);
 	if (ret)
 		return ret;
 
@@ -990,7 +992,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct folio *folio;
@@ -1007,7 +1009,7 @@ struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 	}
 	return folio_alloc_noprof(gfp, order);
 }
-EXPORT_SYMBOL(filemap_alloc_folio_noprof);
+EXPORT_SYMBOL(__filemap_alloc_folio_noprof);
 #endif
 
 /*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 578ac212c172..26d558e3e80f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -608,7 +608,14 @@ static struct attribute_group anon_stats_attr_grp = {
 	.attrs = anon_stats_attrs,
 };
 
+DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
+
 static struct attribute *file_stats_attrs[] = {
+	&file_alloc_attr.attr,
+	&file_fallback_attr.attr,
+	&file_fallback_charge_attr.attr,
 #ifdef CONFIG_SHMEM
 	&shmem_alloc_attr.attr,
 	&shmem_fallback_attr.attr,
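
For reference, a condensed, hypothetical caller pattern (illustration
only, not code from this patch; the function name and the fallback
policy are made up) showing how the three counters line up with the
allocate/charge sequence that pagecache code typically performs:

	/*
	 * A failed large allocation bumps file_fallback for 'order';
	 * filemap_add_folio() then bumps file_alloc on a successful
	 * charge, or file_fallback_charge if the memcg charge fails.
	 */
	static struct folio *example_add_pagecache_folio(struct address_space *mapping,
							 pgoff_t index, gfp_t gfp,
							 unsigned int order)
	{
		struct folio *folio;

		folio = filemap_alloc_folio(gfp, order);
		if (!folio && order) {
			/* Fall back to a small (order-0) page. */
			folio = filemap_alloc_folio(gfp, 0);
		}
		if (!folio)
			return NULL;

		/* Charge and insert into the pagecache. */
		if (filemap_add_folio(mapping, folio, index, gfp)) {
			folio_put(folio);
			return NULL;
		}
		return folio;
	}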