From patchwork Tue Jul 16 13:59:04 2024
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Hugh Dickins, Jonathan Corbet, "Matthew Wilcox (Oracle)",
    David Hildenbrand, Barry Song, Lance Yang, Baolin Wang, Gavin Shan
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 1/3] mm: Cleanup count_mthp_stat() definition
Date: Tue, 16 Jul 2024 14:59:04 +0100
Message-ID: <20240716135907.4047689-2-ryan.roberts@arm.com>
In-Reply-To: <20240716135907.4047689-1-ryan.roberts@arm.com>
References: <20240716135907.4047689-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
Let's move count_mthp_stat() so that it's always defined, even when THP is
disabled. Previously, uses of the function in files such as shmem.c, which
are compiled even when THP is disabled, required ugly THP ifdeffery. With
this cleanup, we can remove those ifdefs and the function resolves to a nop
when THP is disabled.

I shortly plan to call count_mthp_stat() from more THP-invariant source
files.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Acked-by: Barry Song
Reviewed-by: Baolin Wang
Reviewed-by: Lance Yang
Acked-by: David Hildenbrand
---
 include/linux/huge_mm.h | 70 ++++++++++++++++++++---------------------
 mm/memory.c             |  2 --
 mm/shmem.c              |  6 ----
 3 files changed, 35 insertions(+), 43 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e25d9ebfdf89..b8c63c3e967f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -114,6 +114,41 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
 #define HPAGE_PUD_MASK	(~(HPAGE_PUD_SIZE - 1))
 #define HPAGE_PUD_SIZE	((1UL) << HPAGE_PUD_SHIFT)
 
+enum mthp_stat_item {
+	MTHP_STAT_ANON_FAULT_ALLOC,
+	MTHP_STAT_ANON_FAULT_FALLBACK,
+	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+	MTHP_STAT_SWPOUT,
+	MTHP_STAT_SWPOUT_FALLBACK,
+	MTHP_STAT_SHMEM_ALLOC,
+	MTHP_STAT_SHMEM_FALLBACK,
+	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
+	MTHP_STAT_SPLIT,
+	MTHP_STAT_SPLIT_FAILED,
+	MTHP_STAT_SPLIT_DEFERRED,
+	__MTHP_STAT_COUNT
+};
+
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
+struct mthp_stat {
+	unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
+};
+
+DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
+
+static inline void count_mthp_stat(int order, enum mthp_stat_item item)
+{
+	if (order <= 0 || order > PMD_ORDER)
+		return;
+
+	this_cpu_inc(mthp_stats.stats[order][item]);
+}
+#else
+static inline void count_mthp_stat(int order, enum mthp_stat_item item)
+{
+}
+#endif
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 
 extern unsigned long transparent_hugepage_flags;
@@ -269,41 +304,6 @@ struct thpsize {
 
 #define to_thpsize(kobj)	container_of(kobj, struct thpsize, kobj)
 
-enum mthp_stat_item {
-	MTHP_STAT_ANON_FAULT_ALLOC,
-	MTHP_STAT_ANON_FAULT_FALLBACK,
-	MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
-	MTHP_STAT_SWPOUT,
-	MTHP_STAT_SWPOUT_FALLBACK,
-	MTHP_STAT_SHMEM_ALLOC,
-	MTHP_STAT_SHMEM_FALLBACK,
-	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
-	MTHP_STAT_SPLIT,
-	MTHP_STAT_SPLIT_FAILED,
-	MTHP_STAT_SPLIT_DEFERRED,
-	__MTHP_STAT_COUNT
-};
-
-struct mthp_stat {
-	unsigned long stats[ilog2(MAX_PTRS_PER_PTE) + 1][__MTHP_STAT_COUNT];
-};
-
-#ifdef CONFIG_SYSFS
-DECLARE_PER_CPU(struct mthp_stat, mthp_stats);
-
-static inline void count_mthp_stat(int order, enum mthp_stat_item item)
-{
-	if (order <= 0 || order > PMD_ORDER)
-		return;
-
-	this_cpu_inc(mthp_stats.stats[order][item]);
-}
-#else
-static inline void count_mthp_stat(int order, enum mthp_stat_item item)
-{
-}
-#endif
-
 #define transparent_hugepage_use_zero_page()				\
 	(transparent_hugepage_flags &					\
 	 (1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG))

diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ ... @@
 	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
-#endif
 	folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
 setpte:

diff --git a/mm/shmem.c b/mm/shmem.c
index f24dfbd387ba..fce1343f44e6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1776,9 +1776,7 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 
 			if (pages == HPAGE_PMD_NR)
 				count_vm_event(THP_FILE_FALLBACK);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			count_mthp_stat(order, MTHP_STAT_SHMEM_FALLBACK);
-#endif
 			order = next_order(&suitable_orders, order);
 		}
 	} else {
@@ -1803,10 +1801,8 @@ static struct folio *shmem_alloc_and_add_folio(struct vm_fault *vmf,
 				count_vm_event(THP_FILE_FALLBACK);
 				count_vm_event(THP_FILE_FALLBACK_CHARGE);
 			}
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK);
 			count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_FALLBACK_CHARGE);
-#endif
 		}
 		goto unlock;
 	}
@@ -2180,9 +2176,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	if (!IS_ERR(folio)) {
 		if (folio_test_pmd_mappable(folio))
 			count_vm_event(THP_FILE_ALLOC);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		count_mthp_stat(folio_order(folio), MTHP_STAT_SHMEM_ALLOC);
-#endif
 		goto alloced;
 	}
 	if (PTR_ERR(folio) == -EEXIST)
From patchwork Tue Jul 16 13:59:05 2024
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Hugh Dickins, Jonathan Corbet, "Matthew Wilcox (Oracle)",
    David Hildenbrand, Barry Song, Lance Yang, Baolin Wang, Gavin Shan
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 2/3] mm: Tidy up shmem mTHP controls and stats
Date: Tue, 16 Jul 2024 14:59:05 +0100
Message-ID: <20240716135907.4047689-3-ryan.roberts@arm.com>
In-Reply-To: <20240716135907.4047689-1-ryan.roberts@arm.com>
References: <20240716135907.4047689-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
Previously we had a situation where shmem mTHP controls and stats were
not exposed for some supported sizes and were exposed for some
unsupported sizes. So let's clean that up.

Anon mTHP can support all large orders [2, PMD_ORDER]. But shmem can
support all large orders [1, MAX_PAGECACHE_ORDER]. However, per-size
shmem controls and stats were previously being exposed for all the anon
mTHP orders, meaning order-1 was not present, and for arm64 64K base
pages, orders 12 and 13 were exposed but were not supported internally.

Tidy this all up by defining ctrl and stats attribute groups for anon
and file separately.
Anon ctrl and stats groups are populated for all orders in
THP_ORDERS_ALL_ANON, and file ctrl and stats groups are populated for
all orders in THP_ORDERS_ALL_FILE_DEFAULT.

The side-effect of all this is that different hugepage-*kB directories
contain different sets of controls and stats, depending on which memory
types support that size. This approach is preferred over the
alternative, which is to populate dummy controls and stats for memory
types that do not support a given size.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 mm/huge_memory.c | 110 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 80 insertions(+), 30 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f4be468e06a4..578ac212c172 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -463,8 +463,8 @@ static void thpsize_release(struct kobject *kobj);
 static DEFINE_SPINLOCK(huge_anon_orders_lock);
 static LIST_HEAD(thpsize_list);
 
-static ssize_t thpsize_enabled_show(struct kobject *kobj,
-				    struct kobj_attribute *attr, char *buf)
+static ssize_t anon_enabled_show(struct kobject *kobj,
+				 struct kobj_attribute *attr, char *buf)
 {
 	int order = to_thpsize(kobj)->order;
 	const char *output;
@@ -481,9 +481,9 @@ static ssize_t thpsize_enabled_show(struct kobject *kobj,
 	return sysfs_emit(buf, "%s\n", output);
 }
 
-static ssize_t thpsize_enabled_store(struct kobject *kobj,
-				     struct kobj_attribute *attr,
-				     const char *buf, size_t count)
+static ssize_t anon_enabled_store(struct kobject *kobj,
+				  struct kobj_attribute *attr,
+				  const char *buf, size_t count)
 {
 	int order = to_thpsize(kobj)->order;
 	ssize_t ret = count;
@@ -525,19 +525,27 @@ static ssize_t thpsize_enabled_store(struct kobject *kobj,
 	return ret;
 }
 
-static struct kobj_attribute thpsize_enabled_attr =
-	__ATTR(enabled, 0644, thpsize_enabled_show, thpsize_enabled_store);
+static struct kobj_attribute anon_enabled_attr =
+	__ATTR(enabled, 0644, anon_enabled_show, anon_enabled_store);
 
-static struct attribute *thpsize_attrs[] = {
-	&thpsize_enabled_attr.attr,
+static struct attribute *anon_ctrl_attrs[] = {
+	&anon_enabled_attr.attr,
+	NULL,
+};
+
+static const struct attribute_group anon_ctrl_attr_grp = {
+	.attrs = anon_ctrl_attrs,
+};
+
+static struct attribute *file_ctrl_attrs[] = {
 #ifdef CONFIG_SHMEM
 	&thpsize_shmem_enabled_attr.attr,
 #endif
 	NULL,
 };
 
-static const struct attribute_group thpsize_attr_group = {
-	.attrs = thpsize_attrs,
+static const struct attribute_group file_ctrl_attr_grp = {
+	.attrs = file_ctrl_attrs,
 };
 
 static const struct kobj_type thpsize_ktype = {
@@ -583,57 +591,99 @@ DEFINE_MTHP_STAT_ATTR(split, MTHP_STAT_SPLIT);
 DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
 DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
 
-static struct attribute *stats_attrs[] = {
+static struct attribute *anon_stats_attrs[] = {
 	&anon_fault_alloc_attr.attr,
 	&anon_fault_fallback_attr.attr,
 	&anon_fault_fallback_charge_attr.attr,
 	&swpout_attr.attr,
 	&swpout_fallback_attr.attr,
-	&shmem_alloc_attr.attr,
-	&shmem_fallback_attr.attr,
-	&shmem_fallback_charge_attr.attr,
 	&split_attr.attr,
 	&split_failed_attr.attr,
 	&split_deferred_attr.attr,
 	NULL,
 };
 
-static struct attribute_group stats_attr_group = {
+static struct attribute_group anon_stats_attr_grp = {
+	.name = "stats",
+	.attrs = anon_stats_attrs,
+};
+
+static struct attribute *file_stats_attrs[] = {
+#ifdef CONFIG_SHMEM
+	&shmem_alloc_attr.attr,
+	&shmem_fallback_attr.attr,
+	&shmem_fallback_charge_attr.attr,
+#endif
+	NULL,
+};
+
+static struct attribute_group file_stats_attr_grp = {
 	.name = "stats",
-	.attrs = stats_attrs,
+	.attrs = file_stats_attrs,
 };
 
+static int sysfs_add_group(struct kobject *kobj,
+			   const struct attribute_group *grp)
+{
+	int ret = -ENOENT;
+
+	/*
+	 * If the group is named, try to merge first, assuming the subdirectory
+	 * was already created. This avoids the warning emitted by
+	 * sysfs_create_group() if the directory already exists.
+	 */
+	if (grp->name)
+		ret = sysfs_merge_group(kobj, grp);
+	if (ret)
+		ret = sysfs_create_group(kobj, grp);
+
+	return ret;
+}
+
 static struct thpsize *thpsize_create(int order, struct kobject *parent)
 {
 	unsigned long size = (PAGE_SIZE << order) / SZ_1K;
 	struct thpsize *thpsize;
-	int ret;
+	int ret = -ENOMEM;
 
 	thpsize = kzalloc(sizeof(*thpsize), GFP_KERNEL);
 	if (!thpsize)
-		return ERR_PTR(-ENOMEM);
+		goto err;
+
+	thpsize->order = order;
 
 	ret = kobject_init_and_add(&thpsize->kobj, &thpsize_ktype, parent,
 				   "hugepages-%lukB", size);
 	if (ret) {
 		kfree(thpsize);
-		return ERR_PTR(ret);
+		goto err;
 	}
 
-	ret = sysfs_create_group(&thpsize->kobj, &thpsize_attr_group);
-	if (ret) {
-		kobject_put(&thpsize->kobj);
-		return ERR_PTR(ret);
+	if (BIT(order) & THP_ORDERS_ALL_ANON) {
+		ret = sysfs_add_group(&thpsize->kobj, &anon_ctrl_attr_grp);
+		if (ret)
+			goto err_put;
+
+		ret = sysfs_add_group(&thpsize->kobj, &anon_stats_attr_grp);
+		if (ret)
+			goto err_put;
 	}
 
-	ret = sysfs_create_group(&thpsize->kobj, &stats_attr_group);
-	if (ret) {
-		kobject_put(&thpsize->kobj);
-		return ERR_PTR(ret);
+	if (BIT(order) & THP_ORDERS_ALL_FILE_DEFAULT) {
+		ret = sysfs_add_group(&thpsize->kobj, &file_ctrl_attr_grp);
+		if (ret)
+			goto err_put;
+
+		ret = sysfs_add_group(&thpsize->kobj, &file_stats_attr_grp);
+		if (ret)
+			goto err_put;
 	}
 
-	thpsize->order = order;
 	return thpsize;
+err_put:
+	kobject_put(&thpsize->kobj);
+err:
+	return ERR_PTR(ret);
 }
 
 static void thpsize_release(struct kobject *kobj)
@@ -673,7 +723,7 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
 		goto remove_hp_group;
 	}
 
-	orders = THP_ORDERS_ALL_ANON;
+	orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
 	order = highest_order(orders);
 	while (orders) {
 		thpsize = thpsize_create(order, *hugepage_kobj);
From patchwork Tue Jul 16 13:59:06 2024
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, Hugh Dickins, Jonathan Corbet, "Matthew Wilcox (Oracle)",
    David Hildenbrand, Barry Song, Lance Yang, Baolin Wang, Gavin Shan
Cc: Ryan Roberts, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/3] mm: mTHP stats for pagecache folio allocations
Date: Tue, 16 Jul 2024 14:59:06 +0100
Message-ID: <20240716135907.4047689-4-ryan.roberts@arm.com>
In-Reply-To: <20240716135907.4047689-1-ryan.roberts@arm.com>
References: <20240716135907.4047689-1-ryan.roberts@arm.com>
X-Mailer: git-send-email 2.43.0
Expose 3 new mTHP stats for file (pagecache) folio allocations:

  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_alloc
  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback
  /sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback_charge

This will provide some insight into the sizes of large folios being
allocated for file-backed memory, and how often allocation fails.

All non-order-0 (and most order-0) folio allocations are currently done
through filemap_alloc_folio(), and folios are charged in a subsequent
call to filemap_add_folio().
So count file_fallback when allocation fails in filemap_alloc_folio()
and count file_alloc or file_fallback_charge in filemap_add_folio(),
based on whether charging succeeded or not. There are some users of
filemap_add_folio() that allocate their own order-0 folio by other
means, so we would not count an allocation failure in this case, but we
also don't care about order-0 allocations. This approach feels like it
should be good enough and doesn't require any (impractically large)
refactoring.

The existing mTHP stats interface is reused to provide consistency to
users. And because we are reusing the same interface, we can reuse the
same infrastructure on the kernel side.

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 Documentation/admin-guide/mm/transhuge.rst | 13 +++++++++++++
 include/linux/huge_mm.h                    |  3 +++
 include/linux/pagemap.h                    | 16 ++++++++++++++--
 mm/filemap.c                               |  6 ++++--
 mm/huge_memory.c                           |  7 +++++++
 5 files changed, 41 insertions(+), 4 deletions(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 058485daf186..d4857e457add 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -512,6 +512,19 @@ shmem_fallback_charge
 	falls back to using small pages even though the allocation was
 	successful.
 
+file_alloc
+	is incremented every time a file huge page is successfully
+	allocated.
+
+file_fallback
+	is incremented if a file huge page is attempted to be allocated
+	but fails and instead falls back to using small pages.
+
+file_fallback_charge
+	is incremented if a file huge page cannot be charged and instead
+	falls back to using small pages even though the allocation was
+	successful.
+
 split
 	is incremented every time a huge page is successfully split into
 	smaller orders. This can happen for a variety of reasons but a

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index b8c63c3e967f..4f9109fcdded 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -123,6 +123,9 @@ enum mthp_stat_item {
 	MTHP_STAT_SHMEM_ALLOC,
 	MTHP_STAT_SHMEM_FALLBACK,
 	MTHP_STAT_SHMEM_FALLBACK_CHARGE,
+	MTHP_STAT_FILE_ALLOC,
+	MTHP_STAT_FILE_FALLBACK,
+	MTHP_STAT_FILE_FALLBACK_CHARGE,
 	MTHP_STAT_SPLIT,
 	MTHP_STAT_SPLIT_FAILED,
 	MTHP_STAT_SPLIT_DEFERRED,

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 6e2f72d03176..95a147b5d117 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -562,14 +562,26 @@ static inline void *detach_page_private(struct page *page)
 }
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
 #else
-static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+static inline struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	return folio_alloc_noprof(gfp, order);
 }
 #endif
 
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+{
+	struct folio *folio;
+
+	folio = __filemap_alloc_folio_noprof(gfp, order);
+
+	if (!folio)
+		count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);
+
+	return folio;
+}
+
 #define filemap_alloc_folio(...)				\
 	alloc_hooks(filemap_alloc_folio_noprof(__VA_ARGS__))

diff --git a/mm/filemap.c b/mm/filemap.c
index 53d5d0410b51..131d514fca29 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -963,6 +963,8 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 	int ret;
 
 	ret = mem_cgroup_charge(folio, NULL, gfp);
+	count_mthp_stat(folio_order(folio),
+			ret ? MTHP_STAT_FILE_FALLBACK_CHARGE : MTHP_STAT_FILE_ALLOC);
 	if (ret)
 		return ret;
 
@@ -990,7 +992,7 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
 EXPORT_SYMBOL_GPL(filemap_add_folio);
 
 #ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *__filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct folio *folio;
@@ -1007,7 +1009,7 @@ struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
 	}
 	return folio_alloc_noprof(gfp, order);
 }
-EXPORT_SYMBOL(filemap_alloc_folio_noprof);
+EXPORT_SYMBOL(__filemap_alloc_folio_noprof);
 #endif
 
 /*

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 578ac212c172..26d558e3e80f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -608,7 +608,14 @@ static struct attribute_group anon_stats_attr_grp = {
 	.attrs = anon_stats_attrs,
 };
 
+DEFINE_MTHP_STAT_ATTR(file_alloc, MTHP_STAT_FILE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(file_fallback, MTHP_STAT_FILE_FALLBACK);
+DEFINE_MTHP_STAT_ATTR(file_fallback_charge, MTHP_STAT_FILE_FALLBACK_CHARGE);
+
 static struct attribute *file_stats_attrs[] = {
+	&file_alloc_attr.attr,
+	&file_fallback_attr.attr,
+	&file_fallback_charge_attr.attr,
 #ifdef CONFIG_SHMEM
 	&shmem_alloc_attr.attr,
 	&shmem_fallback_attr.attr,