From patchwork Sat Feb 8 01:47:22 2025
X-Patchwork-Submitter: GONG Ruiqi <gongruiqi1@huawei.com>
X-Patchwork-Id: 13966208
From: GONG Ruiqi <gongruiqi1@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Kees Cook
Cc: Tamas Koczka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	Xiu Jianfeng, linux-mm@kvack.org
Subject: [PATCH v2 1/2] slab: Adjust placement of __kvmalloc_node_noprof
Date: Sat, 8 Feb 2025 09:47:22 +0800
Message-ID: <20250208014723.1514049-2-gongruiqi1@huawei.com>
In-Reply-To: <20250208014723.1514049-1-gongruiqi1@huawei.com>
References: <20250208014723.1514049-1-gongruiqi1@huawei.com>

Move __kvmalloc_node_noprof (and also kvfree* for consistency) into
mm/slub.c so that it can directly invoke __do_kmalloc_node, which is
needed for the next patch.
Move kmalloc_gfp_adjust to slab.h since now its two callers are in
different .c files.

No functional changes intended.

Signed-off-by: GONG Ruiqi <gongruiqi1@huawei.com>
---
 include/linux/slab.h |  22 +++++++++
 mm/slub.c            |  90 ++++++++++++++++++++++++++++++++++
 mm/util.c            | 112 ------------------------------------------
 3 files changed, 112 insertions(+), 112 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 09eedaecf120..0bf4cbf306fe 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -1101,4 +1101,26 @@ size_t kmalloc_size_roundup(size_t size);
 void __init kmem_cache_init_late(void);
 void __init kvfree_rcu_init(void);
 
+static inline gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
+{
+	/*
+	 * We want to attempt a large physically contiguous block first because
+	 * it is less likely to fragment multiple larger blocks and therefore
+	 * contribute to a long term fragmentation less than vmalloc fallback.
+	 * However make sure that larger requests are not too disruptive - no
+	 * OOM killer and no allocation failure warnings as we have a fallback.
+	 */
+	if (size > PAGE_SIZE) {
+		flags |= __GFP_NOWARN;
+
+		if (!(flags & __GFP_RETRY_MAYFAIL))
+			flags |= __GFP_NORETRY;
+
+		/* nofail semantic is implemented by the vmalloc fallback */
+		flags &= ~__GFP_NOFAIL;
+	}
+
+	return flags;
+}
+
 #endif	/* _LINUX_SLAB_H */
diff --git a/mm/slub.c b/mm/slub.c
index 1f50129dcfb3..0830894bb92c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4878,6 +4878,96 @@ void *krealloc_noprof(const void *p, size_t new_size, gfp_t flags)
 }
 EXPORT_SYMBOL(krealloc_noprof);
 
+/**
+ * __kvmalloc_node - attempt to allocate physically contiguous memory, but upon
+ * failure, fall back to non-contiguous (vmalloc) allocation.
+ * @size: size of the request.
+ * @b: which set of kmalloc buckets to allocate from.
+ * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
+ * @node: numa node to allocate from
+ *
+ * Uses kmalloc to get the memory but if the allocation fails then falls back
+ * to the vmalloc allocator. Use kvfree for freeing the memory.
+ *
+ * GFP_NOWAIT and GFP_ATOMIC are not supported, neither is the __GFP_NORETRY modifier.
+ * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
+ * preferable to the vmalloc fallback, due to visible performance drawbacks.
+ *
+ * Return: pointer to the allocated memory of %NULL in case of failure
+ */
+void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
+{
+	void *ret;
+
+	/*
+	 * It doesn't really make sense to fallback to vmalloc for sub page
+	 * requests
+	 */
+	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
+				    kmalloc_gfp_adjust(flags, size),
+				    node);
+	if (ret || size <= PAGE_SIZE)
+		return ret;
+
+	/* non-sleeping allocations are not supported by vmalloc */
+	if (!gfpflags_allow_blocking(flags))
+		return NULL;
+
+	/* Don't even allow crazy sizes */
+	if (unlikely(size > INT_MAX)) {
+		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
+		return NULL;
+	}
+
+	/*
+	 * kvmalloc() can always use VM_ALLOW_HUGE_VMAP,
+	 * since the callers already cannot assume anything
+	 * about the resulting pointer, and cannot play
+	 * protection games.
+	 */
+	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
+			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+			node, __builtin_return_address(0));
+}
+EXPORT_SYMBOL(__kvmalloc_node_noprof);
+
+/**
+ * kvfree() - Free memory.
+ * @addr: Pointer to allocated memory.
+ *
+ * kvfree frees memory allocated by any of vmalloc(), kmalloc() or kvmalloc().
+ * It is slightly more efficient to use kfree() or vfree() if you are certain
+ * that you know which one to use.
+ *
+ * Context: Either preemptible task context or not-NMI interrupt.
+ */
+void kvfree(const void *addr)
+{
+	if (is_vmalloc_addr(addr))
+		vfree(addr);
+	else
+		kfree(addr);
+}
+EXPORT_SYMBOL(kvfree);
+
+/**
+ * kvfree_sensitive - Free a data object containing sensitive information.
+ * @addr: address of the data object to be freed.
+ * @len: length of the data object.
+ *
+ * Use the special memzero_explicit() function to clear the content of a
+ * kvmalloc'ed object containing sensitive data to make sure that the
+ * compiler won't optimize out the data clearing.
+ */
+void kvfree_sensitive(const void *addr, size_t len)
+{
+	if (likely(!ZERO_OR_NULL_PTR(addr))) {
+		memzero_explicit((void *)addr, len);
+		kvfree(addr);
+	}
+}
+EXPORT_SYMBOL(kvfree_sensitive);
+
 struct detached_freelist {
 	struct slab *slab;
 	void *tail;
diff --git a/mm/util.c b/mm/util.c
index b6b9684a1438..5a755d2a7347 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -612,118 +612,6 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_mmap);
 
-static gfp_t kmalloc_gfp_adjust(gfp_t flags, size_t size)
-{
-	/*
-	 * We want to attempt a large physically contiguous block first because
-	 * it is less likely to fragment multiple larger blocks and therefore
-	 * contribute to a long term fragmentation less than vmalloc fallback.
-	 * However make sure that larger requests are not too disruptive - no
-	 * OOM killer and no allocation failure warnings as we have a fallback.
-	 */
-	if (size > PAGE_SIZE) {
-		flags |= __GFP_NOWARN;
-
-		if (!(flags & __GFP_RETRY_MAYFAIL))
-			flags |= __GFP_NORETRY;
-
-		/* nofail semantic is implemented by the vmalloc fallback */
-		flags &= ~__GFP_NOFAIL;
-	}
-
-	return flags;
-}
-
-/**
- * __kvmalloc_node - attempt to allocate physically contiguous memory, but upon
- * failure, fall back to non-contiguous (vmalloc) allocation.
- * @size: size of the request.
- * @b: which set of kmalloc buckets to allocate from.
- * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
- * @node: numa node to allocate from
- *
- * Uses kmalloc to get the memory but if the allocation fails then falls back
- * to the vmalloc allocator. Use kvfree for freeing the memory.
- *
- * GFP_NOWAIT and GFP_ATOMIC are not supported, neither is the __GFP_NORETRY modifier.
- * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
- * preferable to the vmalloc fallback, due to visible performance drawbacks.
- *
- * Return: pointer to the allocated memory of %NULL in case of failure
- */
-void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
-{
-	void *ret;
-
-	/*
-	 * It doesn't really make sense to fallback to vmalloc for sub page
-	 * requests
-	 */
-	ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
-				    kmalloc_gfp_adjust(flags, size),
-				    node);
-	if (ret || size <= PAGE_SIZE)
-		return ret;
-
-	/* non-sleeping allocations are not supported by vmalloc */
-	if (!gfpflags_allow_blocking(flags))
-		return NULL;
-
-	/* Don't even allow crazy sizes */
-	if (unlikely(size > INT_MAX)) {
-		WARN_ON_ONCE(!(flags & __GFP_NOWARN));
-		return NULL;
-	}
-
-	/*
-	 * kvmalloc() can always use VM_ALLOW_HUGE_VMAP,
-	 * since the callers already cannot assume anything
-	 * about the resulting pointer, and cannot play
-	 * protection games.
-	 */
-	return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
-			flags, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
-			node, __builtin_return_address(0));
-}
-EXPORT_SYMBOL(__kvmalloc_node_noprof);
-
-/**
- * kvfree() - Free memory.
- * @addr: Pointer to allocated memory.
- *
- * kvfree frees memory allocated by any of vmalloc(), kmalloc() or kvmalloc().
- * It is slightly more efficient to use kfree() or vfree() if you are certain
- * that you know which one to use.
- *
- * Context: Either preemptible task context or not-NMI interrupt.
- */
-void kvfree(const void *addr)
-{
-	if (is_vmalloc_addr(addr))
-		vfree(addr);
-	else
-		kfree(addr);
-}
-EXPORT_SYMBOL(kvfree);
-
-/**
- * kvfree_sensitive - Free a data object containing sensitive information.
- * @addr: address of the data object to be freed.
- * @len: length of the data object.
- *
- * Use the special memzero_explicit() function to clear the content of a
- * kvmalloc'ed object containing sensitive data to make sure that the
- * compiler won't optimize out the data clearing.
- */
-void kvfree_sensitive(const void *addr, size_t len)
-{
-	if (likely(!ZERO_OR_NULL_PTR(addr))) {
-		memzero_explicit((void *)addr, len);
-		kvfree(addr);
-	}
-}
-EXPORT_SYMBOL(kvfree_sensitive);
-
 /**
  * kvrealloc - reallocate memory; contents remain unchanged
  * @p: object to reallocate memory for