From patchwork Mon Sep 2 15:31:11 2024
X-Patchwork-Submitter: Christian Brauner <brauner@kernel.org>
X-Patchwork-Id: 13787430
From: Christian Brauner <brauner@kernel.org>
Date: Mon, 02 Sep 2024 17:31:11 +0200
Subject: [PATCH RFC 1/6] slab: introduce kmem_cache_setup()
Message-Id: <20240902-work-kmem_cache_args-v1-1-27d05bc05128@kernel.org>
References: <20240902-work-kmem_cache_args-v1-0-27d05bc05128@kernel.org>
In-Reply-To: <20240902-work-kmem_cache_args-v1-0-27d05bc05128@kernel.org>
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds
Cc: linux-mm@kvack.org, Christian Brauner <brauner@kernel.org>

Introduce a unified kmem_cache_setup() function based on struct
kmem_cache_args. It will replace the custom creation variants
kmem_cache_create(), kmem_cache_create_usercopy(), and
kmem_cache_create_rcu().

The @name, @object_size, and @flags parameters are passed separately
because they are nearly universally used. The remaining arguments move
into struct kmem_cache_args.

A new "use_freeptr_offset" boolean is added because zero is a valid
freelist pointer offset: callers that don't care about a custom
freelist pointer offset (most callers don't) can simply leave both
fields untouched.
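As an illustrative sketch (not part of this patch; struct foo, foo_ctor(),
the "data" member, and the chosen flags are made up for the example), a
cache with a constructor and a usercopy region would now be set up like
this:

    #include <linux/slab.h>
    #include <linux/stddef.h>	/* offsetof(), sizeof_field() */

    /* Hypothetical object; only here to illustrate the new API. */
    struct foo {
            char data[64];	/* region that may be copied to/from userspace */
            int refcount;
    };

    static void foo_ctor(void *object)
    {
            struct foo *f = object;

            f->refcount = 0;
    }

    static struct kmem_cache *foo_cachep;

    static int foo_cache_init(void)
    {
            foo_cachep = kmem_cache_setup("foo", sizeof(struct foo),
                                    &(struct kmem_cache_args){
                                            .align      = __alignof__(struct foo),
                                            .ctor       = foo_ctor,
                                            .useroffset = offsetof(struct foo, data),
                                            .usersize   = sizeof_field(struct foo, data),
                                    },
                                    SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT);
            if (!foo_cachep)
                    return -ENOMEM;
            return 0;
    }

Fields left out of the compound literal are zero-initialized, so callers
only spell out the properties they actually need.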
Signed-off-by: Christian Brauner <brauner@kernel.org>
---
 Documentation/core-api/memory-allocation.rst |  10 +--
 include/linux/slab.h                         |  21 ++++++
 mm/slab_common.c                             | 102 +++++++++++++++++----------
 3 files changed, 91 insertions(+), 42 deletions(-)

diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 8b84eb4bdae7..83ae3021c89a 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -166,11 +166,11 @@ documentation.
 Note that `kvmalloc` may return memory that is not physically contiguous.
 
 If you need to allocate many identical objects you can use the slab
-cache allocator. The cache should be set up with kmem_cache_create() or
-kmem_cache_create_usercopy() before it can be used. The second function
-should be used if a part of the cache might be copied to the userspace.
-After the cache is created kmem_cache_alloc() and its convenience
-wrappers can allocate memory from that cache.
+cache allocator. The cache should be set up with kmem_cache_setup()
+before it can be used. The second function should be used if a part of
+the cache might be copied to the userspace. After the cache is created
+kmem_cache_alloc() and its convenience wrappers can allocate memory from
+that cache.
 
 When the allocated memory is no longer needed it must be freed.
 
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 5b2da2cf31a8..41135ea03299 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -240,6 +240,27 @@ struct mem_cgroup;
  */
 bool slab_is_available(void);
 
+/**
+ * @align: The required alignment for the objects.
+ * @useroffset: Usercopy region offset
+ * @usersize: Usercopy region size
+ * @freeptr_offset: Custom offset for the free pointer in RCU caches
+ * @use_freeptr_offset: Whether a @freeptr_offset is used
+ * @ctor: A constructor for the objects.
+ */
+struct kmem_cache_args {
+	unsigned int align;
+	unsigned int useroffset;
+	unsigned int usersize;
+	unsigned int freeptr_offset;
+	bool use_freeptr_offset;
+	void (*ctor)(void *);
+};
+
+struct kmem_cache *kmem_cache_setup(const char *name, unsigned int object_size,
+				    struct kmem_cache_args *args,
+				    slab_flags_t flags);
+
 struct kmem_cache *kmem_cache_create(const char *name, unsigned int size,
 			unsigned int align, slab_flags_t flags,
 			void (*ctor)(void *));
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 95db3702f8d6..515ad422cf30 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -202,22 +202,22 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 }
 
 static struct kmem_cache *create_cache(const char *name,
-				unsigned int object_size, unsigned int freeptr_offset,
-				unsigned int align, slab_flags_t flags,
-				unsigned int useroffset, unsigned int usersize,
-				void (*ctor)(void *))
+				       unsigned int object_size,
+				       struct kmem_cache_args *args,
+				       slab_flags_t flags)
 {
 	struct kmem_cache *s;
 	int err;
 
-	if (WARN_ON(useroffset + usersize > object_size))
-		useroffset = usersize = 0;
+	if (WARN_ON(args->useroffset + args->usersize > object_size))
+		args->useroffset = args->usersize = 0;
 
 	/* If a custom freelist pointer is requested make sure it's sane. */
 	err = -EINVAL;
-	if (freeptr_offset != UINT_MAX &&
-	    (freeptr_offset >= object_size || !(flags & SLAB_TYPESAFE_BY_RCU) ||
-	     !IS_ALIGNED(freeptr_offset, sizeof(freeptr_t))))
+	if (args->use_freeptr_offset &&
+	    (args->freeptr_offset >= object_size ||
+	     !(flags & SLAB_TYPESAFE_BY_RCU) ||
+	     !IS_ALIGNED(args->freeptr_offset, sizeof(freeptr_t))))
 		goto out;
 
 	err = -ENOMEM;
@@ -227,12 +227,15 @@ static struct kmem_cache *create_cache(const char *name,
 
 	s->name = name;
 	s->size = s->object_size = object_size;
-	s->rcu_freeptr_offset = freeptr_offset;
-	s->align = align;
-	s->ctor = ctor;
+	if (args->use_freeptr_offset)
+		s->rcu_freeptr_offset = args->freeptr_offset;
+	else
+		s->rcu_freeptr_offset = UINT_MAX;
+	s->align = args->align;
+	s->ctor = args->ctor;
 #ifdef CONFIG_HARDENED_USERCOPY
-	s->useroffset = useroffset;
-	s->usersize = usersize;
+	s->useroffset = args->useroffset;
+	s->usersize = args->usersize;
 #endif
 	err = __kmem_cache_create(s, flags);
 	if (err)
@@ -248,12 +251,22 @@ static struct kmem_cache *create_cache(const char *name,
 	return ERR_PTR(err);
 }
 
-static struct kmem_cache *
-do_kmem_cache_create_usercopy(const char *name,
-		  unsigned int size, unsigned int freeptr_offset,
-		  unsigned int align, slab_flags_t flags,
-		  unsigned int useroffset, unsigned int usersize,
-		  void (*ctor)(void *))
+/**
+ * kmem_cache_setup - Create a kmem cache
+ * @name: A string which is used in /proc/slabinfo to identify this cache.
+ * @object_size: The size of objects to be created in this cache.
+ * @args: Arguments for the cache creation (see struct kmem_cache_args).
+ *
+ * Cannot be called within a interrupt, but can be interrupted.
+ * The @ctor is run when new pages are allocated by the cache.
+ *
+ * See %SLAB_* flags for an explanation of individual @flags.
+ *
+ * Return: a pointer to the cache on success, NULL on failure.
+ */
+struct kmem_cache *kmem_cache_setup(const char *name, unsigned int object_size,
+				    struct kmem_cache_args *args,
+				    slab_flags_t flags)
 {
 	struct kmem_cache *s = NULL;
 	const char *cache_name;
@@ -275,7 +288,7 @@ do_kmem_cache_create_usercopy(const char *name,
 
 	mutex_lock(&slab_mutex);
 
-	err = kmem_cache_sanity_check(name, size);
+	err = kmem_cache_sanity_check(name, object_size);
 	if (err) {
 		goto out_unlock;
 	}
@@ -296,12 +309,14 @@ do_kmem_cache_create_usercopy(const char *name,
 
 	/* Fail closed on bad usersize of useroffset values. */
 	if (!IS_ENABLED(CONFIG_HARDENED_USERCOPY) ||
-	    WARN_ON(!usersize && useroffset) ||
-	    WARN_ON(size < usersize || size - usersize < useroffset))
-		usersize = useroffset = 0;
-
-	if (!usersize)
-		s = __kmem_cache_alias(name, size, align, flags, ctor);
+	    WARN_ON(!args->usersize && args->useroffset) ||
+	    WARN_ON(object_size < args->usersize ||
+		    object_size - args->usersize < args->useroffset))
+		args->usersize = args->useroffset = 0;
+
+	if (!args->usersize)
+		s = __kmem_cache_alias(name, object_size, args->align, flags,
+				       args->ctor);
 	if (s)
 		goto out_unlock;
 
@@ -311,9 +326,8 @@ do_kmem_cache_create_usercopy(const char *name,
 		goto out_unlock;
 	}
 
-	s = create_cache(cache_name, size, freeptr_offset,
-			 calculate_alignment(flags, align, size),
-			 flags, useroffset, usersize, ctor);
+	args->align = calculate_alignment(flags, args->align, object_size);
+	s = create_cache(cache_name, object_size, args, flags);
 	if (IS_ERR(s)) {
 		err = PTR_ERR(s);
 		kfree_const(cache_name);
@@ -335,6 +349,7 @@ do_kmem_cache_create_usercopy(const char *name,
 	}
 	return s;
 }
+EXPORT_SYMBOL(kmem_cache_setup);
 
 /**
  * kmem_cache_create_usercopy - Create a cache with a region suitable
@@ -370,8 +385,14 @@ kmem_cache_create_usercopy(const char *name, unsigned int size,
 			   unsigned int useroffset, unsigned int usersize,
 			   void (*ctor)(void *))
 {
-	return do_kmem_cache_create_usercopy(name, size, UINT_MAX, align, flags,
-					     useroffset, usersize, ctor);
+	return kmem_cache_setup(name, size,
+			&(struct kmem_cache_args){
+				.align = align,
+				.ctor = ctor,
+				.useroffset = useroffset,
+				.usersize = usersize,
+			},
+			flags);
 }
 EXPORT_SYMBOL(kmem_cache_create_usercopy);
 
@@ -404,8 +425,12 @@ struct kmem_cache *
 kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 		slab_flags_t flags, void (*ctor)(void *))
 {
-	return do_kmem_cache_create_usercopy(name, size, UINT_MAX, align, flags,
-					     0, 0, ctor);
+	return kmem_cache_setup(name, size,
+			&(struct kmem_cache_args){
+				.align = align,
+				.ctor = ctor,
+			},
+			flags);
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
@@ -442,9 +467,12 @@ struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
 					 unsigned int freeptr_offset,
 					 slab_flags_t flags)
 {
-	return do_kmem_cache_create_usercopy(name, size, freeptr_offset, 0,
-					     flags | SLAB_TYPESAFE_BY_RCU, 0, 0,
-					     NULL);
+	return kmem_cache_setup(name, size,
+			&(struct kmem_cache_args){
+				.freeptr_offset = freeptr_offset,
+				.use_freeptr_offset = true,
+			},
+			flags | SLAB_TYPESAFE_BY_RCU);
 }
 EXPORT_SYMBOL(kmem_cache_create_rcu);
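
For illustration only (not part of the diff): the kmem_cache_create_rcu()
conversion above corresponds to the following open-coded use of
kmem_cache_setup() at a hypothetical call site; struct bar and its
"freeptr" member are invented for the example:

    #include <linux/slab.h>
    #include <linux/stddef.h>	/* offsetof() */

    /* Hypothetical object with a dedicated free-pointer field. */
    struct bar {
            unsigned long state;
            freeptr_t freeptr;	/* reused by the allocator once the object is freed */
    };

    static struct kmem_cache *bar_cachep;

    static int bar_cache_init(void)
    {
            bar_cachep = kmem_cache_setup("bar", sizeof(struct bar),
                                    &(struct kmem_cache_args){
                                            .freeptr_offset     = offsetof(struct bar, freeptr),
                                            .use_freeptr_offset = true,
                                    },
                                    SLAB_TYPESAFE_BY_RCU);
            return bar_cachep ? 0 : -ENOMEM;
    }

SLAB_TYPESAFE_BY_RCU is required here: create_cache() rejects a custom
freeptr_offset for non-RCU caches, and the offset must lie inside the
object and be aligned to sizeof(freeptr_t).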