From patchwork Wed Sep 4 10:21:12 2024
X-Patchwork-Submitter: Christian Brauner
X-Patchwork-Id: 13790360
From: Christian Brauner
Date: Wed, 04 Sep 2024 12:21:12 +0200
Subject: [PATCH v3 07/17] slab: pull kmem_cache_open() into do_kmem_cache_create()
Message-Id: <20240904-work-kmem_cache_args-v3-7-05db2179a8c2@kernel.org>
References: <20240904-work-kmem_cache_args-v3-0-05db2179a8c2@kernel.org>
In-Reply-To: <20240904-work-kmem_cache_args-v3-0-05db2179a8c2@kernel.org>
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds, Mike Rapoport
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
    Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
    linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christian Brauner
X-Mailer: b4 0.15-dev-37811

do_kmem_cache_create() is the only caller of kmem_cache_open(), and a
follow-up patch is going to pass struct kmem_cache_args down into this
path. Pull kmem_cache_open() into do_kmem_cache_create() so there is
only one function to extend.
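
The net effect of the fold is that every failure path in
do_kmem_cache_create() now funnels through a single out: label, which
calls __kmem_cache_release() exactly once on error. A minimal userspace
sketch of that error-handling shape, with hypothetical setup_*() helpers
standing in for the real initialization steps (this is not the kernel API):

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for the individual setup steps; all names are hypothetical. */
static bool setup_sizes(void)     { return true; }
static bool setup_nodes(void)     { return true; }
static bool setup_cpu_slabs(void) { return false; /* simulate a failure */ }
static void release_cache(void)   { printf("releasing partially set up cache\n"); }

/*
 * Same control flow as the refactored function: err starts out as -EINVAL,
 * every failed step jumps to "out", and the cleanup runs exactly once, and
 * only if err is still set.
 */
static int do_create(void)
{
	int err = -EINVAL;

	if (!setup_sizes())
		goto out;
	if (!setup_nodes())
		goto out;
	if (!setup_cpu_slabs())
		goto out;

	err = 0;
out:
	if (err)
		release_cache();
	return err;
}

int main(void)
{
	printf("do_create() = %d\n", do_create());
	return 0;
}

The actual function additionally handles the early-boot case
(slab_state <= UP) and a sysfs_slab_add() failure, but all of those paths
end up at the same label, as the diff below shows.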
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Vlastimil Babka
Signed-off-by: Christian Brauner
---
 mm/slub.c | 132 +++++++++++++++++++++++++++++---------------------------------
 1 file changed, 62 insertions(+), 70 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 23d9d783ff26..30f4ca6335c7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5290,65 +5290,6 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
-{
-	s->flags = kmem_cache_flags(flags, s->name);
-#ifdef CONFIG_SLAB_FREELIST_HARDENED
-	s->random = get_random_long();
-#endif
-
-	if (!calculate_sizes(s))
-		goto error;
-	if (disable_higher_order_debug) {
-		/*
-		 * Disable debugging flags that store metadata if the min slab
-		 * order increased.
-		 */
-		if (get_order(s->size) > get_order(s->object_size)) {
-			s->flags &= ~DEBUG_METADATA_FLAGS;
-			s->offset = 0;
-			if (!calculate_sizes(s))
-				goto error;
-		}
-	}
-
-#ifdef system_has_freelist_aba
-	if (system_has_freelist_aba() && !(s->flags & SLAB_NO_CMPXCHG)) {
-		/* Enable fast mode */
-		s->flags |= __CMPXCHG_DOUBLE;
-	}
-#endif
-
-	/*
-	 * The larger the object size is, the more slabs we want on the partial
-	 * list to avoid pounding the page allocator excessively.
-	 */
-	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
-	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
-
-	set_cpu_partial(s);
-
-#ifdef CONFIG_NUMA
-	s->remote_node_defrag_ratio = 1000;
-#endif
-
-	/* Initialize the pre-computed randomized freelist if slab is up */
-	if (slab_state >= UP) {
-		if (init_cache_random_seq(s))
-			goto error;
-	}
-
-	if (!init_kmem_cache_nodes(s))
-		goto error;
-
-	if (alloc_kmem_cache_cpus(s))
-		return 0;
-
-error:
-	__kmem_cache_release(s);
-	return -EINVAL;
-}
-
 static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
 			      const char *text)
 {
@@ -5904,26 +5845,77 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 
 int do_kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
-	int err;
+	int err = -EINVAL;
 
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
+	s->flags = kmem_cache_flags(flags, s->name);
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	s->random = get_random_long();
+#endif
+
+	if (!calculate_sizes(s))
+		goto out;
+	if (disable_higher_order_debug) {
+		/*
+		 * Disable debugging flags that store metadata if the min slab
+		 * order increased.
+		 */
+		if (get_order(s->size) > get_order(s->object_size)) {
+			s->flags &= ~DEBUG_METADATA_FLAGS;
+			s->offset = 0;
+			if (!calculate_sizes(s))
+				goto out;
+		}
+	}
+
+#ifdef system_has_freelist_aba
+	if (system_has_freelist_aba() && !(s->flags & SLAB_NO_CMPXCHG)) {
+		/* Enable fast mode */
+		s->flags |= __CMPXCHG_DOUBLE;
+	}
+#endif
+
+	/*
+	 * The larger the object size is, the more slabs we want on the partial
+	 * list to avoid pounding the page allocator excessively.
+	 */
+	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
+
+	set_cpu_partial(s);
+
+#ifdef CONFIG_NUMA
+	s->remote_node_defrag_ratio = 1000;
+#endif
+
+	/* Initialize the pre-computed randomized freelist if slab is up */
+	if (slab_state >= UP) {
+		if (init_cache_random_seq(s))
+			goto out;
+	}
+
+	if (!init_kmem_cache_nodes(s))
+		goto out;
+
+	if (!alloc_kmem_cache_cpus(s))
+		goto out;
 
 	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
+	if (slab_state <= UP) {
+		err = 0;
+		goto out;
+	}
 
 	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
+	if (err)
+		goto out;
 
 	if (s->flags & SLAB_STORE_USER)
 		debugfs_slab_add(s);
 
-	return 0;
+out:
+	if (err)
+		__kmem_cache_release(s);
+	return err;
 }
 
 #ifdef SLAB_SUPPORTS_SYSFS
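
As for the struct kmem_cache_args mentioned in the commit message: with
kmem_cache_open() gone, only do_kmem_cache_create() needs to grow the extra
parameter in the follow-up. Purely as an illustration of that shape, here is
a self-contained userspace model; the field names and the three-argument
signature below are assumptions made for the sketch, not the definitions
from the actual series:

#include <stdio.h>

/* Hypothetical argument bundle; field names are illustrative only. */
struct kmem_cache_args {
	unsigned int align;
	unsigned int useroffset;
	unsigned int usersize;
};

/* Minimal stand-in for the cache descriptor. */
struct kmem_cache {
	const char *name;
	unsigned int size;
	unsigned int align;
};

/*
 * With the open/create split removed, a single function can receive the new
 * argument structure instead of threading it through two layers.
 */
static int do_kmem_cache_create(struct kmem_cache *s, unsigned int flags,
				const struct kmem_cache_args *args)
{
	s->align = args ? args->align : sizeof(void *);
	printf("create %s: size=%u align=%u flags=%#x\n",
	       s->name, s->size, s->align, flags);
	return 0;
}

int main(void)
{
	struct kmem_cache_args args = { .align = 64 };
	struct kmem_cache s = { .name = "demo_cache", .size = 128 };

	return do_kmem_cache_create(&s, 0, &args);
}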