From patchwork Thu Sep 5 07:56:50 2024
X-Patchwork-Submitter: Christian Brauner
X-Patchwork-Id: 13791887
From: Christian Brauner <brauner@kernel.org>
Date: Thu, 05 Sep 2024 09:56:50 +0200
Subject: [PATCH v4 07/17] slab: pull kmem_cache_open() into do_kmem_cache_create()
MIME-Version: 1.0
Message-Id: <20240905-work-kmem_cache_args-v4-7-ed45d5380679@kernel.org>
References: <20240905-work-kmem_cache_args-v4-0-ed45d5380679@kernel.org>
In-Reply-To: <20240905-work-kmem_cache_args-v4-0-ed45d5380679@kernel.org>
To: Vlastimil Babka, Jens Axboe, Jann Horn, Linus Torvalds, Mike Rapoport
Cc: Kees Cook, Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Christian Brauner
do_kmem_cache_create() is the only caller and we're going to pass down
struct kmem_cache_args in a follow-up patch.
Reviewed-by: Kees Cook
Reviewed-by: Jens Axboe
Reviewed-by: Mike Rapoport (Microsoft)
Reviewed-by: Vlastimil Babka
Signed-off-by: Christian Brauner
Reviewed-by: Roman Gushchin
---
 mm/slub.c | 132 +++++++++++++++++++++++++++++---------------------------------
 1 file changed, 62 insertions(+), 70 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 23d9d783ff26..30f4ca6335c7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5290,65 +5290,6 @@ static int calculate_sizes(struct kmem_cache *s)
 	return !!oo_objects(s->oo);
 }
 
-static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
-{
-	s->flags = kmem_cache_flags(flags, s->name);
-#ifdef CONFIG_SLAB_FREELIST_HARDENED
-	s->random = get_random_long();
-#endif
-
-	if (!calculate_sizes(s))
-		goto error;
-	if (disable_higher_order_debug) {
-		/*
-		 * Disable debugging flags that store metadata if the min slab
-		 * order increased.
-		 */
-		if (get_order(s->size) > get_order(s->object_size)) {
-			s->flags &= ~DEBUG_METADATA_FLAGS;
-			s->offset = 0;
-			if (!calculate_sizes(s))
-				goto error;
-		}
-	}
-
-#ifdef system_has_freelist_aba
-	if (system_has_freelist_aba() && !(s->flags & SLAB_NO_CMPXCHG)) {
-		/* Enable fast mode */
-		s->flags |= __CMPXCHG_DOUBLE;
-	}
-#endif
-
-	/*
-	 * The larger the object size is, the more slabs we want on the partial
-	 * list to avoid pounding the page allocator excessively.
-	 */
-	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
-	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
-
-	set_cpu_partial(s);
-
-#ifdef CONFIG_NUMA
-	s->remote_node_defrag_ratio = 1000;
-#endif
-
-	/* Initialize the pre-computed randomized freelist if slab is up */
-	if (slab_state >= UP) {
-		if (init_cache_random_seq(s))
-			goto error;
-	}
-
-	if (!init_kmem_cache_nodes(s))
-		goto error;
-
-	if (alloc_kmem_cache_cpus(s))
-		return 0;
-
-error:
-	__kmem_cache_release(s);
-	return -EINVAL;
-}
-
 static void list_slab_objects(struct kmem_cache *s, struct slab *slab,
 			      const char *text)
 {
@@ -5904,26 +5845,77 @@ __kmem_cache_alias(const char *name, unsigned int size, unsigned int align,
 
 int do_kmem_cache_create(struct kmem_cache *s, slab_flags_t flags)
 {
-	int err;
+	int err = -EINVAL;
 
-	err = kmem_cache_open(s, flags);
-	if (err)
-		return err;
+	s->flags = kmem_cache_flags(flags, s->name);
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	s->random = get_random_long();
+#endif
+
+	if (!calculate_sizes(s))
+		goto out;
+	if (disable_higher_order_debug) {
+		/*
+		 * Disable debugging flags that store metadata if the min slab
+		 * order increased.
+		 */
+		if (get_order(s->size) > get_order(s->object_size)) {
+			s->flags &= ~DEBUG_METADATA_FLAGS;
+			s->offset = 0;
+			if (!calculate_sizes(s))
+				goto out;
+		}
+	}
+
+#ifdef system_has_freelist_aba
+	if (system_has_freelist_aba() && !(s->flags & SLAB_NO_CMPXCHG)) {
+		/* Enable fast mode */
+		s->flags |= __CMPXCHG_DOUBLE;
+	}
+#endif
+
+	/*
+	 * The larger the object size is, the more slabs we want on the partial
+	 * list to avoid pounding the page allocator excessively.
+	 */
+	s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2);
+	s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);
+
+	set_cpu_partial(s);
+
+#ifdef CONFIG_NUMA
+	s->remote_node_defrag_ratio = 1000;
+#endif
+
+	/* Initialize the pre-computed randomized freelist if slab is up */
+	if (slab_state >= UP) {
+		if (init_cache_random_seq(s))
+			goto out;
+	}
+
+	if (!init_kmem_cache_nodes(s))
+		goto out;
+
+	if (!alloc_kmem_cache_cpus(s))
+		goto out;
 
 	/* Mutex is not taken during early boot */
-	if (slab_state <= UP)
-		return 0;
+	if (slab_state <= UP) {
+		err = 0;
+		goto out;
+	}
 
 	err = sysfs_slab_add(s);
-	if (err) {
-		__kmem_cache_release(s);
-		return err;
-	}
+	if (err)
+		goto out;
 
 	if (s->flags & SLAB_STORE_USER)
 		debugfs_slab_add(s);
 
-	return 0;
+out:
+	if (err)
+		__kmem_cache_release(s);
+	return err;
 }
 
 #ifdef SLAB_SUPPORTS_SYSFS