From patchwork Mon Aug 26 16:04:13 2024
From: Christian Brauner
To: Vlastimil Babka, Jens Axboe, "Paul E. McKenney", Roman Gushchin,
	Linus Torvalds, Jann Horn
Cc: Christian Brauner, linux-mm@kvack.org
Subject: [PATCH] [RFC] mm: add kmem_cache_create_rcu()
Date: Mon, 26 Aug 2024 18:04:13 +0200
Message-ID: <20240826-okkupieren-nachdenken-d88ac627e9bc@brauner>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240826-zweihundert-luftkammer-9b2b47f4338e@brauner>
References: <20240826-zweihundert-luftkammer-9b2b47f4338e@brauner>
MIME-Version: 1.0
When a kmem cache is created with SLAB_TYPESAFE_BY_RCU the free pointer
must be located outside of the object because we don't know what part
of the memory can safely be overwritten as it may be needed to prevent
object recycling.

That has the consequence that SLAB_TYPESAFE_BY_RCU may end up adding a
new cacheline. This is the case for, e.g., struct file.
After having it shrunk down by 40 bytes and having it fit in three
cachelines we still have SLAB_TYPESAFE_BY_RCU adding a fourth cacheline
because it needs to accommodate the free pointer and is hardware
cacheline aligned.

I tried to find ways to rectify this as struct file is pretty much
everywhere and having it use less memory is a good thing. So here's a
proposal that might be totally the wrong API and broken but I thought
I'd give it a try.

Signed-off-by: Christian Brauner
---
 fs/file_table.c      |  7 ++--
 include/linux/fs.h   |  1 +
 include/linux/slab.h |  4 +++
 mm/slab.h            |  1 +
 mm/slab_common.c     | 76 +++++++++++++++++++++++++++++++++++++-------
 mm/slub.c            | 22 +++++++++----
 6 files changed, 91 insertions(+), 20 deletions(-)

diff --git a/fs/file_table.c b/fs/file_table.c
index 694199a1a966..a69b8a71eacb 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -514,9 +514,10 @@ EXPORT_SYMBOL(__fput_sync);
 
 void __init files_init(void)
 {
-	filp_cachep = kmem_cache_create("filp", sizeof(struct file), 0,
-				SLAB_TYPESAFE_BY_RCU | SLAB_HWCACHE_ALIGN |
-				SLAB_PANIC | SLAB_ACCOUNT, NULL);
+	filp_cachep = kmem_cache_create_rcu("filp", sizeof(struct file),
+				offsetof(struct file, __f_slab_free_ptr),
+				SLAB_HWCACHE_ALIGN | SLAB_PANIC | SLAB_ACCOUNT,
+				NULL);
 
 	percpu_counter_init(&nr_files, 0, GFP_KERNEL);
 }
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 61097a9cf317..de509f5d1446 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -1057,6 +1057,7 @@ struct file {
 		struct callback_head	f_task_work;
 		struct llist_node	f_llist;
 		struct file_ra_state	f_ra;
+		void			*__f_slab_free_ptr;
 	};
 	/* --- cacheline 3 boundary (192 bytes) --- */
 } __randomize_layout
diff --git a/include/linux/slab.h b/include/linux/slab.h
index eb2bf4629157..fc3c3cc9f689 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -242,6 +242,10 @@ struct kmem_cache *kmem_cache_create_usercopy(const char *name,
 			slab_flags_t flags,
 			unsigned int useroffset, unsigned int usersize,
 			void (*ctor)(void *));
+struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
+					 unsigned int offset,
+					 slab_flags_t flags,
+					 void (*ctor)(void *));
 void kmem_cache_destroy(struct kmem_cache *s);
 int kmem_cache_shrink(struct kmem_cache *s);
diff --git a/mm/slab.h b/mm/slab.h
index dcdb56b8e7f5..122ca41fea34 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -261,6 +261,7 @@ struct kmem_cache {
 	unsigned int object_size;	/* Object size without metadata */
 	struct reciprocal_value reciprocal_size;
 	unsigned int offset;		/* Free pointer offset */
+	bool dedicated_offset;		/* Specific free pointer requested */
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	/* Number of per cpu partial objects to keep around */
 	unsigned int cpu_partial;
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 40b582a014b8..b6ca63859b3a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -202,10 +202,10 @@ struct kmem_cache *find_mergeable(unsigned int size, unsigned int align,
 }
 
 static struct kmem_cache *create_cache(const char *name,
-			unsigned int object_size, unsigned int align,
-			slab_flags_t flags, unsigned int useroffset,
-			unsigned int usersize, void (*ctor)(void *),
-			struct kmem_cache *root_cache)
+			unsigned int object_size, unsigned int offset,
+			unsigned int align, slab_flags_t flags,
+			unsigned int useroffset, unsigned int usersize,
+			void (*ctor)(void *), struct kmem_cache *root_cache)
 {
 	struct kmem_cache *s;
 	int err;
@@ -213,6 +213,10 @@ static struct kmem_cache *create_cache(const char *name,
 	if (WARN_ON(useroffset + usersize > object_size))
 		useroffset = usersize = 0;
 
+	if (WARN_ON(offset >= object_size ||
+		    (offset && !(flags & SLAB_TYPESAFE_BY_RCU))))
+		offset = 0;
+
 	err = -ENOMEM;
 	s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
 	if (!s)
@@ -226,6 +230,10 @@ static struct kmem_cache *create_cache(const char *name,
 	s->useroffset = useroffset;
 	s->usersize = usersize;
 #endif
+	if (offset > 0) {
+		s->offset = offset;
+		s->dedicated_offset = true;
+	}
 
 	err = __kmem_cache_create(s, flags);
 	if (err)
@@ -269,10 +277,10 @@ static struct kmem_cache *create_cache(const char *name,
  *
  * Return: a pointer to the cache on success, NULL on failure.
  */
-struct kmem_cache *
-kmem_cache_create_usercopy(const char *name,
-			unsigned int size, unsigned int align,
-			slab_flags_t flags,
+static struct kmem_cache *
+do_kmem_cache_create_usercopy(const char *name,
+			unsigned int size, unsigned int offset,
+			unsigned int align, slab_flags_t flags,
 			unsigned int useroffset, unsigned int usersize,
 			void (*ctor)(void *))
 {
@@ -332,7 +340,7 @@ kmem_cache_create_usercopy(const char *name,
 		goto out_unlock;
 	}
 
-	s = create_cache(cache_name, size,
+	s = create_cache(cache_name, size, offset,
 			 calculate_alignment(flags, align, size),
 			 flags, useroffset, usersize, ctor, NULL);
 	if (IS_ERR(s)) {
@@ -356,6 +364,16 @@ kmem_cache_create_usercopy(const char *name,
 	}
 	return s;
 }
+
+struct kmem_cache *
+kmem_cache_create_usercopy(const char *name, unsigned int size,
+			   unsigned int align, slab_flags_t flags,
+			   unsigned int useroffset, unsigned int usersize,
+			   void (*ctor)(void *))
+{
+	return do_kmem_cache_create_usercopy(name, size, 0, align, flags,
+					     useroffset, usersize, ctor);
+}
 EXPORT_SYMBOL(kmem_cache_create_usercopy);
 
 /**
@@ -387,11 +405,47 @@ struct kmem_cache *
 kmem_cache_create(const char *name, unsigned int size, unsigned int align,
 		slab_flags_t flags, void (*ctor)(void *))
 {
-	return kmem_cache_create_usercopy(name, size, align, flags, 0, 0,
-					  ctor);
+	return do_kmem_cache_create_usercopy(name, size, 0, align, flags, 0, 0,
+					     ctor);
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
+/**
+ * kmem_cache_create_rcu - Create a SLAB_TYPESAFE_BY_RCU cache.
+ * @name: A string which is used in /proc/slabinfo to identify this cache.
+ * @size: The size of objects to be created in this cache.
+ * @offset: The offset into the memory to the free pointer
+ * @flags: SLAB flags
+ * @ctor: A constructor for the objects.
+ *
+ * Cannot be called within a interrupt, but can be interrupted.
+ * The @ctor is run when new pages are allocated by the cache.
+ *
+ * The flags are
+ *
+ * %SLAB_POISON - Poison the slab with a known test pattern (a5a5a5a5)
+ * to catch references to uninitialised memory.
+ *
+ * %SLAB_RED_ZONE - Insert `Red` zones around the allocated memory to check
+ * for buffer overruns.
+ *
+ * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
+ * cacheline. This can be beneficial if you're counting cycles as closely
+ * as davem.
+ *
+ * Return: a pointer to the cache on success, NULL on failure.
+ */
+struct kmem_cache *kmem_cache_create_rcu(const char *name, unsigned int size,
+					 unsigned int offset,
+					 slab_flags_t flags,
+					 void (*ctor)(void *))
+{
+	return do_kmem_cache_create_usercopy(name, size, offset, 0,
+					     flags | SLAB_TYPESAFE_BY_RCU, 0, 0,
+					     ctor);
+}
+EXPORT_SYMBOL(kmem_cache_create_rcu);
+
 static struct kmem_cache *kmem_buckets_cache __ro_after_init;
 
 /**
diff --git a/mm/slub.c b/mm/slub.c
index c9d8a2497fd6..34eac3f9a46e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3926,7 +3926,7 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
 						   void *obj)
 {
 	if (unlikely(slab_want_init_on_free(s)) && obj &&
-	    !freeptr_outside_object(s))
+	    !freeptr_outside_object(s) && !s->dedicated_offset)
 		memset((void *)((char *)kasan_reset_tag(obj) + s->offset),
 		       0, sizeof(void *));
 }
@@ -5153,6 +5153,7 @@ static int calculate_sizes(struct kmem_cache *s)
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
 	unsigned int order;
+	bool must_use_freeptr_offset;
 
 	/*
 	 * Round up object size to the next word boundary. We can only
@@ -5189,9 +5190,12 @@ static int calculate_sizes(struct kmem_cache *s)
 	 */
 	s->inuse = size;
 
-	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) || s->ctor ||
-	    ((flags & SLAB_RED_ZONE) &&
-	     (s->object_size < sizeof(void *) || slub_debug_orig_size(s)))) {
+	must_use_freeptr_offset =
+		(flags & SLAB_POISON) || s->ctor ||
+		((flags & SLAB_RED_ZONE) &&
+		 (s->object_size < sizeof(void *) || slub_debug_orig_size(s)));
+
+	if ((flags & SLAB_TYPESAFE_BY_RCU) || must_use_freeptr_offset) {
 		/*
 		 * Relocate free pointer after the object if it is not
 		 * permitted to overwrite the first word of the object on
@@ -5208,8 +5212,13 @@ static int calculate_sizes(struct kmem_cache *s)
 		 * freeptr_outside_object() function. If that is no
 		 * longer true, the function needs to be modified.
 		 */
-		s->offset = size;
-		size += sizeof(void *);
+		if (!(flags & SLAB_TYPESAFE_BY_RCU) || must_use_freeptr_offset) {
+			s->offset = size;
+			size += sizeof(void *);
+			s->dedicated_offset = false;
+		} else {
+			s->dedicated_offset = true;
+		}
 	} else {
 		/*
 		 * Store freelist pointer near middle of object to keep
@@ -5301,6 +5310,7 @@ static int kmem_cache_open(struct kmem_cache *s, slab_flags_t flags)
 		if (get_order(s->size) > get_order(s->object_size)) {
 			s->flags &= ~DEBUG_METADATA_FLAGS;
 			s->offset = 0;
+			s->dedicated_offset = false;
 			if (!calculate_sizes(s))
 				goto error;
 		}