From patchwork Wed Jul 3 07:25:21 2024
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13721480
From: Vlastimil Babka
To: linux-mm@kvack.org, David Rientjes, Christoph Lameter
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin, Kees Cook, Alice Ryhl, Boqun Feng,
    rust-for-linux@vger.kernel.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev, Vlastimil Babka
Subject: [PATCH v2] slab, rust: extend kmalloc() alignment guarantees to remove Rust padding
Date: Wed, 3 Jul 2024 09:25:21 +0200
Message-ID: <20240703072520.45837-2-vbabka@suse.cz>
X-Mailer: git-send-email 2.45.2
MIME-Version: 1.0
Slab allocators have been guaranteeing natural alignment for
power-of-two sizes since commit 59bb47985c1d ("mm, sl[aou]b: guarantee
natural alignment for kmalloc(power-of-two)"), while any other sizes
are guaranteed to be aligned only to ARCH_KMALLOC_MINALIGN bytes
(although in practice they are aligned more than that in non-debug
scenarios).

Rust's allocator API specifies size and alignment per allocation, which
have to satisfy the following rules, per Alice Ryhl [1]:

1. The alignment is a power of two.
2. The size is non-zero.
3. When you round up the size to the next multiple of the alignment,
   then it must not overflow the signed type isize / ssize_t.

In order to map this to kmalloc()'s guarantees, some requested
allocation sizes have to be padded to the next power-of-two size [2].
For example, an allocation of size 96 and alignment of 32 will be
padded to an allocation of size 128, because the existing kmalloc-96
bucket doesn't guarantee alignment above ARCH_KMALLOC_MINALIGN. Without
slab debugging active, the layout of the kmalloc-96 slabs however
naturally aligns the objects to 32 bytes, so extending the size to 128
bytes is wasteful.

To improve the situation we can extend the kmalloc() alignment
guarantees in a way that

1) doesn't change the current slab layout (and thus does not increase
   internal fragmentation) when slab debugging is not active
2) reduces waste in the Rust allocator use case
3) is a superset of the current guarantee for power-of-two sizes

The extended guarantee is that alignment is at least the largest
power-of-two divisor of the requested size. For power-of-two sizes the
largest divisor is the size itself, but let's keep this case documented
separately for clarity. For current kmalloc size buckets, it means
kmalloc-96 will guarantee alignment of 32 bytes and kmalloc-192 will
guarantee 64 bytes.
This covers rules 1 and 2 above of Rust's API as long as the size is a
multiple of the alignment. The Rust layer should now only need to round
up the size to the next multiple if it isn't, while enforcing rule 3.

Implementation-wise, this changes the alignment calculation in
create_boot_cache(). While at it, also do the calculation only for
caches with the SLAB_KMALLOC flag, because the function is also used to
create the initial kmem_cache and kmem_cache_node caches, where no
alignment guarantee is necessary.

In the Rust allocator's krealloc_aligned(), remove the code that padded
sizes to the next power of two (suggested by Alice Ryhl) as it's no
longer necessary with the new guarantees.

Reported-by: Alice Ryhl
Reported-by: Boqun Feng
Link: https://lore.kernel.org/all/CAH5fLggjrbdUuT-H-5vbQfMazjRDpp2%2Bk3%3DYhPyS17ezEqxwcw@mail.gmail.com/ [1]
Link: https://lore.kernel.org/all/CAH5fLghsZRemYUwVvhk77o6y1foqnCeDzW4WZv6ScEWna2+_jw@mail.gmail.com/ [2]
Signed-off-by: Vlastimil Babka
Reviewed-by: Boqun Feng
Acked-by: Roman Gushchin
Reviewed-by: Alice Ryhl
---
v2:
- add Rust side change as suggested by Alice, also thanks Boqun for fixups
- clarify that the alignment already existed (unless debugging) but was
  not guaranteed, so there's no extra fragmentation in slab
- add r-b, a-b, thanks to Boqun and Roman

If it's fine with Rust folks, I can put this in the slab.git tree.

 Documentation/core-api/memory-allocation.rst |  6 ++++--
 include/linux/slab.h                         |  3 ++-
 mm/slab_common.c                             |  9 +++++----
 rust/kernel/alloc/allocator.rs               | 19 ++++++-------------
 4 files changed, 17 insertions(+), 20 deletions(-)

diff --git a/Documentation/core-api/memory-allocation.rst b/Documentation/core-api/memory-allocation.rst
index 1c58d883b273..8b84eb4bdae7 100644
--- a/Documentation/core-api/memory-allocation.rst
+++ b/Documentation/core-api/memory-allocation.rst
@@ -144,8 +144,10 @@ configuration, but it is a good practice to use `kmalloc` for objects
 smaller than page size.
 The address of a chunk allocated with `kmalloc` is aligned to at least
-ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
-alignment is also guaranteed to be at least the respective size.
+ARCH_KMALLOC_MINALIGN bytes. For sizes which are a power of two, the
+alignment is also guaranteed to be at least the respective size. For other
+sizes, the alignment is guaranteed to be at least the largest power-of-two
+divisor of the size.
 
 Chunks allocated with kmalloc() can be resized with krealloc(). Similarly
 to kmalloc_array(): a helper for resizing arrays is provided in the form of

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ed6bee5ec2b6..640cea6e6323 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -604,7 +604,8 @@ void *__kmalloc_large_node_noprof(size_t size, gfp_t flags, int node)
  *
  * The allocated object address is aligned to at least ARCH_KMALLOC_MINALIGN
  * bytes. For @size of power of two bytes, the alignment is also guaranteed
- * to be at least to the size.
+ * to be at least to the size. For other sizes, the alignment is guaranteed to
+ * be at least the largest power-of-two divisor of @size.
  *
  * The @flags argument may be one of the GFP flags defined at
  * include/linux/gfp_types.h and described at

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1560a1546bb1..7272ef7bc55f 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -617,11 +617,12 @@ void __init create_boot_cache(struct kmem_cache *s, const char *name,
 	s->size = s->object_size = size;
 
 	/*
-	 * For power of two sizes, guarantee natural alignment for kmalloc
-	 * caches, regardless of SL*B debugging options.
+	 * kmalloc caches guarantee alignment of at least the largest
+	 * power-of-two divisor of the size. For power-of-two sizes,
+	 * it is the size itself.
 	 */
-	if (is_power_of_2(size))
-		align = max(align, size);
+	if (flags & SLAB_KMALLOC)
+		align = max(align, 1U << (ffs(size) - 1));
 	s->align = calculate_alignment(flags, align, size);
 
 #ifdef CONFIG_HARDENED_USERCOPY

diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 229642960cd1..e6ea601f38c6 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -18,23 +18,16 @@ pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: F
     // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
     let layout = new_layout.pad_to_align();
 
-    let mut size = layout.size();
-
-    if layout.align() > bindings::ARCH_SLAB_MINALIGN {
-        // The alignment requirement exceeds the slab guarantee, thus try to enlarge the size
-        // to use the "power-of-two" size/alignment guarantee (see comments in `kmalloc()` for
-        // more information).
-        //
-        // Note that `layout.size()` (after padding) is guaranteed to be a multiple of
-        // `layout.align()`, so `next_power_of_two` gives enough alignment guarantee.
-        size = size.next_power_of_two();
-    }
+    // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`
+    // which together with the slab guarantees means the `krealloc` will return a properly aligned
+    // object (see comments in `kmalloc()` for more information).
+    let size = layout.size();
 
     // SAFETY:
     // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the
     //   function safety requirement.
-    // - `size` is greater than 0 since it's either a `layout.size()` (which cannot be zero
-    //   according to the function safety requirement) or a result from `next_power_of_two()`.
+    // - `size` is greater than 0 since it's from `layout.size()` (which cannot be zero according
+    //   to the function safety requirement)
     unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 }
 }