From patchwork Wed Nov 29 09:53:27 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13472565
From: Vlastimil Babka
Date: Wed, 29 Nov 2023 10:53:27 +0100
Subject: [PATCH RFC v3 2/9] mm/slub: introduce __kmem_cache_free_bulk()
 without free hooks
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-Id: <20231129-slub-percpu-caches-v3-2-6bcf536772bc@suse.cz>
References: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
In-Reply-To: <20231129-slub-percpu-caches-v3-0-6bcf536772bc@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Matthew Wilcox, "Liam R. Howlett"
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, maple-tree@lists.infradead.org,
 kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

Currently, when __kmem_cache_alloc_bulk() fails, it frees back the
objects that were allocated before the failure, using
kmem_cache_free_bulk(). Because kmem_cache_free_bulk() calls the free
hooks (kasan etc.) and those expect objects processed by the post
alloc hooks, slab_post_alloc_hook() is called before
kmem_cache_free_bulk().

This is wasteful, although not a big concern in practice for the very
rare error path. But in order to efficiently handle percpu array batch
refill and free in the following patch, we will also need a variant of
kmem_cache_free_bulk() that avoids the free hooks. So introduce it
first and use it in the error path too.

As a consequence, __kmem_cache_alloc_bulk() no longer needs the objcg
parameter, so remove it.

Signed-off-by: Vlastimil Babka
---
 mm/slub.c | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f0cd55bb4e11..16748aeada8f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3919,6 +3919,27 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	return same;
 }
 
+/*
+ * Internal bulk free of objects that were not initialised by the post alloc
+ * hooks and thus should not be processed by the free hooks
+ */
+static void __kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
+{
+	if (!size)
+		return;
+
+	do {
+		struct detached_freelist df;
+
+		size = build_detached_freelist(s, size, p, &df);
+		if (!df.slab)
+			continue;
+
+		do_slab_free(df.s, df.slab, df.freelist, df.tail, df.cnt,
+			     _RET_IP_);
+	} while (likely(size));
+}
+
 /* Note that interrupts must be enabled when calling this function. */
 void kmem_cache_free_bulk(struct kmem_cache *s, size_t size, void **p)
 {
@@ -3940,7 +3961,7 @@ EXPORT_SYMBOL(kmem_cache_free_bulk);
 
 #ifndef CONFIG_SLUB_TINY
 static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+			size_t size, void **p)
 {
 	struct kmem_cache_cpu *c;
 	unsigned long irqflags;
@@ -4004,14 +4025,13 @@ static inline int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 
 }
 #else /* CONFIG_SLUB_TINY */
 static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
-			size_t size, void **p, struct obj_cgroup *objcg)
+			size_t size, void **p)
 {
 	int i;
 
@@ -4034,8 +4054,7 @@ static int __kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags,
 	return i;
 
 error:
-	slab_post_alloc_hook(s, objcg, flags, i, p, false, s->object_size);
-	kmem_cache_free_bulk(s, i, p);
+	__kmem_cache_free_bulk(s, i, p);
 	return 0;
 }
 #endif /* CONFIG_SLUB_TINY */
@@ -4055,7 +4074,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	if (unlikely(!s))
 		return 0;
 
-	i = __kmem_cache_alloc_bulk(s, flags, size, p, objcg);
+	i = __kmem_cache_alloc_bulk(s, flags, size, p);
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
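For readers outside mm/: the idea in this patch — a hook-less internal
bulk free used when partially allocated objects never went through the
post-alloc hooks — can be sketched in plain userspace C. Everything
below (internal_free_bulk, free_bulk_with_hooks, alloc_bulk, the hook
counter) is a hypothetical stand-in, not the kernel API; it only
illustrates why the error path may skip the free hooks.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

static int free_hook_calls;	/* counts "free hook" invocations */

/* Stand-in for the free hooks (kasan etc.), which expect objects that
 * already passed through the post-alloc hooks. */
static void free_hook(void *obj)
{
	(void)obj;
	free_hook_calls++;
}

/* Analogue of __kmem_cache_free_bulk(): release the objects but
 * deliberately skip the free hooks. */
static void internal_free_bulk(size_t size, void **p)
{
	for (size_t i = 0; i < size; i++)
		free(p[i]);
}

/* Analogue of kmem_cache_free_bulk(): run the free hook on each
 * object, then release it. */
static void free_bulk_with_hooks(size_t size, void **p)
{
	for (size_t i = 0; i < size; i++) {
		free_hook(p[i]);
		free(p[i]);
	}
}

/* Bulk allocation that fails after `fail_after` objects. The objects
 * allocated before the failure never saw the post-alloc hooks, so they
 * are released with the hook-less variant and 0 is returned. */
static size_t alloc_bulk(size_t size, void **p, size_t fail_after)
{
	for (size_t i = 0; i < size; i++) {
		if (i == fail_after) {
			internal_free_bulk(i, p);	/* no free hooks run */
			return 0;
		}
		p[i] = malloc(16);
	}
	/* the post-alloc hooks would run here, only on full success */
	return size;
}
```

On a partial failure, alloc_bulk() returns 0 and free_hook_calls stays
at 0; only objects handed to the caller (and later returned through
free_bulk_with_hooks()) see the hooks — mirroring the pairing of
slab_post_alloc_hook() with the free hooks that the commit message
describes.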