From patchwork Mon Nov 20 18:34:27 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13461889
From: Vlastimil Babka
Date: Mon, 20 Nov 2023 19:34:27 +0100
Subject: [PATCH v2 16/21] mm/slab: move kfree() from slab_common.c to slub.c
MIME-Version: 1.0
Message-Id: <20231120-slab-remove-slab-v2-16-9c9c70177183@suse.cz>
References: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
In-Reply-To: <20231120-slab-remove-slab-v2-0-9c9c70177183@suse.cz>
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
    Shakeel Butt, Muchun Song, Kees Cook, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com,
    cgroups@vger.kernel.org, linux-hardening@vger.kernel.org,
    Vlastimil Babka
X-Mailer: b4 0.12.4

This should result in better code. Currently kfree() makes a function call
between compilation units to __kmem_cache_free(), which does its own
virt_to_slab(), throwing away the struct slab pointer we already had in
kfree(). Now it can be reused. Additionally kfree() can now inline the
whole SLUB freeing fastpath.

Also move over free_large_kmalloc(), as the only callsites are now in
slub.c, and make it static.

Reviewed-by: Kees Cook
Signed-off-by: Vlastimil Babka
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
 mm/slab.h        |  4 ----
 mm/slab_common.c | 45 ---------------------------------------------
 mm/slub.c        | 51 ++++++++++++++++++++++++++++++++++++++++++++++-----
 3 files changed, 46 insertions(+), 54 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 5ae6a978e9c2..35a55c4a407d 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -395,8 +395,6 @@ struct kmem_cache *kmalloc_slab(size_t size, gfp_t flags, unsigned long caller);
 void *__kmem_cache_alloc_node(struct kmem_cache *s, gfp_t gfpflags,
 			      int node, size_t orig_size,
 			      unsigned long caller);
-void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller);
-
 gfp_t kmalloc_fix_flags(gfp_t flags);
 
 /* Functions provided by the slab allocators */
@@ -559,8 +557,6 @@ static inline int memcg_alloc_slab_cgroups(struct slab *slab,
 }
 #endif /* CONFIG_MEMCG_KMEM */
 
-void free_large_kmalloc(struct folio *folio, void *object);
-
 size_t __ksize(const void *objp);
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index bbc2e3f061f1..f4f275613d2a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -963,22 +963,6 @@ void __init create_kmalloc_caches(slab_flags_t flags)
 	slab_state = UP;
 }
 
-void free_large_kmalloc(struct folio *folio, void *object)
-{
-	unsigned int order = folio_order(folio);
-
-	if (WARN_ON_ONCE(order == 0))
-		pr_warn_once("object pointer: 0x%p\n", object);
-
-	kmemleak_free(object);
-	kasan_kfree_large(object);
-	kmsan_kfree_large(object);
-
-	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
-			      -(PAGE_SIZE << order));
-	__free_pages(folio_page(folio, 0), order);
-}
-
 static void *__kmalloc_large_node(size_t size, gfp_t flags, int node);
 
 static __always_inline
 void *__do_kmalloc_node(size_t size, gfp_t flags, int node, unsigned long caller)
@@ -1023,35 +1007,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 }
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 
-/**
- * kfree - free previously allocated memory
- * @object: pointer returned by kmalloc() or kmem_cache_alloc()
- *
- * If @object is NULL, no operation is performed.
- */
-void kfree(const void *object)
-{
-	struct folio *folio;
-	struct slab *slab;
-	struct kmem_cache *s;
-
-	trace_kfree(_RET_IP_, object);
-
-	if (unlikely(ZERO_OR_NULL_PTR(object)))
-		return;
-
-	folio = virt_to_folio(object);
-	if (unlikely(!folio_test_slab(folio))) {
-		free_large_kmalloc(folio, (void *)object);
-		return;
-	}
-
-	slab = folio_slab(folio);
-	s = slab->slab_cache;
-	__kmem_cache_free(s, (void *)object, _RET_IP_);
-}
-EXPORT_SYMBOL(kfree);
-
 /**
  * __ksize -- Report full size of underlying allocation
  * @object: pointer to the object
diff --git a/mm/slub.c b/mm/slub.c
index cc801f8258fe..2baa9e94d9df 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4197,11 +4197,6 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
 	return cachep;
 }
 
-void __kmem_cache_free(struct kmem_cache *s, void *x, unsigned long caller)
-{
-	slab_free(s, virt_to_slab(x), x, NULL, &x, 1, caller);
-}
-
 /**
  * kmem_cache_free - Deallocate an object
  * @s: The cache the allocation was from.
@@ -4220,6 +4215,52 @@ void kmem_cache_free(struct kmem_cache *s, void *x)
 }
 EXPORT_SYMBOL(kmem_cache_free);
 
+static void free_large_kmalloc(struct folio *folio, void *object)
+{
+	unsigned int order = folio_order(folio);
+
+	if (WARN_ON_ONCE(order == 0))
+		pr_warn_once("object pointer: 0x%p\n", object);
+
+	kmemleak_free(object);
+	kasan_kfree_large(object);
+	kmsan_kfree_large(object);
+
+	mod_lruvec_page_state(folio_page(folio, 0), NR_SLAB_UNRECLAIMABLE_B,
+			      -(PAGE_SIZE << order));
+	__free_pages(folio_page(folio, 0), order);
+}
+
+/**
+ * kfree - free previously allocated memory
+ * @object: pointer returned by kmalloc() or kmem_cache_alloc()
+ *
+ * If @object is NULL, no operation is performed.
+ */
+void kfree(const void *object)
+{
+	struct folio *folio;
+	struct slab *slab;
+	struct kmem_cache *s;
+	void *x = (void *)object;
+
+	trace_kfree(_RET_IP_, object);
+
+	if (unlikely(ZERO_OR_NULL_PTR(object)))
+		return;
+
+	folio = virt_to_folio(object);
+	if (unlikely(!folio_test_slab(folio))) {
+		free_large_kmalloc(folio, (void *)object);
+		return;
+	}
+
+	slab = folio_slab(folio);
+	s = slab->slab_cache;
+	slab_free(s, slab, x, NULL, &x, 1, _RET_IP_);
+}
+EXPORT_SYMBOL(kfree);
+
 struct detached_freelist {
 	struct slab *slab;
 	void *tail;