From patchwork Mon Nov 13 19:14:00 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13454374
From: Vlastimil Babka <vbabka@suse.cz>
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
    Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
    Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
    Shakeel Butt, Muchun Song, Kees Cook, kasan-dev@googlegroups.com,
    cgroups@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 19/20] mm/slub: optimize alloc fastpath code layout
Date: Mon, 13 Nov 2023 20:14:00 +0100
Message-ID: <20231113191340.17482-41-vbabka@suse.cz>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20231113191340.17482-22-vbabka@suse.cz>
References: <20231113191340.17482-22-vbabka@suse.cz>
With the allocation fastpaths no longer divided between two .c files, we
get better inlining. However, checking the disassembly of
kmem_cache_alloc() reveals we can still do better: make the fastpaths
smaller and move the less common situations out of line or into separate
functions, to reduce instruction cache pressure.

- split the memcg pre/post alloc hooks into inlined checks that use
  likely() to assume there will be no objcg handling necessary, and
  non-inline functions doing the actual handling (a sketch of this
  pattern follows the diffstat below)

- add some more likely()/unlikely() annotations to the pre/post alloc
  hooks to indicate which scenarios should be out of line

- change the gfp_allowed_mask handling in slab_post_alloc_hook() so the
  code can be optimized away when kasan/kmsan/kmemleak is configured out

bloat-o-meter shows:

add/remove: 4/2 grow/shrink: 1/8 up/down: 521/-2924 (-2403)
Function                                     old     new   delta
__memcg_slab_post_alloc_hook                   -     461    +461
kmem_cache_alloc_bulk                        775     791     +16
__pfx_should_failslab.constprop                -      16     +16
__pfx___memcg_slab_post_alloc_hook             -      16     +16
should_failslab.constprop                      -      12     +12
__pfx_memcg_slab_post_alloc_hook              16       -     -16
kmem_cache_alloc_lru                        1295    1023    -272
kmem_cache_alloc_node                       1118     817    -301
kmem_cache_alloc                            1076     772    -304
kmalloc_node_trace                          1149     838    -311
kmalloc_trace                               1102     789    -313
__kmalloc_node_track_caller                 1393    1080    -313
__kmalloc_node                              1397    1082    -315
__kmalloc                                   1374    1059    -315
memcg_slab_post_alloc_hook                   464       -    -464

Note that gcc still decided to inline __memcg_slab_pre_alloc_hook(), but
the code is out of line. Forcing noinline did not improve the results.

As a result the fastpaths are shorter and the overall code size is
reduced.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 89 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 54 insertions(+), 35 deletions(-)
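As a minimal illustration of the first bullet's pattern, here is a
hypothetical userspace sketch; pre_alloc(), slow_pre_alloc() and
struct cache are made-up stand-ins, not SLUB code:

#include <stdbool.h>

/* userspace stand-in for the kernel's branch-prediction hint */
#define likely(x)	__builtin_expect(!!(x), 1)

struct cache { bool account; };		/* hypothetical */

/* Rare, heavyweight path: kept out of every caller's body. */
static __attribute__((noinline)) bool slow_pre_alloc(struct cache *c)
{
	/* the objcg-style accounting work would live here */
	return true;
}

/*
 * Inlined fast path: only the cheap test ends up in the caller, and
 * likely() tells the compiler to lay the slow call out off the hot path.
 */
static inline bool pre_alloc(struct cache *c)
{
	if (likely(!c->account))
		return true;
	return likely(slow_pre_alloc(c));
}

The gfp_allowed_mask change relies on the same compiler behavior from
the other direction: the mask result now goes into a local consumed only
by the kasan/kmsan/kmemleak hooks, so when those are configured out into
empty stubs, the masking itself becomes dead code. Again a hypothetical
sketch (debug_hook() and the mask value are made up):

/* empty stub, as when KASAN/KMSAN/kmemleak are configured out */
static inline void debug_hook(unsigned int flags) { }

unsigned int post_alloc(unsigned int flags)
{
	/*
	 * init_flags feeds only the (empty) hook, so the compiler can
	 * delete both the call and the mask. Masking flags in place
	 * could not be removed, since flags is still needed afterwards.
	 */
	unsigned int init_flags = flags & 0x7fu;	/* hypothetical mask */

	debug_hook(init_flags);

	return flags;	/* stands in for the always-compiled memcg hook use */
}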
diff --git a/mm/slub.c b/mm/slub.c
index d2363b91d55c..7a40132b717a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1866,25 +1866,17 @@ static inline size_t obj_full_size(struct kmem_cache *s)
 /*
  * Returns false if the allocation should fail.
  */
-static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
-					     struct list_lru *lru,
-					     struct obj_cgroup **objcgp,
-					     size_t objects, gfp_t flags)
+static bool __memcg_slab_pre_alloc_hook(struct kmem_cache *s,
+					struct list_lru *lru,
+					struct obj_cgroup **objcgp,
+					size_t objects, gfp_t flags)
 {
-	struct obj_cgroup *objcg;
-
-	if (!memcg_kmem_online())
-		return true;
-
-	if (!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT))
-		return true;
-
 	/*
 	 * The obtained objcg pointer is safe to use within the current scope,
 	 * defined by current task or set_active_memcg() pair.
 	 * obj_cgroup_get() is used to get a permanent reference.
 	 */
-	objcg = current_obj_cgroup();
+	struct obj_cgroup *objcg = current_obj_cgroup();
 	if (!objcg)
 		return true;
 
@@ -1907,17 +1899,34 @@ static inline bool memcg_slab_pre_alloc_hook(struct kmem_cache *s,
 	return true;
 }
 
-static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
-					      struct obj_cgroup *objcg,
-					      gfp_t flags, size_t size,
-					      void **p)
+/*
+ * Returns false if the allocation should fail.
+ */
+static __fastpath_inline
+bool memcg_slab_pre_alloc_hook(struct kmem_cache *s, struct list_lru *lru,
+			       struct obj_cgroup **objcgp, size_t objects,
+			       gfp_t flags)
+{
+	if (!memcg_kmem_online())
+		return true;
+
+	if (likely(!(flags & __GFP_ACCOUNT) && !(s->flags & SLAB_ACCOUNT)))
+		return true;
+
+	return likely(__memcg_slab_pre_alloc_hook(s, lru, objcgp, objects,
+						  flags));
+}
+
+static void __memcg_slab_post_alloc_hook(struct kmem_cache *s,
+					 struct obj_cgroup *objcg,
+					 gfp_t flags, size_t size,
+					 void **p)
 {
 	struct slab *slab;
 	unsigned long off;
 	size_t i;
 
-	if (!memcg_kmem_online() || !objcg)
-		return;
+	flags &= gfp_allowed_mask;
 
 	for (i = 0; i < size; i++) {
 		if (likely(p[i])) {
@@ -1940,6 +1949,16 @@ static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
 	}
 }
 
+static __fastpath_inline
+void memcg_slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
+				gfp_t flags, size_t size, void **p)
+{
+	if (likely(!memcg_kmem_online() || !objcg))
+		return;
+
+	return __memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+}
+
 static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 					void **p, int objects)
 {
@@ -3709,34 +3728,34 @@ noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 }
 ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
 
-static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
-						     struct list_lru *lru,
-						     struct obj_cgroup **objcgp,
-						     size_t size, gfp_t flags)
+static __fastpath_inline
+struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
+				       struct list_lru *lru,
+				       struct obj_cgroup **objcgp,
+				       size_t size, gfp_t flags)
 {
 	flags &= gfp_allowed_mask;
 
 	might_alloc(flags);
 
-	if (should_failslab(s, flags))
+	if (unlikely(should_failslab(s, flags)))
 		return NULL;
 
-	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
+	if (unlikely(!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags)))
 		return NULL;
 
 	return s;
 }
 
-static inline void slab_post_alloc_hook(struct kmem_cache *s,
-					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init,
-					unsigned int orig_size)
+static __fastpath_inline
+void slab_post_alloc_hook(struct kmem_cache *s, struct obj_cgroup *objcg,
+			  gfp_t flags, size_t size, void **p, bool init,
+			  unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
 	bool kasan_init = init;
 	size_t i;
-
-	flags &= gfp_allowed_mask;
+	gfp_t init_flags = flags & gfp_allowed_mask;
 
 	/*
 	 * For kmalloc object, the allocated memory size(object_size) is likely
@@ -3769,13 +3788,13 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		p[i] = kasan_slab_alloc(s, p[i], init_flags, kasan_init);
 		if (p[i] && init && (!kasan_init ||
 				     !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
-					 s->flags, flags);
-		kmsan_slab_alloc(s, p[i], flags);
+					 s->flags, init_flags);
+		kmsan_slab_alloc(s, p[i], init_flags);
 	}
 
 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
@@ -3799,7 +3818,7 @@ static __fastpath_inline void *slab_alloc_node(struct kmem_cache *s, struct list
 	bool init = false;
 
 	s = slab_pre_alloc_hook(s, lru, &objcg, 1, gfpflags);
-	if (!s)
+	if (unlikely(!s))
 		return NULL;
 
 	object = kfence_alloc(s, orig_size, gfpflags);