From patchwork Mon Nov 13 19:13:53 2023
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13454365
From: Vlastimil Babka
To: David Rientjes, Christoph Lameter, Pekka Enberg, Joonsoo Kim
Cc: Andrew Morton, Hyeonggon Yoo <42.hyeyoo@gmail.com>, Roman Gushchin,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, patches@lists.linux.dev,
 Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Marco Elver, Johannes Weiner, Michal Hocko,
 Shakeel Butt, Muchun Song, Kees Cook, kasan-dev@googlegroups.com,
 cgroups@vger.kernel.org, Vlastimil Babka
Subject: [PATCH 12/20] mm/slab: move pre/post-alloc hooks from slab.h to slub.c
Date: Mon, 13 Nov 2023 20:13:53 +0100
Message-ID: <20231113191340.17482-34-vbabka@suse.cz>
X-Mailer: git-send-email 2.42.1
In-Reply-To: <20231113191340.17482-22-vbabka@suse.cz>
References: <20231113191340.17482-22-vbabka@suse.cz>
MIME-Version: 1.0
We don't share the hooks between two slab implementations anymore so they
can be moved away from the header. As part of the move, also move
should_failslab() from slab_common.c as the pre_alloc hook uses it.

This means slab.h can stop including fault-inject.h and kmemleak.h. Fix up
some files that were depending on the includes transitively.
Signed-off-by: Vlastimil Babka
Reviewed-by: Kees Cook
---
 mm/kasan/report.c |  1 +
 mm/memcontrol.c   |  1 +
 mm/slab.h         | 72 -----------------------------------------
 mm/slab_common.c  |  8 +----
 mm/slub.c         | 81 +++++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 84 insertions(+), 79 deletions(-)

diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index e77facb62900..011f727bfaff 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 947fb50eba31..8a0603517065 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -64,6 +64,7 @@
 #include
 #include
 #include
+#include
 #include "internal.h"
 #include
 #include
diff --git a/mm/slab.h b/mm/slab.h
index c278f8b15251..aad18992269f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -9,8 +9,6 @@
 #include
 #include
 #include
-#include
-#include
 #include
 #include
@@ -795,76 +793,6 @@ static inline size_t slab_ksize(const struct kmem_cache *s)
 	return s->size;
 }
 
-static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
-						     struct list_lru *lru,
-						     struct obj_cgroup **objcgp,
-						     size_t size, gfp_t flags)
-{
-	flags &= gfp_allowed_mask;
-
-	might_alloc(flags);
-
-	if (should_failslab(s, flags))
-		return NULL;
-
-	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
-		return NULL;
-
-	return s;
-}
-
-static inline void slab_post_alloc_hook(struct kmem_cache *s,
-					struct obj_cgroup *objcg, gfp_t flags,
-					size_t size, void **p, bool init,
-					unsigned int orig_size)
-{
-	unsigned int zero_size = s->object_size;
-	bool kasan_init = init;
-	size_t i;
-
-	flags &= gfp_allowed_mask;
-
-	/*
-	 * For kmalloc object, the allocated memory size(object_size) is likely
-	 * larger than the requested size(orig_size). If redzone check is
-	 * enabled for the extra space, don't zero it, as it will be redzoned
-	 * soon. The redzone operation for this extra space could be seen as a
-	 * replacement of current poisoning under certain debug option, and
-	 * won't break other sanity checks.
-	 */
-	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
-	    (s->flags & SLAB_KMALLOC))
-		zero_size = orig_size;
-
-	/*
-	 * When slub_debug is enabled, avoid memory initialization integrated
-	 * into KASAN and instead zero out the memory via the memset below with
-	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
-	 * cause false-positive reports. This does not lead to a performance
-	 * penalty on production builds, as slub_debug is not intended to be
-	 * enabled there.
-	 */
-	if (__slub_debug_enabled())
-		kasan_init = false;
-
-	/*
-	 * As memory initialization might be integrated into KASAN,
-	 * kasan_slab_alloc and initialization memset must be
-	 * kept together to avoid discrepancies in behavior.
-	 *
-	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
-	 */
-	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
-		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
-			memset(p[i], 0, zero_size);
-		kmemleak_alloc_recursive(p[i], s->object_size, 1,
-					 s->flags, flags);
-		kmsan_slab_alloc(s, p[i], flags);
-	}
-
-	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
-}
 
 /*
  * The slab lists for all objects.
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 63b8411db7ce..bbc2e3f061f1 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -21,6 +21,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1470,10 +1471,3 @@ EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
 EXPORT_TRACEPOINT_SYMBOL(kfree);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_free);
 
-int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
-{
-	if (__should_failslab(s, gfpflags))
-		return -ENOMEM;
-	return 0;
-}
-ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
diff --git a/mm/slub.c b/mm/slub.c
index 64170a1ccbba..e15912d1f6ed 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3494,6 +3495,86 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
			0, sizeof(void *));
 }
 
+noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
+{
+	if (__should_failslab(s, gfpflags))
+		return -ENOMEM;
+	return 0;
+}
+ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
+
+static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
+						     struct list_lru *lru,
+						     struct obj_cgroup **objcgp,
+						     size_t size, gfp_t flags)
+{
+	flags &= gfp_allowed_mask;
+
+	might_alloc(flags);
+
+	if (should_failslab(s, flags))
+		return NULL;
+
+	if (!memcg_slab_pre_alloc_hook(s, lru, objcgp, size, flags))
+		return NULL;
+
+	return s;
+}
+
+static inline void slab_post_alloc_hook(struct kmem_cache *s,
+					struct obj_cgroup *objcg, gfp_t flags,
+					size_t size, void **p, bool init,
+					unsigned int orig_size)
+{
+	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
+	size_t i;
+
+	flags &= gfp_allowed_mask;
+
+	/*
+	 * For kmalloc object, the allocated memory size(object_size) is likely
+	 * larger than the requested size(orig_size). If redzone check is
+	 * enabled for the extra space, don't zero it, as it will be redzoned
+	 * soon. The redzone operation for this extra space could be seen as a
+	 * replacement of current poisoning under certain debug option, and
+	 * won't break other sanity checks.
+	 */
+	if (kmem_cache_debug_flags(s, SLAB_STORE_USER | SLAB_RED_ZONE) &&
+	    (s->flags & SLAB_KMALLOC))
+		zero_size = orig_size;
+
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
+	/*
+	 * As memory initialization might be integrated into KASAN,
+	 * kasan_slab_alloc and initialization memset must be
+	 * kept together to avoid discrepancies in behavior.
+	 *
+	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
+	 */
+	for (i = 0; i < size; i++) {
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init ||
+				     !kasan_has_integrated_init()))
+			memset(p[i], 0, zero_size);
+		kmemleak_alloc_recursive(p[i], s->object_size, 1,
+					 s->flags, flags);
+		kmsan_slab_alloc(s, p[i], flags);
+	}
+
+	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
+}
+
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call