From patchwork Mon Dec 4 19:34:43 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vlastimil Babka
X-Patchwork-Id: 13479023
From: Vlastimil Babka <vbabka@suse.cz>
Date: Mon, 04 Dec 2023 20:34:43 +0100
Subject: [PATCH 4/4] mm/slub: free KFENCE objects in slab_free_hook()
Message-Id: <20231204-slub-cleanup-hooks-v1-4-88b65f7cd9d5@suse.cz>
References: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
In-Reply-To: <20231204-slub-cleanup-hooks-v1-0-88b65f7cd9d5@suse.cz>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim
Cc: Andrew Morton, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
 Alexander Potapenko, Marco Elver, Dmitry Vyukov, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kasan-dev@googlegroups.com, Vlastimil Babka
X-Mailer: b4 0.12.4

When freeing an object that was allocated from KFENCE, we do that in the
slowpath __slab_free(), relying on the fact that a KFENCE "slab" cannot be
the cpu slab, so the fastpath has to fall back to the slowpath.

This optimization doesn't help much though, because is_kfence_address() is
checked earlier anyway during the free hook processing or detached freelist
building. Thus we can simplify the code by making slab_free_hook() free the
KFENCE object immediately, similarly to KASAN quarantine.

In slab_free_hook() we can place kfence_free() above init processing, as
callers have been making sure to set init to false for KFENCE objects. This
simplifies slab_free(). It also places kfence_free() above kasan_slab_free(),
which is fine as that skips KFENCE objects anyway.

While at it, also determine the init value in slab_free_freelist_hook()
outside of the loop.

This change will also make introducing per cpu array caches easier.
Tested-by: Marco Elver
Signed-off-by: Vlastimil Babka
Reviewed-by: Chengming Zhou
---
 mm/slub.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index ed2fa92e914c..e38c2b712f6c 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2039,7 +2039,7 @@ static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
  * production configuration these hooks all should produce no code at all.
  *
  * Returns true if freeing of the object can proceed, false if its reuse
- * was delayed by KASAN quarantine.
+ * was delayed by KASAN quarantine, or it was returned to KFENCE.
  */
 static __always_inline
 bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
@@ -2057,6 +2057,9 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 		__kcsan_check_access(x, s->object_size,
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
+	if (kfence_free(kasan_reset_tag(x)))
+		return false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_free and initialization memset's must be
@@ -2086,23 +2089,25 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s,
 	void *object;
 	void *next = *head;
 	void *old_tail = *tail;
+	bool init;
 
 	if (is_kfence_address(next)) {
 		slab_free_hook(s, next, false);
-		return true;
+		return false;
 	}
 
 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
 	*tail = NULL;
 
+	init = slab_want_init_on_free(s);
+
 	do {
 		object = next;
 		next = get_freepointer(s, object);
 
 		/* If object's reuse doesn't have to be delayed */
-		if (likely(slab_free_hook(s, object,
-					  slab_want_init_on_free(s)))) {
+		if (likely(slab_free_hook(s, object, init))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
@@ -4103,9 +4108,6 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
 
 	stat(s, FREE_SLOWPATH);
 
-	if (kfence_free(head))
-		return;
-
 	if (IS_ENABLED(CONFIG_SLUB_TINY) || kmem_cache_debug(s)) {
 		free_to_partial_list(s, slab, head, tail, cnt, addr);
 		return;
@@ -4290,13 +4292,9 @@ static __fastpath_inline
 void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
 {
-	bool init;
-
 	memcg_slab_free_hook(s, slab, &object, 1);
 
-	init = !is_kfence_address(object) && slab_want_init_on_free(s);
-
-	if (likely(slab_free_hook(s, object, init)))
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
 		do_slab_free(s, slab, object, object, 1, addr);
 }
 
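
Not part of the patch, just an illustration of the ordering the hook follows
after this change: a minimal stand-alone user-space C sketch where all toy_*
names are invented stubs standing in for kfence_free(), kasan_slab_free() and
the init-on-free memset. A return of false from the hook means the caller must
not put the object on a freelist, either because KFENCE took it back or
because KASAN delayed its reuse.

/*
 * Toy model of the post-patch slab_free_hook() ordering:
 * KFENCE check first, then init-on-free, then the KASAN check.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct toy_cache { size_t object_size; };

/* Stub: pretend objects at odd addresses belong to KFENCE. */
static bool toy_kfence_free(void *x)
{
	return ((uintptr_t)x & 1) != 0;
}

/* Stub: pretend KASAN never quarantines anything in this model. */
static bool toy_kasan_slab_free(void *x, bool init)
{
	(void)x;
	(void)init;
	return false;
}

/* Mirrors the new order: KFENCE first, then init, then KASAN. */
static bool toy_slab_free_hook(struct toy_cache *s, void *x, bool init)
{
	if (toy_kfence_free(x))
		return false;	/* object went back to "KFENCE"; don't reuse it */

	if (init)
		memset(x, 0, s->object_size);

	return !toy_kasan_slab_free(x, init);
}

int main(void)
{
	struct toy_cache cache = { .object_size = 8 };
	long long storage[2];		/* evenly aligned backing memory */
	char *obj = (char *)storage;

	/* Regular object: the hook lets the free proceed (prints 1). */
	printf("regular object, free proceeds: %d\n",
	       toy_slab_free_hook(&cache, obj, true));

	/* "KFENCE" object (odd address here), init forced to false as in the
	 * callers: the hook frees it itself (prints 0). */
	printf("kfence object, free proceeds: %d\n",
	       toy_slab_free_hook(&cache, obj + 1, false));

	return 0;
}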