From patchwork Fri Aug 9 15:36:55 2024
From: Jann Horn
Date: Fri, 09 Aug 2024 17:36:55 +0200
Subject: [PATCH v8 1/2] kasan: catch invalid free before SLUB reinitializes the object
Message-Id: <20240809-kasan-tsbrcu-v8-1-aef4593f9532@google.com>
References: <20240809-kasan-tsbrcu-v8-0-aef4593f9532@google.com>
In-Reply-To: <20240809-kasan-tsbrcu-v8-0-aef4593f9532@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Marco Elver, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, David Sterba, Jann Horn
X-Mailer: b4 0.15-dev

Currently, when KASAN is combined with init-on-free behavior, the
initialization happens before KASAN's "invalid free" checks.

More importantly, a subsequent commit will want to RCU-delay the actual
SLUB freeing of an object, and we'd like KASAN to still validate
synchronously that freeing the object is permitted. (Otherwise this
change will make the existing testcase kmem_cache_invalid_free fail.)

So add a new KASAN hook that allows KASAN to pre-validate a
kmem_cache_free() operation before SLUB actually starts modifying the
object or its metadata.

Inside KASAN, this:

 - moves checks from poison_slab_object() into check_slab_allocation()
 - moves kasan_arch_is_ready() up into callers of poison_slab_object()
 - removes the "ip" argument of poison_slab_object() and
   __kasan_slab_free() (since those functions no longer do any reporting)

Acked-by: Vlastimil Babka #slub
Reviewed-by: Andrey Konovalov
Signed-off-by: Jann Horn
---
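As a rough sketch of the resulting free-path ordering (a condensed
illustration only, not the actual mm/slub.c code -- see slab_free_hook()
in the diff below for the real thing):

static bool sketch_slab_free_hook(struct kmem_cache *s, void *x, bool init)
{
	/*
	 * 1. Validate the free while the object is still untouched, so
	 *    KASAN can report an invalid free or double-free against the
	 *    intact object data and metadata.
	 */
	if (kasan_slab_pre_free(s, x))
		return false;	/* buggy free: keep x off the freelist */

	/*
	 * 2. Only now may init-on-free scrub the object; the pre-free
	 *    check above has already run, so nothing is lost.
	 */
	if (init)
		memset(kasan_reset_tag(x), 0, s->object_size);

	/* 3. Poison/quarantine; true means KASAN took ownership of x. */
	return !kasan_slab_free(s, x, init);
}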
 include/linux/kasan.h | 54 ++++++++++++++++++++++++++++++++++++++++++---
 mm/kasan/common.c     | 61 ++++++++++++++++++++++++++++++---------------------
 mm/slub.c             |  7 ++++++
 3 files changed, 94 insertions(+), 28 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 70d6a8f6e25d..1570c7191176 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -172,19 +172,61 @@ static __always_inline void * __must_check kasan_init_slab_obj(
 {
 	if (kasan_enabled())
 		return __kasan_init_slab_obj(cache, object);
 	return (void *)object;
 }
 
-bool __kasan_slab_free(struct kmem_cache *s, void *object,
-			unsigned long ip, bool init);
+bool __kasan_slab_pre_free(struct kmem_cache *s, void *object,
+			   unsigned long ip);
+/**
+ * kasan_slab_pre_free - Check whether freeing a slab object is safe.
+ * @object: Object to be freed.
+ *
+ * This function checks whether freeing the given object is safe. It may
+ * check for double-free and invalid-free bugs and report them.
+ *
+ * This function is intended only for use by the slab allocator.
+ *
+ * @Return true if freeing the object is unsafe; false otherwise.
+ */
+static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
+						void *object)
+{
+	if (kasan_enabled())
+		return __kasan_slab_pre_free(s, object, _RET_IP_);
+	return false;
+}
+
+bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init);
+/**
+ * kasan_slab_free - Poison, initialize, and quarantine a slab object.
+ * @object: Object to be freed.
+ * @init: Whether to initialize the object.
+ *
+ * This function informs that a slab object has been freed and is not
+ * supposed to be accessed anymore, except for objects in
+ * SLAB_TYPESAFE_BY_RCU caches.
+ *
+ * For KASAN modes that have integrated memory initialization
+ * (kasan_has_integrated_init() == true), this function also initializes
+ * the object's memory. For other modes, the @init argument is ignored.
+ *
+ * This function might also take ownership of the object to quarantine it.
+ * When this happens, KASAN will defer freeing the object to a later
+ * stage and handle it internally until then. The return value indicates
+ * whether KASAN took ownership of the object.
+ *
+ * This function is intended only for use by the slab allocator.
+ *
+ * @Return true if KASAN took ownership of the object; false otherwise.
+ */
 static __always_inline bool kasan_slab_free(struct kmem_cache *s,
 						void *object, bool init)
 {
 	if (kasan_enabled())
-		return __kasan_slab_free(s, object, _RET_IP_, init);
+		return __kasan_slab_free(s, object, init);
 	return false;
 }
 
 void __kasan_kfree_large(void *ptr, unsigned long ip);
 static __always_inline void kasan_kfree_large(void *ptr)
 {
@@ -368,12 +410,18 @@ static inline void kasan_poison_new_object(struct kmem_cache *cache,
 					void *object) {}
 static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 				const void *object)
 {
 	return (void *)object;
 }
+
+static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
+{
+	return false;
+}
+
 static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
 {
 	return false;
 }
 static inline void kasan_kfree_large(void *ptr) {}
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 85e7c6b4575c..f26bbc087b3b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -205,59 +205,65 @@ void * __must_check __kasan_init_slab_obj(struct kmem_cache *cache,
 	/* Tag is ignored in set_tag() without CONFIG_KASAN_SW/HW_TAGS */
 	object = set_tag(object, assign_tag(cache, object, true));
 
 	return (void *)object;
 }
 
-static inline bool poison_slab_object(struct kmem_cache *cache, void *object,
-				      unsigned long ip, bool init)
+/* Returns true when freeing the object is not safe. */
+static bool check_slab_allocation(struct kmem_cache *cache, void *object,
+				  unsigned long ip)
 {
-	void *tagged_object;
-
-	if (!kasan_arch_is_ready())
-		return false;
+	void *tagged_object = object;
 
-	tagged_object = object;
 	object = kasan_reset_tag(object);
 
 	if (unlikely(nearest_obj(cache, virt_to_slab(object), object) != object)) {
 		kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_INVALID_FREE);
 		return true;
 	}
 
-	/* RCU slabs could be legally used after free within the RCU period. */
-	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
-		return false;
-
 	if (!kasan_byte_accessible(tagged_object)) {
 		kasan_report_invalid_free(tagged_object, ip, KASAN_REPORT_DOUBLE_FREE);
 		return true;
 	}
 
+	return false;
+}
+
+static inline void poison_slab_object(struct kmem_cache *cache, void *object,
+				      bool init)
+{
+	void *tagged_object = object;
+
+	object = kasan_reset_tag(object);
+
+	/* RCU slabs could be legally used after free within the RCU period. */
+	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
+		return;
+
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
 			KASAN_SLAB_FREE, init);
 
 	if (kasan_stack_collection_enabled())
 		kasan_save_free_info(cache, tagged_object);
+}
 
-	return false;
+bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
+			   unsigned long ip)
+{
+	if (!kasan_arch_is_ready() || is_kfence_address(object))
+		return false;
+	return check_slab_allocation(cache, object, ip);
 }
 
-bool __kasan_slab_free(struct kmem_cache *cache, void *object,
-				unsigned long ip, bool init)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init)
 {
-	if (is_kfence_address(object))
+	if (!kasan_arch_is_ready() || is_kfence_address(object))
 		return false;
 
-	/*
-	 * If the object is buggy, do not let slab put the object onto the
-	 * freelist. The object will thus never be allocated again and its
-	 * metadata will never get released.
-	 */
-	if (poison_slab_object(cache, object, ip, init))
-		return true;
+	poison_slab_object(cache, object, init);
 
 	/*
 	 * If the object is put into quarantine, do not let slab put the object
 	 * onto the freelist for now. The object's metadata is kept until the
 	 * object gets evicted from quarantine.
 	 */
@@ -501,17 +507,22 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 		if (check_page_allocation(ptr, ip))
 			return false;
 
 		kasan_poison(ptr, folio_size(folio), KASAN_PAGE_FREE, false);
 		return true;
 	}
 
-	if (is_kfence_address(ptr))
-		return false;
+	if (is_kfence_address(ptr) || !kasan_arch_is_ready())
+		return true;
 
 	slab = folio_slab(folio);
-	return !poison_slab_object(slab->slab_cache, ptr, ip, false);
+
+	if (check_slab_allocation(slab->slab_cache, ptr, ip))
+		return false;
+
+	poison_slab_object(slab->slab_cache, ptr, false);
+	return true;
 }
 
 void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 {
 	struct slab *slab;
 	gfp_t flags = 0; /* Might be executing under a lock. */
diff --git a/mm/slub.c b/mm/slub.c
index 3520acaf9afa..0c98b6a2124f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2223,12 +2223,19 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 		__kcsan_check_access(x, s->object_size,
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
 	if (kfence_free(x))
 		return false;
 
+	/*
+	 * Give KASAN a chance to notice an invalid free operation before we
+	 * modify the object.
+	 */
+	if (kasan_slab_pre_free(s, x))
+		return false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_free and initialization memset's must be
 	 * kept together to avoid discrepancies in behavior.
 	 *
 	 * The initialization memset's clear the object and the metadata,

From patchwork Fri Aug 9 15:36:56 2024
From: Jann Horn
Date: Fri, 09 Aug 2024 17:36:56 +0200
Subject: [PATCH v8 2/2] slub: Introduce CONFIG_SLUB_RCU_DEBUG
Message-Id: <20240809-kasan-tsbrcu-v8-2-aef4593f9532@google.com>
References: <20240809-kasan-tsbrcu-v8-0-aef4593f9532@google.com>
In-Reply-To: <20240809-kasan-tsbrcu-v8-0-aef4593f9532@google.com>
To: Andrey Ryabinin, Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
 Vincenzo Frascino, Andrew Morton, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
 Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Marco Elver, kasan-dev@googlegroups.com, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, David Sterba, Jann Horn,
 syzbot+263726e59eab6b442723@syzkaller.appspotmail.com
X-Mailer: b4 0.15-dev

Currently, KASAN is unable to catch use-after-free in
SLAB_TYPESAFE_BY_RCU slabs because use-after-free is allowed within the
RCU grace period by design.

Add a SLUB debugging feature which RCU-delays every individual
kmem_cache_free() before either actually freeing the object or handing
it off to KASAN, and change KASAN to poison freed objects as normal when
this option is enabled.

For now I've configured Kconfig.debug to default-enable this feature in
the KASAN GENERIC and SW_TAGS modes; I'm not enabling it by default in
HW_TAGS mode because I'm not sure if it might have unwanted performance
degradation effects there.

Note that this is mostly useful with KASAN in the quarantine-based
GENERIC mode; SLAB_TYPESAFE_BY_RCU slabs are basically always also slabs
with a ->ctor, and KASAN's assign_tag() currently has to assign fixed
tags for those, reducing the effectiveness of SW_TAGS/HW_TAGS mode.
(A possible future extension of this work would be to also let SLUB call
the ->ctor() on every allocation instead of only when the slab page is
allocated; then tag-based modes would be able to assign new tags on
every reallocation.)
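For context, the usual access pattern for a SLAB_TYPESAFE_BY_RCU cache
looks roughly like this (a sketch with hypothetical struct, field, and
helper names):

struct foo {
	spinlock_t lock;
	int key;
};

struct foo *foo_lookup(int key)
{
	struct foo *f;

	rcu_read_lock();
	f = foo_hash_find(key);	/* hypothetical lock-free hash lookup */
	if (f) {
		spin_lock(&f->lock);
		/*
		 * Within the grace period, f may have been freed and
		 * reallocated as a new foo (but not as anything else),
		 * so its identity must be revalidated under the lock.
		 */
		if (f->key != key) {
			spin_unlock(&f->lock);
			f = NULL;
		}
	}
	rcu_read_unlock();
	return f;		/* locked if non-NULL */
}

A read through a stale f in this window is legal by the cache's
contract, which is why KASAN normally cannot poison such objects at
kmem_cache_free() time; a *missing* revalidation check, on the other
hand, is the "very specific bug pattern" that this option keeps from
triggering (see the Kconfig help text below).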
Tested-by: syzbot+263726e59eab6b442723@syzkaller.appspotmail.com
Reviewed-by: Andrey Konovalov
Acked-by: Marco Elver
Acked-by: Vlastimil Babka (slab)
Signed-off-by: Jann Horn
---
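For testing, a config fragment along these lines should exercise the
feature under generic KASAN (CONFIG_SLUB_RCU_DEBUG is the symbol added
by this patch; CONFIG_RCU_STRICT_GRACE_PERIOD is the optional knob
mentioned in the Kconfig help below):

CONFIG_SLUB_DEBUG=y
CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y
CONFIG_SLUB_RCU_DEBUG=y
# Optional: shorter grace periods, so more use-after-frees are caught
CONFIG_RCU_STRICT_GRACE_PERIOD=y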
 include/linux/kasan.h | 17 +++++++----
 mm/Kconfig.debug      | 32 +++++++++++++++++++++
 mm/kasan/common.c     | 11 +++----
 mm/kasan/kasan_test.c | 46 ++++++++++++++++++++++++++++++
 mm/slab_common.c      | 16 +++++++++++
 mm/slub.c             | 79 +++++++++++++++++++++++++++++++++++++++++++++------
 6 files changed, 182 insertions(+), 19 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 1570c7191176..00a3bf7c0d8f 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -193,40 +193,44 @@ static __always_inline bool kasan_slab_pre_free(struct kmem_cache *s,
 {
 	if (kasan_enabled())
 		return __kasan_slab_pre_free(s, object, _RET_IP_);
 	return false;
 }
 
-bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init);
+bool __kasan_slab_free(struct kmem_cache *s, void *object, bool init,
+		       bool still_accessible);
 /**
  * kasan_slab_free - Poison, initialize, and quarantine a slab object.
  * @object: Object to be freed.
  * @init: Whether to initialize the object.
+ * @still_accessible: Whether the object contents are still accessible.
  *
  * This function informs that a slab object has been freed and is not
- * supposed to be accessed anymore, except for objects in
- * SLAB_TYPESAFE_BY_RCU caches.
+ * supposed to be accessed anymore, except when @still_accessible is set
+ * (indicating that the object is in a SLAB_TYPESAFE_BY_RCU cache and an RCU
+ * grace period might not have passed yet).
  *
  * For KASAN modes that have integrated memory initialization
  * (kasan_has_integrated_init() == true), this function also initializes
  * the object's memory. For other modes, the @init argument is ignored.
  *
  * This function might also take ownership of the object to quarantine it.
  * When this happens, KASAN will defer freeing the object to a later
  * stage and handle it internally until then. The return value indicates
  * whether KASAN took ownership of the object.
  *
  * This function is intended only for use by the slab allocator.
  *
  * @Return true if KASAN took ownership of the object; false otherwise.
  */
 static __always_inline bool kasan_slab_free(struct kmem_cache *s,
-						void *object, bool init)
+						void *object, bool init,
+						bool still_accessible)
 {
 	if (kasan_enabled())
-		return __kasan_slab_free(s, object, init);
+		return __kasan_slab_free(s, object, init, still_accessible);
 	return false;
 }
 
 void __kasan_kfree_large(void *ptr, unsigned long ip);
 static __always_inline void kasan_kfree_large(void *ptr)
 {
@@ -416,13 +420,14 @@ static inline void *kasan_init_slab_obj(struct kmem_cache *cache,
 
 static inline bool kasan_slab_pre_free(struct kmem_cache *s, void *object)
 {
 	return false;
 }
 
-static inline bool kasan_slab_free(struct kmem_cache *s, void *object, bool init)
+static inline bool kasan_slab_free(struct kmem_cache *s, void *object,
+				   bool init, bool still_accessible)
 {
 	return false;
 }
 
 static inline void kasan_kfree_large(void *ptr) {}
 static inline void *kasan_slab_alloc(struct kmem_cache *s, void *object,
 				gfp_t flags, bool init)
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index afc72fde0f03..41a58536531d 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -67,12 +67,44 @@ config SLUB_DEBUG_ON
 	  equivalent to specifying the "slab_debug" parameter on boot.
 	  There is no support for more fine grained debug control like
 	  possible with slab_debug=xxx. SLUB debugging may be switched
 	  off in a kernel built with CONFIG_SLUB_DEBUG_ON by specifying
 	  "slab_debug=-".
 
+config SLUB_RCU_DEBUG
+	bool "Enable UAF detection in TYPESAFE_BY_RCU caches (for KASAN)"
+	depends on SLUB_DEBUG
+	# SLUB_RCU_DEBUG should build fine without KASAN, but is currently useless
+	# without KASAN, so mark it as a dependency of KASAN for now.
+	depends on KASAN
+	default KASAN_GENERIC || KASAN_SW_TAGS
+	help
+	  Make SLAB_TYPESAFE_BY_RCU caches behave approximately as if the cache
+	  was not marked as SLAB_TYPESAFE_BY_RCU and every caller used
+	  kfree_rcu() instead.
+
+	  This is intended for use in combination with KASAN, to enable KASAN to
+	  detect use-after-free accesses in such caches.
+	  (KFENCE is able to do that independent of this flag.)
+
+	  This might degrade performance.
+	  Unfortunately this also prevents a very specific bug pattern from
+	  triggering (insufficient checks against an object being recycled
+	  within the RCU grace period); so this option can be turned off even on
+	  KASAN builds, in case you want to test for such a bug.
+
+	  If you're using this for testing bugs / fuzzing and care about
+	  catching all the bugs WAY more than performance, you might want to
+	  also turn on CONFIG_RCU_STRICT_GRACE_PERIOD.
+
+	  WARNING:
+	  This is designed as a debugging feature, not a security feature.
+	  Objects are sometimes recycled without RCU delay under memory pressure.
+
+	  If unsure, say N.
+
 config PAGE_OWNER
 	bool "Track page owner"
 	depends on DEBUG_KERNEL && STACKTRACE_SUPPORT
 	select DEBUG_FS
 	select STACKTRACE
 	select STACKDEPOT
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index f26bbc087b3b..ed4873e18c75 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -227,43 +227,44 @@ static bool check_slab_allocation(struct kmem_cache *cache, void *object,
 	}
 
 	return false;
 }
 
 static inline void poison_slab_object(struct kmem_cache *cache, void *object,
-				      bool init)
+				      bool init, bool still_accessible)
 {
 	void *tagged_object = object;
 
 	object = kasan_reset_tag(object);
 
 	/* RCU slabs could be legally used after free within the RCU period. */
-	if (unlikely(cache->flags & SLAB_TYPESAFE_BY_RCU))
+	if (unlikely(still_accessible))
 		return;
 
 	kasan_poison(object, round_up(cache->object_size, KASAN_GRANULE_SIZE),
 			KASAN_SLAB_FREE, init);
 
 	if (kasan_stack_collection_enabled())
 		kasan_save_free_info(cache, tagged_object);
 }
 
 bool __kasan_slab_pre_free(struct kmem_cache *cache, void *object,
 			   unsigned long ip)
 {
 	if (!kasan_arch_is_ready() || is_kfence_address(object))
 		return false;
 	return check_slab_allocation(cache, object, ip);
 }
 
-bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init)
+bool __kasan_slab_free(struct kmem_cache *cache, void *object, bool init,
+		       bool still_accessible)
 {
 	if (!kasan_arch_is_ready() || is_kfence_address(object))
 		return false;
 
-	poison_slab_object(cache, object, init);
+	poison_slab_object(cache, object, init, still_accessible);
 
 	/*
 	 * If the object is put into quarantine, do not let slab put the object
 	 * onto the freelist for now. The object's metadata is kept until the
 	 * object gets evicted from quarantine.
 	 */
@@ -515,13 +516,13 @@ bool __kasan_mempool_poison_object(void *ptr, unsigned long ip)
 
 	slab = folio_slab(folio);
 
 	if (check_slab_allocation(slab->slab_cache, ptr, ip))
 		return false;
 
-	poison_slab_object(slab->slab_cache, ptr, false);
+	poison_slab_object(slab->slab_cache, ptr, false, false);
 	return true;
 }
 
 void __kasan_mempool_unpoison_object(void *ptr, size_t size, unsigned long ip)
 {
 	struct slab *slab;
diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 7b32be2a3cf0..567d33b493e2 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -993,12 +993,57 @@ static void kmem_cache_invalid_free(struct kunit *test)
 	 */
 	kmem_cache_free(cache, p);
 
 	kmem_cache_destroy(cache);
 }
 
+static void kmem_cache_rcu_uaf(struct kunit *test)
+{
+	char *p;
+	size_t size = 200;
+	struct kmem_cache *cache;
+
+	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB_RCU_DEBUG);
+
+	cache = kmem_cache_create("test_cache", size, 0, SLAB_TYPESAFE_BY_RCU,
+				  NULL);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, cache);
+
+	p = kmem_cache_alloc(cache, GFP_KERNEL);
+	if (!p) {
+		kunit_err(test, "Allocation failed: %s\n", __func__);
+		kmem_cache_destroy(cache);
+		return;
+	}
+	*p = 1;
+
+	rcu_read_lock();
+
+	/* Free the object - this will internally schedule an RCU callback. */
+	kmem_cache_free(cache, p);
+
+	/*
+	 * We should still be allowed to access the object at this point because
+	 * the cache is SLAB_TYPESAFE_BY_RCU and we've been in an RCU read-side
+	 * critical section since before the kmem_cache_free().
+	 */
+	READ_ONCE(*p);
+
+	rcu_read_unlock();
+
+	/*
+	 * Wait for the RCU callback to execute; after this, the object should
+	 * have actually been freed from KASAN's perspective.
+	 */
+	rcu_barrier();
+
+	KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*p));
+
+	kmem_cache_destroy(cache);
+}
+
 static void empty_cache_ctor(void *object) { }
 
 static void kmem_cache_double_destroy(struct kunit *test)
 {
 	struct kmem_cache *cache;
@@ -1934,12 +1979,13 @@ static struct kunit_case kasan_kunit_test_cases[] = {
 	KUNIT_CASE(workqueue_uaf),
 	KUNIT_CASE(kfree_via_page),
 	KUNIT_CASE(kfree_via_phys),
 	KUNIT_CASE(kmem_cache_oob),
 	KUNIT_CASE(kmem_cache_double_free),
 	KUNIT_CASE(kmem_cache_invalid_free),
+	KUNIT_CASE(kmem_cache_rcu_uaf),
 	KUNIT_CASE(kmem_cache_double_destroy),
 	KUNIT_CASE(kmem_cache_accounted),
 	KUNIT_CASE(kmem_cache_bulk),
 	KUNIT_CASE(mempool_kmalloc_oob_right),
 	KUNIT_CASE(mempool_kmalloc_large_oob_right),
 	KUNIT_CASE(mempool_slab_oob_right),
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 40b582a014b8..9025e85c6750 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -573,12 +573,28 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	int err = -EBUSY;
 	bool rcu_set;
 
 	if (unlikely(!s) || !kasan_check_byte(s))
 		return;
 
+	if (IS_ENABLED(CONFIG_SLUB_RCU_DEBUG) &&
+	    (s->flags & SLAB_TYPESAFE_BY_RCU)) {
+		/*
+		 * Under CONFIG_SLUB_RCU_DEBUG, when objects in a
+		 * SLAB_TYPESAFE_BY_RCU slab are freed, SLUB will internally
+		 * defer their freeing with call_rcu().
+		 * Wait for such call_rcu() invocations here before actually
+		 * destroying the cache.
+		 *
+		 * It doesn't matter that we haven't looked at the slab refcount
+		 * yet - slabs with SLAB_TYPESAFE_BY_RCU can't be merged, so
+		 * the refcount should be 1 here.
+		 */
+		rcu_barrier();
+	}
+
 	cpus_read_lock();
 	mutex_lock(&slab_mutex);
 
 	rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU;
 
 	s->refcount--;
diff --git a/mm/slub.c b/mm/slub.c
index 0c98b6a2124f..86ab9477a1ae 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2197,45 +2197,81 @@ static inline bool memcg_slab_post_alloc_hook(struct kmem_cache *s,
 static inline void memcg_slab_free_hook(struct kmem_cache *s, struct slab *slab,
 					void **p, int objects)
 {
 }
 #endif /* CONFIG_MEMCG */
 
+#ifdef CONFIG_SLUB_RCU_DEBUG
+static void slab_free_after_rcu_debug(struct rcu_head *rcu_head);
+
+struct rcu_delayed_free {
+	struct rcu_head head;
+	void *object;
+};
+#endif
+
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
  *
  * Returns true if freeing of the object can proceed, false if its reuse
- * was delayed by KASAN quarantine, or it was returned to KFENCE.
+ * was delayed by CONFIG_SLUB_RCU_DEBUG or KASAN quarantine, or it was returned
+ * to KFENCE.
  */
 static __always_inline
-bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
+bool slab_free_hook(struct kmem_cache *s, void *x, bool init,
+		    bool after_rcu_delay)
 {
+	/* Are the object contents still accessible? */
+	bool still_accessible = (s->flags & SLAB_TYPESAFE_BY_RCU) && !after_rcu_delay;
+
 	kmemleak_free_recursive(x, s->flags);
 	kmsan_slab_free(s, x);
 
 	debug_check_no_locks_freed(x, s->object_size);
 
 	if (!(s->flags & SLAB_DEBUG_OBJECTS))
 		debug_check_no_obj_freed(x, s->object_size);
 
 	/* Use KCSAN to help debug racy use-after-free. */
-	if (!(s->flags & SLAB_TYPESAFE_BY_RCU))
+	if (!still_accessible)
 		__kcsan_check_access(x, s->object_size,
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ASSERT);
 
 	if (kfence_free(x))
 		return false;
 
 	/*
 	 * Give KASAN a chance to notice an invalid free operation before we
 	 * modify the object.
 	 */
 	if (kasan_slab_pre_free(s, x))
 		return false;
 
+#ifdef CONFIG_SLUB_RCU_DEBUG
+	if (still_accessible) {
+		struct rcu_delayed_free *delayed_free;
+
+		delayed_free = kmalloc(sizeof(*delayed_free), GFP_NOWAIT);
+		if (delayed_free) {
+			/*
+			 * Let KASAN track our call stack as a "related work
+			 * creation", just like if the object had been freed
+			 * normally via kfree_rcu().
+			 * We have to do this manually because the rcu_head is
+			 * not located inside the object.
+			 */
+			kasan_record_aux_stack_noalloc(x);
+
+			delayed_free->object = x;
+			call_rcu(&delayed_free->head, slab_free_after_rcu_debug);
+			return false;
+		}
+	}
+#endif /* CONFIG_SLUB_RCU_DEBUG */
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_free and initialization memset's must be
 	 * kept together to avoid discrepancies in behavior.
 	 *
 	 * The initialization memset's clear the object and the metadata,
@@ -2253,42 +2289,42 @@ bool slab_free_hook(struct kmem_cache *s, void *x, bool init)
 		memset(kasan_reset_tag(x), 0, s->object_size);
 		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
 		memset((char *)kasan_reset_tag(x) + inuse, 0,
 		       s->size - inuse - rsize);
 	}
 	/* KASAN might put x into memory quarantine, delaying its reuse. */
-	return !kasan_slab_free(s, x, init);
+	return !kasan_slab_free(s, x, init, still_accessible);
 }
 
 static __fastpath_inline
 bool slab_free_freelist_hook(struct kmem_cache *s, void **head, void **tail,
 			     int *cnt)
 {
 	void *object;
 	void *next = *head;
 	void *old_tail = *tail;
 	bool init;
 
 	if (is_kfence_address(next)) {
-		slab_free_hook(s, next, false);
+		slab_free_hook(s, next, false, false);
 		return false;
 	}
 
 	/* Head and tail of the reconstructed freelist */
 	*head = NULL;
 	*tail = NULL;
 
 	init = slab_want_init_on_free(s);
 
 	do {
 		object = next;
 		next = get_freepointer(s, object);
 
 		/* If object's reuse doesn't have to be delayed */
-		if (likely(slab_free_hook(s, object, init))) {
+		if (likely(slab_free_hook(s, object, init, false))) {
 			/* Move object to the new freelist */
 			set_freepointer(s, object, *head);
 			*head = object;
 			if (!*tail)
 				*tail = object;
 		} else {
@@ -4474,40 +4510,67 @@ static __fastpath_inline
 void slab_free(struct kmem_cache *s, struct slab *slab, void *object,
 	       unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, &object, 1);
 	alloc_tagging_slab_free_hook(s, slab, &object, 1);
 
-	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
 		do_slab_free(s, slab, object, object, 1, addr);
 }
 
 #ifdef CONFIG_MEMCG
 /* Do not inline the rare memcg charging failed path into the allocation path */
 static noinline
 void memcg_alloc_abort_single(struct kmem_cache *s, void *object)
 {
-	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s))))
+	if (likely(slab_free_hook(s, object, slab_want_init_on_free(s), false)))
 		do_slab_free(s, virt_to_slab(object), object, object, 1, _RET_IP_);
 }
 #endif
 
 static __fastpath_inline
 void slab_free_bulk(struct kmem_cache *s, struct slab *slab, void *head,
 		    void *tail, void **p, int cnt, unsigned long addr)
 {
 	memcg_slab_free_hook(s, slab, p, cnt);
 	alloc_tagging_slab_free_hook(s, slab, p, cnt);
 
 	/*
 	 * With KASAN enabled slab_free_freelist_hook modifies the freelist
 	 * to remove objects, whose reuse must be delayed.
 	 */
 	if (likely(slab_free_freelist_hook(s, &head, &tail, &cnt)))
 		do_slab_free(s, slab, head, tail, cnt, addr);
 }
 
+#ifdef CONFIG_SLUB_RCU_DEBUG
+static void slab_free_after_rcu_debug(struct rcu_head *rcu_head)
+{
+	struct rcu_delayed_free *delayed_free =
+			container_of(rcu_head, struct rcu_delayed_free, head);
+	void *object = delayed_free->object;
+	struct slab *slab = virt_to_slab(object);
+	struct kmem_cache *s;
+
+	kfree(delayed_free);
+
+	if (WARN_ON(is_kfence_address(object)))
+		return;
+
+	/* find the object and the cache again */
+	if (WARN_ON(!slab))
+		return;
+	s = slab->slab_cache;
+	if (WARN_ON(!(s->flags & SLAB_TYPESAFE_BY_RCU)))
+		return;
+
+	/* resume freeing */
+	if (slab_free_hook(s, object, slab_want_init_on_free(s), true))
+		do_slab_free(s, slab, object, object, 1, _THIS_IP_);
+}
+#endif /* CONFIG_SLUB_RCU_DEBUG */
+
 #ifdef CONFIG_KASAN_GENERIC
 void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 {
 	do_slab_free(cache, virt_to_slab(x), x, x, 1, addr);
 }
 #endif