From patchwork Thu Dec 21 20:04:52 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: andrey.konovalov@linux.dev
X-Patchwork-Id: 13502606
From: andrey.konovalov@linux.dev
To: Marco Elver
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
 kasan-dev@googlegroups.com, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Andrey Konovalov
Subject: [PATCH mm 10/11] kasan: remove SLUB checks for page_alloc fallbacks in tests
Date: Thu, 21 Dec 2023 21:04:52 +0100
Message-Id:
In-Reply-To:
References:
MIME-Version: 1.0

From: Andrey Konovalov

A number of KASAN tests rely on the fact that calling kmalloc with a
size larger than an order-1 page falls back onto page_alloc.

This fallback was originally only implemented for SLUB, but since commit
d6a71648dbc0 ("mm/slab: kmalloc: pass requests larger than order-1 page
to page allocator"), it is also implemented for SLAB.

Thus, drop the SLUB checks from the tests.

Signed-off-by: Andrey Konovalov
---
 mm/kasan/kasan_test.c | 26 ++------------------------
 1 file changed, 2 insertions(+), 24 deletions(-)

diff --git a/mm/kasan/kasan_test.c b/mm/kasan/kasan_test.c
index 496154e38965..798df4983858 100644
--- a/mm/kasan/kasan_test.c
+++ b/mm/kasan/kasan_test.c
@@ -215,7 +215,7 @@ static void kmalloc_node_oob_right(struct kunit *test)
 
 /*
  * Check that KASAN detects an out-of-bounds access for a big object allocated
- * via kmalloc(). But not as big as to trigger the page_alloc fallback for SLUB.
+ * via kmalloc(). But not as big as to trigger the page_alloc fallback.
  */
 static void kmalloc_big_oob_right(struct kunit *test)
 {
@@ -233,8 +233,7 @@ static void kmalloc_big_oob_right(struct kunit *test)
 /*
  * The kmalloc_large_* tests below use kmalloc() to allocate a memory chunk
  * that does not fit into the largest slab cache and therefore is allocated via
- * the page_alloc fallback for SLUB. SLAB has no such fallback, and thus these
- * tests are not supported for it.
+ * the page_alloc fallback.
  */
 
 static void kmalloc_large_oob_right(struct kunit *test)
@@ -242,8 +241,6 @@ static void kmalloc_large_oob_right(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
@@ -258,8 +255,6 @@ static void kmalloc_large_uaf(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 	kfree(ptr);
@@ -272,8 +267,6 @@ static void kmalloc_large_invalid_free(struct kunit *test)
 	char *ptr;
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
 
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
@@ -407,18 +400,12 @@ static void krealloc_less_oob(struct kunit *test)
 
 static void krealloc_large_more_oob(struct kunit *test)
 {
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	krealloc_more_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 201,
 					KMALLOC_MAX_CACHE_SIZE + 235);
 }
 
 static void krealloc_large_less_oob(struct kunit *test)
 {
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	krealloc_less_oob_helper(test, KMALLOC_MAX_CACHE_SIZE + 235,
 					KMALLOC_MAX_CACHE_SIZE + 201);
 }
@@ -1144,9 +1131,6 @@ static void mempool_kmalloc_large_uaf(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	void *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_uaf_helper(test, &pool, false);
@@ -1215,9 +1199,6 @@ static void mempool_kmalloc_large_double_free(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	char *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_double_free_helper(test, &pool);
@@ -1272,9 +1253,6 @@ static void mempool_kmalloc_large_invalid_free(struct kunit *test)
 	size_t size = KMALLOC_MAX_CACHE_SIZE + 1;
 	char *extra_elem;
 
-	/* page_alloc fallback is only implemented for SLUB. */
-	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
-
 	extra_elem = mempool_prepare_kmalloc(test, &pool, size);
 
 	mempool_kmalloc_invalid_free_helper(test, &pool);
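
For reference, a minimal KUnit-style sketch of the behavior the updated tests
now assume unconditionally: a kmalloc() request larger than
KMALLOC_MAX_CACHE_SIZE is served by the page allocator rather than a slab
cache, on both SLUB and SLAB. This test is hypothetical and not part of the
patch; virt_to_folio() and folio_test_slab() are existing kernel helpers used
here only to make the assumption explicit.

#include <kunit/test.h>
#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical test (not part of this patch): an allocation above the
 * largest kmalloc cache size must not be backed by a slab page, i.e. it
 * must have gone through the page_alloc fallback.
 */
static void kmalloc_large_uses_page_alloc(struct kunit *test)
{
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
	char *ptr;

	ptr = kmalloc(size, GFP_KERNEL);
	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

	/* A slab-backed allocation would have the slab flag set on its folio. */
	KUNIT_EXPECT_FALSE(test, folio_test_slab(virt_to_folio(ptr)));

	kfree(ptr);
}

The actual kmalloc_large_* and mempool_kmalloc_large_* tests exercise the same
assumption indirectly through KUNIT_EXPECT_KASAN_FAIL on out-of-bounds,
use-after-free, and invalid-free accesses, so no explicit slab-backing check
is needed there.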