From patchwork Wed Feb 24 20:05:55 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12102605
Date: Wed, 24 Feb 2021 12:05:55 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
 Branislav.Rankov@arm.com, catalin.marinas@arm.com, dvyukov@google.com,
 elver@google.com, eugenis@google.com, glider@google.com,
 kevin.brodsky@arm.com, linux-mm@kvack.org, mm-commits@vger.kernel.org,
 pcc@google.com, torvalds@linux-foundation.org, vincenzo.frascino@arm.com,
 will.deacon@arm.com
Subject: [patch 098/173] kasan: add proper page allocator tests
Message-ID: <20210224200555.zQSVpXkxz%akpm@linux-foundation.org>
In-Reply-To: <20210224115824.1e289a6895087f10c41dd8d6@linux-foundation.org>

From: Andrey Konovalov
Subject: kasan: add proper page allocator tests

The currently existing page allocator tests rely on kmalloc fallback
with large sizes that is only present for SLUB.  Add proper tests that
use alloc/free_pages().

Link: https://linux-review.googlesource.com/id/Ia173d5a1b215fe6b2548d814ef0f4433cf983570
Link: https://lkml.kernel.org/r/a2648930e55ff75b8e700f2e0d905c2b55a67483.1610733117.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov
Reviewed-by: Marco Elver
Reviewed-by: Alexander Potapenko
Cc: Andrey Ryabinin
Cc: Branislav Rankov
Cc: Catalin Marinas
Cc: Dmitry Vyukov
Cc: Evgenii Stepanov
Cc: Kevin Brodsky
Cc: Peter Collingbourne
Cc: Vincenzo Frascino
Cc: Will Deacon
Signed-off-by: Andrew Morton
---

 lib/test_kasan.c |   51 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 46 insertions(+), 5 deletions(-)

--- a/lib/test_kasan.c~kasan-add-proper-page-allocator-tests
+++ a/lib/test_kasan.c
@@ -147,6 +147,12 @@ static void kmalloc_node_oob_right(struc
 	kfree(ptr);
 }
 
+/*
+ * These kmalloc_pagealloc_* tests try allocating a memory chunk that doesn't
+ * fit into a slab cache and therefore is allocated via the page allocator
+ * fallback. Since this kind of fallback is only implemented for SLUB, these
+ * tests are limited to that allocator.
+ */
 static void kmalloc_pagealloc_oob_right(struct kunit *test)
 {
 	char *ptr;
@@ -154,14 +160,11 @@ static void kmalloc_pagealloc_oob_right(
 
 	KASAN_TEST_NEEDS_CONFIG_ON(test, CONFIG_SLUB);
 
-	/*
-	 * Allocate a chunk that does not fit into a SLUB cache to trigger
-	 * the page allocator fallback.
-	 */
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
 
 	KUNIT_EXPECT_KASAN_FAIL(test, ptr[size + OOB_TAG_OFF] = 0);
+
 	kfree(ptr);
 }
 
@@ -174,8 +177,8 @@ static void kmalloc_pagealloc_uaf(struct
 
 	ptr = kmalloc(size, GFP_KERNEL);
 	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
-
 	kfree(ptr);
+
 	KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = 0);
 }
 
@@ -192,6 +195,42 @@ static void kmalloc_pagealloc_invalid_fr
 	KUNIT_EXPECT_KASAN_FAIL(test, kfree(ptr + 1));
 }
 
+static void pagealloc_oob_right(struct kunit *test)
+{
+	char *ptr;
+	struct page *pages;
+	size_t order = 4;
+	size_t size = (1UL << (PAGE_SHIFT + order));
+
+	/*
+	 * With generic KASAN page allocations have no redzones, thus
+	 * out-of-bounds detection is not guaranteed.
+	 * See https://bugzilla.kernel.org/show_bug.cgi?id=210503.
+	 */
+	KASAN_TEST_NEEDS_CONFIG_OFF(test, CONFIG_KASAN_GENERIC);
+
+	pages = alloc_pages(GFP_KERNEL, order);
+	ptr = page_address(pages);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+
+	KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 0);
+	free_pages((unsigned long)ptr, order);
+}
+
+static void pagealloc_uaf(struct kunit *test)
+{
+	char *ptr;
+	struct page *pages;
+	size_t order = 4;
+
+	pages = alloc_pages(GFP_KERNEL, order);
+	ptr = page_address(pages);
+	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
+	free_pages((unsigned long)ptr, order);
+
+	KUNIT_EXPECT_KASAN_FAIL(test, ptr[0] = 0);
+}
+
 static void kmalloc_large_oob_right(struct kunit *test)
 {
 	char *ptr;
@@ -903,6 +942,8 @@ static struct kunit_case kasan_kunit_tes
 	KUNIT_CASE(kmalloc_pagealloc_oob_right),
 	KUNIT_CASE(kmalloc_pagealloc_uaf),
 	KUNIT_CASE(kmalloc_pagealloc_invalid_free),
+	KUNIT_CASE(pagealloc_oob_right),
+	KUNIT_CASE(pagealloc_uaf),
 	KUNIT_CASE(kmalloc_large_oob_right),
 	KUNIT_CASE(kmalloc_oob_krealloc_more),
 	KUNIT_CASE(kmalloc_oob_krealloc_less),
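
For readers less familiar with the KUnit side of lib/test_kasan.c: the
KUNIT_CASE() entries added at the end of the diff only take effect because
the file registers its case array as a KUnit suite.  The sketch below shows
that general registration pattern; it is an illustration, not part of the
patch, and the stub kasan_test_init()/kasan_test_exit() bodies stand in for
the file's real hooks (which, as an assumption of this note, toggle KASAN's
multi-shot reporting around each test).

	#include <kunit/test.h>

	/* Stand-in hooks so the sketch is self-contained. */
	static int kasan_test_init(struct kunit *test)
	{
		return 0;	/* the real hook enables multi-shot KASAN reports */
	}

	static void kasan_test_exit(struct kunit *test)
	{
		/* the real hook restores the previous reporting mode */
	}

	static struct kunit_case kasan_kunit_test_cases[] = {
		/* ... existing cases ... */
		KUNIT_CASE(pagealloc_oob_right),
		KUNIT_CASE(pagealloc_uaf),
		/* ... remaining cases ... */
		{}	/* zeroed sentinel terminates the array */
	};

	static struct kunit_suite kasan_kunit_test_suite = {
		.name = "kasan",
		.init = kasan_test_init,
		.exit = kasan_test_exit,
		.test_cases = kasan_kunit_test_cases,
	};

	kunit_test_suite(kasan_kunit_test_suite);

Assuming the Kconfig symbol building this file is still CONFIG_KASAN_KUNIT_TEST
at this point in the series, a built-in test (=y) runs the whole suite during
boot and reports through the kernel log, while a modular build runs it at load
time.  Note that pagealloc_oob_right does not run under generic KASAN because
of its KASAN_TEST_NEEDS_CONFIG_OFF() check, whereas pagealloc_uaf runs in all
KASAN modes.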