From patchwork Wed May 12 20:09:31 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12254621
Date: Wed, 12 May 2021 13:09:31 -0700
Message-Id: <78af73393175c648b4eb10312825612f6e6889f6.1620849613.git.pcc@google.com>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
Subject: [PATCH v3 1/3] kasan: use separate (un)poison implementation for
 integrated init
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas,
 Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org

Currently, with integrated init, page_alloc.c needs to know whether
kasan_alloc_pages() will zero-initialize memory. This will become more
complicated once we add tag initialization support for user pages. To
avoid page_alloc.c needing to know the details of what integrated init
does, move the unpoisoning logic for integrated init into the HW tags
implementation. The logic is currently identical, but it will diverge
in subsequent patches. For symmetry, do the same for poisoning,
although that logic will be unaffected by subsequent patches.
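To make the new division of labor concrete, the allocation path ends up
looking like this (condensed sketch of the post_alloc_hook() hunk in the
page_alloc.c diff below; not part of the diff itself):

	if (kasan_has_integrated_init()) {
		/* HW tags mode: unpoison and zero-init in one pass;
		   the init decision is made inside KASAN. */
		kasan_alloc_pages(page, order, gfp_flags);
	} else {
		bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags);

		kasan_unpoison_pages(page, order, init);
		if (init)
			kernel_init_free_pages(page, 1 << order);
	}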
Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056
---
v3:
- use BUILD_BUG()

v2:
- fix build with KASAN disabled

 include/linux/kasan.h | 66 +++++++++++++++++++++++++++----------------
 mm/kasan/common.c     |  4 +--
 mm/kasan/hw_tags.c    | 14 +++++++++
 mm/mempool.c          |  6 ++--
 mm/page_alloc.c       | 56 +++++++++++++++++++-----------------
 5 files changed, 91 insertions(+), 55 deletions(-)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index b1678a61e6a7..38061673e6ac 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -2,6 +2,7 @@
 #ifndef _LINUX_KASAN_H
 #define _LINUX_KASAN_H
 
+#include <linux/bug.h>
 #include <linux/static_key.h>
 #include <linux/types.h>
 
@@ -79,14 +80,6 @@ static inline void kasan_disable_current(void) {}
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
 
-#ifdef CONFIG_KASAN
-
-struct kasan_cache {
-	int alloc_meta_offset;
-	int free_meta_offset;
-	bool is_kmalloc;
-};
-
 #ifdef CONFIG_KASAN_HW_TAGS
 
 DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled);
@@ -101,11 +94,18 @@ static inline bool kasan_has_integrated_init(void)
 	return kasan_enabled();
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
+void kasan_free_pages(struct page *page, unsigned int order);
+
 #else /* CONFIG_KASAN_HW_TAGS */
 
 static inline bool kasan_enabled(void)
 {
+#ifdef CONFIG_KASAN
 	return true;
+#else
+	return false;
+#endif
 }
 
@@ -113,8 +113,30 @@ static inline bool kasan_has_integrated_init(void)
 	return false;
 }
 
+static __always_inline void kasan_alloc_pages(struct page *page,
+					      unsigned int order, gfp_t flags)
+{
+	/* Only available for integrated init. */
+	BUILD_BUG();
+}
+
+static __always_inline void kasan_free_pages(struct page *page,
+					     unsigned int order)
+{
+	/* Only available for integrated init. */
+	BUILD_BUG();
+}
+
 #endif /* CONFIG_KASAN_HW_TAGS */
 
+#ifdef CONFIG_KASAN
+
+struct kasan_cache {
+	int alloc_meta_offset;
+	int free_meta_offset;
+	bool is_kmalloc;
+};
+
 slab_flags_t __kasan_never_merge(void);
 static __always_inline slab_flags_t kasan_never_merge(void)
 {
@@ -130,20 +152,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size)
 		__kasan_unpoison_range(addr, size);
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_alloc_pages(struct page *page,
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_poison_pages(struct page *page,
 						unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_alloc_pages(page, order, init);
+		__kasan_poison_pages(page, order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init);
-static __always_inline void kasan_free_pages(struct page *page,
-					     unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init);
+static __always_inline void kasan_unpoison_pages(struct page *page,
+						 unsigned int order, bool init)
 {
 	if (kasan_enabled())
-		__kasan_free_pages(page, order, init);
+		__kasan_unpoison_pages(page, order, init);
 }
 
 void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
@@ -285,21 +307,15 @@ void kasan_restore_multi_shot(bool enabled);
 
 #else /* CONFIG_KASAN */
 
-static inline bool kasan_enabled(void)
-{
-	return false;
-}
-static inline bool kasan_has_integrated_init(void)
-{
-	return false;
-}
 static inline slab_flags_t kasan_never_merge(void)
 {
 	return 0;
 }
 static inline void kasan_unpoison_range(const void *address, size_t size) {}
-static inline void kasan_alloc_pages(struct page *page, unsigned int order, bool init) {}
-static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {}
+static inline void kasan_poison_pages(struct page *page, unsigned int order,
+				      bool init) {}
+static inline void kasan_unpoison_pages(struct page *page, unsigned int order,
+					bool init) {}
 static inline void kasan_cache_create(struct kmem_cache *cache,
 				      unsigned int *size,
 				      slab_flags_t *flags) {}
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 6bb87f2acd4e..0ecd293af344 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void)
 	return 0;
 }
 
-void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
+void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 {
 	u8 tag;
 	unsigned long i;
@@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init)
 	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
-void __kasan_free_pages(struct page *page, unsigned int order, bool init)
+void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
 {
 	if (likely(!PageHighMem(page)))
 		kasan_poison(page_address(page), PAGE_SIZE << order,
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c
index 4004388b4e4b..45e552cb9172 100644
--- a/mm/kasan/hw_tags.c
+++ b/mm/kasan/hw_tags.c
@@ -238,6 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache,
 	return &alloc_meta->free_track[0];
 }
 
+void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags)
+{
+	bool init = !want_init_on_free() && want_init_on_alloc(flags);
+
+	kasan_unpoison_pages(page, order, init);
+}
+
+void kasan_free_pages(struct page *page, unsigned int order)
+{
+	bool init = want_init_on_free();
+
+	kasan_poison_pages(page, order, init);
+}
+
 #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST)
 
 void kasan_set_tagging_report_once(bool state)
diff --git a/mm/mempool.c b/mm/mempool.c
index a258cf4de575..0b8afbec3e35 100644
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_slab_free_mempool(element);
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_free_pages(element, (unsigned long)pool->pool_data, false);
+		kasan_poison_pages(element, (unsigned long)pool->pool_data,
+				   false);
 }
 
 static void kasan_unpoison_element(mempool_t *pool, void *element)
@@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element)
 	if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
 		kasan_unpoison_range(element, __ksize(element));
 	else if (pool->alloc == mempool_alloc_pages)
-		kasan_alloc_pages(element, (unsigned long)pool->pool_data, false);
+		kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
+				     false);
 }
 
 static __always_inline void add_element(mempool_t *pool, void *element)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index aaa1655cf682..6e82a7f6fd6f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly;
 static DEFINE_STATIC_KEY_TRUE(deferred_pages);
 
 /*
- * Calling kasan_free_pages() only after deferred memory initialization
+ * Calling kasan_poison_pages() only after deferred memory initialization
  * has completed. Poisoning pages during deferred memory init will greatly
  * lengthen the process and cause problem in large memory systems as the
  * deferred pages initialization is done with interrupt disabled.
@@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages);
  * on-demand allocation and then freed again before the deferred pages
  * initialization is done, but this is not likely to happen.
  */
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
 {
-	if (static_branch_unlikely(&deferred_pages))
-		return;
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return static_branch_unlikely(&deferred_pages) ||
+	       (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON));
 }
 
 /* Returns true if the struct page for the pfn is uninitialised */
@@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn)
 	return false;
 }
 #else
-static inline void kasan_free_nondeferred_pages(struct page *page, int order,
-						bool init, fpi_t fpi_flags)
+static inline bool should_skip_kasan_poison(fpi_t fpi_flags)
 {
-	if (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
-	    (fpi_flags & FPI_SKIP_KASAN_POISON))
-		return;
-	kasan_free_pages(page, order, init);
+	return (!IS_ENABLED(CONFIG_KASAN_GENERIC) &&
+		(fpi_flags & FPI_SKIP_KASAN_POISON));
 }
 
 static inline bool early_page_uninitialised(unsigned long pfn)
@@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 					unsigned int order, bool check_free, fpi_t fpi_flags)
 {
 	int bad = 0;
-	bool init;
+	bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags);
 
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
@@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	 * With hardware tag-based KASAN, memory tags must be set before the
 	 * page becomes unavailable via debug_pagealloc or arch_free_page.
 	 */
-	init = want_init_on_free();
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
-	kasan_free_nondeferred_pages(page, order, init, fpi_flags);
+	if (kasan_has_integrated_init()) {
+		if (!skip_kasan_poison)
+			kasan_free_pages(page, order);
+	} else {
+		bool init = want_init_on_free();
+
+		if (init)
+			kernel_init_free_pages(page, 1 << order);
+		if (!skip_kasan_poison)
+			kasan_poison_pages(page, order, init);
+	}
 
 	/*
 	 * arch_free_page() can make the page's contents inaccessible. s390
@@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order)
 inline void post_alloc_hook(struct page *page, unsigned int order,
 				gfp_t gfp_flags)
 {
-	bool init;
-
 	set_page_private(page, 0);
 	set_page_refcounted(page);
 
@@ -2344,10 +2342,16 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 * kasan_alloc_pages and kernel_init_free_pages must be
 	 * kept together to avoid discrepancies in behavior.
	 */
-	init = !want_init_on_free() && want_init_on_alloc(gfp_flags);
-	kasan_alloc_pages(page, order, init);
-	if (init && !kasan_has_integrated_init())
-		kernel_init_free_pages(page, 1 << order);
+	if (kasan_has_integrated_init()) {
+		kasan_alloc_pages(page, order, gfp_flags);
+	} else {
+		bool init =
+			!want_init_on_free() && want_init_on_alloc(gfp_flags);
+
+		kasan_unpoison_pages(page, order, init);
+		if (init)
+			kernel_init_free_pages(page, 1 << order);
+	}
 
 	set_page_owner(page, order, gfp_flags);
 }
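
For reviewers, a summary of the resulting page-level API (restated from
the kasan.h hunks above; not part of the patch):

	/* HW tags only; integrated init, so the init decision is made
	   inside KASAN rather than by the caller. BUILD_BUG() elsewhere. */
	void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags);
	void kasan_free_pages(struct page *page, unsigned int order);

	/* All modes; inline wrappers around __kasan_(un)poison_pages()
	   that no-op unless kasan_enabled(); the caller passes init. */
	void kasan_poison_pages(struct page *page, unsigned int order, bool init);
	void kasan_unpoison_pages(struct page *page, unsigned int order, bool init);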