From patchwork Tue May 11 07:31:06 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12249993
Date: Tue, 11 May 2021 00:31:06 -0700
Message-Id: <20210511073108.138837-1-pcc@google.com>
Subject: [PATCH 1/3] kasan: use separate (un)poison implementation for integrated init
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently with integrated init page_alloc.c needs to know whether
kasan_alloc_pages() will zero initialize memory, but this will start
becoming more complicated once we start adding tag initialization
support for user pages. To avoid page_alloc.c needing to know more
details of what integrated init will do, move the unpoisoning logic
for integrated init into the HW tags implementation. Currently the
logic is identical but it will diverge in subsequent patches.

For symmetry do the same for poisoning although this logic will be
unaffected by subsequent patches.
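To make the new division of labour easier to follow, here is a condensed
sketch of the allocation-side hook after this change, paraphrased from the
post_alloc_hook() and kasan_alloc_pages() hunks in the diff below; it is an
illustration only, not a literal copy of the kernel code:

	/*
	 * Condensed illustration: with integrated init (HW tags), KASAN
	 * decides how to unpoison and initialise the pages; otherwise
	 * page_alloc.c keeps doing the zero-initialisation itself.
	 */
	static void post_alloc_hook_sketch(struct page *page, unsigned int order,
					   gfp_t gfp_flags)
	{
		if (kasan_has_integrated_init()) {
			kasan_alloc_pages(page, order, gfp_flags);
		} else {
			bool init = !want_init_on_free() &&
				    want_init_on_alloc(gfp_flags);

			kasan_unpoison_pages(page, order, init);
			if (init)
				kernel_init_free_pages(page, 1 << order);
		}
	}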
Signed-off-by: Peter Collingbourne Link: https://linux-review.googlesource.com/id/I2c550234c6c4a893c48c18ff0c6ce658c7c67056 Reported-by: kernel test robot Reported-by: kernel test robot --- include/linux/kasan.h | 63 ++++++++++++++++++++++++++----------------- mm/kasan/common.c | 4 +-- mm/kasan/hw_tags.c | 14 ++++++++++ mm/mempool.c | 6 +++-- mm/page_alloc.c | 56 ++++++++++++++++++++------------------ 5 files changed, 88 insertions(+), 55 deletions(-) diff --git a/include/linux/kasan.h b/include/linux/kasan.h index b1678a61e6a7..c5c7c45c07c0 100644 --- a/include/linux/kasan.h +++ b/include/linux/kasan.h @@ -79,14 +79,6 @@ static inline void kasan_disable_current(void) {} #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */ -#ifdef CONFIG_KASAN - -struct kasan_cache { - int alloc_meta_offset; - int free_meta_offset; - bool is_kmalloc; -}; - #ifdef CONFIG_KASAN_HW_TAGS DECLARE_STATIC_KEY_FALSE(kasan_flag_enabled); @@ -101,11 +93,18 @@ static inline bool kasan_has_integrated_init(void) return kasan_enabled(); } +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags); +void kasan_free_pages(struct page *page, unsigned int order); + #else /* CONFIG_KASAN_HW_TAGS */ static inline bool kasan_enabled(void) { +#ifdef CONFIG_KASAN return true; +#else + return false; +#endif } static inline bool kasan_has_integrated_init(void) @@ -113,8 +112,30 @@ static inline bool kasan_has_integrated_init(void) return false; } +static __always_inline void kasan_alloc_pages(struct page *page, + unsigned int order, gfp_t flags) +{ + /* Only available for integrated init. */ + BUG(); +} + +static __always_inline void kasan_free_pages(struct page *page, + unsigned int order) +{ + /* Only available for integrated init. */ + BUG(); +} + #endif /* CONFIG_KASAN_HW_TAGS */ +#ifdef CONFIG_KASAN + +struct kasan_cache { + int alloc_meta_offset; + int free_meta_offset; + bool is_kmalloc; +}; + slab_flags_t __kasan_never_merge(void); static __always_inline slab_flags_t kasan_never_merge(void) { @@ -130,20 +151,20 @@ static __always_inline void kasan_unpoison_range(const void *addr, size_t size) __kasan_unpoison_range(addr, size); } -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init); -static __always_inline void kasan_alloc_pages(struct page *page, +void __kasan_poison_pages(struct page *page, unsigned int order, bool init); +static __always_inline void kasan_poison_pages(struct page *page, unsigned int order, bool init) { if (kasan_enabled()) - __kasan_alloc_pages(page, order, init); + __kasan_poison_pages(page, order, init); } -void __kasan_free_pages(struct page *page, unsigned int order, bool init); -static __always_inline void kasan_free_pages(struct page *page, - unsigned int order, bool init) +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init); +static __always_inline void kasan_unpoison_pages(struct page *page, + unsigned int order, bool init) { if (kasan_enabled()) - __kasan_free_pages(page, order, init); + __kasan_unpoison_pages(page, order, init); } void __kasan_cache_create(struct kmem_cache *cache, unsigned int *size, @@ -285,21 +306,13 @@ void kasan_restore_multi_shot(bool enabled); #else /* CONFIG_KASAN */ -static inline bool kasan_enabled(void) -{ - return false; -} -static inline bool kasan_has_integrated_init(void) -{ - return false; -} static inline slab_flags_t kasan_never_merge(void) { return 0; } static inline void kasan_unpoison_range(const void *address, size_t size) {} -static inline void kasan_alloc_pages(struct page *page, 
unsigned int order, bool init) {} -static inline void kasan_free_pages(struct page *page, unsigned int order, bool init) {} +static inline void kasan_poison_pages(struct page *page, unsigned int order) {} +static inline void kasan_unpoison_pages(struct page *page, unsigned int order) {} static inline void kasan_cache_create(struct kmem_cache *cache, unsigned int *size, slab_flags_t *flags) {} diff --git a/mm/kasan/common.c b/mm/kasan/common.c index 6bb87f2acd4e..0ecd293af344 100644 --- a/mm/kasan/common.c +++ b/mm/kasan/common.c @@ -97,7 +97,7 @@ slab_flags_t __kasan_never_merge(void) return 0; } -void __kasan_alloc_pages(struct page *page, unsigned int order, bool init) +void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init) { u8 tag; unsigned long i; @@ -111,7 +111,7 @@ void __kasan_alloc_pages(struct page *page, unsigned int order, bool init) kasan_unpoison(page_address(page), PAGE_SIZE << order, init); } -void __kasan_free_pages(struct page *page, unsigned int order, bool init) +void __kasan_poison_pages(struct page *page, unsigned int order, bool init) { if (likely(!PageHighMem(page))) kasan_poison(page_address(page), PAGE_SIZE << order, diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 4004388b4e4b..45e552cb9172 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -238,6 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, return &alloc_meta->free_track[0]; } +void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) +{ + bool init = !want_init_on_free() && want_init_on_alloc(flags); + + kasan_unpoison_pages(page, order, init); +} + +void kasan_free_pages(struct page *page, unsigned int order) +{ + bool init = want_init_on_free(); + + kasan_poison_pages(page, order, init); +} + #if IS_ENABLED(CONFIG_KASAN_KUNIT_TEST) void kasan_set_tagging_report_once(bool state) diff --git a/mm/mempool.c b/mm/mempool.c index a258cf4de575..0b8afbec3e35 100644 --- a/mm/mempool.c +++ b/mm/mempool.c @@ -106,7 +106,8 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element) if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) kasan_slab_free_mempool(element); else if (pool->alloc == mempool_alloc_pages) - kasan_free_pages(element, (unsigned long)pool->pool_data, false); + kasan_poison_pages(element, (unsigned long)pool->pool_data, + false); } static void kasan_unpoison_element(mempool_t *pool, void *element) @@ -114,7 +115,8 @@ static void kasan_unpoison_element(mempool_t *pool, void *element) if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) kasan_unpoison_range(element, __ksize(element)); else if (pool->alloc == mempool_alloc_pages) - kasan_alloc_pages(element, (unsigned long)pool->pool_data, false); + kasan_unpoison_pages(element, (unsigned long)pool->pool_data, + false); } static __always_inline void add_element(mempool_t *pool, void *element) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index aaa1655cf682..6e82a7f6fd6f 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -382,7 +382,7 @@ int page_group_by_mobility_disabled __read_mostly; static DEFINE_STATIC_KEY_TRUE(deferred_pages); /* - * Calling kasan_free_pages() only after deferred memory initialization + * Calling kasan_poison_pages() only after deferred memory initialization * has completed. Poisoning pages during deferred memory init will greatly * lengthen the process and cause problem in large memory systems as the * deferred pages initialization is done with interrupt disabled. 
@@ -394,15 +394,11 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages); * on-demand allocation and then freed again before the deferred pages * initialization is done, but this is not likely to happen. */ -static inline void kasan_free_nondeferred_pages(struct page *page, int order, - bool init, fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(fpi_t fpi_flags) { - if (static_branch_unlikely(&deferred_pages)) - return; - if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)) - return; - kasan_free_pages(page, order, init); + return static_branch_unlikely(&deferred_pages) || + (!IS_ENABLED(CONFIG_KASAN_GENERIC) && + (fpi_flags & FPI_SKIP_KASAN_POISON)); } /* Returns true if the struct page for the pfn is uninitialised */ @@ -453,13 +449,10 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn) return false; } #else -static inline void kasan_free_nondeferred_pages(struct page *page, int order, - bool init, fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(fpi_t fpi_flags) { - if (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)) - return; - kasan_free_pages(page, order, init); + return (!IS_ENABLED(CONFIG_KASAN_GENERIC) && + (fpi_flags & FPI_SKIP_KASAN_POISON)); } static inline bool early_page_uninitialised(unsigned long pfn) @@ -1245,7 +1238,7 @@ static __always_inline bool free_pages_prepare(struct page *page, unsigned int order, bool check_free, fpi_t fpi_flags) { int bad = 0; - bool init; + bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags); VM_BUG_ON_PAGE(PageTail(page), page); @@ -1314,10 +1307,17 @@ static __always_inline bool free_pages_prepare(struct page *page, * With hardware tag-based KASAN, memory tags must be set before the * page becomes unavailable via debug_pagealloc or arch_free_page. */ - init = want_init_on_free(); - if (init && !kasan_has_integrated_init()) - kernel_init_free_pages(page, 1 << order); - kasan_free_nondeferred_pages(page, order, init, fpi_flags); + if (kasan_has_integrated_init()) { + if (!skip_kasan_poison) + kasan_free_pages(page, order); + } else { + bool init = want_init_on_free(); + + if (init) + kernel_init_free_pages(page, 1 << order); + if (!skip_kasan_poison) + kasan_poison_pages(page, order, init); + } /* * arch_free_page() can make the page's contents inaccessible. s390 @@ -2324,8 +2324,6 @@ static bool check_new_pages(struct page *page, unsigned int order) inline void post_alloc_hook(struct page *page, unsigned int order, gfp_t gfp_flags) { - bool init; - set_page_private(page, 0); set_page_refcounted(page); @@ -2344,10 +2342,16 @@ inline void post_alloc_hook(struct page *page, unsigned int order, * kasan_alloc_pages and kernel_init_free_pages must be * kept together to avoid discrepancies in behavior. 
*/ - init = !want_init_on_free() && want_init_on_alloc(gfp_flags); - kasan_alloc_pages(page, order, init); - if (init && !kasan_has_integrated_init()) - kernel_init_free_pages(page, 1 << order); + if (kasan_has_integrated_init()) { + kasan_alloc_pages(page, order, gfp_flags); + } else { + bool init = + !want_init_on_free() && want_init_on_alloc(gfp_flags); + + kasan_unpoison_pages(page, order, init); + if (init) + kernel_init_free_pages(page, 1 << order); + } set_page_owner(page, order, gfp_flags); }
From patchwork Tue May 11 07:31:07 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12249995
Date: Tue, 11 May 2021 00:31:07 -0700
In-Reply-To: <20210511073108.138837-1-pcc@google.com>
Message-Id: <20210511073108.138837-2-pcc@google.com>
References: <20210511073108.138837-1-pcc@google.com>
Subject: [PATCH 2/3] arm64: mte: handle tags zeroing at page allocation time
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org

Currently, on an anonymous page fault, the kernel allocates a zeroed
page and maps it in user space. If the mapping is tagged (PROT_MTE),
set_pte_at() additionally clears the tags. It is, however, more
efficient to clear the tags at the same time as zeroing the data on
allocation. To avoid clearing the tags on any page (which may not be
mapped as tagged), only do this if the vma flags contain VM_MTE. This
requires introducing a new GFP flag that is used to determine whether
to clear the tags.

The DC GZVA instruction with a 0 top byte (and 0 tag) requires
top-byte-ignore. Set the TCR_EL1.{TBI1,TBID1} bits irrespective of
whether KASAN_HW is enabled.
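As a reading aid for the assembly added below in arch/arm64/lib/mte.S, here
is a rough C sketch of what mte_zero_clear_page_tags() does; read_dczid_el0()
and dc_gzva() are placeholder helpers used only for this sketch, not existing
kernel APIs:

	/*
	 * Illustrative C equivalent of the DC GZVA loop below. The block size
	 * comes from DCZID_EL0[3:0], which encodes log2 of the zeroing block
	 * size in words, i.e. 4 << BS bytes per DC GZVA operation.
	 */
	static void mte_zero_clear_page_tags_sketch(void *page_addr)
	{
		unsigned long blksz = 4UL << (read_dczid_el0() & 0xf);
		/* Clear the tag bits of the address, as the assembly does. */
		unsigned long addr = (unsigned long)page_addr &
				     ((1UL << MTE_TAG_SHIFT) - 1);
		unsigned long end = addr + PAGE_SIZE;

		do {
			dc_gzva(addr);	/* zero one block of data and its tags */
			addr += blksz;
		} while (addr != end);
	}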
Signed-off-by: Peter Collingbourne Co-developed-by: Catalin Marinas Signed-off-by: Catalin Marinas Link: https://linux-review.googlesource.com/id/Id46dc94e30fe11474f7e54f5d65e7658dbdddb26 --- arch/arm64/include/asm/mte.h | 4 ++++ arch/arm64/include/asm/page.h | 11 +++++++++-- arch/arm64/lib/mte.S | 20 ++++++++++++++++++++ arch/arm64/mm/fault.c | 25 +++++++++++++++++++++++++ arch/arm64/mm/proc.S | 10 +++++++--- include/linux/gfp.h | 9 +++++++-- include/linux/highmem.h | 10 ++++++++++ mm/kasan/hw_tags.c | 9 ++++++++- mm/page_alloc.c | 14 +++++++++++--- 9 files changed, 101 insertions(+), 11 deletions(-) diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h index bc88a1ced0d7..67bf259ae768 100644 --- a/arch/arm64/include/asm/mte.h +++ b/arch/arm64/include/asm/mte.h @@ -37,6 +37,7 @@ void mte_free_tag_storage(char *storage); /* track which pages have valid allocation tags */ #define PG_mte_tagged PG_arch_2 +void mte_zero_clear_page_tags(void *addr); void mte_sync_tags(pte_t *ptep, pte_t pte); void mte_copy_page_tags(void *kto, const void *kfrom); void mte_thread_init_user(void); @@ -53,6 +54,9 @@ int mte_ptrace_copy_tags(struct task_struct *child, long request, /* unused if !CONFIG_ARM64_MTE, silence the compiler */ #define PG_mte_tagged 0 +static inline void mte_zero_clear_page_tags(void *addr) +{ +} static inline void mte_sync_tags(pte_t *ptep, pte_t pte) { } diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h index 012cffc574e8..a0bcaa5f735e 100644 --- a/arch/arm64/include/asm/page.h +++ b/arch/arm64/include/asm/page.h @@ -13,6 +13,7 @@ #ifndef __ASSEMBLY__ #include /* for READ_IMPLIES_EXEC */ +#include /* for gfp_t */ #include struct page; @@ -28,10 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from, void copy_highpage(struct page *to, struct page *from); #define __HAVE_ARCH_COPY_HIGHPAGE -#define __alloc_zeroed_user_highpage(movableflags, vma, vaddr) \ - alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr) +struct page *__alloc_zeroed_user_highpage(gfp_t movableflags, + struct vm_area_struct *vma, + unsigned long vaddr); #define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE +#define want_zero_tags_on_free() system_supports_mte() + +void tag_clear_highpage(struct page *to); +#define __HAVE_ARCH_TAG_CLEAR_HIGHPAGE + #define clear_user_page(page, vaddr, pg) clear_page(page) #define copy_user_page(to, from, vaddr, pg) copy_page(to, from) diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S index 351537c12f36..e83643b3995f 100644 --- a/arch/arm64/lib/mte.S +++ b/arch/arm64/lib/mte.S @@ -36,6 +36,26 @@ SYM_FUNC_START(mte_clear_page_tags) ret SYM_FUNC_END(mte_clear_page_tags) +/* + * Zero the page and tags at the same time + * + * Parameters: + * x0 - address to the beginning of the page + */ +SYM_FUNC_START(mte_zero_clear_page_tags) + mrs x1, dczid_el0 + and w1, w1, #0xf + mov x2, #4 + lsl x1, x2, x1 + and x0, x0, #(1 << MTE_TAG_SHIFT) - 1 // clear the tag + +1: dc gzva, x0 + add x0, x0, x1 + tst x0, #(PAGE_SIZE - 1) + b.ne 1b + ret +SYM_FUNC_END(mte_zero_clear_page_tags) + /* * Copy the tags from the source page to the destination one * x0 - address of the destination page diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c index 871c82ab0a30..8127e0c0b8fb 100644 --- a/arch/arm64/mm/fault.c +++ b/arch/arm64/mm/fault.c @@ -921,3 +921,28 @@ void do_debug_exception(unsigned long addr_if_watchpoint, unsigned int esr, debug_exception_exit(regs); } NOKPROBE_SYMBOL(do_debug_exception); + +/* + * Used during anonymous page 
fault handling. + */ +struct page *__alloc_zeroed_user_highpage(gfp_t flags, + struct vm_area_struct *vma, + unsigned long vaddr) +{ + /* + * If the page is mapped with PROT_MTE, initialise the tags at the + * point of allocation and page zeroing as this is usually faster than + * separate DC ZVA and STGM. + */ + if (vma->vm_flags & VM_MTE) + flags |= __GFP_ZEROTAGS; + + return alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | flags, vma, vaddr); +} + +void tag_clear_highpage(struct page *page) +{ + mte_zero_clear_page_tags(page_address(page)); + page_kasan_tag_reset(page); + set_bit(PG_mte_tagged, &page->flags); +} diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index 0a48191534ff..a27c77dbe91c 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -46,9 +46,13 @@ #endif #ifdef CONFIG_KASAN_HW_TAGS -#define TCR_KASAN_HW_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1 +#define TCR_MTE_FLAGS SYS_TCR_EL1_TCMA1 | TCR_TBI1 | TCR_TBID1 #else -#define TCR_KASAN_HW_FLAGS 0 +/* + * The mte_zero_clear_page_tags() implementation uses DC GZVA, which relies on + * TBI being enabled at EL1. + */ +#define TCR_MTE_FLAGS TCR_TBI1 | TCR_TBID1 #endif /* @@ -452,7 +456,7 @@ SYM_FUNC_START(__cpu_setup) msr_s SYS_TFSRE0_EL1, xzr /* set the TCR_EL1 bits */ - mov_q x10, TCR_KASAN_HW_FLAGS + mov_q x10, TCR_MTE_FLAGS orr tcr, tcr, x10 1: #endif diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 11da8af06704..68ba237365dc 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -53,8 +53,9 @@ struct vm_area_struct; #define ___GFP_HARDWALL 0x100000u #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u +#define ___GFP_ZEROTAGS 0x800000u #ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x800000u +#define ___GFP_NOLOCKDEP 0x1000000u #else #define ___GFP_NOLOCKDEP 0 #endif @@ -229,16 +230,20 @@ struct vm_area_struct; * %__GFP_COMP address compound page metadata. * * %__GFP_ZERO returns a zeroed page on success. + * + * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if + * __GFP_ZERO is set. */ #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) +#define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) /* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP)) #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /** diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 832b49b50c7b..78bba4bc47aa 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -204,6 +204,16 @@ static inline void clear_highpage(struct page *page) kunmap_atomic(kaddr); } +#ifndef __HAVE_ARCH_TAG_CLEAR_HIGHPAGE + +#define want_zero_tags_on_free() false + +static inline void tag_clear_highpage(struct page *page) +{ +} + +#endif + /* * If we pass in a base or tail page, we can zero up to PAGE_SIZE. * If we pass in a head page, we can zero up to the size of the compound page. 
diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 45e552cb9172..34362c8d0955 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -242,7 +242,14 @@ void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) { bool init = !want_init_on_free() && want_init_on_alloc(flags); - kasan_unpoison_pages(page, order, init); + if (flags & __GFP_ZEROTAGS) { + int i; + + for (i = 0; i != 1 << order; ++i) + tag_clear_highpage(page + i); + } else { + kasan_unpoison_pages(page, order, init); + } } void kasan_free_pages(struct page *page, unsigned int order) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 6e82a7f6fd6f..7ac0f0721d22 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1219,10 +1219,16 @@ static int free_tail_pages_check(struct page *head_page, struct page *page) return ret; } -static void kernel_init_free_pages(struct page *page, int numpages) +static void kernel_init_free_pages(struct page *page, int numpages, bool zero_tags) { int i; + if (zero_tags) { + for (i = 0; i < numpages; i++) + tag_clear_highpage(page + i); + return; + } + /* s390's use of memset() could override KASAN redzones. */ kasan_disable_current(); for (i = 0; i < numpages; i++) { @@ -1314,7 +1320,8 @@ static __always_inline bool free_pages_prepare(struct page *page, bool init = want_init_on_free(); if (init) - kernel_init_free_pages(page, 1 << order); + kernel_init_free_pages(page, 1 << order, + want_zero_tags_on_free()); if (!skip_kasan_poison) kasan_poison_pages(page, order, init); } @@ -2350,7 +2357,8 @@ inline void post_alloc_hook(struct page *page, unsigned int order, kasan_unpoison_pages(page, order, init); if (init) - kernel_init_free_pages(page, 1 << order); + kernel_init_free_pages(page, 1 << order, + gfp_flags & __GFP_ZEROTAGS); } set_page_owner(page, order, gfp_flags);
From patchwork Tue May 11 07:31:08 2021
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 12249997
Date: Tue, 11 May 2021 00:31:08 -0700
In-Reply-To: <20210511073108.138837-1-pcc@google.com>
Message-Id: <20210511073108.138837-3-pcc@google.com>
References: <20210511073108.138837-1-pcc@google.com>
Subject: [PATCH 3/3] kasan: allow freed user page poisoning to be disabled with HW tags
From: Peter Collingbourne
To: Andrey Konovalov, Alexander Potapenko, Catalin Marinas, Vincenzo Frascino, Andrew Morton
Cc: Peter Collingbourne, Evgenii Stepanov, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org
Poisoning freed pages protects against kernel use-after-free. The
likelihood of such a bug involving kernel pages is significantly higher
than that for user pages. At the same time, poisoning freed pages can
impose a significant performance cost, which cannot always be justified
for user pages given the lower probability of finding a bug. Therefore,
make it possible to configure the kernel to disable freed user page
poisoning when using HW tags via the new kasan.skip_user_poison_on_free
command line option.

Signed-off-by: Peter Collingbourne Link: https://linux-review.googlesource.com/id/I716846e2de8ef179f44e835770df7e6307be96c9 --- include/linux/gfp.h | 13 ++++++++++--- include/linux/page-flags.h | 9 +++++++++ include/trace/events/mmflags.h | 9 ++++++++- mm/kasan/hw_tags.c | 10 ++++++++++ mm/page_alloc.c | 12 +++++++----- 5 files changed, 44 insertions(+), 9 deletions(-) diff --git a/include/linux/gfp.h b/include/linux/gfp.h index 68ba237365dc..9a77e5660b07 100644 --- a/include/linux/gfp.h +++ b/include/linux/gfp.h @@ -54,8 +54,9 @@ struct vm_area_struct; #define ___GFP_THISNODE 0x200000u #define ___GFP_ACCOUNT 0x400000u #define ___GFP_ZEROTAGS 0x800000u +#define ___GFP_SKIP_KASAN_POISON 0x1000000u #ifdef CONFIG_LOCKDEP -#define ___GFP_NOLOCKDEP 0x1000000u +#define ___GFP_NOLOCKDEP 0x2000000u #else #define ___GFP_NOLOCKDEP 0 #endif @@ -233,17 +234,22 @@ struct vm_area_struct; * * %__GFP_ZEROTAGS returns a page with zeroed memory tags on success, if * __GFP_ZERO is set. + * + * %__GFP_SKIP_KASAN_POISON returns a page which does not need to be poisoned + * on deallocation. Typically used for userspace pages. Currently only has an + * effect in HW tags mode, and only if a command line option is set.
*/ #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN) #define __GFP_COMP ((__force gfp_t)___GFP_COMP) #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO) #define __GFP_ZEROTAGS ((__force gfp_t)___GFP_ZEROTAGS) +#define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON) /* Disable lockdep for GFP context tracking */ #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP) /* Room for N __GFP_FOO bits */ -#define __GFP_BITS_SHIFT (24 + IS_ENABLED(CONFIG_LOCKDEP)) +#define __GFP_BITS_SHIFT (25 + IS_ENABLED(CONFIG_LOCKDEP)) #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1)) /** @@ -320,7 +326,8 @@ struct vm_area_struct; #define GFP_NOWAIT (__GFP_KSWAPD_RECLAIM) #define GFP_NOIO (__GFP_RECLAIM) #define GFP_NOFS (__GFP_RECLAIM | __GFP_IO) -#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL) +#define GFP_USER (__GFP_RECLAIM | __GFP_IO | __GFP_FS | \ + __GFP_HARDWALL | __GFP_SKIP_KASAN_POISON) #define GFP_DMA __GFP_DMA #define GFP_DMA32 __GFP_DMA32 #define GFP_HIGHUSER (GFP_USER | __GFP_HIGHMEM) diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 04a34c08e0a6..40e2c5000585 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -137,6 +137,9 @@ enum pageflags { #endif #ifdef CONFIG_64BIT PG_arch_2, +#endif +#ifdef CONFIG_KASAN_HW_TAGS + PG_skip_kasan_poison, #endif __NR_PAGEFLAGS, @@ -443,6 +446,12 @@ TESTCLEARFLAG(Young, young, PF_ANY) PAGEFLAG(Idle, idle, PF_ANY) #endif +#ifdef CONFIG_KASAN_HW_TAGS +PAGEFLAG(SkipKASanPoison, skip_kasan_poison, PF_HEAD) +#else +PAGEFLAG_FALSE(SkipKASanPoison) +#endif + /* * PageReported() is used to track reported free pages within the Buddy * allocator. We can use the non-atomic version of the test and set diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 629c7a0eaff2..390270e00a1d 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -85,6 +85,12 @@ #define IF_HAVE_PG_ARCH_2(flag,string) #endif +#ifdef CONFIG_KASAN_HW_TAGS +#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) ,{1UL << flag, string} +#else +#define IF_HAVE_PG_SKIP_KASAN_POISON(flag,string) +#endif + #define __def_pageflag_names \ {1UL << PG_locked, "locked" }, \ {1UL << PG_waiters, "waiters" }, \ @@ -112,7 +118,8 @@ IF_HAVE_PG_UNCACHED(PG_uncached, "uncached" ) \ IF_HAVE_PG_HWPOISON(PG_hwpoison, "hwpoison" ) \ IF_HAVE_PG_IDLE(PG_young, "young" ) \ IF_HAVE_PG_IDLE(PG_idle, "idle" ) \ -IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) +IF_HAVE_PG_ARCH_2(PG_arch_2, "arch_2" ) \ +IF_HAVE_PG_SKIP_KASAN_POISON(PG_skip_kasan_poison, "skip_kasan_poison") #define show_page_flags(flags) \ (flags) ? 
__print_flags(flags, "|", \ diff --git a/mm/kasan/hw_tags.c b/mm/kasan/hw_tags.c index 34362c8d0955..954d5c2f7683 100644 --- a/mm/kasan/hw_tags.c +++ b/mm/kasan/hw_tags.c @@ -238,10 +238,20 @@ struct kasan_track *kasan_get_free_track(struct kmem_cache *cache, return &alloc_meta->free_track[0]; } +static bool skip_user_poison_on_free; +static int __init skip_user_poison_on_free_param(char *buf) +{ + return kstrtobool(buf, &skip_user_poison_on_free); +} +early_param("kasan.skip_user_poison_on_free", skip_user_poison_on_free_param); + void kasan_alloc_pages(struct page *page, unsigned int order, gfp_t flags) { bool init = !want_init_on_free() && want_init_on_alloc(flags); + if (skip_user_poison_on_free && (flags & __GFP_SKIP_KASAN_POISON)) + SetPageSkipKASanPoison(page); + if (flags & __GFP_ZEROTAGS) { int i; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7ac0f0721d22..655f39c14433 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -394,11 +394,12 @@ static DEFINE_STATIC_KEY_TRUE(deferred_pages); * on-demand allocation and then freed again before the deferred pages * initialization is done, but this is not likely to happen. */ -static inline bool should_skip_kasan_poison(fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags) { return static_branch_unlikely(&deferred_pages) || (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)); + (fpi_flags & FPI_SKIP_KASAN_POISON)) || + PageSkipKASanPoison(page); } /* Returns true if the struct page for the pfn is uninitialised */ @@ -449,10 +450,11 @@ defer_init(int nid, unsigned long pfn, unsigned long end_pfn) return false; } #else -static inline bool should_skip_kasan_poison(fpi_t fpi_flags) +static inline bool should_skip_kasan_poison(struct page *page, fpi_t fpi_flags) { return (!IS_ENABLED(CONFIG_KASAN_GENERIC) && - (fpi_flags & FPI_SKIP_KASAN_POISON)); + (fpi_flags & FPI_SKIP_KASAN_POISON)) || + PageSkipKASanPoison(page); } static inline bool early_page_uninitialised(unsigned long pfn) @@ -1244,7 +1246,7 @@ static __always_inline bool free_pages_prepare(struct page *page, unsigned int order, bool check_free, fpi_t fpi_flags) { int bad = 0; - bool skip_kasan_poison = should_skip_kasan_poison(fpi_flags); + bool skip_kasan_poison = should_skip_kasan_poison(page, fpi_flags); VM_BUG_ON_PAGE(PageTail(page), page);