From patchwork Tue May 17 18:09:43 2022
X-Patchwork-Id: 12852827
From: Catalin Marinas
To: Andrey Ryabinin, Andrey Konovalov
Cc: Will Deacon, Vincenzo Frascino, Peter Collingbourne,
    kasan-dev@googlegroups.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/3] mm: kasan: Ensure the tags are visible before the tag in page->flags
Date: Tue, 17 May 2022 19:09:43 +0100
Message-Id: <20220517180945.756303-2-catalin.marinas@arm.com>
In-Reply-To: <20220517180945.756303-1-catalin.marinas@arm.com>
References: <20220517180945.756303-1-catalin.marinas@arm.com>

__kasan_unpoison_pages() colours the memory with a random tag and stores
it in page->flags in order to re-create the tagged pointer via
page_to_virt() later. When the tag from the page->flags is read, ensure
that the in-memory tags are already visible by re-ordering the
page_kasan_tag_set() after kasan_unpoison(). The former already has
barriers in place through try_cmpxchg(). On the reader side, the order
is ensured by the address dependency between page->flags and the memory
access.
Signed-off-by: Catalin Marinas
Cc: Andrey Ryabinin
Cc: Andrey Konovalov
Cc: Vincenzo Frascino
Reviewed-by: Andrey Konovalov
---
 mm/kasan/common.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index d9079ec11f31..f6b8dc4f354b 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -108,9 +108,10 @@ void __kasan_unpoison_pages(struct page *page, unsigned int order, bool init)
 		return;
 
 	tag = kasan_random_tag();
+	kasan_unpoison(set_tag(page_address(page), tag),
+		       PAGE_SIZE << order, init);
 	for (i = 0; i < (1 << order); i++)
 		page_kasan_tag_set(page + i, tag);
-	kasan_unpoison(page_address(page), PAGE_SIZE << order, init);
 }
 
 void __kasan_poison_pages(struct page *page, unsigned int order, bool init)
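
The reordering relies on the reader deriving the tag from page->flags
before touching the memory. A minimal reader-side sketch, assuming
kernel context with CONFIG_KASAN_HW_TAGS and reusing the generic
page_kasan_tag() helper plus the set_tag() helper from the hunk above
(the function name tagged_page_address() is made up for illustration;
this is not the actual arm64 page_to_virt() code):

	/*
	 * The tag is loaded from page->flags and folded into the pointer,
	 * so any later access through that pointer is address-dependent
	 * on the page->flags read and cannot be observed ahead of it.
	 */
	static void *tagged_page_address(struct page *page)
	{
		u8 tag = page_kasan_tag(page);	/* tag stored earlier by page_kasan_tag_set() */

		return set_tag(page_address(page), tag);
	}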
From patchwork Tue May 17 18:09:44 2022
X-Patchwork-Id: 12852829
From: Catalin Marinas
To: Andrey Ryabinin, Andrey Konovalov
Cc: Will Deacon, Vincenzo Frascino, Peter Collingbourne,
    kasan-dev@googlegroups.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/3] mm: kasan: Reset the tag on pages intended for user
Date: Tue, 17 May 2022 19:09:44 +0100
Message-Id: <20220517180945.756303-3-catalin.marinas@arm.com>
In-Reply-To: <20220517180945.756303-1-catalin.marinas@arm.com>
References: <20220517180945.756303-1-catalin.marinas@arm.com>

On allocation kasan colours a page with a random tag and stores such tag
in page->flags so that a subsequent page_to_virt() reconstructs the
correct tagged pointer. However, when such page is mapped in user-space
with PROT_MTE, the kernel's initial tag is overridden. Ensure that such
pages have the tag reset (match-all) at allocation time since any late
clearing of the tag is racy with other page_to_virt() dereferencing.

Signed-off-by: Catalin Marinas
Cc: Andrey Ryabinin
Cc: Andrey Konovalov
Cc: Vincenzo Frascino
---
 include/linux/gfp.h | 10 +++++++---
 mm/page_alloc.c     |  9 ++++++---
 2 files changed, 13 insertions(+), 6 deletions(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 3e3d36fc2109..88b1d4fe4dcb 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -58,13 +58,15 @@ struct vm_area_struct;
 #define ___GFP_SKIP_ZERO		0x1000000u
 #define ___GFP_SKIP_KASAN_UNPOISON	0x2000000u
 #define ___GFP_SKIP_KASAN_POISON	0x4000000u
+#define ___GFP_PAGE_KASAN_TAG_RESET	0x8000000u
 #else
 #define ___GFP_SKIP_ZERO		0
 #define ___GFP_SKIP_KASAN_UNPOISON	0
 #define ___GFP_SKIP_KASAN_POISON	0
+#define ___GFP_PAGE_KASAN_TAG_RESET	0
 #endif
 #ifdef CONFIG_LOCKDEP
-#define ___GFP_NOLOCKDEP	0x8000000u
+#define ___GFP_NOLOCKDEP	0x10000000u
 #else
 #define ___GFP_NOLOCKDEP	0
 #endif
@@ -259,12 +261,13 @@ struct vm_area_struct;
 #define __GFP_SKIP_ZERO ((__force gfp_t)___GFP_SKIP_ZERO)
 #define __GFP_SKIP_KASAN_UNPOISON ((__force gfp_t)___GFP_SKIP_KASAN_UNPOISON)
 #define __GFP_SKIP_KASAN_POISON ((__force gfp_t)___GFP_SKIP_KASAN_POISON)
+#define __GFP_PAGE_KASAN_TAG_RESET ((__force gfp_t)___GFP_PAGE_KASAN_TAG_RESET)
 
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (27 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (28 + IS_ENABLED(CONFIG_LOCKDEP))
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 
 /**
@@ -343,7 +346,8 @@ struct vm_area_struct;
 #define GFP_NOWAIT	(__GFP_KSWAPD_RECLAIM)
 #define GFP_NOIO	(__GFP_RECLAIM)
 #define GFP_NOFS	(__GFP_RECLAIM | __GFP_IO)
-#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
+#define GFP_USER	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL | \
+			 __GFP_PAGE_KASAN_TAG_RESET)
 #define GFP_DMA		__GFP_DMA
 #define GFP_DMA32	__GFP_DMA32
 #define GFP_HIGHUSER	(GFP_USER | __GFP_HIGHMEM)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e42038382c1..f9018a84f4e3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2382,6 +2382,7 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	bool init = !want_init_on_free() && want_init_on_alloc(gfp_flags) &&
 			!should_skip_init(gfp_flags);
 	bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
+	int i;
 
 	set_page_private(page, 0);
 	set_page_refcounted(page);
@@ -2407,8 +2408,6 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	 * should be initialized as well).
 	 */
 	if (init_tags) {
-		int i;
-
 		/* Initialize both memory and tags. */
 		for (i = 0; i != 1 << order; ++i)
 			tag_clear_highpage(page + i);
@@ -2430,7 +2429,11 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
 	/* Propagate __GFP_SKIP_KASAN_POISON to page flags. */
 	if (kasan_hw_tags_enabled() && (gfp_flags & __GFP_SKIP_KASAN_POISON))
 		SetPageSkipKASanPoison(page);
-
+	/* if match-all page address required, reset the tag */
+	if (gfp_flags & __GFP_PAGE_KASAN_TAG_RESET) {
+		for (i = 0; i != 1 << order; ++i)
+			page_kasan_tag_reset(page + i);
+	};
 	set_page_owner(page, order, gfp_flags);
 	page_table_check_alloc(page, order);
 }
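
Since GFP_USER now carries __GFP_PAGE_KASAN_TAG_RESET, callers that
allocate pages which may later be mapped to user space with PROT_MTE get
the match-all tag without further changes. A rough usage sketch under
that assumption (kernel context; 0xff is the match-all value that
page_kasan_tag_reset() stores):

	/*
	 * Sketch only: a GFP_USER allocation leaves the KASAN tag in
	 * page->flags as the match-all value, so a later page_to_virt()
	 * of this page cannot produce a pointer whose tag mismatches
	 * tags written by user space through a PROT_MTE mapping.
	 */
	struct page *page = alloc_page(GFP_USER);	/* implies __GFP_PAGE_KASAN_TAG_RESET */

	if (page)
		WARN_ON(page_kasan_tag(page) != 0xff);	/* reset done in post_alloc_hook() */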
From patchwork Tue May 17 18:09:45 2022
X-Patchwork-Id: 12852830
From: Catalin Marinas
To: Andrey Ryabinin, Andrey Konovalov
Cc: Will Deacon, Vincenzo Frascino, Peter Collingbourne,
    kasan-dev@googlegroups.com, linux-mm@kvack.org,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH 3/3] arm64: kasan: Revert "arm64: mte: reset the page tag in page->flags"
Date: Tue, 17 May 2022 19:09:45 +0100
Message-Id: <20220517180945.756303-4-catalin.marinas@arm.com>
In-Reply-To: <20220517180945.756303-1-catalin.marinas@arm.com>
References: <20220517180945.756303-1-catalin.marinas@arm.com>

This reverts commit e5b8d9218951e59df986f627ec93569a0d22149b.
On a system with MTE and KASAN_HW_TAGS enabled, when a page is allocated
kasan_unpoison_pages() sets a random tag and saves it in page->flags.
page_to_virt() re-creates the correct tagged pointer.

If such page is mapped in user-space with PROT_MTE, the architecture
code will set the tag to 0 and a subsequent page_to_virt() dereference
will fault.

The reverted commit aimed to fix this by resetting the tag in
page->flags so that it is 0xff (match-all, not faulting). However,
setting the tags and flags can race with another CPU reading the flags
(page_to_virt()) and barriers can't help:

	P0 (mte_sync_page_tags):	P1 (memcpy from virt_to_page):
					  Rflags!=0xff
	  Wflags=0xff
	  DMB (doesn't help)
	  Wtags=0
					  Rtags=0 // fault

Since clearing the flags in the arch code doesn't help, revert the patch
altogether. In addition, remove the page_kasan_tag_reset() call in
tag_clear_highpage() since the core kasan code should take care of
resetting the page tag.

Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Vincenzo Frascino
Cc: Andrey Konovalov
Cc: Peter Collingbourne
---
 arch/arm64/kernel/hibernate.c | 5 -----
 arch/arm64/kernel/mte.c       | 9 ---------
 arch/arm64/mm/copypage.c      | 9 ---------
 arch/arm64/mm/fault.c         | 1 -
 arch/arm64/mm/mteswap.c       | 9 ---------
 5 files changed, 33 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 6328308be272..7754ef328657 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -300,11 +300,6 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
-		/*
-		 * It is not required to invoke page_kasan_tag_reset(page)
-		 * at this point since the tags stored in page->flags are
-		 * already restored.
-		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 78b3e0f8e997..90994aca54f3 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -47,15 +47,6 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;
 
-	page_kasan_tag_reset(page);
-	/*
-	 * We need smp_wmb() in between setting the flags and clearing the
-	 * tags because if another thread reads page->flags and builds a
-	 * tagged address out of it, there is an actual dependency to the
-	 * memory access, but on the current thread we do not guarantee that
-	 * the new page->flags are visible before the tags were updated.
-	 */
-	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index b5447e53cd73..70a71f38b6a9 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,15 +23,6 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
-		page_kasan_tag_reset(to);
-		/*
-		 * We need smp_wmb() in between setting the flags and clearing the
-		 * tags because if another thread reads page->flags and builds a
-		 * tagged address out of it, there is an actual dependency to the
-		 * memory access, but on the current thread we do not guarantee that
-		 * the new page->flags are visible before the tags were updated.
-		 */
-		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 77341b160aca..f2f21cd6d43f 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -926,6 +926,5 @@ struct page *alloc_zeroed_user_highpage_movable(struct vm_area_struct *vma,
 void tag_clear_highpage(struct page *page)
 {
 	mte_zero_clear_page_tags(page_address(page));
-	page_kasan_tag_reset(page);
 	set_bit(PG_mte_tagged, &page->flags);
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a9e50e930484..4334dec93bd4 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,15 +53,6 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
-	page_kasan_tag_reset(page);
-	/*
-	 * We need smp_wmb() in between setting the flags and clearing the
-	 * tags because if another thread reads page->flags and builds a
-	 * tagged address out of it, there is an actual dependency to the
-	 * memory access, but on the current thread we do not guarantee that
-	 * the new page->flags are visible before the tags were updated.
-	 */
-	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
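
The race in the message above is easier to see with the two CPUs written
out. A rough sketch reusing the helpers removed by this revert (not a
literal execution trace; the buffer and copy size are illustrative only):

	/* CPU0: mte_sync_page_tags() before this revert */
	page_kasan_tag_reset(page);		 /* Wflags = 0xff */
	smp_wmb();				 /* orders CPU0's own stores only */
	mte_clear_page_tags(page_address(page)); /* Wtags = 0 */

	/* CPU1: concurrently builds a tagged pointer and reads through it */
	char buf[64];
	void *src = page_to_virt(page);		 /* Rflags != 0xff if read before CPU0's store */
	memcpy(buf, src, sizeof(buf));		 /* Rtags = 0 -> tag check fault */

The barrier on CPU0 cannot retract a page->flags value that CPU1 has
already read, which is why the tag has to be correct from allocation
time (patch 2) rather than fixed up here.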