From patchwork Fri Jun 10 15:21:41 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 12877711
From: Catalin Marinas
To: Andrey Ryabinin, Andrey Konovalov
Cc: Will Deacon, Vincenzo Frascino, Peter Collingbourne,
 kasan-dev@googlegroups.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 4/4] arm64: kasan: Revert "arm64: mte: reset the page tag
 in page->flags"
Date: Fri, 10 Jun 2022 16:21:41 +0100
Message-Id: <20220610152141.2148929-5-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220610152141.2148929-1-catalin.marinas@arm.com>
References: <20220610152141.2148929-1-catalin.marinas@arm.com>

This reverts commit e5b8d9218951e59df986f627ec93569a0d22149b.

Pages mapped in user-space with PROT_MTE have the allocation tags
either zeroed or copied/restored to some user values.
In order for the kernel to access such pages via page_address(),
resetting the tag in page->flags was necessary. This tag resetting was
deferred to set_pte_at() -> mte_sync_page_tags() but it can race with
another CPU reading the flags (via page_to_virt()):

  P0 (mte_sync_page_tags):		P1 (memcpy from virt_to_page):
					  Rflags!=0xff
  Wflags=0xff
  DMB (doesn't help)
  Wtags=0
					  Rtags=0	// fault

Since now the post_alloc_hook() function resets the page->flags tag
when unpoisoning is skipped for user pages (including the
__GFP_ZEROTAGS case), revert the arm64 commit calling
page_kasan_tag_reset().

Signed-off-by: Catalin Marinas
Cc: Will Deacon
Cc: Vincenzo Frascino
Cc: Andrey Konovalov
Cc: Peter Collingbourne
Acked-by: Andrey Konovalov
Reviewed-by: Vincenzo Frascino
---
 arch/arm64/kernel/hibernate.c | 5 -----
 arch/arm64/kernel/mte.c       | 9 ---------
 arch/arm64/mm/copypage.c      | 9 ---------
 arch/arm64/mm/mteswap.c       | 9 ---------
 4 files changed, 32 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 2e248342476e..af5df48ba915 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -300,11 +300,6 @@ static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
-		/*
-		 * It is not required to invoke page_kasan_tag_reset(page)
-		 * at this point since the tags stored in page->flags are
-		 * already restored.
-		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 57b30bcf9f21..7ba4d6fd1f72 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -48,15 +48,6 @@ static void mte_sync_page_tags(struct page *page, pte_t old_pte,
 	if (!pte_is_tagged)
 		return;
 
-	page_kasan_tag_reset(page);
-	/*
-	 * We need smp_wmb() in between setting the flags and clearing the
-	 * tags because if another thread reads page->flags and builds a
-	 * tagged address out of it, there is an actual dependency to the
-	 * memory access, but on the current thread we do not guarantee that
-	 * the new page->flags are visible before the tags were updated.
-	 */
-	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 0dea80bf6de4..24913271e898 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,15 +23,6 @@ void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
-		page_kasan_tag_reset(to);
-		/*
-		 * We need smp_wmb() in between setting the flags and clearing the
-		 * tags because if another thread reads page->flags and builds a
-		 * tagged address out of it, there is an actual dependency to the
-		 * memory access, but on the current thread we do not guarantee that
-		 * the new page->flags are visible before the tags were updated.
-		 */
-		smp_wmb();
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index a9e50e930484..4334dec93bd4 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,15 +53,6 @@ bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
-	page_kasan_tag_reset(page);
-	/*
-	 * We need smp_wmb() in between setting the flags and clearing the
-	 * tags because if another thread reads page->flags and builds a
-	 * tagged address out of it, there is an actual dependency to the
-	 * memory access, but on the current thread we do not guarantee that
-	 * the new page->flags are visible before the tags were updated.
-	 */
-	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;
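
[Editor's note: the following is a purely illustrative, user-space C sketch of
the P0/P1 interleaving described in the commit message above; it is not part
of the patch. All names in it (fake_page, kasan_tag, mte_tag, p0_sync_tags)
are stand-ins invented for the example and are not kernel APIs. It replays,
step by step, how P1 samples the stale KASAN tag from page->flags before P0
resets it, which is why the barrier between P0's two stores cannot prevent
the mismatch once the memory tags are cleared.]

#include <stdatomic.h>
#include <stdio.h>

struct fake_page {
	atomic_uchar kasan_tag;	/* stand-in for the KASAN tag bits in page->flags */
	atomic_uchar mte_tag;	/* stand-in for the MTE allocation tag in memory */
};

/* Roughly the pattern the reverted mte_sync_page_tags() used on P0. */
static void p0_sync_tags(struct fake_page *page)
{
	/* Wflags=0xff: reset the tag in page->flags */
	atomic_store_explicit(&page->kasan_tag, 0xff, memory_order_relaxed);
	/* DMB: orders P0's own stores, but cannot undo P1's earlier read */
	atomic_thread_fence(memory_order_release);
	/* Wtags=0: clear the allocation tags in memory */
	atomic_store_explicit(&page->mte_tag, 0, memory_order_relaxed);
}

int main(void)
{
	struct fake_page page = { 0x3a, 0x3a };	/* matching tags to start with */

	/* P1: Rflags!=0xff -- builds a tagged address from the stale flags */
	unsigned char p1_addr_tag =
		atomic_load_explicit(&page.kasan_tag, memory_order_relaxed);

	/* P0 runs in between: resets page->flags, then clears the tags */
	p0_sync_tags(&page);

	/* P1: Rtags=0 -- on real hardware this mismatched access would fault;
	 * here we only detect the mismatch. */
	unsigned char mem_tag =
		atomic_load_explicit(&page.mte_tag, memory_order_relaxed);
	if (p1_addr_tag != mem_tag)
		printf("tag mismatch (%#x vs %#x): the fault this revert avoids\n",
		       (unsigned)p1_addr_tag, (unsigned)mem_tag);
	return 0;
}

[No barrier placed in P0 can fix this ordering, since P1 has already read the
old flags; the commit message's point is that the tag in page->flags has to
be reset before the page is visible to other CPUs, which the earlier change
to post_alloc_hook() in this series now does at allocation time.]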