[v9,28/44] arm64: mte: Reset the page tag in page->flags

Message ID 4a7819f8942922451e8075d7003f7df357919dfc.1605046192.git.andreyknvl@google.com
State New, archived
Series [v9,01/44] kasan: drop unnecessary GPL text from comment headers

Commit Message

Andrey Konovalov Nov. 10, 2020, 10:10 p.m. UTC
From: Vincenzo Frascino <vincenzo.frascino@arm.com>

The hardware tag-based KASAN mode, for compatibility with the other
modes, stores the tag associated with a page in page->flags. Because of
this, the kernel faults on access when it allocates a page with an
initial tag and the user subsequently changes the tags.

Reset the tag associated with a page by the kernel in all the relevant
places to prevent kernel faults on access.
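
For context, page_kasan_tag_reset() rewrites the KASAN tag bits kept in
page->flags back to the match-all kernel tag 0xff. A simplified sketch,
close to the generic <linux/mm.h> helpers of the time (the exact masks
and shifts are configuration-dependent):

	static inline void page_kasan_tag_set(struct page *page, u8 tag)
	{
		page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
		page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
	}

	static inline void page_kasan_tag_reset(struct page *page)
	{
		/* 0xff is the match-all tag used for kernel pointers */
		page_kasan_tag_set(page, 0xff);
	}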

Note: an alternative to this approach could be to modify page_to_virt().
That, however, could end up being racy: if one CPU checks the
PG_mte_tagged bit and decides that the page is not tagged, while another
CPU maps the same page with PROT_MTE so that it becomes tagged, the
subsequent kernel access would fail.
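
A hypothetical sketch of that rejected alternative, purely for
illustration (__lm_page_to_virt() stands in for the untagged linear-map
address computation; this is not from an actual patch):

	#define page_to_virt(page)					\
	({								\
		/* Match-all tag for user-tagged pages, otherwise */	\
		/* keep the KASAN tag stored in page->flags.      */	\
		u8 __tag = test_bit(PG_mte_tagged, &(page)->flags) ?	\
			   0xff : page_kasan_tag(page);			\
		/* Racy: PG_mte_tagged may be set by another CPU  */	\
		/* between the test above and the access below.   */	\
		(void *)__tag_set(__lm_page_to_virt(page), __tag);	\
	})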

Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
---
Change-Id: I8451d438bb63364de2a3e68041e3a27866921d4e
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 arch/arm64/kernel/mte.c       | 9 +++++++++
 arch/arm64/mm/copypage.c      | 1 +
 arch/arm64/mm/mteswap.c       | 9 +++++++++
 4 files changed, 24 insertions(+)

Comments

Catalin Marinas Nov. 12, 2020, 9:31 a.m. UTC | #1
On Tue, Nov 10, 2020 at 11:10:25PM +0100, Andrey Konovalov wrote:
> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
> index 70a71f38b6a9..f0efa4847e2f 100644
> --- a/arch/arm64/mm/copypage.c
> +++ b/arch/arm64/mm/copypage.c
> @@ -23,6 +23,7 @@ void copy_highpage(struct page *to, struct page *from)
>  
>  	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
>  		set_bit(PG_mte_tagged, &to->flags);
> +		page_kasan_tag_reset(to);
>  		mte_copy_page_tags(kto, kfrom);

Any reason why this doesn't have an smp_wmb() between resetting the tags
and copying them into kto?
Vincenzo Frascino Nov. 12, 2020, 9:39 a.m. UTC | #2
On 11/12/20 9:31 AM, Catalin Marinas wrote:
> On Tue, Nov 10, 2020 at 11:10:25PM +0100, Andrey Konovalov wrote:
>> diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
>> index 70a71f38b6a9..f0efa4847e2f 100644
>> --- a/arch/arm64/mm/copypage.c
>> +++ b/arch/arm64/mm/copypage.c
>> @@ -23,6 +23,7 @@ void copy_highpage(struct page *to, struct page *from)
>>  
>>  	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
>>  		set_bit(PG_mte_tagged, &to->flags);
>> +		page_kasan_tag_reset(to);
>>  		mte_copy_page_tags(kto, kfrom);
> 
> Any reason why this doesn't have an smp_wmb() between resetting the tags
> and copying them into kto?
> 

Yes, the reason is that I am not sure why it disappeared from the submitted patch ;)
I am going to respin the patch.
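
For reference, the respun hunk would presumably mirror the other two
call sites, i.e. something like:

	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
		set_bit(PG_mte_tagged, &to->flags);
		page_kasan_tag_reset(to);
		/*
		 * Make the reset page->flags visible before the tags
		 * are overwritten, as in mte_sync_page_tags().
		 */
		smp_wmb();
		mte_copy_page_tags(kto, kfrom);
	}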
Patch

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 42003774d261..9c9f47e9f7f4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -371,6 +371,11 @@  static void swsusp_mte_restore_tags(void)
 		unsigned long pfn = xa_state.xa_index;
 		struct page *page = pfn_to_online_page(pfn);
 
+		/*
+		 * It is not required to invoke page_kasan_tag_reset(page)
+		 * at this point since the tags stored in page->flags are
+		 * already restored.
+		 */
 		mte_restore_page_tags(page_address(page), tags);
 
 		mte_free_tag_storage(tags);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 8f99c65837fd..600b26d65b41 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -34,6 +34,15 @@  static void mte_sync_page_tags(struct page *page, pte_t *ptep, bool check_swap)
 			return;
 	}
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_clear_page_tags(page_address(page));
 }
 
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index 70a71f38b6a9..f0efa4847e2f 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -23,6 +23,7 @@  void copy_highpage(struct page *to, struct page *from)
 
 	if (system_supports_mte() && test_bit(PG_mte_tagged, &from->flags)) {
 		set_bit(PG_mte_tagged, &to->flags);
+		page_kasan_tag_reset(to);
 		mte_copy_page_tags(kto, kfrom);
 	}
 }
diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c
index c52c1847079c..9cc59696489c 100644
--- a/arch/arm64/mm/mteswap.c
+++ b/arch/arm64/mm/mteswap.c
@@ -53,6 +53,15 @@  bool mte_restore_tags(swp_entry_t entry, struct page *page)
 	if (!tags)
 		return false;
 
+	page_kasan_tag_reset(page);
+	/*
+	 * We need smp_wmb() in between setting the flags and clearing the
+	 * tags because if another thread reads page->flags and builds a
+	 * tagged address out of it, there is an actual dependency to the
+	 * memory access, but on the current thread we do not guarantee that
+	 * the new page->flags are visible before the tags were updated.
+	 */
+	smp_wmb();
 	mte_restore_page_tags(page_address(page), tags);
 
 	return true;