
[for-4.14] xen/arm: mm: Access a PT entry before the table is unmapped

Message ID 20200607155154.15928-1-julien@xen.org (mailing list archive)
State New, archived
Series [for-4.14] xen/arm: mm: Access a PT entry before the table is unmapped

Commit Message

Julien Grall June 7, 2020, 3:51 p.m. UTC
From: Julien Grall <jgrall@amazon.com>

xen_pt_next_level() will retrieve the MFN from the entry right after the
page-table has been unmapped.

After calling xen_unmap_table(), there is no guarantee the mapping will
still be valid. Depending on the implementation, this may result in a
data abort in Xen.

Re-order the code to retrieve the MFN before the table is unmapped.

Fixes: 53abb9a1dcd9 ("xen/arm: mm: Rework Xen page-tables walk during update")
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

I spotted this issue while reworking how page-tables are mapped on Arm64
during early boot. However, the problem should already be present on Arm32.

I am actually quite amazed we haven't seen any crash on Arm32: there is
no direct map there, so the mapping will not exist after calling
xen_unmap_table().

I suspect this works because unmap_domain_page() doesn't flush the
TLBs, so the TLB likely still has the entry cached (joy!).

This patch is a candidate for Xen 4.14 and should also be backported.
---
 xen/arch/arm/mm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
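
As an illustration of the ordering the patch restores, here is a minimal,
hypothetical user-space sketch (plain C, not Xen code): mmap()/munmap()
stand in for xen_map_table()/xen_unmap_table(), and table[0] stands in for
the lpae_t entry read by lpae_get_mfn(). The only point is that the entry
must be read while the table is still mapped.

/*
 * Minimal user-space illustration of the ordering fixed by this patch.
 * This is NOT Xen code: mmap()/munmap() stand in for xen_map_table()/
 * xen_unmap_table(), and table[0] stands in for the lpae_t entry.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* "Map the page table": get a temporary mapping we will later drop. */
    uint64_t *table = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if ( table == MAP_FAILED )
        return 1;

    table[0] = 0x40000000;                  /* pretend this is a PT entry */

    /*
     * Buggy ordering (what 53abb9a1dcd9 introduced):
     *     munmap(table, 4096);
     *     uint64_t mfn = table[0];         // use after unmap, may fault
     *
     * Fixed ordering (what this patch does): read the entry first, then
     * drop the mapping.
     */
    uint64_t mfn = table[0];
    munmap(table, 4096);

    printf("value read before unmap: %#" PRIx64 "\n", mfn);
    return 0;
}

Compiling and running the buggy ordering from the comment (unmap first,
read after) will typically fault with SIGSEGV, the user-space analogue of
the data abort mentioned above.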

Comments

Paul Durrant June 8, 2020, 6:57 a.m. UTC | #1
> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 07 June 2020 16:52
> To: xen-devel@lists.xenproject.org
> Cc: paul@xen.org; Julien Grall <jgrall@amazon.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: [PATCH for-4.14] xen/arm: mm: Access a PT entry before the table is unmapped
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> xen_pt_next_level() will retrieve the MFN from the entry right after the
> page-table has been unmapped.
> 
> After calling xen_unmap_table(), there is no guarantee the mapping will
> still be valid. Depending on the implementation, this may result in a
> data abort in Xen.
> 
> Re-order the code to retrieve the MFN before the table is unmapped.
> 
> Fixes: 53abb9a1dcd9 ("xen/arm: mm: Rework Xen page-tables walk during update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 

Release-acked-by: Paul Durrant <paul@xen.org>

> ---
> 
> I spotted this issue while reworking how page-tables are mapped on Arm64
> during early boot. However, the problem should already be present on Arm32.
> 
> I am actually quite amazed we haven't seen any crash on Arm32: there is
> no direct map there, so the mapping will not exist after calling
> xen_unmap_table().
> 
> I suspect this works because unmap_domain_page() doesn't flush the
> TLBs, so the TLB likely still has the entry cached (joy!).
> 
> This patch is a candidate for Xen 4.14 and should also be backported.
> ---
>  xen/arch/arm/mm.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 1b14f4934570..9e2ff7c8005d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1036,6 +1036,7 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>  {
>      lpae_t *entry;
>      int ret;
> +    mfn_t mfn;
> 
>      entry = *table + offset;
> 
> @@ -1053,8 +1054,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>      if ( lpae_is_mapping(*entry, level) )
>          return XEN_TABLE_SUPER_PAGE;
> 
> +    mfn = lpae_get_mfn(*entry);
> +
>      xen_unmap_table(*table);
> -    *table = xen_map_table(lpae_get_mfn(*entry));
> +    *table = xen_map_table(mfn);
> 
>      return XEN_TABLE_NORMAL_PAGE;
>  }
> --
> 2.17.1
Stefano Stabellini June 8, 2020, 5:30 p.m. UTC | #2
On Sun, 7 Jun 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> xen_pt_next_level() will retrieve the MFN from the entry right after the
> page-table has been unmapped.
> 
> After calling xen_unmap_table(), there is no guarantee the mapping will
> still be valid. Depending on the implementation, this may result in a
> data abort in Xen.
> 
> Re-order the code to retrieve the MFN before the table is unmapped.
> 
> Fixes: 53abb9a1dcd9 ("xen/arm: mm: Rework Xen page-tables walk during update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> I spotted this issue while reworking how page-tables are mapped on Arm64
> during early boot. However, the problem should already be present on Arm32.
> 
> I am actually quite amazed we haven't seen any crash on Arm32: there is
> no direct map there, so the mapping will not exist after calling
> xen_unmap_table().
> 
> I suspect this works because unmap_domain_page() doesn't flush the
> TLBs, so the TLB likely still has the entry cached (joy!).

  \0/

Damn!

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> This patch is a candidate for Xen 4.14 and should also be backported.
> ---
>  xen/arch/arm/mm.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 1b14f4934570..9e2ff7c8005d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1036,6 +1036,7 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>  {
>      lpae_t *entry;
>      int ret;
> +    mfn_t mfn;
>  
>      entry = *table + offset;
>  
> @@ -1053,8 +1054,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>      if ( lpae_is_mapping(*entry, level) )
>          return XEN_TABLE_SUPER_PAGE;
>  
> +    mfn = lpae_get_mfn(*entry);
> +
>      xen_unmap_table(*table);
> -    *table = xen_map_table(lpae_get_mfn(*entry));
> +    *table = xen_map_table(mfn);
>  
>      return XEN_TABLE_NORMAL_PAGE;
>  }
> -- 
> 2.17.1
>

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 1b14f4934570..9e2ff7c8005d 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1036,6 +1036,7 @@  static int xen_pt_next_level(bool read_only, unsigned int level,
 {
     lpae_t *entry;
     int ret;
+    mfn_t mfn;
 
     entry = *table + offset;
 
@@ -1053,8 +1054,10 @@  static int xen_pt_next_level(bool read_only, unsigned int level,
     if ( lpae_is_mapping(*entry, level) )
         return XEN_TABLE_SUPER_PAGE;
 
+    mfn = lpae_get_mfn(*entry);
+
     xen_unmap_table(*table);
-    *table = xen_map_table(lpae_get_mfn(*entry));
+    *table = xen_map_table(mfn);
 
     return XEN_TABLE_NORMAL_PAGE;
 }