
[6/7] xen/arm: mm: Add missing ISB in xen_pt_update()

Message ID 20230619170115.81398-7-julien@xen.org (mailing list archive)
State New, archived
Series xen/arm: Add some missing ISBs after updating the PTEs

Commit Message

Julien Grall June 19, 2023, 5:01 p.m. UTC
From: Julien Grall <jgrall@amazon.com>

Per the Arm Arm (Armv7 DDI 0406C.d A3.8.3 and Armv8 DDI 0487J.a B2.3.12):

"The DMB and DSB memory barriers affect reads and writes to the memory
system generated by load/store instructions and data or unified cache
maintenance operations being executed by the processor. Instruction
fetches or accesses caused by a hardware translation table access are
not explicit accesses."

Note that the second sentence is not part of the newer Armv8 spec, but the
interpretation is not much different.

The updated entry will not be used until xen_pt_update() completes.
So rather than adding the ISB after write_pte() in create_xen_table()
and xen_pt_update_entry(), add it in xen_pt_update().

Also document the reasoning behind the deferral after each write_pte() call.

Fixes: 07d11f63d03e ("xen/arm: mm: Avoid flushing the TLBs when mapping are inserted")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/mm.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)
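
For illustration, the deferral described in the commit message can be sketched in a
few lines of AArch64-flavoured C. This is not Xen's actual write_pte() or
xen_pt_update() code; the helper names, signatures and the exact barrier option are
assumptions. The point is that the DSB issued with each PTE write only orders the
store towards the memory system, while one ISB at the end of the whole update is
enough to ensure those DSBs have completed before the new mapping may be used.

#include <stdint.h>

/*
 * Hypothetical stand-in for write_pte(): store the entry, then a DSB so
 * the write reaches the memory system. No ISB here; it is deferred to
 * the caller, mirroring the patch.
 */
static inline void sketch_write_pte(uint64_t *entry, uint64_t pte)
{
    *(volatile uint64_t *)entry = pte;
    asm volatile ( "dsb ishst" ::: "memory" );
}

/*
 * Hypothetical stand-in for xen_pt_update(): write all the entries,
 * then issue a single ISB to ensure the DSBs in the writes above have
 * completed before the mapping is used.
 */
static void sketch_pt_update(uint64_t *entries, const uint64_t *ptes,
                             unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
        sketch_write_pte(&entries[i], ptes[i]);

    asm volatile ( "isb" ::: "memory" );
}

In the real code, the non-insertion paths flush the TLBs unconditionally instead, as
the existing comment in xen_pt_update() explains.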

Comments

Henry Wang June 20, 2023, 3:07 a.m. UTC | #1
Hi Julien,

> -----Original Message-----
> Subject: [PATCH 6/7] xen/arm: mm: Add missing ISB in xen_pt_update()
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Per the Arm Arm (Armv7 DDI 0406C.d A3.8.3 and Armv8 DDI 0487J.a B2.3.12):
> 
> "The DMB and DSB memory barriers affect reads and writes to the memory
> system generated by load/store instructions and data or unified cache
> maintenance operations being executed by the processor. Instruction
> fetches or accesses caused by a hardware translation table access are
> not explicit accesses."
> 
> Note that the second sentence is not part of the newer Armv8 spec, but the
> interpretation is not much different.
> 
> The updated entry will not be used until xen_pt_update() completes.
> So rather than adding the ISB after write_pte() in create_xen_table()
> and xen_pt_update_entry(), add it in xen_pt_update().
> 
> Also document the reasoning behind the deferral after each write_pte() call.
> 
> Fixes: 07d11f63d03e ("xen/arm: mm: Avoid flushing the TLBs when mapping
> are inserted")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Henry Wang <Henry.Wang@arm.com>

I've also tested this patch on top of today's staging using our internal CI, and this
patch looks good, so:

Tested-by: Henry Wang <Henry.Wang@arm.com>

Kind regards,
Henry
Bertrand Marquis July 4, 2023, 2:54 p.m. UTC | #2
Hi Julien,

> On 19 Jun 2023, at 19:01, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Per the Arm Arm (Armv7 DDI 0406C.d A3.8.3 and Armv8 DDI 0487J.a B2.3.12):
> 
> "The DMB and DSB memory barriers affect reads and writes to the memory
> system generated by load/store instructions and data or unified cache
> maintenance operations being executed by the processor. Instruction
> fetches or accesses caused by a hardware translation table access are
> not explicit accesses."
> 
> Note that the second sentence is not part of the newer Armv8 spec, but the
> interpretation is not much different.
> 
> The updated entry will not be used until xen_pt_update() completes.
> So rather than adding the ISB after write_pte() in create_xen_table()
> and xen_pt_update_entry(), add it in xen_pt_update().
> 
> Also document the reasoning behind the deferral after each write_pte() call.
> 
> Fixes: 07d11f63d03e ("xen/arm: mm: Avoid flushing the TLBs when mapping are inserted")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> xen/arch/arm/mm.c | 14 ++++++++++++++
> 1 file changed, 14 insertions(+)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index e460249736c3..84e652799dd2 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -779,6 +779,11 @@ static int create_xen_table(lpae_t *entry)
>     pte = mfn_to_xen_entry(mfn, MT_NORMAL);
>     pte.pt.table = 1;
>     write_pte(entry, pte);
> +    /*
> +     * No ISB here. It is deferred to xen_pt_update() as the new table
> +     * will not be used for hardware translation table access as part of
> +     * the mapping update.
> +     */
> 
>     return 0;
> }
> @@ -1017,6 +1022,10 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>     }
> 
>     write_pte(entry, pte);
> +    /*
> +     * No ISB or TLB flush here. They are deferred to xen_pt_update()
> +     * as the entry will not be used as part of the mapping update.
> +     */
> 
>     rc = 0;
> 
> @@ -1196,6 +1205,9 @@ static int xen_pt_update(unsigned long virt,
>     /*
>      * The TLBs flush can be safely skipped when a mapping is inserted
>      * as we don't allow mapping replacement (see xen_pt_check_entry()).
> +     * However, we still need an ISB to ensure any DSB in
> +     * write_pte() will complete because the mapping may be used soon
> +     * after.
>      *
>      * For all the other cases, the TLBs will be flushed unconditionally
>      * even if the mapping has failed. This is because we may have
> @@ -1204,6 +1216,8 @@ static int xen_pt_update(unsigned long virt,
>      */
>     if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
>         flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
> +    else
> +        isb();
> 
>     spin_unlock(&xen_pt_lock);
> 
> -- 
> 2.40.1
>
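
The condition touched by the last hunk above can be read as follows: only the plain
insertion of a valid mapping skips the TLB flush and relies on the new isb(); every
other operation keeps the unconditional flush. A minimal sketch with hypothetical
names (this is not an actual Xen helper):

#include <stdbool.h>

/*
 * Hypothetical helper mirroring the check at the end of xen_pt_update():
 * the flush is skipped only when a present entry with a real MFN is
 * being inserted; removals and modifications still take the flush path.
 */
static bool sketch_skips_tlb_flush(bool page_present, bool mfn_valid)
{
    return page_present && mfn_valid;
}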

Patch

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index e460249736c3..84e652799dd2 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -779,6 +779,11 @@  static int create_xen_table(lpae_t *entry)
     pte = mfn_to_xen_entry(mfn, MT_NORMAL);
     pte.pt.table = 1;
     write_pte(entry, pte);
+    /*
+     * No ISB here. It is deferred to xen_pt_update() as the new table
+     * will not be used for hardware translation table access as part of
+     * the mapping update.
+     */
 
     return 0;
 }
@@ -1017,6 +1022,10 @@  static int xen_pt_update_entry(mfn_t root, unsigned long virt,
     }
 
     write_pte(entry, pte);
+    /*
+     * No ISB or TLB flush here. They are deferred to xen_pt_update()
+     * as the entry will not be used as part of the mapping update.
+     */
 
     rc = 0;
 
@@ -1196,6 +1205,9 @@  static int xen_pt_update(unsigned long virt,
     /*
      * The TLBs flush can be safely skipped when a mapping is inserted
      * as we don't allow mapping replacement (see xen_pt_check_entry()).
+     * However, we still need an ISB to ensure any DSB in
+     * write_pte() will complete because the mapping may be used soon
+     * after.
      *
      * For all the other cases, the TLBs will be flushed unconditionally
      * even if the mapping has failed. This is because we may have
@@ -1204,6 +1216,8 @@  static int xen_pt_update(unsigned long virt,
      */
     if ( !((flags & _PAGE_PRESENT) && !mfn_eq(mfn, INVALID_MFN)) )
         flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
+    else
+        isb();
 
     spin_unlock(&xen_pt_lock);