
[-mm] arm64: tlbflush: Add some comments for TLB batched flushing

Message ID 20230729131448.15531-1-yangyicong@huawei.com
State New
Series [-mm] arm64: tlbflush: Add some comments for TLB batched flushing

Commit Message

Yicong Yang July 29, 2023, 1:14 p.m. UTC
From: Yicong Yang <yangyicong@hisilicon.com>

Add comments for arch_flush_tlb_batched_pending() and
arch_tlbbatch_flush() to illustrate why only a DSB is
needed.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
---
 arch/arm64/include/asm/tlbflush.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
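
For context (this flow is not part of the patch itself), the two hooks are
driven by the generic batched-unmap path roughly as follows: one TLBI is
issued per page as it is unmapped, and a single barrier at the end waits
for all of them. In this hedged sketch, everything except the two arch
hooks (the driver function, its arguments and the initialisation) is a
simplified stand-in rather than actual kernel code:

static void unmap_pages_batched(struct mm_struct *mm,
				unsigned long *uaddrs, int nr)
{
	struct arch_tlbflush_unmap_batch batch = {};	/* simplified init */
	int i;

	for (i = 0; i < nr; i++)
		/* issue one TLBI per page; no barrier yet */
		arch_tlbbatch_add_pending(&batch, mm, uaddrs[i]);

	/* a single DSB waits for every TLBI issued above to complete */
	arch_tlbbatch_flush(&batch);
}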

Comments

Catalin Marinas July 31, 2023, 10:34 a.m. UTC | #1
On Sat, Jul 29, 2023 at 09:14:48PM +0800, Yicong Yang wrote:
> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
> index 3456866c6a1d..2bad230b95b4 100644
> --- a/arch/arm64/include/asm/tlbflush.h
> +++ b/arch/arm64/include/asm/tlbflush.h
> @@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
>  	__flush_tlb_page_nosync(mm, uaddr);
>  }
>  
> +/*
> + * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
> + * synchronise all the TLBI issued by a DSB to avoid the race mentioned in

Nitpick: s/by a DSB/with a DSB/ as written it somehow reads as if the DSB
issued the TLBI.

Since the rest of the series went in via the mm tree, I assume Andrew
will pick this up as well.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks.
Alistair Popple Aug. 1, 2023, 1:10 a.m. UTC | #2
Yicong Yang <yangyicong@huawei.com> writes:

Thanks! I was reading this code the other day and it took me a while to
figure out what was going on. These comments would have been very
helpful and match my understanding, so:

Reviewed-by: Alistair Popple <apopple@nvidia.com>

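The race that the first new comment points at lives in the generic code. As
a lightly simplified sketch (condensed from mm/rmap.c around the time of
this series; the inline comments are added here for illustration), this is
the caller of the arch hook, run before mprotect/munmap and friends rely on
the page table contents:

void flush_tlb_batched_pending(struct mm_struct *mm)
{
	int batch = atomic_read(&mm->tlb_flush_batched);
	int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

	if (pending != flushed) {
		/*
		 * On arm64 this is just dsb(ish): the TLBIs were already
		 * issued when the pages were unmapped, so all that is left
		 * is to wait for them to complete.
		 */
		arch_flush_tlb_batched_pending(mm);
		/* record that the pending flushes have been carried out */
		atomic_cmpxchg(&mm->tlb_flush_batched, batch,
			       pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
	}
}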

Patch

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index 3456866c6a1d..2bad230b95b4 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -300,11 +300,26 @@ static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *b
 	__flush_tlb_page_nosync(mm, uaddr);
 }
 
+/*
+ * If mprotect/munmap/etc occurs during TLB batched flushing, we need to
+ * synchronise all the TLBIs issued with a DSB to avoid the race mentioned
+ * in flush_tlb_batched_pending().
+ */
 static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
 {
 	dsb(ish);
 }
 
+/*
+ * To support batched TLB flushing when unmapping multiple pages, we only
+ * issue the TLBI for each page in arch_tlbbatch_add_pending() and wait for
+ * completion at the end in arch_tlbbatch_flush(). Since the TLBI for each
+ * page has already been issued, only a DSB is needed to synchronise its
+ * effect on the other CPUs.
+ *
+ * This saves the time spent waiting on the DSB, compared to issuing a
+ * TLBI;DSB sequence for each page.
+ */
 static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
 {
 	dsb(ish);
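
At the instruction level, the saving described by the second comment looks
roughly like this (a hedged sketch: the real macros go through
__TLBI_VADDR() to fold the ASID and VA bits into the TLBI operand, which is
omitted here):

/* Simplified; operand encoding (ASID, VA bits) not shown. */
static inline void tlbi_user_page_nosync(unsigned long arg)
{
	/* broadcast invalidate by VA at EL1, inner shareable; no barrier */
	asm volatile("tlbi vale1is, %0" : : "r" (arg) : "memory");
}

/*
 * Unbatched: N pages cost N x (TLBI + DSB), i.e. N completion waits.
 * Batched:   N pages cost N x TLBI + 1 DSB, i.e. one wait at the end,
 *            which is exactly the lone dsb(ish) in arch_tlbbatch_flush().
 */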