Message ID | 20240102220134.3229156-4-samuel.holland@sifive.com (mailing list archive) |
---|---|
State | New |
Series | riscv: ASID-related and UP-related TLB flush enhancements |
On Tue, Jan 2, 2024 at 11:01 PM Samuel Holland <samuel.holland@sifive.com> wrote:
>
> __flush_tlb_range() avoids broadcasting TLB flushes when an mm context
> is only active on the local CPU. Apply this same optimization to TLB
> flushes of kernel memory when only one CPU is online. This check can be
> constant-folded when SMP is disabled.
>
> Signed-off-by: Samuel Holland <samuel.holland@sifive.com>
> ---
>
> Changes in v4:
> - New patch for v4
>
>  arch/riscv/mm/tlbflush.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/arch/riscv/mm/tlbflush.c b/arch/riscv/mm/tlbflush.c
> index 09b03bf71e6a..2f18fe6fc4f3 100644
> --- a/arch/riscv/mm/tlbflush.c
> +++ b/arch/riscv/mm/tlbflush.c
> @@ -98,27 +98,23 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
>  {
>  	const struct cpumask *cmask;
>  	unsigned long asid = FLUSH_TLB_NO_ASID;
> -	bool broadcast;
> +	unsigned int cpu;
>
>  	if (mm) {
> -		unsigned int cpuid;
> -
>  		cmask = mm_cpumask(mm);
>  		if (cpumask_empty(cmask))
>  			return;
>
> -		cpuid = get_cpu();
> -		/* check if the tlbflush needs to be sent to other CPUs */
> -		broadcast = cpumask_any_but(cmask, cpuid) < nr_cpu_ids;
> -
>  		if (static_branch_unlikely(&use_asid_allocator))
>  			asid = atomic_long_read(&mm->context.id) & asid_mask;
>  	} else {
>  		cmask = cpu_online_mask;
> -		broadcast = true;
>  	}
>
> -	if (!broadcast) {
> +	cpu = get_cpu();
> +
> +	/* Check if the TLB flush needs to be sent to other CPUs. */
> +	if (cpumask_any_but(cmask, cpu) >= nr_cpu_ids) {
>  		local_flush_tlb_range_asid(start, size, stride, asid);
>  	} else if (riscv_use_sbi_for_rfence()) {
>  		sbi_remote_sfence_vma_asid(cmask, start, size, asid);
> @@ -132,8 +128,7 @@ static void __flush_tlb_range(struct mm_struct *mm, unsigned long start,
>  		on_each_cpu_mask(cmask, __ipi_flush_tlb_range_asid, &ftd, 1);
>  	}
>
> -	if (mm)
> -		put_cpu();
> +	put_cpu();
>  }
>
>  void flush_tlb_mm(struct mm_struct *mm)
> --
> 2.42.0
>

You can add:

Reviewed-by: Alexandre Ghiti <alexghiti@rivosinc.com>

Thanks,
Alex