
[2/6] riscv: cleanup send_ipi_mask

Message ID 20190821145837.3686-3-hch@lst.de (mailing list archive)
State New, archived
Series [1/6] riscv: refactor the IPI code

Commit Message

Christoph Hellwig Aug. 21, 2019, 2:58 p.m. UTC
Use the special barriers for atomic bitops to make the intention
a little more clear, and use riscv_cpuid_to_hartid_mask instead of
open coding it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/riscv/kernel/smp.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)
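
A minimal illustration of the intended barrier pairing (the example_* names,
the receiver shape, and the xchg()-based drain are assumptions made for this
sketch, not the actual code in arch/riscv/kernel/smp.c):

#include <linux/atomic.h>
#include <linux/bitops.h>

enum example_ipi { EXAMPLE_IPI_RESCHEDULE };

static unsigned long example_pending_bits;

static void example_sender(void)
{
	/* Order stores the IPI handler must observe before set_bit(). */
	smp_mb__before_atomic();
	set_bit(EXAMPLE_IPI_RESCHEDULE, &example_pending_bits);
	/* Order set_bit() before whatever actually raises the interrupt. */
	smp_mb__after_atomic();
	/* ... trigger the IPI on the remote hart here ... */
}

static void example_receiver(void)
{
	/* xchg() is fully ordered, so data published before the sender's
	 * set_bit() is visible once the bit is observed here. */
	unsigned long ops = xchg(&example_pending_bits, 0);

	if (ops & (1UL << EXAMPLE_IPI_RESCHEDULE))
		; /* handle the request, e.g. reschedule */
}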

Comments

Atish Patra Aug. 24, 2019, 12:11 a.m. UTC | #1
On Wed, 2019-08-21 at 23:58 +0900, Christoph Hellwig wrote:
> Use the special barriers for atomic bitops to make the intention
> a little more clear, and use riscv_cpuid_to_hartid_mask instead of
> open coding it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  arch/riscv/kernel/smp.c | 16 +++++++---------
>  1 file changed, 7 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
> index 8cd730239613..2e21669aa068 100644
> --- a/arch/riscv/kernel/smp.c
> +++ b/arch/riscv/kernel/smp.c
> @@ -80,17 +80,15 @@ static void ipi_stop(void)
>  
>  static void send_ipi_mask(const struct cpumask *mask, enum ipi_message_type op)
>  {
> -	int cpuid, hartid;
>  	struct cpumask hartid_mask;
> +	int cpu;
>  
> -	cpumask_clear(&hartid_mask);
> -	mb();
> -	for_each_cpu(cpuid, mask) {
> -		set_bit(op, &ipi_data[cpuid].bits);
> -		hartid = cpuid_to_hartid_map(cpuid);
> -		cpumask_set_cpu(hartid, &hartid_mask);
> -	}
> -	mb();
> +	smp_mb__before_atomic();
> +	for_each_cpu(cpu, mask)
> +		set_bit(op, &ipi_data[cpu].bits);
> +	smp_mb__after_atomic();
> +
> +	riscv_cpuid_to_hartid_mask(mask, &hartid_mask);

Isn't that less optimized than the previous one?

This will iterate over all the CPUs set in the mask twice for every IPI sent.
For now we won't see any difference, but as RISC-V gains larger CPU counts
(hopefully one day ;) ;)), this may affect performance.

>  	sbi_send_ipi(cpumask_bits(&hartid_mask));
>  }
>
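
For reference, a single-pass variant along the lines Atish describes, setting
the request bits and building the hartid mask in the same loop as the
pre-patch code did, might look like the sketch below; it is illustrative only
and assumes the arch/riscv/kernel/smp.c context (ipi_data, cpuid_to_hartid_map()):

static void send_ipi_mask_single_pass(const struct cpumask *mask,
				      enum ipi_message_type op)
{
	struct cpumask hartid_mask;
	int cpu;

	cpumask_clear(&hartid_mask);
	smp_mb__before_atomic();
	for_each_cpu(cpu, mask) {
		/* Set the request bit and translate cpuid to hartid in one walk. */
		set_bit(op, &ipi_data[cpu].bits);
		cpumask_set_cpu(cpuid_to_hartid_map(cpu), &hartid_mask);
	}
	smp_mb__after_atomic();

	sbi_send_ipi(cpumask_bits(&hartid_mask));
}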
Christoph Hellwig Aug. 26, 2019, 11:28 a.m. UTC | #2
On Sat, Aug 24, 2019 at 12:11:15AM +0000, Atish Patra wrote:
> Isn't that less optimized than the previous one?
> 
> This will iterate over all the CPUs set in the mask twice for every IPI sent.
> For now we won't see any difference, but as RISC-V gains larger CPU counts
> (hopefully one day ;) ;)), this may affect performance.

By then we are hopefully done with using the SBI IPI code :) The native
IPI code this refactor is preparing for won't need the hartid
translation, for example.  The point of this patch isn't really to
micro-optimize, but to make the code clear and obvious.
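
Purely to illustrate that point, and not the actual driver: a native backend
indexed by Linux CPU number needs no cpuid-to-hartid mask at all. The doorbell
base, register stride, and function name below are invented for the sketch,
which otherwise assumes the smp.c context (ipi_data, enum ipi_message_type):

#include <linux/io.h>

static void __iomem *ipi_doorbell_base;	/* hypothetical per-cpu doorbells */

static void native_send_ipi_mask(const struct cpumask *mask,
				 enum ipi_message_type op)
{
	int cpu;

	smp_mb__before_atomic();
	for_each_cpu(cpu, mask)
		set_bit(op, &ipi_data[cpu].bits);
	smp_mb__after_atomic();

	/* Ring each target's doorbell directly; no hartid translation. */
	for_each_cpu(cpu, mask)
		writel(1, ipi_doorbell_base + cpu * sizeof(u32));
}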
Atish Patra Aug. 27, 2019, 6:45 p.m. UTC | #3
On Mon, 2019-08-26 at 13:28 +0200, hch@lst.de wrote:
> On Sat, Aug 24, 2019 at 12:11:15AM +0000, Atish Patra wrote:
> > Isn't that less optimized than the previous one?
> > 
> > This will iterate over all the CPUs set in the mask twice for every IPI
> > sent.
> > For now we won't see any difference, but as RISC-V gains larger CPU
> > counts (hopefully one day ;) ;)), this may affect performance.
> 
> By then we are hopefully done with using the SBI IPI code :) The native
> IPI code this refactor is preparing for won't need the hartid
> translation, for example.  The point of this patch isn't really to
> micro-optimize, but to make the code clear and obvious.

ok. Sounds good to me.

Reviewed-by: Atish Patra <atish.patra@wdc.com>
Paul Walmsley Sept. 5, 2019, 8:46 a.m. UTC | #4
On Wed, 21 Aug 2019, Christoph Hellwig wrote:

> Use the special barriers for atomic bitops to make the intention
> a little more clear, and use riscv_cpuid_to_hartid_mask instead of
> open coding it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Thanks, queued for v5.4-rc1 with Atish's Reviewed-by:.


- Paul

Patch

diff --git a/arch/riscv/kernel/smp.c b/arch/riscv/kernel/smp.c
index 8cd730239613..2e21669aa068 100644
--- a/arch/riscv/kernel/smp.c
+++ b/arch/riscv/kernel/smp.c
@@ -80,17 +80,15 @@ static void ipi_stop(void)
 
 static void send_ipi_mask(const struct cpumask *mask, enum ipi_message_type op)
 {
-	int cpuid, hartid;
 	struct cpumask hartid_mask;
+	int cpu;
 
-	cpumask_clear(&hartid_mask);
-	mb();
-	for_each_cpu(cpuid, mask) {
-		set_bit(op, &ipi_data[cpuid].bits);
-		hartid = cpuid_to_hartid_map(cpuid);
-		cpumask_set_cpu(hartid, &hartid_mask);
-	}
-	mb();
+	smp_mb__before_atomic();
+	for_each_cpu(cpu, mask)
+		set_bit(op, &ipi_data[cpu].bits);
+	smp_mb__after_atomic();
+
+	riscv_cpuid_to_hartid_mask(mask, &hartid_mask);
 	sbi_send_ipi(cpumask_bits(&hartid_mask));
 }