Message ID | 20240616-b4-mips-ipi-improvements-v1-2-e332687f1692@flygoat.com (mailing list archive) |
---|---|
State | Superseded |
Series | MIPS: IPI Improvements |
On Sun, Jun 16, 2024 at 10:03:06PM +0100, Jiaxun Yang wrote:
> IPI interrupts need to be enabled when a new CPU coming up.
>
> Manage them as percpu_devid interrupts and invoke enable/disable
> functions at appropriate time to perform enabling as required,
> similar to what RISC-V and Arm doing.
>
> Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> ---
>  arch/mips/include/asm/ipi.h | 11 +++++++++++
>  arch/mips/kernel/smp.c      | 26 ++++++++++++++++++++++++--
>  2 files changed, 35 insertions(+), 2 deletions(-)
>
> diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
> index df7a0ac4227a..88b507339f51 100644
> --- a/arch/mips/include/asm/ipi.h
> +++ b/arch/mips/include/asm/ipi.h
> @@ -29,6 +29,17 @@ int mips_smp_ipi_allocate(const struct cpumask *mask);
>   * Return 0 on success.
>   */
>  int mips_smp_ipi_free(const struct cpumask *mask);
> +
> +void mips_smp_ipi_enable(void);
> +void mips_smp_ipi_disable(void);
> +#else
> +static inline void mips_smp_ipi_enable(void)
> +{
> +}
> +
> +static inline void mips_smp_ipi_disable(void)
> +{
> +}
>  #endif /* CONFIG_GENERIC_IRQ_IPI */
>  #endif /* CONFIG_SMP */
>  #endif
> diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
> index a6cf6444533e..710644d47106 100644
> --- a/arch/mips/kernel/smp.c
> +++ b/arch/mips/kernel/smp.c
> @@ -186,6 +186,7 @@ irq_handler_t ipi_handlers[IPI_MAX] __read_mostly = {
>  };
>
>  #ifdef CONFIG_GENERIC_IRQ_IPI
> +static DEFINE_PER_CPU_READ_MOSTLY(int, ipi_dummy_dev);
>  static int ipi_virqs[IPI_MAX] __ro_after_init;
>  static struct irq_desc *ipi_desc[IPI_MAX] __read_mostly;
>
> @@ -225,13 +226,29 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask,
>  	local_irq_restore(flags);
>  }
>
> +void mips_smp_ipi_enable(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < IPI_MAX; i++)
> +		enable_percpu_irq(ipi_virqs[i], IRQ_TYPE_NONE);
> +}
> +
> +void mips_smp_ipi_disable(void)
> +{
> +	int i;
> +
> +	for (i = 0; i < IPI_MAX; i++)
> +		disable_percpu_irq(ipi_virqs[i]);
> +}
> +

there is no user of mips_smp_ipi_disable() (at least I didn't see one),
so do we need this patch at all ? Just looking like ARM or RiscV isn't
a justification for code churn.

Thomas.
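For context on the enable half of the patch, which is not in dispute here: genirq keeps a percpu_devid interrupt masked on every CPU until that CPU enables its own copy, so requesting the IRQ once on the boot CPU is not enough. Below is a minimal sketch of that generic pattern, assuming nothing beyond the standard genirq API; my_ipi_virq, my_ipi_dev, my_ipi_handler and my_ipi_setup are illustrative names, not code from this series.

```c
#include <linux/bug.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>

/* Illustrative names only -- not the actual MIPS implementation. */
static DEFINE_PER_CPU(int, my_ipi_dev);	/* request_percpu_irq() needs a per-CPU cookie */
static int my_ipi_virq;

static irqreturn_t my_ipi_handler(int irq, void *dev_id)
{
	/* dev_id points at the receiving CPU's copy of my_ipi_dev */
	return IRQ_HANDLED;
}

static void my_ipi_setup(void)
{
	/* Done once, e.g. from an early initcall on the boot CPU */
	irq_set_percpu_devid(my_ipi_virq);
	BUG_ON(request_percpu_irq(my_ipi_virq, my_ipi_handler,
				  "my-ipi", &my_ipi_dev));

	/*
	 * A percpu_devid IRQ stays masked on a CPU until that CPU calls
	 * enable_percpu_irq() itself -- hence the per-CPU enable hook.
	 */
	enable_percpu_irq(my_ipi_virq, IRQ_TYPE_NONE);
}
```

Because the enable call only affects the CPU it runs on, the patch has to invoke mips_smp_ipi_enable() both in the boot-CPU init path and from start_secondary().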
On July 3, 2024, at 11:03 PM, Thomas Bogendoerfer wrote:
[...]
>
> there is no user of mips_smp_ipi_disable() (at least I didn't see one),
> so do we need this patch at all ? Just looking like ARM or RiscV isn't
> a justification for code churn.

Hi Thomas,

The per-cpu enablement process is necessary for IPI_MUX and
my upcoming IPI driver.

The disablement, I'm not really sure, maybe it's a good idea to call it at
platform's __cpu_disable to prevent spurious IPI after IRQ migration.

Thanks
- Jiaxun

>
> Thomas.
>
> --
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea. [ RFC1925, 2.3 ]
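To make the __cpu_disable suggestion concrete, here is a purely hypothetical sketch of where such a call could sit in a platform's CPU-offline path. The function name my_plat_cpu_disable and the surrounding steps are assumptions for illustration only; they are not part of this series or of any existing MIPS platform code.

```c
#include <linux/cpumask.h>
#include <linux/smp.h>

#include <asm/ipi.h>

/* Hypothetical platform hotplug hook -- name and ordering are assumed. */
static int my_plat_cpu_disable(void)
{
	unsigned int cpu = smp_processor_id();

	set_cpu_online(cpu, false);

	/*
	 * Mask this CPU's IPI lines so that an IPI raised while other
	 * interrupts are being migrated away cannot land here.
	 */
	mips_smp_ipi_disable();

	/* ...platform-specific IRQ migration and teardown would follow... */

	return 0;
}
```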
On Thu, Jul 04, 2024 at 04:08:09AM +0800, Jiaxun Yang wrote:
>
>
> On July 3, 2024, at 11:03 PM, Thomas Bogendoerfer wrote:
> [...]
> >
> > there is no user of mips_smp_ipi_disable() (at least I didn't see one),
> > so do we need this patch at all ? Just looking like ARM or RiscV isn't
> > a justification for code churn.
>
> Hi Thomas,
>
> The per-cpu enablement process is necessary for IPI_MUX and
> my upcoming IPI driver.
>
> The disablement, I'm not really sure, maybe it's a good idea to call it at
> platform's __cpu_disable to prevent spurious IPI after IRQ migration.

don't add dead code, so drop mips_smp_ipi_disable() for now.

Thomas.
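If the series is respun as requested, the asm/ipi.h addition would presumably shrink to just the enable helper. The block below is a guess at what that trimmed declaration could look like, not the actual v2 of the series.

```c
/* Sketch of a trimmed asm/ipi.h addition with the unused helper dropped. */
#ifdef CONFIG_GENERIC_IRQ_IPI
void mips_smp_ipi_enable(void);
#else
static inline void mips_smp_ipi_enable(void)
{
}
#endif /* CONFIG_GENERIC_IRQ_IPI */
```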
diff --git a/arch/mips/include/asm/ipi.h b/arch/mips/include/asm/ipi.h
index df7a0ac4227a..88b507339f51 100644
--- a/arch/mips/include/asm/ipi.h
+++ b/arch/mips/include/asm/ipi.h
@@ -29,6 +29,17 @@ int mips_smp_ipi_allocate(const struct cpumask *mask);
  * Return 0 on success.
  */
 int mips_smp_ipi_free(const struct cpumask *mask);
+
+void mips_smp_ipi_enable(void);
+void mips_smp_ipi_disable(void);
+#else
+static inline void mips_smp_ipi_enable(void)
+{
+}
+
+static inline void mips_smp_ipi_disable(void)
+{
+}
 #endif /* CONFIG_GENERIC_IRQ_IPI */
 #endif /* CONFIG_SMP */
 #endif
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index a6cf6444533e..710644d47106 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -186,6 +186,7 @@ irq_handler_t ipi_handlers[IPI_MAX] __read_mostly = {
 };
 
 #ifdef CONFIG_GENERIC_IRQ_IPI
+static DEFINE_PER_CPU_READ_MOSTLY(int, ipi_dummy_dev);
 static int ipi_virqs[IPI_MAX] __ro_after_init;
 static struct irq_desc *ipi_desc[IPI_MAX] __read_mostly;
 
@@ -225,13 +226,29 @@ void mips_smp_send_ipi_mask(const struct cpumask *mask,
 	local_irq_restore(flags);
 }
 
+void mips_smp_ipi_enable(void)
+{
+	int i;
+
+	for (i = 0; i < IPI_MAX; i++)
+		enable_percpu_irq(ipi_virqs[i], IRQ_TYPE_NONE);
+}
+
+void mips_smp_ipi_disable(void)
+{
+	int i;
+
+	for (i = 0; i < IPI_MAX; i++)
+		disable_percpu_irq(ipi_virqs[i]);
+}
+
 static void smp_ipi_init_one(unsigned int virq, const char *name,
 			     irq_handler_t handler)
 {
 	int ret;
 
-	irq_set_handler(virq, handle_percpu_irq);
-	ret = request_irq(virq, handler, IRQF_PERCPU, name, NULL);
+	irq_set_percpu_devid(virq);
+	ret = request_percpu_irq(virq, handler, "IPI", &ipi_dummy_dev);
 	BUG_ON(ret);
 }
 
@@ -343,6 +360,9 @@ static int __init mips_smp_ipi_init(void)
 		return -ENODEV;
 	}
 
+	/* Enable IPI for Boot CPU */
+	mips_smp_ipi_enable();
+
 	return 0;
 }
 early_initcall(mips_smp_ipi_init);
@@ -383,6 +403,8 @@ asmlinkage void start_secondary(void)
 	synchronise_count_slave(cpu);
 
+	mips_smp_ipi_enable();
+
 	/* The CPU is running and counters synchronised, now mark it online */
 	set_cpu_online(cpu, true);
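One detail of the diff worth calling out is the new ipi_dummy_dev: request_percpu_irq() expects a non-NULL per-CPU dev_id, and the percpu_devid flow passes each invocation a pointer to the receiving CPU's slot of that variable, so a throwaway per-CPU int satisfies the API even though the MIPS IPI handlers never look at dev_id. A small illustrative handler (not taken from the patch) showing what arrives in dev_id:

```c
#include <linux/interrupt.h>
#include <linux/sched.h>

/* Illustrative only -- shows what a percpu_devid handler receives. */
static irqreturn_t example_resched_ipi(int irq, void *dev_id)
{
	/*
	 * dev_id is the receiving CPU's instance of the per-CPU variable
	 * handed to request_percpu_irq() (ipi_dummy_dev in the patch);
	 * the existing MIPS handlers simply ignore it.
	 */
	scheduler_ipi();	/* what the resched IPI handler does */
	return IRQ_HANDLED;
}
```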
IPI interrupts need to be enabled when a new CPU comes up.

Manage them as percpu_devid interrupts and invoke the enable/disable
functions at the appropriate time to perform enabling as required,
similar to what RISC-V and Arm do.

Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
---
 arch/mips/include/asm/ipi.h | 11 +++++++++++
 arch/mips/kernel/smp.c      | 26 ++++++++++++++++++++++++--
 2 files changed, 35 insertions(+), 2 deletions(-)