
genirq: Remove warning on preemptible in prepare_percpu_nmi()

Message ID 20190620091233.22731-1-lecopzer.chen@mediatek.com (mailing list archive)
State New, archived

Commit Message

Lecopzer Chen June 20, 2019, 9:12 a.m. UTC
prepare_percpu_nmi() first acquires the descriptor lock via irq_get_desc_lock().
Whether or not preemption is enabled on entry, taking that lock disables preemption.

This simplifies the usage of prepare_percpu_nmi(): callers no longer need to
take an extra lock or explicitly call preempt_disable()/preempt_enable().

Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Julien Thierry <julien.thierry@arm.com>
Cc: YJ Chiang <yj.chiang@mediatek.com>
Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
---
 kernel/irq/manage.c | 10 ----------
 1 file changed, 10 deletions(-)

Comments

Marc Zyngier June 20, 2019, 9:40 a.m. UTC | #1
Hi Lecopzer,

On 20/06/2019 10:12, Lecopzer Chen wrote:
> prepare_percpu_nmi() first acquires the descriptor lock via irq_get_desc_lock().
> Whether or not preemption is enabled on entry, taking that lock disables preemption.
> 
> This simplifies the usage of prepare_percpu_nmi(): callers no longer need to
> take an extra lock or explicitly call preempt_disable()/preempt_enable().

I strongly disagree. If you're calling these functions *from* a
preemptible context, you've already lost, and that's what these
WARN_ON() calls are warning you about.

These functions can only be called from a context that is naturally
preemption free, such as a hotplug notifier. Otherwise, you have no idea
which CPU you're configuring the NMI on, and I cannot see that as a good
thing.
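
The pattern Marc describes can be sketched as follows. This is a
kernel-style sketch, not runnable on its own: prepare_percpu_nmi(),
enable_percpu_nmi(), disable_percpu_nmi(), teardown_percpu_nmi() and
cpuhp_setup_state() are the real APIs, while mydrv_*, mydrv_irq and
CPUHP_AP_MYDRV_STARTING are hypothetical names for illustration.

```c
/*
 * Kernel-style sketch (hypothetical driver): prepare the per-CPU NMI
 * from a CPU hotplug "starting" callback.  Callbacks in the STARTING
 * section run on the CPU being brought online with interrupts
 * disabled, so preemptible() is false and the per-CPU operation is
 * guaranteed to target the right CPU.
 */
static unsigned int mydrv_irq;	/* percpu_devid IRQ, set up at probe time */

static int mydrv_nmi_starting_cpu(unsigned int cpu)
{
	int ret;

	/* Guaranteed to execute on @cpu, so this configures the right CPU. */
	ret = prepare_percpu_nmi(mydrv_irq);
	if (ret)
		return ret;

	enable_percpu_nmi(mydrv_irq, IRQ_TYPE_NONE);
	return 0;
}

static int mydrv_nmi_dying_cpu(unsigned int cpu)
{
	disable_percpu_nmi(mydrv_irq);
	teardown_percpu_nmi(mydrv_irq);
	return 0;
}

/* At probe time, e.g.: */
cpuhp_setup_state(CPUHP_AP_MYDRV_STARTING, "mydrv/nmi:starting",
		  mydrv_nmi_starting_cpu, mydrv_nmi_dying_cpu);
```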

Thanks,

	M.

> 
> Signed-off-by: Lecopzer Chen <lecopzer.chen@mediatek.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Julien Thierry <julien.thierry@arm.com>
> Cc: YJ Chiang <yj.chiang@mediatek.com>
> Cc: Lecopzer Chen <lecopzer.chen@mediatek.com>
> ---
>  kernel/irq/manage.c | 10 ----------
>  1 file changed, 10 deletions(-)
> 
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index 78f3ddeb7fe4..aa03640cd7fb 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -2509,9 +2509,6 @@ int request_percpu_nmi(unsigned int irq, irq_handler_t handler,
>   *	This call prepares an interrupt line to deliver NMI on the current CPU,
>   *	before that interrupt line gets enabled with enable_percpu_nmi().
>   *
> - *	As a CPU local operation, this should be called from non-preemptible
> - *	context.
> - *
>   *	If the interrupt line cannot be used to deliver NMIs, function
>   *	will fail returning a negative value.
>   */
> @@ -2521,8 +2518,6 @@ int prepare_percpu_nmi(unsigned int irq)
>  	struct irq_desc *desc;
>  	int ret = 0;
>  
> -	WARN_ON(preemptible());
> -
>  	desc = irq_get_desc_lock(irq, &flags,
>  				 IRQ_GET_DESC_CHECK_PERCPU);
>  	if (!desc)
> @@ -2554,17 +2549,12 @@ int prepare_percpu_nmi(unsigned int irq)
>   *	This call undoes the setup done by prepare_percpu_nmi().
>   *
>   *	IRQ line should not be enabled for the current CPU.
> - *
> - *	As a CPU local operation, this should be called from non-preemptible
> - *	context.
>   */
>  void teardown_percpu_nmi(unsigned int irq)
>  {
>  	unsigned long flags;
>  	struct irq_desc *desc;
>  
> -	WARN_ON(preemptible());
> -
>  	desc = irq_get_desc_lock(irq, &flags,
>  				 IRQ_GET_DESC_CHECK_PERCPU);
>  	if (!desc)
>
Julien Thierry June 20, 2019, 9:44 a.m. UTC | #2
Hi Lecopzer,

On 20/06/2019 10:12, Lecopzer Chen wrote:
> prepare_percpu_nmi() first acquires the descriptor lock via irq_get_desc_lock().
> Whether or not preemption is enabled on entry, taking that lock disables preemption.
> 
> This simplifies the usage of prepare_percpu_nmi(): callers no longer need to
> take an extra lock or explicitly call preempt_disable()/preempt_enable().
> 

This allows wrong usage of prepare_percpu_nmi(). If you call it from a
preemptible context, you could start the call on one CPU, get preempted
and set up the NMI on a completely different CPU from the one you
started on.

The check is a sanity check: if you end up calling prepare_percpu_nmi()
from a preemptible context, your intentions are unclear, unless you are
fine with the possibility of "preparing an NMI on a random CPU". You
would also have no way of knowing that that CPU (since you could be
running on any CPU) doesn't already have that IRQ line set up for NMI
delivery.

So, I don't think removing those checks simplifies much; it just
silences calls that could go wrong.

Cheers,
Lecopzer Chen June 20, 2019, 10:50 a.m. UTC | #3
Thanks a lot for the reply!

I just misunderstood how a PPI is registered and thought I had a
chance to eliminate the code. This patch makes no sense now, please
ignore it.

Sorry to disturb you guys.


Thanks,
	Lecopzer

Patch

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 78f3ddeb7fe4..aa03640cd7fb 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -2509,9 +2509,6 @@ int request_percpu_nmi(unsigned int irq, irq_handler_t handler,
  *	This call prepares an interrupt line to deliver NMI on the current CPU,
  *	before that interrupt line gets enabled with enable_percpu_nmi().
  *
- *	As a CPU local operation, this should be called from non-preemptible
- *	context.
- *
  *	If the interrupt line cannot be used to deliver NMIs, function
  *	will fail returning a negative value.
  */
@@ -2521,8 +2518,6 @@ int prepare_percpu_nmi(unsigned int irq)
 	struct irq_desc *desc;
 	int ret = 0;
 
-	WARN_ON(preemptible());
-
 	desc = irq_get_desc_lock(irq, &flags,
 				 IRQ_GET_DESC_CHECK_PERCPU);
 	if (!desc)
@@ -2554,17 +2549,12 @@ int prepare_percpu_nmi(unsigned int irq)
  *	This call undoes the setup done by prepare_percpu_nmi().
  *
  *	IRQ line should not be enabled for the current CPU.
- *
- *	As a CPU local operation, this should be called from non-preemptible
- *	context.
  */
 void teardown_percpu_nmi(unsigned int irq)
 {
 	unsigned long flags;
 	struct irq_desc *desc;
 
-	WARN_ON(preemptible());
-
 	desc = irq_get_desc_lock(irq, &flags,
 				 IRQ_GET_DESC_CHECK_PERCPU);
 	if (!desc)