
[1/4] powerpc/64: Mark prep_irq_for_idle() __cpuidle

Message ID 20230406144535.3786008-1-mpe@ellerman.id.au
State Handled Elsewhere, archived
Series [1/4] powerpc/64: Mark prep_irq_for_idle() __cpuidle

Commit Message

Michael Ellerman April 6, 2023, 2:45 p.m. UTC
Code in the idle path is not allowed to be instrumented because RCU is
disabled, see commit 0e985e9d2286 ("cpuidle: Add comments about
noinstr/__cpuidle usage").

Mark prep_irq_for_idle() __cpuidle, which is equivalent to noinstr, to
enforce that.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/kernel/irq_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
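For context, __cpuidle is the idle-path counterpart of noinstr: in recent kernels (the v6.3 era this patch targets) it is built on the same __noinstr_section() helper, only placing the function in .cpuidle.text instead of .noinstr.text, so the function is excluded from tracing and other instrumentation while RCU is not watching. A minimal sketch of the relevant definitions, for reference only and not part of this patch (paraphrased from include/linux/compiler_types.h and include/linux/cpu.h; check your tree for the exact attribute list):

    /* include/linux/compiler_types.h: noinstr puts the function in
     * .noinstr.text and disables tracing/instrumentation. */
    #define noinstr	__noinstr_section(".noinstr.text")

    /* include/linux/cpu.h: __cpuidle is the same annotation, but the
     * function lands in .cpuidle.text so the idle core and objtool can
     * recognise idle-path code. */
    #define __cpuidle	__noinstr_section(".cpuidle.text")

With that, annotating prep_irq_for_idle() below enforces the same "no instrumentation" rules as a noinstr function.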

Comments

Michael Ellerman April 26, 2023, 12:01 p.m. UTC | #1
On Fri, 07 Apr 2023 00:45:32 +1000, Michael Ellerman wrote:
> Code in the idle path is not allowed to be instrumented because RCU is
> disabled, see commit 0e985e9d2286 ("cpuidle: Add comments about
> noinstr/__cpuidle usage").
> 
> Mark prep_irq_for_idle() __cpuidle, which is equivalent to noinstr, to
> enforce that.
> 
> [...]

Applied to powerpc/next.

[1/4] powerpc/64: Mark prep_irq_for_idle() __cpuidle
      https://git.kernel.org/powerpc/c/7640854d966449e5befeff02c45c799cfc3d4fcf
[2/4] powerpc/64: Don't call trace_hardirqs_on() in prep_irq_for_idle()
      https://git.kernel.org/powerpc/c/6fee130204650515af80c2786176da0fe7e94482
[3/4] cpuidle: pseries: Mark ->enter() functions as __cpuidle
      https://git.kernel.org/powerpc/c/88990745c934b14359e526033c5bc1daaf15267c
[4/4] powerpc/pseries: Always inline functions called from cpuidle
      https://git.kernel.org/powerpc/c/18b5e7170a33a985dc842ab24a690fa6ff0f50e4

cheers

Patch

diff --git a/arch/powerpc/kernel/irq_64.c b/arch/powerpc/kernel/irq_64.c
index c788c55512ed..2ab0e8d84c1d 100644
--- a/arch/powerpc/kernel/irq_64.c
+++ b/arch/powerpc/kernel/irq_64.c
@@ -354,7 +354,7 @@ EXPORT_SYMBOL(arch_local_irq_restore);
  * disabled and marked as such, so the local_irq_enable() call
  * in arch_cpu_idle() will properly re-enable everything.
  */
-bool prep_irq_for_idle(void)
+__cpuidle bool prep_irq_for_idle(void)
 {
 	/*
 	 * First we need to hard disable to ensure no interrupt