| Message ID | 20220708094950.41944-2-zhengqi.arch@bytedance.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | arm64: run softirqs on the per-CPU IRQ stack |
On Fri, Jul 8, 2022 at 11:49 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>
> Currently arm64 supports per-CPU IRQ stack, but softirqs
> are still handled in the task context.
>
> Since any call to local_bh_enable() at any level in the task's
> call stack may trigger a softirq processing run, which could
> potentially cause a task stack overflow if the combined stack
> footprints exceed the stack's size, let's run these softirqs
> on the IRQ stack as well.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>

This seems like a nice improvement, and this version addresses
my concern I raised on the RFC version.

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
On 2022/7/14 19:32, Arnd Bergmann wrote:
> On Fri, Jul 8, 2022 at 11:49 AM Qi Zheng <zhengqi.arch@bytedance.com> wrote:
>>
>> Currently arm64 supports per-CPU IRQ stack, but softirqs
>> are still handled in the task context.
>>
>> Since any call to local_bh_enable() at any level in the task's
>> call stack may trigger a softirq processing run, which could
>> potentially cause a task stack overflow if the combined stack
>> footprints exceed the stack's size, let's run these softirqs
>> on the IRQ stack as well.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>
> This seems like a nice improvement, and this version addresses
> my concern I raised on the RFC version.
>
> Reviewed-by: Arnd Bergmann <arnd@arndb.de>

Thanks for your review. :)
On Fri, Jul 08, 2022 at 05:49:49PM +0800, Qi Zheng wrote:
> Currently arm64 supports per-CPU IRQ stack, but softirqs
> are still handled in the task context.
>
> Since any call to local_bh_enable() at any level in the task's
> call stack may trigger a softirq processing run, which could
> potentially cause a task stack overflow if the combined stack
> footprints exceed the stack's size, let's run these softirqs
> on the IRQ stack as well.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
>  arch/arm64/Kconfig      |  1 +
>  arch/arm64/kernel/irq.c | 13 +++++++++++++
>  2 files changed, 14 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 4c1e1d2d2f8b..be0a9f0052ee 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -230,6 +230,7 @@ config ARM64
>  	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
>  	select TRACE_IRQFLAGS_SUPPORT
>  	select TRACE_IRQFLAGS_NMI_SUPPORT
> +	select HAVE_SOFTIRQ_ON_OWN_STACK
>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>
> diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
> index bda49430c9ea..c36ad20a52f3 100644
> --- a/arch/arm64/kernel/irq.c
> +++ b/arch/arm64/kernel/irq.c
> @@ -22,6 +22,7 @@
>  #include <linux/vmalloc.h>
>  #include <asm/daifflags.h>
>  #include <asm/vmap_stack.h>
> +#include <asm/exception.h>
>
>  /* Only access this in an NMI enter/exit */
>  DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
> @@ -71,6 +72,18 @@ static void init_irq_stacks(void)
>  }
>  #endif
>
> +#ifndef CONFIG_PREEMPT_RT
> +static void ____do_softirq(struct pt_regs *regs)
> +{
> +	__do_softirq();
> +}
> +
> +void do_softirq_own_stack(void)
> +{
> +	call_on_irq_stack(NULL, ____do_softirq);
> +}
> +#endif

Acked-by: Will Deacon <will@kernel.org>

Please can you repost this at -rc1 and we can queue it up for 5.21?

Thanks,

Will
On 2022/7/22 17:04, Will Deacon wrote:
> On Fri, Jul 08, 2022 at 05:49:49PM +0800, Qi Zheng wrote:
>> Currently arm64 supports per-CPU IRQ stack, but softirqs
>> are still handled in the task context.
>>
>> Since any call to local_bh_enable() at any level in the task's
>> call stack may trigger a softirq processing run, which could
>> potentially cause a task stack overflow if the combined stack
>> footprints exceed the stack's size, let's run these softirqs
>> on the IRQ stack as well.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> ---
>>  arch/arm64/Kconfig      |  1 +
>>  arch/arm64/kernel/irq.c | 13 +++++++++++++
>>  2 files changed, 14 insertions(+)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 4c1e1d2d2f8b..be0a9f0052ee 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -230,6 +230,7 @@ config ARM64
>>  	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
>>  	select TRACE_IRQFLAGS_SUPPORT
>>  	select TRACE_IRQFLAGS_NMI_SUPPORT
>> +	select HAVE_SOFTIRQ_ON_OWN_STACK
>>  	help
>>  	  ARM 64-bit (AArch64) Linux support.
>>
>> diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
>> index bda49430c9ea..c36ad20a52f3 100644
>> --- a/arch/arm64/kernel/irq.c
>> +++ b/arch/arm64/kernel/irq.c
>> @@ -22,6 +22,7 @@
>>  #include <linux/vmalloc.h>
>>  #include <asm/daifflags.h>
>>  #include <asm/vmap_stack.h>
>> +#include <asm/exception.h>
>>
>>  /* Only access this in an NMI enter/exit */
>>  DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
>> @@ -71,6 +72,18 @@ static void init_irq_stacks(void)
>>  }
>>  #endif
>>
>> +#ifndef CONFIG_PREEMPT_RT
>> +static void ____do_softirq(struct pt_regs *regs)
>> +{
>> +	__do_softirq();
>> +}
>> +
>> +void do_softirq_own_stack(void)
>> +{
>> +	call_on_irq_stack(NULL, ____do_softirq);
>> +}
>> +#endif
>
> Acked-by: Will Deacon <will@kernel.org>
>
> Please can you repost this at -rc1 and we can queue it up for 5.21?

Sure, will do.

Thanks,
Qi

>
> Thanks,
>
> Will
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4c1e1d2d2f8b..be0a9f0052ee 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -230,6 +230,7 @@ config ARM64
 	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
 	select TRACE_IRQFLAGS_SUPPORT
 	select TRACE_IRQFLAGS_NMI_SUPPORT
+	select HAVE_SOFTIRQ_ON_OWN_STACK
 	help
 	  ARM 64-bit (AArch64) Linux support.

diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index bda49430c9ea..c36ad20a52f3 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
 #include <asm/vmap_stack.h>
+#include <asm/exception.h>

 /* Only access this in an NMI enter/exit */
 DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
@@ -71,6 +72,18 @@ static void init_irq_stacks(void)
 }
 #endif

+#ifndef CONFIG_PREEMPT_RT
+static void ____do_softirq(struct pt_regs *regs)
+{
+	__do_softirq();
+}
+
+void do_softirq_own_stack(void)
+{
+	call_on_irq_stack(NULL, ____do_softirq);
+}
+#endif
+
 static void default_handle_irq(struct pt_regs *regs)
 {
 	panic("IRQ taken without a root IRQ handler\n");
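For context, the patch reuses the same stack-switching helper that arm64 already uses for hardirqs. The snippet below is a simplified sketch of the existing hardirq entry path in arch/arm64/kernel/entry-common.c around this kernel version, shown only to illustrate the pattern being extended to softirqs; exact details may differ between releases.

```c
/*
 * Simplified sketch (not part of this patch): when an interrupt is taken
 * while the CPU is on the task stack, the handler is run via
 * call_on_irq_stack(), which switches SP to the per-CPU IRQ stack.
 * do_softirq_own_stack() in this patch applies the same trick to softirqs.
 */
static void do_interrupt_handler(struct pt_regs *regs,
				 void (*handler)(struct pt_regs *))
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	if (on_thread_stack())
		call_on_irq_stack(regs, handler);
	else
		handler(regs);

	set_irq_regs(old_regs);
}
```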
Currently arm64 supports per-CPU IRQ stack, but softirqs
are still handled in the task context.

Since any call to local_bh_enable() at any level in the task's
call stack may trigger a softirq processing run, which could
potentially cause a task stack overflow if the combined stack
footprints exceed the stack's size, let's run these softirqs
on the IRQ stack as well.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 arch/arm64/Kconfig      |  1 +
 arch/arm64/kernel/irq.c | 13 +++++++++++++
 2 files changed, 14 insertions(+)
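To see why selecting HAVE_SOFTIRQ_ON_OWN_STACK plus a do_softirq_own_stack() implementation is all the architecture has to provide, the sketch below roughly follows the generic dispatch path (simplified from kernel/softirq.c and asm-generic/softirq_stack.h of this era; the exact code, including the PREEMPT_RT handling, varies between kernel versions).

```c
/*
 * asm-generic/softirq_stack.h (simplified): the arch override is only
 * used when HAVE_SOFTIRQ_ON_OWN_STACK is selected; otherwise pending
 * softirqs run directly on the current (task) stack.
 */
#ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
void do_softirq_own_stack(void);
#else
static inline void do_softirq_own_stack(void)
{
	__do_softirq();
}
#endif

/*
 * kernel/softirq.c (simplified): local_bh_enable() ends up here when
 * softirqs are pending and we are not already in interrupt context,
 * i.e. potentially deep in a task's call stack.
 */
asmlinkage __visible void do_softirq(void)
{
	unsigned long flags;

	if (in_interrupt())
		return;

	local_irq_save(flags);

	if (local_softirq_pending())
		do_softirq_own_stack();	/* with this patch, arm64 switches to the IRQ stack here */

	local_irq_restore(flags);
}
```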