| Message ID | 20220815124739.15948-1-zhengqi.arch@bytedance.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | [v3] arm64: run softirqs on the per-CPU IRQ stack |
On 2022/8/15 20:47, Qi Zheng wrote:
> Currently arm64 supports per-CPU IRQ stack, but softirqs
> are still handled in the task context.
>
> Since any call to local_bh_enable() at any level in the task's
> call stack may trigger a softirq processing run, which could
> potentially cause a task stack overflow if the combined stack
> footprints exceed the stack's size, let's run these softirqs
> on the IRQ stack as well.
>
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> Reviewed-by: Arnd Bergmann <arnd@arndb.de>
> Acked-by: Will Deacon <will@kernel.org>
> ---
> v2: https://lore.kernel.org/lkml/20220802065325.39740-1-zhengqi.arch@bytedance.com/
> v1: https://lore.kernel.org/lkml/20220708094950.41944-1-zhengqi.arch@bytedance.com/
> RFC: https://lore.kernel.org/lkml/20220707110511.52129-1-zhengqi.arch@bytedance.com/
>
> Changelog in v2 -> v3:
> - rebase onto the v6.0-rc1

Gentle ping.

Thanks,
Qi

>
> Changelog in v1 -> v2:
> - temporarily discard [PATCH v1 2/2] to allow this patch to be merged first
> - rebase onto the v5.19
> - collect Reviewed-by and Acked-by
>
> Changelog in RFC -> v1:
> - fix conflicts with commit f2c5092190f2 ("arch/*: Disable softirq stacks on PREEMPT_RT.")
>
>  arch/arm64/Kconfig      |  1 +
>  arch/arm64/kernel/irq.c | 13 +++++++++++++
>  2 files changed, 14 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 571cc234d0b3..ee92f5887cf6 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -230,6 +230,7 @@ config ARM64
>  	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
>  	select TRACE_IRQFLAGS_SUPPORT
>  	select TRACE_IRQFLAGS_NMI_SUPPORT
> +	select HAVE_SOFTIRQ_ON_OWN_STACK
>  	help
>  	  ARM 64-bit (AArch64) Linux support.
>
> diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
> index bda49430c9ea..c36ad20a52f3 100644
> --- a/arch/arm64/kernel/irq.c
> +++ b/arch/arm64/kernel/irq.c
> @@ -22,6 +22,7 @@
>  #include <linux/vmalloc.h>
>  #include <asm/daifflags.h>
>  #include <asm/vmap_stack.h>
> +#include <asm/exception.h>
>
>  /* Only access this in an NMI enter/exit */
>  DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
> @@ -71,6 +72,18 @@ static void init_irq_stacks(void)
>  }
>  #endif
>
> +#ifndef CONFIG_PREEMPT_RT
> +static void ____do_softirq(struct pt_regs *regs)
> +{
> +	__do_softirq();
> +}
> +
> +void do_softirq_own_stack(void)
> +{
> +	call_on_irq_stack(NULL, ____do_softirq);
> +}
> +#endif
> +
>  static void default_handle_irq(struct pt_regs *regs)
>  {
>  	panic("IRQ taken without a root IRQ handler\n");
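The overflow scenario described in the commit message can be made concrete: any code path that re-enables bottom halves while already deep in a task's call chain may end up running every pending softirq handler on top of that same task stack. A minimal, hypothetical sketch of such a call site (illustrative driver-style code, not part of the patch or this thread):

```c
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/*
 * Hypothetical helper invoked near the bottom of an already-deep task
 * call chain.  spin_unlock_bh() ends in local_bh_enable(), which may
 * process all pending softirqs right away.  Without
 * HAVE_SOFTIRQ_ON_OWN_STACK those handlers run on the task stack that
 * is already close to full; with this patch they run on the per-CPU
 * IRQ stack instead.
 */
static void example_deep_in_call_chain(void)
{
	spin_lock_bh(&example_lock);
	/* ... short critical section ... */
	spin_unlock_bh(&example_lock);	/* local_bh_enable() -> softirq run */
}
```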
On 2022/8/26 12:16, Qi Zheng wrote:
>
>
> On 2022/8/15 20:47, Qi Zheng wrote:
>> Currently arm64 supports per-CPU IRQ stack, but softirqs
>> are still handled in the task context.
>>
>> Since any call to local_bh_enable() at any level in the task's
>> call stack may trigger a softirq processing run, which could
>> potentially cause a task stack overflow if the combined stack
>> footprints exceed the stack's size, let's run these softirqs
>> on the IRQ stack as well.
>>
>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>> Reviewed-by: Arnd Bergmann <arnd@arndb.de>
>> Acked-by: Will Deacon <will@kernel.org>
>> ---
>> v2: https://lore.kernel.org/lkml/20220802065325.39740-1-zhengqi.arch@bytedance.com/
>> v1: https://lore.kernel.org/lkml/20220708094950.41944-1-zhengqi.arch@bytedance.com/
>> RFC: https://lore.kernel.org/lkml/20220707110511.52129-1-zhengqi.arch@bytedance.com/
>>
>> Changelog in v2 -> v3:
>> - rebase onto the v6.0-rc1

Hi Will,

Are we good to merge this patch? Or if there is anything else I need to
do, please let me know. :)

Looking forward to your reply.

Thanks,
Qi

>
> Gentle ping.
>
> Thanks,
> Qi
>
>>
>> Changelog in v1 -> v2:
>> - temporarily discard [PATCH v1 2/2] to allow this patch to be
>> merged first
>> - rebase onto the v5.19
>> - collect Reviewed-by and Acked-by
>>
>> Changelog in RFC -> v1:
>> - fix conflicts with commit f2c5092190f2 ("arch/*: Disable softirq
>> stacks on PREEMPT_RT.")
>>
>>   arch/arm64/Kconfig      |  1 +
>>   arch/arm64/kernel/irq.c | 13 +++++++++++++
>>   2 files changed, 14 insertions(+)
>>
>> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
>> index 571cc234d0b3..ee92f5887cf6 100644
>> --- a/arch/arm64/Kconfig
>> +++ b/arch/arm64/Kconfig
>> @@ -230,6 +230,7 @@ config ARM64
>>   	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
>>   	select TRACE_IRQFLAGS_SUPPORT
>>   	select TRACE_IRQFLAGS_NMI_SUPPORT
>> +	select HAVE_SOFTIRQ_ON_OWN_STACK
>>   	help
>>   	  ARM 64-bit (AArch64) Linux support.
>> diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
>> index bda49430c9ea..c36ad20a52f3 100644
>> --- a/arch/arm64/kernel/irq.c
>> +++ b/arch/arm64/kernel/irq.c
>> @@ -22,6 +22,7 @@
>>   #include <linux/vmalloc.h>
>>   #include <asm/daifflags.h>
>>   #include <asm/vmap_stack.h>
>> +#include <asm/exception.h>
>>   /* Only access this in an NMI enter/exit */
>>   DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
>> @@ -71,6 +72,18 @@ static void init_irq_stacks(void)
>>   }
>>   #endif
>> +#ifndef CONFIG_PREEMPT_RT
>> +static void ____do_softirq(struct pt_regs *regs)
>> +{
>> +	__do_softirq();
>> +}
>> +
>> +void do_softirq_own_stack(void)
>> +{
>> +	call_on_irq_stack(NULL, ____do_softirq);
>> +}
>> +#endif
>> +
>>   static void default_handle_irq(struct pt_regs *regs)
>>   {
>>   	panic("IRQ taken without a root IRQ handler\n");
>
On Wed, Sep 07, 2022 at 03:04:48PM +0800, Qi Zheng wrote:
>
>
> On 2022/8/26 12:16, Qi Zheng wrote:
> >
> >
> > On 2022/8/15 20:47, Qi Zheng wrote:
> > > Currently arm64 supports per-CPU IRQ stack, but softirqs
> > > are still handled in the task context.
> > >
> > > Since any call to local_bh_enable() at any level in the task's
> > > call stack may trigger a softirq processing run, which could
> > > potentially cause a task stack overflow if the combined stack
> > > footprints exceed the stack's size, let's run these softirqs
> > > on the IRQ stack as well.
> > >
> > > Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> > > Reviewed-by: Arnd Bergmann <arnd@arndb.de>
> > > Acked-by: Will Deacon <will@kernel.org>
> > > ---
> > > v2: https://lore.kernel.org/lkml/20220802065325.39740-1-zhengqi.arch@bytedance.com/
> > > v1: https://lore.kernel.org/lkml/20220708094950.41944-1-zhengqi.arch@bytedance.com/
> > > RFC: https://lore.kernel.org/lkml/20220707110511.52129-1-zhengqi.arch@bytedance.com/
> > >
> > > Changelog in v2 -> v3:
> > > - rebase onto the v6.0-rc1
>
> Hi Will,
>
> Are we good to merge this patch? Or if there is anything else I need to
> do, please let me know. :)

I'm expecting Catalin to pick this one up for 6.1.

Will
On 2022/9/7 21:34, Will Deacon wrote:
> On Wed, Sep 07, 2022 at 03:04:48PM +0800, Qi Zheng wrote:
>>
>>
>> On 2022/8/26 12:16, Qi Zheng wrote:
>>>
>>>
>>> On 2022/8/15 20:47, Qi Zheng wrote:
>>>> Currently arm64 supports per-CPU IRQ stack, but softirqs
>>>> are still handled in the task context.
>>>>
>>>> Since any call to local_bh_enable() at any level in the task's
>>>> call stack may trigger a softirq processing run, which could
>>>> potentially cause a task stack overflow if the combined stack
>>>> footprints exceed the stack's size, let's run these softirqs
>>>> on the IRQ stack as well.
>>>>
>>>> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
>>>> Reviewed-by: Arnd Bergmann <arnd@arndb.de>
>>>> Acked-by: Will Deacon <will@kernel.org>
>>>> ---
>>>> v2: https://lore.kernel.org/lkml/20220802065325.39740-1-zhengqi.arch@bytedance.com/
>>>> v1: https://lore.kernel.org/lkml/20220708094950.41944-1-zhengqi.arch@bytedance.com/
>>>> RFC: https://lore.kernel.org/lkml/20220707110511.52129-1-zhengqi.arch@bytedance.com/
>>>>
>>>> Changelog in v2 -> v3:
>>>> - rebase onto the v6.0-rc1
>>
>> Hi Will,
>>
>> Are we good to merge this patch? Or if there is anything else I need to
>> do, please let me know. :)
>
> I'm expecting Catalin to pick this one up for 6.1.

Oh, I see. Looking forward to this. Thanks a lot.

Qi

>
> Will
On Mon, 15 Aug 2022 20:47:39 +0800, Qi Zheng wrote:
> Currently arm64 supports per-CPU IRQ stack, but softirqs
> are still handled in the task context.
>
> Since any call to local_bh_enable() at any level in the task's
> call stack may trigger a softirq processing run, which could
> potentially cause a task stack overflow if the combined stack
> footprints exceed the stack's size, let's run these softirqs
> on the IRQ stack as well.
>
> [...]

Applied to arm64 (for-next/misc), thanks!

[1/1] arm64: run softirqs on the per-CPU IRQ stack
      https://git.kernel.org/arm64/c/2d2f3bb897a3
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 571cc234d0b3..ee92f5887cf6 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -230,6 +230,7 @@ config ARM64
 	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
 	select TRACE_IRQFLAGS_SUPPORT
 	select TRACE_IRQFLAGS_NMI_SUPPORT
+	select HAVE_SOFTIRQ_ON_OWN_STACK
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index bda49430c9ea..c36ad20a52f3 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
 #include <asm/vmap_stack.h>
+#include <asm/exception.h>
 
 /* Only access this in an NMI enter/exit */
 DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
@@ -71,6 +72,18 @@ static void init_irq_stacks(void)
 }
 #endif
 
+#ifndef CONFIG_PREEMPT_RT
+static void ____do_softirq(struct pt_regs *regs)
+{
+	__do_softirq();
+}
+
+void do_softirq_own_stack(void)
+{
+	call_on_irq_stack(NULL, ____do_softirq);
+}
+#endif
+
 static void default_handle_irq(struct pt_regs *regs)
 {
 	panic("IRQ taken without a root IRQ handler\n");
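For reference, the do_softirq_own_stack() hook added above is only reached through the generic softirq code: with HAVE_SOFTIRQ_ON_OWN_STACK selected (and PREEMPT_RT disabled), do_softirq() hands pending softirqs to the arch-provided do_softirq_own_stack() instead of running __do_softirq() on the current stack, while architectures without the hook fall back to an inline do_softirq_own_stack() that simply calls __do_softirq(). Below is a condensed sketch of the generic caller, loosely based on kernel/softirq.c around v6.0; it is not a verbatim copy and omits the ksoftirqd-related checks:

```c
/* Condensed sketch of the generic dispatch path (kernel/softirq.c). */
asmlinkage __visible void do_softirq(void)
{
	__u32 pending;
	unsigned long flags;

	/* Already in interrupt context: softirqs will run on irq_exit(). */
	if (in_interrupt())
		return;

	local_irq_save(flags);

	pending = local_softirq_pending();

	if (pending)
		do_softirq_own_stack();	/* arm64: call_on_irq_stack(NULL, ____do_softirq) */

	local_irq_restore(flags);
}
```

local_bh_enable() reaches this path whenever it drops the last bottom-half disable count with softirqs pending, which is why the hook moves that work off the task stack.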