
[RFC,net-next,0/2] net: Use SMP threads for backlog NAPI.

Message ID 20230814093528.117342-1-bigeasy@linutronix.de (mailing list archive)

Message

Sebastian Andrzej Siewior Aug. 14, 2023, 9:35 a.m. UTC
The RPS code and "deferred skb free" both send an IPI/function call
to a remote CPU on which a softirq is then raised. This leads to a warning
on PREEMPT_RT because raising softirqs from a function call has led to
undesired behaviour in the past. I had duct tape in RT for the "deferred
skb free" case and Wander Lairson Costa reported the RPS case.
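
For readers not familiar with the mechanism, here is a minimal, simplified
sketch of that pattern (not code from this series; the function names other
than the kernel APIs are made up for illustration):

/*
 * Simplified sketch of the IPI-raises-softirq pattern described above.
 * backlog_kick_ipi()/kick_remote_backlog() are illustrative names only.
 */
#include <linux/smp.h>
#include <linux/interrupt.h>

/* Runs on the remote CPU in hardirq (IPI) context. */
static void backlog_kick_ipi(void *info)
{
        /*
         * The softirq raised here is serviced later by whatever context
         * runs softirqs on that CPU, which is the part that is awkward
         * on PREEMPT_RT.
         */
        __raise_softirq_irqoff(NET_RX_SOFTIRQ);
}

static void kick_remote_backlog(int cpu)
{
        /* The real code uses a per-CPU csd with the async variant. */
        smp_call_function_single(cpu, backlog_kick_ipi, NULL, 0);
}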

Patch #1 creates per-CPU threads for the backlog NAPI. It follows the
         threaded NAPI model, solves the issue and simplifies the code.
Patch #2 gets rid of the warning. Since the ksoftirqd changes, the
         situation isn't as bad as it was. Still, it would be better to
         keep the work in the context where it originated.
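
Purely for illustration, a rough sketch of the shape of #1: a per-CPU
kthread that services the backlog NAPI in task context instead of relying
on NET_RX_SOFTIRQ raised via IPI. All names and the poll helper below are
hypothetical; the actual patch differs in the details.

/*
 * Rough, illustrative sketch of the per-CPU backlog thread idea; the
 * names and poll_this_cpu_backlog() are hypothetical, not the patch.
 * Setup (init_waitqueue_head() + kthread creation per CPU) is omitted.
 */
#include <linux/kthread.h>
#include <linux/percpu.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/compiler.h>

struct backlog_thread {
        struct task_struct      *task;
        wait_queue_head_t       wq;
        bool                    kick;
};

static DEFINE_PER_CPU(struct backlog_thread, backlog_thread);

static int backlog_napi_thread(void *arg)
{
        struct backlog_thread *bt = arg;

        while (!kthread_should_stop()) {
                wait_event_interruptible(bt->wq,
                                         READ_ONCE(bt->kick) ||
                                         kthread_should_stop());
                WRITE_ONCE(bt->kick, false);
                /*
                 * Here the thread would poll this CPU's backlog NAPI
                 * instance (hypothetical helper), doing in task context
                 * what net_rx_action() does for it in softirq context.
                 */
                /* poll_this_cpu_backlog(); */
        }
        return 0;
}

/* Remote CPUs then wake the thread instead of sending an IPI. */
static void kick_backlog_thread(int cpu)
{
        struct backlog_thread *bt = per_cpu_ptr(&backlog_thread, cpu);

        WRITE_ONCE(bt->kick, true);
        wake_up_interruptible(&bt->wq);
}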

Sebastian

Comments

Jakub Kicinski Aug. 14, 2023, 6:24 p.m. UTC | #1
On Mon, 14 Aug 2023 11:35:26 +0200 Sebastian Andrzej Siewior wrote:
> The RPS code and "deferred skb free" both send an IPI/function call
> to a remote CPU on which a softirq is then raised. This leads to a warning
> on PREEMPT_RT because raising softirqs from a function call has led to
> undesired behaviour in the past. I had duct tape in RT for the "deferred
> skb free" case and Wander Lairson Costa reported the RPS case.

Could you find a less invasive solution?
backlog is used by veth == most containerized environments.
This change has a very high risk of regression for a lot of people.
Sebastian Andrzej Siewior Aug. 17, 2023, 1:16 p.m. UTC | #2
On 2023-08-14 11:24:21 [-0700], Jakub Kicinski wrote:
> On Mon, 14 Aug 2023 11:35:26 +0200 Sebastian Andrzej Siewior wrote:
> > The RPS code and "deferred skb free" both send an IPI/function call
> > to a remote CPU on which a softirq is then raised. This leads to a warning
> > on PREEMPT_RT because raising softirqs from a function call has led to
> > undesired behaviour in the past. I had duct tape in RT for the "deferred
> > skb free" case and Wander Lairson Costa reported the RPS case.
> 
> Could you find a less invasive solution?
> backlog is used by veth == most containerized environments.
> This change has a very high risk of regression for a lot of people.

Looking at the Cloudflare people here in the thread, I doubt they use the
backlog; they have proper NAPI instead, so they might not need this.

There is no threaded NAPI for the backlog and RPS. This was suggested as
the mitigation for the high-load/DoS case. Can this become a problem? Or:
- the backlog is used only by old drivers, so they can move to proper NAPI
  if it becomes a problem;
- RPS spreads the load across multiple CPUs, so it is unlikely to become a
  problem.

Making this either optional in general or mandatory for threaded
interrupts or PREEMPT_RT will probably not make the maintenance of this
code any simpler.

I've been looking at veth. In the XDP case it has its own NAPI instance.
In the non-XDP case it uses the backlog. This should be called from
ndo_start_xmit and the user's write(), so BH is off and interrupts are
enabled at this point, and it should be more or less rate-limited. Couldn't
we bypass the backlog in this case and deliver the packet directly to the
stack?
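
To make the question concrete, a hedged sketch of the two delivery paths
(helper names are made up; whether direct delivery is actually safe for
veth is exactly the open question here):

/* Illustrative only: the two delivery paths under discussion. */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void deliver_via_backlog(struct sk_buff *skb)
{
        /* Enqueue on the per-CPU backlog; NET_RX_SOFTIRQ finishes it. */
        netif_rx(skb);
}

static void deliver_directly(struct sk_buff *skb)
{
        /*
         * Push the skb up the stack in the caller's context (BH is
         * already disabled in ndo_start_xmit), skipping the backlog hop.
         */
        netif_receive_skb(skb);
}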

Sebastian
Jakub Kicinski Aug. 17, 2023, 3:30 p.m. UTC | #3
On Thu, 17 Aug 2023 15:16:12 +0200 Sebastian Andrzej Siewior wrote:
> I've been looking at veth. In the XDP case it has its own NAPI instance.
> In the non-XDP case it uses the backlog. This should be called from
> ndo_start_xmit and the user's write(), so BH is off and interrupts are
> enabled at this point, and it should be more or less rate-limited. Couldn't
> we bypass the backlog in this case and deliver the packet directly to the
> stack?

The backlog in veth eats measurable percentage points of RPS in real
workloads, and I think a number of people have looked at getting rid of it.
So it's a worthy goal for sure, but it may not be a trivial fix.

To my knowledge the two main problems are:
 - we don't want to charge the sending application the processing for
   both "sides" of the connection and all the switching costs.
 - we may get an AA deadlock if the packet ends up looping in any way.

Or at least that's what I remember the problem being at 8am in the
morning :) Adding Daniel and Martin to CC, Paolo would also know this
better than me but I think he's AFK for the rest of the week.
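
A purely illustrative sketch of the second point: the backlog breaks loops
by queueing, while a direct-delivery path would need an explicit guard, for
example something along the lines of the existing xmit recursion helpers
(this is not a proposed patch):

/*
 * Illustration of the looping/AA concern only: fall back to queueing
 * once the delivery path re-enters itself.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static int deliver_with_loop_guard(struct sk_buff *skb)
{
        int ret;

        if (dev_xmit_recursion())       /* nested too deep: queue instead */
                return netif_rx(skb);

        dev_xmit_recursion_inc();
        ret = netif_receive_skb(skb);   /* may re-enter the xmit path */
        dev_xmit_recursion_dec();

        return ret;
}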
Sebastian Andrzej Siewior Aug. 18, 2023, 9:03 a.m. UTC | #4
On 2023-08-17 08:30:25 [-0700], Jakub Kicinski wrote:
> On Thu, 17 Aug 2023 15:16:12 +0200 Sebastian Andrzej Siewior wrote:
> > I've been looking at veth. In the XDP case it has its own NAPI instance.
> > In the non-XDP case it uses the backlog. This should be called from
> > ndo_start_xmit and the user's write(), so BH is off and interrupts are
> > enabled at this point, and it should be more or less rate-limited. Couldn't
> > we bypass the backlog in this case and deliver the packet directly to the
> > stack?
> 
> The backlog in veth eats measurable percentage points of RPS in real
> workloads, and I think a number of people have looked at getting rid of it.
> So it's a worthy goal for sure, but it may not be a trivial fix.

We could separate RPS from the backlog, but then we would still process RPS
after the backlog, so I'm not sure this gains anything. Letting veth always
use its own NAPI in this case would probably do that. Not sure if it helps…

> To my knowledge the two main problems are:
>  - we don't want to charge the sending application the processing for
>    both "sides" of the connection and all the switching costs.

The packet is injected by the user and the softirq is served once the BH
count gets back to 0. So it is served within the task's context and might
be accounted as softirq/system time (might, because I think it needs to be
observed by the timer interrupt for the accounting to happen).
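
In sketch form (illustrative helper name, not actual veth code):

/*
 * Sketch of the accounting argument: a softirq raised while BH is
 * disabled runs when the BH count drops back to zero, i.e. still in
 * the sending task's context.
 */
#include <linux/bottom_half.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void veth_like_tx_path(struct sk_buff *skb)
{
        local_bh_disable();
        netif_rx(skb);          /* queues skb, raises NET_RX_SOFTIRQ */
        local_bh_enable();      /* pending softirqs run here, in this task */
}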

>  - we may get an AA deadlock if the packet ends up looping in any way.

Right, forgot about that one.

> Or at least that's what I remember the problem being at 8am in the
> morning :) Adding Daniel and Martin to CC, Paolo would also know this
> better than me but I think he's AFK for the rest of the week.

Sebastian
Yan Zhai Aug. 18, 2023, 2:43 p.m. UTC | #5
On Thu, Aug 17, 2023 at 8:16 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2023-08-14 11:24:21 [-0700], Jakub Kicinski wrote:
> > On Mon, 14 Aug 2023 11:35:26 +0200 Sebastian Andrzej Siewior wrote:
> > > The RPS code and "deferred skb free" both send an IPI/function call
> > > to a remote CPU on which a softirq is then raised. This leads to a warning
> > > on PREEMPT_RT because raising softirqs from a function call has led to
> > > undesired behaviour in the past. I had duct tape in RT for the "deferred
> > > skb free" case and Wander Lairson Costa reported the RPS case.
> >
> > Could you find a less invasive solution?
> > backlog is used by veth == most containerized environments.
> > This change has a very high risk of regression for a lot of people.
>
> Looking at the Cloudflare people here in the thread, I doubt they use the
> backlog; they have proper NAPI instead, so they might not need this.
>
Cloudflare does have backlog usage. On some veths we have to turn GRO
off to cope with multi-layer encapsulation, and there is also no XDP
attached on these interfaces, thus the backlog is used. There are also
other usages of the backlog: tuntap, loopback and bpf-redirect ingress.
Frankly speaking, making a NAPI instance "threaded" itself is not a
concern. We have had threaded NAPI running on some veths for quite a while,
and it performs pretty well. The concern, if any, would be the
maturity of the new code. I am happy to help derisk with some lab tests
and dogfooding if general agreement is reached to proceed with this
idea.
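
For reference, threaded NAPI is switched per device with dev_set_threaded()
in the kernel, or from userspace via /sys/class/net/<dev>/threaded; a
minimal sketch of the in-kernel side (as of kernels around this thread):

/* Minimal sketch: move all of a device's NAPI polling into kthreads. */
#include <linux/netdevice.h>

static int enable_threaded_napi(struct net_device *dev)
{
        return dev_set_threaded(dev, true);
}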

Yan

> There is no threaded NAPI for the backlog and RPS. This was suggested as
> the mitigation for the high-load/DoS case. Can this become a problem? Or:
> - the backlog is used only by old drivers, so they can move to proper NAPI
>   if it becomes a problem;
> - RPS spreads the load across multiple CPUs, so it is unlikely to become a
>   problem.
>
> Making this either optional in general or mandatory for threaded
> interrupts or PREEMPT_RT will probably not make the maintenance of this
> code any simpler.
>
> I've been looking at veth. In the XDP case it has its own NAPI instance.
> In the non-XDP case it uses the backlog. This should be called from
> ndo_start_xmit and the user's write(), so BH is off and interrupts are
> enabled at this point, and it should be more or less rate-limited. Couldn't
> we bypass the backlog in this case and deliver the packet directly to the
> stack?
>
> Sebastian
Sebastian Andrzej Siewior Aug. 18, 2023, 2:57 p.m. UTC | #6
On 2023-08-18 09:43:08 [-0500], Yan Zhai wrote:
> > Looking at the Cloudflare people here in the thread, I doubt they use the
> > backlog; they have proper NAPI instead, so they might not need this.
> >
> Cloudflare does have backlog usage. On some veths we have to turn GRO

Oh. Okay.

> off to cope with multi-layer encapsulation, and there is also no XDP
> attached on these interfaces, thus the backlog is used. There are also
> other usages of the backlog: tuntap, loopback and bpf-redirect ingress.
> Frankly speaking, making a NAPI instance "threaded" itself is not a
> concern. We have had threaded NAPI running on some veths for quite a while,
> and it performs pretty well. The concern, if any, would be the
> maturity of the new code. I am happy to help derisk with some lab tests
> and dogfooding if general agreement is reached to proceed with this
> idea.

If you have threaded NAPI for veth then you wouldn't be affected by this
code. However, if you _are_ affected by this and you use veth, it would
be helpful to figure out whether you have problems as of net-next and
whether this series helps or makes things worse.

As of now Jakub isn't eager to have it and my testing/convincing is
quite limited. If nobody else yells that something like that would be
helpful, I would simply go and convince PeterZ/tglx to apply 2/2 of this
series.

> Yan

Sebastian
Jakub Kicinski Aug. 18, 2023, 4:21 p.m. UTC | #7
On Fri, 18 Aug 2023 16:57:34 +0200 Sebastian Andrzej Siewior wrote:
> As of now Jakub isn't eager to have it and my testing/convincing is
> quite limited. If nobody else yells that something like that would be
> helpful, I would simply go and convince PeterZ/tglx to apply 2/2 of this
> series.

As tempting as code removal would be, we can still try to explore the
option of letting backlog processing run in threads - as an opt-in on
normal kernels and force it on RT?

But it would be good to wait ~2 weeks before moving forward, if you
don't mind, various core folks keep taking vacations..
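
A purely hypothetical sketch of what such an opt-in could look like; no
interface was agreed in this thread and all names below are illustrative:
optional on normal kernels, forced on PREEMPT_RT.

/*
 * Hypothetical opt-in shape only; nothing here was agreed in this
 * thread and the parameter name is made up.
 */
#include <linux/init.h>
#include <linux/cache.h>
#include <linux/kconfig.h>

static bool backlog_threaded __read_mostly;

static int __init setup_backlog_threaded(char *str)
{
        backlog_threaded = true;
        return 0;
}
early_param("backlog_threaded", setup_backlog_threaded);

static bool use_backlog_threads(void)
{
        /* Forced on PREEMPT_RT, opt-in elsewhere. */
        return IS_ENABLED(CONFIG_PREEMPT_RT) || backlog_threaded;
}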
Eric Dumazet Aug. 18, 2023, 4:40 p.m. UTC | #8
On Fri, Aug 18, 2023 at 6:21 PM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Fri, 18 Aug 2023 16:57:34 +0200 Sebastian Andrzej Siewior wrote:
> > As of now Jakub isn't eager to have it and my testing/convincing is
> > quite limited. If nobody else yells that something like that would be
> > helpful, I would simply go and convince PeterZ/tglx to apply 2/2 of this
> > series.
>
> As tempting as code removal would be, we can still try to explore the
> option of letting backlog processing run in threads - as an opt-in on
> normal kernels and force it on RT?

+1

Patch 1/2 as presented is really scary; we would need to test it
extensively on various platforms.

>
> But it would be good to wait ~2 weeks before moving forward, if you
> don't mind, various core folks keep taking vacations..
Yan Zhai Aug. 18, 2023, 4:56 p.m. UTC | #9
On Fri, Aug 18, 2023 at 9:57 AM Sebastian Andrzej Siewior
<bigeasy@linutronix.de> wrote:
>
> On 2023-08-18 09:43:08 [-0500], Yan Zhai wrote:
> > > Looking at the Cloudflare people here in the thread, I doubt they use the
> > > backlog; they have proper NAPI instead, so they might not need this.
> > >
> > Cloudflare does have backlog usage. On some veths we have to turn GRO
>
> Oh. Okay.
>
> > off to cope with multi-layer encapsulation, and there is also no XDP
> > attached on these interfaces, thus the backlog is used. There are also
> > other usages of the backlog: tuntap, loopback and bpf-redirect ingress.
> > Frankly speaking, making a NAPI instance "threaded" itself is not a
> > concern. We have had threaded NAPI running on some veths for quite a while,
> > and it performs pretty well. The concern, if any, would be the
> > maturity of the new code. I am happy to help derisk with some lab tests
> > and dogfooding if general agreement is reached to proceed with this
> > idea.
>
> If you have threaded NAPI for veth then you wouldn't be affected by this
> code. However, if you _are_ affected by this and you use veth it would
> be helpful to figure out if you have problems as of net-next and if this
> helps or makes it worse.
>
Yes, we are still impacted on non-NAPI veths and in other scenarios. But
net-next sounds good; there is still plenty of time to evaluate whether it
has any negative impact.

Yan

> As of now Jakub isn't eager to have it and my testing/convincing is
> quite limited. If nobody else yells that something like that would be
> helpful, I would simply go and convince PeterZ/tglx to apply 2/2 of this
> series.
>
> > Yan
>
> Sebastian
Sebastian Andrzej Siewior Aug. 23, 2023, 6:57 a.m. UTC | #10
On 2023-08-18 09:21:11 [-0700], Jakub Kicinski wrote:
> As tempting as code removal would be, we can still try to explore the
> option of letting backlog processing run in threads - as an opt-in on
> normal kernels and force it on RT?
> 
> But it would be good to wait ~2 weeks before moving forward, if you
> don't mind, various core folks keep taking vacations..

No problem.  Let me repost it then in two weeks as optional and not
mandatory.

Sebastian