Message ID | 20230911170531.828100-4-edumazet@google.com (mailing list archive)
---|---
State | Accepted
Commit | 4505dc2a522826975167823f64f0896bac1323fb
Delegated to | Netdev Maintainers
Series | tcp: backlog processing optims
On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> __sk_flush_backlog() / sk_flush_backlog() are used
> when TCP recvmsg()/sendmsg() process large chunks,
> to not let packets in the backlog too long.
>
> It makes sense to call tcp_release_cb() to also
> process actions held in sk->sk_tsq_flags for smoother
> scheduling.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
>  net/core/sock.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3001,6 +3001,9 @@ void __sk_flush_backlog(struct sock *sk)
>  {
>  	spin_lock_bh(&sk->sk_lock.slock);
>  	__release_sock(sk);
> +
> +	if (sk->sk_prot->release_cb)
> +		sk->sk_prot->release_cb(sk);

Out of sheer curiosity, I'm wondering if adding an
indirect_call_wrapper here could make any difference?

I guess not much, and in any case it could be a follow-up.

Cheers,

Paolo
On Tue, Sep 12, 2023 at 6:59 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> > __sk_flush_backlog() / sk_flush_backlog() are used
> > when TCP recvmsg()/sendmsg() process large chunks,
> > to not let packets in the backlog too long.
> >
> > It makes sense to call tcp_release_cb() to also
> > process actions held in sk->sk_tsq_flags for smoother
> > scheduling.
> >
> > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > ---
> >  net/core/sock.c | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/net/core/sock.c b/net/core/sock.c
> > index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
> > --- a/net/core/sock.c
> > +++ b/net/core/sock.c
> > @@ -3001,6 +3001,9 @@ void __sk_flush_backlog(struct sock *sk)
> >  {
> >  	spin_lock_bh(&sk->sk_lock.slock);
> >  	__release_sock(sk);
> > +
> > +	if (sk->sk_prot->release_cb)
> > +		sk->sk_prot->release_cb(sk);
>
> Out of sheer curiosity, I'm wondering if adding an
> indirect_call_wrapper here could make any difference?
>
> I guess not much, and in any case it could be a follow-up.

I think it would make sense, particularly from release_sock()

We have such a change in our kernel, for some reason its author never
upstreamed it :/
On Tue, 2023-09-12 at 19:01 +0200, Eric Dumazet wrote:
> On Tue, Sep 12, 2023 at 6:59 PM Paolo Abeni <pabeni@redhat.com> wrote:
> >
> > On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> > > __sk_flush_backlog() / sk_flush_backlog() are used
> > > when TCP recvmsg()/sendmsg() process large chunks,
> > > to not let packets in the backlog too long.
> > >
> > > It makes sense to call tcp_release_cb() to also
> > > process actions held in sk->sk_tsq_flags for smoother
> > > scheduling.
> > >
> > > Signed-off-by: Eric Dumazet <edumazet@google.com>
> > > ---
> > >  net/core/sock.c | 3 +++
> > >  1 file changed, 3 insertions(+)
> > >
> > > diff --git a/net/core/sock.c b/net/core/sock.c
> > > index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
> > > --- a/net/core/sock.c
> > > +++ b/net/core/sock.c
> > > @@ -3001,6 +3001,9 @@ void __sk_flush_backlog(struct sock *sk)
> > >  {
> > >  	spin_lock_bh(&sk->sk_lock.slock);
> > >  	__release_sock(sk);
> > > +
> > > +	if (sk->sk_prot->release_cb)
> > > +		sk->sk_prot->release_cb(sk);
> >
> > Out of sheer curiosity, I'm wondering if adding an
> > indirect_call_wrapper here could make any difference?
> >
> > I guess not much, and in any case it could be a follow-up.
>
> I think it would make sense, particularly from release_sock()
>
> We have such a change in our kernel, for some reason its author never
> upstreamed it :/

Can be a follow-up, no need to resend the whole series, I'm applying it
now!

The pw backlog is already scaring as is, I prefer the path with less
patches flying ;)

Cheers,

Paolo
diff --git a/net/core/sock.c b/net/core/sock.c
index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3001,6 +3001,9 @@ void __sk_flush_backlog(struct sock *sk)
 {
 	spin_lock_bh(&sk->sk_lock.slock);
 	__release_sock(sk);
+
+	if (sk->sk_prot->release_cb)
+		sk->sk_prot->release_cb(sk);
 	spin_unlock_bh(&sk->sk_lock.slock);
 }
 EXPORT_SYMBOL_GPL(__sk_flush_backlog);
__sk_flush_backlog() / sk_flush_backlog() are used
when TCP recvmsg()/sendmsg() process large chunks,
to not let packets in the backlog too long.

It makes sense to call tcp_release_cb() to also
process actions held in sk->sk_tsq_flags for smoother
scheduling.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/sock.c | 3 +++
 1 file changed, 3 insertions(+)