diff mbox series

[net-next,3/4] net: call prot->release_cb() when processing backlog

Message ID 20230911170531.828100-4-edumazet@google.com (mailing list archive)
State Accepted
Commit 4505dc2a522826975167823f64f0896bac1323fb
Delegated to: Netdev Maintainers
Series tcp: backlog processing optims

Checks

Context Check Description
netdev/series_format success Posting correctly formatted
netdev/tree_selection success Clearly marked for net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 1346 this patch: 1346
netdev/cc_maintainers warning 1 maintainers not CCed: kuniyu@amazon.com
netdev/build_clang success Errors and warnings before: 1364 this patch: 1364
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 1369 this patch: 1369
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 9 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Eric Dumazet Sept. 11, 2023, 5:05 p.m. UTC
__sk_flush_backlog() / sk_flush_backlog() are used
when TCP recvmsg()/sendmsg() process large chunks,
so that packets do not sit in the backlog for too long.

It makes sense to call tcp_release_cb() to also
process actions held in sk->sk_tsq_flags for smoother
scheduling.

Signed-off-by: Eric Dumazet <edumazet@google.com>
---
 net/core/sock.c | 3 +++
 1 file changed, 3 insertions(+)

Comments

Paolo Abeni Sept. 12, 2023, 4:58 p.m. UTC | #1
On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> __sk_flush_backlog() / sk_flush_backlog() are used
> when TCP recvmsg()/sendmsg() process large chunks,
> to not let packets in the backlog too long.
> 
> It makes sense to call tcp_release_cb() to also
> process actions held in sk->sk_tsq_flags for smoother
> scheduling.
> 
> Signed-off-by: Eric Dumazet <edumazet@google.com>
> ---
>  net/core/sock.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3001,6 +3001,9 @@ void __sk_flush_backlog(struct sock *sk)
>  {
>  	spin_lock_bh(&sk->sk_lock.slock);
>  	__release_sock(sk);
> +
> +	if (sk->sk_prot->release_cb)
> +		sk->sk_prot->release_cb(sk);

Out of sheer curiosity, I'm wondering if adding an
indirect_call_wrapper here could make any difference?

I guess not much, and in any case it could be a follow-up.

Cheers,

Paolo
Eric Dumazet Sept. 12, 2023, 5:01 p.m. UTC | #2
On Tue, Sep 12, 2023 at 6:59 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> > [...]
>
> Out of sheer curiosity, I'm wondering if adding an
> indirect_call_wrapper here could make any difference?
>
> I guess not much, and in any case it could be a follow-up.
>

I think it would make sense, particularly from release_sock().

We have such a change in our kernel; for some reason its author never
upstreamed it :/
Paolo Abeni Sept. 12, 2023, 5:19 p.m. UTC | #3
On Tue, 2023-09-12 at 19:01 +0200, Eric Dumazet wrote:
> On Tue, Sep 12, 2023 at 6:59 PM Paolo Abeni <pabeni@redhat.com> wrote:
> > 
> > On Mon, 2023-09-11 at 17:05 +0000, Eric Dumazet wrote:
> > > [...]
> > 
> > Out of sheer curiosity, I'm wondering if adding an
> > indirect_call_wrapper here could make any difference?
> > 
> > I guess not much, and in any case it could be a follow-up.
> > 
> 
> I think it would make sense, particularly from release_sock()
> 
> We have such a change in our kernel, for some reason its author never
> upstreamed it :/

This can be a follow-up, no need to resend the whole series, I'm
applying it now! The pw backlog is already scary as is, and I prefer
the path with fewer patches in flight ;)

Cheers,

Paolo
Patch

diff --git a/net/core/sock.c b/net/core/sock.c
index 21610e3845a5042f7c648ccb3e0d90126df20a0b..bb89b88bc1e8a042c4ee40b3c8345dc58cb1b369 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3001,6 +3001,9 @@  void __sk_flush_backlog(struct sock *sk)
 {
 	spin_lock_bh(&sk->sk_lock.slock);
 	__release_sock(sk);
+
+	if (sk->sk_prot->release_cb)
+		sk->sk_prot->release_cb(sk);
 	spin_unlock_bh(&sk->sk_lock.slock);
 }
 EXPORT_SYMBOL_GPL(__sk_flush_backlog);