| Message ID | 20220611033016.3697610-1-eric.dumazet@gmail.com (mailing list archive) |
|---|---|
| State | Accepted |
| Commit | 219160be496f7f9cd105c5708e37cf22ab4ce0c7 |
| Delegated to: | Netdev Maintainers |
| Series | [net-next] tcp: sk_forced_mem_schedule() optimization |
On Fri, Jun 10, 2022 at 11:30 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> sk_memory_allocated_add() has three callers, and returns
> to them @memory_allocated.
>
> sk_forced_mem_schedule() is one of them, and ignores
> the returned value.
>
> Change sk_memory_allocated_add() to return void.
>
> Change sock_reserve_memory() and __sk_mem_raise_allocated()
> to call sk_memory_allocated().
>
> This removes one cache line miss [1] for RPC workloads,
> as first skbs in TCP write queue and receive queue go through
> sk_forced_mem_schedule().
>
> [1] Cache line holding tcp_memory_allocated.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Acked-by: Soheil Hassas Yeganeh <soheil@google.com>

Nice find!

> ---
>  include/net/sock.h | 3 +--
>  net/core/sock.c    | 9 ++++++---
>  2 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/include/net/sock.h b/include/net/sock.h
> index 0063e8410a4e3ed91aef9cf34eb1127f7ce33b93..304a5e39d41e27105148058066e8fa23490cf9fa 100644
> --- a/include/net/sock.h
> +++ b/include/net/sock.h
> @@ -1412,7 +1412,7 @@ sk_memory_allocated(const struct sock *sk)
>  /* 1 MB per cpu, in page units */
>  #define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
>
> -static inline long
> +static inline void
>  sk_memory_allocated_add(struct sock *sk, int amt)
>  {
>  	int local_reserve;
> @@ -1424,7 +1424,6 @@ sk_memory_allocated_add(struct sock *sk, int amt)
>  		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
>  	}
>  	preempt_enable();
> -	return sk_memory_allocated(sk);
>  }
>
>  static inline void
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 697d5c8e2f0def49351a7d38ec59ab5e875d3b10..92a0296ccb1842f11fb8dd4b2f768885d05daa8f 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1019,7 +1019,8 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
>  		return -ENOMEM;
>
>  	/* pre-charge to forward_alloc */
> -	allocated = sk_memory_allocated_add(sk, pages);
> +	sk_memory_allocated_add(sk, pages);
> +	allocated = sk_memory_allocated(sk);
>  	/* If the system goes into memory pressure with this
>  	 * precharge, give up and return error.
>  	 */
> @@ -2906,11 +2907,13 @@ EXPORT_SYMBOL(sk_wait_data);
>   */
>  int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>  {
> -	struct proto *prot = sk->sk_prot;
> -	long allocated = sk_memory_allocated_add(sk, amt);
>  	bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
> +	struct proto *prot = sk->sk_prot;
>  	bool charged = true;
> +	long allocated;
>
> +	sk_memory_allocated_add(sk, amt);
> +	allocated = sk_memory_allocated(sk);
>  	if (memcg_charge &&
>  	    !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
>  						gfp_memcg_charge())))
> --
> 2.36.1.476.g0c4daa206d-goog
On Fri, Jun 10, 2022 at 8:30 PM Eric Dumazet <eric.dumazet@gmail.com> wrote:
>
> From: Eric Dumazet <edumazet@google.com>
>
> sk_memory_allocated_add() has three callers, and returns
> to them @memory_allocated.
>
> sk_forced_mem_schedule() is one of them, and ignores
> the returned value.
>
> Change sk_memory_allocated_add() to return void.
>
> Change sock_reserve_memory() and __sk_mem_raise_allocated()
> to call sk_memory_allocated().
>
> This removes one cache line miss [1] for RPC workloads,
> as first skbs in TCP write queue and receive queue go through
> sk_forced_mem_schedule().
>
> [1] Cache line holding tcp_memory_allocated.
>
> Signed-off-by: Eric Dumazet <edumazet@google.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>
Hello:

This patch was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Fri, 10 Jun 2022 20:30:16 -0700 you wrote:
> From: Eric Dumazet <edumazet@google.com>
>
> sk_memory_allocated_add() has three callers, and returns
> to them @memory_allocated.
>
> sk_forced_mem_schedule() is one of them, and ignores
> the returned value.
>
> [...]

Here is the summary with links:
  - [net-next] tcp: sk_forced_mem_schedule() optimization
    https://git.kernel.org/netdev/net-next/c/219160be496f

You are awesome, thank you!
diff --git a/include/net/sock.h b/include/net/sock.h
index 0063e8410a4e3ed91aef9cf34eb1127f7ce33b93..304a5e39d41e27105148058066e8fa23490cf9fa 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1412,7 +1412,7 @@ sk_memory_allocated(const struct sock *sk)
 /* 1 MB per cpu, in page units */
 #define SK_MEMORY_PCPU_RESERVE (1 << (20 - PAGE_SHIFT))
 
-static inline long
+static inline void
 sk_memory_allocated_add(struct sock *sk, int amt)
 {
 	int local_reserve;
@@ -1424,7 +1424,6 @@ sk_memory_allocated_add(struct sock *sk, int amt)
 		atomic_long_add(local_reserve, sk->sk_prot->memory_allocated);
 	}
 	preempt_enable();
-	return sk_memory_allocated(sk);
 }
 
 static inline void
diff --git a/net/core/sock.c b/net/core/sock.c
index 697d5c8e2f0def49351a7d38ec59ab5e875d3b10..92a0296ccb1842f11fb8dd4b2f768885d05daa8f 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1019,7 +1019,8 @@ static int sock_reserve_memory(struct sock *sk, int bytes)
 		return -ENOMEM;
 
 	/* pre-charge to forward_alloc */
-	allocated = sk_memory_allocated_add(sk, pages);
+	sk_memory_allocated_add(sk, pages);
+	allocated = sk_memory_allocated(sk);
 	/* If the system goes into memory pressure with this
 	 * precharge, give up and return error.
 	 */
@@ -2906,11 +2907,13 @@ EXPORT_SYMBOL(sk_wait_data);
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
-	struct proto *prot = sk->sk_prot;
-	long allocated = sk_memory_allocated_add(sk, amt);
 	bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+	struct proto *prot = sk->sk_prot;
 	bool charged = true;
+	long allocated;
 
+	sk_memory_allocated_add(sk, amt);
+	allocated = sk_memory_allocated(sk);
 	if (memcg_charge &&
 	    !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
 						gfp_memcg_charge())))