Message ID: 20231019120026.42215-1-wuyun.abel@bytedance.com
State: Accepted
Commit: 2def8ff3fdb66d10ebe3ec84787799ac0244eb23
Delegated to: Netdev Maintainers
Series: [net,v3,1/3] sock: Code cleanup on __sk_mem_raise_allocated()
On Thu, Oct 19, 2023 at 08:00:24PM +0800, Abel Wu wrote:
> Code cleanup for both better simplicity and readability.
> No functional change intended.
>
> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
> Acked-by: Shakeel Butt <shakeelb@google.com>

Reviewed-by: Simon Horman <horms@kernel.org>
Hello:

This series was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Thu, 19 Oct 2023 20:00:24 +0800 you wrote:
> Code cleanup for both better simplicity and readability.
> No functional change intended.
>
> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
> Acked-by: Shakeel Butt <shakeelb@google.com>
> ---
>  net/core/sock.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)

Here is the summary with links:
  - [net,v3,1/3] sock: Code cleanup on __sk_mem_raise_allocated()
    https://git.kernel.org/netdev/net-next/c/2def8ff3fdb6
  - [net,v3,2/3] sock: Doc behaviors for pressure heurisitics
    https://git.kernel.org/netdev/net-next/c/2e12072c67b5
  - [net,v3,3/3] sock: Ignore memcg pressure heuristics when raising allocated
    https://git.kernel.org/netdev/net-next/c/66e6369e312d

You are awesome, thank you!
diff --git a/net/core/sock.c b/net/core/sock.c
index 16584e2dd648..4412c47466a7 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3041,17 +3041,19 @@ EXPORT_SYMBOL(sk_wait_data);
  */
 int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 {
-	bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+	struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
 	struct proto *prot = sk->sk_prot;
-	bool charged = true;
+	bool charged = false;
 	long allocated;
 
 	sk_memory_allocated_add(sk, amt);
 	allocated = sk_memory_allocated(sk);
-	if (memcg_charge &&
-	    !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
-						gfp_memcg_charge())))
-		goto suppress_allocation;
+
+	if (memcg) {
+		if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
+			goto suppress_allocation;
+		charged = true;
+	}
 
 	/* Under limit. */
 	if (allocated <= sk_prot_mem_limits(sk, 0)) {
@@ -3106,8 +3108,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	 */
 	if (sk->sk_wmem_queued + size >= sk->sk_sndbuf) {
 		/* Force charge with __GFP_NOFAIL */
-		if (memcg_charge && !charged) {
-			mem_cgroup_charge_skmem(sk->sk_memcg, amt,
+		if (memcg && !charged) {
+			mem_cgroup_charge_skmem(memcg, amt,
 				gfp_memcg_charge() | __GFP_NOFAIL);
 		}
 		return 1;
@@ -3119,8 +3121,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 
 	sk_memory_allocated_sub(sk, amt);
 
-	if (memcg_charge && charged)
-		mem_cgroup_uncharge_skmem(sk->sk_memcg, amt);
+	if (charged)
+		mem_cgroup_uncharge_skmem(memcg, amt);
 
 	return 0;
 }
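The shape of the cleanup can be sketched as a self-contained userspace toy: cache the (possibly NULL) memcg pointer once, charge at most once, record success in `charged`, and uncharge only when `charged` is set, so the failure path never dereferences a pointer it never charged. Here `fake_charge()`/`fake_uncharge()` and the limit of 100 are made-up stand-ins for `mem_cgroup_charge_skmem()`/`mem_cgroup_uncharge_skmem()`, not kernel code.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for mem_cgroup_charge_skmem(): fail past a limit of 100. */
static bool fake_charge(long *memcg, int amt)
{
	if (*memcg + amt > 100)
		return false;
	*memcg += amt;
	return true;
}

/* Toy stand-in for mem_cgroup_uncharge_skmem(). */
static void fake_uncharge(long *memcg, int amt)
{
	*memcg -= amt;
}

/*
 * Mirrors the patched control flow of __sk_mem_raise_allocated():
 * sk_memcg may be NULL (accounting disabled); over_limit stands in
 * for the protocol-limit checks that force the suppress path.
 */
static int raise_allocated(long *sk_memcg, int amt, bool over_limit)
{
	long *memcg = sk_memcg;		/* cached once, like the patch */
	bool charged = false;

	if (memcg) {
		if (!fake_charge(memcg, amt))
			goto suppress_allocation;
		charged = true;
	}

	if (!over_limit)
		return 1;		/* under limit: accept the raise */

suppress_allocation:
	if (charged)			/* implies memcg != NULL */
		fake_uncharge(memcg, amt);
	return 0;
}
```

Note that `charged == true` implies `memcg != NULL`, which is exactly why the patched uncharge path can drop the old `memcg_charge &&` half of the condition.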