Message ID | 20230522070122.6727-5-wuyun.abel@bytedance.com (mailing list archive) |
---|---|
State | Superseded |
Delegated to: | Netdev Maintainers |
Series | sock: Improve condition on sockmem pressure |
On Mon, May 22, 2023 at 03:01:22PM +0800, Abel Wu wrote:
> Now with the preivous patch, __sk_mem_raise_allocated() considers

nit: s/preivous/previous/

> the memory pressure of both global and the socket's memcg on a func-
> wide level, making the condition of memcg's pressure in question
> redundant.
>
> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
> ---
>  net/core/sock.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 7641d64293af..baccbb58a11a 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>  	if (sk_has_memory_pressure(sk)) {
>  		u64 alloc;
>
> -		if (!sk_under_memory_pressure(sk))
> +		if (!sk_under_global_memory_pressure(sk))
>  			return 1;
>  		alloc = sk_sockets_allocated_read_positive(sk);
> +		/*
> +		 * If under global pressure, allow the sockets that are below
> +		 * average memory usage to raise, trying to be fair among all
> +		 * the sockets under global constrains.
> +		 */

nit:

	/* Multi-line comments in networking code
	 * look like this.
	 */

>  		if (sk_prot_mem_limits(sk, 2) > alloc *
>  		    sk_mem_pages(sk->sk_wmem_queued +
>  				 atomic_read(&sk->sk_rmem_alloc) +
> --
> 2.37.3
>
>
Hi Simon, thanks for reviewing! I will fix the coding style issues
next version!

Thanks,
	Abel

On 5/22/23 8:54 PM, Simon Horman wrote:
> On Mon, May 22, 2023 at 03:01:22PM +0800, Abel Wu wrote:
>> Now with the preivous patch, __sk_mem_raise_allocated() considers
>
> nit: s/preivous/previous/
>
>> the memory pressure of both global and the socket's memcg on a func-
>> wide level, making the condition of memcg's pressure in question
>> redundant.
>>
>> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
>> ---
>>  net/core/sock.c | 7 ++++++-
>>  1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 7641d64293af..baccbb58a11a 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>>  	if (sk_has_memory_pressure(sk)) {
>>  		u64 alloc;
>>
>> -		if (!sk_under_memory_pressure(sk))
>> +		if (!sk_under_global_memory_pressure(sk))
>>  			return 1;
>>  		alloc = sk_sockets_allocated_read_positive(sk);
>> +		/*
>> +		 * If under global pressure, allow the sockets that are below
>> +		 * average memory usage to raise, trying to be fair among all
>> +		 * the sockets under global constrains.
>> +		 */
>
> nit:
> 	/* Multi-line comments in networking code
> 	 * look like this.
> 	 */
>
>>  		if (sk_prot_mem_limits(sk, 2) > alloc *
>>  		    sk_mem_pages(sk->sk_wmem_queued +
>> 				 atomic_read(&sk->sk_rmem_alloc) +
>> --
>> 2.37.3
>>
>>
diff --git a/net/core/sock.c b/net/core/sock.c
index 7641d64293af..baccbb58a11a 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	if (sk_has_memory_pressure(sk)) {
 		u64 alloc;
 
-		if (!sk_under_memory_pressure(sk))
+		if (!sk_under_global_memory_pressure(sk))
 			return 1;
 		alloc = sk_sockets_allocated_read_positive(sk);
+		/*
+		 * If under global pressure, allow the sockets that are below
+		 * average memory usage to raise, trying to be fair among all
+		 * the sockets under global constrains.
+		 */
 		if (sk_prot_mem_limits(sk, 2) > alloc *
 		    sk_mem_pages(sk->sk_wmem_queued +
 				 atomic_read(&sk->sk_rmem_alloc) +
Now with the preivous patch, __sk_mem_raise_allocated() considers
the memory pressure of both global and the socket's memcg on a func-
wide level, making the condition of memcg's pressure in question
redundant.

Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
---
 net/core/sock.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)