
[v2,4/4] sock: Remove redundant cond of memcg pressure

Message ID 20230522070122.6727-5-wuyun.abel@bytedance.com (mailing list archive)
State Superseded
Delegated to: Netdev Maintainers
Series sock: Improve condition on sockmem pressure

Checks

Context Check Description
netdev/series_format warning Target tree name not specified in the subject
netdev/tree_selection success Guessed tree name to be net-next, async
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 14 this patch: 14
netdev/cc_maintainers warning 1 maintainers not CCed: martin.lau@kernel.org
netdev/build_clang success Errors and warnings before: 8 this patch: 8
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 14 this patch: 14
netdev/checkpatch warning WARNING: networking block comments don't use an empty /* line, use /* Comment...
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Abel Wu May 22, 2023, 7:01 a.m. UTC
Now with the preivous patch, __sk_mem_raise_allocated() considers
the memory pressure of both global and the socket's memcg on a func-
wide level, making the condition of memcg's pressure in question
redundant.

Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
---
 net/core/sock.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)
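
For context before the review comments, a rough standalone sketch of what the changed branch decides once the series is applied, going by the commit message above. The struct and helper below are illustrative stand-ins, not kernel API; only the identifiers appearing in the diffs further down are real.

#include <stdbool.h>

/* Illustrative state; the real checks are sk_under_global_memory_pressure()
 * and the function-wide memcg handling added by the previous patch in
 * this series.
 */
struct pressure_state {
	bool global_pressure;	/* protocol-wide memory pressure */
	bool below_avg_usage;	/* fair-share test shown in the hunk */
};

/* Decide whether a socket may keep raising its memory accounting. */
static bool may_raise(const struct pressure_state *st)
{
	/* No global pressure: allow. Memcg pressure is no longer
	 * re-checked here because the previous patch already accounts
	 * for it across the whole function.
	 */
	if (!st->global_pressure)
		return true;

	/* Under global pressure, only sockets below the average
	 * per-socket usage may still raise, keeping things fair.
	 */
	return st->below_avg_usage;
}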

Comments

Simon Horman May 22, 2023, 12:54 p.m. UTC | #1
On Mon, May 22, 2023 at 03:01:22PM +0800, Abel Wu wrote:
> Now with the preivous patch, __sk_mem_raise_allocated() considers

nit: s/preivous/previous/

> the memory pressure of both global and the socket's memcg on a func-
> wide level, making the condition of memcg's pressure in question
> redundant.
> 
> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
> ---
>  net/core/sock.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 7641d64293af..baccbb58a11a 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>  	if (sk_has_memory_pressure(sk)) {
>  		u64 alloc;
>  
> -		if (!sk_under_memory_pressure(sk))
> +		if (!sk_under_global_memory_pressure(sk))
>  			return 1;
>  		alloc = sk_sockets_allocated_read_positive(sk);
> +		/*
> +		 * If under global pressure, allow the sockets that are below
> +		 * average memory usage to raise, trying to be fair among all
> +		 * the sockets under global constrains.
> +		 */

nit:
		/* Multi-line comments in networking code
		 * look like this.
		 */

>  		if (sk_prot_mem_limits(sk, 2) > alloc *
>  		    sk_mem_pages(sk->sk_wmem_queued +
>  				 atomic_read(&sk->sk_rmem_alloc) +
> -- 
> 2.37.3
> 
>
Abel Wu May 23, 2023, 3:04 a.m. UTC | #2
Hi Simon, thanks for reviewing! I will fix the coding style issues
next version!

Thanks,
	Abel

On 5/22/23 8:54 PM, Simon Horman wrote:
> On Mon, May 22, 2023 at 03:01:22PM +0800, Abel Wu wrote:
>> Now with the preivous patch, __sk_mem_raise_allocated() considers
> 
> nit: s/preivous/previous/
> 
>> the memory pressure of both global and the socket's memcg on a func-
>> wide level, making the condition of memcg's pressure in question
>> redundant.
>>
>> Signed-off-by: Abel Wu <wuyun.abel@bytedance.com>
>> ---
>>   net/core/sock.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 7641d64293af..baccbb58a11a 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -3029,9 +3029,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
>>   	if (sk_has_memory_pressure(sk)) {
>>   		u64 alloc;
>>   
>> -		if (!sk_under_memory_pressure(sk))
>> +		if (!sk_under_global_memory_pressure(sk))
>>   			return 1;
>>   		alloc = sk_sockets_allocated_read_positive(sk);
>> +		/*
>> +		 * If under global pressure, allow the sockets that are below
>> +		 * average memory usage to raise, trying to be fair among all
>> +		 * the sockets under global constrains.
>> +		 */
> 
> nit:
> 		/* Multi-line comments in networking code
> 		 * look like this.
> 		 */
> 
>>   		if (sk_prot_mem_limits(sk, 2) > alloc *
>>   		    sk_mem_pages(sk->sk_wmem_queued +
>>   				 atomic_read(&sk->sk_rmem_alloc) +
>> -- 
>> 2.37.3
>>
>>

Patch

diff --git a/net/core/sock.c b/net/core/sock.c
index 7641d64293af..baccbb58a11a 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3029,9 +3029,14 @@  int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
 	if (sk_has_memory_pressure(sk)) {
 		u64 alloc;
 
-		if (!sk_under_memory_pressure(sk))
+		if (!sk_under_global_memory_pressure(sk))
 			return 1;
 		alloc = sk_sockets_allocated_read_positive(sk);
+		/*
+		 * If under global pressure, allow the sockets that are below
+		 * average memory usage to raise, trying to be fair among all
+		 * the sockets under global constrains.
+		 */
 		if (sk_prot_mem_limits(sk, 2) > alloc *
 		    sk_mem_pages(sk->sk_wmem_queued +
 				 atomic_read(&sk->sk_rmem_alloc) +
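
A standalone reading of the fair-share test in the hunk above, with plain integers standing in for the kernel helpers: limit_pages for sk_prot_mem_limits(sk, 2), nr_sockets for sk_sockets_allocated_read_positive(sk), and sk_pages for the sk_mem_pages(...) term at the end of the (truncated) hunk.

#include <stdbool.h>

static bool below_average_share(unsigned long limit_pages,
				unsigned long nr_sockets,
				unsigned long sk_pages)
{
	/* The kernel spells this as
	 *	limit_pages > nr_sockets * sk_pages
	 * which, for nr_sockets > 0, is equivalent to
	 *	sk_pages < limit_pages / nr_sockets
	 * i.e. the socket stays under the average per-socket allowance.
	 * Multiplying instead of dividing avoids integer truncation.
	 */
	return limit_pages > nr_sockets * sk_pages;
}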