
[bpf-next] bpf: Initialize same number of free nodes for each pcpu_freelist

Message ID 20221107085030.3901608-1-xukuohai@huaweicloud.com (mailing list archive)
State Superseded
Delegated to: BPF
Series [bpf-next] bpf: Initialize same number of free nodes for each pcpu_freelist

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2 this patch: 2
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 5 this patch: 5
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2 this patch: 2
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 25 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 pending Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-5 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-6 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-8 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-16 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-19 success Logs for test_progs_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-7 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-PR success PR summary
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_progs on s390x with gcc

Commit Message

Xu Kuohai Nov. 7, 2022, 8:50 a.m. UTC
From: Xu Kuohai <xukuohai@huawei.com>

pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
free nodes for each CPU except the last initialized CPU, always making
the last CPU get fewer free nodes. For example, when nr_elems == 256
and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
each gets 9 free nodes, CPU 28 gets 4 free nodes, CPU 29~31 get 0 free
nodes, while in fact each CPU should get 8 nodes equally.

This patch first initializes nr_elems / num_possible_cpus() free nodes for
each CPU, and then distributes the remaining free nodes one per CPU until
none are left.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
---
 kernel/bpf/percpu_freelist.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
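
For illustration only (not part of the patch): a minimal user-space sketch of
the intended distribution, assuming a fixed 32 possible CPUs to match the
example above (NR_CPUS stands in for num_possible_cpus()). Each CPU gets
nr_elems / num_possible_cpus() nodes, and the first
nr_elems % num_possible_cpus() CPUs get one extra node.

/* Illustration only: expected per-CPU node counts under the new scheme. */
#include <stdio.h>

#define NR_CPUS 32	/* stands in for num_possible_cpus() */

int main(void)
{
	unsigned int nr_elems = 256;
	unsigned int base = nr_elems / NR_CPUS;		/* 8 */
	unsigned int extra = nr_elems % NR_CPUS;	/* 0 */
	unsigned int cpu, total = 0;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		unsigned int n = base + (cpu < extra ? 1 : 0);

		total += n;
		printf("cpu %2u gets %u free nodes\n", cpu, n);
	}
	printf("total %u == nr_elems %u\n", total, nr_elems);
	return 0;
}

With nr_elems == 256 every CPU gets 8 nodes; with nr_elems == 100 the first
4 CPUs would get 4 nodes and the remaining 28 CPUs 3 nodes each.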

Comments

Yonghong Song Nov. 7, 2022, 4:40 p.m. UTC | #1
On 11/7/22 12:50 AM, Xu Kuohai wrote:
> From: Xu Kuohai <xukuohai@huawei.com>
> 
> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
> free nodes for each CPU except the last initialized CPU, always making
> the last CPU get fewer free nodes. For example, when nr_elems == 256

... free nodes for some cpus, and then possibly one cpu with fewer 
nodes, followed by remaining cpus with 0 nodes.

> and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
> each gets 9 free nodes, CPU 28 gets 4 free nodes, CPU 29~31 get 0 free
> nodes, while in fact each CPU should get 8 nodes equally.
> 
> This patch initializes nr_elems / num_possible_cpus() free nodes for each
> CPU firstly, and then allocates the remaining free nodes by one for each
> CPU until no free nodes left.
> 
> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>

LGTM. Did you observe any performance issues?

Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   kernel/bpf/percpu_freelist.c | 9 ++++++---
>   1 file changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
> index b6e7f5c5b9ab..89e84f7381cc 100644
> --- a/kernel/bpf/percpu_freelist.c
> +++ b/kernel/bpf/percpu_freelist.c
> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>   			    u32 nr_elems)
>   {
>   	struct pcpu_freelist_head *head;
> -	int i, cpu, pcpu_entries;
> +	int i, cpu, pcpu_entries, remain_entries;
> +
> +	pcpu_entries = nr_elems / num_possible_cpus();
> +	remain_entries = nr_elems % num_possible_cpus();
>   
> -	pcpu_entries = nr_elems / num_possible_cpus() + 1;
>   	i = 0;
>   
>   	for_each_possible_cpu(cpu) {
> +		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>   again:
>   		head = per_cpu_ptr(s->freelist, cpu);
>   		/* No locking required as this is not visible yet. */
> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>   		buf += elem_size;
>   		if (i == nr_elems)
>   			break;
> -		if (i % pcpu_entries)
> +		if (i < j)
>   			goto again;
>   	}
>   }
Xu Kuohai Nov. 8, 2022, 1:38 p.m. UTC | #2
On 11/8/2022 12:40 AM, Yonghong Song wrote:
> 
> 
> On 11/7/22 12:50 AM, Xu Kuohai wrote:
>> From: Xu Kuohai <xukuohai@huawei.com>
>>
>> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
>> free nodes for each CPU except the last initialized CPU, always making
>> the last CPU get fewer free nodes. For example, when nr_elems == 256
> 
> ... free nodes for some cpus, and then possibly one cpu with fewer nodes, followed by remaining cpus with 0 nodes.
> 

Will update the commit message to describe it more accurately, thanks.

>> and num_possible_cpus() == 32, if CPU 0 is the current cpu, CPU 0~27
>> each gets 9 free nodes, CPU 28 gets 4 free nodes, CPU 29~31 get 0 free
>> nodes, while in fact each CPU should get 8 nodes equally.
>>
>> This patch initializes nr_elems / num_possible_cpus() free nodes for each
>> CPU firstly, and then allocates the remaining free nodes by one for each
>> CPU until no free nodes left.
>>
>> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
> 
> LGTM. Did you observe any performance issues?
>

No. I ran map_perf_test and did not observe any performance issues. I think
it's because the test cases are repeated in loops, so the pcpu_freelists become
stable and balanced after the first few loops.

> Acked-by: Yonghong Song <yhs@fb.com>
> 
>> ---
>>   kernel/bpf/percpu_freelist.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
>> index b6e7f5c5b9ab..89e84f7381cc 100644
>> --- a/kernel/bpf/percpu_freelist.c
>> +++ b/kernel/bpf/percpu_freelist.c
>> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>                   u32 nr_elems)
>>   {
>>       struct pcpu_freelist_head *head;
>> -    int i, cpu, pcpu_entries;
>> +    int i, cpu, pcpu_entries, remain_entries;
>> +
>> +    pcpu_entries = nr_elems / num_possible_cpus();
>> +    remain_entries = nr_elems % num_possible_cpus();
>> -    pcpu_entries = nr_elems / num_possible_cpus() + 1;
>>       i = 0;
>>       for_each_possible_cpu(cpu) {
>> +        int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>>   again:
>>           head = per_cpu_ptr(s->freelist, cpu);
>>           /* No locking required as this is not visible yet. */
>> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>           buf += elem_size;
>>           if (i == nr_elems)
>>               break;
>> -        if (i % pcpu_entries)
>> +        if (i < j)
>>               goto again;
>>       }
>>   }

Patch

diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index b6e7f5c5b9ab..89e84f7381cc 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -100,12 +100,15 @@  void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 			    u32 nr_elems)
 {
 	struct pcpu_freelist_head *head;
-	int i, cpu, pcpu_entries;
+	int i, cpu, pcpu_entries, remain_entries;
+
+	pcpu_entries = nr_elems / num_possible_cpus();
+	remain_entries = nr_elems % num_possible_cpus();
 
-	pcpu_entries = nr_elems / num_possible_cpus() + 1;
 	i = 0;
 
 	for_each_possible_cpu(cpu) {
+		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
 again:
 		head = per_cpu_ptr(s->freelist, cpu);
 		/* No locking required as this is not visible yet. */
@@ -114,7 +117,7 @@  void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 		buf += elem_size;
 		if (i == nr_elems)
 			break;
-		if (i % pcpu_entries)
+		if (i < j)
 			goto again;
 	}
 }
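
As a sanity check, the patched loop can be replayed in a stand-alone
user-space sketch with a counter array in place of the per-CPU freelists
(editor's illustration, again assuming a fixed 32 possible CPUs; not part of
the patch or the kernel code):

/* Replays the patched loop with counters instead of pcpu_freelist_push_node(). */
#include <stdio.h>

#define NR_CPUS 32	/* stands in for num_possible_cpus() */

int main(void)
{
	unsigned int nr_elems = 100;	/* 100 / 32 = 3, remainder 4 */
	unsigned int nodes[NR_CPUS] = {0};
	unsigned int pcpu_entries = nr_elems / NR_CPUS;
	int remain_entries = nr_elems % NR_CPUS;
	unsigned int i = 0, cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		unsigned int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
again:
		nodes[cpu]++;		/* in the kernel: pcpu_freelist_push_node() */
		i++;
		if (i == nr_elems)
			break;
		if (i < j)
			goto again;
	}

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu %2u: %u nodes\n", cpu, nodes[cpu]);
	/* Expected: CPUs 0-3 get 4 nodes each, CPUs 4-31 get 3 nodes each. */
	return 0;
}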