[bpf-next,v2] bpf: Initialize same number of free nodes for each pcpu_freelist

Message ID 20221108142207.4079521-1-xukuohai@huaweicloud.com (mailing list archive)
State Superseded
Delegated to: BPF
Series: [bpf-next,v2] bpf: Initialize same number of free nodes for each pcpu_freelist

Checks

Context Check Description
netdev/tree_selection success Clearly marked for bpf-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 2 this patch: 2
netdev/cc_maintainers success CCed 12 of 12 maintainers
netdev/build_clang success Errors and warnings before: 5 this patch: 5
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 2 this patch: 2
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 25 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 pending Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-7 success Logs for llvm-toolchain
bpf/vmtest-bpf-next-VM_Test-8 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-5 success Logs for build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for build for x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-2 success Logs for build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-3 success Logs for build for aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-4 success Logs for build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-9 success Logs for test_maps on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for test_maps on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-12 success Logs for test_maps on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-13 success Logs for test_maps on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-19 fail Logs for test_progs_no_alu32 on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-23 success Logs for test_progs_no_alu32 on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-24 success Logs for test_progs_no_alu32_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-27 success Logs for test_progs_no_alu32_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-32 success Logs for test_progs_parallel on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-34 success Logs for test_verifier on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-37 success Logs for test_verifier on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-38 success Logs for test_verifier on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-14 success Logs for test_progs on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-15 success Logs for test_progs on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-17 success Logs for test_progs on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-18 success Logs for test_progs on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-20 success Logs for test_progs_no_alu32 on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-22 success Logs for test_progs_no_alu32 on x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-25 success Logs for test_progs_no_alu32_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-28 success Logs for test_progs_no_alu32_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-29 success Logs for test_progs_parallel on aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-30 success Logs for test_progs_parallel on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-33 success Logs for test_progs_parallel on x86_64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-35 success Logs for test_verifier on aarch64 with llvm-16
bpf/vmtest-bpf-next-VM_Test-11 success Logs for test_maps on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-21 success Logs for test_progs_no_alu32 on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-26 success Logs for test_progs_no_alu32_parallel on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-36 success Logs for test_verifier on s390x with gcc
bpf/vmtest-bpf-next-PR fail PR summary
bpf/vmtest-bpf-next-VM_Test-16 fail Logs for test_progs on s390x with gcc
bpf/vmtest-bpf-next-VM_Test-31 success Logs for test_progs_parallel on s390x with gcc

Commit Message

Xu Kuohai Nov. 8, 2022, 2:22 p.m. UTC
From: Xu Kuohai <xukuohai@huawei.com>

pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
free nodes for the first CPUs, then possibly one CPU with fewer nodes,
and leaves the remaining CPUs with 0 nodes. For example, when
nr_elems == 256 and num_possible_cpus() == 32, CPUs 0~27 each get 9 free
nodes, CPU 28 gets 4 free nodes, and CPUs 29~31 get 0 free nodes, while
in fact each CPU should get 8 nodes equally.

This patch first initializes nr_elems / num_possible_cpus() free nodes
for each CPU, then distributes the remaining free nodes one per CPU
until none are left.

Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Acked-by: Yonghong Song <yhs@fb.com>
---
v2: Update commit message and add Yonghong's ack
---
 kernel/bpf/percpu_freelist.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
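
For illustration, the skew and the fix can be reproduced with a minimal
standalone C simulation (hypothetical, not part of the patch) of the
nr_elems == 256, 32-CPU example above:

#include <stdio.h>

int main(void)
{
	enum { CPUS = 32, NR_ELEMS = 256 };
	int old_cnt[CPUS] = {0}, new_cnt[CPUS];
	int left = NR_ELEMS;
	int per_cpu = NR_ELEMS / CPUS + 1;	/* old scheme: 9 */
	int n = NR_ELEMS / CPUS;		/* new scheme: 8 */
	int m = NR_ELEMS % CPUS;		/* new scheme: 0 */
	int cpu;

	/* Old scheme: each CPU in turn takes up to per_cpu nodes until
	 * the pool runs dry: CPUs 0~27 get 9, CPU 28 gets 4, 29~31 get 0.
	 */
	for (cpu = 0; cpu < CPUS && left > 0; cpu++) {
		old_cnt[cpu] = left < per_cpu ? left : per_cpu;
		left -= old_cnt[cpu];
	}

	/* New scheme: n nodes per CPU, remainder spread one per CPU,
	 * so every CPU gets 8 here.
	 */
	for (cpu = 0; cpu < CPUS; cpu++)
		new_cnt[cpu] = n + (cpu < m ? 1 : 0);

	for (cpu = 0; cpu < CPUS; cpu++)
		printf("cpu %2d: old %d, new %d\n",
		       cpu, old_cnt[cpu], new_cnt[cpu]);
	return 0;
}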

Comments

Andrii Nakryiko Nov. 9, 2022, 11:56 p.m. UTC | #1
On Tue, Nov 8, 2022 at 6:05 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>
> From: Xu Kuohai <xukuohai@huawei.com>
>
> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
> free nodes for the first CPUs, then possibly one CPU with fewer nodes,
> and leaves the remaining CPUs with 0 nodes. For example, when
> nr_elems == 256 and num_possible_cpus() == 32, CPUs 0~27 each get 9 free
> nodes, CPU 28 gets 4 free nodes, and CPUs 29~31 get 0 free nodes, while
> in fact each CPU should get 8 nodes equally.
>
> This patch first initializes nr_elems / num_possible_cpus() free nodes
> for each CPU, then distributes the remaining free nodes one per CPU
> until none are left.
>
> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
> Acked-by: Yonghong Song <yhs@fb.com>
> ---
> v2: Update commit message and add Yonghong's ack
> ---
>  kernel/bpf/percpu_freelist.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
> index b6e7f5c5b9ab..89e84f7381cc 100644
> --- a/kernel/bpf/percpu_freelist.c
> +++ b/kernel/bpf/percpu_freelist.c
> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>                             u32 nr_elems)
>  {
>         struct pcpu_freelist_head *head;
> -       int i, cpu, pcpu_entries;
> +       int i, cpu, pcpu_entries, remain_entries;
> +
> +       pcpu_entries = nr_elems / num_possible_cpus();
> +       remain_entries = nr_elems % num_possible_cpus();
>
> -       pcpu_entries = nr_elems / num_possible_cpus() + 1;
>         i = 0;
>
>         for_each_possible_cpu(cpu) {
> +               int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>  again:
>                 head = per_cpu_ptr(s->freelist, cpu);
>                 /* No locking required as this is not visible yet. */
> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>                 buf += elem_size;
>                 if (i == nr_elems)
>                         break;
> -               if (i % pcpu_entries)
> +               if (i < j)
>                         goto again;
>         }

this loop's logic is quite hard to follow, if we are fixing it, can we
simplify it maybe? something like:

int cpu, cpu_idx, i, j, n, m;

n = nr_elems / num_possible_cpus();
m = nr_elems % num_possible_cpus();

cpu_idx = 0;
for_each_possible_cpu(cpu) {
    i = n + (cpu_idx < m ? 1 : 0);
    for (j = 0; j < i; j++) {
        head = per_cpu_ptr(s->freelist, cpu);
        pcpu_freelist_push_node(head, buf);
        buf += elem_size;
    }
    cpu_idx++;
}


no gotos, no extra ifs: for each cpu we determine correct number of
elements to allocate, then just allocate them in a straightforward
loop

>  }
> --
> 2.30.2
>
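
As a quick sanity check of the split suggested above (a hypothetical
userspace sketch, not posted in the thread), the quota
n + (cpu_idx < m ? 1 : 0) hands out exactly nr_elems nodes in total,
with per-CPU counts differing by at most one:

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned int cpus, nr_elems, cpu_idx;

	for (cpus = 1; cpus <= 64; cpus++) {
		for (nr_elems = 0; nr_elems <= 1024; nr_elems++) {
			unsigned int n = nr_elems / cpus;
			unsigned int m = nr_elems % cpus;
			unsigned int total = 0;

			for (cpu_idx = 0; cpu_idx < cpus; cpu_idx++)
				total += n + (cpu_idx < m ? 1 : 0);
			assert(total == nr_elems);	/* nothing lost, nothing extra */
		}
	}
	printf("all splits sum to nr_elems\n");
	return 0;
}
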
Xu Kuohai Nov. 10, 2022, 2:39 a.m. UTC | #2
On 11/10/2022 7:56 AM, Andrii Nakryiko wrote:
> On Tue, Nov 8, 2022 at 6:05 AM Xu Kuohai <xukuohai@huaweicloud.com> wrote:
>>
>> From: Xu Kuohai <xukuohai@huawei.com>
>>
>> pcpu_freelist_populate() initializes nr_elems / num_possible_cpus() + 1
>> free nodes for the first CPUs, then possibly one CPU with fewer nodes,
>> and leaves the remaining CPUs with 0 nodes. For example, when
>> nr_elems == 256 and num_possible_cpus() == 32, CPUs 0~27 each get 9 free
>> nodes, CPU 28 gets 4 free nodes, and CPUs 29~31 get 0 free nodes, while
>> in fact each CPU should get 8 nodes equally.
>>
>> This patch first initializes nr_elems / num_possible_cpus() free nodes
>> for each CPU, then distributes the remaining free nodes one per CPU
>> until none are left.
>>
>> Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
>> Acked-by: Yonghong Song <yhs@fb.com>
>> ---
>> v2: Update commit message and add Yonghong's ack
>> ---
>>   kernel/bpf/percpu_freelist.c | 9 ++++++---
>>   1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
>> index b6e7f5c5b9ab..89e84f7381cc 100644
>> --- a/kernel/bpf/percpu_freelist.c
>> +++ b/kernel/bpf/percpu_freelist.c
>> @@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>                              u32 nr_elems)
>>   {
>>          struct pcpu_freelist_head *head;
>> -       int i, cpu, pcpu_entries;
>> +       int i, cpu, pcpu_entries, remain_entries;
>> +
>> +       pcpu_entries = nr_elems / num_possible_cpus();
>> +       remain_entries = nr_elems % num_possible_cpus();
>>
>> -       pcpu_entries = nr_elems / num_possible_cpus() + 1;
>>          i = 0;
>>
>>          for_each_possible_cpu(cpu) {
>> +               int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
>>   again:
>>                  head = per_cpu_ptr(s->freelist, cpu);
>>                  /* No locking required as this is not visible yet. */
>> @@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
>>                  buf += elem_size;
>>                  if (i == nr_elems)
>>                          break;
>> -               if (i % pcpu_entries)
>> +               if (i < j)
>>                          goto again;
>>          }
> 
> this loop's logic is quite hard to follow, if we are fixing it, can we
> simplify it maybe? something like:
> 
> int cpu, cpu_idx, i, j, n, m;
> 
> n = nr_elems / num_possible_cpus();
> m = nr_elems % num_possible_cpus();
> 
> cpu_idx = 0;
> for_each_possible_cpu(cpu) {
>      i = n + (cpu_idx < m ? 1 : 0);
>      for (j = 0; j < i; j++) {
>          head = per_cpu_ptr(s->freelist, cpu);
>          pcpu_freelist_push_node(head, buf);
>          buf += elem_size;
>      }
>      cpu_idx++;
> }
> 
> 
> no gotos, no extra ifs: for each cpu we determine correct number of
> elements to allocate, then just allocate them in a straightforward
> loop
> 

that's great, will update to:

int cpu, cpu_idx, i, j, n, m;

n = nr_elems / num_possible_cpus();
m = nr_elems % num_possible_cpus();

cpu_idx = 0;
for_each_possible_cpu(cpu) {
    j = min(n + (cpu_idx < m ? 1 : 0), nr_elems);
    for (i = 0; i < j; i++) {
        head = per_cpu_ptr(s->freelist, cpu);
        pcpu_freelist_push_node(head, buf);
        buf += elem_size;
    }
    nr_elems -= j;
    cpu_idx++;
}

>>   }
>> --
>> 2.30.2
>>
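
Since n and m are derived from nr_elems, the per-CPU quotas already sum
to exactly nr_elems, so the min() clamp and the nr_elems decrement in
the snippet above are purely defensive. Dropping them, the whole
function could look something like this (a sketch assembled from the
snippets in this thread, not necessarily the version that was finally
submitted):

void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
			    u32 nr_elems)
{
	struct pcpu_freelist_head *head;
	unsigned int cpu, cpu_idx, i, j, n, m;

	n = nr_elems / num_possible_cpus();
	m = nr_elems % num_possible_cpus();

	cpu_idx = 0;
	for_each_possible_cpu(cpu) {
		head = per_cpu_ptr(s->freelist, cpu);
		/* No locking required as this is not visible yet. */
		j = n + (cpu_idx < m ? 1 : 0);
		for (i = 0; i < j; i++) {
			pcpu_freelist_push_node(head, buf);
			buf += elem_size;
		}
		cpu_idx++;
	}
}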

Patch

diff --git a/kernel/bpf/percpu_freelist.c b/kernel/bpf/percpu_freelist.c
index b6e7f5c5b9ab..89e84f7381cc 100644
--- a/kernel/bpf/percpu_freelist.c
+++ b/kernel/bpf/percpu_freelist.c
@@ -100,12 +100,15 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 			    u32 nr_elems)
 {
 	struct pcpu_freelist_head *head;
-	int i, cpu, pcpu_entries;
+	int i, cpu, pcpu_entries, remain_entries;
+
+	pcpu_entries = nr_elems / num_possible_cpus();
+	remain_entries = nr_elems % num_possible_cpus();
 
-	pcpu_entries = nr_elems / num_possible_cpus() + 1;
 	i = 0;
 
 	for_each_possible_cpu(cpu) {
+		int j = i + pcpu_entries + (remain_entries-- > 0 ? 1 : 0);
 again:
 		head = per_cpu_ptr(s->freelist, cpu);
 		/* No locking required as this is not visible yet. */
@@ -114,7 +117,7 @@ void pcpu_freelist_populate(struct pcpu_freelist *s, void *buf, u32 elem_size,
 		buf += elem_size;
 		if (i == nr_elems)
 			break;
-		if (i % pcpu_entries)
+		if (i < j)
 			goto again;
 	}
 }