Message ID | 20230901111954.1804721-1-houtao@huaweicloud.com (mailing list archive)
---|---
Series | bpf: Enable IRQ after irq_work_raise() completes
On Fri, Sep 1, 2023 at 4:20 AM Hou Tao <houtao@huaweicloud.com> wrote:
>
> From: Hou Tao <houtao1@huawei.com>
>
> Hi,
>
> The patchset aims to fix the problem that bpf_mem_alloc() may return
> NULL unexpectedly when multiple bpf_mem_alloc() are invoked concurrently
> under process context and there is still free memory available. The
> problem was found when doing stress tests for qp-trie, but the same
> problem also exists for bpf_obj_new(), as demonstrated in patch #3.
>
> As pointed out by Alexei, the patchset can only fix the ENOMEM problem
> for normal process context and can not fix the problem for irq-disabled
> context or RT-enabled kernels.
>
> Patch #1 fixes the race between unit_alloc() and unit_alloc(). Patch #2
> fixes the race between unit_alloc() and unit_free(). And patch #3 adds

Looks good, but I disagree that patches 1 and 2 are fixes. They are
certainly improvements to the bpf_ma logic, but not bug fixes. They
decrease the probability of returning NULL, but can never eliminate it.
The async alloc behavior of bpf_ma cannot be "fixed". We can make further
improvements to it, like doing a synchronous kmalloc() when bpf_ma is
used in sleepable or other safe contexts. Such future improvements will
not be "fixes" either.

So I've applied the set to bpf-next.
Hello:

This series was applied to bpf/bpf-next.git (master)
by Alexei Starovoitov <ast@kernel.org>:

On Fri, 1 Sep 2023 19:19:51 +0800 you wrote:
> From: Hou Tao <houtao1@huawei.com>
>
> Hi,
>
> The patchset aims to fix the problem that bpf_mem_alloc() may return
> NULL unexpectedly when multiple bpf_mem_alloc() are invoked concurrently
> under process context and there is still free memory available. The
> problem was found when doing stress test for qp-trie but the same
> problem also exists for bpf_obj_new() as demonstrated in patch #3.
>
> [...]

Here is the summary with links:
  - [bpf,v2,1/3] bpf: Enable IRQ after irq_work_raise() completes in unit_alloc()
    https://git.kernel.org/bpf/bpf-next/c/d2fc491bc41d
  - [bpf,v2,2/3] bpf: Enable IRQ after irq_work_raise() completes in unit_free{_rcu}()
    https://git.kernel.org/bpf/bpf-next/c/9d7f77f997d9
  - [bpf,v2,3/3] selftests/bpf: Test preemption between bpf_obj_new() and bpf_obj_drop()
    https://git.kernel.org/bpf/bpf-next/c/113aa9b83cd6

You are awesome, thank you!
From: Hou Tao <houtao1@huawei.com>

Hi,

The patchset aims to fix the problem that bpf_mem_alloc() may return
NULL unexpectedly when multiple bpf_mem_alloc() are invoked concurrently
under process context and there is still free memory available. The
problem was found when doing stress tests for qp-trie, but the same
problem also exists for bpf_obj_new(), as demonstrated in patch #3.

As pointed out by Alexei, the patchset can only fix the ENOMEM problem
for normal process context and can not fix the problem for irq-disabled
context or RT-enabled kernels.

Patch #1 fixes the race between unit_alloc() and unit_alloc(). Patch #2
fixes the race between unit_alloc() and unit_free(). And patch #3 adds
a selftest for the problem.

The major change compared with v1 is using the local_irq_{save,restore}()
pair to disable and enable preemption instead of the
preempt_{disable,enable}_notrace() pair. The main reason is to prevent
potential overhead from __preempt_schedule_notrace(). I also ran the
htab_mem benchmark and hash_map_perf on an 8-CPU KVM VM to compare the
performance of local_irq_{save,restore}() and
preempt_{disable,enable}_notrace(), and the results are similar, as
shown below:

(1) use preempt_{disable,enable}_notrace()

[root@hello bpf]# ./map_perf_test 4 8
0:hash_map_perf kmalloc 652179 events per sec
1:hash_map_perf kmalloc 651880 events per sec
2:hash_map_perf kmalloc 651382 events per sec
3:hash_map_perf kmalloc 650791 events per sec
5:hash_map_perf kmalloc 650140 events per sec
6:hash_map_perf kmalloc 652773 events per sec
7:hash_map_perf kmalloc 652751 events per sec
4:hash_map_perf kmalloc 648199 events per sec

[root@hello bpf]# ./benchs/run_bench_htab_mem.sh
normal bpf ma
=============
overwrite            per-prod-op: 110.82 ± 0.02k/s, avg mem: 2.00 ± 0.00MiB, peak mem: 2.73MiB
batch_add_batch_del  per-prod-op: 89.79 ± 0.75k/s, avg mem: 1.68 ± 0.38MiB, peak mem: 2.73MiB
add_del_on_diff_cpu  per-prod-op: 17.83 ± 0.07k/s, avg mem: 25.68 ± 2.92MiB, peak mem: 35.10MiB

(2) use local_irq_{save,restore}()

[root@hello bpf]# ./map_perf_test 4 8
0:hash_map_perf kmalloc 656299 events per sec
1:hash_map_perf kmalloc 656397 events per sec
2:hash_map_perf kmalloc 656046 events per sec
3:hash_map_perf kmalloc 655723 events per sec
5:hash_map_perf kmalloc 655221 events per sec
4:hash_map_perf kmalloc 654617 events per sec
6:hash_map_perf kmalloc 650269 events per sec
7:hash_map_perf kmalloc 653665 events per sec

[root@hello bpf]# ./benchs/run_bench_htab_mem.sh
normal bpf ma
=============
overwrite            per-prod-op: 116.10 ± 0.02k/s, avg mem: 2.00 ± 0.00MiB, peak mem: 2.74MiB
batch_add_batch_del  per-prod-op: 88.76 ± 0.61k/s, avg mem: 1.94 ± 0.33MiB, peak mem: 2.74MiB
add_del_on_diff_cpu  per-prod-op: 18.12 ± 0.08k/s, avg mem: 25.10 ± 2.70MiB, peak mem: 34.78MiB

As usual, comments are always welcome.

Change Log:
v2:
  * Use local_irq_save() to disable preemption instead of the
    preempt_{disable,enable}_notrace() pair to prevent potential overhead

v1: https://lore.kernel.org/bpf/20230822133807.3198625-1-houtao@huaweicloud.com/

Hou Tao (3):
  bpf: Enable IRQ after irq_work_raise() completes in unit_alloc()
  bpf: Enable IRQ after irq_work_raise() completes in unit_free{_rcu}()
  selftests/bpf: Test preemption between bpf_obj_new() and bpf_obj_drop()

 kernel/bpf/memalloc.c                          |  16 ++-
 .../bpf/prog_tests/preempted_bpf_ma_op.c       |  89 +++++++++++++++
 .../selftests/bpf/progs/preempted_bpf_ma_op.c  | 106 ++++++++++++++++++
 3 files changed, 208 insertions(+), 3 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/preempted_bpf_ma_op.c
 create mode 100644 tools/testing/selftests/bpf/progs/preempted_bpf_ma_op.c