[v6,bpf-next,00/16] bpf: BPF specific memory allocator.

Message ID 20220902211058.60789-1-alexei.starovoitov@gmail.com (mailing list archive)

Alexei Starovoitov Sept. 2, 2022, 9:10 p.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

Introduce any context BPF specific memory allocator.

Tracing BPF programs can attach to kprobe and fentry. Hence they
run in unknown context where calling plain kmalloc() might not be safe.
Front-end kmalloc() with per-cpu cache of free elements.
Refill this cache asynchronously from irq_work.

Major achievements enabled by bpf_mem_alloc:
- Dynamically allocated hash maps used to be 10 times slower than fully preallocated.
  With bpf_mem_alloc and subsequent optimizations the speed of dynamic maps is equal to full prealloc.
- Tracing bpf programs can use dynamically allocated hash maps,
  potentially saving lots of memory, since a typical hash map is sparsely populated.
- Sleepable bpf programs can use dynamically allocated hash maps.

v5->v6:
- Debugged the reason for selftests/bpf/test_maps ooming in a small VM that BPF CI is using.
  Added patch 16 that optimizes the usage of rcu_barrier-s between bpf_mem_alloc and
  hash map. It drastically improved the speed of htab destruction.

v4->v5:
- Fixed missing migrate_disable in hash tab free path (Daniel)
- Replaced impossible "memory leak" with WARN_ON_ONCE (Martin)
- Dropped sysctl kernel.bpf_force_dyn_alloc patch (Daniel)
- Added Andrii's ack
- Added new patch 15 that removes kmem_cache usage from bpf_mem_alloc.
  It saves memory, speeds up map create/destroy operations
  while maintaining hash map update/delete performance.

v3->v4:
- fix build issue due to missing local.h on 32-bit arch
- add Kumar's ack
- proposal for next steps from Delyan:
https://lore.kernel.org/bpf/d3f76b27f4e55ec9e400ae8dcaecbb702a4932e8.camel@fb.com/

v2->v3:
- Rewrote the free_list algorithm based on discussions with Kumar. Patch 1.
- Allowed sleepable bpf progs to use dynamically allocated maps. Patches 13 and 14.
- Added sysctl to force bpf_mem_alloc in hash map even if pre-alloc is
  requested to reduce memory consumption. Patch 15.
- Fix: zero-fill percpu allocation
- Single rcu_barrier at the end instead of each cpu during bpf_mem_alloc destruction

v2 thread:
https://lore.kernel.org/bpf/20220817210419.95560-1-alexei.starovoitov@gmail.com/

v1->v2:
- Moved unsafe direct call_rcu() from hash map into safe place inside bpf_mem_alloc. Patches 7 and 9.
- Optimized atomic_inc/dec in hash map with percpu_counter. Patch 6.
- Tuned watermarks per allocation size. Patch 8.
- Adopted this approach to per-cpu allocation. Patch 10.
- Fully converted hash map to bpf_mem_alloc. Patch 11.
- Removed tracing prog restriction on map types. Combination of all patches and final patch 12.

v1 thread:
https://lore.kernel.org/bpf/20220623003230.37497-1-alexei.starovoitov@gmail.com/

LWN article:
https://lwn.net/Articles/899274/

Future work:
- expose bpf_mem_alloc as uapi FD to be used in dynptr_alloc, kptr_alloc
- convert lru map to bpf_mem_alloc
- further cleanup htab code. Example: htab_use_raw_lock can be removed.

Alexei Starovoitov (16):
  bpf: Introduce any context BPF specific memory allocator.
  bpf: Convert hash map to bpf_mem_alloc.
  selftests/bpf: Improve test coverage of test_maps
  samples/bpf: Reduce syscall overhead in map_perf_test.
  bpf: Relax the requirement to use preallocated hash maps in tracing
    progs.
  bpf: Optimize element count in non-preallocated hash map.
  bpf: Optimize call_rcu in non-preallocated hash map.
  bpf: Adjust low/high watermarks in bpf_mem_cache
  bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
  bpf: Add percpu allocation support to bpf_mem_alloc.
  bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
  bpf: Remove tracing program restriction on map types
  bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
  bpf: Remove prealloc-only restriction for sleepable bpf programs.
  bpf: Remove usage of kmem_cache from bpf_mem_cache.
  bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.

 include/linux/bpf_mem_alloc.h             |  28 +
 kernel/bpf/Makefile                       |   2 +-
 kernel/bpf/hashtab.c                      | 138 +++--
 kernel/bpf/memalloc.c                     | 634 ++++++++++++++++++++++
 kernel/bpf/syscall.c                      |   5 +-
 kernel/bpf/verifier.c                     |  52 --
 samples/bpf/map_perf_test_kern.c          |  44 +-
 samples/bpf/map_perf_test_user.c          |   2 +-
 tools/testing/selftests/bpf/progs/timer.c |  11 -
 tools/testing/selftests/bpf/test_maps.c   |  38 +-
 10 files changed, 820 insertions(+), 134 deletions(-)
 create mode 100644 include/linux/bpf_mem_alloc.h
 create mode 100644 kernel/bpf/memalloc.c

Comments

patchwork-bot+netdevbpf@kernel.org Sept. 5, 2022, 1:50 p.m. UTC | #1
Hello:

This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Fri,  2 Sep 2022 14:10:42 -0700 you wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> Introduce any context BPF specific memory allocator.
> 
> Tracing BPF programs can attach to kprobe and fentry. Hence they
> run in unknown context where calling plain kmalloc() might not be safe.
> Front-end kmalloc() with per-cpu cache of free elements.
> Refill this cache asynchronously from irq_work.
> 
> [...]

Here is the summary with links:
  - [v6,bpf-next,01/16] bpf: Introduce any context BPF specific memory allocator.
    https://git.kernel.org/bpf/bpf-next/c/7c8199e24fa0
  - [v6,bpf-next,02/16] bpf: Convert hash map to bpf_mem_alloc.
    https://git.kernel.org/bpf/bpf-next/c/fba1a1c6c912
  - [v6,bpf-next,03/16] selftests/bpf: Improve test coverage of test_maps
    https://git.kernel.org/bpf/bpf-next/c/37521bffdd2d
  - [v6,bpf-next,04/16] samples/bpf: Reduce syscall overhead in map_perf_test.
    https://git.kernel.org/bpf/bpf-next/c/89dc8d0c38e0
  - [v6,bpf-next,05/16] bpf: Relax the requirement to use preallocated hash maps in tracing progs.
    https://git.kernel.org/bpf/bpf-next/c/34dd3bad1a6f
  - [v6,bpf-next,06/16] bpf: Optimize element count in non-preallocated hash map.
    https://git.kernel.org/bpf/bpf-next/c/86fe28f7692d
  - [v6,bpf-next,07/16] bpf: Optimize call_rcu in non-preallocated hash map.
    https://git.kernel.org/bpf/bpf-next/c/0fd7c5d43339
  - [v6,bpf-next,08/16] bpf: Adjust low/high watermarks in bpf_mem_cache
    https://git.kernel.org/bpf/bpf-next/c/7c266178aa51
  - [v6,bpf-next,09/16] bpf: Batch call_rcu callbacks instead of SLAB_TYPESAFE_BY_RCU.
    https://git.kernel.org/bpf/bpf-next/c/8d5a8011b35d
  - [v6,bpf-next,10/16] bpf: Add percpu allocation support to bpf_mem_alloc.
    https://git.kernel.org/bpf/bpf-next/c/4ab67149f3c6
  - [v6,bpf-next,11/16] bpf: Convert percpu hash map to per-cpu bpf_mem_alloc.
    https://git.kernel.org/bpf/bpf-next/c/ee4ed53c5eb6
  - [v6,bpf-next,12/16] bpf: Remove tracing program restriction on map types
    https://git.kernel.org/bpf/bpf-next/c/96da3f7d489d
  - [v6,bpf-next,13/16] bpf: Prepare bpf_mem_alloc to be used by sleepable bpf programs.
    https://git.kernel.org/bpf/bpf-next/c/dccb4a9013a6
  - [v6,bpf-next,14/16] bpf: Remove prealloc-only restriction for sleepable bpf programs.
    https://git.kernel.org/bpf/bpf-next/c/02cc5aa29e8c
  - [v6,bpf-next,15/16] bpf: Remove usage of kmem_cache from bpf_mem_cache.
    https://git.kernel.org/bpf/bpf-next/c/bfc03c15bebf
  - [v6,bpf-next,16/16] bpf: Optimize rcu_barrier usage between hash map and bpf_mem_alloc.
    https://git.kernel.org/bpf/bpf-next/c/9f2c6e96c65e

You are awesome, thank you!