
[v2,bpf-next,00/20] bpf: Introduce BPF arena.

Message ID 20240209040608.98927-1-alexei.starovoitov@gmail.com (mailing list archive)

Alexei Starovoitov Feb. 9, 2024, 4:05 a.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

v1->v2:
- Improved commit log with reasons for using vmap_pages_range() in bpf_arena.
  Thanks to Johannes
- Added support for __arena global variables in bpf programs
- Fixed race conditions spotted by Barret
- Fixed wrap32 issue spotted by Barret
- Fixed bpf_map_mmap_sz() the way Andrii suggested

The work on bpf_arena was inspired by Barret's work:
https://github.com/google/ghost-userspace/blob/main/lib/queue.bpf.h
which implements queues, lists and AVL trees entirely as bpf programs,
using a giant bpf array map and integer indices instead of pointers.
bpf_arena is a sparse array that allows normal C pointers to be used to
build such data structures. The last few patches implement a page_frag
allocator, a linked list and a hash table as bpf programs.

v1:
bpf programs have multiple options to communicate with user space:
- Various ring buffers (perf, ftrace, bpf): The data is streamed
  unidirectionally from bpf to user space.
- Hash map: The bpf program populates elements, and user space consumes them
  via bpf syscall.
- mmap()-ed array map: Libbpf creates an array map that is directly accessed by
  the bpf program and mmap-ed to user space. It's the fastest way. Its
  disadvantage is that memory for the whole array is reserved at the start.

Introduce bpf_arena, which is a sparse shared memory region between the bpf
program and user space.

Use cases:
1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous
   region, like memcached or any key/value storage. The bpf program implements an
   in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a
   value without going to user space.
2. The bpf program builds arbitrary data structures in bpf_arena (hash tables,
   rb-trees, sparse arrays), while user space consumes them.
3. bpf_arena is a "heap" of memory from the bpf program's point of view.
   User space may mmap it, but the bpf program will not convert pointers
   to the user base address at run time, which keeps the bpf program fast.

Initially, the kernel vm_area and user vma are not populated. User space can
fault in pages within the range. While servicing a page fault, bpf_arena logic
will insert a new page into the kernel and user vmas. The bpf program can
allocate pages from that region via bpf_arena_alloc_pages(). This kernel
function will insert pages into the kernel vm_area. The subsequent fault-in
from user space will populate that page into the user vma. The
BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in
from user space. In such a case, if a page is not allocated by the bpf program
and not present in the kernel vm_area, the user process will segfault. This is
useful for use cases 2 and 3 above.

bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages
either at a specific address within the arena or allocates a range with the
maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages
and removes the range from the kernel vm_area and from user process vmas.

bpf_arena can be used as a bpf program "heap" of up to 4GB. In this mode, the
speed of the bpf program is more important than ease of sharing with user
space. This is use case 3. In such a case, the BPF_F_NO_USER_CONV flag is
recommended. It will
tell the verifier to treat the rX = bpf_arena_cast_user(rY) instruction as a
32-bit move wX = wY, which will improve bpf prog performance. Otherwise,
bpf_arena_cast_user is translated by JIT to conditionally add the upper 32 bits
of user vm_start (if the pointer is not NULL) to arena pointers before they are
stored into memory. This way, user space sees them as valid 64-bit pointers.

The LLVM change https://github.com/llvm/llvm-project/pull/79902 taught the
LLVM BPF backend to generate the bpf_cast_kern() instruction before a
dereference of an arena pointer and the bpf_cast_user() instruction when an
arena pointer is formed. In a typical bpf program there will be very few
bpf_cast_user() instructions.

From LLVM's point of view, arena pointers are tagged as
__attribute__((address_space(1))). Hence, clang provides helpful diagnostics
when pointers cross address space. Libbpf and the kernel support only
address_space == 1. All other address space identifiers are reserved.

rX = bpf_cast_kern(rY, addr_space) tells the verifier that
rX->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have
to be in the 32-bit domain. The verifier will mark load/store through
PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as
kern_vm_start + 32bit_addr memory accesses. The behavior is similar to
copy_from_kernel_nofault() except that no address checks are necessary. The
address is guaranteed to be in the 4GB range. If the page is not present, the
destination register is zeroed on read, and the operation is ignored on write.

rX = bpf_cast_user(rY, addr_space) tells the verifier that
rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then
the verifier converts cast_user to mov32. Otherwise, JIT will emit native code
equivalent to:
rX = (u32)rY;
if (rY)
  rX |= clear_lo32_bits(arena->user_vm_start); /* replace hi32 bits in rX */

After such conversion, the pointer becomes a valid user pointer within
bpf_arena range. The user process can access data structures created in
bpf_arena without any additional computations. For example, a linked list built
by a bpf program can be walked natively by user space. The last two patches
demonstrate how algorithms in the C language can be compiled as a bpf program
and as native code.

Followup patches are planned:
. support bpf_spin_lock in arena
  bpf programs running on different CPUs can synchronize access to the arena via
  existing bpf_spin_lock mechanisms (spin_locks in bpf_array or in bpf hash map).
  It will be more convenient to allow spin_locks inside the arena too.

Patch set overview:
..
- patch 4: export vmap_pages_range() to be used outside of mm directory
- patch 5: main patch that introduces bpf_arena map type. See commit log
..
- patch 9: main verifier patch to support bpf_arena
..
- patch 11-14: libbpf support for arena
..
- patch 17-20: tests

Alexei Starovoitov (20):
  bpf: Allow kfuncs return 'void *'
  bpf: Recognize '__map' suffix in kfunc arguments
  bpf: Plumb get_unmapped_area() callback into bpf_map_ops
  mm: Expose vmap_pages_range() to the rest of the kernel.
  bpf: Introduce bpf_arena.
  bpf: Disasm support for cast_kern/user instructions.
  bpf: Add x86-64 JIT support for PROBE_MEM32 pseudo instructions.
  bpf: Add x86-64 JIT support for bpf_cast_user instruction.
  bpf: Recognize cast_kern/user instructions in the verifier.
  bpf: Recognize btf_decl_tag("arg:arena") as PTR_TO_ARENA.
  libbpf: Add __arg_arena to bpf_helpers.h
  libbpf: Add support for bpf_arena.
  libbpf: Allow specifying 64-bit integers in map BTF.
  libbpf: Recognize __arena global variables.
  bpf: Tell bpf programs kernel's PAGE_SIZE
  bpf: Add helper macro bpf_arena_cast()
  selftests/bpf: Add unit tests for bpf_arena_alloc/free_pages
  selftests/bpf: Add bpf_arena_list test.
  selftests/bpf: Add bpf_arena_htab test.
  selftests/bpf: Convert simple page_frag allocator to per-cpu.

 arch/x86/net/bpf_jit_comp.c                   | 222 ++++++-
 include/linux/bpf.h                           |  11 +-
 include/linux/bpf_types.h                     |   1 +
 include/linux/bpf_verifier.h                  |   1 +
 include/linux/filter.h                        |   4 +
 include/linux/vmalloc.h                       |   2 +
 include/uapi/linux/bpf.h                      |  12 +
 kernel/bpf/Makefile                           |   3 +
 kernel/bpf/arena.c                            | 557 ++++++++++++++++++
 kernel/bpf/btf.c                              |  19 +-
 kernel/bpf/core.c                             |  23 +-
 kernel/bpf/disasm.c                           |  11 +
 kernel/bpf/log.c                              |   3 +
 kernel/bpf/syscall.c                          |  15 +
 kernel/bpf/verifier.c                         | 135 ++++-
 mm/vmalloc.c                                  |   4 +-
 tools/bpf/bpftool/gen.c                       |  13 +-
 tools/include/uapi/linux/bpf.h                |  12 +
 tools/lib/bpf/bpf_helpers.h                   |   6 +
 tools/lib/bpf/libbpf.c                        | 189 +++++-
 tools/lib/bpf/libbpf_probes.c                 |   7 +
 tools/testing/selftests/bpf/DENYLIST.aarch64  |   2 +
 tools/testing/selftests/bpf/DENYLIST.s390x    |   2 +
 tools/testing/selftests/bpf/bpf_arena_alloc.h |  67 +++
 .../testing/selftests/bpf/bpf_arena_common.h  |  70 +++
 tools/testing/selftests/bpf/bpf_arena_htab.h  | 100 ++++
 tools/testing/selftests/bpf/bpf_arena_list.h  |  95 +++
 .../testing/selftests/bpf/bpf_experimental.h  |  41 ++
 .../selftests/bpf/prog_tests/arena_htab.c     |  88 +++
 .../selftests/bpf/prog_tests/arena_list.c     |  68 +++
 .../selftests/bpf/prog_tests/verifier.c       |   2 +
 .../testing/selftests/bpf/progs/arena_htab.c  |  46 ++
 .../selftests/bpf/progs/arena_htab_asm.c      |   5 +
 .../testing/selftests/bpf/progs/arena_list.c  |  76 +++
 .../selftests/bpf/progs/verifier_arena.c      |  91 +++
 tools/testing/selftests/bpf/test_loader.c     |   9 +-
 36 files changed, 1969 insertions(+), 43 deletions(-)
 create mode 100644 kernel/bpf/arena.c
 create mode 100644 tools/testing/selftests/bpf/bpf_arena_alloc.h
 create mode 100644 tools/testing/selftests/bpf/bpf_arena_common.h
 create mode 100644 tools/testing/selftests/bpf/bpf_arena_htab.h
 create mode 100644 tools/testing/selftests/bpf/bpf_arena_list.h
 create mode 100644 tools/testing/selftests/bpf/prog_tests/arena_htab.c
 create mode 100644 tools/testing/selftests/bpf/prog_tests/arena_list.c
 create mode 100644 tools/testing/selftests/bpf/progs/arena_htab.c
 create mode 100644 tools/testing/selftests/bpf/progs/arena_htab_asm.c
 create mode 100644 tools/testing/selftests/bpf/progs/arena_list.c
 create mode 100644 tools/testing/selftests/bpf/progs/verifier_arena.c

Comments

David Hildenbrand Feb. 12, 2024, 2:14 p.m. UTC | #1
On 09.02.24 05:05, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> v1->v2:
> - Improved commit log with reasons for using vmap_pages_range() in bpf_arena.
>    Thanks to Johannes
> - Added support for __arena global variables in bpf programs
> - Fixed race conditions spotted by Barret
> - Fixed wrap32 issue spotted by Barret
> - Fixed bpf_map_mmap_sz() the way Andrii suggested
> 
> The work on bpf_arena was inspired by Barret's work:
> https://github.com/google/ghost-userspace/blob/main/lib/queue.bpf.h
> that implements queues, lists and AVL trees completely as bpf programs
> using giant bpf array map and integer indices instead of pointers.
> bpf_arena is a sparse array that allows to use normal C pointers to
> build such data structures. Last few patches implement page_frag
> allocator, link list and hash table as bpf programs.
> 
> v1:
> bpf programs have multiple options to communicate with user space:
> - Various ring buffers (perf, ftrace, bpf): The data is streamed
>    unidirectionally from bpf to user space.
> - Hash map: The bpf program populates elements, and user space consumes them
>    via bpf syscall.
> - mmap()-ed array map: Libbpf creates an array map that is directly accessed by
>    the bpf program and mmap-ed to user space. It's the fastest way. Its
>    disadvantage is that memory for the whole array is reserved at the start.
> 
> Introduce bpf_arena, which is a sparse shared memory region between the bpf
> program and user space.
> 
> Use cases:
> 1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous
>     region, like memcached or any key/value storage. The bpf program implements an
>     in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a
>     value without going to user space.

Just so I understand it correctly: this is all backed by unmovable and 
unswappable memory.

Is there any (existing?) way to restrict/cap the memory consumption via 
this interface? How easy is this to access+use by unprivileged userspace?

arena_vm_fault() seems to allocate new pages simply via 
alloc_page(GFP_KERNEL | __GFP_ZERO); No memory accounting, mlock limit 
checks etc.

We certainly don't want each and every application to be able to break 
page compaction, swapping etc, that's why I am asking.
Barret Rhoden Feb. 12, 2024, 5:36 p.m. UTC | #2
On 2/8/24 23:05, Alexei Starovoitov wrote:
> The work on bpf_arena was inspired by Barret's work:
> https://github.com/google/ghost-userspace/blob/main/lib/queue.bpf.h
> that implements queues, lists and AVL trees completely as bpf programs
> using giant bpf array map and integer indices instead of pointers.
> bpf_arena is a sparse array that allows to use normal C pointers to
> build such data structures. Last few patches implement page_frag
> allocator, link list and hash table as bpf programs.

thanks for the shout-out.  FWIW, i'm really looking forward to the BPF 
arena.  it'll be a little work to switch from array maps to the arena, 
but in the long run, it'll vastly simplify our scheduler code.

additionally, the ability to map in pages on demand, instead of 
preallocating a potentially large array map, will both save memory as 
well as allow me to remove some artificial limitations on what our 
scheduler can handle.  (e.g. don't limit ourselves to 64k threads).

thanks,

barret
Alexei Starovoitov Feb. 12, 2024, 6:14 p.m. UTC | #3
On Mon, Feb 12, 2024 at 6:14 AM David Hildenbrand <david@redhat.com> wrote:
>
> How easy is this to access+use by unprivileged userspace?

not possible. bpf arena requires cap_bpf + cap_perfmon.

> arena_vm_fault() seems to allocate new pages simply via
> alloc_page(GFP_KERNEL | __GFP_ZERO); No memory accounting, mlock limit
> checks etc.

Right. That's a bug. As Kumar commented on patch 5, it needs to
move to memcg accounting the way we do for all other maps.
It will be very similar to bpf_map_kmalloc_node().
David Hildenbrand Feb. 13, 2024, 10:35 a.m. UTC | #4
On 12.02.24 19:14, Alexei Starovoitov wrote:
> On Mon, Feb 12, 2024 at 6:14 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> How easy is this to access+use by unprivileged userspace?
> 
> not possible. bpf arena requires cap_bpf + cap_perfmon.
> 
>> arena_vm_fault() seems to allocate new pages simply via
>> alloc_page(GFP_KERNEL | __GFP_ZERO); No memory accounting, mlock limit
>> checks etc.
> 
> Right. That's a bug. As Kumar commented on patch 5, it needs to
> move to memcg accounting the way we do for all other maps.
> It will be very similar to bpf_map_kmalloc_node().
> 

Great, thanks!