[bpf-next,04/16] bpf: Introduce bpf_arena.

Message ID 20240206220441.38311-5-alexei.starovoitov@gmail.com (mailing list archive)
State Superseded
Delegated to: BPF
Series bpf: Introduce BPF arena.

Checks

Context Check Description
bpf/vmtest-bpf-next-PR fail PR summary
netdev/series_format fail Series longer than 15 patches (and no cover letter)
netdev/tree_selection success Clearly marked for bpf-next, async
netdev/ynl success Generated files up to date; no warnings/errors; no diff in generated;
netdev/fixes_present success Fixes tag not required for -next series
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 7798 this patch: 7798
netdev/build_tools success Errors and warnings before: 1 this patch: 1
netdev/cc_maintainers warning 13 maintainers not CCed: jolsa@kernel.org john.fastabend@gmail.com nathan@kernel.org yonghong.song@linux.dev song@kernel.org martin.lau@linux.dev sdf@google.com morbo@google.com justinstitt@google.com kpsingh@kernel.org ndesaulniers@google.com llvm@lists.linux.dev haoluo@google.com
netdev/build_clang success Errors and warnings before: 2369 this patch: 2369
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/deprecated_api success None detected
netdev/check_selftest success No net selftest shell script
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn fail Errors and warnings before: 8292 this patch: 8294
netdev/checkpatch fail CHECK: No space is necessary after a cast CHECK: Please use a blank line after function/struct/union/enum declarations CHECK: Prefer using the BIT_ULL macro CHECK: struct mutex definition without comment ERROR: that open brace { should be on the previous line WARNING: added, moved or deleted file(s), does MAINTAINERS need updating? WARNING: line length of 81 exceeds 80 columns WARNING: line length of 82 exceeds 80 columns WARNING: line length of 83 exceeds 80 columns WARNING: line length of 84 exceeds 80 columns WARNING: line length of 85 exceeds 80 columns WARNING: line length of 86 exceeds 80 columns WARNING: line length of 87 exceeds 80 columns WARNING: line length of 89 exceeds 80 columns WARNING: line length of 93 exceeds 80 columns WARNING: line length of 94 exceeds 80 columns WARNING: line length of 98 exceeds 80 columns
netdev/build_clang_rust success No Rust files in patch. Skipping build
netdev/kdoc success Errors and warnings before: 6 this patch: 6
netdev/source_inline success Was 0 now: 0
bpf/vmtest-bpf-next-VM_Test-1 success Logs for ShellCheck
bpf/vmtest-bpf-next-VM_Test-0 success Logs for Lint
bpf/vmtest-bpf-next-VM_Test-2 success Logs for Unittests
bpf/vmtest-bpf-next-VM_Test-3 success Logs for Validate matrix.py
bpf/vmtest-bpf-next-VM_Test-5 success Logs for aarch64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-4 fail Logs for aarch64-gcc / build / build for aarch64 with gcc
bpf/vmtest-bpf-next-VM_Test-6 success Logs for aarch64-gcc / test
bpf/vmtest-bpf-next-VM_Test-7 success Logs for aarch64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-9 success Logs for s390x-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-8 fail Logs for s390x-gcc / build / build for s390x with gcc
bpf/vmtest-bpf-next-VM_Test-10 success Logs for s390x-gcc / test
bpf/vmtest-bpf-next-VM_Test-11 success Logs for s390x-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-12 success Logs for set-matrix
bpf/vmtest-bpf-next-VM_Test-13 fail Logs for x86_64-gcc / build / build for x86_64 with gcc
bpf/vmtest-bpf-next-VM_Test-14 success Logs for x86_64-gcc / build-release
bpf/vmtest-bpf-next-VM_Test-15 success Logs for x86_64-gcc / test
bpf/vmtest-bpf-next-VM_Test-16 success Logs for x86_64-gcc / veristat
bpf/vmtest-bpf-next-VM_Test-17 fail Logs for x86_64-llvm-17 / build / build for x86_64 with llvm-17
bpf/vmtest-bpf-next-VM_Test-18 fail Logs for x86_64-llvm-17 / build-release / build for x86_64 with llvm-17 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-19 success Logs for x86_64-llvm-17 / test
bpf/vmtest-bpf-next-VM_Test-20 success Logs for x86_64-llvm-17 / veristat
bpf/vmtest-bpf-next-VM_Test-21 fail Logs for x86_64-llvm-18 / build / build for x86_64 with llvm-18
bpf/vmtest-bpf-next-VM_Test-22 fail Logs for x86_64-llvm-18 / build-release / build for x86_64 with llvm-18 and -O2 optimization
bpf/vmtest-bpf-next-VM_Test-23 success Logs for x86_64-llvm-18 / test
bpf/vmtest-bpf-next-VM_Test-24 success Logs for x86_64-llvm-18 / veristat

Commit Message

Alexei Starovoitov Feb. 6, 2024, 10:04 p.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

Introduce bpf_arena, which is a sparse shared memory region between the bpf
program and user space.

Use cases:
1. User space mmap-s bpf_arena and uses it as a traditional mmap-ed anonymous
   region, like memcached or any key/value storage. The bpf program implements an
   in-kernel accelerator. XDP prog can search for a key in bpf_arena and return a
   value without going to user space.
2. The bpf program builds arbitrary data structures in bpf_arena (hash tables,
   rb-trees, sparse arrays), while user space occasionally consumes it.
3. bpf_arena is a "heap" of memory from the bpf program's point of view. It is
   not shared with user space.

Initially, the kernel vm_area and user vma are not populated. User space can
fault in pages within the range. While servicing a page fault, bpf_arena logic
will insert a new page into the kernel and user vmas. The bpf program can
allocate pages from that region via bpf_arena_alloc_pages(). This kernel
function will insert pages into the kernel vm_area. The subsequent fault-in
from user space will populate that page into the user vma. The
BPF_F_SEGV_ON_FAULT flag at arena creation time can be used to prevent fault-in
from user space. In such a case, if a page is not allocated by the bpf program
and not present in the kernel vm_area, the user process will segfault. This is
useful for use cases 2 and 3 above.
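
For illustration, the user space side could look roughly like this (a sketch
only; the libbpf plumbing and the max_entries value are illustrative and not
part of this patch):

#include <sys/mman.h>
#include <bpf/bpf.h>

LIBBPF_OPTS(bpf_map_create_opts, opts,
	    .map_flags = BPF_F_MMAPABLE,	/* add BPF_F_SEGV_ON_FAULT for use cases 2 and 3 */
	    .map_extra = 0);			/* 0: let the first mmap() pick user_vm_start */
/* key_size == value_size == 8 is what arena_map_alloc() expects; max_entries is a placeholder */
int map_fd = bpf_map_create(BPF_MAP_TYPE_ARENA, "arena", 8, 8, 1 << 18, &opts);
/* shared, fault-able region; must not be bigger than 4Gb */
void *base = mmap(NULL, 1ull << 30, PROT_READ | PROT_WRITE, MAP_SHARED, map_fd, 0);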

bpf_arena_alloc_pages() is similar to user space mmap(). It allocates pages
either at a specific address within the arena or allocates a range with the
maple tree. bpf_arena_free_pages() is analogous to munmap(), which frees pages
and removes the range from the kernel vm_area and from user process vmas.
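
On the bpf program side the intended usage is roughly as follows (also a
sketch; the kfunc declarations and the arena map definition below are
illustrative):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_ARENA);
	__uint(map_flags, BPF_F_MMAPABLE);
	__uint(max_entries, 1);		/* placeholder */
	__type(key, __u64);
	__type(value, __u64);
} arena SEC(".maps");

void *bpf_arena_alloc_pages(void *map, void *addr, __u32 page_cnt,
			    int node_id, __u64 flags) __ksym;
void bpf_arena_free_pages(void *map, void *ptr, __u32 page_cnt) __ksym;

SEC("xdp")
int use_arena(struct xdp_md *ctx)
{
	/* NULL addr: let the maple tree pick a free range, like mmap(NULL, ...) */
	void *page = bpf_arena_alloc_pages(&arena, NULL, 1, -1 /* NUMA_NO_NODE */, 0);

	if (!page)
		return XDP_PASS;
	bpf_arena_free_pages(&arena, page, 1);	/* munmap()-like */
	return XDP_PASS;
}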

bpf_arena can be used as a bpf program "heap" of up to 4GB. The memory is not
shared with user space. This is use case 3. In such a case, the
BPF_F_NO_USER_CONV flag is recommended. It will tell the verifier to treat the
rX = bpf_arena_cast_user(rY) instruction as a 32-bit move wX = wY, which will
improve bpf prog performance. Otherwise, bpf_arena_cast_user is translated by
JIT to conditionally add the upper 32 bits of user vm_start (if the pointer is
not NULL) to arena pointers before they are stored into memory. This way, user
space sees them as valid 64-bit pointers.

The LLVM change https://github.com/llvm/llvm-project/pull/79902 taught the LLVM
BPF backend to generate the bpf_cast_kern() instruction before a dereference of
an arena pointer and the bpf_cast_user() instruction when an arena pointer is
formed. In a typical bpf program there will be very few bpf_cast_user() calls.

From LLVM's point of view, arena pointers are tagged as
__attribute__((address_space(1))). Hence, clang provides helpful diagnostics
when pointers cross address space. Libbpf and the kernel support only
address_space == 1. All other address space identifiers are reserved.
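
For example (the __arena macro is just a conventional shorthand, not something
this patch mandates):

#define __arena __attribute__((address_space(1)))

int __arena *p;	/* arena pointer, lives in address_space(1) */
int *q = p;	/* clang diagnoses the implicit cross-address-space conversion */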

rX = bpf_cast_kern(rY, addr_space) tells the verifier that
rX->type = PTR_TO_ARENA. Any further operations on PTR_TO_ARENA register have
to be in the 32-bit domain. The verifier will mark load/store through
PTR_TO_ARENA with PROBE_MEM32. JIT will generate them as
kern_vm_start + 32bit_addr memory accesses. The behavior is similar to
copy_from_kernel_nofault() except that no address checks are necessary. The
address is guaranteed to be in the 4GB range. If the page is not present, the
destination register is zeroed on read, and the operation is ignored on write.
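
Illustratively, with the __arena shorthand above, for an arena page that was
never allocated:

int __arena *p = ...;	/* points into a not-yet-populated arena page */

int x = *p;	/* load is zeroed: x == 0 */
*p = 1;		/* store is silently ignored */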

rX = bpf_cast_user(rY, addr_space) tells the verifier that
rX->type = unknown scalar. If arena->map_flags has BPF_F_NO_USER_CONV set, then
the verifier converts cast_user to mov32. Otherwise, JIT will emit native code
equivalent to:
rX = (u32)rY;
if (rX)
  rX |= arena->user_vm_start & ~(u64)~0U;

After such conversion, the pointer becomes a valid user pointer within
bpf_arena range. The user process can access data structures created in
bpf_arena without any additional computations. For example, a linked list built
by a bpf program can be walked natively by user space.
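
For example, the user space side of such a walk could be as simple as the
following sketch (the struct node layout and the location of the list head are
hypothetical):

struct node {
	__u64 key;
	struct node *next;	/* stored by the bpf prog as a full user pointer */
};

/* 'base' is the mmap()-ed arena; assume the bpf prog keeps the list head
 * pointer in the first 8 bytes of the arena.
 */
for (struct node *n = *(struct node **)base; n; n = n->next)
	printf("key %llu\n", n->key);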

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/bpf.h            |   5 +-
 include/linux/bpf_types.h      |   1 +
 include/uapi/linux/bpf.h       |   7 +
 kernel/bpf/Makefile            |   3 +
 kernel/bpf/arena.c             | 518 +++++++++++++++++++++++++++++++++
 kernel/bpf/core.c              |  11 +
 kernel/bpf/syscall.c           |   3 +
 kernel/bpf/verifier.c          |   1 +
 tools/include/uapi/linux/bpf.h |   7 +
 9 files changed, 554 insertions(+), 2 deletions(-)
 create mode 100644 kernel/bpf/arena.c

Comments

Barret Rhoden Feb. 7, 2024, 6:40 p.m. UTC | #1
On 2/6/24 17:04, Alexei Starovoitov wrote:
> +
> +static long compute_pgoff(struct bpf_arena *arena, long uaddr)
> +{
> +	return (u32)(uaddr - (u32)arena->user_vm_start) >> PAGE_SHIFT;
> +}
> +
> +#define MT_ENTRY ((void *)&arena_map_ops) /* unused. has to be valid pointer */
> +
> +/*
> + * Reserve a "zero page", so that bpf prog and user space never see
> + * a pointer to arena with lower 32 bits being zero.
> + * bpf_cast_user() promotes it to full 64-bit NULL.
> + */
> +static int reserve_zero_page(struct bpf_arena *arena)
> +{
> +	long pgoff = compute_pgoff(arena, 0);
> +
> +	return mtree_insert(&arena->mt, pgoff, MT_ENTRY, GFP_KERNEL);
> +}
> +

this is pretty tricky, and i think i didn't understand it at first.

you're punching a hole in the arena, such that BPF won't allocate it via 
arena_alloc_pages().  thus BPF won't 'produce' an object with a pointer 
ending in 0x00000000.

depending on where userspace mmaps the arena, that hole may or may not 
be the first page in the array.  if userspace mmaps it to a 4GB aligned 
virtual address, it'll be page 0.  but it could be at some arbitrary 
offset within the 4GB arena.

that arbitrariness makes it harder for a BPF program to do its own 
allocations within the arena.  i'm planning on carving up the 4GB arena 
for my own purposes, managed by BPF, with the expectation that i'll be 
able to allocate any 'virtual address' within the arena.  but there's a 
magic page that won't be usable.

i can certainly live with this.  just mmap userspace to a 4GB aligned 
address + PGSIZE, so that the last page in the arena is page 0.  but 
it's a little weird.

though i think we'll have more serious issues if anyone accidentally 
tries to use that zero page.  BPF would get an EEXIST if they try to 
allocate it directly, but then page fault and die if they touched it, 
since there's no page.  i can live with that, if we force it to be the 
last page in the arena.

however, i think you need to add something to the fault handler (below) 
in case userspace touches that page:

[snip]
> +static vm_fault_t arena_vm_fault(struct vm_fault *vmf)
> +{
> +	struct bpf_map *map = vmf->vma->vm_file->private_data;
> +	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +	struct page *page;
> +	long kbase, kaddr;
> +	int ret;
> +
> +	kbase = bpf_arena_get_kern_vm_start(arena);
> +	kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
> +
> +	guard(mutex)(&arena->lock);
> +	page = vmalloc_to_page((void *)kaddr);
> +	if (page)
> +		/* already have a page vmap-ed */
> +		goto out;
> +
> +	if (arena->map.map_flags & BPF_F_SEGV_ON_FAULT)
> +		/* User space requested to segfault when page is not allocated by bpf prog */
> +		return VM_FAULT_SIGSEGV;
> +
> +	ret = mtree_insert(&arena->mt, vmf->pgoff, MT_ENTRY, GFP_KERNEL);
> +	if (ret == -EEXIST)
> +		return VM_FAULT_RETRY;

say this was the zero page.  vmalloc_to_page() failed, so we tried to 
insert.  we get EEXIST, since the slot is reserved.  we retry, since we 
were expecting the case where "no page, yet slot reserved" meant that 
BPF was in the middle of filling this page.

though i think you can fix this by just treating this as a SIGSEGV 
instead of RETRY.  when i made the original suggestion of making this a 
retry (in an email off list), that was before you had the arena mutex. 
now that you have the mutex, you shouldn't have the scenario where two 
threads are concurrently trying to fill a page.  i.e. mtree_insert + 
page_alloc + vmap are all atomic w.r.t. the mutex.


> +	if (ret)
> +		return VM_FAULT_SIGSEGV;
> +
> +	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
> +	if (!page) {
> +		mtree_erase(&arena->mt, vmf->pgoff);
> +		return VM_FAULT_SIGSEGV;
> +	}
> +
> +	ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE, PAGE_KERNEL, &page, PAGE_SHIFT);
> +	if (ret) {
> +		mtree_erase(&arena->mt, vmf->pgoff);
> +		__free_page(page);
> +		return VM_FAULT_SIGSEGV;
> +	}
> +out:
> +	page_ref_add(page, 1);
> +	vmf->page = page;
> +	return 0;
> +}

[snip]

> +static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
> +{
> +	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +	int err;
> +
> +	if (arena->user_vm_start && arena->user_vm_start != vma->vm_start)
> +		/*
> +		 * 1st user process can do mmap(NULL, ...) to pick user_vm_start
> +		 * 2nd user process must pass the same addr to mmap(addr, MAP_FIXED..);
> +		 *   or
> +		 * specify addr in map_extra at map creation time and
> +		 * use the same addr later with mmap(addr, MAP_FIXED..);
> +		 */
> +		return -EBUSY;
> +
> +	if (arena->user_vm_end && arena->user_vm_end != vma->vm_end)
> +		/* all user processes must have the same size of mmap-ed region */
> +		return -EBUSY;
> +
> +	if (vma->vm_end - vma->vm_start > 1ull << 32)
> +		/* Must not be bigger than 4Gb */
> +		return -E2BIG;
> +
> +	if (remember_vma(arena, vma))
> +		return -ENOMEM;
> +
> +	if (!arena->user_vm_start) {
> +		arena->user_vm_start = vma->vm_start;
> +		err = reserve_zero_page(arena);
> +		if (err)
> +			return err;
> +	}
> +	arena->user_vm_end = vma->vm_end;
> +	/*
> +	 * bpf_map_mmap() checks that it's being mmaped as VM_SHARED and
> +	 * clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
> +	 * potential change of user_vm_start.
> +	 */
> +	vm_flags_set(vma, VM_DONTEXPAND);
> +	vma->vm_ops = &arena_vm_ops;
> +	return 0;
> +}

i think this whole function needs to be protected by the mutex, or at 
least all the stuff related to user_vm_{start,end}.  if you have two 
threads mmapping the region for the first time, you'll race on the 
values of user_vm_*.


[snip]

> +/*
> + * Allocate pages and vmap them into kernel vmalloc area.
> + * Later the pages will be mmaped into user space vma.
> + */
> +static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)

instead of uaddr, can you change this to take an address relative to the 
arena ("arena virtual address"?)?  the caller of this is in BPF, and 
they don't easily know the user virtual address.  maybe even just pgoff 
directly.

additionally, you won't need to call compute_pgoff().  as it is now, i'm 
not sure what would happen if BPF did an arena_alloc with a uaddr and 
user_vm_start wasn't set yet.  actually, i guess it'd just be 0, so 
uaddr would act like an arena virtual address, up until the moment where 
someone mmaps, then it'd suddenly change to be a user virtual address.

either way, making uaddr an arena-relative addr would make all that moot.


> +{
> +	long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;

any time you compute_pgoff() or look at user_vm_{start,end}, maybe 
either hold the mutex, or only do it from mmap faults (where we know 
user_vm_start is already set).  o/w there might be subtle races where 
some other thread is mmapping the arena for the first time.


> +	u64 kern_vm_start = bpf_arena_get_kern_vm_start(arena);
> +	long pgoff = 0, kaddr, nr_pages = 0;
> +	struct page **pages;
> +	int ret, i;
> +
> +	if (page_cnt >= page_cnt_max)
> +		return 0;
> +
> +	if (uaddr) {
> +		if (uaddr & ~PAGE_MASK)
> +			return 0;
> +		pgoff = compute_pgoff(arena, uaddr);
> +		if (pgoff + page_cnt > page_cnt_max)
> +			/* requested address will be outside of user VMA */
> +			return 0;
> +	}
> +
> +	/* zeroing is needed, since alloc_pages_bulk_array() only fills in non-zero entries */
> +	pages = kvcalloc(page_cnt, sizeof(struct page *), GFP_KERNEL);
> +	if (!pages)
> +		return 0;
> +
> +	guard(mutex)(&arena->lock);
> +
> +	if (uaddr)
> +		ret = mtree_insert_range(&arena->mt, pgoff, pgoff + page_cnt,
> +					 MT_ENTRY, GFP_KERNEL);
> +	else
> +		ret = mtree_alloc_range(&arena->mt, &pgoff, MT_ENTRY,
> +					page_cnt, 0, page_cnt_max, GFP_KERNEL);
> +	if (ret)
> +		goto out_free_pages;
> +
> +	nr_pages = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_ZERO, node_id, page_cnt, pages);
> +	if (nr_pages != page_cnt)
> +		goto out;
> +
> +	kaddr = kern_vm_start + (u32)(arena->user_vm_start + pgoff * PAGE_SIZE);

adding user_vm_start here is pretty subtle.

so far i've been thinking that the mtree is the "address space" of the 
arena, in units of pages instead of bytes.  and pgoff is an address 
within the arena.  so mtree slot 0 is the 0th page of the 4GB region. 
and that "arena address space" is mapped at a kernel virtual address and 
a user virtual address (the same for all processes).

to convert user addresses (uaddr et al.) to arena addresses, we subtract 
user_vm_start, which makes sense.  that's what compute_pgoff() does.

i was expecting kaddr = kern_vm_start + pgoff * PGSIZE, essentially 
converting from arena address space to kernel virtual address.

instead, by adding user_vm_start and casting to u32, you're converting 
or shifting arena addresses to *another* arena address (user address, 
truncated to 4GB to keep it in the arena), and that is the one that the 
kernel will use.

is that correct?

my one concern is that there's some subtle wrap-around going on, and due 
to the shifting, kaddr can be very close to the end of the arena and 
page_cnt can be big enough to go outside the 4GB range.  we'd want it to 
wrap around.  e.g.

user_start_va = 0x1,fffff000
user_end_va =   0x2,fffff000
page_cnt_max = 0x100000 or whatever.  full 4GB range.

say we want to alloc at pgoff=0 (uaddr 0x1,fffff000), page_cnt = X.  you 
can get this pgoff either by doing mtree_insert_range or 
mtree_alloc_range on an arena with no allocations.

kaddr = kern_vm_start + 0xfffff000

the size of the vm area is 4GB + guard stuff, and we're right up against 
the end of it.

if page_cnt > the guard size, vmap_pages_range() would be called on 
something outside the vm area we reserved, which seems bad.  and even if 
it wasn't, what we want is for later page maps to start at the beginning 
of kern_vm_start.

the fix might be to just only map a page at a time - maybe a loop.  or 
detect when we're close to the edge and break it into two vmaps.  i feel 
like the loop would be easier to understand, but maybe less efficient.
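
e.g. something like this for the loop version (untested, just to show the 
shape, reusing the names from this function):

	for (i = 0; i < page_cnt; i++) {
		u64 kaddr = kern_vm_start +
			    (u32)(arena->user_vm_start + (pgoff + i) * PAGE_SIZE);

		ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE, PAGE_KERNEL,
				       &pages[i], PAGE_SHIFT);
		if (ret)
			break;	/* and unwind the pages mapped so far */
	}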

> +	ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE * page_cnt, PAGE_KERNEL,
> +			       pages, PAGE_SHIFT);
> +	if (ret)
> +		goto out;
> +	kvfree(pages);
> +	return clear_lo32(arena->user_vm_start) + (u32)(kaddr - kern_vm_start);
> +out:
> +	mtree_erase(&arena->mt, pgoff);
> +out_free_pages:
> +	if (pages)
> +		for (i = 0; i < nr_pages; i++)
> +			__free_page(pages[i]);
> +	kvfree(pages);
> +	return 0;
> +}

thanks,
barret



> +
> +/*
> + * If page is present in vmalloc area, unmap it from vmalloc area,
> + * unmap it from all user space vma-s,
> + * and free it.
> + */
> +static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> +{
> +	struct vma_list *vml;
> +
> +	list_for_each_entry(vml, &arena->vma_list, head)
> +		zap_page_range_single(vml->vma, uaddr,
> +				      PAGE_SIZE * page_cnt, NULL);
> +}
> +
> +static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
> +{
> +	u64 full_uaddr, uaddr_end;
> +	long kaddr, pgoff, i;
> +	struct page *page;
> +
> +	/* only aligned lower 32-bit are relevant */
> +	uaddr = (u32)uaddr;
> +	uaddr &= PAGE_MASK;
> +	full_uaddr = clear_lo32(arena->user_vm_start) + uaddr;
> +	uaddr_end = min(arena->user_vm_end, full_uaddr + (page_cnt << PAGE_SHIFT));
> +	if (full_uaddr >= uaddr_end)
> +		return;
> +
> +	page_cnt = (uaddr_end - full_uaddr) >> PAGE_SHIFT;
> +
> +	kaddr = bpf_arena_get_kern_vm_start(arena) + uaddr;
> +
> +	guard(mutex)(&arena->lock);
> +
> +	pgoff = compute_pgoff(arena, uaddr);
> +	/* clear range */
> +	mtree_store_range(&arena->mt, pgoff, pgoff + page_cnt, NULL, GFP_KERNEL);
> +
> +	if (page_cnt > 1)
> +		/* bulk zap if multiple pages being freed */
> +		zap_pages(arena, full_uaddr, page_cnt);
> +
> +	for (i = 0; i < page_cnt; i++, kaddr += PAGE_SIZE, full_uaddr += PAGE_SIZE) {
> +		page = vmalloc_to_page((void *)kaddr);
> +		if (!page)
> +			continue;
> +		if (page_cnt == 1 && page_mapped(page)) /* mapped by some user process */
> +			zap_pages(arena, full_uaddr, 1);
> +		vunmap_range(kaddr, kaddr + PAGE_SIZE);
> +		__free_page(page);
> +	}
> +}
> +
> +__bpf_kfunc_start_defs();
> +
> +__bpf_kfunc void *bpf_arena_alloc_pages(void *p__map, void *addr__ign, u32 page_cnt,
> +					int node_id, u64 flags)
> +{
> +	struct bpf_map *map = p__map;
> +	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +
> +	if (map->map_type != BPF_MAP_TYPE_ARENA || !arena->user_vm_start || flags)
> +		return NULL;
> +
> +	return (void *)arena_alloc_pages(arena, (long)addr__ign, page_cnt, node_id);
> +}
> +
> +__bpf_kfunc void bpf_arena_free_pages(void *p__map, void *ptr__ign, u32 page_cnt)
> +{
> +	struct bpf_map *map = p__map;
> +	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> +
> +	if (map->map_type != BPF_MAP_TYPE_ARENA || !arena->user_vm_start)
> +		return;
> +	arena_free_pages(arena, (long)ptr__ign, page_cnt);
> +}
Alexei Starovoitov Feb. 7, 2024, 8:55 p.m. UTC | #2
On Wed, Feb 7, 2024 at 10:40 AM Barret Rhoden <brho@google.com> wrote:
>
> On 2/6/24 17:04, Alexei Starovoitov wrote:
> > +
> > +static long compute_pgoff(struct bpf_arena *arena, long uaddr)
> > +{
> > +     return (u32)(uaddr - (u32)arena->user_vm_start) >> PAGE_SHIFT;
> > +}
> > +
> > +#define MT_ENTRY ((void *)&arena_map_ops) /* unused. has to be valid pointer */
> > +
> > +/*
> > + * Reserve a "zero page", so that bpf prog and user space never see
> > + * a pointer to arena with lower 32 bits being zero.
> > + * bpf_cast_user() promotes it to full 64-bit NULL.
> > + */
> > +static int reserve_zero_page(struct bpf_arena *arena)
> > +{
> > +     long pgoff = compute_pgoff(arena, 0);
> > +
> > +     return mtree_insert(&arena->mt, pgoff, MT_ENTRY, GFP_KERNEL);
> > +}
> > +
>
> this is pretty tricky, and i think i didn't understand it at first.
>
> you're punching a hole in the arena, such that BPF won't allocate it via
> arena_alloc_pages().  thus BPF won't 'produce' an object with a pointer
> ending in 0x00000000.
>
> depending on where userspace mmaps the arena, that hole may or may not
> be the first page in the array.  if userspace mmaps it to a 4GB aligned
> virtual address, it'll be page 0.  but it could be at some arbitrary
> offset within the 4GB arena.
>
> that arbitrariness makes it harder for a BPF program to do its own
> allocations within the arena.  i'm planning on carving up the 4GB arena
> for my own purposes, managed by BPF, with the expectation that i'll be
> able to allocate any 'virtual address' within the arena.  but there's a
> magic page that won't be usable.
>
> i can certainly live with this.  just mmap userspace to a 4GB aligned
> address + PGSIZE, so that the last page in the arena is page 0.  but
> it's a little weird.

Agree. I came to the same conclusion while adding global variables to the arena.
From the compiler's point of view all such global vars start at offset zero
and there is no way to just "move them up by a page".
For example in C code it will look like:
int __arena var1;
int __arena var2;

&var1 == user_vm_start
&var2 == user_vm_start + 4

If __ulong(map_extra,...) or mmap(addr, MAP_FIXED) was used
and the address was 4Gb aligned, the lower 32 bits of the &var1 address will be
zero and there is not much we can do about it.
We can tell LLVM to emit extra 8-byte padding in the arena section,
but it will be a useless pad if the arena is not aligned to 4Gb.

Anyway, in the v2 I will remove this reserve_zero_page() logic.
It's causing more harm than good.

>
> though i think we'll have more serious issues if anyone accidentally
> tries to use that zero page.  BPF would get an EEXIST if they try to
> allocate it directly, but then page fault and die if they touched it,
> since there's no page.  i can live with that, if we force it to be the
> last page in the arena.
>
> however, i think you need to add something to the fault handler (below)
> in case userspace touches that page:
>
> [snip]
> > +static vm_fault_t arena_vm_fault(struct vm_fault *vmf)
> > +{
> > +     struct bpf_map *map = vmf->vma->vm_file->private_data;
> > +     struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
> > +     struct page *page;
> > +     long kbase, kaddr;
> > +     int ret;
> > +
> > +     kbase = bpf_arena_get_kern_vm_start(arena);
> > +     kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
> > +
> > +     guard(mutex)(&arena->lock);
> > +     page = vmalloc_to_page((void *)kaddr);
> > +     if (page)
> > +             /* already have a page vmap-ed */
> > +             goto out;
> > +
> > +     if (arena->map.map_flags & BPF_F_SEGV_ON_FAULT)
> > +             /* User space requested to segfault when page is not allocated by bpf prog */
> > +             return VM_FAULT_SIGSEGV;
> > +
> > +     ret = mtree_insert(&arena->mt, vmf->pgoff, MT_ENTRY, GFP_KERNEL);
> > +     if (ret == -EEXIST)
> > +             return VM_FAULT_RETRY;
>
> say this was the zero page.  vmalloc_to_page() failed, so we tried to
> insert.  we get EEXIST, since the slot is reserved.  we retry, since we
> were expecting the case where "no page, yet slot reserved" meant that
> BPF was in the middle of filling this page.

Yes. Great catch! I hit that too while playing with global vars.

>
> though i think you can fix this by just treating this as a SIGSEGV
> instead of RETRY.

Agree.

> when i made the original suggestion of making this a
> retry (in an email off list), that was before you had the arena mutex.
> now that you have the mutex, you shouldn't have the scenario where two
> threads are concurrently trying to fill a page.  i.e. mtree_insert +
> page_alloc + vmap are all atomic w.r.t. the mutex.

yes. mutex part makes sense.

> > +
> > +     if (!arena->user_vm_start) {
> > +             arena->user_vm_start = vma->vm_start;
> > +             err = reserve_zero_page(arena);
> > +             if (err)
> > +                     return err;
> > +     }
> > +     arena->user_vm_end = vma->vm_end;
> > +     /*
> > +      * bpf_map_mmap() checks that it's being mmaped as VM_SHARED and
> > +      * clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
> > +      * potential change of user_vm_start.
> > +      */
> > +     vm_flags_set(vma, VM_DONTEXPAND);
> > +     vma->vm_ops = &arena_vm_ops;
> > +     return 0;
> > +}
>
> i think this whole function needs to be protected by the mutex, or at
> least all the stuff related to user_vm_{start,end}.  if you have two
> threads mmapping the region for the first time, you'll race on the
> values of user_vm_*.

yes. will add a mutex guard.

>
> [snip]
>
> > +/*
> > + * Allocate pages and vmap them into kernel vmalloc area.
> > + * Later the pages will be mmaped into user space vma.
> > + */
> > +static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)
>
> instead of uaddr, can you change this to take an address relative to the
> arena ("arena virtual address"?)?  the caller of this is in BPF, and
> they don't easily know the user virtual address.  maybe even just pgoff
> directly.

I thought about it, but it doesn't quite make sense.
bpf prog only sees user addresses.
All load/store returns them. If it bpf_printk-s an address it will be
user address.
bpf_arena_alloc_pages() also returns a user address.

Kernel addresses are not seen by bpf prog at all.
kern_vm_base is completely hidden.
Only at JIT time, it's added to pointers.
So passing uaddr to arena_alloc_pages() matches mmap style.

uaddr = bpf_arena_alloc_pages(... uaddr ...)
uaddr = mmap(uaddr, ...MAP_FIXED)

Passing pgoff would be weird.
Also note that there is no extra flag for bpf_arena_alloc_pages().
uaddr == full 64-bit of zeros is not a valid addr to use.

> additionally, you won't need to call compute_pgoff().  as it is now, i'm
> not sure what would happen if BPF did an arena_alloc with a uaddr and
> user_vm_start wasn't set yet.

That's impossible. The bpf prog won't load unless the arena map was created
either with a fixed map_extra == user_vm_start, or with map_extra == 0 and
then mmaped.
Only then can a bpf prog that uses that arena be loaded.

>
> > +{
> > +     long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;
>
> any time you compute_pgoff() or look at user_vm_{start,end}, maybe
> either hold the mutex, or only do it from mmap faults (where we know
> user_vm_start is already set).  o/w there might be subtle races where
> some other thread is mmapping the arena for the first time.

That's unnecessary, since user_vm_start is fixed for the lifetime of
the bpf prog.
But you spotted a bug. I need to set user_vm_end in arena_map_alloc()
when map_extra is specified.
Otherwise, between arena creation and mmap, the bpf prog will see a different
page_cnt_max. user_vm_start will be the same, of course.

> > +
> > +     kaddr = kern_vm_start + (u32)(arena->user_vm_start + pgoff * PAGE_SIZE);
>
> adding user_vm_start here is pretty subtle.
>
> so far i've been thinking that the mtree is the "address space" of the
> arena, in units of pages instead of bytes.  and pgoff is an address
> within the arena.  so mtree slot 0 is the 0th page of the 4GB region.
> and that "arena address space" is mapped at a kernel virtual address and
> a user virtual address (the same for all processes).
>
> to convert user addresses (uaddr et al.) to arena addresses, we subtract
> user_vm_start, which makes sense.  that's what compute_pgoff() does.
>
> i was expecting kaddr = kern_vm_start + pgoff * PGSIZE, essentially
> converting from arena address space to kernel virtual address.
>
> instead, by adding user_vm_start and casting to u32, you're converting
> or shifting arena addresses to *another* arena address (user address,
> truncated to 4GB to keep it in the arena), and that is the one that the
> kernel will use.
>
> is that correct?

Pretty much. Kernel and user have to see the lower 32 bits exactly the same.
From the user pov allocation starts at pgoff=0, which is the first page
at user_vm_start. That is normal mmap behavior and the in-kernel
vma convention.
Hence arena_alloc_pages() does the same: pgoff=0 is the first
page from the user vma pov.
The math to compute the kernel address gets complicated because two
bases need to be added. The kernel and user bases are different
64-bit values and may not be aligned to 4Gb.
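
For example, with the numbers from the comment at the top of arena.c:

user_vm_start = 0x7f7d26200000
kern_vm_start = 0xffffc90001e69000
pgoff = 3

kaddr = 0xffffc90001e69000 + (u32)(0x7f7d26200000 + 3 * PAGE_SIZE)
      = 0xffffc90001e69000 + 0x26203000

so the lower 32 bits of the kernel and user addresses for that page are the
same 0x26203000.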

> my one concern is that there's some subtle wrap-around going on, and due
> to the shifting, kaddr can be very close to the end of the arena and
> page_cnt can be big enough to go outside the 4GB range.  we'd want it to

page_cnt cannot go outside of the 4Gb range.
page_cnt is the number of pages in arena->user_vm_end - arena->user_vm_start,
and during mmap we check that it's <= 4Gb.

> wrap around.  e.g.
>
> user_start_va = 0x1,fffff000
> user_end_va =   0x2,fffff000
> page_cnt_max = 0x100000 or whatever.  full 4GB range.
>
> say we want to alloc at pgoff=0 (uaddr 0x1,fffff000), page_cnt = X.  you
> can get this pgoff either by doing mtree_insert_range or
> mtree_alloc_range on an arena with no allocations.
>
> kaddr = kern_vm_start + 0xfffff000
>
> the size of the vm area is 4GB + guard stuff, and we're right up against
> the end of it.
>
> if page_cnt > the guard size, vmap_pages_range() would be called on
> something outside the vm area we reserved, which seems bad.  and even if
> it wasn't, what we want is for later page maps to start at the beginning
> of kern_vm_start.
>
> the fix might be to just only map a page at a time - maybe a loop.  or
> detect when we're close to the edge and break it into two vmaps.  i feel
> like the loop would be easier to understand, but maybe less efficient.

Oops. You're correct. Great catch.
In earlier versions I had it as a loop, but then I decided that doing
all mapping ops page at a time is not efficient.
Oh well. Will fix.

Thanks a lot for the review!
Barret Rhoden Feb. 7, 2024, 9:11 p.m. UTC | #3
On 2/7/24 15:55, Alexei Starovoitov wrote:
>> instead of uaddr, can you change this to take an address relative to the
>> arena ("arena virtual address"?)?  the caller of this is in BPF, and
>> they don't easily know the user virtual address.  maybe even just pgoff
>> directly.
> I thought about it, but it doesn't quite make sense.
> bpf prog only sees user addresses.
> All load/store returns them. If it bpf_printk-s an address it will be
> user address.
> bpf_arena_alloc_pages() also returns a user address.

Yeah, makes sense to keep them all in the same address space.

> 
> Kernel addresses are not seen by bpf prog at all.
> kern_vm_base is completely hidden.
> Only at JIT time, it's added to pointers.
> So passing uaddr to arena_alloc_pages() matches mmap style.
> 
> uaddr = bpf_arena_alloc_pages(... uaddr ...)
> uaddr = mmap(uaddr, ...MAP_FIXED)
> 
> Passing pgoff would be weird.
> Also note that there is no extra flag for bpf_arena_alloc_pages().
> uaddr == full 64-bit of zeros is not a valid addr to use.

The problem I had with uaddr was that when I'm writing a BPF program, I 
don't know which address to use for a given page, e.g. the beginning of 
the arena.  I needed some way to tell me the user address "base" of the 
arena.  Though now that I can specify the user_vm_start through the 
map_extra, I think I'm ok.

Specifically, say I want to break up my arena into two 2GB chunks, one 
for each numa node, and I want to bump-allocate from each chunk.  When I 
want to allocate the first page from either segment, I'll need to know 
what user address is offset 0 or offset 2GB.

Since I know the user_vm_start at compile time, I can just hardcode that 
to convert from an "arena address" (e.g. pgoff) to the user address space.
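
roughly what i have in mind (hypothetical names, no locking, assuming the 
kfunc signature from this patch and an arena map called 'arena'):

#define ARENA_BASE	0x100000000000ULL	/* whatever goes into map_extra */
#define CHUNK_SZ	(1ULL << 31)		/* 2GB per numa node */

static __u64 next_off[2];			/* bump cursors, one per node */

static void *bump_alloc_pages(int node, __u32 page_cnt)
{
	__u64 uaddr = ARENA_BASE + node * CHUNK_SZ + next_off[node];
	void *p;

	p = bpf_arena_alloc_pages(&arena, (void *)uaddr, page_cnt, node, 0);
	if (p)
		next_off[node] += page_cnt * 4096;
	return p;
}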

thanks,

barret
Alexei Starovoitov Feb. 8, 2024, 6:26 a.m. UTC | #4
On Wed, Feb 7, 2024 at 1:12 PM Barret Rhoden <brho@google.com> wrote:
>
> On 2/7/24 15:55, Alexei Starovoitov wrote:
> >> instead of uaddr, can you change this to take an address relative to the
> >> arena ("arena virtual address"?)?  the caller of this is in BPF, and
> >> they don't easily know the user virtual address.  maybe even just pgoff
> >> directly.
> > I thought about it, but it doesn't quite make sense.
> > bpf prog only sees user addresses.
> > All load/store returns them. If it bpf_printk-s an address it will be
> > user address.
> > bpf_arena_alloc_pages() also returns a user address.
>
> Yeah, makes sense to keep them all in the same address space.
>
> >
> > Kernel addresses are not seen by bpf prog at all.
> > kern_vm_base is completely hidden.
> > Only at JIT time, it's added to pointers.
> > So passing uaddr to arena_alloc_pages() matches mmap style.
> >
> > uaddr = bpf_arena_alloc_pages(... uaddr ...)
> > uaddr = mmap(uaddr, ...MAP_FIXED)
> >
> > Passing pgoff would be weird.
> > Also note that there is no extra flag for bpf_arena_alloc_pages().
> > uaddr == full 64-bit of zeros is not a valid addr to use.
>
> The problem I had with uaddr was that when I'm writing a BPF program, I
> don't know which address to use for a given page, e.g. the beginning of
> the arena.  I needed some way to tell me the user address "base" of the
> arena.  Though now that I can specify the user_vm_start through the
> map_extra, I think I'm ok.
>
> Specifically, say I want to break up my arena into two 2GB chunks, one
> for each numa node, and I want to bump-allocate from each chunk.  When I
> want to allocate the first page from either segment, I'll need to know
> what user address is offset 0 or offset 2GB.

bump allocate... you mean like page_frag alloc does?
I've implemented one on top of arena:
https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/tree/tools/testing/selftests/bpf/bpf_arena_alloc.h?h=arena&id=36d78b0f1c14c959d907d68cd7d54439b9213d0c

Also I believe I addressed all issues with missing mutex and wrap around,
and pushed to:
https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=e1cb522fee661e7346e8be567eade9cf607eaf11
Please take a look.

Including the wrap around test in the last commit:
https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=01653c393a4167ccca23dc5a69aa9cf34a46eabd

Will wait a bit before sending v2.
Barret Rhoden Feb. 8, 2024, 9:58 p.m. UTC | #5
On 2/8/24 01:26, Alexei Starovoitov wrote:
> Also I believe I addressed all issues with missing mutex and wrap around,
> and pushed to:
> https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=e1cb522fee661e7346e8be567eade9cf607eaf11
> Please take a look.

LGTM, thanks.

minor things:

> +static void arena_vm_close(struct vm_area_struct *vma)
> +{
> +	struct vma_list *vml;
> +
> +	vml = vma->vm_private_data;
> +	list_del(&vml->head);
> +	vma->vm_private_data = NULL;
> +	kfree(vml);
> +}

i think this also needs to be protected by the arena mutex.  otherwise two 
VMAs that close at the same time can corrupt the arena vma_list.  or a 
VMA that closes while you're zapping.

remember_vma() already has the mutex held, since it's called from mmap.

> +static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)
> +{
> +	long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;

this function and arena_free_pages() are both using user_vm_start/end 
before grabbing the mutex.  so need to grab the mutex very early.

alternatively, you could make it so that the user must set the 
user_vm_start via map_extra, so you don't have to worry about these 
changing after the arena is created.

thanks,

barret
Alexei Starovoitov Feb. 8, 2024, 11:36 p.m. UTC | #6
On Thu, Feb 8, 2024 at 1:58 PM Barret Rhoden <brho@google.com> wrote:
>
> On 2/8/24 01:26, Alexei Starovoitov wrote:
> > Also I believe I addressed all issues with missing mutex and wrap around,
> > and pushed to:
> > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=e1cb522fee661e7346e8be567eade9cf607eaf11
> > Please take a look.
>
> LGTM, thanks.
>
> minor things:
>
> > +static void arena_vm_close(struct vm_area_struct *vma)
> > +{
> > +     struct vma_list *vml;
> > +
> > +     vml = vma->vm_private_data;
> > +     list_del(&vml->head);
> > +     vma->vm_private_data = NULL;
> > +     kfree(vml);
> > +}
>
> i think this also needs to be protected by the arena mutex.  otherwise two
> VMAs that close at the same time can corrupt the arena vma_list.  or a
> VMA that closes while you're zapping.

Excellent catch.

> remember_vma() already has the mutex held, since it's called from mmap.
>
> > +static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)
> > +{
> > +     long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;
>
> this function and arena_free_pages() are both using user_vm_start/end
> before grabbing the mutex.  so need to grab the mutex very early.
>
> alternatively, you could make it so that the user must set the
> user_vm_start via map_extra, so you don't have to worry about these
> changing after the arena is created.

Looks like I lost the diff hunk where the verifier checks that
the arena has user_vm_start set before loading the prog.
And for some reason I forgot to remove the
if (!arena->user_vm_start) return ...
checks in bpf_arena_alloc/free_pages().
I'll remove the latter and add the verifier enforcement back.
The intent was to never call arena_alloc/free_pages() when the arena is
not fully formed.
Once that's fixed there will be no race in arena_alloc_pages(), since
user_vm_start/end are fixed before the program is loaded.

One more thing.
The vmap_pages_range_wrap32() fix that you saw in that commit is not
enough.
Turns out that [%r12 + src_reg + off] in the JIT asm doesn't
fully conform to "the kernel bounds all accesses to 32-bit".
That "+ off" part is added _after_ src_reg is bounded to 32-bit.
Remember, that was the reason we added guard pages before and after
the kernel 4Gb vm area.
It's working as intended, but for this wrap32 case we need to
map one page into the normal kernel vma _and_ into the guard page.
Consider your example:
user_start_va = 0x1,fffff000
user_end_va =   0x2,fffff000

the pgoff = 0 is uaddr 0x1,fffff000.
It's kaddr = kern_vm_start + 0xfffff000
and kaddr + PAGE_SIZE is kern_vm_start + 0.

When a bpf prog accesses an arena pointer it can do:
dst_reg = *(u64 *)(src_reg + 0)
and
dst_reg = *(u64 *)(src_reg + 4096)

the first LDX is fine, but the 2nd will fault
when src_reg is fffff000.
From the user space pov it's a virtually contiguous address range.
For the bpf prog it's also contiguous when src_reg is bounded to 32-bit,
but the "+ 4096" breaks that.
The 2nd load becomes:
kern_vm_start + 0xfffff000 + 4096
and it faults.
Theoretically a solution is to do:
kern_vm_start + (u32)(0xfffff000 + 4096)
in JIT, but that is too expensive.

Hence I went with an arena-side fix (ignore the lack of error checking):
static int vunmap_guard_pages(u64 kern_vm_start, u64 start, u64 end)
{
        end = (u32)end;
        if (start < S16_MAX) {
                u64 end1 = min(end, S16_MAX + 1);

                vunmap_range(kern_vm_start + (1ull << 32) + start,
                             kern_vm_start + (1ull << 32) + end1);
        }

        if (end >= U32_MAX - S16_MAX + 1) {
                u64 start2 = max(start, U32_MAX - S16_MAX + 1);

                vunmap_range(kern_vm_start - (1ull << 32) + start2,
                             kern_vm_start - (1ull << 32) + end);
        }
        return 0;
}
static int vmap_pages_range_wrap32(u64 kern_vm_start, u64 uaddr, u64 page_cnt,
                                   struct page **pages)
{
        u64 start = kern_vm_start + uaddr;
        u64 end = start + page_cnt * PAGE_SIZE;
        u64 part1_page_cnt, start2, end2;
        int ret;

        if (page_cnt == 1 || !((uaddr + page_cnt * PAGE_SIZE) >> 32)) {
                /* uaddr doesn't overflow in 32-bit */
                ret = vmap_pages_range(start, end, PAGE_KERNEL, pages, PAGE_SHIFT);
                if (ret)
                        return ret;
                vmap_guard_pages(kern_vm_start, uaddr,
                                 uaddr + page_cnt * PAGE_SIZE, pages);
                return 0;
        }

        part1_page_cnt = ((1ull << 32) - (u32)uaddr) >> PAGE_SHIFT;
        end = start + part1_page_cnt * PAGE_SIZE;
        ret = vmap_pages_range(start, end, PAGE_KERNEL, pages, PAGE_SHIFT);
        if (ret)
                return ret;

        vmap_guard_pages(kern_vm_start, uaddr,
                         uaddr + part1_page_cnt * PAGE_SIZE, pages);

        start2 = kern_vm_start;
        end2 = start2 + (page_cnt - part1_page_cnt) * PAGE_SIZE;
        ret = vmap_pages_range(start2, end2, PAGE_KERNEL,
                               &pages[part1_page_cnt], PAGE_SHIFT);
        if (ret) {
                vunmap_range(start, end);
                return ret;
        }

        vmap_guard_pages(kern_vm_start, 0,
                         (page_cnt - part1_page_cnt) * PAGE_SIZE,
                         pages + part1_page_cnt);
        return 0;
}

It's working, but too complicated.
Instead of a single vmap_pages_range()
we might need to do up to 4 calls and map certain pages into
two places to make both 64-bit virtual addresses:
kern_vm_start + 0xfffff000 + 4096
and
kern_vm_start + (u32)(0xfffff000 + 4096)
point to the same page.

I'm inclined to tackle the wrap32 issue differently and simply
disallow [user_vm_start, user_vm_end] combinations
where the lower 32-bit can wrap.

In other words it would mean that mmap() of len=4Gb will be
aligned to 4Gb,
while mmap() of len=1M will be offset in such a way
that both addr and addr+1M have the same upper 32 bits.
(It's not the same as 1M aligned.)

With that I will remove vmap_pages_range_wrap32() and
do single normal vmap_pages_range() without extra tricks.
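
The constraint itself boils down to something like (just the check; where
exactly it's enforced is a separate question):

	/* lower 32-bit of [vm_start, vm_end) must not wrap */
	if ((vma->vm_start >> 32) != ((vma->vm_end - 1) >> 32))
		return -EINVAL;

For a full 4Gb mapping that forces 4Gb alignment; for smaller mappings it
only forces the range to stay within one 4Gb-aligned window.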

wdyt?
Barret Rhoden Feb. 8, 2024, 11:50 p.m. UTC | #7
On 2/8/24 18:36, Alexei Starovoitov wrote:
> I'm inclined to tackle the wrap32 issue differently and simply
> disallow [user_vm_start, user_vm_end] combinations
> where the lower 32-bit can wrap.
> 
> In other words it would mean that mmap() of len=4Gb will be
> aligned to 4Gb,
> while mmap() of len=1M will be offset in such a way
> that both addr and addr+1M have the same upper 32 bits.
> (It's not the same as 1M aligned.)
> 
> With that I will remove vmap_pages_range_wrap32() and
> do single normal vmap_pages_range() without extra tricks.
> 
> wdyt?

SGTM.

knowing that you can't wrap the lower 32 bits removes a lot of headaches. 
and the restriction of aligning a 4GB mapping to a 4GB boundary is pretty 
sane.  TBH doing it elsewhere is just asking for heartache.  =)

barret
Patch

diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 1ebbee1d648e..42f22bc881f0 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -37,6 +37,7 @@  struct perf_event;
 struct bpf_prog;
 struct bpf_prog_aux;
 struct bpf_map;
+struct bpf_arena;
 struct sock;
 struct seq_file;
 struct btf;
@@ -531,8 +532,8 @@  void bpf_list_head_free(const struct btf_field *field, void *list_head,
 			struct bpf_spin_lock *spin_lock);
 void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
 		      struct bpf_spin_lock *spin_lock);
-
-
+u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena);
+u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena);
 int bpf_obj_name_cpy(char *dst, const char *src, unsigned int size);
 
 struct bpf_offload_dev;
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 94baced5a1ad..9f2a6b83b49e 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -132,6 +132,7 @@  BPF_MAP_TYPE(BPF_MAP_TYPE_STRUCT_OPS, bpf_struct_ops_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_RINGBUF, ringbuf_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_BLOOM_FILTER, bloom_filter_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_USER_RINGBUF, user_ringbuf_map_ops)
+BPF_MAP_TYPE(BPF_MAP_TYPE_ARENA, arena_map_ops)
 
 BPF_LINK_TYPE(BPF_LINK_TYPE_RAW_TRACEPOINT, raw_tracepoint)
 BPF_LINK_TYPE(BPF_LINK_TYPE_TRACING, tracing)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index d96708380e52..f6648851eae6 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -983,6 +983,7 @@  enum bpf_map_type {
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
 	BPF_MAP_TYPE_CGRP_STORAGE,
+	BPF_MAP_TYPE_ARENA,
 	__MAX_BPF_MAP_TYPE
 };
 
@@ -1370,6 +1371,12 @@  enum {
 
 /* BPF token FD is passed in a corresponding command's token_fd field */
 	BPF_F_TOKEN_FD          = (1U << 16),
+
+/* When user space page faults in bpf_arena send SIGSEGV instead of inserting new page */
+	BPF_F_SEGV_ON_FAULT	= (1U << 17),
+
+/* Do not translate kernel bpf_arena pointers to user pointers */
+	BPF_F_NO_USER_CONV	= (1U << 18),
 };
 
 /* Flags for BPF_PROG_QUERY. */
diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile
index 4ce95acfcaa7..368c5d86b5b7 100644
--- a/kernel/bpf/Makefile
+++ b/kernel/bpf/Makefile
@@ -15,6 +15,9 @@  obj-${CONFIG_BPF_LSM}	  += bpf_inode_storage.o
 obj-$(CONFIG_BPF_SYSCALL) += disasm.o mprog.o
 obj-$(CONFIG_BPF_JIT) += trampoline.o
 obj-$(CONFIG_BPF_SYSCALL) += btf.o memalloc.o
+ifeq ($(CONFIG_MMU)$(CONFIG_64BIT),yy)
+obj-$(CONFIG_BPF_SYSCALL) += arena.o
+endif
 obj-$(CONFIG_BPF_JIT) += dispatcher.o
 ifeq ($(CONFIG_NET),y)
 obj-$(CONFIG_BPF_SYSCALL) += devmap.o
diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
new file mode 100644
index 000000000000..9db720321700
--- /dev/null
+++ b/kernel/bpf/arena.c
@@ -0,0 +1,518 @@ 
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
+#include <linux/bpf.h>
+#include <linux/btf.h>
+#include <linux/err.h>
+#include <linux/btf_ids.h>
+#include <linux/vmalloc.h>
+#include <linux/pagemap.h>
+
+/*
+ * bpf_arena is a sparsely populated shared memory region between bpf program and
+ * user space process.
+ *
+ * For example on x86-64 the values could be:
+ * user_vm_start 7f7d26200000     // picked by mmap()
+ * kern_vm_start ffffc90001e69000 // picked by get_vm_area()
+ * For user space all pointers within the arena are normal 8-byte addresses.
+ * In this example 7f7d26200000 is the address of the first page (pgoff=0).
+ * The bpf program will access it as: kern_vm_start + lower_32bit_of_user_ptr
+ * (u32)7f7d26200000 -> 26200000
+ * hence
+ * ffffc90001e69000 + 26200000 == ffffc90028069000 is "pgoff=0" within 4Gb
+ * kernel memory region.
+ *
+ * BPF JITs generate the following code to access arena:
+ *   mov eax, eax  // eax has lower 32-bit of user pointer
+ *   mov word ptr [rax + r12 + off], bx
+ * where r12 == kern_vm_start and off is s16.
+ * Hence allocate 4Gb + GUARD_SZ/2 on each side.
+ *
+ * Initially kernel vm_area and user vma are not populated.
+ * User space can fault-in any address which will insert the page
+ * into kernel and user vma.
+ * bpf program can allocate a page via bpf_arena_alloc_pages() kfunc
+ * which will insert it into kernel vm_area.
+ * The later fault-in from user space will populate that page into user vma.
+ */
+
+/* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
+#define GUARD_SZ (1ull << sizeof(((struct bpf_insn *)0)->off) * 8)
+#define KERN_VM_SZ ((1ull << 32) + GUARD_SZ)
+
+struct bpf_arena {
+	struct bpf_map map;
+	u64 user_vm_start;
+	u64 user_vm_end;
+	struct vm_struct *kern_vm;
+	struct maple_tree mt;
+	struct list_head vma_list;
+	struct mutex lock;
+};
+
+u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena)
+{
+	return arena ? (u64) (long) arena->kern_vm->addr + GUARD_SZ / 2 : 0;
+}
+
+u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena)
+{
+	return arena ? arena->user_vm_start : 0;
+}
+
+static long arena_map_peek_elem(struct bpf_map *map, void *value)
+{
+	return -EOPNOTSUPP;
+}
+
+static long arena_map_push_elem(struct bpf_map *map, void *value, u64 flags)
+{
+	return -EOPNOTSUPP;
+}
+
+static long arena_map_pop_elem(struct bpf_map *map, void *value)
+{
+	return -EOPNOTSUPP;
+}
+
+static long arena_map_delete_elem(struct bpf_map *map, void *value)
+{
+	return -EOPNOTSUPP;
+}
+
+static int arena_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
+{
+	return -EOPNOTSUPP;
+}
+
+static long compute_pgoff(struct bpf_arena *arena, long uaddr)
+{
+	return (u32)(uaddr - (u32)arena->user_vm_start) >> PAGE_SHIFT;
+}
+
+#define MT_ENTRY ((void *)&arena_map_ops) /* unused. has to be valid pointer */
+
+/*
+ * Reserve a "zero page", so that bpf prog and user space never see
+ * a pointer to arena with lower 32 bits being zero.
+ * bpf_cast_user() promotes it to full 64-bit NULL.
+ */
+static int reserve_zero_page(struct bpf_arena *arena)
+{
+	long pgoff = compute_pgoff(arena, 0);
+
+	return mtree_insert(&arena->mt, pgoff, MT_ENTRY, GFP_KERNEL);
+}
+
+static struct bpf_map *arena_map_alloc(union bpf_attr *attr)
+{
+	struct vm_struct *kern_vm;
+	int numa_node = bpf_map_attr_numa_node(attr);
+	struct bpf_arena *arena;
+	int err = -ENOMEM;
+
+	if (attr->key_size != 8 || attr->value_size != 8 ||
+	    /* BPF_F_MMAPABLE must be set */
+	    !(attr->map_flags & BPF_F_MMAPABLE) ||
+	    /* No unsupported flags present */
+	    (attr->map_flags & ~(BPF_F_SEGV_ON_FAULT | BPF_F_MMAPABLE | BPF_F_NO_USER_CONV)))
+		return ERR_PTR(-EINVAL);
+
+	if (attr->map_extra & ~PAGE_MASK)
+		/* If non-zero the map_extra is an expected user VMA start address */
+		return ERR_PTR(-EINVAL);
+
+	kern_vm = get_vm_area(KERN_VM_SZ, VM_MAP | VM_USERMAP);
+	if (!kern_vm)
+		return ERR_PTR(-ENOMEM);
+
+	arena = bpf_map_area_alloc(sizeof(*arena), numa_node);
+	if (!arena)
+		goto err;
+
+	INIT_LIST_HEAD(&arena->vma_list);
+	arena->kern_vm = kern_vm;
+	arena->user_vm_start = attr->map_extra;
+	bpf_map_init_from_attr(&arena->map, attr);
+	mt_init_flags(&arena->mt, MT_FLAGS_ALLOC_RANGE);
+	mutex_init(&arena->lock);
+	if (arena->user_vm_start) {
+		err = reserve_zero_page(arena);
+		if (err) {
+			bpf_map_area_free(arena);
+			goto err;
+		}
+	}
+
+	return &arena->map;
+err:
+	free_vm_area(kern_vm);
+	return ERR_PTR(err);
+}
+
+static int for_each_pte(pte_t *ptep, unsigned long addr, void *data)
+{
+	struct page *page;
+	pte_t pte;
+
+	pte = ptep_get(ptep);
+	if (!pte_present(pte))
+		return 0;
+	page = pte_page(pte);
+	/*
+	 * We do not update pte here:
+	 * 1. Nobody should be accessing bpf_arena's range outside of a kernel bug
+	 * 2. TLB flushing is batched or deferred. Even if we clear pte,
+	 * the TLB entries can stick around and continue to permit access to
+	 * the freed page. So it all relies on 1.
+	 */
+	__free_page(page);
+	return 0;
+}
+
+static void arena_map_free(struct bpf_map *map)
+{
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+
+	/*
+	 * Check that user vma-s are not around when bpf map is freed.
+	 * mmap() holds vm_file which holds bpf_map refcnt.
+	 * munmap() must have happened on vma followed by arena_vm_close()
+	 * which would clear arena->vma_list.
+	 */
+	if (WARN_ON_ONCE(!list_empty(&arena->vma_list)))
+		return;
+
+	/*
+	 * free_vm_area() calls remove_vm_area() that calls free_unmap_vmap_area().
+	 * It unmaps everything from vmalloc area and clears pgtables.
+	 * Call apply_to_existing_page_range() first to find populated ptes and
+	 * free those pages.
+	 */
+	apply_to_existing_page_range(&init_mm, bpf_arena_get_kern_vm_start(arena),
+				     KERN_VM_SZ - GUARD_SZ / 2, for_each_pte, NULL);
+	free_vm_area(arena->kern_vm);
+	mtree_destroy(&arena->mt);
+	bpf_map_area_free(arena);
+}
+
+static void *arena_map_lookup_elem(struct bpf_map *map, void *key)
+{
+	return ERR_PTR(-EINVAL);
+}
+
+static long arena_map_update_elem(struct bpf_map *map, void *key,
+				  void *value, u64 flags)
+{
+	return -EOPNOTSUPP;
+}
+
+static int arena_map_check_btf(const struct bpf_map *map, const struct btf *btf,
+			       const struct btf_type *key_type, const struct btf_type *value_type)
+{
+	return 0;
+}
+
+static u64 arena_map_mem_usage(const struct bpf_map *map)
+{
+	return 0;
+}
+
+struct vma_list {
+	struct vm_area_struct *vma;
+	struct list_head head;
+};
+
+static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+{
+	struct vma_list *vml;
+
+	vml = kmalloc(sizeof(*vml), GFP_KERNEL);
+	if (!vml)
+		return -ENOMEM;
+	vma->vm_private_data = vml;
+	vml->vma = vma;
+	list_add(&vml->head, &arena->vma_list);
+	return 0;
+}
+
+static void arena_vm_close(struct vm_area_struct *vma)
+{
+	struct vma_list *vml;
+
+	vml = vma->vm_private_data;
+	list_del(&vml->head);
+	vma->vm_private_data = NULL;
+	kfree(vml);
+}
+
+static vm_fault_t arena_vm_fault(struct vm_fault *vmf)
+{
+	struct bpf_map *map = vmf->vma->vm_file->private_data;
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+	struct page *page;
+	long kbase, kaddr;
+	int ret;
+
+	kbase = bpf_arena_get_kern_vm_start(arena);
+	kaddr = kbase + (u32)(vmf->address & PAGE_MASK);
+
+	guard(mutex)(&arena->lock);
+	page = vmalloc_to_page((void *)kaddr);
+	if (page)
+		/* already have a page vmap-ed */
+		goto out;
+
+	if (arena->map.map_flags & BPF_F_SEGV_ON_FAULT)
+		/* User space requested to segfault when page is not allocated by bpf prog */
+		return VM_FAULT_SIGSEGV;
+
+	ret = mtree_insert(&arena->mt, vmf->pgoff, MT_ENTRY, GFP_KERNEL);
+	if (ret == -EEXIST)
+		return VM_FAULT_RETRY;
+	if (ret)
+		return VM_FAULT_SIGSEGV;
+
+	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+	if (!page) {
+		mtree_erase(&arena->mt, vmf->pgoff);
+		return VM_FAULT_SIGSEGV;
+	}
+
+	ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE, PAGE_KERNEL, &page, PAGE_SHIFT);
+	if (ret) {
+		mtree_erase(&arena->mt, vmf->pgoff);
+		__free_page(page);
+		return VM_FAULT_SIGSEGV;
+	}
+out:
+	page_ref_add(page, 1);
+	vmf->page = page;
+	return 0;
+}
+
+static const struct vm_operations_struct arena_vm_ops = {
+	.close		= arena_vm_close,
+	.fault          = arena_vm_fault,
+};
+
+static int arena_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+{
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+	int err;
+
+	if (arena->user_vm_start && arena->user_vm_start != vma->vm_start)
+		/*
+		 * The 1st user process can do mmap(NULL, ...) to pick user_vm_start.
+		 * Other processes must pass that addr to mmap(addr, MAP_FIXED, ...).
+		 * Alternatively, specify addr in map_extra at map creation time
+		 * and use that same addr later with mmap(addr, MAP_FIXED, ...).
+		 */
+		return -EBUSY;
+
+	if (arena->user_vm_end && arena->user_vm_end != vma->vm_end)
+		/* all user processes must have the same size of mmap-ed region */
+		return -EBUSY;
+
+	if (vma->vm_end - vma->vm_start > 1ull << 32)
+		/* Must not be bigger than 4GB */
+		return -E2BIG;
+
+	if (remember_vma(arena, vma))
+		return -ENOMEM;
+
+	if (!arena->user_vm_start) {
+		arena->user_vm_start = vma->vm_start;
+		err = reserve_zero_page(arena);
+		if (err)
+			return err;
+	}
+	arena->user_vm_end = vma->vm_end;
+	/*
+	 * bpf_map_mmap() checks that it's being mmaped as VM_SHARED and
+	 * clears VM_MAYEXEC. Set VM_DONTEXPAND as well to avoid
+	 * potential change of user_vm_start.
+	 */
+	vm_flags_set(vma, VM_DONTEXPAND);
+	vma->vm_ops = &arena_vm_ops;
+	return 0;
+}
+
+BTF_ID_LIST_SINGLE(bpf_arena_map_btf_ids, struct, bpf_arena)
+const struct bpf_map_ops arena_map_ops = {
+	.map_meta_equal = bpf_map_meta_equal,
+	.map_alloc = arena_map_alloc,
+	.map_free = arena_map_free,
+	.map_mmap = arena_map_mmap,
+	.map_get_next_key = arena_map_get_next_key,
+	.map_push_elem = arena_map_push_elem,
+	.map_peek_elem = arena_map_peek_elem,
+	.map_pop_elem = arena_map_pop_elem,
+	.map_lookup_elem = arena_map_lookup_elem,
+	.map_update_elem = arena_map_update_elem,
+	.map_delete_elem = arena_map_delete_elem,
+	.map_check_btf = arena_map_check_btf,
+	.map_mem_usage = arena_map_mem_usage,
+	.map_btf_id = &bpf_arena_map_btf_ids[0],
+};
+
+static u64 clear_lo32(u64 val)
+{
+	return val & ~(u64)~0U;
+}
+
+/*
+ * Allocate pages and vmap them into kernel vmalloc area.
+ * Later the pages will be mmaped into user space vma.
+ */
+static long arena_alloc_pages(struct bpf_arena *arena, long uaddr, long page_cnt, int node_id)
+{
+	long page_cnt_max = (arena->user_vm_end - arena->user_vm_start) >> PAGE_SHIFT;
+	u64 kern_vm_start = bpf_arena_get_kern_vm_start(arena);
+	long pgoff = 0, kaddr, nr_pages = 0;
+	struct page **pages;
+	int ret, i;
+
+	if (page_cnt >= page_cnt_max)
+		return 0;
+
+	if (uaddr) {
+		if (uaddr & ~PAGE_MASK)
+			return 0;
+		pgoff = compute_pgoff(arena, uaddr);
+		if (pgoff + page_cnt > page_cnt_max)
+			/* requested address will be outside of user VMA */
+			return 0;
+	}
+
+	/* zeroing is needed, since alloc_pages_bulk_array() only fills in NULL entries */
+	pages = kvcalloc(page_cnt, sizeof(struct page *), GFP_KERNEL);
+	if (!pages)
+		return 0;
+
+	guard(mutex)(&arena->lock);
+
+	/* maple tree 'last' indexes are inclusive, hence the "- 1" below */
+	if (uaddr)
+		ret = mtree_insert_range(&arena->mt, pgoff, pgoff + page_cnt - 1,
+					 MT_ENTRY, GFP_KERNEL);
+	else
+		ret = mtree_alloc_range(&arena->mt, &pgoff, MT_ENTRY,
+					page_cnt, 0, page_cnt_max - 1, GFP_KERNEL);
+	if (ret)
+		goto out_free_pages;
+
+	nr_pages = alloc_pages_bulk_array_node(GFP_KERNEL | __GFP_ZERO, node_id, page_cnt, pages);
+	if (nr_pages != page_cnt)
+		goto out;
+
+	kaddr = kern_vm_start + (u32)(arena->user_vm_start + pgoff * PAGE_SIZE);
+	ret = vmap_pages_range(kaddr, kaddr + PAGE_SIZE * page_cnt, PAGE_KERNEL,
+			       pages, PAGE_SHIFT);
+	if (ret)
+		goto out;
+	kvfree(pages);
+	return clear_lo32(arena->user_vm_start) + (u32)(kaddr - kern_vm_start);
+out:
+	mtree_erase(&arena->mt, pgoff);
+out_free_pages:
+	if (pages)
+		for (i = 0; i < nr_pages; i++)
+			__free_page(pages[i]);
+	kvfree(pages);
+	return 0;
+}
+
+/*
+ * If page is present in vmalloc area, unmap it from vmalloc area,
+ * unmap it from all user space vma-s,
+ * and free it.
+ */
+static void zap_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
+{
+	struct vma_list *vml;
+
+	list_for_each_entry(vml, &arena->vma_list, head)
+		zap_page_range_single(vml->vma, uaddr,
+				      PAGE_SIZE * page_cnt, NULL);
+}
+
+static void arena_free_pages(struct bpf_arena *arena, long uaddr, long page_cnt)
+{
+	u64 full_uaddr, uaddr_end;
+	long kaddr, pgoff, i;
+	struct page *page;
+
+	/* only the page-aligned lower 32 bits are relevant */
+	uaddr = (u32)uaddr;
+	uaddr &= PAGE_MASK;
+	full_uaddr = clear_lo32(arena->user_vm_start) + uaddr;
+	uaddr_end = min(arena->user_vm_end, full_uaddr + (page_cnt << PAGE_SHIFT));
+	if (full_uaddr >= uaddr_end)
+		return;
+
+	page_cnt = (uaddr_end - full_uaddr) >> PAGE_SHIFT;
+
+	kaddr = bpf_arena_get_kern_vm_start(arena) + uaddr;
+
+	guard(mutex)(&arena->lock);
+
+	pgoff = compute_pgoff(arena, uaddr);
+	/* clear range */
+	mtree_store_range(&arena->mt, pgoff, pgoff + page_cnt - 1, NULL, GFP_KERNEL);
+
+	if (page_cnt > 1)
+		/* bulk zap if multiple pages being freed */
+		zap_pages(arena, full_uaddr, page_cnt);
+
+	for (i = 0; i < page_cnt; i++, kaddr += PAGE_SIZE, full_uaddr += PAGE_SIZE) {
+		page = vmalloc_to_page((void *)kaddr);
+		if (!page)
+			continue;
+		if (page_cnt == 1 && page_mapped(page)) /* mapped by some user process */
+			zap_pages(arena, full_uaddr, 1);
+		vunmap_range(kaddr, kaddr + PAGE_SIZE);
+		__free_page(page);
+	}
+}
+
+__bpf_kfunc_start_defs();
+
+__bpf_kfunc void *bpf_arena_alloc_pages(void *p__map, void *addr__ign, u32 page_cnt,
+					int node_id, u64 flags)
+{
+	struct bpf_map *map = p__map;
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+
+	if (map->map_type != BPF_MAP_TYPE_ARENA || !arena->user_vm_start || flags)
+		return NULL;
+
+	return (void *)arena_alloc_pages(arena, (long)addr__ign, page_cnt, node_id);
+}
+
+__bpf_kfunc void bpf_arena_free_pages(void *p__map, void *ptr__ign, u32 page_cnt)
+{
+	struct bpf_map *map = p__map;
+	struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+
+	if (map->map_type != BPF_MAP_TYPE_ARENA || !arena->user_vm_start)
+		return;
+	arena_free_pages(arena, (long)ptr__ign, page_cnt);
+}
+__bpf_kfunc_end_defs();
+
+BTF_KFUNCS_START(arena_kfuncs)
+BTF_ID_FLAGS(func, bpf_arena_alloc_pages, KF_TRUSTED_ARGS | KF_SLEEPABLE)
+BTF_ID_FLAGS(func, bpf_arena_free_pages, KF_TRUSTED_ARGS | KF_SLEEPABLE)
+BTF_KFUNCS_END(arena_kfuncs)
+
+static const struct btf_kfunc_id_set common_kfunc_set = {
+	.owner = THIS_MODULE,
+	.set   = &arena_kfuncs,
+};
+
+static int __init kfunc_init(void)
+{
+	return register_btf_kfunc_id_set(BPF_PROG_TYPE_UNSPEC, &common_kfunc_set);
+}
+late_initcall(kfunc_init);
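
For context only (not part of this patch): a minimal sketch of how a BPF
program might use these kfuncs once the rest of the series is in place.
The arena map definition, the __ksym declarations and the assumption that
max_entries counts arena pages come from later patches in this series, so
treat the names below as illustrative:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef NUMA_NO_NODE
#define NUMA_NO_NODE (-1)
#endif

/* the kfuncs added above, declared for BPF C */
extern void *bpf_arena_alloc_pages(void *map, void *addr, __u32 page_cnt,
				   int node_id, __u64 flags) __ksym;
extern void bpf_arena_free_pages(void *map, void *ptr, __u32 page_cnt) __ksym;

struct {
	__uint(type, BPF_MAP_TYPE_ARENA);
	__uint(map_flags, BPF_F_MMAPABLE);	/* assumed to be required */
	__uint(max_entries, 100);		/* arena size in pages (assumed) */
} arena SEC(".maps");

SEC("syscall")	/* sleepable prog type, since the kfuncs are KF_SLEEPABLE */
int use_arena(void *ctx)
{
	/* addr == NULL lets the maple tree pick the offset inside the arena */
	void *page = bpf_arena_alloc_pages(&arena, NULL, 1, NUMA_NO_NODE, 0);

	if (!page)
		return 1;
	/* ... use the page ... */
	bpf_arena_free_pages(&arena, page, 1);
	return 0;
}

char _license[] SEC("license") = "GPL";
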
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 71c459a51d9e..2539d9bfe369 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -2970,6 +2970,17 @@  void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp,
 {
 }
 
+/* for configs without MMU or 32-bit */
+__weak const struct bpf_map_ops arena_map_ops;
+__weak u64 bpf_arena_get_user_vm_start(struct bpf_arena *arena)
+{
+	return 0;
+}
+__weak u64 bpf_arena_get_kern_vm_start(struct bpf_arena *arena)
+{
+	return 0;
+}
+
 #ifdef CONFIG_BPF_SYSCALL
 static int __init bpf_global_ma_init(void)
 {
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index b2750b79ac80..ac0e4a8bb852 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -164,6 +164,7 @@  static int bpf_map_update_value(struct bpf_map *map, struct file *map_file,
 	if (bpf_map_is_offloaded(map)) {
 		return bpf_map_offload_update_elem(map, key, value, flags);
 	} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
+		   map->map_type == BPF_MAP_TYPE_ARENA ||
 		   map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
 		return map->ops->map_update_elem(map, key, value, flags);
 	} else if (map->map_type == BPF_MAP_TYPE_SOCKHASH ||
@@ -1160,6 +1161,7 @@  static int map_create(union bpf_attr *attr)
 	}
 
 	if (attr->map_type != BPF_MAP_TYPE_BLOOM_FILTER &&
+	    attr->map_type != BPF_MAP_TYPE_ARENA &&
 	    attr->map_extra != 0)
 		return -EINVAL;
 
@@ -1249,6 +1251,7 @@  static int map_create(union bpf_attr *attr)
 	case BPF_MAP_TYPE_LRU_PERCPU_HASH:
 	case BPF_MAP_TYPE_STRUCT_OPS:
 	case BPF_MAP_TYPE_CPUMAP:
+	case BPF_MAP_TYPE_ARENA:
 		if (!bpf_token_capable(token, CAP_BPF))
 			goto put_token;
 		break;
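
As a reference for the map_extra relaxation above (again not part of the
patch): user space could create an arena directly with libbpf, assuming, as
elsewhere in the series, that key/value sizes must be zero, max_entries is
the arena size in pages, BPF_F_MMAPABLE is required and map_extra optionally
pins the user VM start address:

#include <linux/bpf.h>
#include <bpf/bpf.h>

static int create_arena(__u32 nr_pages, __u64 fixed_user_addr)
{
	LIBBPF_OPTS(bpf_map_create_opts, opts,
		.map_flags = BPF_F_MMAPABLE,
		/* 0 lets the first mmap(NULL, ...) pick the address */
		.map_extra = fixed_user_addr,
	);

	/* key_size == value_size == 0, max_entries == number of pages */
	return bpf_map_create(BPF_MAP_TYPE_ARENA, "arena", 0, 0, nr_pages, &opts);
}
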
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index db569ce89fb1..3c77a3ab1192 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -18047,6 +18047,7 @@  static int check_map_prog_compatibility(struct bpf_verifier_env *env,
 		case BPF_MAP_TYPE_SK_STORAGE:
 		case BPF_MAP_TYPE_TASK_STORAGE:
 		case BPF_MAP_TYPE_CGRP_STORAGE:
+		case BPF_MAP_TYPE_ARENA:
 			break;
 		default:
 			verbose(env,
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index d96708380e52..f6648851eae6 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -983,6 +983,7 @@  enum bpf_map_type {
 	BPF_MAP_TYPE_BLOOM_FILTER,
 	BPF_MAP_TYPE_USER_RINGBUF,
 	BPF_MAP_TYPE_CGRP_STORAGE,
+	BPF_MAP_TYPE_ARENA,
 	__MAX_BPF_MAP_TYPE
 };
 
@@ -1370,6 +1371,12 @@  enum {
 
 /* BPF token FD is passed in a corresponding command's token_fd field */
 	BPF_F_TOKEN_FD          = (1U << 16),
+
+/* When user space page faults in bpf_arena send SIGSEGV instead of inserting new page */
+	BPF_F_SEGV_ON_FAULT	= (1U << 17),
+
+/* Do not translate kernel bpf_arena pointers to user pointers */
+	BPF_F_NO_USER_CONV	= (1U << 18),
 };
 
 /* Flags for BPF_PROG_QUERY. */
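
To illustrate the mmap contract and the new flags (a sketch under the same
assumptions as above, not part of the patch): the first mmap(NULL, ...) on
the arena fd picks user_vm_start, every other mapping must reuse that
address with MAP_FIXED, and with BPF_F_SEGV_ON_FAULT in map_flags a fault on
a page the BPF program never allocated raises SIGSEGV instead of populating
a fresh zero page.

#include <sys/mman.h>
#include <unistd.h>

/* Map the whole arena into this process; the kernel records vm_start as
 * user_vm_start on the first call.
 */
static void *map_arena(int map_fd, size_t nr_pages)
{
	long page_sz = sysconf(_SC_PAGESIZE);

	return mmap(NULL, nr_pages * page_sz, PROT_READ | PROT_WRITE,
		    MAP_SHARED, map_fd, 0);
}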