diff mbox series

[v2,bpf-next,3/3] mm: Introduce VM_SPARSE kind and vm_area_[un]map_pages().

Message ID 20240223235728.13981-4-alexei.starovoitov@gmail.com (mailing list archive)
State Superseded
Series mm: Cleanup and identify various users of kernel virtual address space

Commit Message

Alexei Starovoitov Feb. 23, 2024, 11:57 p.m. UTC
From: Alexei Starovoitov <ast@kernel.org>

vmap/vmalloc APIs are used to map a set of pages into contiguous kernel virtual space.

get_vm_area() with an appropriate flag is used to request an area of kernel address range.
It's used for the vmalloc, vmap, ioremap, and xen use cases.
- the vmalloc use case dominates the usage. Such vm areas have the VM_ALLOC flag.
- areas created by the vmap() function should be tagged with VM_MAP.
- ioremap areas are tagged with VM_IOREMAP.
- xen use cases are VM_XEN.

BPF would like to extend the vmap API to implement a lazily-populated
sparse, yet contiguous kernel virtual space.
Introduce VM_SPARSE vm_area flag and
vm_area_map_pages(area, start_addr, count, pages) API to map a set
of pages within a given area.
It has the same sanity checks as vmap() does.
It also checks that the area was created by get_vm_area() with the
VM_SPARSE flag, which identifies such areas in /proc/vmallocinfo
and makes reads through /proc/kcore return zero pages.

The next commits will introduce bpf_arena, a sparsely populated shared
memory region between a bpf program and a user space process. It will map
privately-managed pages into a sparse vm area with the following steps:

  area = get_vm_area(area_size, VM_SPARSE);  // at bpf prog verification time
  vm_area_map_pages(area, kaddr, 1, page);   // on demand
                    // it will return an error if kaddr is out of range
  vm_area_unmap_pages(area, kaddr, 1);
  free_vm_area(area);                        // after bpf prog is unloaded

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/vmalloc.h |  4 +++
 mm/vmalloc.c            | 55 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 57 insertions(+), 2 deletions(-)

Comments

Christoph Hellwig Feb. 27, 2024, 5:59 p.m. UTC | #1
> privately-managed pages into a sparse vm area with the following steps:
> 
>   area = get_vm_area(area_size, VM_SPARSE);  // at bpf prog verification time
>   vm_area_map_pages(area, kaddr, 1, page);   // on demand
>                     // it will return an error if kaddr is out of range
>   vm_area_unmap_pages(area, kaddr, 1);
>   free_vm_area(area);                        // after bpf prog is unloaded

I'm still wondering if this should just use an opaque cookie instead
of exposing the vm_area.  But otherwise this mostly looks fine to me.

> +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> +		return -ERANGE;

This check is duplicated so many times that it really begs for a helper.
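For illustration, here is a userspace model of the bounds check such a helper would centralize. All names here are hypothetical (the kernel helper would operate on a struct vm_struct * and return -ERANGE); this is a sketch of the check's logic, not patch code:

```c
#include <assert.h>

/* Userspace model of the duplicated bounds check.
 * Returns 0 iff [addr, addr + count * PAGE_SIZE) lies inside the area.
 * (The kernel's area->size also covers the guard page; ignored here.) */
#define PAGE_SIZE 4096UL

struct area_model {
	unsigned long addr;	/* area start */
	unsigned long size;	/* area size in bytes */
};

static int range_in_area(const struct area_model *area,
			 unsigned long addr, unsigned int count)
{
	unsigned long size = (unsigned long)count * PAGE_SIZE;
	unsigned long end = addr + size;

	if (addr < area->addr || end > area->addr + area->size)
		return -1;	/* -ERANGE in the kernel */
	return 0;
}
```

A single helper like this would replace the open-coded comparison in both vm_area_map_pages() and vm_area_unmap_pages().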

> +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> +{
> +	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> +	unsigned long end = addr + size;
> +
> +	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> +		return -EINVAL;
> +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> +		return -ERANGE;
> +
> +	vunmap_range(addr, end);
> +	return 0;

Does it make much sense to have an error return here vs just debug
checks?  It's not like the caller can do much if it violates these
basic invariants.
Alexei Starovoitov Feb. 28, 2024, 1:31 a.m. UTC | #2
On Tue, Feb 27, 2024 at 9:59 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > privately-managed pages into a sparse vm area with the following steps:
> >
> >   area = get_vm_area(area_size, VM_SPARSE);  // at bpf prog verification time
> >   vm_area_map_pages(area, kaddr, 1, page);   // on demand
> >                     // it will return an error if kaddr is out of range
> >   vm_area_unmap_pages(area, kaddr, 1);
> >   free_vm_area(area);                        // after bpf prog is unloaded
>
> I'm still wondering if this should just use an opaque cookie instead
> of exposing the vm_area.  But otherwise this mostly looks fine to me.

What would it look like with a cookie?
A static inline wrapper around get_vm_area() that returns area->addr ?
And the start address of vmap range will be such a cookie?

Then vm_area_map_pages() will be doing find_vm_area() for kaddr
to check that vm_area->flag & VM_SPARSE ?
That's fine,
but what would be an equivalent of void free_vm_area(struct vm_struct *area) ?
Another static inline wrapper similar to remove_vm_area()
that also does kfree(area); ?

Fine by me, but the API isn't user friendly with such obfuscation.

I guess I don't understand the motivation to hide 'struct vm_struct *'.

> > +     if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > +             return -ERANGE;
>
> This check is duplicated so many times that it really begs for a helper.

ok. will do.

> > +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> > +{
> > +     unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> > +     unsigned long end = addr + size;
> > +
> > +     if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> > +             return -EINVAL;
> > +     if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > +             return -ERANGE;
> > +
> > +     vunmap_range(addr, end);
> > +     return 0;
>
> Does it make much sense to have an error return here vs just debug
> checks?  It's not like the caller can do much if it violates these
> basic invariants.

Ok. Will switch to void return.

Will reduce commit line logs to 75 chars in all patches as suggested.

re: VM_GRANT_TABLE or VM_XEN_GRANT_TABLE suggestion for patch 2.

I'm not sure it fits, since only one of get_vm_area() in xen code
is a grant table related. The other one is for xenbus that
creates a shared memory ring between domains.
So I'm planning to keep it as VM_XEN in the next revision unless
folks come up with a better name.

Thanks for the reviews.
Christoph Hellwig Feb. 29, 2024, 3:56 p.m. UTC | #3
On Tue, Feb 27, 2024 at 05:31:28PM -0800, Alexei Starovoitov wrote:
> What would it look like with a cookie?
> A static inline wrapper around get_vm_area() that returns area->addr ?
> And the start address of vmap range will be such a cookie?

Hmm, just making the kernel virtual address the cookie actually
sounds pretty neat indeed even if I did not have that in mind.

> I guess I don't understand the motivation to hide 'struct vm_struct *'.

The prime reason is that then people will try to start random APIs that
work on it.  But let's give it a try without the wrappers and see how
things go.

Patch

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 223e51c243bc..416bc7b0b4db 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -29,6 +29,7 @@  struct iov_iter;		/* in uio.h */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
 #define VM_ALLOW_HUGE_VMAP	0x00000400      /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
 #define VM_XEN			0x00000800	/* xen use cases */
+#define VM_SPARSE		0x00001000	/* sparse vm_area. not all pages are present. */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
 	!defined(CONFIG_KASAN_VMALLOC)
@@ -233,6 +234,9 @@  static inline bool is_vm_area_hugepages(const void *addr)
 }
 
 #ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages);
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count);
 void vunmap_range(unsigned long addr, unsigned long end);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d769a65bddad..a05dfbbacb78 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -648,6 +648,54 @@  static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+/**
+ * vm_area_map_pages - map pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	might_sleep();
+	if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+		return -EINVAL;
+	if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+		return -EINVAL;
+	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+		return -EINVAL;
+	if (count > totalram_pages())
+		return -E2BIG;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages to unmap
+ */
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+		return -EINVAL;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	vunmap_range(addr, end);
+	return 0;
+}
+
 int is_vmalloc_or_module_addr(const void *x)
 {
 	/*
@@ -3822,9 +3870,9 @@  long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
 		if (flags & VMAP_RAM)
 			copied = vmap_ram_vread_iter(iter, addr, n, flags);
-		else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN))))
+		else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN | VM_SPARSE))))
 			copied = aligned_vread_iter(iter, addr, n);
-		else /* IOREMAP|XEN area is treated as memory hole */
+		else /* IOREMAP|XEN|SPARSE area is treated as memory hole */
 			copied = zero_iter(iter, n);
 
 		addr += copied;
@@ -4418,6 +4466,9 @@  static int s_show(struct seq_file *m, void *p)
 	if (v->flags & VM_XEN)
 		seq_puts(m, " xen");
 
+	if (v->flags & VM_SPARSE)
+		seq_puts(m, " sparse");
+
 	if (v->flags & VM_ALLOC)
 		seq_puts(m, " vmalloc");