Message ID | 20240223235728.13981-4-alexei.starovoitov@gmail.com (mailing list archive) |
---|---|
State | New |
Series | mm: Cleanup and identify various users of kernel virtual address space |
> privately-managed pages into a sparse vm area with the following steps:
>
>   area = get_vm_area(area_size, VM_SPARSE);  // at bpf prog verification time
>   vm_area_map_pages(area, kaddr, 1, page);   // on demand
>       // it will return an error if kaddr is out of range
>   vm_area_unmap_pages(area, kaddr, 1);
>   free_vm_area(area);                        // after bpf prog is unloaded

I'm still wondering if this should just use an opaque cookie instead
of exposing the vm_area.  But otherwise this mostly looks fine to me.

> +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> +		return -ERANGE;

This check is duplicated so many times that it really begs for a helper.

> +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> +{
> +	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> +	unsigned long end = addr + size;
> +
> +	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> +		return -EINVAL;
> +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> +		return -ERANGE;
> +
> +	vunmap_range(addr, end);
> +	return 0;

Does it make much sense to have an error return here vs just debug
checks?  It's not like the caller can do much if it violates these
basic invariants.
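[Editor's note: the duplicated bounds check called out above could be factored into a helper along these lines. This is a standalone userspace sketch, not the kernel code: the helper name `vm_area_check_range` is hypothetical, the `struct vm_struct` here carries only the two fields the check needs, and the comparison is done in `unsigned long` arithmetic rather than the kernel's mixed pointer/integer form.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-in for struct vm_struct; only the fields the check uses. */
struct vm_struct {
	void *addr;
	size_t size;
};

/*
 * Hypothetical helper factoring out the repeated bounds check:
 * returns 0 if [addr, end) lies fully inside the area, -ERANGE otherwise.
 */
static int vm_area_check_range(const struct vm_struct *area,
			       unsigned long addr, unsigned long end)
{
	unsigned long start = (unsigned long)area->addr;

	if (addr < start || end > start + area->size)
		return -ERANGE;
	return 0;
}
```

Each call site would then collapse to `if (vm_area_check_range(area, addr, end)) return -ERANGE;` (or just return its result).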
On Tue, Feb 27, 2024 at 9:59 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> > privately-managed pages into a sparse vm area with the following steps:
> >
> >   area = get_vm_area(area_size, VM_SPARSE);  // at bpf prog verification time
> >   vm_area_map_pages(area, kaddr, 1, page);   // on demand
> >       // it will return an error if kaddr is out of range
> >   vm_area_unmap_pages(area, kaddr, 1);
> >   free_vm_area(area);                        // after bpf prog is unloaded
>
> I'm still wondering if this should just use an opaque cookie instead
> of exposing the vm_area.  But otherwise this mostly looks fine to me.

What would it look like with a cookie?
A static inline wrapper around get_vm_area() that returns area->addr?
And the start address of the vmap range will be such a cookie?
Then vm_area_map_pages() will be doing find_vm_area() for kaddr
to check that vm_area->flags & VM_SPARSE?

That's fine, but what would be the equivalent of
void free_vm_area(struct vm_struct *area)?
Another static inline wrapper similar to remove_vm_area() that also
does kfree(area)?

Fine by me, but the api isn't user friendly with such obfuscation.
I guess I don't understand the motivation to hide 'struct vm_struct *'.

> > +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > +		return -ERANGE;
>
> This check is duplicated so many times that it really begs for a helper.

ok. will do.

> > +int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
> > +{
> > +	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
> > +	unsigned long end = addr + size;
> > +
> > +	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
> > +		return -EINVAL;
> > +	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
> > +		return -ERANGE;
> > +
> > +	vunmap_range(addr, end);
> > +	return 0;
>
> Does it make much sense to have an error return here vs just debug
> checks?  It's not like the caller can do much if it violates these
> basic invariants.

Ok.
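[Editor's note: the cookie scheme Alexei is asking about — the range's start address serving as the opaque handle, with lookups resolving it back to the area the way find_vm_area() would — can be modeled in a few lines of userspace C. All names and the array-based registry here are hypothetical illustrations, not the proposed kernel API.]

```c
#include <stddef.h>

/* Minimal model of a sparse vm area: just a start address and a size. */
struct sparse_area {
	unsigned long start;
	size_t size;
};

#define MAX_AREAS 16
static struct sparse_area areas[MAX_AREAS];
static int nr_areas;

/* Model of a get_vm_area()-style wrapper that hands back only a cookie.
 * Returns 0 on failure; otherwise the start address doubles as the cookie. */
static unsigned long sparse_area_create(unsigned long start, size_t size)
{
	if (nr_areas >= MAX_AREAS)
		return 0;
	areas[nr_areas].start = start;
	areas[nr_areas].size = size;
	nr_areas++;
	return start;
}

/* Model of find_vm_area(): resolve any address inside a registered
 * range back to its area, or NULL if it falls outside all of them. */
static struct sparse_area *sparse_area_find(unsigned long addr)
{
	for (int i = 0; i < nr_areas; i++)
		if (addr >= areas[i].start &&
		    addr < areas[i].start + areas[i].size)
			return &areas[i];
	return NULL;
}
```

The trade-off the thread settles on is visible here: the cookie costs a lookup on every map/unmap call, but callers can no longer poke at the area struct directly.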
Will switch to void return.

Will reduce commit log lines to 75 chars in all patches as suggested.

re: VM_GRANT_TABLE or VM_XEN_GRANT_TABLE suggestion for patch 2.
I'm not sure it fits, since only one of the get_vm_area() calls in xen
code is grant-table related. The other one is for xenbus, which creates
a shared memory ring between domains.
So I'm planning to keep it as VM_XEN in the next revision unless folks
come up with a better name.

Thanks for the reviews.
On Tue, Feb 27, 2024 at 05:31:28PM -0800, Alexei Starovoitov wrote:
> What would it look like with a cookie?
> A static inline wrapper around get_vm_area() that returns area->addr?
> And the start address of the vmap range will be such a cookie?

Hmm, just making the kernel virtual address the cookie actually sounds
pretty neat indeed, even if I did not have that in mind.

> I guess I don't understand the motivation to hide 'struct vm_struct *'.

The prime reason is that then people will try to build random APIs
that work on it.  But let's give it a try without the wrappers and see
how things go.
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 223e51c243bc..416bc7b0b4db 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -29,6 +29,7 @@ struct iov_iter;		/* in uio.h */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
 #define VM_ALLOW_HUGE_VMAP	0x00000400	/* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
 #define VM_XEN			0x00000800	/* xen use cases */
+#define VM_SPARSE		0x00001000	/* sparse vm_area. not all pages are present. */
 
 #if (defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)) && \
 	!defined(CONFIG_KASAN_VMALLOC)
@@ -233,6 +234,9 @@ static inline bool is_vm_area_hugepages(const void *addr)
 }
 
 #ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages);
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count);
 void vunmap_range(unsigned long addr, unsigned long end);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d769a65bddad..a05dfbbacb78 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -648,6 +648,54 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+/**
+ * vm_area_map_pages - map pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	might_sleep();
+	if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+		return -EINVAL;
+	if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+		return -EINVAL;
+	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+		return -EINVAL;
+	if (count > totalram_pages())
+		return -E2BIG;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages to unmap
+ */
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+		return -EINVAL;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	vunmap_range(addr, end);
+	return 0;
+}
+
 int is_vmalloc_or_module_addr(const void *x)
 {
 	/*
@@ -3822,9 +3870,9 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
 		if (flags & VMAP_RAM)
 			copied = vmap_ram_vread_iter(iter, addr, n, flags);
-		else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN))))
+		else if (!(vm && (vm->flags & (VM_IOREMAP | VM_XEN | VM_SPARSE))))
 			copied = aligned_vread_iter(iter, addr, n);
-		else /* IOREMAP|XEN area is treated as memory hole */
+		else /* IOREMAP|XEN|SPARSE area is treated as memory hole */
 			copied = zero_iter(iter, n);
 
 		addr += copied;
@@ -4418,6 +4466,9 @@ static int s_show(struct seq_file *m, void *p)
 	if (v->flags & VM_XEN)
 		seq_puts(m, " xen");
 
+	if (v->flags & VM_SPARSE)
+		seq_puts(m, " sparse");
+
 	if (v->flags & VM_ALLOC)
 		seq_puts(m, " vmalloc");
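[Editor's note: the validation sequence vm_area_map_pages() performs in the patch above can be exercised as a standalone userspace model. The flag values, the simplified `vm_struct`, and passing `totalram` as a parameter (instead of calling totalram_pages()) are all stand-ins for illustration, not the kernel's definitions.]

```c
#include <errno.h>
#include <stddef.h>

/* Stand-in flag values; the kernel's real ones live in vmalloc.h. */
#define VM_FLUSH_RESET_PERMS	0x1UL
#define VM_NO_GUARD		0x2UL
#define VM_SPARSE		0x4UL
#define PAGE_SIZE		4096UL

/* Simplified vm_struct: start address as an integer, size, flags. */
struct vm_struct {
	unsigned long addr;
	size_t size;
	unsigned long flags;
};

/*
 * Mirrors the order of checks in the patch: incompatible flags first,
 * then the missing-VM_SPARSE case, then the absurd-count case, and
 * finally the bounds check on [addr, end).  Returns 0 when a map
 * request with these parameters would pass all sanity checks.
 */
static int map_pages_checks(const struct vm_struct *area, unsigned long addr,
			    unsigned int count, unsigned long totalram)
{
	unsigned long size = (unsigned long)count * PAGE_SIZE;
	unsigned long end = addr + size;

	if (area->flags & (VM_FLUSH_RESET_PERMS | VM_NO_GUARD))
		return -EINVAL;		/* incompatible area flags */
	if (!(area->flags & VM_SPARSE))
		return -EINVAL;		/* only sparse areas accept this */
	if (count > totalram)
		return -E2BIG;		/* more pages than RAM has */
	if (addr < area->addr || end > area->addr + area->size)
		return -ERANGE;		/* [addr, end) escapes the area */
	return 0;
}
```

This also makes Christoph's point above concrete: only the -ERANGE and -E2BIG cases depend on caller input; the flag checks are invariant violations a caller cannot recover from, which is why the follow-up settles on a void return for unmap.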