| Message ID | 20240305030516.41519-3-alexei.starovoitov@gmail.com (mailing list archive) |
| --- | --- |
| State | Accepted |
| Commit | 6b66b3a4ed5e68dd95ce459bb2d96d4cd2633f99 |
| Delegated to | BPF |
| Series | mm: Enforce ioremap address space and introduce sparse vm_area |
I'd still prefer to hide the vm_area, but for now:
Reviewed-by: Christoph Hellwig <hch@lst.de>
On Wed, Mar 6, 2024 at 6:19 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> I'd still prefer to hide the vm_area, but for now:
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>

Thank you.
I will think of a way to move get_vm_area() to mm/internal.h
and propose a plan by lsf/mm/bpf in May.
On Mon, Mar 4, 2024 at 10:05 PM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> vmap/vmalloc APIs are used to map a set of pages into contiguous kernel
> virtual space.
>
> get_vm_area() with appropriate flag is used to request an area of kernel
> address range. It's used for vmalloc, vmap, ioremap, xen use cases.
> - vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag.
> - the areas created by vmap() function should be tagged with VM_MAP.
> - ioremap areas are tagged with VM_IOREMAP.
>
> BPF would like to extend the vmap API to implement a lazily-populated
> sparse, yet contiguous kernel virtual space. Introduce VM_SPARSE flag
> and vm_area_map_pages(area, start_addr, count, pages) API to map a set
> of pages within a given area.
> It has the same sanity checks as vmap() does.
> It also checks that get_vm_area() was created with VM_SPARSE flag
> which identifies such areas in /proc/vmallocinfo
> and returns zero pages on read through /proc/kcore.
>
> The next commits will introduce bpf_arena which is a sparsely populated
> shared memory region between bpf program and user space process. It will
> map privately-managed pages into a sparse vm area with the following steps:
>
> // request virtual memory region during bpf prog verification
> area = get_vm_area(area_size, VM_SPARSE);
>
> // on demand
> vm_area_map_pages(area, kaddr, kend, pages);
> vm_area_unmap_pages(area, kaddr, kend);
>
> // after bpf program is detached and unloaded
> free_vm_area(area);
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> ---
>  include/linux/vmalloc.h |  5 ++++
>  mm/vmalloc.c            | 59 +++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 62 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index c720be70c8dd..0f72c85a377b 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -35,6 +35,7 @@ struct iov_iter; /* in uio.h */
>  #else
>  #define VM_DEFER_KMEMLEAK 0
>  #endif
> +#define VM_SPARSE 0x00001000 /* sparse vm_area. not all pages are present. */
>
>  /* bits [20..32] reserved for arch specific ioremap internals */
>
> @@ -232,6 +233,10 @@ static inline bool is_vm_area_hugepages(const void *addr)
>  }
>
>  #ifdef CONFIG_MMU
> +int vm_area_map_pages(struct vm_struct *area, unsigned long start,
> +                     unsigned long end, struct page **pages);
> +void vm_area_unmap_pages(struct vm_struct *area, unsigned long start,
> +                        unsigned long end);
>  void vunmap_range(unsigned long addr, unsigned long end);
>  static inline void set_vm_flush_reset_perms(void *addr)
>  {
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index f42f98a127d5..e5b8c70950bc 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -648,6 +648,58 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
>  	return err;
>  }
>
> +static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
> +				unsigned long end)
> +{
> +	might_sleep();

This interface and in general VM_SPARSE would be useful for
dynamically grown kernel stacks [1]. However, the might_sleep() here
would be a problem. We would need to be able to handle
vm_area_map_pages() from interrupt disabled context therefore no
sleeping. The caller would need to guarantee that the page tables are
pre-allocated before the mapping.

Pasha

[1] https://lore.kernel.org/all/CA+CK2bBYt9RAVqASB2eLyRQxYT5aiL0fGhUu3TumQCyJCNTWvw@mail.gmail.com
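For illustration, a minimal sketch of why the might_sleep() is an obstacle for that use case: vm_area_map_pages() goes through check_sparse_vm_area(), and vmap_pages_range() may allocate intermediate page tables, so calling it with interrupts disabled (as a stack-fault path would) is not allowed. The function below is a made-up example, not something in the patch:

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/irqflags.h>

/* Hypothetical caller, illustration only: this pattern would trigger the
 * might_sleep() splat, because vm_area_map_pages() may allocate
 * intermediate page tables and take sleeping locks. */
static int bad_map_from_atomic(struct vm_struct *area, unsigned long kaddr,
			       struct page **page)
{
	unsigned long flags;
	int err;

	local_irq_save(flags);
	/* BAD: atomic context, but the callee may sleep */
	err = vm_area_map_pages(area, kaddr, kaddr + PAGE_SIZE, page);
	local_irq_restore(flags);
	return err;
}
```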
On Wed, Mar 6, 2024 at 1:04 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Mon, Mar 4, 2024 at 10:05 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > vmap/vmalloc APIs are used to map a set of pages into contiguous kernel
> > virtual space.
> >
> > get_vm_area() with appropriate flag is used to request an area of kernel
> > address range. It's used for vmalloc, vmap, ioremap, xen use cases.
> > - vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag.
> > - the areas created by vmap() function should be tagged with VM_MAP.
> > - ioremap areas are tagged with VM_IOREMAP.
> >
> > BPF would like to extend the vmap API to implement a lazily-populated
> > sparse, yet contiguous kernel virtual space. Introduce VM_SPARSE flag
> > and vm_area_map_pages(area, start_addr, count, pages) API to map a set
> > of pages within a given area.
> > It has the same sanity checks as vmap() does.
> > It also checks that get_vm_area() was created with VM_SPARSE flag
> > which identifies such areas in /proc/vmallocinfo
> > and returns zero pages on read through /proc/kcore.
> >
> > The next commits will introduce bpf_arena which is a sparsely populated
> > shared memory region between bpf program and user space process. It will
> > map privately-managed pages into a sparse vm area with the following steps:
> >
> > // request virtual memory region during bpf prog verification
> > area = get_vm_area(area_size, VM_SPARSE);
> >
> > // on demand
> > vm_area_map_pages(area, kaddr, kend, pages);
> > vm_area_unmap_pages(area, kaddr, kend);
> >
> > // after bpf program is detached and unloaded
> > free_vm_area(area);
> >
> > Signed-off-by: Alexei Starovoitov <ast@kernel.org>
> > ---
> >  include/linux/vmalloc.h |  5 ++++
> >  mm/vmalloc.c            | 59 +++++++++++++++++++++++++++++++++++++++--
> >  2 files changed, 62 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index c720be70c8dd..0f72c85a377b 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -35,6 +35,7 @@ struct iov_iter; /* in uio.h */
> >  #else
> >  #define VM_DEFER_KMEMLEAK 0
> >  #endif
> > +#define VM_SPARSE 0x00001000 /* sparse vm_area. not all pages are present. */
> >
> >  /* bits [20..32] reserved for arch specific ioremap internals */
> >
> > @@ -232,6 +233,10 @@ static inline bool is_vm_area_hugepages(const void *addr)
> >  }
> >
> >  #ifdef CONFIG_MMU
> > +int vm_area_map_pages(struct vm_struct *area, unsigned long start,
> > +                     unsigned long end, struct page **pages);
> > +void vm_area_unmap_pages(struct vm_struct *area, unsigned long start,
> > +                        unsigned long end);
> >  void vunmap_range(unsigned long addr, unsigned long end);
> >  static inline void set_vm_flush_reset_perms(void *addr)
> >  {
> > diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> > index f42f98a127d5..e5b8c70950bc 100644
> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -648,6 +648,58 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
> >  	return err;
> >  }
> >
> > +static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
> > +				unsigned long end)
> > +{
> > +	might_sleep();
>
> This interface and in general VM_SPARSE would be useful for
> dynamically grown kernel stacks [1]. However, the might_sleep() here
> would be a problem. We would need to be able to handle
> vm_area_map_pages() from interrupt disabled context therefore no
> sleeping. The caller would need to guarantee that the page tables are
> pre-allocated before the mapping.

Sounds like we'd need to differentiate two kinds of sparse regions.
One that is really sparse where page tables are not populated (bpf use case)
and another where only the pte level might be empty.
Only the latter one will be usable for such auto-grow stacks.

Months back I played with this idea:
https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
that
"Make vmap_pages_range() allocate page tables down to the last (PTE) level."
Essentially pass NULL instead of 'pages' into vmap_pages_range()
and it will populate all levels except the last.

Then the page fault handler can service a fault in auto-growing stack
area if it has a page stashed in some per-cpu free list.
I suspect this is something you might need for
"16k stack that is populated on fault",
plus a free list of 3 pages per-cpu,
and set_pte_at() in pf handler.
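A rough sketch of the fault-handler side described here, under the assumption that all page-table levels down to the PTE already exist for the faulting address, so nothing needs to sleep. The stash layout and kstack_pte_lookup() are hypothetical names, not existing kernel interfaces:

```c
#include <linux/mm.h>
#include <linux/percpu.h>
#include <linux/pgtable.h>

struct kstack_stash {
	struct page	*pages[3];	/* "free list of 3 pages per-cpu" */
	int		nr;
};
static DEFINE_PER_CPU(struct kstack_stash, kstack_stash);

/* hypothetical PTE lookup, along the lines sketched later in this thread */
static pte_t *kstack_pte_lookup(unsigned long addr);

static bool kstack_handle_fault(unsigned long addr)
{
	struct kstack_stash *stash = this_cpu_ptr(&kstack_stash);
	struct page *page;
	pte_t *pte;

	if (!stash->nr)
		return false;		/* nothing stashed, cannot service */
	page = stash->pages[--stash->nr];

	/* The PTE is assumed to exist because the intermediate levels were
	 * pre-populated when the stack area was created. */
	pte = kstack_pte_lookup(addr);
	set_pte_at(&init_mm, addr, pte, mk_pte(page, PAGE_KERNEL));
	return true;
}
```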
> > This interface and in general VM_SPARSE would be useful for
> > dynamically grown kernel stacks [1]. However, the might_sleep() here
> > would be a problem. We would need to be able to handle
> > vm_area_map_pages() from interrupt disabled context therefore no
> > sleeping. The caller would need to guarantee that the page tables are
> > pre-allocated before the mapping.
>
> Sounds like we'd need to differentiate two kinds of sparse regions.
> One that is really sparse where page tables are not populated (bpf use case)
> and another where only the pte level might be empty.
> Only the latter one will be usable for such auto-grow stacks.
>
> Months back I played with this idea:
> https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> that
> "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> Essentially pass NULL instead of 'pages' into vmap_pages_range()
> and it will populate all levels except the last.

Yes, this is what is needed, however, it can be a little simpler with
kernel stacks:
given that the first page in the vm_area is mapped when stack is first
allocated, and that the VA range is aligned to 16K, we actually are
guaranteed to have all page table levels down to pte pre-allocated
during that initial mapping. Therefore, we do not need to worry about
allocating them later during PFs.

> Then the page fault handler can service a fault in auto-growing stack
> area if it has a page stashed in some per-cpu free list.
> I suspect this is something you might need for
> "16k stack that is populated on fault",
> plus a free list of 3 pages per-cpu,
> and set_pte_at() in pf handler.

Yes, what you described is exactly what I am working on: using 3-pages
per-cpu to handle kstack page faults. The only thing that is missing
is that I would like to have the ability to call a non-sleeping
version of vm_area_map_pages().

Pasha
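A quick illustration of that guarantee under common assumptions (4K pages, 512 entries per page table, so one PTE table covers 2MB): a 16K-aligned, 16K-sized range can never straddle two PTE tables, so mapping the first page allocates every table level the other three pages need. The constants below are illustrative, not kernel macros:

```c
/* Illustrative arithmetic only; assumes 4K pages and 512-entry tables. */
#define EX_PAGE_SIZE	4096UL
#define EX_PTE_SPAN	(512UL * EX_PAGE_SIZE)	/* 2MB covered by one PTE table */
#define EX_THREAD_SIZE	(4UL * EX_PAGE_SIZE)	/* 16K stack */

static inline bool stack_stays_in_one_pte_table(unsigned long start)
{
	/* start is 16K-aligned, and 16K divides 2MB evenly, so this always holds */
	return (start / EX_PTE_SPAN) ==
	       ((start + EX_THREAD_SIZE - 1) / EX_PTE_SPAN);
}
```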
On Wed, Mar 6, 2024 at 1:46 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> > > This interface and in general VM_SPARSE would be useful for
> > > dynamically grown kernel stacks [1]. However, the might_sleep() here
> > > would be a problem. We would need to be able to handle
> > > vm_area_map_pages() from interrupt disabled context therefore no
> > > sleeping. The caller would need to guarantee that the page tables are
> > > pre-allocated before the mapping.
> >
> > Sounds like we'd need to differentiate two kinds of sparse regions.
> > One that is really sparse where page tables are not populated (bpf use case)
> > and another where only the pte level might be empty.
> > Only the latter one will be usable for such auto-grow stacks.
> >
> > Months back I played with this idea:
> > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> > that
> > "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> > Essentially pass NULL instead of 'pages' into vmap_pages_range()
> > and it will populate all levels except the last.
>
> Yes, this is what is needed, however, it can be a little simpler with
> kernel stacks:
> given that the first page in the vm_area is mapped when stack is first
> allocated, and that the VA range is aligned to 16K, we actually are
> guaranteed to have all page table levels down to pte pre-allocated
> during that initial mapping. Therefore, we do not need to worry about
> allocating them later during PFs.

Ahh. Found:
stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, ...

> > Then the page fault handler can service a fault in auto-growing stack
> > area if it has a page stashed in some per-cpu free list.
> > I suspect this is something you might need for
> > "16k stack that is populated on fault",
> > plus a free list of 3 pages per-cpu,
> > and set_pte_at() in pf handler.
>
> Yes, what you described is exactly what I am working on: using 3-pages
> per-cpu to handle kstack page faults. The only thing that is missing
> is that I would like to have the ability to call a non-sleeping
> version of vm_area_map_pages().

vm_area_map_pages() cannot be non-sleepable, since the [start, end)
range will dictate whether mid level allocs and locks are needed.

Instead in alloc_thread_stack_node() you'd need a flavor
of get_vm_area() that can align the range to THREAD_ALIGN.
Then immediately call _sleepable_ vm_area_map_pages() to populate
the first page and later set_pte_at() the other pages on demand
from the fault handler.
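For illustration, the allocation-time flow suggested above might look roughly like this; get_vm_area_aligned() is a hypothetical flavor of get_vm_area() (it does not exist today), and error handling is abbreviated:

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>
#include <linux/thread_info.h>

/* Sketch only: map the first stack page eagerly from sleepable context so
 * the intermediate page tables exist; the rest is left to the fault handler. */
static struct vm_struct *alloc_sparse_stack(struct page *first_page)
{
	struct page *pages[1] = { first_page };
	struct vm_struct *area;
	unsigned long start;

	/* hypothetical flavor of get_vm_area() honoring THREAD_ALIGN */
	area = get_vm_area_aligned(THREAD_SIZE, THREAD_ALIGN, VM_SPARSE);
	if (!area)
		return NULL;

	start = (unsigned long)area->addr;
	/* sleepable: may allocate intermediate page tables */
	if (vm_area_map_pages(area, start, start + PAGE_SIZE, pages)) {
		free_vm_area(area);
		return NULL;
	}
	/* remaining pages are installed later with set_pte_at() on fault */
	return area;
}
```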
On Wed, Mar 6, 2024 at 5:13 PM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> On Wed, Mar 6, 2024 at 1:46 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> >
> > > > This interface and in general VM_SPARSE would be useful for
> > > > dynamically grown kernel stacks [1]. However, the might_sleep() here
> > > > would be a problem. We would need to be able to handle
> > > > vm_area_map_pages() from interrupt disabled context therefore no
> > > > sleeping. The caller would need to guarantee that the page tables are
> > > > pre-allocated before the mapping.
> > >
> > > Sounds like we'd need to differentiate two kinds of sparse regions.
> > > One that is really sparse where page tables are not populated (bpf use case)
> > > and another where only the pte level might be empty.
> > > Only the latter one will be usable for such auto-grow stacks.
> > >
> > > Months back I played with this idea:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> > > that
> > > "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> > > Essentially pass NULL instead of 'pages' into vmap_pages_range()
> > > and it will populate all levels except the last.
> >
> > Yes, this is what is needed, however, it can be a little simpler with
> > kernel stacks:
> > given that the first page in the vm_area is mapped when stack is first
> > allocated, and that the VA range is aligned to 16K, we actually are
> > guaranteed to have all page table levels down to pte pre-allocated
> > during that initial mapping. Therefore, we do not need to worry about
> > allocating them later during PFs.
>
> Ahh. Found:
> stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, ...
>
> > > Then the page fault handler can service a fault in auto-growing stack
> > > area if it has a page stashed in some per-cpu free list.
> > > I suspect this is something you might need for
> > > "16k stack that is populated on fault",
> > > plus a free list of 3 pages per-cpu,
> > > and set_pte_at() in pf handler.
> >
> > Yes, what you described is exactly what I am working on: using 3-pages
> > per-cpu to handle kstack page faults. The only thing that is missing
> > is that I would like to have the ability to call a non-sleeping
> > version of vm_area_map_pages().
>
> vm_area_map_pages() cannot be non-sleepable, since the [start, end)
> range will dictate whether mid level allocs and locks are needed.
>
> Instead in alloc_thread_stack_node() you'd need a flavor
> of get_vm_area() that can align the range to THREAD_ALIGN.
> Then immediately call _sleepable_ vm_area_map_pages() to populate
> the first page and later set_pte_at() the other pages on demand
> from the fault handler.

We still need to get to PTE level to use set_pte_at(). So, either
store it in task_struct for faster PF handling, or add another
non-sleeping vmap function that will do something like this:

vm_area_set_page_at(addr, page)
{
	pgd = pgd_offset_k(addr)
	p4d = vunmap_p4d_range(pgd, addr)
	pud = pud_offset(p4d, addr)
	pmd = pmd_offset(pud, addr)
	pte = pte_offset_kernel(pmd, addr)

	set_pte_at(init_mm, addr, pte, mk_pte(page...));
}

Pasha
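For reference, a compilable take on the pseudocode above, built from the standard page-table offset helpers (the sketch names vunmap_p4d_range(), which is an unmap routine in mm/vmalloc.c rather than a lookup helper). It is still only illustrative: it assumes every level down to the PTE is already populated and ignores huge-page entries, which a real helper would need to check:

```c
#include <linux/mm.h>
#include <linux/pgtable.h>

/* Illustrative non-sleeping PTE install; no error or huge-page handling. */
static void vm_area_set_page_at(unsigned long addr, struct page *page)
{
	pgd_t *pgd = pgd_offset_k(addr);
	p4d_t *p4d = p4d_offset(pgd, addr);
	pud_t *pud = pud_offset(p4d, addr);
	pmd_t *pmd = pmd_offset(pud, addr);
	pte_t *pte = pte_offset_kernel(pmd, addr);

	set_pte_at(&init_mm, addr, pte, mk_pte(page, PAGE_KERNEL));
}
```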
On Mon, Mar 4, 2024 at 10:05 PM Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> From: Alexei Starovoitov <ast@kernel.org>
>
> vmap/vmalloc APIs are used to map a set of pages into contiguous kernel
> virtual space.
>
> get_vm_area() with appropriate flag is used to request an area of kernel
> address range. It's used for vmalloc, vmap, ioremap, xen use cases.
> - vmalloc use case dominates the usage. Such vm areas have VM_ALLOC flag.
> - the areas created by vmap() function should be tagged with VM_MAP.
> - ioremap areas are tagged with VM_IOREMAP.
>
> BPF would like to extend the vmap API to implement a lazily-populated
> sparse, yet contiguous kernel virtual space. Introduce VM_SPARSE flag
> and vm_area_map_pages(area, start_addr, count, pages) API to map a set
> of pages within a given area.
> It has the same sanity checks as vmap() does.
> It also checks that get_vm_area() was created with VM_SPARSE flag
> which identifies such areas in /proc/vmallocinfo
> and returns zero pages on read through /proc/kcore.
>
> The next commits will introduce bpf_arena which is a sparsely populated
> shared memory region between bpf program and user space process. It will
> map privately-managed pages into a sparse vm area with the following steps:
>
> // request virtual memory region during bpf prog verification
> area = get_vm_area(area_size, VM_SPARSE);
>
> // on demand
> vm_area_map_pages(area, kaddr, kend, pages);
> vm_area_unmap_pages(area, kaddr, kend);
>
> // after bpf program is detached and unloaded
> free_vm_area(area);
>
> Signed-off-by: Alexei Starovoitov <ast@kernel.org>

Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
On Wed, Mar 6, 2024 at 2:57 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
>
> On Wed, Mar 6, 2024 at 5:13 PM Alexei Starovoitov
> <alexei.starovoitov@gmail.com> wrote:
> >
> > On Wed, Mar 6, 2024 at 1:46 PM Pasha Tatashin <pasha.tatashin@soleen.com> wrote:
> > >
> > > > > This interface and in general VM_SPARSE would be useful for
> > > > > dynamically grown kernel stacks [1]. However, the might_sleep() here
> > > > > would be a problem. We would need to be able to handle
> > > > > vm_area_map_pages() from interrupt disabled context therefore no
> > > > > sleeping. The caller would need to guarantee that the page tables are
> > > > > pre-allocated before the mapping.
> > > >
> > > > Sounds like we'd need to differentiate two kinds of sparse regions.
> > > > One that is really sparse where page tables are not populated (bpf use case)
> > > > and another where only the pte level might be empty.
> > > > Only the latter one will be usable for such auto-grow stacks.
> > > >
> > > > Months back I played with this idea:
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?&id=ce63949a879f2f26c1c1834303e6dfbfb79d1fbd
> > > > that
> > > > "Make vmap_pages_range() allocate page tables down to the last (PTE) level."
> > > > Essentially pass NULL instead of 'pages' into vmap_pages_range()
> > > > and it will populate all levels except the last.
> > >
> > > Yes, this is what is needed, however, it can be a little simpler with
> > > kernel stacks:
> > > given that the first page in the vm_area is mapped when stack is first
> > > allocated, and that the VA range is aligned to 16K, we actually are
> > > guaranteed to have all page table levels down to pte pre-allocated
> > > during that initial mapping. Therefore, we do not need to worry about
> > > allocating them later during PFs.
> >
> > Ahh. Found:
> > stack = __vmalloc_node_range(THREAD_SIZE, THREAD_ALIGN, ...
> >
> > > > Then the page fault handler can service a fault in auto-growing stack
> > > > area if it has a page stashed in some per-cpu free list.
> > > > I suspect this is something you might need for
> > > > "16k stack that is populated on fault",
> > > > plus a free list of 3 pages per-cpu,
> > > > and set_pte_at() in pf handler.
> > >
> > > Yes, what you described is exactly what I am working on: using 3-pages
> > > per-cpu to handle kstack page faults. The only thing that is missing
> > > is that I would like to have the ability to call a non-sleeping
> > > version of vm_area_map_pages().
> >
> > vm_area_map_pages() cannot be non-sleepable, since the [start, end)
> > range will dictate whether mid level allocs and locks are needed.
> >
> > Instead in alloc_thread_stack_node() you'd need a flavor
> > of get_vm_area() that can align the range to THREAD_ALIGN.
> > Then immediately call _sleepable_ vm_area_map_pages() to populate
> > the first page and later set_pte_at() the other pages on demand
> > from the fault handler.
>
> We still need to get to PTE level to use set_pte_at(). So, either
> store it in task_struct for faster PF handling, or add another
> non-sleeping vmap function that will do something like this:
>
> vm_area_set_page_at(addr, page)
> {
>	pgd = pgd_offset_k(addr)
>	p4d = vunmap_p4d_range(pgd, addr)
>	pud = pud_offset(p4d, addr)
>	pmd = pmd_offset(pud, addr)
>	pte = pte_offset_kernel(pmd, addr)
>
>	set_pte_at(init_mm, addr, pte, mk_pte(page...));
> }

Right. There are several flavors of this logic across the tree.
What you're proposing is pretty much vmalloc_to_page() that
returns pte even if !pte_present, instead of a page.
x86 is doing mostly the same in lookup_address() fwiw.
Good opportunity to clean all this up and share the code.
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..0f72c85a377b 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -35,6 +35,7 @@ struct iov_iter; /* in uio.h */
 #else
 #define VM_DEFER_KMEMLEAK	0
 #endif
+#define VM_SPARSE		0x00001000	/* sparse vm_area. not all pages are present. */
 
 /* bits [20..32] reserved for arch specific ioremap internals */
 
@@ -232,6 +233,10 @@ static inline bool is_vm_area_hugepages(const void *addr)
 }
 
 #ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long start,
+		      unsigned long end, struct page **pages);
+void vm_area_unmap_pages(struct vm_struct *area, unsigned long start,
+			 unsigned long end);
 void vunmap_range(unsigned long addr, unsigned long end);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f42f98a127d5..e5b8c70950bc 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -648,6 +648,58 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+static int check_sparse_vm_area(struct vm_struct *area, unsigned long start,
+				unsigned long end)
+{
+	might_sleep();
+	if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+		return -EINVAL;
+	if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+		return -EINVAL;
+	if (WARN_ON_ONCE(!(area->flags & VM_SPARSE)))
+		return -EINVAL;
+	if ((end - start) >> PAGE_SHIFT > totalram_pages())
+		return -E2BIG;
+	if (start < (unsigned long)area->addr ||
+	    (void *)end > area->addr + get_vm_area_size(area))
+		return -ERANGE;
+	return 0;
+}
+
+/**
+ * vm_area_map_pages - map pages inside given sparse vm_area
+ * @area: vm_area
+ * @start: start address inside vm_area
+ * @end: end address inside vm_area
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long start,
+		      unsigned long end, struct page **pages)
+{
+	int err;
+
+	err = check_sparse_vm_area(area, start, end);
+	if (err)
+		return err;
+
+	return vmap_pages_range(start, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given sparse vm_area
+ * @area: vm_area
+ * @start: start address inside vm_area
+ * @end: end address inside vm_area
+ */
+void vm_area_unmap_pages(struct vm_struct *area, unsigned long start,
+			 unsigned long end)
+{
+	if (check_sparse_vm_area(area, start, end))
+		return;
+
+	vunmap_range(start, end);
+}
+
 int is_vmalloc_or_module_addr(const void *x)
 {
 	/*
@@ -3822,9 +3874,9 @@ long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 
 		if (flags & VMAP_RAM)
 			copied = vmap_ram_vread_iter(iter, addr, n, flags);
-		else if (!(vm && (vm->flags & VM_IOREMAP)))
+		else if (!(vm && (vm->flags & (VM_IOREMAP | VM_SPARSE))))
 			copied = aligned_vread_iter(iter, addr, n);
-		else /* IOREMAP area is treated as memory hole */
+		else /* IOREMAP | SPARSE area is treated as memory hole */
 			copied = zero_iter(iter, n);
 
 		addr += copied;
@@ -4415,6 +4467,9 @@ static int s_show(struct seq_file *m, void *p)
 	if (v->flags & VM_IOREMAP)
 		seq_puts(m, " ioremap");
 
+	if (v->flags & VM_SPARSE)
+		seq_puts(m, " sparse");
+
 	if (v->flags & VM_ALLOC)
 		seq_puts(m, " vmalloc");
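For completeness, a minimal usage sketch of the API added by this patch, following the steps from the commit message; the 4MB size and the caller-supplied page array are made up for illustration, and error handling is abbreviated:

```c
#include <linux/vmalloc.h>
#include <linux/sizes.h>
#include <linux/mm.h>

/* Sketch only: reserve a sparse KVA region, map some pages on demand,
 * unmap them, then release the region. */
static int sparse_area_demo(struct page **pages, unsigned long npages)
{
	struct vm_struct *area;
	unsigned long kaddr, kend;
	int err;

	/* reserve kernel virtual address space only; nothing is mapped yet */
	area = get_vm_area(SZ_4M, VM_SPARSE);
	if (!area)
		return -ENOMEM;

	/* map a chunk on demand; must stay inside the area and be sleepable */
	kaddr = (unsigned long)area->addr;
	kend  = kaddr + npages * PAGE_SIZE;
	err = vm_area_map_pages(area, kaddr, kend, pages);
	if (err)
		goto out_free;

	/* ... use the mapping ... */

	vm_area_unmap_pages(area, kaddr, kend);
out_free:
	free_vm_area(area);
	return err;
}
```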