Message ID | 20190514235111.2817276-2-guro@fb.com (mailing list archive)
---|---
State | New, archived
Series | [RESEND] mm: show number of vmalloc pages in /proc/meminfo
On 05/15/2019 05:21 AM, Roman Gushchin wrote:
> Vmalloc() is getting more and more used these days (kernel stacks,
> bpf and percpu allocator are new top users), and the total %
> of memory consumed by vmalloc() can be pretty significant
> and changes dynamically.
>
> /proc/meminfo is the best place to display this information:
> its top goal is to show top consumers of the memory.
>
> Since the VmallocUsed field in /proc/meminfo is not in use
> for quite a long time (it has been defined to 0 by the
> commit a5ad88ce8c7f ("mm: get rid of 'vmalloc_info' from
> /proc/meminfo")), let's reuse it for showing the actual
> physical memory consumption of vmalloc().

The primary concern addressed by a5ad88ce8c7f was that computing
get_vmalloc_info() took a long time. Here, by contrast, the read returns an
already-maintained value, which is added to and subtracted from during the
__vmalloc_area_node()/__vunmap() cycle. Hence reading it should not cost
much, unlike get_vmalloc_info(). But isn't this similar to the caching
solution Linus mentioned?
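The trade-off described above can be sketched in a few lines of userspace C.
This is an illustrative model only, not kernel code: walk_and_sum() stands in
for the old get_vmalloc_info() traversal, while the atomic counter mirrors
the one the patch maintains at allocation time.

/* Illustrative sketch of the two accounting approaches; not kernel code. */
#include <stdatomic.h>
#include <stdio.h>

#define NR_AREAS 3
static unsigned long area_pages[NR_AREAS] = { 4, 8, 2 };

/* Old approach (get_vmalloc_info-style): walk every area on each read. */
static unsigned long walk_and_sum(void)
{
	unsigned long sum = 0;
	for (int i = 0; i < NR_AREAS; i++)
		sum += area_pages[i];
	return sum;
}

/* Patch's approach: pay at alloc/free time, O(1) at read time. */
static atomic_ulong nr_vmalloc_pages;

int main(void)
{
	for (int i = 0; i < NR_AREAS; i++)	/* simulated allocations */
		atomic_fetch_add(&nr_vmalloc_pages, area_pages[i]);

	printf("walk: %lu pages, counter: %lu pages\n",
	       walk_and_sum(), atomic_load(&nr_vmalloc_pages));
	return 0;
}

Both report the same total; the difference is that the counter makes the
read constant-time, regardless of how many areas are mapped.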
Hi Roman,

On Wed, May 15, 2019 at 8:51 AM Roman Gushchin <guro@fb.com> wrote:
>
> Vmalloc() is getting more and more used these days (kernel stacks,
> bpf and percpu allocator are new top users), and the total %
> of memory consumed by vmalloc() can be pretty significant
> and changes dynamically.
>
> /proc/meminfo is the best place to display this information:
> its top goal is to show top consumers of the memory.
>
> Since the VmallocUsed field in /proc/meminfo is not in use
> for quite a long time (it has been defined to 0 by the
> commit a5ad88ce8c7f ("mm: get rid of 'vmalloc_info' from
> /proc/meminfo")), let's reuse it for showing the actual
> physical memory consumption of vmalloc().
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Minchan Kim <minchan@kernel.org>

What is the status of this patch? Android needs it: so far it has been
gathering vmalloc pages from /proc/vmallocinfo, which is too slow.
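As a rough sketch of the userspace side (not code from this thread): once
the field is populated, reading a single /proc/meminfo line replaces
summing every mapping listed in /proc/vmallocinfo.

/* Rough sketch: read VmallocUsed from /proc/meminfo instead of
 * parsing one /proc/vmallocinfo line per mapping. Illustrative only. */
#include <stdio.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		unsigned long kb;
		/* One O(1) field vs. a walk over all vmalloc mappings. */
		if (sscanf(line, "VmallocUsed: %lu kB", &kb) == 1) {
			printf("vmalloc: %lu kB\n", kb);
			break;
		}
	}
	fclose(f);
	return 0;
}

Without the patch this prints 0, since the field has been hardwired to zero
since a5ad88ce8c7f.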
On Tue, Jul 09, 2019 at 02:59:42PM +0900, Minchan Kim wrote:
> Hi Roman,
>
> On Wed, May 15, 2019 at 8:51 AM Roman Gushchin <guro@fb.com> wrote:
> >
> > Vmalloc() is getting more and more used these days (kernel stacks,
> > bpf and percpu allocator are new top users), and the total %
> > of memory consumed by vmalloc() can be pretty significant
> > and changes dynamically.
> >
> > /proc/meminfo is the best place to display this information:
> > its top goal is to show top consumers of the memory.
> >
> > Since the VmallocUsed field in /proc/meminfo is not in use
> > for quite a long time (it has been defined to 0 by the
> > commit a5ad88ce8c7f ("mm: get rid of 'vmalloc_info' from
> > /proc/meminfo")), let's reuse it for showing the actual
> > physical memory consumption of vmalloc().
> >
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> > Acked-by: Vlastimil Babka <vbabka@suse.cz>
>
> Acked-by: Minchan Kim <minchan@kernel.org>
>
> What is the status of this patch? Android needs it: so far it has been
> gathering vmalloc pages from /proc/vmallocinfo, which is too slow.

Andrew, can you please pick this one up? It was in the mm tree already, but
was dropped because some other, unrelated patches in the series conflicted
with some x86 changes. This patch is useful by itself and doesn't depend on
anything else.

Thanks!
diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 568d90e17c17..465ea0153b2a 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -120,7 +120,7 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 	show_val_kb(m, "Committed_AS:   ", committed);
 	seq_printf(m, "VmallocTotal:   %8lu kB\n",
 		   (unsigned long)VMALLOC_TOTAL >> 10);
-	show_val_kb(m, "VmallocUsed:    ", 0ul);
+	show_val_kb(m, "VmallocUsed:    ", vmalloc_nr_pages());
 	show_val_kb(m, "VmallocChunk:   ", 0ul);
 	show_val_kb(m, "Percpu:         ", pcpu_nr_pages());
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 51e131245379..9b21d0047710 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -72,10 +72,12 @@ extern void vm_unmap_aliases(void);
 
 #ifdef CONFIG_MMU
 extern void __init vmalloc_init(void);
+extern unsigned long vmalloc_nr_pages(void);
 #else
 static inline void vmalloc_init(void)
 {
 }
+static inline unsigned long vmalloc_nr_pages(void) { return 0; }
 #endif
 
 extern void *vmalloc(unsigned long size);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8d4907865614..65871ddba497 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -398,6 +398,13 @@ static void purge_vmap_area_lazy(void);
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 static unsigned long lazy_max_pages(void);
 
+static atomic_long_t nr_vmalloc_pages;
+
+unsigned long vmalloc_nr_pages(void)
+{
+	return atomic_long_read(&nr_vmalloc_pages);
+}
+
 static struct vmap_area *__find_vmap_area(unsigned long addr)
 {
 	struct rb_node *n = vmap_area_root.rb_node;
@@ -2214,6 +2221,7 @@ static void __vunmap(const void *addr, int deallocate_pages)
 			BUG_ON(!page);
 			__free_pages(page, 0);
 		}
+		atomic_long_sub(area->nr_pages, &nr_vmalloc_pages);
 
 		kvfree(area->pages);
 	}
@@ -2390,12 +2398,14 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		if (unlikely(!page)) {
 			/* Successfully allocated i pages, free them in __vunmap() */
 			area->nr_pages = i;
+			atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 			goto fail;
 		}
 		area->pages[i] = page;
 		if (gfpflags_allow_blocking(gfp_mask|highmem_mask))
 			cond_resched();
 	}
+	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
 	if (map_vm_area(area, prot, pages))
 		goto fail;
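Note that the accounting stays balanced even on the partial-failure path:
__vmalloc_area_node() sets area->nr_pages = i and adds it to
nr_vmalloc_pages before the goto fail, so the atomic_long_sub() in
__vunmap() later removes exactly what was added. With the patch applied,
the field reads back non-zero; sample output below is illustrative
(VmallocUsed value made up, VmallocTotal as on a typical x86_64 box):

$ grep Vmalloc /proc/meminfo
VmallocTotal:   34359738367 kB
VmallocUsed:       38276 kB
VmallocChunk:          0 kB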