Message ID | 20190225203037.1317-3-guro@fb.com (mailing list archive) |
---|---|
State | New, archived |
Series | [1/3] mm: refactor __vunmap() to avoid duplicated call to find_vm_area() |
On 2/25/19 9:30 PM, Roman Gushchin wrote:
> alloc_vmap_area() is allocating memory for the vmap_area, and
> performing the actual lookup of the vm area and vmap_area
> initialization.
>
> This prevents us from using pre-allocated memory for the vmap_area
> structure, which can be used in some cases to minimize the number
> of required memory allocations.

Hmm, but that doesn't happen here or in the later patch, right? The only
caller of init_vmap_area() is alloc_vmap_area(). What am I missing?

> Let's keep the memory allocation part in alloc_vmap_area() and
> separate everything else into init_vmap_area().
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Reviewed-by: Matthew Wilcox <willy@infradead.org>
On Fri, Mar 01, 2019 at 03:43:19PM +0100, Vlastimil Babka wrote:
> On 2/25/19 9:30 PM, Roman Gushchin wrote:
> > alloc_vmap_area() is allocating memory for the vmap_area, and
> > performing the actual lookup of the vm area and vmap_area
> > initialization.
> >
> > This prevents us from using pre-allocated memory for the vmap_area
> > structure, which can be used in some cases to minimize the number
> > of required memory allocations.
>
> Hmm, but that doesn't happen here or in the later patch, right? The only
> caller of init_vmap_area() is alloc_vmap_area(). What am I missing?

So initially the patch was part of a bigger patchset, which tried
to minimize the number of separate allocations during vmalloc(),
e.g. by inlining vm_struct->pages into vm_struct for small areas.

I temporarily dropped the rest of the patchset for some rework,
but decided to leave this patch, because it looks like a nice refactoring
in any case, and also it has already been reviewed and acked by Matthew
and Johannes.

Thank you for looking into it!
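For illustration, here is a minimal sketch of the kind of caller this split is meant to enable. It is not part of the series; struct foo_area and foo_area_setup() are invented names, and the scenario (an area descriptor with an embedded vmap_area, initialized in place) is hypothetical, matching the intent described above:

/*
 * Hypothetical caller (not from the patchset): the vmap_area is
 * embedded in a larger, already-allocated descriptor, so it can be
 * initialized in place with init_vmap_area() instead of paying for
 * a second kmalloc inside alloc_vmap_area().
 */
struct foo_area {
	struct vmap_area va;		/* pre-allocated vmap_area */
	/* ... other per-area data ... */
};

static int foo_area_setup(struct foo_area *foo, unsigned long size)
{
	/* Returns 0 on success or a negative errno, as init_vmap_area() does. */
	return init_vmap_area(&foo->va, size, PAGE_SIZE,
			      VMALLOC_START, VMALLOC_END,
			      NUMA_NO_NODE, GFP_KERNEL);
}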
On 3/1/19 5:48 PM, Roman Gushchin wrote:
> On Fri, Mar 01, 2019 at 03:43:19PM +0100, Vlastimil Babka wrote:
>> On 2/25/19 9:30 PM, Roman Gushchin wrote:
>>> alloc_vmap_area() is allocating memory for the vmap_area, and
>>> performing the actual lookup of the vm area and vmap_area
>>> initialization.
>>>
>>> This prevents us from using pre-allocated memory for the vmap_area
>>> structure, which can be used in some cases to minimize the number
>>> of required memory allocations.
>>
>> Hmm, but that doesn't happen here or in the later patch, right? The only
>> caller of init_vmap_area() is alloc_vmap_area(). What am I missing?
>
> So initially the patch was part of a bigger patchset, which tried
> to minimize the number of separate allocations during vmalloc(),
> e.g. by inlining vm_struct->pages into vm_struct for small areas.
>
> I temporarily dropped the rest of the patchset for some rework,
> but decided to leave this patch, because it looks like a nice refactoring
> in any case, and also it has already been reviewed and acked by Matthew
> and Johannes.

OK then,

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> Thank you for looking into it!
On Wed, Apr 17, 2019 at 03:27:56PM +0200, Vlastimil Babka wrote:
> On 3/1/19 5:48 PM, Roman Gushchin wrote:
> > On Fri, Mar 01, 2019 at 03:43:19PM +0100, Vlastimil Babka wrote:
> >> On 2/25/19 9:30 PM, Roman Gushchin wrote:
> >>> alloc_vmap_area() is allocating memory for the vmap_area, and
> >>> performing the actual lookup of the vm area and vmap_area
> >>> initialization.
> >>>
> >>> This prevents us from using pre-allocated memory for the vmap_area
> >>> structure, which can be used in some cases to minimize the number
> >>> of required memory allocations.
> >>
> >> Hmm, but that doesn't happen here or in the later patch, right? The only
> >> caller of init_vmap_area() is alloc_vmap_area(). What am I missing?
> >
> > So initially the patch was part of a bigger patchset, which tried
> > to minimize the number of separate allocations during vmalloc(),
> > e.g. by inlining vm_struct->pages into vm_struct for small areas.
> >
> > I temporarily dropped the rest of the patchset for some rework,
> > but decided to leave this patch, because it looks like a nice refactoring
> > in any case, and also it has already been reviewed and acked by Matthew
> > and Johannes.
>
> OK then,
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thank you for looking into this and other patches from the series!

Btw, it looks like recent changes in the vmalloc code conflict with
this patch, so I'll drop it for now, and will resend the other two as v4.

Thanks!
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8f0179895fb5..f1f19d1105c4 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -395,16 +395,10 @@ static void purge_vmap_area_lazy(void);
 
 static BLOCKING_NOTIFIER_HEAD(vmap_notify_list);
 
-/*
- * Allocate a region of KVA of the specified size and alignment, within the
- * vstart and vend.
- */
-static struct vmap_area *alloc_vmap_area(unsigned long size,
-				unsigned long align,
-				unsigned long vstart, unsigned long vend,
-				int node, gfp_t gfp_mask)
+static int init_vmap_area(struct vmap_area *va, unsigned long size,
+			  unsigned long align, unsigned long vstart,
+			  unsigned long vend, int node, gfp_t gfp_mask)
 {
-	struct vmap_area *va;
 	struct rb_node *n;
 	unsigned long addr;
 	int purged = 0;
@@ -416,11 +410,6 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 	might_sleep();
 
-	va = kmalloc_node(sizeof(struct vmap_area),
-			gfp_mask & GFP_RECLAIM_MASK, node);
-	if (unlikely(!va))
-		return ERR_PTR(-ENOMEM);
-
 	/*
 	 * Only scan the relevant parts containing pointers to other objects
 	 * to avoid false negatives.
@@ -516,7 +505,7 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	BUG_ON(va->va_start < vstart);
 	BUG_ON(va->va_end > vend);
 
-	return va;
+	return 0;
 
 overflow:
 	spin_unlock(&vmap_area_lock);
@@ -538,8 +527,35 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 	if (!(gfp_mask & __GFP_NOWARN) && printk_ratelimit())
 		pr_warn("vmap allocation for size %lu failed: use vmalloc=<size> to increase size\n",
 			size);
-	kfree(va);
-	return ERR_PTR(-EBUSY);
+
+	return -EBUSY;
+}
+
+/*
+ * Allocate a region of KVA of the specified size and alignment, within the
+ * vstart and vend.
+ */
+static struct vmap_area *alloc_vmap_area(unsigned long size,
+					 unsigned long align,
+					 unsigned long vstart,
+					 unsigned long vend,
+					 int node, gfp_t gfp_mask)
+{
+	struct vmap_area *va;
+	int ret;
+
+	va = kmalloc_node(sizeof(struct vmap_area),
+			  gfp_mask & GFP_RECLAIM_MASK, node);
+	if (unlikely(!va))
+		return ERR_PTR(-ENOMEM);
+
+	ret = init_vmap_area(va, size, align, vstart, vend, node, gfp_mask);
+	if (ret) {
+		kfree(va);
+		return ERR_PTR(ret);
+	}
+
+	return va;
 }
 
 int register_vmap_purge_notifier(struct notifier_block *nb)
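After the refactoring, the external contract of alloc_vmap_area() is unchanged: it still returns either a valid vmap_area or an ERR_PTR() value, while init_vmap_area() reports success or failure as 0 / -EBUSY and leaves freeing of the structure to its caller. A simplified sketch of how existing callers keep using it (reduced to the essentials, not a verbatim excerpt from mm/vmalloc.c):

/*
 * Simplified illustration of an existing caller of alloc_vmap_area();
 * error handling is condensed and the surrounding context is omitted.
 */
struct vmap_area *va;

va = alloc_vmap_area(size, align, VMALLOC_START, VMALLOC_END,
		     node, gfp_mask);
if (IS_ERR(va))
	return NULL;	/* kmalloc failed (-ENOMEM) or no KVA found (-EBUSY) */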