
[v3,1/3] mm/vmalloc: remove preempt_disable/enable when do preloading

Message ID 20191016095438.12391-1-urezki@gmail.com (mailing list archive)
State New, archived
Series [v3,1/3] mm/vmalloc: remove preempt_disable/enable when do preloading

Commit Message

Uladzislau Rezki Oct. 16, 2019, 9:54 a.m. UTC
Some background. Preemption was previously disabled to guarantee
that a preloaded object is available for the CPU it was stored for.

The aim was to not allocate in atomic context when the spinlock
is taken later, for regular vmap allocations. But that approach
conflicts with the CONFIG_PREEMPT_RT philosophy: in a
CONFIG_PREEMPT_RT kernel spinlocks are sleeping locks, so calling
spin_lock() with preemption disabled is forbidden there.

Therefore, get rid of preempt_disable() and preempt_enable() when
the preload is done for splitting purposes. As a result, we no
longer guarantee that a CPU is preloaded; instead, with this change
we minimize the cases when it is not.

For example, I ran a special test case that follows the preload
pattern and path. 20 "unbind" threads run it and each does
1000000 allocations. On average, a CPU was not preloaded only 3.5
times per 1000000 allocations. So it can happen, but the number is
negligible.

V2 -> V3:
    - update the commit message

V1 -> V2:
  - move the __this_cpu_cmpxchg check to where spin_lock is taken,
    as proposed by Andrew Morton
  - add more explanation regarding preloading
  - adjust and move some comments

Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Daniel Wagner <dwagner@suse.de>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 37 ++++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 17 deletions(-)
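
[Editor's note: the test case mentioned in the commit message is not
included in the posting. As an illustration only, a hypothetical
throwaway-module sketch of the described pattern (20 unbound kthreads,
1000000 vmalloc/vfree cycles each) could look like the following; all
names here (vmap_stress_*) are invented, and teardown plus error
handling are omitted:

<snip>
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

#define NR_THREADS	20
#define NR_ALLOCS	1000000

/* Unbound worker: each iteration goes through alloc_vmap_area(). */
static int vmap_stress_fn(void *unused)
{
	int i;

	for (i = 0; i < NR_ALLOCS && !kthread_should_stop(); i++) {
		void *p = vmalloc(PAGE_SIZE);

		if (p)
			vfree(p);

		/* Give the scheduler a chance to migrate this thread. */
		cond_resched();
	}

	return 0;
}

static int __init vmap_stress_init(void)
{
	int i;

	/* kthread_run() threads are not bound to any particular CPU. */
	for (i = 0; i < NR_THREADS; i++)
		kthread_run(vmap_stress_fn, NULL, "vmap-stress/%d", i);

	return 0;
}
module_init(vmap_stress_init);

MODULE_LICENSE("GPL");
<snip>]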

Comments

Michal Hocko Oct. 16, 2019, 10:59 a.m. UTC | #1
On Wed 16-10-19 11:54:36, Uladzislau Rezki (Sony) wrote:
> Some background. Preemption was previously disabled to guarantee
> that a preloaded object is available for the CPU it was stored for.

Probably good to be explicit that this has been achieved by
combining disabling preemption with taking the spin lock while
ne_fit_preload_node is checked and, where necessary, repopulated, right?

> The aim was to not allocate in atomic context when the spinlock
> is taken later, for regular vmap allocations. But that approach
> conflicts with the CONFIG_PREEMPT_RT philosophy: in a
> CONFIG_PREEMPT_RT kernel spinlocks are sleeping locks, so calling
> spin_lock() with preemption disabled is forbidden there.
> 
> Therefore, get rid of preempt_disable() and preempt_enable() when
> the preload is done for splitting purposes. As a result, we no
> longer guarantee that a CPU is preloaded; instead, with this change
> we minimize the cases when it is not.

by populating the per-CPU preload pointer under the vmap_area_lock.
This implies that at least each caller which has done the preallocation
will not fall back to an atomic allocation later. It is possible that
the preallocation would be pointless, or that no preallocation is done
because of the race, but your data shows that this is really rare.
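
[Editor's note: the consumer side referred to above is the NE_FIT_TYPE
branch of adjust_va_to_fit_type() in mm/vmalloc.c. A condensed sketch,
with the surrounding split logic elided:

<snip>
	if (type == NE_FIT_TYPE) {
		/* Take the object preloaded for this CPU, if any. */
		lva = __this_cpu_xchg(ne_fit_preload_node, NULL);
		if (unlikely(!lva)) {
			/*
			 * Rare case: this CPU was not preloaded, e.g.
			 * due to migration or the race before
			 * vmap_area_lock was taken. Allocate atomically
			 * under the lock.
			 */
			lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
			if (!lva)
				return -1;
		}
		/* ... split the free area and insert "lva" ... */
	}
<snip>]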

> For example, I ran a special test case that follows the preload
> pattern and path. 20 "unbind" threads run it and each does
> 1000000 allocations. On average, a CPU was not preloaded only 3.5
> times per 1000000 allocations. So it can happen, but the number is
> negligible.
> 
> V2 -> V3:
>     - update the commit message
> 
> V1 -> V2:
>   - move the __this_cpu_cmpxchg check to where spin_lock is taken,
>     as proposed by Andrew Morton
>   - add more explanation regarding preloading
>   - adjust and move some comments
> 
> Fixes: 82dd23e84be3 ("mm/vmalloc.c: preload a CPU with one object for split purpose")
> Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
> Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Acked-by: Daniel Wagner <dwagner@suse.de>
> Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/vmalloc.c | 37 ++++++++++++++++++++-----------------
>  1 file changed, 20 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index e92ff5f7dd8b..b7b443bfdd92 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -1078,31 +1078,34 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
>  
>  retry:
>  	/*
> -	 * Preload this CPU with one extra vmap_area object to ensure
> -	 * that we have it available when fit type of free area is
> -	 * NE_FIT_TYPE.
> +	 * Preload this CPU with one extra vmap_area object. It is used
> +	 * when fit type of free area is NE_FIT_TYPE. Please note, it
> +	 * does not guarantee that an allocation occurs on a CPU that
> +	 * is preloaded, instead we minimize the case when it is not.
> +	 * It can happen because of cpu migration, because there is a
> +	 * race until the below spinlock is taken.
>  	 *
>  	 * The preload is done in non-atomic context, thus it allows us
>  	 * to use more permissive allocation masks to be more stable under
> -	 * low memory condition and high memory pressure.
> +	 * low memory condition and high memory pressure. In rare case,
> +	 * if not preloaded, GFP_NOWAIT is used.
>  	 *
> -	 * Even if it fails we do not really care about that. Just proceed
> -	 * as it is. "overflow" path will refill the cache we allocate from.
> +	 * Set "pva" to NULL here, because of "retry" path.
>  	 */
> -	preempt_disable();
> -	if (!__this_cpu_read(ne_fit_preload_node)) {
> -		preempt_enable();
> -		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
> -		preempt_disable();
> +	pva = NULL;
>  
> -		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
> -			if (pva)
> -				kmem_cache_free(vmap_area_cachep, pva);
> -		}
> -	}
> +	if (!this_cpu_read(ne_fit_preload_node))
> +		/*
> +		 * Even if it fails we do not really care about that.
> +		 * Just proceed as it is. If needed "overflow" path
> +		 * will refill the cache we allocate from.
> +		 */
> +		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
>  
>  	spin_lock(&vmap_area_lock);
> -	preempt_enable();
> +
> +	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
> +		kmem_cache_free(vmap_area_cachep, pva);
>  
>  	/*
>  	 * If an allocation fails, the "vend" address is
> -- 
> 2.20.1
>
Uladzislau Rezki Oct. 18, 2019, 9:37 a.m. UTC | #2
Hello, Michal.

Sorry for the late reply. See my comments inline below:

> On Wed 16-10-19 11:54:36, Uladzislau Rezki (Sony) wrote:
> > Some background. Preemption was previously disabled to guarantee
> > that a preloaded object is available for the CPU it was stored for.
> 
> Probably good to be explicit that this has been achieved by
> combining disabling preemption with taking the spin lock while
> ne_fit_preload_node is checked and, where necessary, repopulated, right?
> 
Right, I agree with your comment!


> > The aim was to not allocate in atomic context when the spinlock
> > is taken later, for regular vmap allocations. But that approach
> > conflicts with the CONFIG_PREEMPT_RT philosophy: in a
> > CONFIG_PREEMPT_RT kernel spinlocks are sleeping locks, so calling
> > spin_lock() with preemption disabled is forbidden there.
> > 
> > Therefore, get rid of preempt_disable() and preempt_enable() when
> > the preload is done for splitting purposes. As a result, we no
> > longer guarantee that a CPU is preloaded; instead, with this change
> > we minimize the cases when it is not.
> 
> by populating the per-CPU preload pointer under the vmap_area_lock.
> This implies that at least each caller which has done the preallocation
> will not fall back to an atomic allocation later. It is possible that
> the preallocation would be pointless, or that no preallocation is done
> because of the race, but your data shows that this is really rare.
> 
That makes sense to add. Please find the updated commit message below:

<snip>
mm/vmalloc: remove preempt_disable/enable when do preloading

Some background. Preemption was previously disabled to guarantee
that a preloaded object is available for the CPU it was stored for.
That was achieved by combining disabling preemption with taking the
spin lock while ne_fit_preload_node is checked.

The aim was to not allocate in atomic context when the spinlock
is taken later, for regular vmap allocations. But that approach
conflicts with the CONFIG_PREEMPT_RT philosophy: in a
CONFIG_PREEMPT_RT kernel spinlocks are sleeping locks, so calling
spin_lock() with preemption disabled is forbidden there.

Therefore, get rid of preempt_disable() and preempt_enable() when
the preload is done for splitting purposes. As a result, we no
longer guarantee that a CPU is preloaded; instead, with this change
we minimize the cases when it is not, by populating the per-CPU
preload pointer under the vmap_area_lock.

This implies that at least each caller that has done the preallocation
will not fall back to an atomic allocation later. It is possible
that the preallocation would be pointless, or that no preallocation
is done because of the race, but the data shows that this is really
rare.

For example, I ran a special test case that follows the preload
pattern and path. 20 "unbind" threads run it and each does
1000000 allocations. On average, a CPU was not preloaded only 3.5
times per 1000000 allocations. So it can happen, but the number is
negligible.

V2 -> V3:
    - update the commit message

V1 -> V2:
  - move the __this_cpu_cmpxchg check to where spin_lock is taken,
    as proposed by Andrew Morton
  - add more explanation regarding preloading
  - adjust and move some comments
<snip>

Do you agree with that?

Thank you!

--
Vlad Rezki

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e92ff5f7dd8b..b7b443bfdd92 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1078,31 +1078,34 @@ static struct vmap_area *alloc_vmap_area(unsigned long size,
 
 retry:
 	/*
-	 * Preload this CPU with one extra vmap_area object to ensure
-	 * that we have it available when fit type of free area is
-	 * NE_FIT_TYPE.
+	 * Preload this CPU with one extra vmap_area object. It is used
+	 * when fit type of free area is NE_FIT_TYPE. Please note, it
+	 * does not guarantee that an allocation occurs on a CPU that
+	 * is preloaded, instead we minimize the case when it is not.
+	 * It can happen because of cpu migration, because there is a
+	 * race until the below spinlock is taken.
 	 *
 	 * The preload is done in non-atomic context, thus it allows us
 	 * to use more permissive allocation masks to be more stable under
-	 * low memory condition and high memory pressure.
+	 * low memory condition and high memory pressure. In rare case,
+	 * if not preloaded, GFP_NOWAIT is used.
 	 *
-	 * Even if it fails we do not really care about that. Just proceed
-	 * as it is. "overflow" path will refill the cache we allocate from.
+	 * Set "pva" to NULL here, because of "retry" path.
 	 */
-	preempt_disable();
-	if (!__this_cpu_read(ne_fit_preload_node)) {
-		preempt_enable();
-		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
-		preempt_disable();
+	pva = NULL;
 
-		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva)) {
-			if (pva)
-				kmem_cache_free(vmap_area_cachep, pva);
-		}
-	}
+	if (!this_cpu_read(ne_fit_preload_node))
+		/*
+		 * Even if it fails we do not really care about that.
+		 * Just proceed as it is. If needed "overflow" path
+		 * will refill the cache we allocate from.
+		 */
+		pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);
 
 	spin_lock(&vmap_area_lock);
-	preempt_enable();
+
+	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
+		kmem_cache_free(vmap_area_cachep, pva);
 
 	/*
 	 * If an allocation fails, the "vend" address is
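
[Editor's note: for readers less familiar with PREEMPT_RT semantics, a
condensed before/after view of the pattern changed above. This is an
illustration distilled from the diff, not part of the patch:

<snip>
	/* Before: a bug on CONFIG_PREEMPT_RT. */
	preempt_disable();
	/*
	 * On PREEMPT_RT, spin_lock() is backed by a sleeping rt_mutex,
	 * and sleeping with preemption disabled is not allowed.
	 */
	spin_lock(&vmap_area_lock);

	/* After: the preload allocation runs in preemptible context... */
	pva = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, node);

	/* ...and the per-CPU slot is (re)populated under the lock. */
	spin_lock(&vmap_area_lock);
	if (pva && __this_cpu_cmpxchg(ne_fit_preload_node, NULL, pva))
		kmem_cache_free(vmap_area_cachep, pva);
<snip>]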