mm, slab: periodically resched in drain_freelist()

Message ID b1808b92-86df-9f53-bfb2-8862a9c554e9@google.com (mailing list archive)
State New
Series mm, slab: periodically resched in drain_freelist()

Commit Message

David Rientjes Dec. 28, 2022, 6:05 a.m. UTC
drain_freelist() can be called with a very large number of slabs to free,
such as for kmem_cache_shrink(), or depending on various settings of the
slab cache when doing periodic reaping.

If there is a potentially long list of slabs to drain, periodically
schedule to ensure we aren't saturating the cpu for too long.

Signed-off-by: David Rientjes <rientjes@google.com>
---
 mm/slab.c | 2 ++
 1 file changed, 2 insertions(+)
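
For context, the new cond_resched() call lands at the tail of the slab-freeing loop in drain_freelist(). Below is a minimal sketch of the loop shape with the change applied; only the context lines visible in the hunk come from the patch itself, the rest of the loop body is an approximation and omits the freelist accounting done by the real function in mm/slab.c.

static int drain_freelist(struct kmem_cache *cache,
			  struct kmem_cache_node *n, int tofree)
{
	struct slab *slab;
	int nr_freed = 0;

	while (nr_freed < tofree && !list_empty(&n->slabs_free)) {
		raw_spin_lock_irq(&n->list_lock);
		if (list_empty(&n->slabs_free)) {
			raw_spin_unlock_irq(&n->list_lock);
			goto out;
		}
		/* Unlink one free slab while holding the per-node lock. */
		slab = list_last_entry(&n->slabs_free, struct slab, slab_list);
		list_del(&slab->slab_list);
		/* (free_slabs/total_slabs/free_objects accounting omitted) */

		/* Drop the lock before the potentially slow destruction. */
		raw_spin_unlock_irq(&n->list_lock);
		slab_destroy(cache, slab);
		nr_freed++;

		/*
		 * Added by this patch: yield the CPU between iterations so
		 * that draining a very long freelist does not starve other
		 * runnable tasks.
		 */
		cond_resched();
	}
out:
	return nr_freed;
}

Note that the call sits after raw_spin_unlock_irq(): cond_resched() may reschedule and must not be invoked while the per-node list_lock is held with interrupts disabled.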

Comments

Hyeonggon Yoo Dec. 29, 2022, 1:41 p.m. UTC | #1
On Tue, Dec 27, 2022 at 10:05:48PM -0800, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as for kmem_cache_shrink(), or depending on various settings of the
> slab cache when doing periodic reaping.
> 
> If there is a potentially long list of slabs to drain, periodically
> schedule to ensure we aren't saturating the cpu for too long.
> 
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  mm/slab.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;

Looks good to me,
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Vlastimil Babka Jan. 2, 2023, 8:32 a.m. UTC | #2
On 12/28/22 07:05, David Rientjes wrote:
> drain_freelist() can be called with a very large number of slabs to free,
> such as for kmem_cache_shrink(), or depending on various settings of the
> slab cache when doing periodic reaping.
> 
> If there is a potentially long list of slabs to drain, periodically
> schedule to ensure we aren't saturating the cpu for too long.
> 
> Signed-off-by: David Rientjes <rientjes@google.com>

Thanks, added to slab/for-6.2-rc3/fixes

> ---
>  mm/slab.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
>  		raw_spin_unlock_irq(&n->list_lock);
>  		slab_destroy(cache, slab);
>  		nr_freed++;
> +
> +		cond_resched();
>  	}
>  out:
>  	return nr_freed;

Patch

diff --git a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2211,6 +2211,8 @@ static int drain_freelist(struct kmem_cache *cache,
 		raw_spin_unlock_irq(&n->list_lock);
 		slab_destroy(cache, slab);
 		nr_freed++;
+
+		cond_resched();
 	}
 out:
 	return nr_freed;