Message ID | 20221121171202.22080-5-vbabka@suse.cz (mailing list archive)
---|---
State | New
Series | Introduce CONFIG_SLUB_TINY and deprecate SLOB
On Mon, Nov 21, 2022 at 06:11:54PM +0100, Vlastimil Babka wrote:
> SLUB will leave a number of slabs on the partial list even if they are
> empty, to avoid some slab freeing and reallocation. The goal of
> CONFIG_SLUB_TINY is to minimize memory overhead, so set the limits to 0
> for immediate slab page freeing.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!
On Mon, Nov 21, 2022 at 06:11:54PM +0100, Vlastimil Babka wrote:
> SLUB will leave a number of slabs on the partial list even if they are
> empty, to avoid some slab freeing and reallocation. The goal of
> CONFIG_SLUB_TINY is to minimize memory overhead, so set the limits to 0
> for immediate slab page freeing.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  mm/slub.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index ab085aa2f1f0..917b79278bad 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -241,6 +241,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
>  /* Enable to log cmpxchg failures */
>  #undef SLUB_DEBUG_CMPXCHG
>
> +#ifndef CONFIG_SLUB_TINY
>  /*
>   * Minimum number of partial slabs. These will be left on the partial
>   * lists even if they are empty. kmem_cache_shrink may reclaim them.
> @@ -253,6 +254,10 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
>   * sort the partial list by the number of objects in use.
>   */
>  #define MAX_PARTIAL 10
> +#else
> +#define MIN_PARTIAL 0
> +#define MAX_PARTIAL 0
> +#endif
>
>  #define DEBUG_DEFAULT_FLAGS (SLAB_CONSISTENCY_CHECKS | SLAB_RED_ZONE | \
>  				SLAB_POISON | SLAB_STORE_USER)
> --
> 2.38.1
>

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
```diff
diff --git a/mm/slub.c b/mm/slub.c
index ab085aa2f1f0..917b79278bad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -241,6 +241,7 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
 /* Enable to log cmpxchg failures */
 #undef SLUB_DEBUG_CMPXCHG
 
+#ifndef CONFIG_SLUB_TINY
 /*
  * Minimum number of partial slabs. These will be left on the partial
  * lists even if they are empty. kmem_cache_shrink may reclaim them.
@@ -253,6 +254,10 @@ static inline bool kmem_cache_has_cpu_partial(struct kmem_cache *s)
  * sort the partial list by the number of objects in use.
  */
 #define MAX_PARTIAL 10
+#else
+#define MIN_PARTIAL 0
+#define MAX_PARTIAL 0
+#endif
 
 #define DEBUG_DEFAULT_FLAGS (SLAB_CONSISTENCY_CHECKS | SLAB_RED_ZONE | \
 				SLAB_POISON | SLAB_STORE_USER)
```
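For context on what these two constants do: in the 6.1-era mm/slub.c that this series targets, each cache's `s->min_partial` is derived from the object size (roughly `ilog2(s->size) / 2` in `kmem_cache_open()`) and then clamped to the `[MIN_PARTIAL, MAX_PARTIAL]` range by `set_min_partial()`. The sketch below is a standalone userspace model of that clamping, not the kernel code itself; with both limits defined as 0 under CONFIG_SLUB_TINY, every cache collapses to `min_partial == 0`:

```c
#include <stdio.h>

/*
 * Standalone model (assumption: mirrors 6.1-era set_min_partial()
 * behavior) of how SLUB clamps a cache's min_partial. Build with
 * -DCONFIG_SLUB_TINY to see every suggestion collapse to 0.
 */
#ifndef CONFIG_SLUB_TINY
#define MIN_PARTIAL 5
#define MAX_PARTIAL 10
#else
#define MIN_PARTIAL 0
#define MAX_PARTIAL 0
#endif

static unsigned long set_min_partial(unsigned long min)
{
	/* Clamp the size-derived suggestion to [MIN_PARTIAL, MAX_PARTIAL]. */
	if (min < MIN_PARTIAL)
		min = MIN_PARTIAL;
	else if (min > MAX_PARTIAL)
		min = MAX_PARTIAL;
	return min;
}

int main(void)
{
	/* Suggestions roughly correspond to ilog2(object size) / 2. */
	for (unsigned long suggested = 0; suggested <= 12; suggested += 3)
		printf("suggested %2lu -> min_partial %lu\n",
		       suggested, set_min_partial(suggested));
	return 0;
}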
SLUB will leave a number of slabs on the partial list even if they are
empty, to avoid some slab freeing and reallocation. The goal of
CONFIG_SLUB_TINY is to minimize memory overhead, so set the limits to 0
for immediate slab page freeing.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 5 +++++
 1 file changed, 5 insertions(+)
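To illustrate the "immediate slab page freeing" effect: when the last object in a slab is freed, SLUB's free slowpath keeps the now-empty slab on the node partial list only while that list is shorter than `min_partial` (the check in 6.1-era `__slab_free()` is roughly `!new.inuse && n->nr_partial >= s->min_partial`). A minimal userspace sketch of that decision, assuming those semantics rather than reproducing the kernel code:

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * Model of the free-path decision: discard an empty slab back to the
 * page allocator unless the node partial list is still below
 * min_partial. With min_partial forced to 0 by CONFIG_SLUB_TINY,
 * nr_partial >= 0 always holds, so empty slabs are freed immediately.
 */
static bool discard_empty_slab(unsigned long nr_partial,
			       unsigned long min_partial)
{
	return nr_partial >= min_partial;
}

int main(void)
{
	/* Default build: min_partial is at least 5, empty slabs are cached. */
	printf("default (nr_partial=3, min_partial=5): discard=%d\n",
	       discard_empty_slab(3, 5));
	/* CONFIG_SLUB_TINY: min_partial == 0, the slab is freed right away. */
	printf("tiny    (nr_partial=0, min_partial=0): discard=%d\n",
	       discard_empty_slab(0, 0));
	return 0;
}
```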