Message ID | 20221121171202.22080-6-vbabka@suse.cz (mailing list archive)
---|---
State | New
Series | Introduce CONFIG_SLUB_TINY and deprecate SLOB
On Mon, Nov 21, 2022 at 06:11:55PM +0100, Vlastimil Babka wrote:
> With CONFIG_SLUB_TINY we want to minimize memory overhead. By lowering
> the default slub_max_order we can make slab allocations use smaller
> pages. However depending on object sizes, order-0 might not be the best
> due to increased fragmentation. When testing on a 8MB RAM k210 system by
> Damien Le Moal [1], slub_max_order=1 had the best results, so use that
> as the default for CONFIG_SLUB_TINY.
>
> [1] https://lore.kernel.org/all/6a1883c4-4c3f-545a-90e8-2cd805bcf4ae@opensource.wdc.com/
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>

Acked-by: Roman Gushchin <roman.gushchin@linux.dev>

Thanks!
On Mon, Nov 21, 2022 at 06:11:55PM +0100, Vlastimil Babka wrote:
> With CONFIG_SLUB_TINY we want to minimize memory overhead. By lowering
> the default slub_max_order we can make slab allocations use smaller
> pages. However depending on object sizes, order-0 might not be the best
> due to increased fragmentation. When testing on a 8MB RAM k210 system by
> Damien Le Moal [1], slub_max_order=1 had the best results, so use that
> as the default for CONFIG_SLUB_TINY.
>
> [1] https://lore.kernel.org/all/6a1883c4-4c3f-545a-90e8-2cd805bcf4ae@opensource.wdc.com/
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  mm/slub.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 917b79278bad..bf726dd00f7d 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3888,7 +3888,8 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
>   * take the list_lock.
>   */
>  static unsigned int slub_min_order;
> -static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
> +static unsigned int slub_max_order =
> +	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
>  static unsigned int slub_min_objects;
>
>  /*
> --
> 2.38.1
>

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
diff --git a/mm/slub.c b/mm/slub.c
index 917b79278bad..bf726dd00f7d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3888,7 +3888,8 @@ EXPORT_SYMBOL(kmem_cache_alloc_bulk);
  * take the list_lock.
  */
 static unsigned int slub_min_order;
-static unsigned int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
+static unsigned int slub_max_order =
+	IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
 static unsigned int slub_min_objects;

 /*
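Worth noting alongside the diff: the IS_ENABLED() ternary only changes the compile-time default. SLUB's existing `slub_max_order=` kernel command-line parameter (parsed by setup_slub_max_order() in mm/slub.c) still overrides it at boot, so a CONFIG_SLUB_TINY kernel can be taken down to order-0 for experiments like the k210 one by booting with, for example, `slub_max_order=0`.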
With CONFIG_SLUB_TINY we want to minimize memory overhead. By lowering
the default slub_max_order we can make slab allocations use smaller
pages. However, depending on object sizes, order-0 might not be the best
due to increased fragmentation. When testing on an 8MB RAM k210 system by
Damien Le Moal [1], slub_max_order=1 had the best results, so use that
as the default for CONFIG_SLUB_TINY.

[1] https://lore.kernel.org/all/6a1883c4-4c3f-545a-90e8-2cd805bcf4ae@opensource.wdc.com/

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
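To make the fragmentation trade-off mentioned above concrete, here is a minimal user-space sketch. It is not the kernel's actual calc_slab_order() logic (which also weighs slub_min_objects and other factors); pick_slab_order(), the fixed 4KB PAGE_SIZE, and the 1/16 waste threshold are illustrative assumptions only.

```c
#include <stdio.h>

#define PAGE_SIZE 4096u

/*
 * Simplified stand-in for SLUB's order selection: starting from the
 * smallest allowed order, accept the first order whose leftover space
 * after packing whole objects is at most 1/16 of the slab, but never
 * exceed max_order.
 */
static unsigned int pick_slab_order(unsigned int object_size,
				    unsigned int min_order,
				    unsigned int max_order)
{
	unsigned int order;

	for (order = min_order; order <= max_order; order++) {
		unsigned int slab_size = PAGE_SIZE << order;
		unsigned int wasted = slab_size % object_size;

		if (wasted <= slab_size / 16)
			return order;
	}
	return max_order;
}

int main(void)
{
	/* Default cap: PAGE_ALLOC_COSTLY_ORDER == 3 -> picks order 3. */
	printf("1400-byte objects, max_order=3 -> order %u\n",
	       pick_slab_order(1400, 0, 3));
	/* CONFIG_SLUB_TINY cap of 1 -> settles for order 1. */
	printf("1400-byte objects, max_order=1 -> order %u\n",
	       pick_slab_order(1400, 0, 1));
	return 0;
}
```

The lower cap accepts more per-slab waste in exchange for never asking the page allocator for large contiguous chunks, which matters far more than a few hundred wasted bytes on an 8MB machine.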