Message ID | 20230908145302.30320-9-vbabka@suse.cz (mailing list archive) |
---|---|
State | New |
Series | SLUB: calculate_order() cleanups |
On Fri, Sep 08, 2023 at 10:53:06PM +0800, Vlastimil Babka wrote:
> The main loop in calculate_order() currently tries to find an order with
> at most 1/4 waste. If that's impossible (for particular large object
> sizes), there's a fallback that will try to place one object within
> slab_max_order.
>
> If we expand the loop boundary to also allow up to 1/2 waste as the last
> resort, we can remove the fallback and simplify the code, as the loop
> will find an order for such sizes as well. Note we don't need to allow
> more than 1/2 waste as that will never happen - calc_slab_order() would
> calculate more objects to fit, reducing waste below 1/2.
>
> Successfully finding an order in the loop (compared to the fallback) will
> also have the benefit of trying to satisfy min_objects, because the
> fallback was passing 1. Thus the resulting slab orders might be larger
> (not because it would improve waste, but to reduce pressure on shared
> locks), which is one of the goals of calculate_order().
>
> For example, with nr_cpus=1 and 4kB PAGE_SIZE, slub_max_order=3, before
> the patch we would get the following orders for these object sizes:
>
> 2056 to 10920  - order-3 as selected by the loop
> 10928 to 12280 - order-2 due to fallback, as <1/4 waste is not possible
> 12288 to 32768 - order-3 as <1/4 waste is again possible
>
> After the patch:
>
> 2056 to 32768 - order-3, because even in the range of 10928 to 12280 we
> try to satisfy the calculated min_objects.
>
> As a result the code is simpler and gives more consistent results.

The current code already tries the fraction "1" in the following 2 fallback
calls of calc_slab_order(), so trying fraction "2" makes sense to me.

Reviewed-by: Feng Tang <feng.tang@intel.com>

Thanks,
Feng

> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
> ---
>  mm/slub.c | 14 ++++----------
>  1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 5c287d96b212..f04eb029d85a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4171,23 +4171,17 @@ static inline int calculate_order(unsigned int size)
>  	 * the order can only result in same or less fractional waste, not more.
>  	 *
>  	 * If that fails, we increase the acceptable fraction of waste and try
> -	 * again.
> +	 * again. The last iteration with fraction of 1/2 would effectively
> +	 * accept any waste and give us the order determined by min_objects, as
> +	 * long as at least single object fits within slub_max_order.
>  	 */
> -	for (unsigned int fraction = 16; fraction >= 4; fraction /= 2) {
> +	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
>  		order = calc_slab_order(size, min_objects, slub_max_order,
>  					fraction);
>  		if (order <= slub_max_order)
>  			return order;
>  	}
>
> -	/*
> -	 * We were unable to place multiple objects in a slab. Now
> -	 * lets see if we can place a single object there.
> -	 */
> -	order = calc_slab_order(size, 1, slub_max_order, 1);
> -	if (order <= slub_max_order)
> -		return order;
> -
>  	/*
>  	 * Doh this slab cannot be placed using slub_max_order.
>  	 */
> --
> 2.42.0
>
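To make the fraction arithmetic concrete, here is the 10928-byte case from the quoted example as a standalone check. This is a sketch of the acceptance test described in the commit message (leftover space no larger than the slab size divided by the fraction), with constants taken from the example; it is not the kernel code.

```c
#include <assert.h>

int main(void)
{
	unsigned int size = 10928;              /* object size from the example */
	unsigned int slab_size = 4096u << 3;    /* order-3 slab, 4kB pages: 32768 bytes */
	unsigned int waste = slab_size % size;  /* 32768 - 2 * 10928 = 10912 */

	/* Old loop stopped at fraction 4: 10912 > 32768/4 = 8192, so order-3
	 * was rejected and the fallback settled for one object in order-2. */
	assert(!(waste <= slab_size / 4));

	/* New last iteration with fraction 2: 10912 <= 32768/2 = 16384, so
	 * the loop itself now returns order-3, fitting two objects per slab. */
	assert(waste <= slab_size / 2);

	return 0;
}
```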
diff --git a/mm/slub.c b/mm/slub.c
index 5c287d96b212..f04eb029d85a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4171,23 +4171,17 @@ static inline int calculate_order(unsigned int size)
 	 * the order can only result in same or less fractional waste, not more.
 	 *
 	 * If that fails, we increase the acceptable fraction of waste and try
-	 * again.
+	 * again. The last iteration with fraction of 1/2 would effectively
+	 * accept any waste and give us the order determined by min_objects, as
+	 * long as at least single object fits within slub_max_order.
 	 */
-	for (unsigned int fraction = 16; fraction >= 4; fraction /= 2) {
+	for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
 		order = calc_slab_order(size, min_objects, slub_max_order,
 					fraction);
 		if (order <= slub_max_order)
 			return order;
 	}

-	/*
-	 * We were unable to place multiple objects in a slab. Now
-	 * lets see if we can place a single object there.
-	 */
-	order = calc_slab_order(size, 1, slub_max_order, 1);
-	if (order <= slub_max_order)
-		return order;
-
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
The main loop in calculate_order() currently tries to find an order with at
most 1/4 waste. If that's impossible (for particular large object sizes),
there's a fallback that will try to place one object within slab_max_order.

If we expand the loop boundary to also allow up to 1/2 waste as the last
resort, we can remove the fallback and simplify the code, as the loop will
find an order for such sizes as well. Note we don't need to allow more than
1/2 waste as that will never happen - calc_slab_order() would calculate more
objects to fit, reducing waste below 1/2.

Successfully finding an order in the loop (compared to the fallback) will
also have the benefit of trying to satisfy min_objects, because the fallback
was passing 1. Thus the resulting slab orders might be larger (not because it
would improve waste, but to reduce pressure on shared locks), which is one of
the goals of calculate_order().

For example, with nr_cpus=1 and 4kB PAGE_SIZE, slub_max_order=3, before the
patch we would get the following orders for these object sizes:

2056 to 10920  - order-3 as selected by the loop
10928 to 12280 - order-2 due to fallback, as <1/4 waste is not possible
12288 to 32768 - order-3 as <1/4 waste is again possible

After the patch:

2056 to 32768 - order-3, because even in the range of 10928 to 12280 we
try to satisfy the calculated min_objects.

As a result the code is simpler and gives more consistent results.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)
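As a rough sanity check of the example above, the selection logic can be modeled in userspace. The sketch below mirrors the loop structure described in this patch but is not the kernel implementation: model_slab_order() and model_calculate_order() are hypothetical helpers that ignore slub_min_order, per-object metadata and MAX_OBJS_PER_PAGE, and the constants assume nr_cpus=1, 4kB PAGE_SIZE and slub_max_order=3 as in the example.

```c
#include <stdio.h>

#define MODEL_PAGE_SIZE      4096u
#define MODEL_SLUB_MAX_ORDER 3u
#define MODEL_MIN_OBJECTS    8u   /* 4 * (fls(nr_cpus) + 1) with nr_cpus == 1 */

/* Lowest order (up to max_order) that fits min_objects and keeps the
 * leftover space within slab_size / fraction. Returns max_order + 1
 * when no order qualifies at this fraction. */
static unsigned int model_slab_order(unsigned int size, unsigned int min_objects,
                                     unsigned int max_order, unsigned int fraction)
{
        for (unsigned int order = 0; order <= max_order; order++) {
                unsigned int slab_size = MODEL_PAGE_SIZE << order;

                if (slab_size / size < min_objects)
                        continue;       /* too few objects fit at this order */
                if (slab_size % size <= slab_size / fraction)
                        return order;   /* leftover space is acceptable */
        }
        return max_order + 1;
}

static unsigned int model_calculate_order(unsigned int size)
{
        unsigned int max_fit = (MODEL_PAGE_SIZE << MODEL_SLUB_MAX_ORDER) / size;
        unsigned int min_objects = MODEL_MIN_OBJECTS;

        if (min_objects > max_fit)
                min_objects = max_fit;  /* can't demand more objects than ever fit */

        /* The loop after the patch: allow 1/16, 1/8, 1/4 and finally 1/2 waste. */
        for (unsigned int fraction = 16; fraction > 1; fraction /= 2) {
                unsigned int order = model_slab_order(size, min_objects,
                                                      MODEL_SLUB_MAX_ORDER, fraction);
                if (order <= MODEL_SLUB_MAX_ORDER)
                        return order;
        }
        return MODEL_SLUB_MAX_ORDER + 1;        /* would exceed slub_max_order */
}

int main(void)
{
        /* Boundary object sizes from the commit message example. */
        unsigned int sizes[] = { 2056, 10920, 10928, 12280, 12288, 32768 };

        for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                printf("size %5u -> order-%u\n", sizes[i],
                       model_calculate_order(sizes[i]));
        return 0;
}
```

With the fraction > 1 bound, this model prints order-3 for every size listed; restoring the old fraction >= 4 bound makes the 10928 and 12280 cases fall out of the loop, which is where the removed fallback used to settle for order-2.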