
[0/4] SLUB: calculate_order() cleanups

Message ID 20230908145302.30320-6-vbabka@suse.cz (mailing list archive)

Message

Vlastimil Babka Sept. 8, 2023, 2:53 p.m. UTC
Since reviewing recent patches made me finally dig into these functions
in detail for the first time, I've also noticed some opportunities for
cleanups that should make them simpler and also deliver more consistent
results for some corner-case object sizes (probably not seen in
practice). Thus patch 3 can increase slab orders in some cases, but only
in the way that was already intended. Otherwise there are almost no
functional changes.

Vlastimil Babka (4):
  mm/slub: simplify the last resort slab order calculation
  mm/slub: remove min_objects loop from calculate_order()
  mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
  mm/slub: refactor calculate_order() and calc_slab_order()

 mm/slub.c | 63 ++++++++++++++++++++++++-------------------------------
 1 file changed, 27 insertions(+), 36 deletions(-)

Comments

Vlastimil Babka Oct. 2, 2023, 12:38 p.m. UTC | #1
On 9/28/23 06:46, Jay Patel wrote:
> On Fri, 2023-09-08 at 16:53 +0200, Vlastimil Babka wrote:
>> Since reviewing recent patches made me finally dig into these
>> functions in detail for the first time, I've also noticed some
>> opportunities for cleanups that should make them simpler and also
>> deliver more consistent results for some corner-case object sizes
>> (probably not seen in practice). Thus patch 3 can increase slab
>> orders in some cases, but only in the way that was already intended.
>> Otherwise there are almost no functional changes.
>> 
> Hi Vlastimil,

Hi, Jay!

> This cleanup patchset looks promising. I've conducted tests on
> PowerPC with 16 CPUs and a 64K page size, and here are the results.
> 
> Slub Memory Usage
> 
> +-------------------+--------+------------+
> |                   | Normal | With Patch |
> +-------------------+--------+------------+
> | Total Slub Memory | 476992 | 478464     |
> | Wastage           | 431    | 451        |
> +-------------------+--------+------------+
> 
> Also, I have not detected any changes in the page order for slub caches
> across all objects with 64K page size.

As expected. Which should mean any benchmark differences should be noise and
not caused by the patches.

> Hackbench Results
> 
> +-------+----+---------+------------+----------+
> |       |    | Normal  | With Patch |          |
> +-------+----+---------+------------+----------+
> | Amean | 1  | 1.1530  | 1.1347     | ( 1.59%) |
> | Amean | 4  | 3.9220  | 3.8240     | ( 2.50%) |
> | Amean | 7  | 6.7943  | 6.6300     | ( 2.42%) |
> | Amean | 12 | 11.7067 | 11.4423    | ( 2.26%) |
> | Amean | 21 | 20.6617 | 20.1680    | ( 2.39%) |
> | Amean | 30 | 29.4200 | 28.6460    | ( 2.63%) |
> | Amean | 48 | 47.2797 | 46.2820    | ( 2.11%) |
> | Amean | 64 | 63.4680 | 62.1813    | ( 2.03%) |
> +-------+----+---------+------------+----------+
> 
> 
> Reviewed-by: Jay Patel <jaypatel@linux.ibm.com>
> Tested-by: Jay Patel <jaypatel@linux.ibm.com>

Thanks! Applied your Reviewed-and-tested-by:

> Thank You
> Jay Patel
>> Vlastimil Babka (4):
>>   mm/slub: simplify the last resort slab order calculation
>>   mm/slub: remove min_objects loop from calculate_order()
>>   mm/slub: attempt to find layouts up to 1/2 waste in calculate_order()
>>   mm/slub: refactor calculate_order() and calc_slab_order()
>> 
>>  mm/slub.c | 63 ++++++++++++++++++++++++-------------------------------
>>  1 file changed, 27 insertions(+), 36 deletions(-)
>> 
>