[RFC,v1,0/4] hugetlb: parallelize hugetlb page allocation on boot

Message ID 20231123133036.68540-1-gang.li@linux.dev

Message

Gang Li Nov. 23, 2023, 1:30 p.m. UTC
From: Gang Li <ligang.bdlg@bytedance.com>

Inspired by these patches [1][2], this series aims to speed up the
initialization of hugetlb during the boot process through
parallelization.

It is particularly effective in large systems. On a machine equipped
with 1TB of memory and two NUMA nodes, the time for hugetlb
initialization was reduced from 2 seconds to 1 second.

As memory capacities continue to grow, the time saved this way will
only increase.

This series currently focuses on optimizing 2MB hugetlb. Since
gigantic pages are few in number, their optimization effects
are not as pronounced. We may explore optimizations for
gigantic pages in the future.

Thanks,
Gang Li

Gang Li (4):
  hugetlb: code clean for hugetlb_hstate_alloc_pages
  hugetlb: split hugetlb_hstate_alloc_pages
  hugetlb: add timing to hugetlb allocations on boot
  hugetlb: parallelize hugetlb page allocation

 mm/hugetlb.c | 191 ++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 134 insertions(+), 57 deletions(-)
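
As a rough illustration of the timing step (patch 3), here is a hedged
sketch, not the actual diff: the wrapper name, its placement, and the
message format are assumptions; only hugetlb_hstate_alloc_pages(),
struct hstate, and the ktime helpers are existing kernel code. It
covers the non-gigantic path, which runs late enough in boot for ktime
to be usable.

#include <linux/ktime.h>
#include <linux/printk.h>

/* Sketch only: assumes it sits in mm/hugetlb.c, where
 * hugetlb_hstate_alloc_pages() and struct hstate are visible. */
static void __init hugetlb_hstate_alloc_pages_timed(struct hstate *h)
{
	ktime_t start = ktime_get();

	hugetlb_hstate_alloc_pages(h);	/* existing boot-time pool fill */

	pr_info("HugeTLB: %s pool allocation took %lld ms\n",
		h->name, ktime_to_ms(ktime_sub(ktime_get(), start)));
}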

Comments

Gang Li Nov. 23, 2023, 1:58 p.m. UTC | #1
> Inspired by these patches [1][2], this series aims to speed up the
> initialization of hugetlb during the boot process through
> parallelization.

[1] https://lore.kernel.org/all/20200527173608.2885243-1-daniel.m.jordan@oracle.com/
[2] https://lore.kernel.org/all/20230906112605.2286994-1-usama.arif@bytedance.com/
David Hildenbrand Nov. 23, 2023, 2:10 p.m. UTC | #2
On 23.11.23 14:30, Gang Li wrote:
> From: Gang Li <ligang.bdlg@bytedance.com>
> 
> Inspired by these patches [1][2], this series aims to speed up the
> initialization of hugetlb during the boot process through
> parallelization.
> 
> It is particularly effective in large systems. On a machine equipped
> with 1TB of memory and two NUMA nodes, the time for hugetlb
> initialization was reduced from 2 seconds to 1 second.

Sorry to say, but why is that a scenario worth adding complexity for / 
optimizing for? You don't cover that, so there is a clear lack in the 
motivation.

2 vs. 1 second on a 1 TiB system is usually really just noise.
David Rientjes Nov. 24, 2023, 7:44 p.m. UTC | #3
On Thu, 23 Nov 2023, David Hildenbrand wrote:

> On 23.11.23 14:30, Gang Li wrote:
> > From: Gang Li <ligang.bdlg@bytedance.com>
> > 
> > Inspired by these patches [1][2], this series aims to speed up the
> > initialization of hugetlb during the boot process through
> > parallelization.
> > 
> > It is particularly effective in large systems. On a machine equipped
> > with 1TB of memory and two NUMA nodes, the time for hugetlb
> > initialization was reduced from 2 seconds to 1 second.
> 
> Sorry to say, but why is that a scenario worth adding complexity for /
> optimizing for? You don't cover that, so there is a clear lack in the
> motivation.
> 
> 2 vs. 1 second on a 1 TiB system is usually really just noise.
> 

The cost will continue to grow over time, so I presume that Gang is trying 
to get out in front of the issue even though it may not be a large savings 
today.

Running single boot tests, with the latest upstream kernel, allocating 
1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.

But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s 
today with the current implementation.

So it's likely something worth optimizing.

Gang, I'm curious about this in the cover letter:

"""
This series currently focuses on optimizing 2MB hugetlb. Since
gigantic pages are few in number, their optimization effects
are not as pronounced. We may explore optimizations for
gigantic pages in the future.
"""

For >1TB hosts, why the emphasis on 2MB hugetlb? :)  I would have expected 
1GB pages.  Are you really allocating ~500k 2MB hugetlb pages?

So if the patchset optimizes for the more likely scenario on these large 
hosts, which would be 1GB pages, that would be great.
David Hildenbrand Nov. 24, 2023, 7:47 p.m. UTC | #4
On 24.11.23 20:44, David Rientjes wrote:
> On Thu, 23 Nov 2023, David Hildenbrand wrote:
> 
>> On 23.11.23 14:30, Gang Li wrote:
>>> From: Gang Li <ligang.bdlg@bytedance.com>
>>>
>>> Inspired by these patches [1][2], this series aims to speed up the
>>> initialization of hugetlb during the boot process through
>>> parallelization.
>>>
>>> It is particularly effective in large systems. On a machine equipped
>>> with 1TB of memory and two NUMA nodes, the time for hugetlb
>>> initialization was reduced from 2 seconds to 1 second.
>>
>> Sorry to say, but why is that a scenario worth adding complexity for /
>> optimizing for? You don't cover that, so there is a clear lack in the
>> motivation.
>>
>> 2 vs. 1 second on a 1 TiB system is usually really just noise.
>>
> 
> The cost will continue to grow over time, so I presume that Gang is trying
> to get out in front of the issue even though it may not be a large savings
> today.
> 
> Running single boot tests, with the latest upstream kernel, allocating
> 1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.
> 
> But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
> today with the current implementation.

And there, the 65.2s won't be noise because that 12TB system is up by a 
snap of a finger? :)
David Rientjes Nov. 24, 2023, 8 p.m. UTC | #5
On Fri, 24 Nov 2023, David Hildenbrand wrote:

> On 24.11.23 20:44, David Rientjes wrote:
> > On Thu, 23 Nov 2023, David Hildenbrand wrote:
> > 
> > > On 23.11.23 14:30, Gang Li wrote:
> > > > From: Gang Li <ligang.bdlg@bytedance.com>
> > > > 
> > > > Inspired by these patches [1][2], this series aims to speed up the
> > > > initialization of hugetlb during the boot process through
> > > > parallelization.
> > > > 
> > > > It is particularly effective in large systems. On a machine equipped
> > > > with 1TB of memory and two NUMA nodes, the time for hugetlb
> > > > initialization was reduced from 2 seconds to 1 second.
> > > 
> > > Sorry to say, but why is that a scenario worth adding complexity for /
> > > optimizing for? You don't cover that, so there is a clear lack in the
> > > motivation.
> > > 
> > > 2 vs. 1 second on a 1 TiB system is usually really just noise.
> > > 
> > 
> > The cost will continue to grow over time, so I presume that Gang is trying
> > to get out in front of the issue even though it may not be a large savings
> > today.
> > 
> > Running single boot tests, with the latest upstream kernel, allocating
> > 1,440 1GB hugetlb pages on a 1.5TB AMD host appears to take 1.47s.
> > 
> > But allocating 11,776 1GB hugetlb pages on a 12TB Intel host takes 65.2s
> > today with the current implementation.
> 
> And there, the 65.2s won't be noise because that 12TB system is up by a snap
> of a finger? :)
> 

In this single boot test, total boot time was 373.78s, so 1GB hugetlb
allocation is 17.4% of that.

Would love to see what the numbers would look like if 1GB pages were
supported.
Gang Li Nov. 28, 2023, 3:18 a.m. UTC | #6
On 2023/11/25 04:00, David Rientjes wrote:
> On Fri, 24 Nov 2023, David Hildenbrand wrote:
> 
>> And there, the 65.2s won't be noise because that 12TB system is up by a snap
>> of a finger? :)
>>
> 
> In this single boot test, total boot time was 373.78s, so 1GB hugetlb
> allocation is 17.4% of that.

Thank you for sharing these data. Currently, I don't have access to a 
machine of such large capacity, so the benefits in my tests are not as 
pronounced.

I believe testing on a system of this scale would yield significant 
benefits.

> 
> Would love to see what the numbers would look like if 1GB pages were
> supported.
> 

Support for 1GB hugetlb is not yet perfect, so it wasn't included in v1. 
But I'm happy to refine and introduce 1GB hugetlb support in future 
versions.
Gang Li Nov. 28, 2023, 6:52 a.m. UTC | #7
Hi David Hildenbrand :),

On 2023/11/23 22:10, David Hildenbrand wrote:
> Sorry to say, but why is that a scenario worth adding complexity for /
> optimizing for? You don't cover that, so there is a clear lack in the
> motivation.

Regarding your concern about complexity, this is indeed something to
consider. There is a precedent of parallelization in padata [1], which
might be reused (or another approach taken) to reduce the complexity of
this series.

[1] 
https://lore.kernel.org/all/20200527173608.2885243-1-daniel.m.jordan@oracle.com/
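
For concreteness, a minimal sketch of how the padata helper from [1]
could drive the allocation loop, mirroring how deferred struct page
init uses padata_do_multithreaded(). alloc_pool_huge_page(),
node_states[] and the padata API are existing kernel code; the two
function names and the chunking numbers here are made up, and the
serialization of h->next_nid_to_alloc that a real patch must handle
is omitted.

#include <linux/padata.h>

static void __init hugetlb_alloc_chunk(unsigned long start,
				       unsigned long end, void *arg)
{
	struct hstate *h = arg;
	unsigned long i;

	for (i = start; i < end; i++) {
		/* one huge page per iteration, interleaved across nodes */
		if (!alloc_pool_huge_page(h, &node_states[N_MEMORY], NULL))
			break;	/* pool exhausted, stop this worker */
	}
}

static void __init hugetlb_pages_alloc_parallel(struct hstate *h)
{
	struct padata_mt_job job = {
		.thread_fn   = hugetlb_alloc_chunk,
		.fn_arg      = h,
		.start       = 0,
		.size        = h->max_huge_pages,
		.align       = 1,
		.min_chunk   = 1024,	/* skip threading for small pools */
		.max_threads = num_node_state(N_MEMORY),
	};

	padata_do_multithreaded(&job);
}

Keeping the per-node round-robin state and the node_alloc_noretry
bookkeeping coherent across workers is exactly where the complexity
being discussed in this thread comes from.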
David Hildenbrand Nov. 28, 2023, 8:09 a.m. UTC | #8
On 28.11.23 07:52, Gang Li wrote:
> Hi David Hildenbrand :),
> 
> On 2023/11/23 22:10, David Hildenbrand wrote:
>> Sorry to say, but why is that a scenario worth adding complexity for /
>> optimizing for? You don't cover that, so there is a clear lack in the
>> motivation.
> 
> Regarding your concern about complexity, this is indeed something to
> consider. There is a precedent of parallelization in padata [1], which
> might be reused (or another approach taken) to reduce the complexity of
> this series.

Yes, please!
David Rientjes Nov. 29, 2023, 7:41 p.m. UTC | #9
On Tue, 28 Nov 2023, Gang Li wrote:

> > 
> > > And there, the 65.2s won't be noise because that 12TB system is up by a
> > > snap of a finger? :)
> > > 
> > 
> > In this single boot test, total boot time was 373.78s, so 1GB hugetlb
> > allocation is 17.4% of that.
> 
> Thank you for sharing these data. Currently, I don't have access to a machine
> of such large capacity, so the benefits in my tests are not as pronounced.
> 
> I believe testing on a system of this scale would yield significant benefits.
> 
> > 
> > Would love to see what the numbers would look like if 1GB pages were
> > supported.
> > 
> 
> Support for 1GB hugetlb is not yet perfect, so it wasn't included in v1. But
> I'm happy to refine and introduce 1GB hugetlb support in future versions.
> 

That would be very appreciated, thank you!  I'm happy to test and collect 
data for any proposed patch series on 12TB systems booted with a lot of 
1GB hugetlb pages on the kernel command line.
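
For reference, a boot like the ones measured above is requested with
the documented kernel parameters (see
Documentation/admin-guide/kernel-parameters.txt); the page count below
matches the 12TB test mentioned earlier:

default_hugepagesz=1G hugepagesz=1G hugepages=11776

Reserving gigantic pages at boot is also the only reliable way to get
that many of them, since 1GB-contiguous ranges are hard to come by once
memory has fragmented.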