
[v2,1/2] hugetlb: Fix hugepages_setup when deal with pernode

Message ID 20220401101232.2790280-2-liupeng256@huawei.com (mailing list archive)
State New
Series hugetlb: Fix confusing behavior

Commit Message

Peng Liu April 1, 2022, 10:12 a.m. UTC
Hugepages can be specified per node since "hugetlbfs: extend
the definition of hugepages parameter to support node allocation",
but the following problem is observed.

Confusing behavior is observed when both 1G and 2M hugepages are set
after "numa=off".
 cmdline hugepage settings:
  hugepagesz=1G hugepages=0:3,1:3
  hugepagesz=2M hugepages=0:1024,1:1024
 results:
  HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
  HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages

Furthermore, confusing behavior can also be observed when an invalid
node follows a valid node.

To fix this, hugetlb_hstate_alloc_pages should be called even when
hugepages_setup encounters an invalid parameter.

Cc: <stable@vger.kernel.org>
Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
Signed-off-by: Peng Liu <liupeng256@huawei.com>
---
 mm/hugetlb.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Comments

David Hildenbrand April 1, 2022, 10:43 a.m. UTC | #1
On 01.04.22 12:12, Peng Liu wrote:
> Hugepages can be specified per node since "hugetlbfs: extend
> the definition of hugepages parameter to support node allocation",
> but the following problem is observed.
> 
> Confusing behavior is observed when both 1G and 2M hugepages are set
> after "numa=off".
>  cmdline hugepage settings:
>   hugepagesz=1G hugepages=0:3,1:3
>   hugepagesz=2M hugepages=0:1024,1:1024
>  results:
>   HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>   HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
> 
> Furthermore, confusing behavior can also be observed when an invalid
> node follows a valid node.
> 
> To fix this, hugetlb_hstate_alloc_pages should be called even when
> hugepages_setup encounters an invalid parameter.

Shouldn't we bail out if someone requests node-specific allocations but
we are not running with NUMA?

What's the result after your change?

> 
> Cc: <stable@vger.kernel.org>

I am not sure if this is really stable material.

> Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
> Signed-off-by: Peng Liu <liupeng256@huawei.com>
Mike Kravetz April 1, 2022, 5:23 p.m. UTC | #2
On 4/1/22 03:43, David Hildenbrand wrote:
> On 01.04.22 12:12, Peng Liu wrote:
>> Hugepages can be specified per node since "hugetlbfs: extend
>> the definition of hugepages parameter to support node allocation",
>> but the following problem is observed.
>>
>> Confusing behavior is observed when both 1G and 2M hugepages are set
>> after "numa=off".
>>  cmdline hugepage settings:
>>   hugepagesz=1G hugepages=0:3,1:3
>>   hugepagesz=2M hugepages=0:1024,1:1024
>>  results:
>>   HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>>   HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
>>
>> Furthermore, confusing behavior can also be observed when an invalid
>> node follows a valid node.
>>
>> To fix this, hugetlb_hstate_alloc_pages should be called even when
>> hugepages_setup encounters an invalid parameter.
> 
> Shouldn't we bail out if someone requests node-specific allocations but
> we are not running with NUMA?

I thought about this as well, and could not come up with a good answer.
Certainly, nobody SHOULD specify both 'numa=off' and ask for node specific
allocations on the same command line.  I would have no problem bailing out
in such situations.  But, I think that would also require the hugetlb command
line processing to look for such situations.

One could also argue that if there is only a single node (not numa=off on
the command line) and someone specifies node-local allocations, we should bail.

I was 'thinking' about a situation where we had multiple nodes and node
local allocations were 'hard coded' via grub or something.  Then, for some
reason one node fails to come up on a reboot.  Should we bail on all the
hugetlb allocations, or should we try to allocate on the still available
nodes?

When I went back and reread the reason for this change, I see that it is
primarily for 'some debugging and test cases'.

> 
> What's the result after your change?
> 
>>
>> Cc: <stable@vger.kernel.org>
> 
> I am not sure if this is really stable material.

Right now, we partially and inconsistently process node specific allocations
if there are missing nodes.  We allocate 'regular' hugetlb pages on existing
nodes.  But, we do not allocate gigantic hugetlb pages on existing nodes.

I believe this is worth fixing in stable.

Since the behavior for missing nodes was not really spelled out when node
specific allocations were introduced, I think an acceptable stable fix could
be to bail.

In any case, I think we need to do something.

> 
>> Fixes: b5389086ad7b ("hugetlbfs: extend the definition of hugepages parameter to support node allocation")
>> Signed-off-by: Peng Liu <liupeng256@huawei.com>
>
Peng Liu April 2, 2022, 2:36 a.m. UTC | #3
On 2022/4/1 18:43, David Hildenbrand wrote:
> On 01.04.22 12:12, Peng Liu wrote:
>> Hugepages can be specified per node since "hugetlbfs: extend
>> the definition of hugepages parameter to support node allocation",
>> but the following problem is observed.
>>
>> Confusing behavior is observed when both 1G and 2M hugepages are set
>> after "numa=off".
>>   cmdline hugepage settings:
>>    hugepagesz=1G hugepages=0:3,1:3
>>    hugepagesz=2M hugepages=0:1024,1:1024
>>   results:
>>    HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>>    HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
>>
>> Furthermore, confusing behavior can also be observed when an invalid
>> node follows a valid node.
>>
>> To fix this, hugetlb_hstate_alloc_pages should be called even when
>> hugepages_setup encounters an invalid parameter.
> Shouldn't we bail out if someone requests node-specific allocations but
> we are not running with NUMA?
>
> What's the result after your change?
>
>> Cc: <stable@vger.kernel.org>
> I am not sure if this is really stable material.

This change makes 1G huge pages consistent with 2M huge pages when
an invalid node is configured. After this patch, all per-node huge pages
will be allocated up to the first invalid node.

Thus, the basic question is "what can lead to an invalid node?"
1) Some debugging and test cases, as Mike suggested.
2) Part of the physical memory or a CPU is broken, the BIOS does not
report the node with the physical damage, but the original grub
configuration is still used.
David Hildenbrand April 4, 2022, 10:41 a.m. UTC | #4
On 01.04.22 19:23, Mike Kravetz wrote:
> On 4/1/22 03:43, David Hildenbrand wrote:
>> On 01.04.22 12:12, Peng Liu wrote:
>>> Hugepages can be specified per node since "hugetlbfs: extend
>>> the definition of hugepages parameter to support node allocation",
>>> but the following problem is observed.
>>>
>>> Confusing behavior is observed when both 1G and 2M hugepages are set
>>> after "numa=off".
>>>  cmdline hugepage settings:
>>>   hugepagesz=1G hugepages=0:3,1:3
>>>   hugepagesz=2M hugepages=0:1024,1:1024
>>>  results:
>>>   HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>>>   HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
>>>
>>> Furthermore, confusing behavior can also be observed when an invalid
>>> node follows a valid node.
>>>
>>> To fix this, hugetlb_hstate_alloc_pages should be called even when
>>> hugepages_setup encounters an invalid parameter.
>>
>> Shouldn't we bail out if someone requests node-specific allocations but
>> we are not running with NUMA?
> 
> I thought about this as well, and could not come up with a good answer.
> Certainly, nobody SHOULD specify both 'numa=off' and ask for node specific
> allocations on the same command line.  I would have no problem bailing out
> in such situations.  But, I think that would also require the hugetlb command
> line processing to look for such situations.

Yes. Right now I see

if (tmp >= nr_online_nodes)
	goto invalid;

Which seems a little strange, because IIUC, it's the number of online
nodes, which is completely wrong with a sparse online bitmap. Just
imagine node 0 and node 2 are online, and node 1 is offline. Assuming
that "node < 2" is valid is wrong.

Why don't we check for node_online() and bail out if that is not the
case? Is it too early for that check? But why does comparing against
nr_online_nodes work, then?
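
To make the concern concrete: with nodes 0 and 2 online and node 1
offline, nr_online_nodes is 2, and the two checks disagree on both
nodes (a minimal sketch based on the snippet above, not code from this
patch):

	/* Sparse online map: nodes 0 and 2 online, node 1 offline,
	 * so nr_online_nodes == 2.
	 */
	if (tmp >= nr_online_nodes)	/* current check: accepts tmp == 1 (offline), */
		goto invalid;		/* rejects tmp == 2 (online)                  */

	if (!node_online(tmp))		/* suggested check: rejects tmp == 1,         */
		goto invalid;		/* accepts tmp == 2                           */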


Having that said, I'm not sure if all usage of nr_online_nodes in
mm/hugetlb.c is wrong, with a sparse online bitmap. Outside of that,
it's really just used for "nr_online_nodes > 1". I might be wrong, though.

> 
> One could also argue that if there is only a single node (not numa=off on
> the command line) and someone specifies node-local allocations, we should bail.

I assume "numa=off" is always parsed before hugepages_setup() is called,
right? So we can just rely on the actual numa information.


> 
> I was 'thinking' about a situation where we had multiple nodes and node
> local allocations were 'hard coded' via grub or something.  Then, for some
> reason one node fails to come up on a reboot.  Should we bail on all the
> hugetlb allocations, or should we try to allocate on the still available
> nodes?

Depends on what "bail" means. Printing a warning and stopping to
allocate further is certainly good enough for my taste :)
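
As a sketch of what such a warn-and-stop could look like (hypothetical,
reusing the node_online() idea from above; not part of the patch under
review):

	/* Hypothetical bail-out in hugepages_setup(): warn once and
	 * stop processing the remaining per-node requests.
	 */
	if (!node_online(tmp)) {
		pr_warn("HugeTLB: node %lu is not online, ignoring remaining per-node requests\n",
			tmp);
		break;
	}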

> 
> When I went back and reread the reason for this change, I see that it is
> primarily for 'some debugging and test cases'.
> 
>>
>> What's the result after your change?
>>
>>>
>>> Cc: <stable@vger.kernel.org>
>>
>> I am not sure if this is really stable material.
> 
> Right now, we partially and inconsistently process node specific allocations
> if there are missing nodes.  We allocate 'regular' hugetlb pages on existing
> nodes.  But, we do not allocate gigantic hugetlb pages on existing nodes.
> 
> I believe this is worth fixing in stable.

I am skeptical.

https://www.kernel.org/doc/Documentation/process/stable-kernel-rules.rst

" - It must fix a real bug that bothers people (not a, "This could be a
   problem..." type thing)."

While the current behavior is suboptimal, it's certainly not an urgent
bug (?) and the kernel will boot and work just fine. As you mentioned
"nobody SHOULD specify both 'numa=off' and ask for node specific
allocations on the same command line.", this is just a corner case.

Adjusting it upstream -- okay. Backporting to stable? I don't think so.
Mike Kravetz April 4, 2022, 11:48 p.m. UTC | #5
On 4/4/22 03:41, David Hildenbrand wrote:
> On 01.04.22 19:23, Mike Kravetz wrote:
>> On 4/1/22 03:43, David Hildenbrand wrote:
>>> On 01.04.22 12:12, Peng Liu wrote:
>>>> Hugepages can be specified per node since "hugetlbfs: extend
>>>> the definition of hugepages parameter to support node allocation",
>>>> but the following problem is observed.
>>>>
>>>> Confusing behavior is observed when both 1G and 2M hugepages are set
>>>> after "numa=off".
>>>>  cmdline hugepage settings:
>>>>   hugepagesz=1G hugepages=0:3,1:3
>>>>   hugepagesz=2M hugepages=0:1024,1:1024
>>>>  results:
>>>>   HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
>>>>   HugeTLB registered 2.00 MiB page size, pre-allocated 1024 pages
>>>>
>>>> Furthermore, confusing behavior can also be observed when an invalid
>>>> node follows a valid node.
>>>>
>>>> To fix this, hugetlb_hstate_alloc_pages should be called even when
>>>> hugepages_setup encounters an invalid parameter.
>>>
>>> Shouldn't we bail out if someone requests node-specific allocations but
>>> we are not running with NUMA?
>>
>> I thought about this as well, and could not come up with a good answer.
>> Certainly, nobody SHOULD specify both 'numa=off' and ask for node specific
>> allocations on the same command line.  I would have no problem bailing out
>> in such situations.  But, I think that would also require the hugetlb command
>> line processing to look for such situations.
> 
> Yes. Right now I see
> 
> if (tmp >= nr_online_nodes)
> 	goto invalid;
> 
> Which seems a little strange, because IIUC, it's the number of online
> nodes, which is completely wrong with a sparse online bitmap. Just
> imagine node 0 and node 2 are online, and node 1 is offline. Assuming
> that "node < 2" is valid is wrong.
> 
> Why don't we check for node_online() and bail out if that is not the
> case? Is it too early for that check? But why does comparing against
> nr_online_nodes work, then?
> 
> 
> Having that said, I'm not sure if all usage of nr_online_nodes in
> mm/hugetlb.c is wrong, with a sparse online bitmap. Outside of that,
> it's really just used for "nr_online_nodes > 1". I might be wrong, though.

I think you are correct.  My bad for not being more thorough in reviewing
the original patch that added this code.  My incorrect assumption was that
a sparse node map was only possible via offline operations which could not
happen this early in boot.  I now see that a sparse map can be presented
by fw/bios/etc.  So, yes I do believe we need to check for online nodes.
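
A check along the lines of that conclusion might look like the
following sketch (the idea discussed here, not necessarily the fix that
was eventually merged):

	/* Validate against the online map itself rather than against
	 * the number of online nodes, so a sparse map is handled.
	 */
	if (tmp >= MAX_NUMNODES || !node_online(tmp))
		goto invalid;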

Patch

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b34f50156f7e..9cd746432ca9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4131,6 +4131,7 @@  static int __init hugepages_setup(char *s)
 	int count;
 	unsigned long tmp;
 	char *p = s;
+	int ret = 1;
 
 	if (!parsed_valid_hugepagesz) {
 		pr_warn("HugeTLB: hugepages=%s does not follow a valid hugepagesz, ignoring\n", s);
@@ -4189,6 +4190,7 @@  static int __init hugepages_setup(char *s)
 		}
 	}
 
+out:
 	/*
 	 * Global state is always initialized later in hugetlb_init.
 	 * But we need to allocate gigantic hstates here early to still
@@ -4199,11 +4201,12 @@  static int __init hugepages_setup(char *s)
 
 	last_mhp = mhp;
 
-	return 1;
+	return ret;
 
 invalid:
 	pr_warn("HugeTLB: Invalid hugepages parameter %s\n", p);
-	return 0;
+	ret = 0;
+	goto out;
 }
 __setup("hugepages=", hugepages_setup);