mm: set hugepage to false when anon mthp allocation

Message ID 20240910140625.175700-1-wangkefeng.wang@huawei.com (mailing list archive)
State New
Series mm: set hugepage to false when anon mthp allocation

Commit Message

Kefeng Wang Sept. 10, 2024, 2:06 p.m. UTC
When the hugepage parameter is true in vma_alloc_folio(), it indicates
that we should only try the allocation on the preferred node, if
possible, for PMD_ORDER. This could lead to lots of failures for large
folio allocations. Luckily, the hugepage parameter has been deprecated
since commit ddc1a5cbc05d ("mempolicy: alloc_pages_mpol() for NUMA
policy without vma"), so there is no effect on runtime behavior.
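
Without ddc1a5cbc05d, the hugepage path pinned the first attempt to a
single node; roughly like this (a simplified sketch of the
pre-ddc1a5cbc05d logic in the old alloc_pages_vma(), not the exact
code):

	if (hugepage && pol->mode != MPOL_INTERLEAVE) {
		/* The MPOL_PREFERRED node, or else the local node. */
		int hpage_node = numa_node_id();

		/*
		 * __GFP_THISNODE forbids any fallback to other nodes,
		 * so a fragmented preferred node fails the allocation
		 * even when another node could have satisfied it.
		 */
		page = __alloc_pages_node(hpage_node,
				gfp | __GFP_THISNODE | __GFP_NORETRY, order);
		if (page || !(gfp & __GFP_DIRECT_RECLAIM))
			goto out;
		/* Only then retry with the normal nodemask fallback. */
	}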

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---

Found this issue when backporting mTHP to an internal kernel without
ddc1a5cbc05d. For mainline there is no issue; I have no clue why the
hugepage parameter was retained. Maybe we should just kill the
parameter in mainline?

 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Kefeng Wang Sept. 10, 2024, 2:18 p.m. UTC | #1
On 2024/9/10 22:06, Kefeng Wang wrote:
> When the hugepage parameter is true in vma_alloc_folio(), it indicates
> that we only try allocation on preferred node if possible for PMD_ORDER,

We should remove "for PMD_ORDER": I mean that it was used for
PMD_ORDER, but for any other high order it will reduce the success
rate of the allocation on kernels without ddc1a5cbc05d.
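
Concretely, alloc_anon_folio() walks the enabled orders from highest
to lowest, so on a kernel without ddc1a5cbc05d every attempt in the
fallback chain is pinned to one node (abridged sketch of the loop in
mm/memory.c, with this patch applied):

	order = highest_order(orders);
	while (orders) {
		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
		/*
		 * With hugepage=true, pre-ddc1a5cbc05d kernels add
		 * __GFP_THISNODE here, so every order in the chain
		 * can fail on a fragmented node even when a remote
		 * node could satisfy it.
		 */
		folio = vma_alloc_folio(gfp, order, vma, addr, false);
		if (folio)
			return folio;
		order = next_order(&orders, order);
	}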


> but it could lead to lots of failures for large folio allocation,
> luckily the hugepage parameter was deprecated since commit ddc1a5cbc05d
> ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), so no
> effect on runtime behavior.
> 
> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
> ---
> 
> Found the issue when backport mthp to inner kernel without ddc1a5cbc05d,
> but for mainline, there is no issue, no clue why hugepage parameter was
> retained, maybe just kill the parameter for mainline?
> 
>   mm/memory.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index b84443e689a8..89a15858348a 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>   	gfp = vma_thp_gfp_mask(vma);
>   	while (orders) {
>   		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
> -		folio = vma_alloc_folio(gfp, order, vma, addr, true);
> +		folio = vma_alloc_folio(gfp, order, vma, addr, false);
>   		if (folio) {
>   			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>   				count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
Kefeng Wang Sept. 13, 2024, 10:36 a.m. UTC | #2
Hi All,

On 2024/9/10 22:18, Kefeng Wang wrote:
> 
> 
> On 2024/9/10 22:06, Kefeng Wang wrote:
>> When the hugepage parameter is true in vma_alloc_folio(), it indicates
>> that we only try allocation on preferred node if possible for PMD_ORDER,
> 
> Should remove "for PMD_ORDER", I mean that it was used for PMD_ORDER, 
> but for other high-order, it will reduce the success rate of allocation 
> if without ddc1a5cbc05d.
> 
> 
>> but it could lead to lots of failures for large folio allocation,
>> luckily the hugepage parameter was deprecated since commit ddc1a5cbc05d
>> ("mempolicy: alloc_pages_mpol() for NUMA policy without vma"), so no
>> effect on runtime behavior.
>>
>> Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
>> ---
>>
>> Found the issue when backport mthp to inner kernel without ddc1a5cbc05d,
>> but for mainline, there is no issue, no clue why hugepage parameter was
>> retained, maybe just kill the parameter for mainline?


Any comments? Should we fix this in alloc_anon_folio() or remove the
hugepage parameter from vma_alloc_folio()? Thanks.
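
If we go the removal route, the interface would shrink to something
like this (sketch only; every caller would need the matching update):

	/* include/linux/gfp.h: drop the dead hugepage flag. */
	struct folio *vma_alloc_folio(gfp_t gfp, int order,
			struct vm_area_struct *vma, unsigned long addr);

	/* mm/memory.c caller, with the flag gone: */
	folio = vma_alloc_folio(gfp, order, vma, addr);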

>>
>>   mm/memory.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index b84443e689a8..89a15858348a 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>>       gfp = vma_thp_gfp_mask(vma);
>>       while (orders) {
>>           addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>> -        folio = vma_alloc_folio(gfp, order, vma, addr, true);
>> +        folio = vma_alloc_folio(gfp, order, vma, addr, false);
>>           if (folio) {
>>               if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>                   count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>

Patch

diff --git a/mm/memory.c b/mm/memory.c
index b84443e689a8..89a15858348a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4479,7 +4479,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 	gfp = vma_thp_gfp_mask(vma);
 	while (orders) {
 		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
-		folio = vma_alloc_folio(gfp, order, vma, addr, true);
+		folio = vma_alloc_folio(gfp, order, vma, addr, false);
 		if (folio) {
 			if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
 				count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);