mempolicy: Optimize queue_folios_pte_range by PTE batching

Message ID 20250411081301.8533-1-dev.jain@arm.com (mailing list archive)
State New
Series mempolicy: Optimize queue_folios_pte_range by PTE batching

Commit Message

Dev Jain April 11, 2025, 8:13 a.m. UTC
After the check for queue_folio_required(), the code in the for loop only cares
about the folio, i.e. the individual PTEs are redundant. Therefore, optimize this
loop by skipping over a PTE batch mapping the same folio.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
Unfortunately I have only build tested this since my test environment is
broken.

 mm/mempolicy.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

Comments

David Hildenbrand April 15, 2025, 10:17 a.m. UTC | #1
On 11.04.25 10:13, Dev Jain wrote:
> After the check for queue_folio_required(), the code only cares about the
> folio in the for loop, i.e the PTEs are redundant. Therefore, optimize this
> loop by skipping over a PTE batch mapping the same folio.
> 
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> Unfortunately I have only build tested this since my test environment is
> broken.
> 
>   mm/mempolicy.c | 12 +++++++++++-
>   1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b28a1e6ae096..b019524da8a2 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>   	pte_t *pte, *mapped_pte;
>   	pte_t ptent;
>   	spinlock_t *ptl;
> +	int max_nr;
> +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> +	int nr = 1;

Try sticking to reverse xmas tree, please. (not completely the case 
here, but fpb_flags can easily be moved all the way to the top)

Also, why are you initializing nr to 1 here if you reinitialize it below?

>   
>   	ptl = pmd_trans_huge_lock(pmd, vma);
>   	if (ptl) {
> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>   		walk->action = ACTION_AGAIN;
>   		return 0;
>   	}
> -	for (; addr != end; pte++, addr += PAGE_SIZE) {
> +	for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
> +		nr = 1;
>   		ptent = ptep_get(pte);
>   		if (pte_none(ptent))
>   			continue;
> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>   		if (!queue_folio_required(folio, qp))
>   			continue;
>   		if (folio_test_large(folio)) {
> +			max_nr = (end - addr) >> PAGE_SHIFT;
> +			if (max_nr != 1)
> +				nr = folio_pte_batch(folio, addr, pte, ptent,
> +						     max_nr, fpb_flags,
> +						     NULL, NULL, NULL);

We should probably do that immediately after we verified that 
vm_normal_folio() gave us something reasonable.

>   			/*
>   			 * A large folio can only be isolated from LRU once,
>   			 * but may be mapped by many PTEs (and Copy-On-Write may
> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>   			qp->nr_failed++;
>   			if (strictly_unmovable(flags))
>   				break;
> +			qp->nr_failed += nr - 1;

Can't we do qp->nr_failed += nr; above?

Weirdly enough, queue_folios_pmd() also only does qp->nr_failed++, but 
queue_pages_range() documents it that way.

>   		}
>   	}
>   	pte_unmap_unlock(mapped_pte, ptl);
Dev Jain April 15, 2025, 11:47 a.m. UTC | #2
On 15/04/25 3:47 pm, David Hildenbrand wrote:
> On 11.04.25 10:13, Dev Jain wrote:
>> After the check for queue_folio_required(), the code only cares about the
>> folio in the for loop, i.e the PTEs are redundant. Therefore, optimize 
>> this
>> loop by skipping over a PTE batch mapping the same folio.
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> Unfortunately I have only build tested this since my test environment is
>> broken.
>>
>>   mm/mempolicy.c | 12 +++++++++++-
>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>> index b28a1e6ae096..b019524da8a2 100644
>> --- a/mm/mempolicy.c
>> +++ b/mm/mempolicy.c
>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>> unsigned long addr,
>>       pte_t *pte, *mapped_pte;
>>       pte_t ptent;
>>       spinlock_t *ptl;
>> +    int max_nr;
>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>> +    int nr = 1;
> 
> Try sticking to reverse xmas tree, please. (not completely the case 
> here, but fpb_flags can easily be moved all he way to the top)

I thought that the initializations were to be kept at the bottom.
Asking for future patches, should I put all declarations in reverse-xmas 
fashion (even those which I don't intend to touch w.r.t the patch 
logic), or do I do that for only my additions?

> 
> Also, why are you initializing nr to 1 here if you reinitialize it below?

Yup, no need. I thought pte += nr would blow up due to nr not being 
initialized, but it won't, because the increment gets executed just before 
the start of the second iteration.

> 
>  >       ptl = pmd_trans_huge_lock(pmd, vma);
>  >       if (ptl) {
>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>> unsigned long addr,
>>           walk->action = ACTION_AGAIN;
>>           return 0;
>>       }
>  > -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>  > +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>> +        nr = 1;
>>           ptent = ptep_get(pte);
>>           if (pte_none(ptent))
>>               continue;
>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>> unsigned long addr,
>>           if (!queue_folio_required(folio, qp))
>>               continue;
>>           if (folio_test_large(folio)) {
>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>> +            if (max_nr != 1)
>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>> +                             max_nr, fpb_flags,
>> +                             NULL, NULL, NULL);
> 
> We should probably do that immediately after we verified that 
> vm_normal_folio() have us something reasonable.

But shouldn't we keep the small folio case separate to avoid the 
overhead of folio_pte_batch()?

> 
>>               /*
>>                * A large folio can only be isolated from LRU once,
>>                * but may be mapped by many PTEs (and Copy-On-Write may
>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>> unsigned long addr,
>>               qp->nr_failed++;
>>               if (strictly_unmovable(flags))
>>                   break;
>> +            qp->nr_failed += nr - 1;
> 
> Can't we do qp->nr_failed += nr; above?

I did not dive deep into the significance of nr_failed, but I did that
to keep the code, before and after the change, equivalent:

Claim: if we reach qp->nr_failed++ for a single pte, we will reach here 
for all ptes belonging to the same batch.

Proof: We reach here => the if condition is true. Now, !(flags & 
(MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) and !vma_migratable(vma) do not 
depend on the ptes. So the other case is that !migrate_folio_add() is 
true => !folio_isolate_lru() is true, which depends on the folio and not 
the PTEs; if isolation fails for one PTE, it will definitely fail for 
the PTE batch.

So, before the change, if we iterate on a pte mapping a large folio, and 
strictly_unmovable(flags) is true, then nr_failed += 1 only. If not, 
then nr_failed++ will happen nr times for sure (because of the claim) 
and we can safely do qp->nr_failed += nr - 1.
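
For reference, here is the failure branch being reasoned about, reconstructed
from the conditions named above together with the hunk in the patch (an
abbreviated sketch; exact argument lists may differ), including the proposed
"nr - 1" addition:

	if (!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) ||
	    !vma_migratable(vma) ||
	    !migrate_folio_add(folio, qp->pagelist, flags)) {
		qp->nr_failed++;
		if (strictly_unmovable(flags))
			break;
		qp->nr_failed += nr - 1;	/* proposed addition */
	}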

> 
> Weird enough, queue_folios_pmd() also only does qp->nr_failed++, but 
> queue_pages_range() documents it that way.
> 
>>           }
>>       }
>>       pte_unmap_unlock(mapped_pte, ptl);
> 
>
David Hildenbrand April 15, 2025, 11:59 a.m. UTC | #3
On 15.04.25 13:47, Dev Jain wrote:
> 
> 
> On 15/04/25 3:47 pm, David Hildenbrand wrote:
>> On 11.04.25 10:13, Dev Jain wrote:
>>> After the check for queue_folio_required(), the code only cares about the
>>> folio in the for loop, i.e the PTEs are redundant. Therefore, optimize
>>> this
>>> loop by skipping over a PTE batch mapping the same folio.
>>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>> Unfortunately I have only build tested this since my test environment is
>>> broken.
>>>
>>>    mm/mempolicy.c | 12 +++++++++++-
>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>> index b28a1e6ae096..b019524da8a2 100644
>>> --- a/mm/mempolicy.c
>>> +++ b/mm/mempolicy.c
>>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>> unsigned long addr,
>>>        pte_t *pte, *mapped_pte;
>>>        pte_t ptent;
>>>        spinlock_t *ptl;
>>> +    int max_nr;
>>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>> +    int nr = 1;
>>
>> Try sticking to reverse xmas tree, please. (not completely the case
>> here, but fpb_flags can easily be moved all he way to the top)
> 
> I thought that the initializations were to be kept at the bottom.

Not that I am aware of.

> Asking for future patches, should I put all declarations in reverse-xmas
> fashion (even those which I don't intend to touch w.r.t the patch
> logic), or do I do that for only my additions?

We try to stay as close to reverse-xmas tree as possible. It's not 
always possible (e.g., dependent assignments), but fpb_flags in this 
case here can easily go all the way to the top.
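
For illustration, a declaration block in that spirit (just a sketch of the
convention, longest declaration first, with fpb_flags hoisted to the top):

	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
	pte_t *pte, *mapped_pte;
	spinlock_t *ptl;
	pte_t ptent;
	int max_nr;
	int nr;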

...

> 
>>
>>   >       ptl = pmd_trans_huge_lock(pmd, vma);
>>   >       if (ptl) {
>>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>> unsigned long addr,
>>>            walk->action = ACTION_AGAIN;
>>>            return 0;
>>>        }
>>   > -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>   > +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>>> +        nr = 1;
>>>            ptent = ptep_get(pte);
>>>            if (pte_none(ptent))
>>>                continue;
>>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>> unsigned long addr,
>>>            if (!queue_folio_required(folio, qp))
>>>                continue;
>>>            if (folio_test_large(folio)) {
>>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>>> +            if (max_nr != 1)
>>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>>> +                             max_nr, fpb_flags,
>>> +                             NULL, NULL, NULL);
>>
>> We should probably do that immediately after we verified that
>> vm_normal_folio() have us something reasonable.
> 
> But shouldn't we keep the small folio case separate to avoid the
> overhead of folio_pte_batch()?

Yes, just do something like

max_nr = (end - addr) >> PAGE_SHIFT;
if (folio_test_large(folio) && max_nr != 1)
	nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
			     fpb_flags, ...);

before the folio_test_reserved().

Then you'd also skip all the PTEs if !queue_folio_required().
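
Roughly, the reordered loop body this implies would look like the following
(a sketch only, with the checks around vm_normal_folio() abbreviated; not the
final form of the patch):

	folio = vm_normal_folio(vma, addr, ptent);
	if (!folio)
		continue;
	/* batch early so that any 'continue' below skips the whole batch */
	max_nr = (end - addr) >> PAGE_SHIFT;
	if (folio_test_large(folio) && max_nr != 1)
		nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
				     fpb_flags, NULL, NULL, NULL);
	if (folio_test_reserved(folio))
		continue;
	if (!queue_folio_required(folio, qp))
		continue;
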

> 
>>
>>>                /*
>>>                 * A large folio can only be isolated from LRU once,
>>>                 * but may be mapped by many PTEs (and Copy-On-Write may
>>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>> unsigned long addr,
>>>                qp->nr_failed++;
>>>                if (strictly_unmovable(flags))
>>>                    break;
>>> +            qp->nr_failed += nr - 1;
>>
>> Can't we do qp->nr_failed += nr; above?
> 
> I did not dive deep into the significance of nr_failed, but I did that
> to keep the code, before and after the change, equivalent:

And I question exactly that.

If we hit strictly_unmovable(flags), we end up returning "-EIO" from
queue_folios_pte_range().

And staring at queue_pages_range(), we ignore nr_failed if 
walk_page_range() returned an error.

So looks like we can just add everything in one shot, independent of 
strictly_unmovable()?
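
In other words, the failure accounting would simply become (a sketch, assuming
the conditions quoted earlier in the thread):

	qp->nr_failed += nr;	/* count the whole batch in one shot */
	if (strictly_unmovable(flags))
		break;		/* -EIO is returned; the caller then ignores nr_failed */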
Dev Jain April 15, 2025, 12:06 p.m. UTC | #4
On 15/04/25 5:29 pm, David Hildenbrand wrote:
> On 15.04.25 13:47, Dev Jain wrote:
>>
>>
>> On 15/04/25 3:47 pm, David Hildenbrand wrote:
>>> On 11.04.25 10:13, Dev Jain wrote:
>>>> After the check for queue_folio_required(), the code only cares 
>>>> about the
>>>> folio in the for loop, i.e the PTEs are redundant. Therefore, optimize
>>>> this
>>>> loop by skipping over a PTE batch mapping the same folio.
>>>>
>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>> ---
>>>> Unfortunately I have only build tested this since my test 
>>>> environment is
>>>> broken.
>>>>
>>>>    mm/mempolicy.c | 12 +++++++++++-
>>>>    1 file changed, 11 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>>> index b28a1e6ae096..b019524da8a2 100644
>>>> --- a/mm/mempolicy.c
>>>> +++ b/mm/mempolicy.c
>>>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>>> unsigned long addr,
>>>>        pte_t *pte, *mapped_pte;
>>>>        pte_t ptent;
>>>>        spinlock_t *ptl;
>>>> +    int max_nr;
>>>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +    int nr = 1;
>>>
>>> Try sticking to reverse xmas tree, please. (not completely the case
>>> here, but fpb_flags can easily be moved all he way to the top)
>>
>> I thought that the initializations were to be kept at the bottom.
> 
> Not that I am aware of.
> 
>> Asking for future patches, should I put all declarations in reverse-xmas
>> fashion (even those which I don't intend to touch w.r.t the patch
>> logic), or do I do that for only my additions?
> 
> We try to stay as close to reverse-xmas tree as possible. It's not 
> always possible (e.g., dependent assignments), but fpb_flags in this 
> case here can easily go all the way to the top.

Sure.

> 
> ...
> 
>>
>>>
>>>   >       ptl = pmd_trans_huge_lock(pmd, vma);
>>>   >       if (ptl) {
>>>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>>> unsigned long addr,
>>>>            walk->action = ACTION_AGAIN;
>>>>            return 0;
>>>>        }
>>>   > -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>>   > +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>>>> +        nr = 1;
>>>>            ptent = ptep_get(pte);
>>>>            if (pte_none(ptent))
>>>>                continue;
>>>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>>> unsigned long addr,
>>>>            if (!queue_folio_required(folio, qp))
>>>>                continue;
>>>>            if (folio_test_large(folio)) {
>>>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>>>> +            if (max_nr != 1)
>>>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>>>> +                             max_nr, fpb_flags,
>>>> +                             NULL, NULL, NULL);
>>>
>>> We should probably do that immediately after we verified that
>>> vm_normal_folio() have us something reasonable.
>>
>> But shouldn't we keep the small folio case separate to avoid the
>> overhead of folio_pte_batch()?
> 
> Yes, just do something like
> 
> max_nr = (end - addr) >> PAGE_SHIFT;
> if (folio_test_large(folio) && max_nr != 1)
>      nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
>                   fpb_flags, ...);
> 
> before the folio_test_reserved().
> 
> Then you'd also skip all the PTEs if !queue_folio_required().

Ah got you, thanks.

> 
>>
>>>
>>>>                /*
>>>>                 * A large folio can only be isolated from LRU once,
>>>>                 * but may be mapped by many PTEs (and Copy-On-Write may
>>>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd,
>>>> unsigned long addr,
>>>>                qp->nr_failed++;
>>>>                if (strictly_unmovable(flags))
>>>>                    break;
>>>> +            qp->nr_failed += nr - 1;
>>>
>>> Can't we do qp->nr_failed += nr; above?
>>
>> I did not dive deep into the significance of nr_failed, but I did that
>> to keep the code, before and after the change, equivalent:
> 
> And I question exactly that.
> 
> If we hit strictly_unmovable(flags), we end up returning "-EIO" from
> queue_folios_pte_range().
> 
> And staring at queue_pages_range(), we ignore nr_failed if 
> walk_page_range() returned an error.
> 
> So looks like we can just add everything in one shot, independent of 
> strictly_unmovable()?

Looks good to me this way. I'll change it, thanks.

>
Baolin Wang April 16, 2025, 7:33 a.m. UTC | #5
On 2025/4/15 19:47, Dev Jain wrote:
> 
> 
> On 15/04/25 3:47 pm, David Hildenbrand wrote:
>> On 11.04.25 10:13, Dev Jain wrote:
>>> After the check for queue_folio_required(), the code only cares about 
>>> the
>>> folio in the for loop, i.e the PTEs are redundant. Therefore, 
>>> optimize this
>>> loop by skipping over a PTE batch mapping the same folio.
>>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>> Unfortunately I have only build tested this since my test environment is
>>> broken.
>>>
>>>   mm/mempolicy.c | 12 +++++++++++-
>>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>> index b28a1e6ae096..b019524da8a2 100644
>>> --- a/mm/mempolicy.c
>>> +++ b/mm/mempolicy.c
>>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>> unsigned long addr,
>>>       pte_t *pte, *mapped_pte;
>>>       pte_t ptent;
>>>       spinlock_t *ptl;
>>> +    int max_nr;
>>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>> +    int nr = 1;
>>
>> Try sticking to reverse xmas tree, please. (not completely the case 
>> here, but fpb_flags can easily be moved all he way to the top)
> 
> I thought that the initializations were to be kept at the bottom.
> Asking for future patches, should I put all declarations in reverse-xmas 
> fashion (even those which I don't intend to touch w.r.t the patch 
> logic), or do I do that for only my additions?
> 
>>
>> Also, why are you initializing nr to 1 here if you reinitialize it below?
> 
> Yup no need, I thought pte += nr will blow up due to nr not being 
> initialized, but it won't because it gets executed just before the start 
> of the second iteration.
> 
>>
>>  >       ptl = pmd_trans_huge_lock(pmd, vma);
>>  >       if (ptl) {
>>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>> unsigned long addr,
>>>           walk->action = ACTION_AGAIN;
>>>           return 0;
>>>       }
>>  > -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>  > +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>>> +        nr = 1;
>>>           ptent = ptep_get(pte);
>>>           if (pte_none(ptent))
>>>               continue;
>>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>> unsigned long addr,
>>>           if (!queue_folio_required(folio, qp))
>>>               continue;
>>>           if (folio_test_large(folio)) {
>>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>>> +            if (max_nr != 1)
>>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>>> +                             max_nr, fpb_flags,
>>> +                             NULL, NULL, NULL);
>>
>> We should probably do that immediately after we verified that 
>> vm_normal_folio() have us something reasonable.
> 
> But shouldn't we keep the small folio case separate to avoid the 
> overhead of folio_pte_batch()?
> 
>>
>>>               /*
>>>                * A large folio can only be isolated from LRU once,
>>>                * but may be mapped by many PTEs (and Copy-On-Write may
>>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>> unsigned long addr,
>>>               qp->nr_failed++;
>>>               if (strictly_unmovable(flags))
>>>                   break;
>>> +            qp->nr_failed += nr - 1;
>>
>> Can't we do qp->nr_failed += nr; above?
> 
> I did not dive deep into the significance of nr_failed, but I did that
> to keep the code, before and after the change, equivalent:
> 
> Claim: if we reach qp->nr_failed++ for a single pte, we will reach here 
> for all ptes belonging to the same batch.

Sorry, I missed the previous discussion (I replied to your new version). 
I think this claim is incorrect: we will skip the remaining PTEs belonging 
to the same batch because of the 'qp->large' check:

		if (folio_test_large(folio)) {
			if (folio == qp->large)
				continue;
			qp->large = folio;
		}
Dev Jain April 16, 2025, 8:55 a.m. UTC | #6
On 16/04/25 1:03 pm, Baolin Wang wrote:
> 
> 
> On 2025/4/15 19:47, Dev Jain wrote:
>>
>>
>> On 15/04/25 3:47 pm, David Hildenbrand wrote:
>>> On 11.04.25 10:13, Dev Jain wrote:
>>>> After the check for queue_folio_required(), the code only cares 
>>>> about the
>>>> folio in the for loop, i.e the PTEs are redundant. Therefore, 
>>>> optimize this
>>>> loop by skipping over a PTE batch mapping the same folio.
>>>>
>>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>>> ---
>>>> Unfortunately I have only build tested this since my test 
>>>> environment is
>>>> broken.
>>>>
>>>>   mm/mempolicy.c | 12 +++++++++++-
>>>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>>> index b28a1e6ae096..b019524da8a2 100644
>>>> --- a/mm/mempolicy.c
>>>> +++ b/mm/mempolicy.c
>>>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>>> unsigned long addr,
>>>>       pte_t *pte, *mapped_pte;
>>>>       pte_t ptent;
>>>>       spinlock_t *ptl;
>>>> +    int max_nr;
>>>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +    int nr = 1;
>>>
>>> Try sticking to reverse xmas tree, please. (not completely the case 
>>> here, but fpb_flags can easily be moved all he way to the top)
>>
>> I thought that the initializations were to be kept at the bottom.
>> Asking for future patches, should I put all declarations in reverse- 
>> xmas fashion (even those which I don't intend to touch w.r.t the patch 
>> logic), or do I do that for only my additions?
>>
>>>
>>> Also, why are you initializing nr to 1 here if you reinitialize it 
>>> below?
>>
>> Yup no need, I thought pte += nr will blow up due to nr not being 
>> initialized, but it won't because it gets executed just before the 
>> start of the second iteration.
>>
>>>
>>>  >       ptl = pmd_trans_huge_lock(pmd, vma);
>>>  >       if (ptl) {
>>>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>>> unsigned long addr,
>>>>           walk->action = ACTION_AGAIN;
>>>>           return 0;
>>>>       }
>>>  > -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>>  > +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>>>> +        nr = 1;
>>>>           ptent = ptep_get(pte);
>>>>           if (pte_none(ptent))
>>>>               continue;
>>>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>>> unsigned long addr,
>>>>           if (!queue_folio_required(folio, qp))
>>>>               continue;
>>>>           if (folio_test_large(folio)) {
>>>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>>>> +            if (max_nr != 1)
>>>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>>>> +                             max_nr, fpb_flags,
>>>> +                             NULL, NULL, NULL);
>>>
>>> We should probably do that immediately after we verified that 
>>> vm_normal_folio() have us something reasonable.
>>
>> But shouldn't we keep the small folio case separate to avoid the 
>> overhead of folio_pte_batch()?
>>
>>>
>>>>               /*
>>>>                * A large folio can only be isolated from LRU once,
>>>>                * but may be mapped by many PTEs (and Copy-On-Write may
>>>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd, 
>>>> unsigned long addr,
>>>>               qp->nr_failed++;
>>>>               if (strictly_unmovable(flags))
>>>>                   break;
>>>> +            qp->nr_failed += nr - 1;
>>>
>>> Can't we do qp->nr_failed += nr; above?
>>
>> I did not dive deep into the significance of nr_failed, but I did that
>> to keep the code, before and after the change, equivalent:
>>
>> Claim: if we reach qp->nr_failed++ for a single pte, we will reach 
>> here for all ptes belonging to the same batch.
> 
> Sorry, I missed the previous discussion (I replied to your new version). 
> I think this claim is incorrect, we will skip remaining ptes belonging 
> to the same batch with checking 'qp->large'.
> 
>          if (folio_test_large(folio)) {
>              if (folio == qp->large)
>                  continue;
>              qp->large = folio;
>          }

Oops you are right, I missed that.

Patch

diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b28a1e6ae096..b019524da8a2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -573,6 +573,9 @@  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 	pte_t *pte, *mapped_pte;
 	pte_t ptent;
 	spinlock_t *ptl;
+	int max_nr;
+	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+	int nr = 1;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -586,7 +589,8 @@  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
-	for (; addr != end; pte++, addr += PAGE_SIZE) {
+	for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
+		nr = 1;
 		ptent = ptep_get(pte);
 		if (pte_none(ptent))
 			continue;
@@ -607,6 +611,11 @@  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 		if (!queue_folio_required(folio, qp))
 			continue;
 		if (folio_test_large(folio)) {
+			max_nr = (end - addr) >> PAGE_SHIFT;
+			if (max_nr != 1)
+				nr = folio_pte_batch(folio, addr, pte, ptent,
+						     max_nr, fpb_flags,
+						     NULL, NULL, NULL);
 			/*
 			 * A large folio can only be isolated from LRU once,
 			 * but may be mapped by many PTEs (and Copy-On-Write may
@@ -633,6 +642,7 @@  static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			qp->nr_failed++;
 			if (strictly_unmovable(flags))
 				break;
+			qp->nr_failed += nr - 1;
 		}
 	}
 	pte_unmap_unlock(mapped_pte, ptl);