
[v6,2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop

Message ID 20240521040244.48760-3-ioworker0@gmail.com (mailing list archive)
State New
Series Reclaim lazyfree THP without splitting

Commit Message

Lance Yang May 21, 2024, 4:02 a.m. UTC
In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
folios, start the pagewalk first, then call split_huge_pmd_address() to
split the folio.

Since TTU_SPLIT_HUGE_PMD will no longer be performed immediately, we might
encounter a PMD-mapped THP that missed the mlock in the VM_LOCKED range
during the page walk. It's probably necessary to mlock this THP to prevent
it from being picked up during page reclaim.

Suggested-by: David Hildenbrand <david@redhat.com>
Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Signed-off-by: Lance Yang <ioworker0@gmail.com>
---
 include/linux/huge_mm.h |  6 ++++++
 mm/huge_memory.c        | 42 +++++++++++++++++++++--------------------
 mm/rmap.c               | 26 ++++++++++++++++++-------
 3 files changed, 47 insertions(+), 27 deletions(-)

Comments

David Hildenbrand June 5, 2024, 12:46 p.m. UTC | #1
On 21.05.24 06:02, Lance Yang wrote:
> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> folios, start the pagewalk first, then call split_huge_pmd_address() to
> split the folio.
> 
> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> the page walk. It’s probably necessary to mlock this THP to prevent it from
> being picked up during page reclaim.
> 
> Suggested-by: David Hildenbrand <david@redhat.com>
> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> ---

[...] again, sorry for the late review.

> diff --git a/mm/rmap.c b/mm/rmap.c
> index ddffa30c79fb..08a93347f283 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>   	if (flags & TTU_SYNC)
>   		pvmw.flags = PVMW_SYNC;
>   
> -	if (flags & TTU_SPLIT_HUGE_PMD)
> -		split_huge_pmd_address(vma, address, false, folio);
> -
>   	/*
>   	 * For THP, we have to assume the worse case ie pmd for invalidation.
>   	 * For hugetlb, it could be much worse if we need to do pud
> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>   	mmu_notifier_invalidate_range_start(&range);
>   
>   	while (page_vma_mapped_walk(&pvmw)) {
> -		/* Unexpected PMD-mapped THP? */
> -		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> -
>   		/*
>   		 * If the folio is in an mlock()d vma, we must not swap it out.
>   		 */
>   		if (!(flags & TTU_IGNORE_MLOCK) &&
>   		    (vma->vm_flags & VM_LOCKED)) {
>   			/* Restore the mlock which got missed */
> -			if (!folio_test_large(folio))
> +			if (!folio_test_large(folio) ||
> +			    (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>   				mlock_vma_folio(folio, vma);

Can you elaborate why you think this would be required? If we had 
performed the split_huge_pmd_address() beforehand, we would still be 
left with a large folio, no?

>   			goto walk_done_err;
>   		}
>   
> +		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
> +			/*
> +			 * We temporarily have to drop the PTL and start once
> +			 * again from that now-PTE-mapped page table.
> +			 */
> +			split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
> +					      folio);

Using range.start here is a bit weird. Wouldn't this be pvmw.address? 
[did not check]

> +			pvmw.pmd = NULL;
> +			spin_unlock(pvmw.ptl);
> +			pvmw.ptl = NULL;


Would we want a page_vma_mapped_walk_restart() that is exactly for that 
purpose?

> +			flags &= ~TTU_SPLIT_HUGE_PMD;
> +			continue;
> +		}
> +
> +		/* Unexpected PMD-mapped THP? */
> +		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
Lance Yang June 5, 2024, 2:20 p.m. UTC | #2
Hi David,

On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 21.05.24 06:02, Lance Yang wrote:
> > In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> > folios, start the pagewalk first, then call split_huge_pmd_address() to
> > split the folio.
> >
> > Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> > encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> > the page walk. It’s probably necessary to mlock this THP to prevent it from
> > being picked up during page reclaim.
> >
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
>
> [...] again, sorry for the late review.

No worries at all, thanks for taking time to review!

>
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index ddffa30c79fb..08a93347f283 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >       if (flags & TTU_SYNC)
> >               pvmw.flags = PVMW_SYNC;
> >
> > -     if (flags & TTU_SPLIT_HUGE_PMD)
> > -             split_huge_pmd_address(vma, address, false, folio);
> > -
> >       /*
> >        * For THP, we have to assume the worse case ie pmd for invalidation.
> >        * For hugetlb, it could be much worse if we need to do pud
> > @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >       mmu_notifier_invalidate_range_start(&range);
> >
> >       while (page_vma_mapped_walk(&pvmw)) {
> > -             /* Unexpected PMD-mapped THP? */
> > -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> > -
> >               /*
> >                * If the folio is in an mlock()d vma, we must not swap it out.
> >                */
> >               if (!(flags & TTU_IGNORE_MLOCK) &&
> >                   (vma->vm_flags & VM_LOCKED)) {
> >                       /* Restore the mlock which got missed */
> > -                     if (!folio_test_large(folio))
> > +                     if (!folio_test_large(folio) ||
> > +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >                               mlock_vma_folio(folio, vma);
>
> Can you elaborate why you think this would be required? If we would have
> performed the  split_huge_pmd_address() beforehand, we would still be
> left with a large folio, no?

Yep, there would still be a large folio, but it wouldn't be PMD-mapped.

After Fengwei's series[1], the kernel supports mlock for PTE-mapped large
folios, but there are a few scenarios where we don't mlock a large folio, such
as when it crosses a VM_LOCKED VMA boundary.

 -                     if (!folio_test_large(folio))
 +                     if (!folio_test_large(folio) ||
 +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))

And this check is just future-proofing and likely unnecessary. If we
encounter a PMD-mapped THP that missed the mlock for some reason, we can
mlock this THP to prevent it from being picked up during page reclaim, since
it is fully mapped and doesn't cross the VMA boundary, IIUC.

What do you think?
I would appreciate any suggestions regarding this check ;)

[1] https://lore.kernel.org/all/20230918073318.1181104-3-fengwei.yin@intel.com/T/#mdab40248cf3705581d8bfb64e1ebf2d9cd81c095

>
> >                       goto walk_done_err;
> >               }
> >
> > +             if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
> > +                     /*
> > +                      * We temporarily have to drop the PTL and start once
> > +                      * again from that now-PTE-mapped page table.
> > +                      */
> > +                     split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
> > +                                           folio);
>
> Using range.start here is a bit weird. Wouldn't this be pvmw.address?
> [did not check]

Hmm... we may adjust range.start before the page walk, but pvmw.address
is not adjusted.

At least for now, pvmw.address seems better. Will adjust as you suggested.

>
> > +                     pvmw.pmd = NULL;
> > +                     spin_unlock(pvmw.ptl);
> > +                     pvmw.ptl = NULL;
>
>
> Would we want a
>
> page_vma_mapped_walk_restart() that is exactly for that purpose?

Nice, let's add page_vma_mapped_walk_restart() for that purpose :)

Thanks again for your time!

Lance

>
> > +                     flags &= ~TTU_SPLIT_HUGE_PMD;
> > +                     continue;
> > +             }
> > +
> > +             /* Unexpected PMD-mapped THP? */
> > +             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand June 5, 2024, 2:28 p.m. UTC | #3
On 05.06.24 16:20, Lance Yang wrote:
> Hi David,
> 
> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 21.05.24 06:02, Lance Yang wrote:
>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
>>> split the folio.
>>>
>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
>>> being picked up during page reclaim.
>>>
>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>> ---
>>
>> [...] again, sorry for the late review.
> 
> No worries at all, thanks for taking time to review!
> 
>>
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index ddffa30c79fb..08a93347f283 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>        if (flags & TTU_SYNC)
>>>                pvmw.flags = PVMW_SYNC;
>>>
>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
>>> -             split_huge_pmd_address(vma, address, false, folio);
>>> -
>>>        /*
>>>         * For THP, we have to assume the worse case ie pmd for invalidation.
>>>         * For hugetlb, it could be much worse if we need to do pud
>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>        mmu_notifier_invalidate_range_start(&range);
>>>
>>>        while (page_vma_mapped_walk(&pvmw)) {
>>> -             /* Unexpected PMD-mapped THP? */
>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>>> -
>>>                /*
>>>                 * If the folio is in an mlock()d vma, we must not swap it out.
>>>                 */
>>>                if (!(flags & TTU_IGNORE_MLOCK) &&
>>>                    (vma->vm_flags & VM_LOCKED)) {
>>>                        /* Restore the mlock which got missed */
>>> -                     if (!folio_test_large(folio))
>>> +                     if (!folio_test_large(folio) ||
>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>                                mlock_vma_folio(folio, vma);
>>
>> Can you elaborate why you think this would be required? If we would have
>> performed the  split_huge_pmd_address() beforehand, we would still be
>> left with a large folio, no?
> 
> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
> 
> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
> folio, but there are a few scenarios where we don't mlock a large folio, such
> as when it crosses a VM_LOCKed VMA boundary.
> 
>   -                     if (!folio_test_large(folio))
>   +                     if (!folio_test_large(folio) ||
>   +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> 
> And this check is just future-proofing and likely unnecessary. If encountering a
> PMD-mapped THP missing the mlock for some reason, we can mlock this
> THP to prevent it from being picked up during page reclaim, since it is fully
> mapped and doesn't cross the VMA boundary, IIUC.
> 
> What do you think?
> I would appreciate any suggestions regarding this check ;)

Reading this patch only, I wonder if this change makes sense in the 
context here.

Before this patch, we would have PTE-mapped the PMD-mapped THP before 
reaching this call and skipped it due to "!folio_test_large(folio)".

After this patch, we either

a) PTE-remap the THP after this check, but retry and end up here again, 
whereby we would skip it due to "!folio_test_large(folio)".

b) Discard the PMD-mapped THP due to lazyfree directly. Can that 
co-exist with mlock and what would be the problem here with mlock?


So if the check is required in this patch, we really have to understand 
why. If not, we should better drop it from this patch.

At least my opinion, still struggling to understand why it would be 
required (I have 0 knowledge about mlock interaction with large folios :) ).
David Hildenbrand June 5, 2024, 2:39 p.m. UTC | #4
On 05.06.24 16:28, David Hildenbrand wrote:
> On 05.06.24 16:20, Lance Yang wrote:
>> Hi David,
>>
>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>>>
>>> On 21.05.24 06:02, Lance Yang wrote:
>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
>>>> split the folio.
>>>>
>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
>>>> being picked up during page reclaim.
>>>>
>>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>>> ---
>>>
>>> [...] again, sorry for the late review.
>>
>> No worries at all, thanks for taking time to review!
>>
>>>
>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>> index ddffa30c79fb..08a93347f283 100644
>>>> --- a/mm/rmap.c
>>>> +++ b/mm/rmap.c
>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>         if (flags & TTU_SYNC)
>>>>                 pvmw.flags = PVMW_SYNC;
>>>>
>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
>>>> -             split_huge_pmd_address(vma, address, false, folio);
>>>> -
>>>>         /*
>>>>          * For THP, we have to assume the worse case ie pmd for invalidation.
>>>>          * For hugetlb, it could be much worse if we need to do pud
>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>         mmu_notifier_invalidate_range_start(&range);
>>>>
>>>>         while (page_vma_mapped_walk(&pvmw)) {
>>>> -             /* Unexpected PMD-mapped THP? */
>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>>>> -
>>>>                 /*
>>>>                  * If the folio is in an mlock()d vma, we must not swap it out.
>>>>                  */
>>>>                 if (!(flags & TTU_IGNORE_MLOCK) &&
>>>>                     (vma->vm_flags & VM_LOCKED)) {
>>>>                         /* Restore the mlock which got missed */
>>>> -                     if (!folio_test_large(folio))
>>>> +                     if (!folio_test_large(folio) ||
>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>>                                 mlock_vma_folio(folio, vma);
>>>
>>> Can you elaborate why you think this would be required? If we would have
>>> performed the  split_huge_pmd_address() beforehand, we would still be
>>> left with a large folio, no?
>>
>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
>>
>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
>> folio, but there are a few scenarios where we don't mlock a large folio, such
>> as when it crosses a VM_LOCKed VMA boundary.
>>
>>    -                     if (!folio_test_large(folio))
>>    +                     if (!folio_test_large(folio) ||
>>    +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>
>> And this check is just future-proofing and likely unnecessary. If encountering a
>> PMD-mapped THP missing the mlock for some reason, we can mlock this
>> THP to prevent it from being picked up during page reclaim, since it is fully
>> mapped and doesn't cross the VMA boundary, IIUC.
>>
>> What do you think?
>> I would appreciate any suggestions regarding this check ;)
> 
> Reading this patch only, I wonder if this change makes sense in the
> context here.
> 
> Before this patch, we would have PTE-mapped the PMD-mapped THP before
> reaching this call and skipped it due to "!folio_test_large(folio)".
> 
> After this patch, we either
> 
> a) PTE-remap the THP after this check, but retry and end-up here again,
> whereby we would skip it due to "!folio_test_large(folio)".
> 
> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> co-exist with mlock and what would be the problem here with mlock?
> 
> 
> So if the check is required in this patch, we really have to understand
> why. If not, we should better drop it from this patch.
> 
> At least my opinion, still struggling to understand why it would be
> required (I have 0 knowledge about mlock interaction with large folios :) ).
> 

Looking at that series, in folio_referenced_one(), we do

			if (!folio_test_large(folio) || !pvmw.pte) {
				/* Restore the mlock which got missed */
				mlock_vma_folio(folio, vma);
				page_vma_mapped_walk_done(&pvmw);
				pra->vm_flags |= VM_LOCKED;
				return false; /* To break the loop */
			}

I wonder if we want that here as well now: in case of lazyfree we
would not back off, right?

But I'm not sure if lazyfree in mlocked areas is even possible.

Adding the "!pvmw.pte" would be much clearer to me than the flag check.
Lance Yang June 5, 2024, 2:57 p.m. UTC | #5
On Wed, Jun 5, 2024 at 10:39 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.06.24 16:28, David Hildenbrand wrote:
> > On 05.06.24 16:20, Lance Yang wrote:
> >> Hi David,
> >>
> >> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>> On 21.05.24 06:02, Lance Yang wrote:
> >>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>> split the folio.
> >>>>
> >>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> >>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> >>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
> >>>> being picked up during page reclaim.
> >>>>
> >>>> Suggested-by: David Hildenbrand <david@redhat.com>
> >>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> >>>> ---
> >>>
> >>> [...] again, sorry for the late review.
> >>
> >> No worries at all, thanks for taking time to review!
> >>
> >>>
> >>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>> index ddffa30c79fb..08a93347f283 100644
> >>>> --- a/mm/rmap.c
> >>>> +++ b/mm/rmap.c
> >>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>         if (flags & TTU_SYNC)
> >>>>                 pvmw.flags = PVMW_SYNC;
> >>>>
> >>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
> >>>> -             split_huge_pmd_address(vma, address, false, folio);
> >>>> -
> >>>>         /*
> >>>>          * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>          * For hugetlb, it could be much worse if we need to do pud
> >>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>         mmu_notifier_invalidate_range_start(&range);
> >>>>
> >>>>         while (page_vma_mapped_walk(&pvmw)) {
> >>>> -             /* Unexpected PMD-mapped THP? */
> >>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>> -
> >>>>                 /*
> >>>>                  * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>                  */
> >>>>                 if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>                     (vma->vm_flags & VM_LOCKED)) {
> >>>>                         /* Restore the mlock which got missed */
> >>>> -                     if (!folio_test_large(folio))
> >>>> +                     if (!folio_test_large(folio) ||
> >>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>                                 mlock_vma_folio(folio, vma);
> >>>
> >>> Can you elaborate why you think this would be required? If we would have
> >>> performed the  split_huge_pmd_address() beforehand, we would still be
> >>> left with a large folio, no?
> >>
> >> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
> >>
> >> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
> >> folio, but there are a few scenarios where we don't mlock a large folio, such
> >> as when it crosses a VM_LOCKed VMA boundary.
> >>
> >>    -                     if (!folio_test_large(folio))
> >>    +                     if (!folio_test_large(folio) ||
> >>    +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>
> >> And this check is just future-proofing and likely unnecessary. If encountering a
> >> PMD-mapped THP missing the mlock for some reason, we can mlock this
> >> THP to prevent it from being picked up during page reclaim, since it is fully
> >> mapped and doesn't cross the VMA boundary, IIUC.
> >>
> >> What do you think?
> >> I would appreciate any suggestions regarding this check ;)
> >
> > Reading this patch only, I wonder if this change makes sense in the
> > context here.
> >
> > Before this patch, we would have PTE-mapped the PMD-mapped THP before
> > reaching this call and skipped it due to "!folio_test_large(folio)".
> >
> > After this patch, we either
> >
> > a) PTE-remap the THP after this check, but retry and end-up here again,
> > whereby we would skip it due to "!folio_test_large(folio)".
> >
> > b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> > co-exist with mlock and what would be the problem here with mlock?
> >
> >

Thanks a lot for clarifying!

> > So if the check is required in this patch, we really have to understand
> > why. If not, we should better drop it from this patch.
> >
> > At least my opinion, still struggling to understand why it would be
> > required (I have 0 knowledge about mlock interaction with large folios :) ).
> >
>
> Looking at that series, in folio_references_one(), we do
>
>                         if (!folio_test_large(folio) || !pvmw.pte) {
>                                 /* Restore the mlock which got missed */
>                                 mlock_vma_folio(folio, vma);
>                                 page_vma_mapped_walk_done(&pvmw);
>                                 pra->vm_flags |= VM_LOCKED;
>                                 return false; /* To break the loop */
>                         }
>
> I wonder if we want that here as well now: in case of lazyfree we
> would not back off, right?
>
> But I'm not sure if lazyfree in mlocked areas are even possible.
>
> Adding the "!pvmw.pte" would be much clearer to me than the flag check.

Hmm... How about we drop it from this patch for now, and add it back if needed
in the future?

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand June 5, 2024, 3:02 p.m. UTC | #6
On 05.06.24 16:57, Lance Yang wrote:
> On Wed, Jun 5, 2024 at 10:39 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 05.06.24 16:28, David Hildenbrand wrote:
>>> On 05.06.24 16:20, Lance Yang wrote:
>>>> Hi David,
>>>>
>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 21.05.24 06:02, Lance Yang wrote:
>>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
>>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
>>>>>> split the folio.
>>>>>>
>>>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
>>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
>>>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
>>>>>> being picked up during page reclaim.
>>>>>>
>>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>>>>> ---
>>>>>
>>>>> [...] again, sorry for the late review.
>>>>
>>>> No worries at all, thanks for taking time to review!
>>>>
>>>>>
>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>>> index ddffa30c79fb..08a93347f283 100644
>>>>>> --- a/mm/rmap.c
>>>>>> +++ b/mm/rmap.c
>>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>>>          if (flags & TTU_SYNC)
>>>>>>                  pvmw.flags = PVMW_SYNC;
>>>>>>
>>>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
>>>>>> -             split_huge_pmd_address(vma, address, false, folio);
>>>>>> -
>>>>>>          /*
>>>>>>           * For THP, we have to assume the worse case ie pmd for invalidation.
>>>>>>           * For hugetlb, it could be much worse if we need to do pud
>>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>>>          mmu_notifier_invalidate_range_start(&range);
>>>>>>
>>>>>>          while (page_vma_mapped_walk(&pvmw)) {
>>>>>> -             /* Unexpected PMD-mapped THP? */
>>>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>>>>>> -
>>>>>>                  /*
>>>>>>                   * If the folio is in an mlock()d vma, we must not swap it out.
>>>>>>                   */
>>>>>>                  if (!(flags & TTU_IGNORE_MLOCK) &&
>>>>>>                      (vma->vm_flags & VM_LOCKED)) {
>>>>>>                          /* Restore the mlock which got missed */
>>>>>> -                     if (!folio_test_large(folio))
>>>>>> +                     if (!folio_test_large(folio) ||
>>>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>>>>                                  mlock_vma_folio(folio, vma);
>>>>>
>>>>> Can you elaborate why you think this would be required? If we would have
>>>>> performed the  split_huge_pmd_address() beforehand, we would still be
>>>>> left with a large folio, no?
>>>>
>>>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
>>>>
>>>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
>>>> folio, but there are a few scenarios where we don't mlock a large folio, such
>>>> as when it crosses a VM_LOCKed VMA boundary.
>>>>
>>>>     -                     if (!folio_test_large(folio))
>>>>     +                     if (!folio_test_large(folio) ||
>>>>     +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>>
>>>> And this check is just future-proofing and likely unnecessary. If encountering a
>>>> PMD-mapped THP missing the mlock for some reason, we can mlock this
>>>> THP to prevent it from being picked up during page reclaim, since it is fully
>>>> mapped and doesn't cross the VMA boundary, IIUC.
>>>>
>>>> What do you think?
>>>> I would appreciate any suggestions regarding this check ;)
>>>
>>> Reading this patch only, I wonder if this change makes sense in the
>>> context here.
>>>
>>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
>>> reaching this call and skipped it due to "!folio_test_large(folio)".
>>>
>>> After this patch, we either
>>>
>>> a) PTE-remap the THP after this check, but retry and end-up here again,
>>> whereby we would skip it due to "!folio_test_large(folio)".
>>>
>>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
>>> co-exist with mlock and what would be the problem here with mlock?
>>>
>>>
> 
> Thanks a lot for clarifying!
> 
>>> So if the check is required in this patch, we really have to understand
>>> why. If not, we should better drop it from this patch.
>>>
>>> At least my opinion, still struggling to understand why it would be
>>> required (I have 0 knowledge about mlock interaction with large folios :) ).
>>>
>>
>> Looking at that series, in folio_references_one(), we do
>>
>>                          if (!folio_test_large(folio) || !pvmw.pte) {
>>                                  /* Restore the mlock which got missed */
>>                                  mlock_vma_folio(folio, vma);
>>                                  page_vma_mapped_walk_done(&pvmw);
>>                                  pra->vm_flags |= VM_LOCKED;
>>                                  return false; /* To break the loop */
>>                          }
>>
>> I wonder if we want that here as well now: in case of lazyfree we
>> would not back off, right?
>>
>> But I'm not sure if lazyfree in mlocked areas are even possible.
>>
>> Adding the "!pvmw.pte" would be much clearer to me than the flag check.
> 
> Hmm... How about we drop it from this patch for now, and add it back if needed
> in the future?

If we can rule out that MADV_FREE + mlock() keeps working as expected in 
the PMD-mapped case, we're good.

Can we rule that out? (especially for MADV_FREE followed by mlock())
Lance Yang June 5, 2024, 3:43 p.m. UTC | #7
On Wed, Jun 5, 2024 at 11:03 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.06.24 16:57, Lance Yang wrote:
> > On Wed, Jun 5, 2024 at 10:39 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 05.06.24 16:28, David Hildenbrand wrote:
> >>> On 05.06.24 16:20, Lance Yang wrote:
> >>>> Hi David,
> >>>>
> >>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 21.05.24 06:02, Lance Yang wrote:
> >>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>>>> split the folio.
> >>>>>>
> >>>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> >>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> >>>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
> >>>>>> being picked up during page reclaim.
> >>>>>>
> >>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
> >>>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> >>>>>> ---
> >>>>>
> >>>>> [...] again, sorry for the late review.
> >>>>
> >>>> No worries at all, thanks for taking time to review!
> >>>>
> >>>>>
> >>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>>>> index ddffa30c79fb..08a93347f283 100644
> >>>>>> --- a/mm/rmap.c
> >>>>>> +++ b/mm/rmap.c
> >>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>          if (flags & TTU_SYNC)
> >>>>>>                  pvmw.flags = PVMW_SYNC;
> >>>>>>
> >>>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
> >>>>>> -             split_huge_pmd_address(vma, address, false, folio);
> >>>>>> -
> >>>>>>          /*
> >>>>>>           * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>>>           * For hugetlb, it could be much worse if we need to do pud
> >>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>          mmu_notifier_invalidate_range_start(&range);
> >>>>>>
> >>>>>>          while (page_vma_mapped_walk(&pvmw)) {
> >>>>>> -             /* Unexpected PMD-mapped THP? */
> >>>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>>>> -
> >>>>>>                  /*
> >>>>>>                   * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>>>                   */
> >>>>>>                  if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>>>                      (vma->vm_flags & VM_LOCKED)) {
> >>>>>>                          /* Restore the mlock which got missed */
> >>>>>> -                     if (!folio_test_large(folio))
> >>>>>> +                     if (!folio_test_large(folio) ||
> >>>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>                                  mlock_vma_folio(folio, vma);
> >>>>>
> >>>>> Can you elaborate why you think this would be required? If we would have
> >>>>> performed the  split_huge_pmd_address() beforehand, we would still be
> >>>>> left with a large folio, no?
> >>>>
> >>>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
> >>>>
> >>>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
> >>>> folio, but there are a few scenarios where we don't mlock a large folio, such
> >>>> as when it crosses a VM_LOCKed VMA boundary.
> >>>>
> >>>>     -                     if (!folio_test_large(folio))
> >>>>     +                     if (!folio_test_large(folio) ||
> >>>>     +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>
> >>>> And this check is just future-proofing and likely unnecessary. If encountering a
> >>>> PMD-mapped THP missing the mlock for some reason, we can mlock this
> >>>> THP to prevent it from being picked up during page reclaim, since it is fully
> >>>> mapped and doesn't cross the VMA boundary, IIUC.
> >>>>
> >>>> What do you think?
> >>>> I would appreciate any suggestions regarding this check ;)
> >>>
> >>> Reading this patch only, I wonder if this change makes sense in the
> >>> context here.
> >>>
> >>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
> >>> reaching this call and skipped it due to "!folio_test_large(folio)".
> >>>
> >>> After this patch, we either
> >>>
> >>> a) PTE-remap the THP after this check, but retry and end-up here again,
> >>> whereby we would skip it due to "!folio_test_large(folio)".
> >>>
> >>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> >>> co-exist with mlock and what would be the problem here with mlock?
> >>>
> >>>
> >
> > Thanks a lot for clarifying!
> >
> >>> So if the check is required in this patch, we really have to understand
> >>> why. If not, we should better drop it from this patch.
> >>>
> >>> At least my opinion, still struggling to understand why it would be
> >>> required (I have 0 knowledge about mlock interaction with large folios :) ).
> >>>
> >>
> >> Looking at that series, in folio_references_one(), we do
> >>
> >>                          if (!folio_test_large(folio) || !pvmw.pte) {
> >>                                  /* Restore the mlock which got missed */
> >>                                  mlock_vma_folio(folio, vma);
> >>                                  page_vma_mapped_walk_done(&pvmw);
> >>                                  pra->vm_flags |= VM_LOCKED;
> >>                                  return false; /* To break the loop */
> >>                          }
> >>
> >> I wonder if we want that here as well now: in case of lazyfree we
> >> would not back off, right?
> >>
> >> But I'm not sure if lazyfree in mlocked areas are even possible.
> >>
> >> Adding the "!pvmw.pte" would be much clearer to me than the flag check.
> >
> > Hmm... How about we drop it from this patch for now, and add it back if needed
> > in the future?
>
> If we can rule out that MADV_FREE + mlock() keeps working as expected in
> the PMD-mapped case, we're good.
>
> Can we rule that out? (especially for MADV_FREE followed by mlock())

Perhaps we don't need to worry about that.

IIUC, even without that check, MADV_FREE + mlock() still works as expected
in the PMD-mapped case: whenever we encounter a large folio in a VM_LOCKED
VMA range, we stop the page walk immediately.
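To make this concrete, here is a minimal user-space sketch of the behavior described above. This is a hypothetical model, not kernel code: `struct walk_state`, `reclaim_step()`, and the simplified flag are invented stand-ins for the real kernel state.

```c
#include <assert.h>
#include <stdbool.h>

#define TTU_IGNORE_MLOCK (1 << 0)

/* Simplified stand-ins for the kernel state (illustrative only). */
struct walk_state {
	bool vma_locked;     /* vma->vm_flags & VM_LOCKED */
	bool folio_is_large; /* folio_test_large(folio)   */
};

enum outcome { FOLIO_DISCARDED, WALK_ABORTED };

/*
 * Models one iteration of the reclaim-side walk: in a VM_LOCKED VMA we
 * never fall through to the discard path.  A small folio additionally
 * gets its missed mlock restored; a large folio is merely skipped --
 * but in both cases the walk stops, so a lazyfree folio in an mlocked
 * range survives reclaim.
 */
static enum outcome reclaim_step(const struct walk_state *w, int flags,
				 bool *mlock_restored)
{
	*mlock_restored = false;
	if (!(flags & TTU_IGNORE_MLOCK) && w->vma_locked) {
		if (!w->folio_is_large)
			*mlock_restored = true; /* mlock_vma_folio() */
		return WALK_ABORTED;            /* stop the page walk */
	}
	return FOLIO_DISCARDED; /* lazyfree path may free the folio */
}
```

Under this model, MADV_FREE followed by mlock() leaves the folio alive whether or not it is large, which matches the reasoning above.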

Thanks,
Lance


>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand June 5, 2024, 4:16 p.m. UTC | #8
On 05.06.24 17:43, Lance Yang wrote:
> On Wed, Jun 5, 2024 at 11:03 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 05.06.24 16:57, Lance Yang wrote:
>>> [...]
>>>
>>> Hmm... How about we drop it from this patch for now, and add it back if needed
>>> in the future?
>>
>> If we can rule out that MADV_FREE + mlock() keeps working as expected in
>> the PMD-mapped case, we're good.
>>
>> Can we rule that out? (especially for MADV_FREE followed by mlock())
> 
> Perhaps we don't worry about that.
> 
> IIUC, without that check, MADV_FREE + mlock() still works as expected in
> the PMD-mapped case, since if encountering a large folio in a VM_LOCKED
> VMA range, we will stop the page walk immediately.


Can you point me at the code (especially considering patch #3?)
Lance Yang June 6, 2024, 3:55 a.m. UTC | #9
On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.06.24 16:20, Lance Yang wrote:
> > [...]
>
> Reading this patch only, I wonder if this change makes sense in the
> context here.

Allow me to try explaining it again ;)

>
> Before this patch, we would have PTE-mapped the PMD-mapped THP before
> reaching this call and skipped it due to "!folio_test_large(folio)".

Yes, by the time we reach the "!folio_test_large(folio)" check there is
only a PTE-mapped THP, since we first conditionally split the PMD via
split_huge_pmd_address().

>
> After this patch, we either

Things will change. We'll first do the "!folio_test_large(folio)" check, then
conditionally split the PMD via split_huge_pmd_address().
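To picture the two orderings, here is a hypothetical user-space model (the two helpers below are invented for illustration and do not correspond to real kernel functions):

```c
#include <assert.h>
#include <stdbool.h>

/* "pmd_mapped" stands for a folio that is still mapped by a PMD when
 * try_to_unmap_one() is entered. */

/* Old order: split_huge_pmd_address() runs before the walk starts, so
 * the VM_LOCKED check can only ever observe PTE mappings. */
static bool old_order_check_sees_pmd(bool pmd_mapped)
{
	(void)pmd_mapped; /* split_huge_pmd_address() already ran */
	return false;     /* every walk iteration is PTE-mapped   */
}

/* New order: the walk starts first, so its first iteration can still
 * observe the PMD mapping (pvmw.pte == NULL); the split happens later,
 * inside the loop. */
static bool new_order_check_sees_pmd(bool pmd_mapped)
{
	return pmd_mapped;
}
```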

>
> a) PTE-remap the THP after this check, but retry and end-up here again,
> whereby we would skip it due to "!folio_test_large(folio)".

Hmm...

IIUC, we will skip it after this check, stop the page walk, and not
PTE-remap the THP.

>
> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> co-exist with mlock and what would be the problem here with mlock?

Before discarding a PMD-mapped THP as a whole, as patch #3 does,
we also perform the "!folio_test_large(folio)" check. If the THP coexists
with mlock, we will skip it, stop the page walk, and not discard it, IIUC.

>
>
> So if the check is required in this patch, we really have to understand
> why. If not, we should better drop it from this patch.

I added the "(!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD))" check
in this patch just to future-proof mlock for a PMD-mapped THP missing
the mlock, to prevent it from being picked up during page reclaim.

But is this really required? It seems like nothing should really be broken
without this check.
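For reference, the check in question can be modeled in isolation (a user-space sketch; `should_restore_mlock()` and its boolean parameters are invented stand-ins for the kernel expressions):

```c
#include <assert.h>
#include <stdbool.h>

#define TTU_SPLIT_HUGE_PMD (1 << 0)

/*
 * The condition under discussion: besides small folios, also restore
 * the mlock when the walk meets a folio that is still PMD-mapped
 * (pvmw.pte == NULL) while the caller asked for a PMD split.
 */
static bool should_restore_mlock(bool folio_is_large, bool has_pte,
				 int flags)
{
	return !folio_is_large ||
	       (!has_pte && (flags & TTU_SPLIT_HUGE_PMD));
}
```

As the sketch shows, the extra clause only fires for a still-PMD-mapped THP; a PTE-mapped large folio keeps being skipped.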

Perhaps we should drop it from this patch until we fully understand the
reason for it. Could you give me some suggestions?

Thanks,
Lance


>
> At least my opinion, still struggling to understand why it would be
> required (I have 0 knowledge about mlock interaction with large folios :) ).
>
> --
> Cheers,
>
> David / dhildenb
>
Lance Yang June 6, 2024, 3:57 a.m. UTC | #10
On Thu, Jun 6, 2024 at 12:16 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 05.06.24 17:43, Lance Yang wrote:
> > [...]
> >>>
> >>> Hmm... How about we drop it from this patch for now, and add it back if needed
> >>> in the future?
> >>
> >> If we can rule out that MADV_FREE + mlock() keeps working as expected in
> >> the PMD-mapped case, we're good.
> >>
> >> Can we rule that out? (especially for MADV_FREE followed by mlock())
> >
> > Perhaps we don't worry about that.
> >
> > IIUC, without that check, MADV_FREE + mlock() still works as expected in
> > the PMD-mapped case, since if encountering a large folio in a VM_LOCKED
> > VMA range, we will stop the page walk immediately.
>
>
> Can you point me at the code (especially considering patch #3?)

Yep, please see my other mail ;)

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand June 6, 2024, 8:01 a.m. UTC | #11
On 06.06.24 05:55, Lance Yang wrote:
> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 05.06.24 16:20, Lance Yang wrote:
>>> [...]
>>
>> Reading this patch only, I wonder if this change makes sense in the
>> context here.
> 
> Allow me to try explaining it again ;)
> 
>>
>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
>> reaching this call and skipped it due to "!folio_test_large(folio)".
> 
> Yes, there is only a PTE-mapped THP when doing the "!folio_test_large(folio)"
> check, as we will first conditionally split the PMD via
> split_huge_pmd_address().
> 
>>
>> After this patch, we either
> 
> Things will change. We'll first do the "!folio_test_large(folio)" check, then
> conditionally split the PMD via split_huge_pmd_address().
> 
>>
>> a) PTE-remap the THP after this check, but retry and end-up here again,
>> whereby we would skip it due to "!folio_test_large(folio)".
> 
> Hmm...
> 
> IIUC, we will skip it after this check, stop the page walk, and not
> PTE-remap the THP.
> 
>>
>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
>> co-exist with mlock and what would be the problem here with mlock?
> 
> Before discarding a PMD-mapped THP as a whole, as patch #3 did,
> we also perform the "!folio_test_large(folio)" check. If the THP coexists
> with mlock, we will skip it, stop the page walk, and not discard it. IIUC.

But "!folio_test_large(folio)" would *skip* the THP and not consider it 
regarding mlock.

I'm probably missing something and should try current mm/mm-unstable 
with MADV_FREE + mlock() on a PMD-mapped THP.

> 
>>
>>
>> So if the check is required in this patch, we really have to understand
>> why. If not, we should better drop it from this patch.
> 
> I added the "!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD))" check
> in this patch just to future-proof mlock for a PMD-mapped THP missing
> the mlock, to prevent it from being picked up during page reclaim.
> 
> But is this really required? It seems like nothing should really be broken
> without this check.
> 
> Perhaps, we should drop it from this patch until we fully understand the
> reason for it. Could you get me some suggestions?

We should drop it from this patch, agreed. We might need it 
("!pvmw.pte") in patch #3, but I still have to understand if there 
really would be a problem.
David Hildenbrand June 6, 2024, 8:06 a.m. UTC | #12
On 06.06.24 10:01, David Hildenbrand wrote:
> On 06.06.24 05:55, Lance Yang wrote:
>> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
>>>
>>> On 05.06.24 16:20, Lance Yang wrote:
>>>> Hi David,
>>>>
>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 21.05.24 06:02, Lance Yang wrote:
>>>>>> [...]
>>>>>
>>>>> [...] again, sorry for the late review.
>>>>
>>>> No worries at all, thanks for taking time to review!
>>>>
>>>>>
>>>>>> [...]
>>>>>
>>>>> Can you elaborate why you think this would be required? If we would have
>>>>> performed the  split_huge_pmd_address() beforehand, we would still be
>>>>> left with a large folio, no?
>>>>
>>>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
>>>>
>>>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
>>>> folio, but there are a few scenarios where we don't mlock a large folio, such
>>>> as when it crosses a VM_LOCKed VMA boundary.
>>>>
>>>>     -                     if (!folio_test_large(folio))
>>>>     +                     if (!folio_test_large(folio) ||
>>>>     +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>>
>>>> And this check is just future-proofing and likely unnecessary. If encountering a
>>>> PMD-mapped THP missing the mlock for some reason, we can mlock this
>>>> THP to prevent it from being picked up during page reclaim, since it is fully
>>>> mapped and doesn't cross the VMA boundary, IIUC.
>>>>
>>>> What do you think?
>>>> I would appreciate any suggestions regarding this check ;)
>>>
>>> Reading this patch only, I wonder if this change makes sense in the
>>> context here.
>>
>> Allow me to try explaining it again ;)
>>
>>>
>>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
>>> reaching this call and skipped it due to "!folio_test_large(folio)".
>>
>> Yes, there is only a PTE-mapped THP when doing the "!folio_test_large(folio)"
>> check, as we will first conditionally split the PMD via
>> split_huge_pmd_address().
>>
>>>
>>> After this patch, we either
>>
>> Things will change. We'll first do the "!folio_test_large(folio)" check, then
>> conditionally split the PMD via split_huge_pmd_address().
>>
>>>
>>> a) PTE-remap the THP after this check, but retry and end-up here again,
>>> whereby we would skip it due to "!folio_test_large(folio)".
>>
>> Hmm...
>>
>> IIUC, we will skip it after this check, stop the page walk, and not
>> PTE-remap the THP.
>>
>>>
>>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
>>> co-exist with mlock and what would be the problem here with mlock?
>>
>> Before discarding a PMD-mapped THP as a whole, as patch #3 did,
>> we also perform the "!folio_test_large(folio)" check. If the THP coexists
>> with mlock, we will skip it, stop the page walk, and not discard it. IIUC.
> 
> But "!folio_test_large(folio)" would *skip* the THP and not consider it
> regarding mlock.
> 
> I'm probably missing something

I'm stupid, I missed that we still do the "goto walk_done_err;", only 
that we don't do the mlock_vma_folio(folio, vma);

Yes, let's drop it for now! :)
Lance Yang June 6, 2024, 9:38 a.m. UTC | #13
On Thu, Jun 6, 2024 at 4:06 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 06.06.24 10:01, David Hildenbrand wrote:
> > On 06.06.24 05:55, Lance Yang wrote:
> >> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>> On 05.06.24 16:20, Lance Yang wrote:
> >>>> Hi David,
> >>>>
> >>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 21.05.24 06:02, Lance Yang wrote:
> >>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>>>> split the folio.
> >>>>>>
> >>>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> >>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> >>>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
> >>>>>> being picked up during page reclaim.
> >>>>>>
> >>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
> >>>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> >>>>>> ---
> >>>>>
> >>>>> [...] again, sorry for the late review.
> >>>>
> >>>> No worries at all, thanks for taking time to review!
> >>>>
> >>>>>
> >>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>>>> index ddffa30c79fb..08a93347f283 100644
> >>>>>> --- a/mm/rmap.c
> >>>>>> +++ b/mm/rmap.c
> >>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>          if (flags & TTU_SYNC)
> >>>>>>                  pvmw.flags = PVMW_SYNC;
> >>>>>>
> >>>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
> >>>>>> -             split_huge_pmd_address(vma, address, false, folio);
> >>>>>> -
> >>>>>>          /*
> >>>>>>           * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>>>           * For hugetlb, it could be much worse if we need to do pud
> >>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>          mmu_notifier_invalidate_range_start(&range);
> >>>>>>
> >>>>>>          while (page_vma_mapped_walk(&pvmw)) {
> >>>>>> -             /* Unexpected PMD-mapped THP? */
> >>>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>>>> -
> >>>>>>                  /*
> >>>>>>                   * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>>>                   */
> >>>>>>                  if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>>>                      (vma->vm_flags & VM_LOCKED)) {
> >>>>>>                          /* Restore the mlock which got missed */
> >>>>>> -                     if (!folio_test_large(folio))
> >>>>>> +                     if (!folio_test_large(folio) ||
> >>>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>                                  mlock_vma_folio(folio, vma);

Should we still keep the '!pvmw.pte' here? Something like:

if (!folio_test_large(folio) || !pvmw.pte)
    mlock_vma_folio(folio, vma);

We can mlock the THP to prevent it from being picked up during page reclaim.

David, I’d like to hear your thoughts on this ;)

Thanks,
Lance

> >>>>>
> >>>>> Can you elaborate why you think this would be required? If we would have
> >>>>> performed the  split_huge_pmd_address() beforehand, we would still be
> >>>>> left with a large folio, no?
> >>>>
> >>>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
> >>>>
> >>>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
> >>>> folio, but there are a few scenarios where we don't mlock a large folio, such
> >>>> as when it crosses a VM_LOCKed VMA boundary.
> >>>>
> >>>>     -                     if (!folio_test_large(folio))
> >>>>     +                     if (!folio_test_large(folio) ||
> >>>>     +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>
> >>>> And this check is just future-proofing and likely unnecessary. If encountering a
> >>>> PMD-mapped THP missing the mlock for some reason, we can mlock this
> >>>> THP to prevent it from being picked up during page reclaim, since it is fully
> >>>> mapped and doesn't cross the VMA boundary, IIUC.
> >>>>
> >>>> What do you think?
> >>>> I would appreciate any suggestions regarding this check ;)
> >>>
> >>> Reading this patch only, I wonder if this change makes sense in the
> >>> context here.
> >>
> >> Allow me to try explaining it again ;)
> >>
> >>>
> >>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
> >>> reaching this call and skipped it due to "!folio_test_large(folio)".
> >>
> >> Yes, there is only a PTE-mapped THP when doing the "!folio_test_large(folio)"
> >> check, as we will first conditionally split the PMD via
> >> split_huge_pmd_address().
> >>
> >>>
> >>> After this patch, we either
> >>
> >> Things will change. We'll first do the "!folio_test_large(folio)" check, then
> >> conditionally split the PMD via split_huge_pmd_address().
> >>
> >>>
> >>> a) PTE-remap the THP after this check, but retry and end-up here again,
> >>> whereby we would skip it due to "!folio_test_large(folio)".
> >>
> >> Hmm...
> >>
> >> IIUC, we will skip it after this check, stop the page walk, and not
> >> PTE-remap the THP.
> >>
> >>>
> >>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> >>> co-exist with mlock and what would be the problem here with mlock?
> >>
> >> Before discarding a PMD-mapped THP as a whole, as patch #3 did,
> >> we also perform the "!folio_test_large(folio)" check. If the THP coexists
> >> with mlock, we will skip it, stop the page walk, and not discard it. IIUC.
> >
> > But "!folio_test_large(folio)" would *skip* the THP and not consider it
> > regarding mlock.
> >
> > I'm probably missing something
>
> I'm stupid, I missed that we still do the "goto walk_done_err;", only
> that we don't do the mlock_vma_folio(folio, vma);
>
> Yes, let's drop it for now! :)
>
> --
> Cheers,
>
> David / dhildenb
>
David Hildenbrand June 6, 2024, 9:41 a.m. UTC | #14
On 06.06.24 11:38, Lance Yang wrote:
> On Thu, Jun 6, 2024 at 4:06 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 06.06.24 10:01, David Hildenbrand wrote:
>>> On 06.06.24 05:55, Lance Yang wrote:
>>>> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 05.06.24 16:20, Lance Yang wrote:
>>>>>> Hi David,
>>>>>>
>>>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>
>>>>>>> On 21.05.24 06:02, Lance Yang wrote:
>>>>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
>>>>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
>>>>>>>> split the folio.
>>>>>>>>
>>>>>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
>>>>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
>>>>>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
>>>>>>>> being picked up during page reclaim.
>>>>>>>>
>>>>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
>>>>>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>>>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
>>>>>>>> ---
>>>>>>>
>>>>>>> [...] again, sorry for the late review.
>>>>>>
>>>>>> No worries at all, thanks for taking time to review!
>>>>>>
>>>>>>>
>>>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>>>>> index ddffa30c79fb..08a93347f283 100644
>>>>>>>> --- a/mm/rmap.c
>>>>>>>> +++ b/mm/rmap.c
>>>>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>>>>>           if (flags & TTU_SYNC)
>>>>>>>>                   pvmw.flags = PVMW_SYNC;
>>>>>>>>
>>>>>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
>>>>>>>> -             split_huge_pmd_address(vma, address, false, folio);
>>>>>>>> -
>>>>>>>>           /*
>>>>>>>>            * For THP, we have to assume the worse case ie pmd for invalidation.
>>>>>>>>            * For hugetlb, it could be much worse if we need to do pud
>>>>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>>>>>>>           mmu_notifier_invalidate_range_start(&range);
>>>>>>>>
>>>>>>>>           while (page_vma_mapped_walk(&pvmw)) {
>>>>>>>> -             /* Unexpected PMD-mapped THP? */
>>>>>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
>>>>>>>> -
>>>>>>>>                   /*
>>>>>>>>                    * If the folio is in an mlock()d vma, we must not swap it out.
>>>>>>>>                    */
>>>>>>>>                   if (!(flags & TTU_IGNORE_MLOCK) &&
>>>>>>>>                       (vma->vm_flags & VM_LOCKED)) {
>>>>>>>>                           /* Restore the mlock which got missed */
>>>>>>>> -                     if (!folio_test_large(folio))
>>>>>>>> +                     if (!folio_test_large(folio) ||
>>>>>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
>>>>>>>>                                   mlock_vma_folio(folio, vma);
> 
> Should we still keep the '!pvmw.pte' here? Something like:
> 
> if (!folio_test_large(folio) || !pvmw.pte)
>      mlock_vma_folio(folio, vma);

I was wondering the same the whole time ...

> 
> We can mlock the THP to prevent it from being picked up during page reclaim.
> 
> David, I’d like to hear your thoughts on this ;)

but I think there is no need to, for now, in the context of your patchset. :)
Lance Yang June 7, 2024, 1:50 a.m. UTC | #15
On Thu, Jun 6, 2024 at 5:41 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 06.06.24 11:38, Lance Yang wrote:
> > On Thu, Jun 6, 2024 at 4:06 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 06.06.24 10:01, David Hildenbrand wrote:
> >>> On 06.06.24 05:55, Lance Yang wrote:
> >>>> On Wed, Jun 5, 2024 at 10:28 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 05.06.24 16:20, Lance Yang wrote:
> >>>>>> Hi David,
> >>>>>>
> >>>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>
> >>>>>>> On 21.05.24 06:02, Lance Yang wrote:
> >>>>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>>>>>> split the folio.
> >>>>>>>>
> >>>>>>>> Since TTU_SPLIT_HUGE_PMD will no longer perform immediately, we might
> >>>>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> >>>>>>>> the page walk. It’s probably necessary to mlock this THP to prevent it from
> >>>>>>>> being picked up during page reclaim.
> >>>>>>>>
> >>>>>>>> Suggested-by: David Hildenbrand <david@redhat.com>
> >>>>>>>> Suggested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> >>>>>>>> Signed-off-by: Lance Yang <ioworker0@gmail.com>
> >>>>>>>> ---
> >>>>>>>
> >>>>>>> [...] again, sorry for the late review.
> >>>>>>
> >>>>>> No worries at all, thanks for taking time to review!
> >>>>>>
> >>>>>>>
> >>>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>>>>>> index ddffa30c79fb..08a93347f283 100644
> >>>>>>>> --- a/mm/rmap.c
> >>>>>>>> +++ b/mm/rmap.c
> >>>>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>>           if (flags & TTU_SYNC)
> >>>>>>>>                   pvmw.flags = PVMW_SYNC;
> >>>>>>>>
> >>>>>>>> -     if (flags & TTU_SPLIT_HUGE_PMD)
> >>>>>>>> -             split_huge_pmd_address(vma, address, false, folio);
> >>>>>>>> -
> >>>>>>>>           /*
> >>>>>>>>            * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>>>>>            * For hugetlb, it could be much worse if we need to do pud
> >>>>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>>           mmu_notifier_invalidate_range_start(&range);
> >>>>>>>>
> >>>>>>>>           while (page_vma_mapped_walk(&pvmw)) {
> >>>>>>>> -             /* Unexpected PMD-mapped THP? */
> >>>>>>>> -             VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>>>>>> -
> >>>>>>>>                   /*
> >>>>>>>>                    * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>>>>>                    */
> >>>>>>>>                   if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>>>>>                       (vma->vm_flags & VM_LOCKED)) {
> >>>>>>>>                           /* Restore the mlock which got missed */
> >>>>>>>> -                     if (!folio_test_large(folio))
> >>>>>>>> +                     if (!folio_test_large(folio) ||
> >>>>>>>> +                         (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>>>                                   mlock_vma_folio(folio, vma);
> >
> > Should we still keep the '!pvmw.pte' here? Something like:
> >
> > if (!folio_test_large(folio) || !pvmw.pte)
> >      mlock_vma_folio(folio, vma);
>
> I was wondering the same the whole time ...
>
> >
> > We can mlock the THP to prevent it from being picked up during page reclaim.
> >
> > David, I’d like to hear your thoughts on this ;)
>
> but I think there is no need to for now, in the context of your patchset. :)

Agreed. Let's drop it for now :)

Thanks a lot for your thoughts!
Lance

>
> --
> Cheers,
>
> David / dhildenb
>

Patch

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index c8d3ec116e29..9fcb0b0b6ed1 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -409,6 +409,9 @@  static inline bool thp_migration_supported(void)
 	return IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio);
+
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
@@ -471,6 +474,9 @@  static inline void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio) {}
 static inline void split_huge_pmd_address(struct vm_area_struct *vma,
 		unsigned long address, bool freeze, struct folio *folio) {}
+static inline void split_huge_pmd_locked(struct vm_area_struct *vma,
+					 unsigned long address, pmd_t *pmd,
+					 bool freeze, struct folio *folio) {}
 
 #define split_huge_pud(__vma, __pmd, __address)	\
 	do { } while (0)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 317de2afd371..425272c6c50b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2581,6 +2581,27 @@  static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pmd_populate(mm, pmd, pgtable);
 }
 
+void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
+			   pmd_t *pmd, bool freeze, struct folio *folio)
+{
+	VM_WARN_ON_ONCE(folio && !folio_test_pmd_mappable(folio));
+	VM_WARN_ON_ONCE(!IS_ALIGNED(address, HPAGE_PMD_SIZE));
+	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
+	VM_BUG_ON(freeze && !folio);
+
+	/*
+	 * When the caller requests to set up a migration entry, we
+	 * require a folio to check the PMD against. Otherwise, there
+	 * is a risk of replacing the wrong folio.
+	 */
+	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
+	    is_pmd_migration_entry(*pmd)) {
+		if (folio && folio != pmd_folio(*pmd))
+			return;
+		__split_huge_pmd_locked(vma, pmd, address, freeze);
+	}
+}
+
 void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 		unsigned long address, bool freeze, struct folio *folio)
 {
@@ -2592,26 +2613,7 @@  void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 				(address & HPAGE_PMD_MASK) + HPAGE_PMD_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
 	ptl = pmd_lock(vma->vm_mm, pmd);
-
-	/*
-	 * If caller asks to setup a migration entry, we need a folio to check
-	 * pmd against. Otherwise we can end up replacing wrong folio.
-	 */
-	VM_BUG_ON(freeze && !folio);
-	VM_WARN_ON_ONCE(folio && !folio_test_locked(folio));
-
-	if (pmd_trans_huge(*pmd) || pmd_devmap(*pmd) ||
-	    is_pmd_migration_entry(*pmd)) {
-		/*
-		 * It's safe to call pmd_page when folio is set because it's
-		 * guaranteed that pmd is present.
-		 */
-		if (folio && folio != pmd_folio(*pmd))
-			goto out;
-		__split_huge_pmd_locked(vma, pmd, range.start, freeze);
-	}
-
-out:
+	split_huge_pmd_locked(vma, range.start, pmd, freeze, folio);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index ddffa30c79fb..08a93347f283 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1640,9 +1640,6 @@  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	if (flags & TTU_SYNC)
 		pvmw.flags = PVMW_SYNC;
 
-	if (flags & TTU_SPLIT_HUGE_PMD)
-		split_huge_pmd_address(vma, address, false, folio);
-
 	/*
 	 * For THP, we have to assume the worse case ie pmd for invalidation.
 	 * For hugetlb, it could be much worse if we need to do pud
@@ -1668,20 +1665,35 @@  static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
 	mmu_notifier_invalidate_range_start(&range);
 
 	while (page_vma_mapped_walk(&pvmw)) {
-		/* Unexpected PMD-mapped THP? */
-		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
-
 		/*
 		 * If the folio is in an mlock()d vma, we must not swap it out.
 		 */
 		if (!(flags & TTU_IGNORE_MLOCK) &&
 		    (vma->vm_flags & VM_LOCKED)) {
 			/* Restore the mlock which got missed */
-			if (!folio_test_large(folio))
+			if (!folio_test_large(folio) ||
+			    (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
 				mlock_vma_folio(folio, vma);
 			goto walk_done_err;
 		}
 
+		if (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)) {
+			/*
+			 * We temporarily have to drop the PTL and start once
+			 * again from that now-PTE-mapped page table.
+			 */
+			split_huge_pmd_locked(vma, range.start, pvmw.pmd, false,
+					      folio);
+			pvmw.pmd = NULL;
+			spin_unlock(pvmw.ptl);
+			pvmw.ptl = NULL;
+			flags &= ~TTU_SPLIT_HUGE_PMD;
+			continue;
+		}
+
+		/* Unexpected PMD-mapped THP? */
+		VM_BUG_ON_FOLIO(!pvmw.pte, folio);
+
 		pfn = pte_pfn(ptep_get(pvmw.pte));
 		subpage = folio_page(folio, pfn - folio_pfn(folio));
 		address = pvmw.address;