
mm: thp: check total_mapcount instead of page_mapcount

Message ID 20210430210744.216095-1-shy828301@gmail.com (mailing list archive)
State New
Series mm: thp: check total_mapcount instead of page_mapcount

Commit Message

Yang Shi April 30, 2021, 9:07 p.m. UTC
When debugging the bug reported by Wang Yugui [1], we found that
try_to_unmap() may return a false positive for a PTE-mapped THP, since
page_mapcount() is used to check whether the THP is unmapped, but it only
checks the compound mapcount and the head page's mapcount.  If the THP is
PTE-mapped and the head page is not mapped, it may return a false positive.

Use total_mapcount() instead of page_mapcount() and do so for the
VM_BUG_ON_PAGE in split_huge_page_to_list as well.

[1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/

Signed-off-by: Yang Shi <shy828301@gmail.com>
---
 mm/huge_memory.c | 2 +-
 mm/rmap.c        | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

Comments

Zi Yan April 30, 2021, 9:30 p.m. UTC | #1
On 30 Apr 2021, at 17:07, Yang Shi wrote:

> When debugging the bug reported by Wang Yugui [1], we found that
> try_to_unmap() may return a false positive for a PTE-mapped THP, since
> page_mapcount() is used to check whether the THP is unmapped, but it only
> checks the compound mapcount and the head page's mapcount.  If the THP is
> PTE-mapped and the head page is not mapped, it may return a false positive.
>
> Use total_mapcount() instead of page_mapcount() and do so for the
> VM_BUG_ON_PAGE in split_huge_page_to_list as well.
>
> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
>
> Signed-off-by: Yang Shi <shy828301@gmail.com>
> ---
>  mm/huge_memory.c | 2 +-
>  mm/rmap.c        | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 63ed6b25deaa..2122c3e853b9 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  	}
>
>  	unmap_page(head);
> -	VM_BUG_ON_PAGE(compound_mapcount(head), head);
> +	VM_BUG_ON_PAGE(total_mapcount(head), head);

I am not sure about this change. The code below also checks total_mapcount(head)
and returns -EBUSY if the count is non-zero, so this change makes that code dead.
On the other hand, the change will force all mappings to the page to be
successfully unmapped all the time. I am not sure if we want to do that.
Maybe it is better to just check total_mapcount() and fail the split.
The same situation happens with the code change below.

>
>  	/* block interrupt reentry in xa_lock and spinlock */
>  	local_irq_disable();
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 693a610e181d..2e547378ab5f 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
>  	else
>  		rmap_walk(page, &rwc);
>
> -	return !page_mapcount(page) ? true : false;
> +	return !total_mapcount(page) ? true : false;
>  }

In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all mappings
to the page to be unmapped, which might not be what we want.
Maybe you will want to remove the VM_BUG_ON_PAGE here, check total_mapcount()
above, and fail the split if not all mappings to the page are unmapped.



—
Best Regards,
Yan Zi
Yang Shi April 30, 2021, 9:56 p.m. UTC | #2
On Fri, Apr 30, 2021 at 2:30 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 30 Apr 2021, at 17:07, Yang Shi wrote:
>
> > When debugging the bug reported by Wang Yugui [1], we found that
> > try_to_unmap() may return a false positive for a PTE-mapped THP, since
> > page_mapcount() is used to check whether the THP is unmapped, but it only
> > checks the compound mapcount and the head page's mapcount.  If the THP is
> > PTE-mapped and the head page is not mapped, it may return a false positive.
> >
> > Use total_mapcount() instead of page_mapcount() and do so for the
> > VM_BUG_ON_PAGE in split_huge_page_to_list as well.
> >
> > [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > ---
> >  mm/huge_memory.c | 2 +-
> >  mm/rmap.c        | 2 +-
> >  2 files changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 63ed6b25deaa..2122c3e853b9 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >       }
> >
> >       unmap_page(head);
> > -     VM_BUG_ON_PAGE(compound_mapcount(head), head);
> > +     VM_BUG_ON_PAGE(total_mapcount(head), head);
>
> I am not sure about this change. The code below also checks total_mapcount(head)
> and returns -EBUSY if the count is non-zero, so this change makes that code dead.

It is actually already dead if CONFIG_DEBUG_VM is enabled and total_mapcount
is not 0, regardless of this change, due to the code below, right?

	if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
		pr_alert("total_mapcount: %u, page_count(): %u\n",
				mapcount, count);
		if (PageTail(page))
			dump_page(head, NULL);
		dump_page(page, "total_mapcount(head) > 0");
		BUG();
	}

> On the other hand, the change will force all mappings to the page to be
> successfully unmapped all the time. I am not sure if we want to do that.
> Maybe it is better to just check total_mapcount() and fail the split.
> The same situation happens with the code change below.

IIUC, the code did force all mappings to the page to be unmapped in
order to split it.

>
> >
> >       /* block interrupt reentry in xa_lock and spinlock */
> >       local_irq_disable();
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 693a610e181d..2e547378ab5f 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> >       else
> >               rmap_walk(page, &rwc);
> >
> > -     return !page_mapcount(page) ? true : false;
> > +     return !total_mapcount(page) ? true : false;
> >  }
>
> In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all mappings
> to the page to be unmapped, which might not be what we want.

AFAICT, I don't see such a case from all the callers of
try_to_unmap(). I may miss something, but I do have a hard time
thinking of a use case which can proceed safely with a "not fully
unmapped" page.

> Maybe you will want to remove the VM_BUG_ON_PAGE here, check total_mapcount()
> above, and fail the split if not all mappings to the page are unmapped.
>
>
>
> —
> Best Regards,
> Yan Zi
Zi Yan April 30, 2021, 10:30 p.m. UTC | #3
On 30 Apr 2021, at 17:56, Yang Shi wrote:

> On Fri, Apr 30, 2021 at 2:30 PM Zi Yan <ziy@nvidia.com> wrote:
>>
>> On 30 Apr 2021, at 17:07, Yang Shi wrote:
>>
>>> When debugging the bug reported by Wang Yugui [1], we found that
>>> try_to_unmap() may return a false positive for a PTE-mapped THP, since
>>> page_mapcount() is used to check whether the THP is unmapped, but it only
>>> checks the compound mapcount and the head page's mapcount.  If the THP is
>>> PTE-mapped and the head page is not mapped, it may return a false positive.
>>>
>>> Use total_mapcount() instead of page_mapcount() and do so for the
>>> VM_BUG_ON_PAGE in split_huge_page_to_list as well.
>>>
>>> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
>>>
>>> Signed-off-by: Yang Shi <shy828301@gmail.com>
>>> ---
>>>  mm/huge_memory.c | 2 +-
>>>  mm/rmap.c        | 2 +-
>>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 63ed6b25deaa..2122c3e853b9 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>>       }
>>>
>>>       unmap_page(head);
>>> -     VM_BUG_ON_PAGE(compound_mapcount(head), head);
>>> +     VM_BUG_ON_PAGE(total_mapcount(head), head);
>>
>> I am not sure about this change. The code below also checks total_mapcount(head)
>> and returns -EBUSY if the count is non-zero, so this change makes that code dead.
>
> It is actually already dead if CONFIG_DEBUG_VM is enabled and total_mapcount
> is not 0, regardless of this change, due to the code below, right?
>
> 	if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> 		pr_alert("total_mapcount: %u, page_count(): %u\n",
> 				mapcount, count);
> 		if (PageTail(page))
> 			dump_page(head, NULL);
> 		dump_page(page, "total_mapcount(head) > 0");
> 		BUG();
> 	}

Right. But with this change, mapcount will never be non-zero. The code above
will be useless and can be removed.

>> On the other hand, the change will force all mappings to the page to be
>> successfully unmapped all the time. I am not sure if we want to do that.
>> Maybe it is better to just check total_mapcount() and fail the split.
>> The same situation happens with the code change below.
>
> IIUC, the code did force all mappings to the page to be unmapped in
> order to split it.
>>
>>>
>>>       /* block interrupt reentry in xa_lock and spinlock */
>>>       local_irq_disable();
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 693a610e181d..2e547378ab5f 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
>>>       else
>>>               rmap_walk(page, &rwc);
>>>
>>> -     return !page_mapcount(page) ? true : false;
>>> +     return !total_mapcount(page) ? true : false;
>>>  }
>>
>> In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all mappings
>> to the page to be unmapped, which might not be what we want.
>
> AFAICT, I don't see such a case from all the callers of
> try_to_unmap(). I may miss something, but I do have a hard time
> thinking of a use case which can proceed safely with a "not fully
> unmapped" page.

This code change is correct, but after the change unmap_page() will fire the VM_BUG_ON
when not all mappings are unmapped. Along with the change above, we will have
two identical VM_BUG_ONs firing one after another. We might want to remove one
of them.

Also, this changes the semantics of try_to_unmap. The comment for try_to_unmap
might need to be updated.


—
Best Regards,
Yan Zi
Yang Shi April 30, 2021, 10:55 p.m. UTC | #4
On Fri, Apr 30, 2021 at 3:30 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 30 Apr 2021, at 17:56, Yang Shi wrote:
>
> > On Fri, Apr 30, 2021 at 2:30 PM Zi Yan <ziy@nvidia.com> wrote:
> >>
> >> On 30 Apr 2021, at 17:07, Yang Shi wrote:
> >>
> >>> When debugging the bug reported by Wang Yugui [1], we found that
> >>> try_to_unmap() may return a false positive for a PTE-mapped THP, since
> >>> page_mapcount() is used to check whether the THP is unmapped, but it only
> >>> checks the compound mapcount and the head page's mapcount.  If the THP is
> >>> PTE-mapped and the head page is not mapped, it may return a false positive.
> >>>
> >>> Use total_mapcount() instead of page_mapcount() and do so for the
> >>> VM_BUG_ON_PAGE in split_huge_page_to_list as well.
> >>>
> >>> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
> >>>
> >>> Signed-off-by: Yang Shi <shy828301@gmail.com>
> >>> ---
> >>>  mm/huge_memory.c | 2 +-
> >>>  mm/rmap.c        | 2 +-
> >>>  2 files changed, 2 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> >>> index 63ed6b25deaa..2122c3e853b9 100644
> >>> --- a/mm/huge_memory.c
> >>> +++ b/mm/huge_memory.c
> >>> @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
> >>>       }
> >>>
> >>>       unmap_page(head);
> >>> -     VM_BUG_ON_PAGE(compound_mapcount(head), head);
> >>> +     VM_BUG_ON_PAGE(total_mapcount(head), head);
> >>
> >> I am not sure about this change. The code below also checks total_mapcount(head)
> >> and returns -EBUSY if the count is non-zero, so this change makes that code dead.
> >
> > It is actually already dead if CONFIG_DEBUG_VM is enabled and total_mapcount
> > is not 0, regardless of this change, due to the code below, right?
> >
> > 	if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> > 		pr_alert("total_mapcount: %u, page_count(): %u\n",
> > 				mapcount, count);
> > 		if (PageTail(page))
> > 			dump_page(head, NULL);
> > 		dump_page(page, "total_mapcount(head) > 0");
> > 		BUG();
> > 	}
>
> Right. But with this change, mapcount will never be non-zero. The code above
> will be useless and can be removed.

Yes, you are correct.

>
> >> On the other hand, the change will force all mappings to the page to be
> >> successfully unmapped all the time. I am not sure if we want to do that.
> >> Maybe it is better to just check total_mapcount() and fail the split.
> >> The same situation happens with the code change below.
> >
> > IIUC, the code did force all mappings to the page to be unmapped in
> > order to split it.
> >>
> >>>
> >>>       /* block interrupt reentry in xa_lock and spinlock */
> >>>       local_irq_disable();
> >>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>> index 693a610e181d..2e547378ab5f 100644
> >>> --- a/mm/rmap.c
> >>> +++ b/mm/rmap.c
> >>> @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
> >>>       else
> >>>               rmap_walk(page, &rwc);
> >>>
> >>> -     return !page_mapcount(page) ? true : false;
> >>> +     return !total_mapcount(page) ? true : false;
> >>>  }
> >>
> >> In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all mappings
> >> to the page to be unmapped, which might not be what we want.
> >
> > AFAICT, I don't see such a case from all the callers of
> > try_to_unmap(). I may miss something, but I do have a hard time
> > thinking of a use case which can proceed safely with a "not fully
> > unmapped" page.
>
> This code change is correct, but after the change unmap_page() will fire the VM_BUG_ON
> when not all mappings are unmapped. Along with the change above, we will have
> two identical VM_BUG_ONs firing one after another. We might want to remove one
> of them.

Yes. I'd prefer to keep the one after unmap_page() since it seems more
obvious. Any objection?

>
> Also, this changes the semantics of try_to_unmap. The comment for try_to_unmap
> might need to be updated.

What comment do you refer to?

>
>
> —
> Best Regards,
> Yan Zi
Zi Yan April 30, 2021, 11:02 p.m. UTC | #5
On 30 Apr 2021, at 18:55, Yang Shi wrote:

> On Fri, Apr 30, 2021 at 3:30 PM Zi Yan <ziy@nvidia.com> wrote:
>>
>> On 30 Apr 2021, at 17:56, Yang Shi wrote:
>>
>>> On Fri, Apr 30, 2021 at 2:30 PM Zi Yan <ziy@nvidia.com> wrote:
>>>>
>>>> On 30 Apr 2021, at 17:07, Yang Shi wrote:
>>>>
>>>>> When debugging the bug reported by Wang Yugui [1], we found that
>>>>> try_to_unmap() may return a false positive for a PTE-mapped THP, since
>>>>> page_mapcount() is used to check whether the THP is unmapped, but it only
>>>>> checks the compound mapcount and the head page's mapcount.  If the THP is
>>>>> PTE-mapped and the head page is not mapped, it may return a false positive.
>>>>>
>>>>> Use total_mapcount() instead of page_mapcount() and do so for the
>>>>> VM_BUG_ON_PAGE in split_huge_page_to_list as well.
>>>>>
>>>>> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@e16-tech.com/
>>>>>
>>>>> Signed-off-by: Yang Shi <shy828301@gmail.com>
>>>>> ---
>>>>>  mm/huge_memory.c | 2 +-
>>>>>  mm/rmap.c        | 2 +-
>>>>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 63ed6b25deaa..2122c3e853b9 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -2718,7 +2718,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>>>>>       }
>>>>>
>>>>>       unmap_page(head);
>>>>> -     VM_BUG_ON_PAGE(compound_mapcount(head), head);
>>>>> +     VM_BUG_ON_PAGE(total_mapcount(head), head);
>>>>
>>>> I am not sure about this change. The code below also checks total_mapcount(head)
>>>> and returns -EBUSY if the count is non-zero, so this change makes that code dead.
>>>
>>> It is actually already dead if CONFIG_DEBUG_VM is enabled and total_mapcount
>>> is not 0, regardless of this change, due to the code below, right?
>>>
>>> 	if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
>>> 		pr_alert("total_mapcount: %u, page_count(): %u\n",
>>> 				mapcount, count);
>>> 		if (PageTail(page))
>>> 			dump_page(head, NULL);
>>> 		dump_page(page, "total_mapcount(head) > 0");
>>> 		BUG();
>>> 	}
>>
>> Right. But with this change, mapcount will never be non-zero. The code above
>> will be useless and can be removed.
>
> Yes, you are correct.
>
>>
>>>> On the other hand, the change will force all mappings to the page to be
>>>> successfully unmapped all the time. I am not sure if we want to do that.
>>>> Maybe it is better to just check total_mapcount() and fail the split.
>>>> The same situation happens with the code change below.
>>>
>>> IIUC, the code did force all mappings to the page to be unmapped in
>>> order to split it.
>>>>
>>>>>
>>>>>       /* block interrupt reentry in xa_lock and spinlock */
>>>>>       local_irq_disable();
>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>>>> index 693a610e181d..2e547378ab5f 100644
>>>>> --- a/mm/rmap.c
>>>>> +++ b/mm/rmap.c
>>>>> @@ -1777,7 +1777,7 @@ bool try_to_unmap(struct page *page, enum ttu_flags flags)
>>>>>       else
>>>>>               rmap_walk(page, &rwc);
>>>>>
>>>>> -     return !page_mapcount(page) ? true : false;
>>>>> +     return !total_mapcount(page) ? true : false;
>>>>>  }
>>>>
>>>> In unmap_page(), VM_BUG_ON_PAGE(!unmap_success, page) will force all mappings
>>>> to the page to be unmapped, which might not be what we want.
>>>
>>> AFAICT, I don't see such a case from all the callers of
>>> try_to_unmap(). I may miss something, but I do have a hard time
>>> thinking of a use case which can proceed safely with a "not fully
>>> unmapped" page.
>>
>> This code change is correct, but after the change unmap_page() will fire the VM_BUG_ON
>> when not all mappings are unmapped. Along with the change above, we will have
>> two identical VM_BUG_ONs firing one after another. We might want to remove one
>> of them.
>
> Yes. I'd prefer to keep the one after unmap_page() since it seems more
> obvious. Any objection?

Sounds good to me.

>
>>
>> Also, this changes the semantics of try_to_unmap. The comment for try_to_unmap
>> might need to be updated.
>
> What comment do you refer to?

/**
 * try_to_unmap - try to remove all page table mappings to a page

a page -> a page and the compound page it belongs to

 * @page: the page to get unmapped

the page -> the page or the subpage of a compound page

 * @flags: action and flags
 *
 * Tries to remove all the page table entries which are mapping this
 * page, used in the pageout path.  Caller must hold the page lock.

this page -> this page and the compound page it belongs to

Feel free to change the wording if you find better ones.


—
Best Regards,
Yan Zi

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 63ed6b25deaa..2122c3e853b9 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2718,7 +2718,7 @@  int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	unmap_page(head);
-	VM_BUG_ON_PAGE(compound_mapcount(head), head);
+	VM_BUG_ON_PAGE(total_mapcount(head), head);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
diff --git a/mm/rmap.c b/mm/rmap.c
index 693a610e181d..2e547378ab5f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1777,7 +1777,7 @@  bool try_to_unmap(struct page *page, enum ttu_flags flags)
 	else
 		rmap_walk(page, &rwc);
 
-	return !page_mapcount(page) ? true : false;
+	return !total_mapcount(page) ? true : false;
 }
 
 /**