mm/page_vma_mapped.c: Exactly compare hugetlbfs page's pfn in pfn_in_hpage()

Message ID 1578552280-5703-1-git-send-email-lixinhai.lxh@gmail.com
State New
Series
  • mm/page_vma_mapped.c: Exactly compare hugetlbfs page's pfn in pfn_in_hpage()

Commit Message

Li Xinhai Jan. 9, 2020, 6:44 a.m. UTC
check_pte() is called for hugetlbfs pages, and the pfn comparison is done
in pfn_in_hpage(), where the pfn is checked against the range
[hpage_pfn, hpage_pfn + HPAGE_PMD_NR). Change it to match the hugetlbfs
page's pfn exactly, to avoid hiding any potential problems.

Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/page_vma_mapped.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Kirill A. Shutemov Jan. 9, 2020, 12:30 p.m. UTC | #1
On Thu, Jan 09, 2020 at 06:44:40AM +0000, Li Xinhai wrote:
> check_pte() is called for hugetlbfs pages, and the pfn comparison is done
> in pfn_in_hpage(), where the pfn is checked against the range
> [hpage_pfn, hpage_pfn + HPAGE_PMD_NR). Change it to match the hugetlbfs
> page's pfn exactly, to avoid hiding any potential problems.

Hm. What potential problems do you talk about?

I understand that for hugetlb pages the pfn always has to equal
hpage_pfn, but returning false for a pfn in the range of the hugetlb
page just because it's not the head is not helpful.

I would rather have

	VM_BUG_ON_PAGE(PageHuge(hpage) && pfn != hpage_pfn, hpage);

there.
> 
> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  mm/page_vma_mapped.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index eff4b45..434978b 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -55,6 +55,8 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
>  static inline bool pfn_in_hpage(struct page *hpage, unsigned long pfn)
>  {
>  	unsigned long hpage_pfn = page_to_pfn(hpage);
> +	if (unlikely(PageHuge(hpage)))
> +		return pfn == hpage_pfn;
>  
>  	/* THP can be referenced by any subpage */
>  	return pfn >= hpage_pfn && pfn - hpage_pfn < hpage_nr_pages(hpage);
> -- 
> 1.8.3.1
> 
>
Li Xinhai Jan. 9, 2020, 1:55 p.m. UTC | #2
On 2020-01-09 at 20:30 Kirill A. Shutemov wrote:
>On Thu, Jan 09, 2020 at 06:44:40AM +0000, Li Xinhai wrote:
>> check_pte() is called for hugetlbfs pages, and the pfn comparison is done
>> in pfn_in_hpage(), where the pfn is checked against the range
>> [hpage_pfn, hpage_pfn + HPAGE_PMD_NR). Change it to match the hugetlbfs
>> page's pfn exactly, to avoid hiding any potential problems.
>
>Hm. What potential problems do you talk about?
>
>I understand that for hugetlb pages the pfn always has to equal
>hpage_pfn, but returning false for a pfn in the range of the hugetlb
>page just because it's not the head is not helpful.
>
>I would rather have
>
>	VM_BUG_ON_PAGE(PageHuge(hpage) && pfn != hpage_pfn, hpage);
>
>there. 
Yes, reporting a BUG there is better, since we have already caught the problem.

>>
>> Signed-off-by: Li Xinhai <lixinhai.lxh@gmail.com>
>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>> ---
>>  mm/page_vma_mapped.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
>> index eff4b45..434978b 100644
>> --- a/mm/page_vma_mapped.c
>> +++ b/mm/page_vma_mapped.c
>> @@ -55,6 +55,8 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
>>  static inline bool pfn_in_hpage(struct page *hpage, unsigned long pfn)
>>  {
>>  	unsigned long hpage_pfn = page_to_pfn(hpage);
>> +	if (unlikely(PageHuge(hpage)))
>> +		return pfn == hpage_pfn;
>>  
>>  	/* THP can be referenced by any subpage */
>>  	return pfn >= hpage_pfn && pfn - hpage_pfn < hpage_nr_pages(hpage);
>> --
>> 1.8.3.1
>>
>>
>
>--
> Kirill A. Shutemov

Patch

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index eff4b45..434978b 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -55,6 +55,8 @@  static bool map_pte(struct page_vma_mapped_walk *pvmw)
 static inline bool pfn_in_hpage(struct page *hpage, unsigned long pfn)
 {
 	unsigned long hpage_pfn = page_to_pfn(hpage);
+	if (unlikely(PageHuge(hpage)))
+		return pfn == hpage_pfn;
 
 	/* THP can be referenced by any subpage */
 	return pfn >= hpage_pfn && pfn - hpage_pfn < hpage_nr_pages(hpage);