
[v2,6/8] mm/huge_memory: remove stale NUMA hinting comment from follow_trans_huge_pmd()

Message ID 20230801124844.278698-7-david@redhat.com (mailing list archive)
State New, archived
Series smaps / mm/gup: fix gup_can_follow_protnone fallout

Commit Message

David Hildenbrand Aug. 1, 2023, 12:48 p.m. UTC
That comment for pmd_protnone() was added in commit 2b4847e73004
("mm: numa: serialise parallel get_user_page against THP migration"), which
noted:

	THP does not unmap pages due to a lack of support for migration
	entries at a PMD level.  This allows races with get_user_pages

Nowadays, we do have PMD migration entries, so the comment no longer
applies. Let's drop it.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/huge_memory.c | 1 -
 1 file changed, 1 deletion(-)
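
Context (not part of the patch): with PMD migration entries in place, GUP
serialises against THP migration by waiting on the migration entry rather
than relying on NUMA hinting faults. A rough, from-memory sketch of how
follow_pmd_mask() in mm/gup.c handles a non-present (migration-entry) PMD;
the exact upstream code differs in detail:

	pmd_t pmdval = READ_ONCE(*pmd);

	if (!pmd_present(pmdval)) {
		/*
		 * A non-present trans-huge PMD is expected to be a migration
		 * entry: wait for the migration to finish and make the caller
		 * retry the page-table walk.
		 */
		if (thp_migration_supported() && is_pmd_migration_entry(pmdval))
			pmd_migration_entry_wait(mm, pmd);
		return no_page_table(vma, flags);
	}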

Comments

Peter Xu Aug. 1, 2023, 4:07 p.m. UTC | #1
On Tue, Aug 01, 2023 at 02:48:42PM +0200, David Hildenbrand wrote:
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2cd3e5502180..0b709d2c46c6 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1467,7 +1467,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
>  	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
>  		return ERR_PTR(-EFAULT);
>  
> -	/* Full NUMA hinting faults to serialise migration in fault paths */
>  	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
>  		return NULL;

Perhaps squashing into patch 1?  Thanks,
David Hildenbrand Aug. 1, 2023, 4:16 p.m. UTC | #2
On 01.08.23 18:07, Peter Xu wrote:
> On Tue, Aug 01, 2023 at 02:48:42PM +0200, David Hildenbrand wrote:
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 2cd3e5502180..0b709d2c46c6 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1467,7 +1467,6 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
>>   	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
>>   		return ERR_PTR(-EFAULT);
>>   
>> -	/* Full NUMA hinting faults to serialise migration in fault paths */
>>   	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
>>   		return NULL;
> 
> Perhaps squashing into patch 1?  Thanks,

I decided against it so I don't have to make the patch description of patch 
#1 even longer with something that's mostly unrelated to the core change.
Mel Gorman Aug. 2, 2023, 3:34 p.m. UTC | #3
On Tue, Aug 01, 2023 at 02:48:42PM +0200, David Hildenbrand wrote:
> That comment for pmd_protnone() was added in commit 2b4847e73004
> ("mm: numa: serialise parallel get_user_page against THP migration"), which
> noted:
> 
> 	THP does not unmap pages due to a lack of support for migration
> 	entries at a PMD level.  This allows races with get_user_pages
> 
> Nowadays, we do have PMD migration entries, so the comment no longer
> applies. Let's drop it.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

Patch

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2cd3e5502180..0b709d2c46c6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1467,7 +1467,6 @@  struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
 		return ERR_PTR(-EFAULT);
 
-	/* Full NUMA hinting faults to serialise migration in fault paths */
 	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
 		return NULL;
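
Note: the pmd_protnone() check itself stays. A hedged reconstruction of what
gup_can_follow_protnone() roughly looks like with the vma parameter and the
FOLL_HONOR_NUMA_FAULT flag introduced earlier in this series (written from
memory under that assumption, not a verbatim copy of the upstream header):

static inline bool gup_can_follow_protnone(struct vm_area_struct *vma,
					   unsigned int flags)
{
	/* Callers not honoring NUMA hinting faults may follow protnone entries. */
	if (!(flags & FOLL_HONOR_NUMA_FAULT))
		return true;

	/*
	 * NUMA hinting faults only apply to accessible VMAs; protnone in an
	 * inaccessible (PROT_NONE) VMA is not a hinting fault, so it may be
	 * followed (e.g. with FOLL_FORCE).
	 */
	return !vma_is_accessible(vma);
}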