
[1/2] userfaultfd: remove one unnecessary warn_on in __mcopy_atomic_hugetlb

Message ID 20190925121833.2766-1-richardw.yang@linux.intel.com (mailing list archive)
State New, archived
Series [1/2] userfaultfd: remove one unnecessary warn_on in __mcopy_atomic_hugetlb

Commit Message

Wei Yang Sept. 25, 2019, 12:18 p.m. UTC
The warning here is to make sure the address (dst_addr) and length (len -
copied) are huge page size aligned.

This is already ensured by the following:

    dst_start and len are huge page size aligned
    dst_addr starts at dst_start and increases by the huge page size each iteration
    copied increases by the huge page size each iteration

This means the warning will never be triggered.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
---
 mm/userfaultfd.c | 4 ----
 1 file changed, 4 deletions(-)
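
The alignment argument in the commit message can be checked with a short
userspace sketch (illustrative only, with made-up sizes and addresses; not the
kernel code): once dst_start and len are huge page size aligned, dst_addr and
(len - copied) stay aligned on every iteration, so the condition of the removed
WARN_ON can never become true.

#include <assert.h>
#include <stdio.h>

#define HPAGE_SZ	(2UL * 1024 * 1024)	/* example 2MB huge page */

int main(void)
{
	unsigned long dst_start = 16 * HPAGE_SZ;	/* aligned, as checked on entry */
	unsigned long len = 8 * HPAGE_SZ;		/* aligned, as checked on entry */
	unsigned long dst_addr = dst_start;
	unsigned long copied = 0;

	while (copied < len) {
		/* the condition of the removed WARN_ON */
		assert(!(dst_addr & (HPAGE_SZ - 1)));
		assert(!((len - copied) & (HPAGE_SZ - 1)));

		/* ... one huge page worth of copying happens here ... */

		dst_addr += HPAGE_SZ;
		copied += HPAGE_SZ;
	}
	printf("invariant held for %lu iterations\n", len / HPAGE_SZ);
	return 0;
}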

Comments

Mike Kravetz Sept. 25, 2019, 5:44 p.m. UTC | #1
On 9/25/19 5:18 AM, Wei Yang wrote:
> The warning here is to make sure address(dst_addr) and length(len -
> copied) are huge page size aligned.
> 
> While this is ensured by:
> 
>     dst_start and len is huge page size aligned
>     dst_addr equals to dst_start and increase huge page size each time
>     copied increase huge page size each time

Can we also remove the following for the same reasons?

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 640ff2bd9a69..f82d5ec698d8 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		pte_t dst_pteval;
 
 		BUG_ON(dst_addr >= dst_start + len);
-		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
 
 		/*
 		 * Serialize via hugetlb_fault_mutex
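
For reference, assuming huge_page_mask(h) has its usual meaning of
~(huge_page_size(h) - 1), the VM_BUG_ON above tests exactly the alignment that
the removed WARN_ON tested with (vma_hpagesize - 1). A minimal userspace sketch
of that identity, using a made-up 2MB size rather than the real hstate:

#include <assert.h>

int main(void)
{
	unsigned long huge_page_size = 2UL * 1024 * 1024;	/* example size */
	unsigned long huge_page_mask = ~(huge_page_size - 1);	/* assumed definition */
	unsigned long samples[] = { 0, 4096, huge_page_size, 3 * huge_page_size + 8192 };

	for (int i = 0; i < 4; i++) {
		unsigned long addr = samples[i];

		/* both forms flag exactly the same misaligned addresses */
		assert((addr & ~huge_page_mask) == (addr & (huge_page_size - 1)));
	}
	return 0;
}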
Wei Yang Sept. 26, 2019, 12:35 a.m. UTC | #2
On Wed, Sep 25, 2019 at 10:44:58AM -0700, Mike Kravetz wrote:
>On 9/25/19 5:18 AM, Wei Yang wrote:
>> The warning here is to make sure address(dst_addr) and length(len -
>> copied) are huge page size aligned.
>> 
>> While this is ensured by:
>> 
>>     dst_start and len is huge page size aligned
>>     dst_addr equals to dst_start and increase huge page size each time
>>     copied increase huge page size each time
>
>Can we also remove the following for the same reasons?
>
>diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>index 640ff2bd9a69..f82d5ec698d8 100644
>--- a/mm/userfaultfd.c
>+++ b/mm/userfaultfd.c
>@@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> 		pte_t dst_pteval;
> 
> 		BUG_ON(dst_addr >= dst_start + len);
>-		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
> 

Thanks for your comment.

It looks good, though I am not sure about the relationship between
vma_hpagesize and huge_page_mask().

If they are the same, why not use the same interface for all those checks in
this function?

> 		/*
> 		 * Serialize via hugetlb_fault_mutex
>
>-- 
>Mike Kravetz
>
>> 
>> This means this warning will never be triggered.
>> 
>> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
>> ---
>>  mm/userfaultfd.c | 4 ----
>>  1 file changed, 4 deletions(-)
>> 
>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>> index c7ae74ce5ff3..7895c715000e 100644
>> --- a/mm/userfaultfd.c
>> +++ b/mm/userfaultfd.c
>> @@ -243,10 +243,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>  		vm_shared = dst_vma->vm_flags & VM_SHARED;
>>  	}
>>  
>> -	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
>> -		    (len - copied) & (vma_hpagesize - 1)))
>> -		goto out_unlock;
>> -
>>  	/*
>>  	 * If not shared, ensure the dst_vma has a anon_vma.
>>  	 */
>>
Mike Kravetz Sept. 26, 2019, 2:10 a.m. UTC | #3
On 9/25/19 5:35 PM, Wei Yang wrote:
> On Wed, Sep 25, 2019 at 10:44:58AM -0700, Mike Kravetz wrote:
>> On 9/25/19 5:18 AM, Wei Yang wrote:
>>> The warning here is to make sure address(dst_addr) and length(len -
>>> copied) are huge page size aligned.
>>>
>>> While this is ensured by:
>>>
>>>     dst_start and len is huge page size aligned
>>>     dst_addr equals to dst_start and increase huge page size each time
>>>     copied increase huge page size each time
>>
>> Can we also remove the following for the same reasons?
>>
>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>> index 640ff2bd9a69..f82d5ec698d8 100644
>> --- a/mm/userfaultfd.c
>> +++ b/mm/userfaultfd.c
>> @@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>> 		pte_t dst_pteval;
>>
>> 		BUG_ON(dst_addr >= dst_start + len);
>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>
> 
> Thanks for your comment.
> 
> It looks good, while I lack some knowledge between vma_hpagesize and
> huge_page_mask().

vma_hpagesize is just a local variable used so that repeated calls to
vma_kernel_pagesize() or huge_page_size() are not necessary.

> If they are the same, why not use the same interface for all those checks in
> this function?

If we remove the VM_BUG_ON, that is the only use of huge_page_mask() in
the function.

We can also eliminate a call to huge_page_size() by making this change.

@@ -273,7 +272,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
 
 		err = -ENOMEM;
-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
+		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
 		if (!dst_pte) {
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 			goto out_unlock;
Wei Yang Sept. 26, 2019, 2:56 a.m. UTC | #4
On Wed, Sep 25, 2019 at 07:10:46PM -0700, Mike Kravetz wrote:
>On 9/25/19 5:35 PM, Wei Yang wrote:
>> On Wed, Sep 25, 2019 at 10:44:58AM -0700, Mike Kravetz wrote:
>>> On 9/25/19 5:18 AM, Wei Yang wrote:
>>>> The warning here is to make sure address(dst_addr) and length(len -
>>>> copied) are huge page size aligned.
>>>>
>>>> While this is ensured by:
>>>>
>>>>     dst_start and len is huge page size aligned
>>>>     dst_addr equals to dst_start and increase huge page size each time
>>>>     copied increase huge page size each time
>>>
>>> Can we also remove the following for the same reasons?
>>>
>>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>>> index 640ff2bd9a69..f82d5ec698d8 100644
>>> --- a/mm/userfaultfd.c
>>> +++ b/mm/userfaultfd.c
>>> @@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>> 		pte_t dst_pteval;
>>>
>>> 		BUG_ON(dst_addr >= dst_start + len);
>>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>>
>> 
>> Thanks for your comment.
>> 
>> It looks good, while I lack some knowledge between vma_hpagesize and
>> huge_page_mask().
>
>vma_hpagesize is just a local variable used so that repeated calls to
>vma_kernel_pagesize() or huge_page_size() are not necessary.
>

Thanks for your confirmation. If this is the case, we can remove this
VM_BUG_ON safely.

>> If they are the same, why not use the same interface for all those checks in
>> this function?
>
>If we remove the VM_BUG_ON, that is the only use of huge_page_mask() in
>the function.
>
>We can can also eliminate a call to huge_page_size() by making this change.
>
>@@ -273,7 +272,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
> 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
> 
> 		err = -ENOMEM;
>-		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>+		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
> 		if (!dst_pte) {
> 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> 			goto out_unlock;

Agreed, and along with this I think we can also make the change below:

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index c153344774c7..74363f0a0dd0 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -315,7 +315,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 
                        err = copy_huge_page_from_user(page,
                                                (const void __user *)src_addr,
-                                               pages_per_huge_page(h), true);
+                                               vma_hpagesize / PAGE_SIZE, true);
                        if (unlikely(err)) {
                                err = -EFAULT;
                                goto out;
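
This substitution assumes pages_per_huge_page(h) is huge_page_size(h) /
PAGE_SIZE, i.e. the same page order shifted into both values; a quick
userspace check of that arithmetic with example orders (not kernel code):

#include <assert.h>
#include <stdio.h>

#define PAGE_SIZE	4096UL

int main(void)
{
	unsigned long orders[] = { 9, 18 };	/* 2MB and 1GB huge pages with 4KB base pages */

	for (int i = 0; i < 2; i++) {
		unsigned long order = orders[i];
		unsigned long huge_page_size = PAGE_SIZE << order;	/* like huge_page_size(h) */
		unsigned long pages_per_huge_page = 1UL << order;	/* like pages_per_huge_page(h) */

		/* so vma_hpagesize / PAGE_SIZE passes the same count */
		assert(pages_per_huge_page == huge_page_size / PAGE_SIZE);
		printf("%lu bytes -> %lu base pages\n", huge_page_size, pages_per_huge_page);
	}
	return 0;
}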

After these cleanups, we use vma_hpagesize for all page size related
calculations in this function, which looks more consistent to me.

Does it look good to you?

>-- 
>Mike Kravetz
Mike Kravetz Sept. 26, 2019, 5:20 p.m. UTC | #5
On 9/25/19 7:56 PM, Wei Yang wrote:
> On Wed, Sep 25, 2019 at 07:10:46PM -0700, Mike Kravetz wrote:
>> On 9/25/19 5:35 PM, Wei Yang wrote:
>>> On Wed, Sep 25, 2019 at 10:44:58AM -0700, Mike Kravetz wrote:
>>>> On 9/25/19 5:18 AM, Wei Yang wrote:
>>>>> The warning here is to make sure address(dst_addr) and length(len -
>>>>> copied) are huge page size aligned.
>>>>>
>>>>> While this is ensured by:
>>>>>
>>>>>     dst_start and len is huge page size aligned
>>>>>     dst_addr equals to dst_start and increase huge page size each time
>>>>>     copied increase huge page size each time
>>>>
>>>> Can we also remove the following for the same reasons?
>>>>
>>>> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
>>>> index 640ff2bd9a69..f82d5ec698d8 100644
>>>> --- a/mm/userfaultfd.c
>>>> +++ b/mm/userfaultfd.c
>>>> @@ -262,7 +262,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>>>> 		pte_t dst_pteval;
>>>>
>>>> 		BUG_ON(dst_addr >= dst_start + len);
>>>> -		VM_BUG_ON(dst_addr & ~huge_page_mask(h));
>>>>
>>>
>>> Thanks for your comment.
>>>
>>> It looks good, while I lack some knowledge between vma_hpagesize and
>>> huge_page_mask().
>>
>> vma_hpagesize is just a local variable used so that repeated calls to
>> vma_kernel_pagesize() or huge_page_size() are not necessary.
>>
> 
> Thanks for your confirmation. If this is the case, we can remove this BUG_ON
> safely.
> 
>>> If they are the same, why not use the same interface for all those checks in
>>> this function?
>>
>> If we remove the VM_BUG_ON, that is the only use of huge_page_mask() in
>> the function.
>>
>> We can can also eliminate a call to huge_page_size() by making this change.
>>
>> @@ -273,7 +272,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>> 		mutex_lock(&hugetlb_fault_mutex_table[hash]);
>>
>> 		err = -ENOMEM;
>> -		dst_pte = huge_pte_alloc(dst_mm, dst_addr, huge_page_size(h));
>> +		dst_pte = huge_pte_alloc(dst_mm, dst_addr, vma_hpagesize);
>> 		if (!dst_pte) {
>> 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>> 			goto out_unlock;
> 
> Agree, and also with this I think
> 
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index c153344774c7..74363f0a0dd0 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -315,7 +315,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  
>                         err = copy_huge_page_from_user(page,
>                                                 (const void __user *)src_addr,
> -                                               pages_per_huge_page(h), true);
> +                                               vma_hpagesize / PAGE_SIZE, true);
>                         if (unlikely(err)) {
>                                 err = -EFAULT;
>                                 goto out;
> 
> After these cleanup, we use vma_pagesize to deal with all page size related
> calculation in this function, which looks more consistent to me.
> 
> Does it looks good to you?

Yes, that looks good.

Thanks for cleaning up this code.

Patch

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index c7ae74ce5ff3..7895c715000e 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -243,10 +243,6 @@  static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		vm_shared = dst_vma->vm_flags & VM_SHARED;
 	}
 
-	if (WARN_ON(dst_addr & (vma_hpagesize - 1) ||
-		    (len - copied) & (vma_hpagesize - 1)))
-		goto out_unlock;
-
 	/*
 	 * If not shared, ensure the dst_vma has a anon_vma.
 	 */