
mm: remove a redundant check in do_munmap()

Message ID 20181010125327.68803-1-richard.weiyang@gmail.com (mailing list archive)
State New, archived
Series: mm: remove a redundant check in do_munmap()

Commit Message

Wei Yang Oct. 10, 2018, 12:53 p.m. UTC
A non-NULL vma returned from find_vma() implies:

   vma->vm_start <= start

Since len != 0, the following condition always holds:

   vma->vm_start < start + len = end

This means the if check would never be true.

This patch removes this redundant check and fixes two typos in comments.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 mm/mmap.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

Comments

Kirill A . Shutemov Oct. 10, 2018, 2:13 p.m. UTC | #1
On Wed, Oct 10, 2018 at 08:53:27PM +0800, Wei Yang wrote:
> A non-NULL vma returned from find_vma() implies:
> 
>    vma->vm_start <= start
> Since len != 0, the following condition always hods:

s/hods/holds/

>    vma->vm_start < start + len = end
> 
> This means the if check would never be true.

Have you considered overflow?

> This patch removes this redundant check and fix two typo in comment.
> 
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
>  mm/mmap.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 8d6449e74431..94660ddfa2c1 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -414,7 +414,7 @@ static void vma_gap_update(struct vm_area_struct *vma)
>  {
>  	/*
>  	 * As it turns out, RB_DECLARE_CALLBACKS() already created a callback
> -	 * function that does exacltly what we want.
> +	 * function that does exactly what we want.
>  	 */
>  	vma_gap_callbacks_propagate(&vma->vm_rb, NULL);
>  }
> @@ -1621,7 +1621,7 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
>  #endif /* __ARCH_WANT_SYS_OLD_MMAP */
>  
>  /*
> - * Some shared mappigns will want the pages marked read-only
> + * Some shared mappings will want the pages marked read-only
>   * to track write events. If so, we'll downgrade vm_page_prot
>   * to the private version (using protection_map[] without the
>   * VM_SHARED bit).
> @@ -2705,12 +2705,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>  	if (!vma)
>  		return 0;
>  	prev = vma->vm_prev;
> -	/* we have  start < vma->vm_end  */
> -
> -	/* if it doesn't overlap, we have nothing.. */
> +	/* we have vma->vm_start <= start < vma->vm_end */
>  	end = start + len;
> -	if (vma->vm_start >= end)
> -		return 0;
>  
>  	/*
>  	 * If we need to split any vma, do it now to save pain later.
> -- 
> 2.15.1
>
Matthew Wilcox Oct. 10, 2018, 2:13 p.m. UTC | #2
On Wed, Oct 10, 2018 at 08:53:27PM +0800, Wei Yang wrote:
> A non-NULL vma returned from find_vma() implies:
> 
>    vma->vm_start <= start
> 
> Since len != 0, the following condition always hods:
> 
>    vma->vm_start < start + len = end
> 
> This means the if check would never be true.

This is true because earlier in the function, start + len is checked to
be sure that it does not wrap.
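
To make that concrete, here is a minimal, self-contained illustration (hypothetical
values, plain userspace C rather than kernel code) of what the earlier check rules out:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical addresses near the top of a 64-bit address space */
		unsigned long start = 0xfffffffffffff000UL;
		unsigned long len   = 0x2000UL;
		unsigned long end   = start + len;	/* wraps around to 0x1000 */

		/* If end wrapped below start, "vma->vm_start >= end" could be
		 * true even for a vma overlapping the requested range; the
		 * early rejection of len > TASK_SIZE - start prevents this. */
		printf("start=%#lx end=%#lx wrapped=%d\n", start, end, end < start);
		return 0;
	}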

> This patch removes this redundant check and fix two typo in comment.

> @@ -2705,12 +2705,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
> -	/* we have  start < vma->vm_end  */
> -
> -	/* if it doesn't overlap, we have nothing.. */
> +	/* we have vma->vm_start <= start < vma->vm_end */
>  	end = start + len;
> -	if (vma->vm_start >= end)
> -		return 0;

I agree that it's not currently a useful check, but it's also not going
to have much effect on anything to delete it.  I think there are probably
more worthwhile places to look for inefficiencies.
Wei Yang Oct. 10, 2018, 3:19 p.m. UTC | #3
On Wed, Oct 10, 2018 at 05:13:20PM +0300, Kirill A. Shutemov wrote:
>On Wed, Oct 10, 2018 at 08:53:27PM +0800, Wei Yang wrote:
>> A non-NULL vma returned from find_vma() implies:
>> 
>>    vma->vm_start <= start
>> Since len != 0, the following condition always hods:
>
>s/hods/holds/
>
>>    vma->vm_start < start + len = end
>> 
>> This means the if check would never be true.
>
>Have you considered overflow?
>

Thanks for your comment.

At the beginning of this function, we make sure len <= TASK_SIZE - start, so
start + len cannot overflow.
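
For reference, the entry checks at the top of do_munmap() look roughly like this
(a paraphrased sketch, not a verbatim quote of the patched tree):

	if ((offset_in_page(start)) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return -EINVAL;

	len = PAGE_ALIGN(len);
	if (len == 0)
		return -EINVAL;

	/* From here on, len != 0 and len <= TASK_SIZE - start, so the
	 * later end = start + len cannot wrap past TASK_SIZE. */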
Wei Yang Oct. 10, 2018, 3:23 p.m. UTC | #4
On Wed, Oct 10, 2018 at 07:13:55AM -0700, Matthew Wilcox wrote:
>On Wed, Oct 10, 2018 at 08:53:27PM +0800, Wei Yang wrote:
>> A non-NULL vma returned from find_vma() implies:
>> 
>>    vma->vm_start <= start
>> 
>> Since len != 0, the following condition always hods:
>> 
>>    vma->vm_start < start + len = end
>> 
>> This means the if check would never be true.
>
>This is true because earlier in the function, start + len is checked to
>be sure that it does not wrap.
>
>> This patch removes this redundant check and fix two typo in comment.
>
>> @@ -2705,12 +2705,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> -	/* we have  start < vma->vm_end  */
>> -
>> -	/* if it doesn't overlap, we have nothing.. */
>> +	/* we have vma->vm_start <= start < vma->vm_end */
>>  	end = start + len;
>> -	if (vma->vm_start >= end)
>> -		return 0;
>
>I agree that it's not currently a useful check, but it's also not going
>to have much effect on anything to delete it.  I think there are probably
>more worthwhile places to look for inefficiencies.

Thanks for your comment.

Agreed, this will not have an impact on performance.

The intention here is to make the code clearer, since the current form is a
little misleading. In particular, the comment just before this *if* clause may
lead readers to think a non-overlapping region is actually possible here.
Wei Yang Oct. 16, 2018, 6:24 a.m. UTC | #5
Well, this change is not correct.

On Wed, Oct 10, 2018 at 07:13:55AM -0700, Matthew Wilcox wrote:
>On Wed, Oct 10, 2018 at 08:53:27PM +0800, Wei Yang wrote:
>> A non-NULL vma returned from find_vma() implies:
>> 
>>    vma->vm_start <= start
>> 

This was my misunderstanding of find_vma(): a non-NULL return value from
find_vma() doesn't imply vma->vm_start <= start. It only implies
addr < vma->vm_end.

This means the original check between vm_start and end is necessary.
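
A small, self-contained sketch (hypothetical addresses, not a real trace) of the
gap case that the check catches:

	#include <stdio.h>

	int main(void)
	{
		/* requested range [0x1000, 0x2000) sits in a hole below the vma */
		unsigned long start = 0x1000, len = 0x1000;
		unsigned long end = start + len;		/* 0x2000 */
		unsigned long vm_start = 0x4000, vm_end = 0x5000;

		/* find_vma(mm, start) only guarantees start < vm_end ... */
		printf("start < vm_end:  %d\n", start < vm_end);	/* 1 */
		/* ... so vm_start >= end is still possible: no overlap, and
		 * the removed "if (vma->vm_start >= end) return 0;" is what
		 * handles it. */
		printf("vm_start >= end: %d\n", vm_start >= end);	/* 1 */
		return 0;
	}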

Thanks to Rong for the testing.

>> Since len != 0, the following condition always hods:
>> 
>>    vma->vm_start < start + len = end
>> 
>> This means the if check would never be true.
>
>This is true because earlier in the function, start + len is checked to
>be sure that it does not wrap.
>
>> This patch removes this redundant check and fix two typo in comment.
>
>> @@ -2705,12 +2705,8 @@ int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
>> -	/* we have  start < vma->vm_end  */
>> -
>> -	/* if it doesn't overlap, we have nothing.. */
>> +	/* we have vma->vm_start <= start < vma->vm_end */
>>  	end = start + len;
>> -	if (vma->vm_start >= end)
>> -		return 0;
>
>I agree that it's not currently a useful check, but it's also not going
>to have much effect on anything to delete it.  I think there are probably
>more worthwhile places to look for inefficiencies.

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index 8d6449e74431..94660ddfa2c1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -414,7 +414,7 @@  static void vma_gap_update(struct vm_area_struct *vma)
 {
 	/*
 	 * As it turns out, RB_DECLARE_CALLBACKS() already created a callback
-	 * function that does exacltly what we want.
+	 * function that does exactly what we want.
 	 */
 	vma_gap_callbacks_propagate(&vma->vm_rb, NULL);
 }
@@ -1621,7 +1621,7 @@  SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
 #endif /* __ARCH_WANT_SYS_OLD_MMAP */
 
 /*
- * Some shared mappigns will want the pages marked read-only
+ * Some shared mappings will want the pages marked read-only
  * to track write events. If so, we'll downgrade vm_page_prot
  * to the private version (using protection_map[] without the
  * VM_SHARED bit).
@@ -2705,12 +2705,8 @@  int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	if (!vma)
 		return 0;
 	prev = vma->vm_prev;
-	/* we have  start < vma->vm_end  */
-
-	/* if it doesn't overlap, we have nothing.. */
+	/* we have vma->vm_start <= start < vma->vm_end */
 	end = start + len;
-	if (vma->vm_start >= end)
-		return 0;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.