[v2,3/4] mm/mmap: Change detached vma locking scheme

Message ID 20230714195551.894800-4-Liam.Howlett@oracle.com (mailing list archive)
State New
Series More strict maple tree lockdep

Commit Message

Liam R. Howlett July 14, 2023, 7:55 p.m. UTC
Don't set the detached tree's external lock to the mmap_lock, so that the
detached VMA tree does not complain about being unlocked when the
mmap_lock is dropped prior to freeing the tree.

Move the destruction of the detached tree outside the mmap_lock
altogether.

Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 mm/mmap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
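
For context, the resulting setup and teardown of the detached tree in
do_vmi_align_munmap() after this patch looks roughly like the sketch
below.  This is an illustrative simplification distilled from the two
hunks that follow (the detach loop and error paths are elided), not
additional patch content:

	/* Set up the on-stack tree that will hold the detached VMAs. */
	MA_STATE(mas_detach, &mt_detach, 0, 0);
	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
	mt_detach.ma_external_lock = NULL;	/* not tied to the mmap_lock */

	/* ... split and move the affected VMAs into mt_detach ... */

	/* Statistics and freeing VMAs */
	mas_set(&mas_detach, start);
	remove_mt(mm, &mas_detach);
	validate_mm(mm);
	if (unlock)
		mmap_read_unlock(mm);

	/*
	 * Destroy the detached tree only after the mmap_lock may have been
	 * dropped; since no external lock is registered, the tree no longer
	 * complains about the mmap_lock not being held here.
	 */
	__mt_destroy(&mt_detach);
	return 0;

Because mt_detach is local to do_vmi_align_munmap() and only reachable from
this function, tearing it down after the unlock does not introduce any new
concurrency on the tree itself.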

Comments

Liam R. Howlett July 19, 2023, 6:31 p.m. UTC | #1
Hello Andrew,

Please replace v2 with the attached v3 of this patch to address the
issue with building ARCH=um [1].

[1] https://lore.kernel.org/linux-mm/20230718172105.GA1714004@dev-arch.thelio-3990X/T/

Thanks,
Liam


* Liam R. Howlett <Liam.Howlett@oracle.com> [230714 15:56]:
> Don't set the lock to the mm lock so that the detached VMA tree does not
> complain about being unlocked when the mmap_lock is dropped prior to
> freeing the tree.
> 
> Move the destroying of the detached tree outside the mmap lock all
> together.
> 
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
>  mm/mmap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 7b70379a8b3e..ab6cb00d377a 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2427,7 +2427,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	unsigned long locked_vm = 0;
>  	MA_STATE(mas_detach, &mt_detach, 0, 0);
>  	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
> -	mt_set_external_lock(&mt_detach, &mm->mmap_lock);
> +	mt_detach.ma_external_lock = NULL;
>  
>  	/*
>  	 * If we need to split any vma, do it now to save pain later.
> @@ -2545,11 +2545,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  	/* Statistics and freeing VMAs */
>  	mas_set(&mas_detach, start);
>  	remove_mt(mm, &mas_detach);
> -	__mt_destroy(&mt_detach);
>  	validate_mm(mm);
>  	if (unlock)
>  		mmap_read_unlock(mm);
>  
> +	__mt_destroy(&mt_detach);
>  	return 0;
>  
>  clear_tree_failed:
> -- 
> 2.39.2
>
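
A note on the ARCH=um failure referenced in [1] above (this explanation is
an assumption on the editor's part, not something stated in this thread):
ma_external_lock is only a real pointer when lockdep is enabled, so the
direct NULL assignment in this v2 would presumably not compile on
configurations without CONFIG_LOCKDEP.  A paraphrased, non-verbatim sketch
of the relevant include/linux/maple_tree.h definitions (consult the header
for the exact code):

	#ifdef CONFIG_LOCKDEP
	typedef struct lockdep_map *lockdep_map_p;	/* real pointer: "= NULL" builds */
	#else
	typedef struct { /* nothing */ } lockdep_map_p;	/* empty struct: "= NULL" does not */
	#endif

	struct maple_tree {
		union {
			spinlock_t	ma_lock;		/* internal lock */
			lockdep_map_p	ma_external_lock;	/* lockdep tracking only */
		};
		/* ... */
	};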

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index 7b70379a8b3e..ab6cb00d377a 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2427,7 +2427,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	unsigned long locked_vm = 0;
 	MA_STATE(mas_detach, &mt_detach, 0, 0);
 	mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
-	mt_set_external_lock(&mt_detach, &mm->mmap_lock);
+	mt_detach.ma_external_lock = NULL;
 
 	/*
 	 * If we need to split any vma, do it now to save pain later.
@@ -2545,11 +2545,11 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	/* Statistics and freeing VMAs */
 	mas_set(&mas_detach, start);
 	remove_mt(mm, &mas_detach);
-	__mt_destroy(&mt_detach);
 	validate_mm(mm);
 	if (unlock)
 		mmap_read_unlock(mm);
 
+	__mt_destroy(&mt_detach);
 	return 0;
 
 clear_tree_failed: