diff mbox series

[RFC,RESEND,11/28] mm/mmap: mark VMAs as locked before merging or splitting them

Message ID 20220901173516.702122-12-surenb@google.com (mailing list archive)
State New
Series per-VMA locks proposal

Commit Message

Suren Baghdasaryan Sept. 1, 2022, 5:34 p.m. UTC
Decisions about whether VMAs can be merged or split must be made while
the VMAs are protected from changes which can affect that decision. For
example, vma_merge() uses vma->anon_vma when deciding whether a VMA can
be merged, while the page fault handler changes vma->anon_vma during a
COW operation.
Mark all VMAs which might be affected by a merge or split operation as
locked before deciding how such operations should be performed.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 mm/mmap.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Laurent Dufour Sept. 6, 2022, 3:44 p.m. UTC | #1
On 01/09/2022 at 19:34, Suren Baghdasaryan wrote:
> Decisions about whether VMAs can be merged or split must be made while
> the VMAs are protected from changes which can affect that decision. For
> example, vma_merge() uses vma->anon_vma when deciding whether a VMA can
> be merged, while the page fault handler changes vma->anon_vma during a
> COW operation.
> Mark all VMAs which might be affected by a merge or split operation as
> locked before deciding how such operations should be performed.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> ---
>  mm/mmap.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index ed58cf0689b2..ade3909c89b4 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1147,10 +1147,17 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm,
>  	if (vm_flags & VM_SPECIAL)
>  		return NULL;
>  
> +	if (prev)
> +		vma_mark_locked(prev);
>  	next = vma_next(mm, prev);
>  	area = next;
> -	if (area && area->vm_end == end)		/* cases 6, 7, 8 */
> +	if (area)
> +		vma_mark_locked(area);
> +	if (area && area->vm_end == end) {		/* cases 6, 7, 8 */
>  		next = next->vm_next;
> +		if (next)
> +			vma_mark_locked(next);
> +	}
>  
>  	/* verify some invariant that must be enforced by the caller */
>  	VM_WARN_ON(prev && addr <= prev->vm_start);
> @@ -2687,6 +2694,7 @@ int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
>  	struct vm_area_struct *new;
>  	int err;
>  
> +	vma_mark_locked(vma);
>  	if (vma->vm_ops && vma->vm_ops->may_split) {
>  		err = vma->vm_ops->may_split(vma, addr);
>  		if (err)

That looks good to me: the new VMA allocated by vm_area_dup(vma)
inherits the locked state from vma.

Reviewed-by: Laurent Dufour <laurent.dufour@fr.ibm.com>

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index ed58cf0689b2..ade3909c89b4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1147,10 +1147,17 @@  struct vm_area_struct *vma_merge(struct mm_struct *mm,
 	if (vm_flags & VM_SPECIAL)
 		return NULL;
 
+	if (prev)
+		vma_mark_locked(prev);
 	next = vma_next(mm, prev);
 	area = next;
-	if (area && area->vm_end == end)		/* cases 6, 7, 8 */
+	if (area)
+		vma_mark_locked(area);
+	if (area && area->vm_end == end) {		/* cases 6, 7, 8 */
 		next = next->vm_next;
+		if (next)
+			vma_mark_locked(next);
+	}
 
 	/* verify some invariant that must be enforced by the caller */
 	VM_WARN_ON(prev && addr <= prev->vm_start);
@@ -2687,6 +2694,7 @@  int __split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct vm_area_struct *new;
 	int err;
 
+	vma_mark_locked(vma);
 	if (vma->vm_ops && vma->vm_ops->may_split) {
 		err = vma->vm_ops->may_split(vma, addr);
 		if (err)