[v3,02/16] mm/mmap: Introduce abort_munmap_vmas()

Message ID 20240704182718.2653918-3-Liam.Howlett@oracle.com (mailing list archive)
State New
Series Avoid MAP_FIXED gap exposure

Commit Message

Liam R. Howlett July 4, 2024, 6:27 p.m. UTC
Extract the cleanup of failed munmap() operations from
do_vmi_align_munmap().  This simplifies later patches in the series.

Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 mm/mmap.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

Comments

Lorenzo Stoakes July 5, 2024, 5:02 p.m. UTC | #1
On Thu, Jul 04, 2024 at 02:27:04PM GMT, Liam R. Howlett wrote:
> Extract the cleanup of failed munmap() operations from
> do_vmi_align_munmap().  This simplifies later patches in the series.
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
>  mm/mmap.c | 25 ++++++++++++++++++++-----
>  1 file changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 28a46d9ddde0..d572e1ff8255 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2586,6 +2586,25 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
>  			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
>  }
>
> +/*
> + * abort_munmap_vmas - Undo any munmap work and free resources
> + *
> + * Reattach detached vmas, free up maple tree used to track the vmas.
> + */
> +static inline void abort_munmap_vmas(struct ma_state *mas_detach)
> +{
> +	struct vm_area_struct *vma;
> +	int limit;
> +
> +	limit = mas_detach->index;

This actually feels like a change to existing behaviour - a sensible
one - as you are not just walking the tree start-to-end but rather only
walking up to the point that it has been populated (assuming I'm not
missing anything; it looks to me like mas_for_each is _inclusive_ on max).

Maybe worth mentioning in the commit msg?

> +	mas_set(mas_detach, 0);
> +	/* Re-attach any detached VMAs */
> +	mas_for_each(mas_detach, vma, limit)
> +		vma_mark_detached(vma, false);
> +
> +	__mt_destroy(mas_detach->tree);
> +}
> +
>  /*
>   * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
>   * @vmi: The vma iterator
> @@ -2740,11 +2759,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  userfaultfd_error:
>  munmap_gather_failed:
>  end_split_failed:
> -	mas_set(&mas_detach, 0);
> -	mas_for_each(&mas_detach, next, end)
> -		vma_mark_detached(next, false);
> -
> -	__mt_destroy(&mt_detach);
> +	abort_munmap_vmas(&mas_detach);
>  start_split_failed:
>  map_count_exceeded:
>  	validate_mm(mm);
> --
> 2.43.0
>

This looks fine though, feel free to add:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Liam R. Howlett July 5, 2024, 6:12 p.m. UTC | #2
* Lorenzo Stoakes <lorenzo.stoakes@oracle.com> [240705 13:02]:
> On Thu, Jul 04, 2024 at 02:27:04PM GMT, Liam R. Howlett wrote:
> > Extract the cleanup of failed munmap() operations from
> > do_vmi_align_munmap().  This simplifies later patches in the series.
> >
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> > ---
> >  mm/mmap.c | 25 ++++++++++++++++++++-----
> >  1 file changed, 20 insertions(+), 5 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 28a46d9ddde0..d572e1ff8255 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2586,6 +2586,25 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
> >  			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> >  }
> >
> > +/*
> > + * abort_munmap_vmas - Undo any munmap work and free resources
> > + *
> > + * Reattach detached vmas, free up maple tree used to track the vmas.
> > + */
> > +static inline void abort_munmap_vmas(struct ma_state *mas_detach)
> > +{
> > +	struct vm_area_struct *vma;
> > +	int limit;
> > +
> > +	limit = mas_detach->index;
> 
> This actually feels like a change to existing behaviour - a sensible
> one - as you are not just walking the tree start-to-end but rather only
> walking up to the point that it has been populated (assuming I'm not
> missing anything; it looks to me like mas_for_each is _inclusive_ on max).

This is not the main tree, but the detached tree.  It only contains the
vmas that are going to be freed (or, rather, aborted from being freed).

I see what you mean: the 'end' in the abort code below would be one
beyond the tree walk.  The new abort code uses the index (from the
previous write) as the limit.

All that really matters is that we go to a number high enough to cover
all vmas that were detached.  I used 'end' in the code below because I
knew it would cover all of the vmas added (we actually start at index
0).

The value of 'mas_detach->index' is used in the new code because I knew
that's as far as I had to go, and it let me limit the arguments passed
to the function.

I think I'll actually change the limit to ULONG_MAX in another revision,
because I like that better than expecting the index not to have been
touched by others.

> 
> Maybe worth mentioning in the commit msg?

Yes, good idea.  Thanks for catching this.

> 
> > +	mas_set(mas_detach, 0);
> > +	/* Re-attach any detached VMAs */
> > +	mas_for_each(mas_detach, vma, limit)
> > +		vma_mark_detached(vma, false);
> > +
> > +	__mt_destroy(mas_detach->tree);
> > +}
> > +
> >  /*
> >   * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> >   * @vmi: The vma iterator
> > @@ -2740,11 +2759,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >  userfaultfd_error:
> >  munmap_gather_failed:
> >  end_split_failed:
> > -	mas_set(&mas_detach, 0);
> > -	mas_for_each(&mas_detach, next, end)
> > -		vma_mark_detached(next, false);
> > -
> > -	__mt_destroy(&mt_detach);
> > +	abort_munmap_vmas(&mas_detach);
> >  start_split_failed:
> >  map_count_exceeded:
> >  	validate_mm(mm);
> > --
> > 2.43.0
> >
> 
> This looks fine though, feel free to add:
> 
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Thanks.

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index 28a46d9ddde0..d572e1ff8255 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2586,6 +2586,25 @@ struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
 			 vma->vm_userfaultfd_ctx, anon_vma_name(vma));
 }
 
+/*
+ * abort_munmap_vmas - Undo any munmap work and free resources
+ *
+ * Reattach detached vmas, free up maple tree used to track the vmas.
+ */
+static inline void abort_munmap_vmas(struct ma_state *mas_detach)
+{
+	struct vm_area_struct *vma;
+	int limit;
+
+	limit = mas_detach->index;
+	mas_set(mas_detach, 0);
+	/* Re-attach any detached VMAs */
+	mas_for_each(mas_detach, vma, limit)
+		vma_mark_detached(vma, false);
+
+	__mt_destroy(mas_detach->tree);
+}
+
 /*
  * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
  * @vmi: The vma iterator
@@ -2740,11 +2759,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 userfaultfd_error:
 munmap_gather_failed:
 end_split_failed:
-	mas_set(&mas_detach, 0);
-	mas_for_each(&mas_detach, next, end)
-		vma_mark_detached(next, false);
-
-	__mt_destroy(&mt_detach);
+	abort_munmap_vmas(&mas_detach);
 start_split_failed:
 map_count_exceeded:
 	validate_mm(mm);