
[v4,18/21] mm/mmap: Move can_modify_mm() check down the stack

Message ID 20240710192250.4114783-19-Liam.Howlett@oracle.com (mailing list archive)
State New
Series Avoid MAP_FIXED gap exposure

Commit Message

Liam R. Howlett July 10, 2024, 7:22 p.m. UTC
From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Now that there is no arch_unmap() call, the check for mseal'ed vmas can
be moved lower as well.  This has the benefit of only checking whether
anything is mseal'ed when there is actually something to check; that is,
we know there is at least one vma that is in the way and needs to be
checked.

Only call can_modify_mm() in do_vmi_align_munmap() and in the MAP_FIXED
case of mmap_region().

Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Jeff Xu <jeffxu@chromium.org>
---
 mm/mmap.c | 24 ++++++++----------------
 1 file changed, 8 insertions(+), 16 deletions(-)
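
For context, the behaviour both call sites must preserve is that an mseal()ed
range cannot be unmapped.  Below is a minimal user-space sketch (not part of
this patch) showing munmap() failing with EPERM on a sealed region; it assumes
a kernel with the mseal() syscall and only falls back to the x86_64 syscall
number when <sys/syscall.h> lacks __NR_mseal.

/*
 * Hedged example, not part of the patch: demonstrate that munmap() on an
 * mseal()ed region is rejected with EPERM -- the behaviour the relocated
 * can_modify_mm() check in do_vmi_align_munmap() must preserve.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumption: x86_64 syscall number */
#endif

int main(void)
{
	size_t len = 2 * (size_t)sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (syscall(__NR_mseal, p, len, 0UL)) {
		perror("mseal");	/* kernel without mseal() support */
		return 1;
	}

	/* A sealed vma is "in the way", so the unmap must fail. */
	if (munmap(p, len) == -1 && errno == EPERM)
		printf("munmap on sealed region: EPERM, as expected\n");
	else
		printf("unexpected munmap result, errno=%s\n", strerror(errno));

	return 0;
}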

Comments

Jeff Xu July 17, 2024, 5:03 a.m. UTC | #1
Hi

On Wed, Jul 10, 2024 at 12:23 PM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
>
> Without an arch_unmap() call anymore,
Is there another patch that removes arch_unmap()?
Can you please post a link to that patch?

Thanks
-Jeff

> the check for mseal'ed vmas can be
> moved lower as well.  This has the benefit of only actually checking if
> things are msealed when there is anything to check.  That is, we know
> there is at least one vma that is in the way and needs to be checked.
>
> Only call the can_modify_mm() in do_vmi_align_munmap() and the MAP_FIXED
> case of mmap_region().
>
> Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
> Cc: Jeff Xu <jeffxu@chromium.org>
> ---
>  mm/mmap.c | 24 ++++++++----------------
>  1 file changed, 8 insertions(+), 16 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index df565f51971d..c343366b3ad2 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2849,6 +2849,10 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>         struct vma_munmap_struct vms;
>         int error;
>
> +       /* Prevent unmapping a sealed VMA. */
> +       if (unlikely(!can_modify_mm(mm, start, end)))
> +               return -EPERM;
> +
>         init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
>         error = vms_gather_munmap_vmas(&vms, &mas_detach);
>         if (error)
> @@ -2899,13 +2903,6 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>         if (end == start)
>                 return -EINVAL;
>
> -       /*
> -        * Prevent unmapping a sealed VMA.
> -        * can_modify_mm assumes we have acquired the lock on MM.
> -        */
> -       if (unlikely(!can_modify_mm(mm, start, end)))
> -               return -EPERM;
> -
>         /* Find the first overlapping VMA */
>         vma = vma_find(vmi, end);
>         if (!vma) {
> @@ -2963,13 +2960,15 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
>         if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
>                 return -ENOMEM;
>
> -       if (unlikely(!can_modify_mm(mm, addr, end)))
> -               return -EPERM;
>
>         /* Find the first overlapping VMA */
>         vma = vma_find(&vmi, end);
>         init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
>         if (vma) {
> +               /* Prevent unmapping a sealed VMA. */
> +               if (unlikely(!can_modify_mm(mm, addr, end)))
> +                       return -EPERM;
> +
>                 mt_init_flags(&mt_detach, vmi.mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
>                 mt_on_stack(mt_detach);
>                 mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
> @@ -3341,13 +3340,6 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  {
>         struct mm_struct *mm = vma->vm_mm;
>
> -       /*
> -        * Prevent unmapping a sealed VMA.
> -        * can_modify_mm assumes we have acquired the lock on MM.
> -        */
> -       if (unlikely(!can_modify_mm(mm, start, end)))
> -               return -EPERM;
> -
>         return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
>  }
>
> --
> 2.43.0
>
Liam R. Howlett July 17, 2024, 2:07 p.m. UTC | #2
* Jeff Xu <jeffxu@chromium.org> [240717 01:03]:
> Hi
> 
> On Wed, Jul 10, 2024 at 12:23 PM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
> >
> > Without an arch_unmap() call anymore,
> Is there another patch that removes arch_unmap()?
> Can you please post a link to that patch?
> 

Thanks for looking at this patch.

The patch to remove arch_unmap() cannot be used on its own, as powerpc
needs a replacement for it.  I will be moving the arch_unmap() call to
later in the unmap process, as it was before mpx moved it (mpx has since
been dropped from the kernel).  I will add you to the Cc for the whole
series next time.

Thanks,
Liam

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index df565f51971d..c343366b3ad2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2849,6 +2849,10 @@  do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	struct vma_munmap_struct vms;
 	int error;
 
+	/* Prevent unmapping a sealed VMA. */
+	if (unlikely(!can_modify_mm(mm, start, end)))
+		return -EPERM;
+
 	init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
 	error = vms_gather_munmap_vmas(&vms, &mas_detach);
 	if (error)
@@ -2899,13 +2903,6 @@  int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (end == start)
 		return -EINVAL;
 
-	/*
-	 * Prevent unmapping a sealed VMA.
-	 * can_modify_mm assumes we have acquired the lock on MM.
-	 */
-	if (unlikely(!can_modify_mm(mm, start, end)))
-		return -EPERM;
-
 	/* Find the first overlapping VMA */
 	vma = vma_find(vmi, end);
 	if (!vma) {
@@ -2963,13 +2960,15 @@  unsigned long mmap_region(struct file *file, unsigned long addr,
 	if (!may_expand_vm(mm, vm_flags, pglen - nr_pages))
 		return -ENOMEM;
 
-	if (unlikely(!can_modify_mm(mm, addr, end)))
-		return -EPERM;
 
 	/* Find the first overlapping VMA */
 	vma = vma_find(&vmi, end);
 	init_vma_munmap(&vms, &vmi, vma, addr, end, uf, /* unlock = */ false);
 	if (vma) {
+		/* Prevent unmapping a sealed VMA. */
+		if (unlikely(!can_modify_mm(mm, addr, end)))
+			return -EPERM;
+
 		mt_init_flags(&mt_detach, vmi.mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
 		mt_on_stack(mt_detach);
 		mas_init(&mas_detach, &mt_detach, /* addr = */ 0);
@@ -3341,13 +3340,6 @@  int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	/*
-	 * Prevent unmapping a sealed VMA.
-	 * can_modify_mm assumes we have acquired the lock on MM.
-	 */
-	if (unlikely(!can_modify_mm(mm, start, end)))
-		return -EPERM;
-
 	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
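
As a companion check for the mmap_region() hunk above, a similarly hedged
user-space sketch (again not part of the patch, with the same __NR_mseal
assumption as the earlier example): a MAP_FIXED mapping over a sealed region
must also be rejected with EPERM, since MAP_FIXED implies unmapping whatever
is underneath.

/*
 * Hedged example, not part of the patch: MAP_FIXED over an mseal()ed
 * region should fail with EPERM, exercising the check that now sits in
 * the "if (vma)" branch of mmap_region().
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumption: x86_64 syscall number */
#endif

int main(void)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, len, PROT_READ,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED || syscall(__NR_mseal, p, len, 0UL)) {
		perror("setup");
		return 1;
	}

	/* MAP_FIXED must clear the range first; the sealed vma blocks it. */
	void *q = mmap(p, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);

	if (q == MAP_FAILED && errno == EPERM)
		printf("MAP_FIXED over sealed region: EPERM, as expected\n");
	else
		printf("unexpected mmap result %p, errno=%s\n", q,
		       strerror(errno));

	return 0;
}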