
[v4,5/6] mm: always lock new vma before inserting into vma tree

Message ID 20230804152724.3090321-6-surenb@google.com (mailing list archive)
State New
Series: make vma locking more obvious

Commit Message

Suren Baghdasaryan Aug. 4, 2023, 3:27 p.m. UTC
While it's not strictly necessary to lock a newly created vma before
adding it into the vma tree (as long as no further changes are performed
to it), it seems like a good policy to lock it to prevent accidental
changes after it becomes visible to page fault handlers. Lock the vma
before adding it into the vma tree.

Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
---
 mm/mmap.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

Comments

Jann Horn Aug. 14, 2023, 2:54 p.m. UTC | #1
@akpm can you fix this up?

On Fri, Aug 4, 2023 at 5:27 PM Suren Baghdasaryan <surenb@google.com> wrote:
> While it's not strictly necessary to lock a newly created vma before
> adding it into the vma tree (as long as no further changes are performed
> to it), it seems like a good policy to lock it to prevent accidental
> changes after it becomes visible to page fault handlers. Lock the vma
> before adding it into the vma tree.
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
> ---
>  mm/mmap.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 3937479d0e07..850a39dee075 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -412,6 +412,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
>         if (vma_iter_prealloc(&vmi))
>                 return -ENOMEM;
>
> +       vma_start_write(vma);
> +
>         if (vma->vm_file) {
>                 mapping = vma->vm_file->f_mapping;
>                 i_mmap_lock_write(mapping);

Something went wrong when this part of the patch was applied, because
of a conflict with "mm/mmap: move vma operations to mm_struct out of
the critical section of file mapping lock"; see how this patch ended
up in the mm tree:
https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/commit/?id=26cb4dafc13871ab68a4fb480ca1e19381cff392

> @@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
>
>  	vma_iter_store(&vmi, vma);
>
> +	vma_start_write(vma);
> +
>  	if (vma->vm_file) {
>  		mapping = vma->vm_file->f_mapping;
>  		i_mmap_lock_write(mapping);

The "vma_start_write()" has to be ordered before the
"vma_iter_store(&vmi, vma)".
Andrew Morton Aug. 14, 2023, 7:15 p.m. UTC | #2
On Mon, 14 Aug 2023 16:54:01 +0200 Jann Horn <jannh@google.com> wrote:

> > @@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> >
> >  	vma_iter_store(&vmi, vma);
> >
> > +	vma_start_write(vma);
> > +
> >  	if (vma->vm_file) {
> >  		mapping = vma->vm_file->f_mapping;
> >  		i_mmap_lock_write(mapping);
> 
> The "vma_start_write()" has to be ordered before the
> "vma_iter_store(&vmi, vma)".

Thanks.  This?


--- a/mm/mmap.c~mm-always-lock-new-vma-before-inserting-into-vma-tree-fix
+++ a/mm/mmap.c
@@ -401,10 +401,10 @@ static int vma_link(struct mm_struct *mm
 	if (vma_iter_prealloc(&vmi, vma))
 		return -ENOMEM;
 
-	vma_iter_store(&vmi, vma);
-
 	vma_start_write(vma);
 
+	vma_iter_store(&vmi, vma);
+
 	if (vma->vm_file) {
 		mapping = vma->vm_file->f_mapping;
 		i_mmap_lock_write(mapping);
Liam R. Howlett Aug. 14, 2023, 7:19 p.m. UTC | #3
* Andrew Morton <akpm@linux-foundation.org> [230814 15:15]:
> On Mon, 14 Aug 2023 16:54:01 +0200 Jann Horn <jannh@google.com> wrote:
> 
> > > @@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> > >
> > >  	vma_iter_store(&vmi, vma);
> > >
> > > +	vma_start_write(vma);
> > > +
> > >  	if (vma->vm_file) {
> > >  		mapping = vma->vm_file->f_mapping;
> > >  		i_mmap_lock_write(mapping);
> > 
> > The "vma_start_write()" has to be ordered before the
> > "vma_iter_store(&vmi, vma)".
> 
> Thanks.  This?

Yes, this looks good.

> 
> 
> --- a/mm/mmap.c~mm-always-lock-new-vma-before-inserting-into-vma-tree-fix
> +++ a/mm/mmap.c
> @@ -401,10 +401,10 @@ static int vma_link(struct mm_struct *mm
>  	if (vma_iter_prealloc(&vmi, vma))
>  		return -ENOMEM;
>  
> -	vma_iter_store(&vmi, vma);
> -
>  	vma_start_write(vma);
>  
> +	vma_iter_store(&vmi, vma);
> +
>  	if (vma->vm_file) {
>  		mapping = vma->vm_file->f_mapping;
>  		i_mmap_lock_write(mapping);
> _
>
Jann Horn Aug. 14, 2023, 8:02 p.m. UTC | #4
On Mon, Aug 14, 2023 at 9:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> On Mon, 14 Aug 2023 16:54:01 +0200 Jann Horn <jannh@google.com> wrote:
>
> > > @@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> > >
> > >  	vma_iter_store(&vmi, vma);
> > >
> > > +	vma_start_write(vma);
> > > +
> > >  	if (vma->vm_file) {
> > >  		mapping = vma->vm_file->f_mapping;
> > >  		i_mmap_lock_write(mapping);
> >
> > The "vma_start_write()" has to be ordered before the
> > "vma_iter_store(&vmi, vma)".
>
> Thanks.  This?
>
>
> --- a/mm/mmap.c~mm-always-lock-new-vma-before-inserting-into-vma-tree-fix
> +++ a/mm/mmap.c
> @@ -401,10 +401,10 @@ static int vma_link(struct mm_struct *mm
>         if (vma_iter_prealloc(&vmi, vma))
>                 return -ENOMEM;
>
> -       vma_iter_store(&vmi, vma);
> -
>         vma_start_write(vma);
>
> +       vma_iter_store(&vmi, vma);
> +
>         if (vma->vm_file) {
>                 mapping = vma->vm_file->f_mapping;
>                 i_mmap_lock_write(mapping);

Yes, thanks, that looks good.
Suren Baghdasaryan Aug. 14, 2023, 8:06 p.m. UTC | #5
On Mon, Aug 14, 2023 at 1:02 PM Jann Horn <jannh@google.com> wrote:
>
> On Mon, Aug 14, 2023 at 9:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
> > On Mon, 14 Aug 2023 16:54:01 +0200 Jann Horn <jannh@google.com> wrote:
> >
> > > > @@ -403,6 +403,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> > > >
> > > >  	vma_iter_store(&vmi, vma);
> > > >
> > > > +	vma_start_write(vma);
> > > > +
> > > >  	if (vma->vm_file) {
> > > >  		mapping = vma->vm_file->f_mapping;
> > > >  		i_mmap_lock_write(mapping);
> > >
> > > The "vma_start_write()" has to be ordered before the
> > > "vma_iter_store(&vmi, vma)".
> >
> > Thanks.  This?
> >
> >
> > --- a/mm/mmap.c~mm-always-lock-new-vma-before-inserting-into-vma-tree-fix
> > +++ a/mm/mmap.c
> > @@ -401,10 +401,10 @@ static int vma_link(struct mm_struct *mm
> >         if (vma_iter_prealloc(&vmi, vma))
> >                 return -ENOMEM;
> >
> > -       vma_iter_store(&vmi, vma);
> > -
> >         vma_start_write(vma);
> >
> > +       vma_iter_store(&vmi, vma);
> > +
> >         if (vma->vm_file) {
> >                 mapping = vma->vm_file->f_mapping;
> >                 i_mmap_lock_write(mapping);
>
> Yes, thanks, that looks good.

Ack. Thanks!

Patch

diff --git a/mm/mmap.c b/mm/mmap.c
index 3937479d0e07..850a39dee075 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -412,6 +412,8 @@ static int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
 	if (vma_iter_prealloc(&vmi))
 		return -ENOMEM;
 
+	vma_start_write(vma);
+
 	if (vma->vm_file) {
 		mapping = vma->vm_file->f_mapping;
 		i_mmap_lock_write(mapping);
@@ -477,7 +479,8 @@ static inline void vma_prepare(struct vma_prepare *vp)
 	vma_start_write(vp->vma);
 	if (vp->adj_next)
 		vma_start_write(vp->adj_next);
-	/* vp->insert is always a newly created VMA, no need for locking */
+	if (vp->insert)
+		vma_start_write(vp->insert);
 	if (vp->remove)
 		vma_start_write(vp->remove);
 	if (vp->remove2)
@@ -3098,6 +3101,7 @@ static int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
 	vma->vm_pgoff = addr >> PAGE_SHIFT;
 	vm_flags_init(vma, flags);
 	vma->vm_page_prot = vm_get_page_prot(flags);
+	vma_start_write(vma);
 	if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
 		goto mas_store_fail;
 
@@ -3345,7 +3349,6 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 			get_file(new_vma->vm_file);
 		if (new_vma->vm_ops && new_vma->vm_ops->open)
 			new_vma->vm_ops->open(new_vma);
-		vma_start_write(new_vma);
 		if (vma_link(mm, new_vma))
 			goto out_vma_link;
 		*need_rmap_locks = false;