[v5,06/10] userfaultfd/shmem: modify shmem_mcopy_atomic_pte to use install_pte()

Message ID 20210427225244.4326-7-axelrasmussen@google.com (mailing list archive)
State New
Series userfaultfd: add minor fault handling for shmem

Commit Message

Axel Rasmussen April 27, 2021, 10:52 p.m. UTC
In a previous commit, we added the mcopy_atomic_install_pte() helper.
This helper does the job of setting up PTEs for an existing page, to map
it into a given VMA. It deals with both the anon and shmem cases, as
well as the shared and private cases.

In other words, shmem_mcopy_atomic_pte() duplicates a case it already
handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
directly, to reduce code duplication.

This requires that we refactor shmem_mcopy_atomic_pte() a bit:

Instead of doing accounting (shmem_recalc_inode() et al) part-way
through the PTE setup, do it afterward. This frees up
mcopy_atomic_install_pte() from having to care about this accounting,
and means we don't need to e.g. shmem_uncharge() in the error path.

A side effect is this switches shmem_mcopy_atomic_pte() to use
lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
This wrapper does some extra accounting in an exceptional case, if
appropriate, so it's actually the more correct thing to use.

Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 include/linux/userfaultfd_k.h |  5 ++++
 mm/shmem.c                    | 48 +++++------------------------------
 mm/userfaultfd.c              | 17 +++++--------
 3 files changed, 18 insertions(+), 52 deletions(-)
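
(For reference on the commit message's last point: the exceptional case in
lru_cache_add_inactive_or_unevictable() is an mlocked VMA. Paraphrasing
mm/swap.c from around this kernel version — context only, not part of the
patch:)

	void lru_cache_add_inactive_or_unevictable(struct page *page,
						   struct vm_area_struct *vma)
	{
		bool unevictable = (vma->vm_flags &
				    (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;

		if (unlikely(unevictable) && !TestSetPageMlocked(page)) {
			/* the "extra accounting" plain lru_cache_add()
			 * would skip: count the page as mlocked */
			__mod_zone_page_state(page_zone(page), NR_MLOCK,
					      thp_nr_pages(page));
			count_vm_events(UNEVICTABLE_PGMLOCKED,
					thp_nr_pages(page));
		}
		lru_cache_add(page);
	}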

Comments

Hugh Dickins April 28, 2021, 12:58 a.m. UTC | #1
On Tue, 27 Apr 2021, Axel Rasmussen wrote:

> In a previous commit, we added the mcopy_atomic_install_pte() helper.
> This helper does the job of setting up PTEs for an existing page, to map
> it into a given VMA. It deals with both the anon and shmem cases, as
> well as the shared and private cases.
> 
> In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> directly, to reduce code duplication.
> 
> This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> 
> Instead of doing accounting (shmem_recalc_inode() et al) part-way
> through the PTE setup, do it afterward. This frees up
> mcopy_atomic_install_pte() from having to care about this accounting,
> and means we don't need to e.g. shmem_uncharge() in the error path.
> 
> A side effect is this switches shmem_mcopy_atomic_pte() to use
> lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> This wrapper does some extra accounting in an exceptional case, if
> appropriate, so it's actually the more correct thing to use.
> 
> Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>

Not quite. Two things.

One, in this version, delete_from_page_cache(page) has vanished
from the particular error path which needs it.

Two, and I think this predates your changes (so needs a separate
fix patch first, for backport to stable? a user with bad intentions
might be able to trigger the BUG), in pondering the new error paths
and that /* don't free the page */ one in particular, isn't it the
case that the shmem_inode_acct_block() on entry might succeed the
first time, but atomic copy fail so -ENOENT, then something else
fill up the tmpfs before the retry comes in, so that retry then
fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?

(As I understand it, the shmem_inode_unacct_blocks() has to be
done before returning, because the caller may be unable to retry.)
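
For reference, the caller's handling in __mcopy_atomic() is roughly the
following (paraphrased and trimmed from mm/userfaultfd.c of this era, to
make that sequence concrete):

	err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
			       src_addr, &page, mcopy_mode, wp_copy);
	if (unlikely(err == -ENOENT)) {
		/* First pass: shmem_inode_acct_block() succeeded but the
		 * atomic copy failed.  Copy from userspace without
		 * mmap_lock held, then retry with the filled page. */
		mmap_read_unlock(dst_mm);
		BUG_ON(!page);
		/* ... copy_from_user() into page ... */
		goto retry;
	} else
		BUG_ON(page);	/* fires if the retry fails with -ENOMEM
				 * while page is still unconsumed */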

What the right fix is rather depends on other uses of __mcopy_atomic():
if they obviously cannot hit that BUG_ON(page), you may prefer to leave
it in, and fix it here where shmem_inode_acct_block() fails. Or you may
prefer instead to delete that "else BUG_ON(page);" - looks as if that
would end up doing the right thing.  Peter may have a preference.

(Or, we could consider doing the shmem_inode_acct_block() only after
the page has been copied in: its current placing reflects how shmem.c
does it elsewhere, and there's reason for that, but it doesn't always
work out right. Don't be surprised if I change the ordering in future,
but it's probably best not to mess with that ordering now.)

Sorry, if this is a pre-existing issue, then we are taking advantage
of you, in asking you to fix it: but I hope that while you're in there,
it will make sense to do so.

Thanks,
Hugh

> ---
>  include/linux/userfaultfd_k.h |  5 ++++
>  mm/shmem.c                    | 48 +++++------------------------------
>  mm/userfaultfd.c              | 17 +++++--------
>  3 files changed, 18 insertions(+), 52 deletions(-)
Peter Xu April 28, 2021, 3:56 p.m. UTC | #2
On Tue, Apr 27, 2021 at 05:58:16PM -0700, Hugh Dickins wrote:
> On Tue, 27 Apr 2021, Axel Rasmussen wrote:
> 
> > In a previous commit, we added the mcopy_atomic_install_pte() helper.
> > This helper does the job of setting up PTEs for an existing page, to map
> > it into a given VMA. It deals with both the anon and shmem cases, as
> > well as the shared and private cases.
> > 
> > In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> > handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> > directly, to reduce code duplication.
> > 
> > This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> > 
> > Instead of doing accounting (shmem_recalc_inode() et al) part-way
> > through the PTE setup, do it afterward. This frees up
> > mcopy_atomic_install_pte() from having to care about this accounting,
> > and means we don't need to e.g. shmem_uncharge() in the error path.
> > 
> > A side effect is this switches shmem_mcopy_atomic_pte() to use
> > lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> > This wrapper does some extra accounting in an exceptional case, if
> > appropriate, so it's actually the more correct thing to use.
> > 
> > Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
> 
> Not quite. Two things.
> 
> One, in this version, delete_from_page_cache(page) has vanished
> from the particular error path which needs it.

Agreed.  I also spotted that the set_page_dirty() seems to have been overlooked
when reusing mcopy_atomic_install_pte(), which afaiu should be moved into the
helper.

> 
> Two, and I think this predates your changes (so needs a separate
> fix patch first, for backport to stable? a user with bad intentions
> might be able to trigger the BUG), in pondering the new error paths
> and that /* don't free the page */ one in particular, isn't it the
> case that the shmem_inode_acct_block() on entry might succeed the
> first time, but atomic copy fail so -ENOENT, then something else
> fill up the tmpfs before the retry comes in, so that retry then
> fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?
> 
> (As I understand it, the shmem_inode_unacct_blocks() has to be
> done before returning, because the caller may be unable to retry.)
> 
> What the right fix is rather depends on other uses of __mcopy_atomic():
> if they obviously cannot hit that BUG_ON(page), you may prefer to leave
> it in, and fix it here where shmem_inode_acct_block() fails. Or you may
> prefer instead to delete that "else BUG_ON(page);" - looks as if that
> would end up doing the right thing.  Peter may have a preference.

To me, the BUG_ON(page) was meant to guarantee that mfill_atomic_pte() had
consumed the page properly when possible.  Removing the BUG_ON() looks good
already; it will just stop covering the case when e.g. ret==0.

So maybe it's slightly better to release the page when shmem_inode_acct_block()
fails (so as to still keep some guard on the page)?
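
Something like this minimal sketch, say, against the entry of
shmem_mcopy_atomic_pte() in this patch (hypothetical, just to illustrate):

	if (!shmem_inode_acct_block(inode, 1)) {
		/*
		 * We may have got a page from an earlier -ENOENT retry,
		 * and now hit -ENOMEM because something else filled up
		 * tmpfs in the meantime.  Release the page here instead
		 * of returning it, so the caller's BUG_ON(page) cannot
		 * fire.
		 */
		if (*pagep) {
			put_page(*pagep);
			*pagep = NULL;
		}
		return -ENOMEM;
	}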

Thanks,
Axel Rasmussen April 28, 2021, 3:59 p.m. UTC | #3
On Wed, Apr 28, 2021 at 8:56 AM Peter Xu <peterx@redhat.com> wrote:
>
> On Tue, Apr 27, 2021 at 05:58:16PM -0700, Hugh Dickins wrote:
> > On Tue, 27 Apr 2021, Axel Rasmussen wrote:
> >
> > > In a previous commit, we added the mcopy_atomic_install_pte() helper.
> > > This helper does the job of setting up PTEs for an existing page, to map
> > > it into a given VMA. It deals with both the anon and shmem cases, as
> > > well as the shared and private cases.
> > >
> > > In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> > > handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> > > directly, to reduce code duplication.
> > >
> > > This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> > >
> > > Instead of doing accounting (shmem_recalc_inode() et al) part-way
> > > through the PTE setup, do it afterward. This frees up
> > > mcopy_atomic_install_pte() from having to care about this accounting,
> > > and means we don't need to e.g. shmem_uncharge() in the error path.
> > >
> > > A side effect is this switches shmem_mcopy_atomic_pte() to use
> > > lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> > > This wrapper does some extra accounting in an exceptional case, if
> > > appropriate, so it's actually the more correct thing to use.
> > >
> > > Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
> >
> > Not quite. Two things.
> >
> > One, in this version, delete_from_page_cache(page) has vanished
> > from the particular error path which needs it.
>
> Agreed.  I also spotted that the set_page_dirty() seems to have been overlooked
> when reusing mcopy_atomic_install_pte(), which afaiu should be moved into the
> helper.

I think this is covered: we explicitly call SetPageDirty() just before
returning in shmem_mcopy_atomic_pte(). If I remember correctly from a
couple of revisions ago, we consciously put it here instead of in the
helper because it resulted in simpler code (error handling in
particular, I think?), and not all callers of the new helper need it.
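
For reference, condensed from the diff below, the success path of
shmem_mcopy_atomic_pte() in this version reads:

	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
				       page, true, false);
	if (ret)
		goto out_release;

	/* accounting (info->alloced etc.), then: */
	SetPageDirty(page);	/* keeps a !VM_WRITE-mapped page from being
				 * reclaimed from under us */
	unlock_page(page);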

>
> >
> > Two, and I think this predates your changes (so needs a separate
> > fix patch first, for backport to stable? a user with bad intentions
> > might be able to trigger the BUG), in pondering the new error paths
> > and that /* don't free the page */ one in particular, isn't it the
> > case that the shmem_inode_acct_block() on entry might succeed the
> > first time, but atomic copy fail so -ENOENT, then something else
> > fill up the tmpfs before the retry comes in, so that retry then
> > fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?
> >
> > (As I understand it, the shmem_inode_unacct_blocks() has to be
> > done before returning, because the caller may be unable to retry.)
> >
> > What the right fix is rather depends on other uses of __mcopy_atomic():
> > if they obviously cannot hit that BUG_ON(page), you may prefer to leave
> > it in, and fix it here where shmem_inode_acct_block() fails. Or you may
> > prefer instead to delete that "else BUG_ON(page);" - looks as if that
> > would end up doing the right thing.  Peter may have a preference.
>
> > To me, the BUG_ON(page) was meant to guarantee that mfill_atomic_pte() had
> > consumed the page properly when possible.  Removing the BUG_ON() looks good
> > already; it will just stop covering the case when e.g. ret==0.
> >
> > So maybe it's slightly better to release the page when shmem_inode_acct_block()
> > fails (so as to still keep some guard on the page)?

This second issue, I will take some more time to investigate. :)

>
> Thanks,
>
> --
> Peter Xu
>
Peter Xu April 28, 2021, 4:23 p.m. UTC | #4
On Wed, Apr 28, 2021 at 08:59:53AM -0700, Axel Rasmussen wrote:
> On Wed, Apr 28, 2021 at 8:56 AM Peter Xu <peterx@redhat.com> wrote:
> >
> > On Tue, Apr 27, 2021 at 05:58:16PM -0700, Hugh Dickins wrote:
> > > On Tue, 27 Apr 2021, Axel Rasmussen wrote:
> > >
> > > > In a previous commit, we added the mcopy_atomic_install_pte() helper.
> > > > This helper does the job of setting up PTEs for an existing page, to map
> > > > it into a given VMA. It deals with both the anon and shmem cases, as
> > > > well as the shared and private cases.
> > > >
> > > > In other words, shmem_mcopy_atomic_pte() duplicates a case it already
> > > > handles. So, expose it, and let shmem_mcopy_atomic_pte() use it
> > > > directly, to reduce code duplication.
> > > >
> > > > This requires that we refactor shmem_mcopy_atomic_pte() a bit:
> > > >
> > > > Instead of doing accounting (shmem_recalc_inode() et al) part-way
> > > > through the PTE setup, do it afterward. This frees up
> > > > mcopy_atomic_install_pte() from having to care about this accounting,
> > > > and means we don't need to e.g. shmem_uncharge() in the error path.
> > > >
> > > > A side effect is this switches shmem_mcopy_atomic_pte() to use
> > > > lru_cache_add_inactive_or_unevictable() instead of just lru_cache_add().
> > > > This wrapper does some extra accounting in an exceptional case, if
> > > > appropriate, so it's actually the more correct thing to use.
> > > >
> > > > Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
> > >
> > > Not quite. Two things.
> > >
> > > One, in this version, delete_from_page_cache(page) has vanished
> > > from the particular error path which needs it.
> >
> > Agreed.  I also spotted that the set_page_dirty() seems to have been overlooked
> > when reusing mcopy_atomic_install_pte(), which afaiu should be moved into the
> > helper.
> 
> I think this is covered: we explicitly call SetPageDirty() just before
> returning in shmem_mcopy_atomic_pte(). If I remember correctly from a
> couple of revisions ago, we consciously put it here instead of in the
> helper because it resulted in simpler code (error handling in
> particular, I think?), and not all callers of the new helper need it.

Indeed, yes that looks okay.

> 
> >
> > >
> > > Two, and I think this predates your changes (so needs a separate
> > > fix patch first, for backport to stable? a user with bad intentions
> > > might be able to trigger the BUG), in pondering the new error paths
> > > and that /* don't free the page */ one in particular, isn't it the
> > > case that the shmem_inode_acct_block() on entry might succeed the
> > > first time, but atomic copy fail so -ENOENT, then something else
> > > fill up the tmpfs before the retry comes in, so that retry then
> > > fail with -ENOMEM, and hit the BUG_ON(page) in __mcopy_atomic()?
> > >
> > > (As I understand it, the shmem_inode_unacct_blocks() has to be
> > > done before returning, because the caller may be unable to retry.)
> > >
> > > What the right fix is rather depends on other uses of __mcopy_atomic():
> > > if they obviously cannot hit that BUG_ON(page), you may prefer to leave
> > > it in, and fix it here where shmem_inode_acct_block() fails. Or you may
> > > prefer instead to delete that "else BUG_ON(page);" - looks as if that
> > > would end up doing the right thing.  Peter may have a preference.
> >
> > To me, the BUG_ON(page) was meant to guarantee that mfill_atomic_pte() had
> > consumed the page properly when possible.  Removing the BUG_ON() looks good
> > already; it will just stop covering the case when e.g. ret==0.
> >
> > So maybe it's slightly better to release the page when shmem_inode_acct_block()
> > fails (so as to still keep some guard on the page)?
> 
> This second issue, I will take some more time to investigate. :)

No worry - take your time. :)

Patch

diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 794d1538b8ba..39c094cc6641 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -53,6 +53,11 @@  enum mcopy_atomic_mode {
 	MCOPY_ATOMIC_CONTINUE,
 };
 
+extern int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy);
+
 extern ssize_t mcopy_atomic(struct mm_struct *dst_mm, unsigned long dst_start,
 			    unsigned long src_start, unsigned long len,
 			    bool *mmap_changing, __u64 mode);
diff --git a/mm/shmem.c b/mm/shmem.c
index 30c0bb501dc9..37db52f45cb5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2378,10 +2378,8 @@  int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	struct address_space *mapping = inode->i_mapping;
 	gfp_t gfp = mapping_gfp_mask(mapping);
 	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
-	spinlock_t *ptl;
 	void *page_kaddr;
 	struct page *page;
-	pte_t _dst_pte, *dst_pte;
 	int ret;
 	pgoff_t max_off;
 
@@ -2404,9 +2402,9 @@  int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 			/* fallback to copy_from_user outside mmap_lock */
 			if (unlikely(ret)) {
 				*pagep = page;
-				shmem_inode_unacct_blocks(inode, 1);
+				ret = -ENOENT;
 				/* don't free the page */
-				return -ENOENT;
+				goto out_unacct_blocks;
 			}
 		} else {		/* ZEROPAGE */
 			clear_highpage(page);
@@ -2432,32 +2430,10 @@  int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (ret)
 		goto out_release;
 
-	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
-	if (dst_vma->vm_flags & VM_WRITE)
-		_dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte));
-	else {
-		/*
-		 * We don't set the pte dirty if the vma has no
-		 * VM_WRITE permission, so mark the page dirty or it
-		 * could be freed from under us. We could do it
-		 * unconditionally before unlock_page(), but doing it
-		 * only if VM_WRITE is not set is faster.
-		 */
-		set_page_dirty(page);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-
-	ret = -EFAULT;
-	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-	if (unlikely(pgoff >= max_off))
-		goto out_release_unlock;
-
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_unlock;
-
-	lru_cache_add(page);
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, false);
+	if (ret)
+		goto out_release;
 
 	spin_lock_irq(&info->lock);
 	info->alloced++;
@@ -2465,21 +2441,11 @@  int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
 	shmem_recalc_inode(inode);
 	spin_unlock_irq(&info->lock);
 
-	inc_mm_counter(dst_mm, mm_counter_file(page));
-	page_add_file_rmap(page, false);
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-	pte_unmap_unlock(dst_pte, ptl);
+	SetPageDirty(page);
 	unlock_page(page);
 	ret = 0;
 out:
 	return ret;
-out_release_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
-	ClearPageDirty(page);
-	delete_from_page_cache(page);
 out_release:
 	unlock_page(page);
 	put_page(page);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 51d8c0127161..3a9ddbb2dbbd 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -51,18 +51,13 @@  struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 /*
  * Install PTEs, to map dst_addr (within dst_vma) to page.
  *
- * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
- * whether or not dst_vma is VM_SHARED. It also handles the more general
- * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
- * backed, or not).
- *
- * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
- * shmem_mcopy_atomic_pte instead.
+ * This function handles both MCOPY_ATOMIC_NORMAL and _CONTINUE for both shmem
+ * and anon, and for both shared and private VMAs.
  */
-static int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
-				    struct vm_area_struct *dst_vma,
-				    unsigned long dst_addr, struct page *page,
-				    bool newly_allocated, bool wp_copy)
+int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+			     struct vm_area_struct *dst_vma,
+			     unsigned long dst_addr, struct page *page,
+			     bool newly_allocated, bool wp_copy)
 {
 	int ret;
 	pte_t _dst_pte, *dst_pte;