
[v4,5/6] dax: fix missing writeprotect the pte entry

Message ID 20220302082718.32268-6-songmuchun@bytedance.com (mailing list archive)
State New, archived
Series Fix some bugs related to rmap and dax

Commit Message

Muchun Song March 2, 2022, 8:27 a.m. UTC
Currently dax_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an fsync/msync operation. This
can result in data loss in the following sequence:

  1) process A mmaps and writes to a DAX PMD, dirtying the PMD radix tree
     entry and making the pmd entry dirty and writeable.
  2) process B mmaps the same file with @offset (e.g. 4K) and @length
     (e.g. 4K) and writes to it, dirtying the PMD radix tree entry (already
     done in 1)) and making the pte entry dirty and writeable.
  3) fsync, flushing out PMD data and cleaning the radix tree entry. We
     currently fail to mark the pte entry as clean and write protected
     since the vma of process B is not covered in dax_entry_mkclean().
  4) process B writes to the pte. These don't cause any page faults since
     the pte entry is dirty and writeable. The radix tree entry remains
     clean.
  5) fsync, which fails to flush the dirty PMD data because the radix tree
     entry was clean.
  6) crash - dirty data that should have been fsync'd as part of 5) could
     still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns, fixing this issue.
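
For illustration, a minimal userspace sketch of the sequence above (not part
of the patch; the path /mnt/dax/file, the pre-sized 2M file on a dax-mounted
filesystem, and the absence of error handling are all assumptions of the
sketch):

/* Illustrative reproducer sketch only.  Assumes /mnt/dax/file already
 * exists, is at least 2M long, and sits on a filesystem mounted with
 * -o dax so its first 2M is mapped with a DAX PMD entry. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define PMD_SIZE	(2UL << 20)

int main(void)
{
	int fd = open("/mnt/dax/file", O_RDWR);
	char *a, *b;

	if (fork() == 0) {
		/* 1) process A: write through a PMD mapping, dirtying the
		 *    PMD radix tree entry and the pmd entry itself. */
		a = mmap(NULL, PMD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
		memset(a, 1, PMD_SIZE);
		pause();
	}

	sleep(1);	/* let process A dirty the PMD first */

	/* 2) process B: map 4K at offset 4K of the same file and dirty it,
	 *    installing a dirty, writeable pte under the same PMD entry. */
	b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 4096);
	b[0] = 1;

	fsync(fd);	/* 3) cleans the radix tree entry, but misses B's pte */
	b[0] = 2;	/* 4) no fault, radix tree entry stays clean */
	fsync(fd);	/* 5) flushes nothing; 6) a crash now can lose b[0] */

	return 0;
}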

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

Comments

Dan Williams March 10, 2022, 12:59 a.m. UTC | #1
On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> Currently dax_mapping_entry_mkclean() fails to clean and write protect
> the pte entry within a DAX PMD entry during an *sync operation. This
> can result in data loss in the following sequence:
>
>   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
>      making the pmd entry dirty and writeable.
>   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
>      write to the same file, dirtying PMD radix tree entry (already
>      done in 1)) and making the pte entry dirty and writeable.
>   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
>      currently fail to mark the pte entry as clean and write protected
>      since the vma of process B is not covered in dax_entry_mkclean().
>   4) process B writes to the pte. These don't cause any page faults since
>      the pte entry is dirty and writeable. The radix tree entry remains
>      clean.
>   5) fsync, which fails to flush the dirty PMD data because the radix tree
>      entry was clean.
>   6) crash - dirty data that should have been fsync'd as part of 5) could
>      still have been in the processor cache, and is lost.

Excellent description.

>
> Just to use pfn_mkclean_range() to clean the pfns to fix this issue.

So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
that do not have spare PTE bits to indicate pmd_devmap(). So this fix
can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
seems you can use the current page_mkclean_one(), right? So perhaps
the fix is to skip patch 3, keep patch 4 and make this patch use
page_mkclean_one() along with this:

diff --git a/fs/Kconfig b/fs/Kconfig
index 7a2b11c0b803..42108adb7a78 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -83,6 +83,7 @@ config FS_DAX_PMD
        depends on FS_DAX
        depends on ZONE_DEVICE
        depends on TRANSPARENT_HUGEPAGE
+       depends on !FS_DAX_LIMITED

 # Selected by DAX drivers that do not expect filesystem DAX to support
 # get_user_pages() of DAX mappings. I.e. "limited" indicates no support

...to preclude the pmd conflict in that case?
Muchun Song March 11, 2022, 9:04 a.m. UTC | #2
On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > the pte entry within a DAX PMD entry during an *sync operation. This
> > can result in data loss in the following sequence:
> >
> >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> >      making the pmd entry dirty and writeable.
> >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> >      write to the same file, dirtying PMD radix tree entry (already
> >      done in 1)) and making the pte entry dirty and writeable.
> >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> >      currently fail to mark the pte entry as clean and write protected
> >      since the vma of process B is not covered in dax_entry_mkclean().
> >   4) process B writes to the pte. These don't cause any page faults since
> >      the pte entry is dirty and writeable. The radix tree entry remains
> >      clean.
> >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> >      entry was clean.
> >   6) crash - dirty data that should have been fsync'd as part of 5) could
> >      still have been in the processor cache, and is lost.
>
> Excellent description.
>
> >
> > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
>
> So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> seems you can use the current page_mkclean_one(), right?

I don't know the history of CONFIG_FS_DAX_LIMITED.
page_mkclean_one() needs a struct page associated with
the pfn. Do struct pages exist when CONFIG_FS_DAX_LIMITED=y
and FS_DAX_PMD=n? If yes, I think you are right. But I don't
see that guarantee. I am not familiar with the DAX code, so what am
I missing here?

Thanks.
Dan Williams March 14, 2022, 8:50 p.m. UTC | #3
On Fri, Mar 11, 2022 at 1:06 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > > the pte entry within a DAX PMD entry during an *sync operation. This
> > > can result in data loss in the following sequence:
> > >
> > >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> > >      making the pmd entry dirty and writeable.
> > >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> > >      write to the same file, dirtying PMD radix tree entry (already
> > >      done in 1)) and making the pte entry dirty and writeable.
> > >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> > >      currently fail to mark the pte entry as clean and write protected
> > >      since the vma of process B is not covered in dax_entry_mkclean().
> > >   4) process B writes to the pte. These don't cause any page faults since
> > >      the pte entry is dirty and writeable. The radix tree entry remains
> > >      clean.
> > >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> > >      entry was clean.
> > >   6) crash - dirty data that should have been fsync'd as part of 5) could
> > >      still have been in the processor cache, and is lost.
> >
> > Excellent description.
> >
> > >
> > > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
> >
> > So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> > that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> > can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> > seems you can use the current page_mkclean_one(), right?
>
> I don't know the history of CONFIG_FS_DAX_LIMITED.
> page_mkclean_one() need a struct page associated with
> the pfn,  do the struct pages exist when CONFIG_FS_DAX_LIMITED
> and ! FS_DAX_PMD?

CONFIG_FS_DAX_LIMITED was created to preserve some DAX use for S390,
which does not have CONFIG_ARCH_HAS_PTE_DEVMAP. Without PTE_DEVMAP,
get_user_pages() for DAX mappings fails.

To your question, no, there are no pages at all in the
CONFIG_FS_DAX_LIMITED=y case. So page_mkclean_one() could only be
deployed for PMD mappings, but I think it is reasonable to just
disable PMD mappings for the CONFIG_FS_DAX_LIMITED=y case.

Going forward the hope is to remove the ARCH_HAS_PTE_DEVMAP
requirement for DAX and use PTE_SPECIAL for the S390 case. However,
that still keeps 'struct page' availability as an across-the-board
requirement.

> If yes, I think you are right. But I don't
> see this guarantee. I am not familiar with DAX code, so what am
> I missing here?

Perhaps I missed a 'struct page' dependency? I thought the bug you are
fixing only triggers in the presence of PMDs. The
CONFIG_FS_DAX_LIMITED=y case can still use the current "page-less"
mkclean path for PTEs.
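
For reference, the current page-less PTE mkclean step (the code the hunk
further below removes from dax_entry_mkclean()) boils down to
write-protecting and cleaning the pte in place:

	flush_cache_page(vma, address, pfn);
	pte = ptep_clear_flush(vma, address, ptep);
	pte = pte_wrprotect(pte);
	pte = pte_mkclean(pte);
	set_pte_at(vma->vm_mm, address, ptep, pte);

pfn_mkclean_range(), introduced by this series, is intended to perform the
same pte/pmd transition per VMA without requiring a struct page.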
Muchun Song March 15, 2022, 7:51 a.m. UTC | #4
On Tue, Mar 15, 2022 at 4:50 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Mar 11, 2022 at 1:06 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > > > the pte entry within a DAX PMD entry during an *sync operation. This
> > > > can result in data loss in the following sequence:
> > > >
> > > >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> > > >      making the pmd entry dirty and writeable.
> > > >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> > > >      write to the same file, dirtying PMD radix tree entry (already
> > > >      done in 1)) and making the pte entry dirty and writeable.
> > > >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> > > >      currently fail to mark the pte entry as clean and write protected
> > > >      since the vma of process B is not covered in dax_entry_mkclean().
> > > >   4) process B writes to the pte. These don't cause any page faults since
> > > >      the pte entry is dirty and writeable. The radix tree entry remains
> > > >      clean.
> > > >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> > > >      entry was clean.
> > > >   6) crash - dirty data that should have been fsync'd as part of 5) could
> > > >      still have been in the processor cache, and is lost.
> > >
> > > Excellent description.
> > >
> > > >
> > > > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
> > >
> > > So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> > > that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> > > can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> > > seems you can use the current page_mkclean_one(), right?
> >
> > I don't know the history of CONFIG_FS_DAX_LIMITED.
> > page_mkclean_one() need a struct page associated with
> > the pfn,  do the struct pages exist when CONFIG_FS_DAX_LIMITED
> > and ! FS_DAX_PMD?
>
> CONFIG_FS_DAX_LIMITED was created to preserve some DAX use for S390
> which does not have CONFIG_ARCH_HAS_PTE_DEVMAP. Without PTE_DEVMAP
> then get_user_pages() for DAX mappings fails.
>
> To your question, no, there are no pages at all in the
> CONFIG_FS_DAX_LIMITED=y case. So page_mkclean_one() could only be
> deployed for PMD mappings, but I think it is reasonable to just
> disable PMD mappings for the CONFIG_FS_DAX_LIMITED=y case.
>
> Going forward the hope is to remove the ARCH_HAS_PTE_DEVMAP
> requirement for DAX, and use PTE_SPECIAL for the S390 case. However,
> that still wants to have 'struct page' availability as an across the
> board requirement.

Got it. Thanks for your patient explanation.

>
> > If yes, I think you are right. But I don't
> > see this guarantee. I am not familiar with DAX code, so what am
> > I missing here?
>
> Perhaps I missed a 'struct page' dependency? I thought the bug you are
> fixing only triggers in the presence of PMDs. The

Right.

> CONFIG_FS_DAX_LIMITED=y case can still use the current "page-less"
> mkclean path for PTEs.

But I think introducing pfn_mkclean_range() makes the code
simpler and easier to maintain here, since it can handle both PTE
and PMD mappings.  And page_vma_mapped_walk() can work
on PFNs since commit [1], which is the case here, so we do not need
extra code to handle the page-less case.  What do you
think?

[1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=b786e44a4dbfe64476e7120ec7990b89a37be37d

Patch

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@ 
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@  static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@  static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There