[v2,00/10] fsdax,xfs: Add reflink&dedupe support for fsdax

Message ID 20210226002030.653855-1-ruansy.fnst@fujitsu.com (mailing list archive)

Message

Shiyang Ruan Feb. 26, 2021, 12:20 a.m. UTC
This patchset is an attempt to add CoW support for fsdax, taking XFS,
which has both reflink and fsdax features, as an example.

Changes from V1:
 - Factor some helper functions to simplify dax fault code
 - Introduce iomap_apply2() for dax_dedupe_file_range_compare()
 - Fix mistakes and other problems
 - Rebased on v5.11

One of the key mechanisms that needs to be implemented in fsdax is CoW.
Copy the data from the srcmap before we actually write data to the
destination iomap, and only copy the ranges whose data will not be
overwritten by the write.
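
As a rough sketch of the idea (a simplified stand-in for what
dax_iomap_cow_copy() in this series is meant to do; names are illustrative):
since the bytes inside the written range are about to be overwritten anyway,
only the unaligned head and tail of the surrounding block are copied from the
source extent to the destination.

#include <linux/kernel.h>	/* round_down()/round_up() */
#include <linux/string.h>	/* memcpy() */

/*
 * Sketch only: copy the not-to-be-overwritten edges of the block around
 * a write of [pos, pos + length).  @align is the copy granularity (a
 * power of two, e.g. PAGE_SIZE); @dst_kaddr/@src_kaddr point at the
 * start of the aligned block in the destination/source mappings.
 */
static void fsdax_cow_copy_edges(loff_t pos, loff_t length, size_t align,
				 void *dst_kaddr, const void *src_kaddr)
{
	loff_t blk_start = round_down(pos, align);
	loff_t blk_end = round_up(pos + length, align);
	size_t head = pos - blk_start;		/* bytes before the write */
	size_t tail = blk_end - (pos + length);	/* bytes after the write */

	if (head)
		memcpy(dst_kaddr, src_kaddr, head);
	if (tail)
		memcpy(dst_kaddr + (blk_end - blk_start) - tail,
		       src_kaddr + (blk_end - blk_start) - tail, tail);
}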

Another mechanism is range comparison.  In the page cache case,
readpage() is used to load on-disk data into the page cache so that it
can be compared.  In the fsdax case, readpage() does not work, so we
need another way to compare data, using direct access.
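
For illustration, a minimal sketch of such a comparison (a hypothetical
helper; in the series this is driven through iomap_apply2() and the dedupe
compare actor, and short dax_direct_access() mappings are ignored here for
brevity):

#include <linux/dax.h>
#include <linux/pfn.h>
#include <linux/string.h>

/* Assumes both ranges sit on the same dax_device and @len is page aligned. */
static int fsdax_range_compare(struct dax_device *dax_dev, pgoff_t src_pgoff,
			       pgoff_t dst_pgoff, size_t len, bool *same)
{
	void *src_kaddr, *dst_kaddr;
	long ret;

	ret = dax_direct_access(dax_dev, src_pgoff, PHYS_PFN(len),
				&src_kaddr, NULL);
	if (ret < 0)
		return ret;
	ret = dax_direct_access(dax_dev, dst_pgoff, PHYS_PFN(len),
				&dst_kaddr, NULL);
	if (ret < 0)
		return ret;

	*same = !memcmp(src_kaddr, dst_kaddr, len);
	return 0;
}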

With these two mechanisms implemented in fsdax, we are able to make
reflink and fsdax work together in XFS.


Some of the patches are picked up from Goldwyn's patchset.  I made some
changes to adapt them to this patchset.

(Rebased on v5.11)
==

Shiyang Ruan (10):
  fsdax: Factor helpers to simplify dax fault code
  fsdax: Factor helper: dax_fault_actor()
  fsdax: Output address in dax_iomap_pfn() and rename it
  fsdax: Introduce dax_iomap_cow_copy()
  fsdax: Replace mmap entry in case of CoW
  fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero
  iomap: Introduce iomap_apply2() for operations on two files
  fsdax: Dedup file range to use a compare function
  fs/xfs: Handle CoW for fsdax write() path
  fs/xfs: Add dedupe support for fsdax

 fs/dax.c               | 532 +++++++++++++++++++++++++++--------------
 fs/iomap/apply.c       |  51 ++++
 fs/iomap/buffered-io.c |   2 +-
 fs/remap_range.c       |  45 +++-
 fs/xfs/xfs_bmap_util.c |   3 +-
 fs/xfs/xfs_file.c      |  29 ++-
 fs/xfs/xfs_inode.c     |   8 +-
 fs/xfs/xfs_inode.h     |   1 +
 fs/xfs/xfs_iomap.c     |  30 ++-
 fs/xfs/xfs_iomap.h     |   1 +
 fs/xfs/xfs_iops.c      |  11 +-
 fs/xfs/xfs_reflink.c   |  16 +-
 include/linux/dax.h    |   7 +-
 include/linux/fs.h     |  15 +-
 include/linux/iomap.h  |   7 +-
 15 files changed, 550 insertions(+), 208 deletions(-)

Comments

Shiyang Ruan Feb. 26, 2021, 9:45 a.m. UTC | #1
Hi, guys

Besides this patchset, I'd like to confirm something about the "EXPERIMENTAL" tag for dax in XFS.

In XFS, the "EXPERIMENTAL" tag, which is reported in a warning message when we mount a pmem device with the dax option, has existed for a while.  It's a bit annoying when using the fsdax feature, so my initial intention was to remove this tag, and I started to find and solve the problems which prevent it from being removed.

As discussed before, there are 3 main problems.  The first one is "dax semantics", which has been resolved.  The remaining two are "RMAP for fsdax" and "support dax reflink for filesystems", which I have been working on.

So, what I want to confirm is: does it mean that we can remove the "EXPERIMENTAL" tag when the remaining two problems are solved?  Or are there other important problems that need to be fixed before removing it?  If there are, could you please point them out?

Thank you.


--
Ruan Shiyang.
Darrick J. Wong Feb. 26, 2021, 7:04 p.m. UTC | #2
On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> Hi, guys
> 
> Beside this patchset, I'd like to confirm something about the
> "EXPERIMENTAL" tag for dax in XFS.
> 
> In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> when we mount a pmem device with dax option, has been existed for a
> while.  It's a bit annoying when using fsdax feature.  So, my initial
> intention was to remove this tag.  And I started to find out and solve
> the problems which prevent it from being removed.
> 
> As is talked before, there are 3 main problems.  The first one is "dax
> semantics", which has been resolved.  The rest two are "RMAP for
> fsdax" and "support dax reflink for filesystem", which I have been
> working on.  

<nod>

> So, what I want to confirm is: does it means that we can remove the
> "EXPERIMENTAL" tag when the rest two problem are solved?

Yes.  I'd keep the experimental tag for a cycle or two to make sure that
nothing new pops up, but otherwise the two patchsets you've sent close
those two big remaining gaps.  Thank you for working on this!

> Or maybe there are other important problems need to be fixed before
> removing it?  If there are, could you please show me that?

That remains to be seen through QA/validation, but I think that's it.

Granted, I still have to read through the two patchsets...

--D

> 
> Thank you.
> 
> 
> --
> Ruan Shiyang.
Dan Williams Feb. 26, 2021, 7:24 p.m. UTC | #3
On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
>
> On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> > Hi, guys
> >
> > Beside this patchset, I'd like to confirm something about the
> > "EXPERIMENTAL" tag for dax in XFS.
> >
> > In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> > when we mount a pmem device with dax option, has been existed for a
> > while.  It's a bit annoying when using fsdax feature.  So, my initial
> > intention was to remove this tag.  And I started to find out and solve
> > the problems which prevent it from being removed.
> >
> > As is talked before, there are 3 main problems.  The first one is "dax
> > semantics", which has been resolved.  The rest two are "RMAP for
> > fsdax" and "support dax reflink for filesystem", which I have been
> > working on.
>
> <nod>
>
> > So, what I want to confirm is: does it means that we can remove the
> > "EXPERIMENTAL" tag when the rest two problem are solved?
>
> Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> nothing new pops up, but otherwise the two patchsets you've sent close
> those two big remaining gaps.  Thank you for working on this!
>
> > Or maybe there are other important problems need to be fixed before
> > removing it?  If there are, could you please show me that?
>
> That remains to be seen through QA/validation, but I think that's it.
>
> Granted, I still have to read through the two patchsets...

I've been meaning to circle back here as well.

My immediate concern is the issue Jason recently highlighted [1] with
respect to invalidating all dax mappings when / if the device is
ripped out from underneath the fs. I don't think that will collide
with Ruan's implementation, but it does need new communication from
driver to fs about removal events.

[1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
Dave Chinner Feb. 26, 2021, 8:51 p.m. UTC | #4
On Fri, Feb 26, 2021 at 11:24:53AM -0800, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
> >
> > On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> > > Hi, guys
> > >
> > > Beside this patchset, I'd like to confirm something about the
> > > "EXPERIMENTAL" tag for dax in XFS.
> > >
> > > In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> > > when we mount a pmem device with dax option, has been existed for a
> > > while.  It's a bit annoying when using fsdax feature.  So, my initial
> > > intention was to remove this tag.  And I started to find out and solve
> > > the problems which prevent it from being removed.
> > >
> > > As is talked before, there are 3 main problems.  The first one is "dax
> > > semantics", which has been resolved.  The rest two are "RMAP for
> > > fsdax" and "support dax reflink for filesystem", which I have been
> > > working on.
> >
> > <nod>
> >
> > > So, what I want to confirm is: does it means that we can remove the
> > > "EXPERIMENTAL" tag when the rest two problem are solved?
> >
> > Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> > nothing new pops up, but otherwise the two patchsets you've sent close
> > those two big remaining gaps.  Thank you for working on this!
> >
> > > Or maybe there are other important problems need to be fixed before
> > > removing it?  If there are, could you please show me that?
> >
> > That remains to be seen through QA/validation, but I think that's it.
> >
> > Granted, I still have to read through the two patchsets...
> 
> I've been meaning to circle back here as well.
> 
> My immediate concern is the issue Jason recently highlighted [1] with
> respect to invalidating all dax mappings when / if the device is
> ripped out from underneath the fs. I don't think that will collide
> with Ruan's implementation, but it does need new communication from
> driver to fs about removal events.
> 
> [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com

Oh, yay.

The XFS shutdown code is centred around preventing new IO from being
issued - we don't actually do anything about DAX mappings because,
well, I don't think anyone on the filesystem side thought they had
to do anything special if pmem went away from under it.

My understanding -was- that the pmem removal invalidates
all the ptes currently mapped into CPU page tables that point at
the dax device across the system. The vmas that manage these
mappings are not really something the filesystem manages,
but a function of the mm subsystem. What the filesystem cares about
is that it gets page faults triggered when a change of state occurs
so that it can remap the page to its backing store correctly.

IOWs, all the mm subsystem needs to do when pmem goes away is clear the
CPU ptes, because then when userspace tries to access the
mapped DAX pages we get a new page fault. In processing the fault, the
filesystem will try to get direct access to the pmem from the block
device. This will get an ENODEV error from the block device because
the backing store (pmem) has been unplugged and is no longer
there...

AFAICT, as long as pmem removal invalidates all the active ptes that
point at the pmem being removed, the filesystem doesn't need to
care about device removal at all, DAX or no DAX...

Cheers,

Dave.
Dan Williams Feb. 26, 2021, 8:59 p.m. UTC | #5
On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Fri, Feb 26, 2021 at 11:24:53AM -0800, Dan Williams wrote:
> > On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
> > >
> > > On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> > > > Hi, guys
> > > >
> > > > Beside this patchset, I'd like to confirm something about the
> > > > "EXPERIMENTAL" tag for dax in XFS.
> > > >
> > > > In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> > > > when we mount a pmem device with dax option, has been existed for a
> > > > while.  It's a bit annoying when using fsdax feature.  So, my initial
> > > > intention was to remove this tag.  And I started to find out and solve
> > > > the problems which prevent it from being removed.
> > > >
> > > > As is talked before, there are 3 main problems.  The first one is "dax
> > > > semantics", which has been resolved.  The rest two are "RMAP for
> > > > fsdax" and "support dax reflink for filesystem", which I have been
> > > > working on.
> > >
> > > <nod>
> > >
> > > > So, what I want to confirm is: does it means that we can remove the
> > > > "EXPERIMENTAL" tag when the rest two problem are solved?
> > >
> > > Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> > > nothing new pops up, but otherwise the two patchsets you've sent close
> > > those two big remaining gaps.  Thank you for working on this!
> > >
> > > > Or maybe there are other important problems need to be fixed before
> > > > removing it?  If there are, could you please show me that?
> > >
> > > That remains to be seen through QA/validation, but I think that's it.
> > >
> > > Granted, I still have to read through the two patchsets...
> >
> > I've been meaning to circle back here as well.
> >
> > My immediate concern is the issue Jason recently highlighted [1] with
> > respect to invalidating all dax mappings when / if the device is
> > ripped out from underneath the fs. I don't think that will collide
> > with Ruan's implementation, but it does need new communication from
> > driver to fs about removal events.
> >
> > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
>
> Oh, yay.
>
> The XFS shutdown code is centred around preventing new IO from being
> issued - we don't actually do anything about DAX mappings because,
> well, I don't think anyone on the filesystem side thought they had
> to do anything special if pmem went away from under it.
>
> My understanding -was- that the pmem removal invalidates
> all the ptes currently mapped into CPU page tables that point at
> the dax device across the system. THe vmas that manage these
> mappings are not really something the filesystem really manages,
> but a function of the mm subsystem. What the filesystem cares about
> is that it gets page faults triggered when a change of state occurs
> so that it can remap the page to it's backing store correctly.
>
> IOWs, all the mm subsystem needs to when pmem goes away is clear the
> CPU ptes, because then when then when userspace tries to access the
> mapped DAX pages we get a new page fault. In processing the fault, the
> filesystem will try to get direct access to the pmem from the block
> device. This will get an ENODEV error from the block device because
> because the backing store (pmem) has been unplugged and is no longer
> there...
>
> AFAICT, as long as pmem removal invalidates all the active ptes that
> point at the pmem being removed, the filesystem doesn't need to
> care about device removal at all, DAX or no DAX...

How would the pmem removal do that without walking all the active
inodes in the fs at the time of shutdown and calling
unmap_mapping_range(inode->i_mapping, 0, 0, 1)?

The core-mm does tear down the ptes in the direct map, but user
mappings to pmem are not torn down, afaics, in xfs_do_force_shutdown().
Dave Chinner Feb. 26, 2021, 9:27 p.m. UTC | #6
On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Fri, Feb 26, 2021 at 11:24:53AM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
> > > >
> > > > On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> > > > > Hi, guys
> > > > >
> > > > > Beside this patchset, I'd like to confirm something about the
> > > > > "EXPERIMENTAL" tag for dax in XFS.
> > > > >
> > > > > In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> > > > > when we mount a pmem device with dax option, has been existed for a
> > > > > while.  It's a bit annoying when using fsdax feature.  So, my initial
> > > > > intention was to remove this tag.  And I started to find out and solve
> > > > > the problems which prevent it from being removed.
> > > > >
> > > > > As is talked before, there are 3 main problems.  The first one is "dax
> > > > > semantics", which has been resolved.  The rest two are "RMAP for
> > > > > fsdax" and "support dax reflink for filesystem", which I have been
> > > > > working on.
> > > >
> > > > <nod>
> > > >
> > > > > So, what I want to confirm is: does it means that we can remove the
> > > > > "EXPERIMENTAL" tag when the rest two problem are solved?
> > > >
> > > > Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> > > > nothing new pops up, but otherwise the two patchsets you've sent close
> > > > those two big remaining gaps.  Thank you for working on this!
> > > >
> > > > > Or maybe there are other important problems need to be fixed before
> > > > > removing it?  If there are, could you please show me that?
> > > >
> > > > That remains to be seen through QA/validation, but I think that's it.
> > > >
> > > > Granted, I still have to read through the two patchsets...
> > >
> > > I've been meaning to circle back here as well.
> > >
> > > My immediate concern is the issue Jason recently highlighted [1] with
> > > respect to invalidating all dax mappings when / if the device is
> > > ripped out from underneath the fs. I don't think that will collide
> > > with Ruan's implementation, but it does need new communication from
> > > driver to fs about removal events.
> > >
> > > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> >
> > Oh, yay.
> >
> > The XFS shutdown code is centred around preventing new IO from being
> > issued - we don't actually do anything about DAX mappings because,
> > well, I don't think anyone on the filesystem side thought they had
> > to do anything special if pmem went away from under it.
> >
> > My understanding -was- that the pmem removal invalidates
> > all the ptes currently mapped into CPU page tables that point at
> > the dax device across the system. THe vmas that manage these
> > mappings are not really something the filesystem really manages,
> > but a function of the mm subsystem. What the filesystem cares about
> > is that it gets page faults triggered when a change of state occurs
> > so that it can remap the page to it's backing store correctly.
> >
> > IOWs, all the mm subsystem needs to when pmem goes away is clear the
> > CPU ptes, because then when then when userspace tries to access the
> > mapped DAX pages we get a new page fault. In processing the fault, the
> > filesystem will try to get direct access to the pmem from the block
> > device. This will get an ENODEV error from the block device because
> > because the backing store (pmem) has been unplugged and is no longer
> > there...
> >
> > AFAICT, as long as pmem removal invalidates all the active ptes that
> > point at the pmem being removed, the filesystem doesn't need to
> > care about device removal at all, DAX or no DAX...
> 
> How would the pmem removal do that without walking all the active
> inodes in the fs at the time of shutdown and call
> unmap_mapping_range(inode->i_mapping, 0, 0, 1)?

Which then immediately ends up back at the vmas that manage the ptes
to unmap them.

Isn't finding the vma(s) that map a specific memory range exactly
what the rmap code in the mm subsystem is supposed to address?

Cheers,

Dave.
Dan Williams Feb. 26, 2021, 10:41 p.m. UTC | #7
On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > On Fri, Feb 26, 2021 at 11:24:53AM -0800, Dan Williams wrote:
> > > > On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
> > > > >
> > > > > On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> > > > > > Hi, guys
> > > > > >
> > > > > > Beside this patchset, I'd like to confirm something about the
> > > > > > "EXPERIMENTAL" tag for dax in XFS.
> > > > > >
> > > > > > In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> > > > > > when we mount a pmem device with dax option, has been existed for a
> > > > > > while.  It's a bit annoying when using fsdax feature.  So, my initial
> > > > > > intention was to remove this tag.  And I started to find out and solve
> > > > > > the problems which prevent it from being removed.
> > > > > >
> > > > > > As is talked before, there are 3 main problems.  The first one is "dax
> > > > > > semantics", which has been resolved.  The rest two are "RMAP for
> > > > > > fsdax" and "support dax reflink for filesystem", which I have been
> > > > > > working on.
> > > > >
> > > > > <nod>
> > > > >
> > > > > > So, what I want to confirm is: does it means that we can remove the
> > > > > > "EXPERIMENTAL" tag when the rest two problem are solved?
> > > > >
> > > > > Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> > > > > nothing new pops up, but otherwise the two patchsets you've sent close
> > > > > those two big remaining gaps.  Thank you for working on this!
> > > > >
> > > > > > Or maybe there are other important problems need to be fixed before
> > > > > > removing it?  If there are, could you please show me that?
> > > > >
> > > > > That remains to be seen through QA/validation, but I think that's it.
> > > > >
> > > > > Granted, I still have to read through the two patchsets...
> > > >
> > > > I've been meaning to circle back here as well.
> > > >
> > > > My immediate concern is the issue Jason recently highlighted [1] with
> > > > respect to invalidating all dax mappings when / if the device is
> > > > ripped out from underneath the fs. I don't think that will collide
> > > > with Ruan's implementation, but it does need new communication from
> > > > driver to fs about removal events.
> > > >
> > > > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> > >
> > > Oh, yay.
> > >
> > > The XFS shutdown code is centred around preventing new IO from being
> > > issued - we don't actually do anything about DAX mappings because,
> > > well, I don't think anyone on the filesystem side thought they had
> > > to do anything special if pmem went away from under it.
> > >
> > > My understanding -was- that the pmem removal invalidates
> > > all the ptes currently mapped into CPU page tables that point at
> > > the dax device across the system. THe vmas that manage these
> > > mappings are not really something the filesystem really manages,
> > > but a function of the mm subsystem. What the filesystem cares about
> > > is that it gets page faults triggered when a change of state occurs
> > > so that it can remap the page to it's backing store correctly.
> > >
> > > IOWs, all the mm subsystem needs to when pmem goes away is clear the
> > > CPU ptes, because then when then when userspace tries to access the
> > > mapped DAX pages we get a new page fault. In processing the fault, the
> > > filesystem will try to get direct access to the pmem from the block
> > > device. This will get an ENODEV error from the block device because
> > > because the backing store (pmem) has been unplugged and is no longer
> > > there...
> > >
> > > AFAICT, as long as pmem removal invalidates all the active ptes that
> > > point at the pmem being removed, the filesystem doesn't need to
> > > care about device removal at all, DAX or no DAX...
> >
> > How would the pmem removal do that without walking all the active
> > inodes in the fs at the time of shutdown and call
> > unmap_mapping_range(inode->i_mapping, 0, 0, 1)?
>
> Which then immediately ends up back at the vmas that manage the ptes
> to unmap them.
>
> Isn't finding the vma(s) that map a specific memory range exactly
> what the rmap code in the mm subsystem is supposed to address?

rmap can only look up vmas from a virt address relative to a given
mm_struct. The driver has neither the list of mm_struct objects nor
virt addresses to do a lookup. All it knows is that someone might have
mapped pages through the fsdax interface.

To me this looks like a notifier that fires from memunmap_pages()
after dev_pagemap_kill() to notify any block_device associated with
that dev_pagemap() to say that any dax mappings arranged through this
block_device are now invalid. The reason to do this after
dev_pagemap_kill() is so that any new mapping attempts that are racing
the removal will be blocked.

The receiver of that notification needs to go from a block_device to a
superblock that has mapped inodes and walk ->sb_inodes triggering the
unmap/invalidation.
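
For illustration, a hedged sketch of that receiver side (all names are
hypothetical, no such notifier exists in mainline today, and the walk follows
the same take-a-reference-then-drop-the-list-lock pattern as
drop_pagecache_sb()):

#include <linux/fs.h>
#include <linux/mm.h>

/* Hypothetical: called with the super_block resolved from the dying bdev. */
static void fsdax_zap_sb_mappings(struct super_block *sb)
{
	struct inode *inode, *toput = NULL;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		if ((inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) ||
		    !IS_DAX(inode)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(&sb->s_inode_list_lock);

		/*
		 * Zap every pte (even_cows == 1) so the next user access
		 * faults instead of touching the unplugged pmem.
		 */
		unmap_mapping_range(inode->i_mapping, 0, 0, 1);

		iput(toput);
		toput = inode;
		spin_lock(&sb->s_inode_list_lock);
	}
	spin_unlock(&sb->s_inode_list_lock);
	iput(toput);
}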
Dave Chinner Feb. 27, 2021, 10:36 p.m. UTC | #8
On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > My immediate concern is the issue Jason recently highlighted [1] with
> > > > > respect to invalidating all dax mappings when / if the device is
> > > > > ripped out from underneath the fs. I don't think that will collide
> > > > > with Ruan's implementation, but it does need new communication from
> > > > > driver to fs about removal events.
> > > > >
> > > > > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> > > >
> > > > Oh, yay.
> > > >
> > > > The XFS shutdown code is centred around preventing new IO from being
> > > > issued - we don't actually do anything about DAX mappings because,
> > > > well, I don't think anyone on the filesystem side thought they had
> > > > to do anything special if pmem went away from under it.
> > > >
> > > > My understanding -was- that the pmem removal invalidates
> > > > all the ptes currently mapped into CPU page tables that point at
> > > > the dax device across the system. THe vmas that manage these
> > > > mappings are not really something the filesystem really manages,
> > > > but a function of the mm subsystem. What the filesystem cares about
> > > > is that it gets page faults triggered when a change of state occurs
> > > > so that it can remap the page to it's backing store correctly.
> > > >
> > > > IOWs, all the mm subsystem needs to when pmem goes away is clear the
> > > > CPU ptes, because then when then when userspace tries to access the
> > > > mapped DAX pages we get a new page fault. In processing the fault, the
> > > > filesystem will try to get direct access to the pmem from the block
> > > > device. This will get an ENODEV error from the block device because
> > > > because the backing store (pmem) has been unplugged and is no longer
> > > > there...
> > > >
> > > > AFAICT, as long as pmem removal invalidates all the active ptes that
> > > > point at the pmem being removed, the filesystem doesn't need to
> > > > care about device removal at all, DAX or no DAX...
> > >
> > > How would the pmem removal do that without walking all the active
> > > inodes in the fs at the time of shutdown and call
> > > unmap_mapping_range(inode->i_mapping, 0, 0, 1)?
> >
> > Which then immediately ends up back at the vmas that manage the ptes
> > to unmap them.
> >
> > Isn't finding the vma(s) that map a specific memory range exactly
> > what the rmap code in the mm subsystem is supposed to address?
> 
> rmap can lookup only vmas from a virt address relative to a given
> mm_struct. The driver has neither the list of mm_struct objects nor
> virt addresses to do a lookup. All it knows is that someone might have
> mapped pages through the fsdax interface.

So there's no physical addr to vma translation in the mm subsystem
at all?

That doesn't make sense. We do exactly this for hwpoison for DAX
mappings. While we don't look at ptes, we get a pfn, check if it
points to the PMEM that is being removed, map it to the relevant
struct page, run collect_procs() on that page, then kill the user
processes that map that page.

So why can't we walk the ptes, check the physical pages that they
map to, and if they map to a pmem page, go poison that page and
kill any user process that maps it?

i.e. I can't see how unexpected pmem device unplug is any different
to an MCE delivering a hwpoison event to a DAX mapped page.  Both
indicate a physical address range now contains invalid data and the
filesystem has to take the same action...
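
A minimal sketch of that kind of sweep (purely illustrative, not an existing
kernel path; the obvious cost is that it has to touch every pfn in the removed
range):

#include <linux/mm.h>

static void pmem_poison_removed_range(unsigned long start_pfn,
				      unsigned long nr_pfns)
{
	unsigned long pfn;

	for (pfn = start_pfn; pfn < start_pfn + nr_pfns; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* only pages someone actually has mapped need any action */
		if (!page->mapping)
			continue;
		/* reuse the hwpoison path: collect_procs() + kill */
		memory_failure(pfn, 0);
	}
}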

IOWs, we could just call ->corrupted_range(0, EOD) here to tell the
filesystem the entire device went away. Then the filesystem can deal
with this however it needs to. However, it would be more efficient
from an invalidation POV to just call it on the pages that have
currently active ptes because once the block device is dead
new page faults on DAX mappings will get a SIGBUS naturally.

> To me this looks like a notifier that fires from memunmap_pages()
> after dev_pagemap_kill() to notify any block_device associated with
> that dev_pagemap() to say that any dax mappings arranged through this
> block_device are now invalid. The reason to do this after
> dev_pagemap_kill() is so that any new mapping attempts that are racing
> the removal will be blocked.

I don't see why this needs a unique notifier. At the filesystem
level, we want a single interface that tells us "something bad
happened to the block device", not a proliferation of similar but
subtly different "bad thing X happened to block device" interfaces
that are unique to specific physical device drivers...

> The receiver of that notification needs to go from a block_device to a
> superblock that has mapped inodes and walk ->sb_inodes triggering the
> unmap/invalidation.

Not necessarily.

What if the filesystem is managing mirrored data across multiple
devices and this device is only one leg of the mirror? Or what if the
pmem was used by the RT device in XFS and the data/log devices are
still fine? What if the pmem is just being used as a cache tier, and
no data was actually lost?

IOWs, what needs to happen at this point is very filesystem
specific. Assuming that "device unplug == filesystem dead" is not
correct, nor is specifying a generic action that assumes the
filesystem is dead because a device it is using went away.

Cheers,

Dave.
Dan Williams Feb. 27, 2021, 11:40 p.m. UTC | #9
On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > > On Fri, Feb 26, 2021 at 12:51 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > > My immediate concern is the issue Jason recently highlighted [1] with
> > > > > > respect to invalidating all dax mappings when / if the device is
> > > > > > ripped out from underneath the fs. I don't think that will collide
> > > > > > with Ruan's implementation, but it does need new communication from
> > > > > > driver to fs about removal events.
> > > > > >
> > > > > > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> > > > >
> > > > > Oh, yay.
> > > > >
> > > > > The XFS shutdown code is centred around preventing new IO from being
> > > > > issued - we don't actually do anything about DAX mappings because,
> > > > > well, I don't think anyone on the filesystem side thought they had
> > > > > to do anything special if pmem went away from under it.
> > > > >
> > > > > My understanding -was- that the pmem removal invalidates
> > > > > all the ptes currently mapped into CPU page tables that point at
> > > > > the dax device across the system. THe vmas that manage these
> > > > > mappings are not really something the filesystem really manages,
> > > > > but a function of the mm subsystem. What the filesystem cares about
> > > > > is that it gets page faults triggered when a change of state occurs
> > > > > so that it can remap the page to it's backing store correctly.
> > > > >
> > > > > IOWs, all the mm subsystem needs to when pmem goes away is clear the
> > > > > CPU ptes, because then when then when userspace tries to access the
> > > > > mapped DAX pages we get a new page fault. In processing the fault, the
> > > > > filesystem will try to get direct access to the pmem from the block
> > > > > device. This will get an ENODEV error from the block device because
> > > > > because the backing store (pmem) has been unplugged and is no longer
> > > > > there...
> > > > >
> > > > > AFAICT, as long as pmem removal invalidates all the active ptes that
> > > > > point at the pmem being removed, the filesystem doesn't need to
> > > > > care about device removal at all, DAX or no DAX...
> > > >
> > > > How would the pmem removal do that without walking all the active
> > > > inodes in the fs at the time of shutdown and call
> > > > unmap_mapping_range(inode->i_mapping, 0, 0, 1)?
> > >
> > > Which then immediately ends up back at the vmas that manage the ptes
> > > to unmap them.
> > >
> > > Isn't finding the vma(s) that map a specific memory range exactly
> > > what the rmap code in the mm subsystem is supposed to address?
> >
> > rmap can lookup only vmas from a virt address relative to a given
> > mm_struct. The driver has neither the list of mm_struct objects nor
> > virt addresses to do a lookup. All it knows is that someone might have
> > mapped pages through the fsdax interface.
>
> So there's no physical addr to vma translation in the mm subsystem
> at all?
>
> That doesn't make sense. We do exactly this for hwpoison for DAX
> mappings. While we don't look at ptes, we get a pfn,

True, hwpoison does get a known failing pfn and then uses page->mapping
to get the 'struct address_space' to do the unmap. I discounted that
approach from the outset because it would mean walking every pfn in a
multi-terabyte device just in case one is mapped at device shutdown.

> it points to, check if it points to the PMEM that is being removed,
> grab the page it points to, map that to the relevant struct page,
> run collect_procs() on that page, then kill the user processes that
> map that page.
>
> So why can't we walk the ptescheck the physical pages that they
> map to and if they map to a pmem page we go poison that
> page and that kills any user process that maps it.
>
> i.e. I can't see how unexpected pmem device unplug is any different
> to an MCE delivering a hwpoison event to a DAX mapped page.

I guess the tradeoff is walking a long list of inodes vs walking a
large array of pages.

There's likely always more pages than inodes, but perhaps it's more
efficient to walk the 'struct page' array than sb->s_inodes?

>  Both
> indicate a physical address range now contains invalid data and the
> filesystem has to take the same action...
>
> IOWs, we could just call ->corrupted_range(0, EOD) here to tell the
> filesystem the entire device went away. Then the filesystem deal
> with this however it needs to. However, it would be more efficient
> from an invalidation POV to just call it on the pages that have
> currently active ptes because once the block device is dead
> new page faults on DAX mappings will get a SIGBUS naturally.

There is no efficient way to look up "currently active ptes" relative
to a physical pfn range.

SIGBUS will happen naturally either way. I don't think the hwpoison
signal with the extra BUS_MCEERR_* info is appropriate given that it
indicates data loss rather than the data going offline when a device
is unplugged.

>
> > To me this looks like a notifier that fires from memunmap_pages()
> > after dev_pagemap_kill() to notify any block_device associated with
> > that dev_pagemap() to say that any dax mappings arranged through this
> > block_device are now invalid. The reason to do this after
> > dev_pagemap_kill() is so that any new mapping attempts that are racing
> > the removal will be blocked.
>
> I don't see why this needs a unique notifier. At the filesystem
> level, we want a single interface that tells us "something bad
> happened to the block device", not a proliferation of similar but
> subtly different "bad thing X happened to block device" interfaces
> that are unique to specific physical device drivers...
>
> > The receiver of that notification needs to go from a block_device to a
> > superblock that has mapped inodes and walk ->sb_inodes triggering the
> > unmap/invalidation.
>
> Not necessarily.
>
> What if the filesystem is managing mirrored data across multiple
> devices and this device is only one leg of the mirror?

I can see DAX mapping for read access to one leg of the mirror. The
unplug would fire zap_pte for all the inodes with DAX mappings for
that fs. The filesystem is still free at that point to wait for the next
user access, take a refault, and re-establish the mapping to another
leg of the mirror.

> Or that the
> pmem was used by the RT device in XFS and the data/log devices are
> still fine?

I was assuming that the callback would only be triggered for a dax
device as the data device. So xfs_open_devices() would register
mp->m_super for dax_rtdev.

> What if the pmem is just being used as a cache tier, and
> no data was actually lost?

That's fine: the cache mapping is zapped and re-fault figures out what
to do. If anything these questions are a reason not to use
->corrupted_range() for this because recovery can happen at refault vs
taking permanent action on a data loss event.

>
> IOWs, what needs to happen at this point is very filesystem
> specific. Assuming that "device unplug == filesystem dead" is not
> correct, nor is specifying a generic action that assumes the
> filesystem is dead because a device it is using went away.

Ok, I think I set this discussion in the wrong direction implying any
mapping of this action to a "filesystem dead" event. It's just a "zap
all ptes" event and upper layers recover from there.
Dave Chinner Feb. 28, 2021, 10:38 p.m. UTC | #10
On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > it points to, check if it points to the PMEM that is being removed,
> > grab the page it points to, map that to the relevant struct page,
> > run collect_procs() on that page, then kill the user processes that
> > map that page.
> >
> > So why can't we walk the ptescheck the physical pages that they
> > map to and if they map to a pmem page we go poison that
> > page and that kills any user process that maps it.
> >
> > i.e. I can't see how unexpected pmem device unplug is any different
> > to an MCE delivering a hwpoison event to a DAX mapped page.
> 
> I guess the tradeoff is walking a long list of inodes vs walking a
> large array of pages.

Not really. You're assuming all a filesystem has to do is invalidate
everything if a device goes away, and that's not true. Finding if an
inode has a mapping that spans a specific device in a multi-device
filesystem can be a lot more complex than that. Just walking inodes
is easy - determining which inodes need invalidation is the hard
part.

That's where ->corrupt_range() comes in - the filesystem is already
set up to do reverse mapping from physical range to inode(s)
offsets...

> There's likely always more pages than inodes, but perhaps it's more
> efficient to walk the 'struct page' array than sb->s_inodes?

I really don't see it - you seem to be telling us that invalidation is an
either/or choice. There are more ways to convert a physical block
address -> inode file offset and mapping index than brute force
inode cache walks....

.....

> > IOWs, what needs to happen at this point is very filesystem
> > specific. Assuming that "device unplug == filesystem dead" is not
> > correct, nor is specifying a generic action that assumes the
> > filesystem is dead because a device it is using went away.
> 
> Ok, I think I set this discussion in the wrong direction implying any
> mapping of this action to a "filesystem dead" event. It's just a "zap
> all ptes" event and upper layers recover from there.

Yes, that's exactly what ->corrupt_range() is intended for. It
allows the filesystem to lock out access to the bad range
and then recover the data. Or metadata, if that's where the bad
range lands. If that recovery fails, it can then report a data
loss/filesystem shutdown event to userspace and kill user procs that
span the bad range...

FWIW, is this notification going to occur before or after the device
has been physically unplugged? i.e. what do we do about the
time-of-unplug-to-time-of-invalidation window where userspace can
still attempt to access the missing pmem through the
not-yet-invalidated ptes? It may not be likely that people just yank
pmem nvdimms out of machines, but with NVMe persistent memory
spaces, there's every chance that someone pulls the wrong device...

Cheers,

Dave.
Yasunori Gotou (Fujitsu) March 1, 2021, 7:26 a.m. UTC | #11
Hello, Dan-san,

On 2021/02/27 4:24, Dan Williams wrote:
> On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
>>
>> On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
>>> Hi, guys
>>>
>>> Beside this patchset, I'd like to confirm something about the
>>> "EXPERIMENTAL" tag for dax in XFS.
>>>
>>> In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
>>> when we mount a pmem device with dax option, has been existed for a
>>> while.  It's a bit annoying when using fsdax feature.  So, my initial
>>> intention was to remove this tag.  And I started to find out and solve
>>> the problems which prevent it from being removed.
>>>
>>> As is talked before, there are 3 main problems.  The first one is "dax
>>> semantics", which has been resolved.  The rest two are "RMAP for
>>> fsdax" and "support dax reflink for filesystem", which I have been
>>> working on.
>>
>> <nod>
>>
>>> So, what I want to confirm is: does it means that we can remove the
>>> "EXPERIMENTAL" tag when the rest two problem are solved?
>>
>> Yes.  I'd keep the experimental tag for a cycle or two to make sure that
>> nothing new pops up, but otherwise the two patchsets you've sent close
>> those two big remaining gaps.  Thank you for working on this!
>>
>>> Or maybe there are other important problems need to be fixed before
>>> removing it?  If there are, could you please show me that?
>>
>> That remains to be seen through QA/validation, but I think that's it.
>>
>> Granted, I still have to read through the two patchsets...
> 
> I've been meaning to circle back here as well.
> 
> My immediate concern is the issue Jason recently highlighted [1] with
> respect to invalidating all dax mappings when / if the device is
> ripped out from underneath the fs. I don't think that will collide
> with Ruan's implementation, but it does need new communication from
> driver to fs about removal events.
> 
> [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> 

I'm not yet sure why there is a race condition between the unbinding
operation and accessing an mmaped file on filesystem dax.

This may be a silly question, but could you tell me why the "unbinding"
operation of a namespace which is mounted with filesystem dax must be
allowed?
If "unbinding" is rejected when the filesystem is mounted with dax
enabled, what is the inconvenience?

I can imagine that if a device like a usb memory stick is removed
unexpectedly, the kernel/filesystem needs to reject writeback at that time
and discard the page cache. Then, I can understand that the unbinding
operation is essential for such a case.
But I don't know why a PMEM device/namespace allows an unbinding operation
that behaves like a surprise removal event.

Thanks,
Dan Williams March 1, 2021, 8:55 p.m. UTC | #12
On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > it points to, check if it points to the PMEM that is being removed,
> > > grab the page it points to, map that to the relevant struct page,
> > > run collect_procs() on that page, then kill the user processes that
> > > map that page.
> > >
> > > So why can't we walk the ptescheck the physical pages that they
> > > map to and if they map to a pmem page we go poison that
> > > page and that kills any user process that maps it.
> > >
> > > i.e. I can't see how unexpected pmem device unplug is any different
> > > to an MCE delivering a hwpoison event to a DAX mapped page.
> >
> > I guess the tradeoff is walking a long list of inodes vs walking a
> > large array of pages.
>
> Not really. You're assuming all a filesystem has to do is invalidate
> everything if a device goes away, and that's not true. Finding if an
> inode has a mapping that spans a specific device in a multi-device
> filesystem can be a lot more complex than that. Just walking inodes
> is easy - determining whihc inodes need invalidation is the hard
> part.

That inode-to-device level of specificity is not needed for the same
reason that drop_caches does not need to be specific. If the wrong
page is unmapped a re-fault will bring it back, and re-fault will fail
for the pages that are successfully removed.

> That's where ->corrupt_range() comes in - the filesystem is already
> set up to do reverse mapping from physical range to inode(s)
> offsets...

Sure, but what is the need to get to that level of specificity with
the filesystem for something that should rarely happen in the course
of normal operation outside of a mistake?

>
> > There's likely always more pages than inodes, but perhaps it's more
> > efficient to walk the 'struct page' array than sb->s_inodes?
>
> I really don't see you seem to be telling us that invalidation is an
> either/or choice. There's more ways to convert physical block
> address -> inode file offset and mapping index than brute force
> inode cache walks....

Yes, but I was trying to map it to an existing mechanism and the
internals of drop_pagecache_sb() are, in coarse terms, close to what
needs to happen here.

>
> .....
>
> > > IOWs, what needs to happen at this point is very filesystem
> > > specific. Assuming that "device unplug == filesystem dead" is not
> > > correct, nor is specifying a generic action that assumes the
> > > filesystem is dead because a device it is using went away.
> >
> > Ok, I think I set this discussion in the wrong direction implying any
> > mapping of this action to a "filesystem dead" event. It's just a "zap
> > all ptes" event and upper layers recover from there.
>
> Yes, that's exactly what ->corrupt_range() is intended for. It
> allows the filesystem to lock out access to the bad range
> and then recover the data. Or metadata, if that's where the bad
> range lands. If that recovery fails, it can then report a data
> loss/filesystem shutdown event to userspace and kill user procs that
> span the bad range...
>
> FWIW, is this notification going to occur before or after the device
> has been physically unplugged?

Before. This will be operations that happen in the pmem driver
->remove() callback.

> i.e. what do we do about the
> time-of-unplug-to-time-of-invalidation window where userspace can
> still attempt to access the missing pmem though the
> not-yet-invalidated ptes? It may not be likely that people just yank
> pmem nvdimms out of machines, but with NVMe persistent memory
> spaces, there's every chance that someone pulls the wrong device...

The physical removal aspect is only theoretical today. While the pmem
driver has a ->remove() path, that's purely a software unbind
operation. That said, the vulnerability window today is if a process
acquires a dax mapping, the pmem device hosting that filesystem goes
through an unbind / bind cycle, and then a new filesystem is created /
mounted. That old pte may be able to access data that is outside its
intended protection domain.

Going forward, for buses like CXL, there will be a managed physical
remove operation via PCIE native hotplug. The flow there is that the
PCIE hotplug driver will notify the OS of a pending removal, trigger
->remove() on the pmem driver, and then notify the technician (slot
status LED) that the card is safe to pull.
Dan Williams March 1, 2021, 9:34 p.m. UTC | #13
On Sun, Feb 28, 2021 at 11:27 PM Yasunori Goto <y-goto@fujitsu.com> wrote:
>
> Hello, Dan-san,
>
> On 2021/02/27 4:24, Dan Williams wrote:
> > On Fri, Feb 26, 2021 at 11:05 AM Darrick J. Wong <djwong@kernel.org> wrote:
> >>
> >> On Fri, Feb 26, 2021 at 09:45:45AM +0000, ruansy.fnst@fujitsu.com wrote:
> >>> Hi, guys
> >>>
> >>> Beside this patchset, I'd like to confirm something about the
> >>> "EXPERIMENTAL" tag for dax in XFS.
> >>>
> >>> In XFS, the "EXPERIMENTAL" tag, which is reported in waring message
> >>> when we mount a pmem device with dax option, has been existed for a
> >>> while.  It's a bit annoying when using fsdax feature.  So, my initial
> >>> intention was to remove this tag.  And I started to find out and solve
> >>> the problems which prevent it from being removed.
> >>>
> >>> As is talked before, there are 3 main problems.  The first one is "dax
> >>> semantics", which has been resolved.  The rest two are "RMAP for
> >>> fsdax" and "support dax reflink for filesystem", which I have been
> >>> working on.
> >>
> >> <nod>
> >>
> >>> So, what I want to confirm is: does it means that we can remove the
> >>> "EXPERIMENTAL" tag when the rest two problem are solved?
> >>
> >> Yes.  I'd keep the experimental tag for a cycle or two to make sure that
> >> nothing new pops up, but otherwise the two patchsets you've sent close
> >> those two big remaining gaps.  Thank you for working on this!
> >>
> >>> Or maybe there are other important problems need to be fixed before
> >>> removing it?  If there are, could you please show me that?
> >>
> >> That remains to be seen through QA/validation, but I think that's it.
> >>
> >> Granted, I still have to read through the two patchsets...
> >
> > I've been meaning to circle back here as well.
> >
> > My immediate concern is the issue Jason recently highlighted [1] with
> > respect to invalidating all dax mappings when / if the device is
> > ripped out from underneath the fs. I don't think that will collide
> > with Ruan's implementation, but it does need new communication from
> > driver to fs about removal events.
> >
> > [1]: http://lore.kernel.org/r/CAPcyv4i+PZhYZiePf2PaH0dT5jDfkmkDX-3usQy1fAhf6LPyfw@mail.gmail.com
> >
>
> I'm not sure why there is a race condition between unbinding operation
> and accessing mmaped file on filesystem dax yet.
>
> May be silly question, but could you tell me why the "unbinding"
> operation of the namespace which is mounted by filesystem dax must be
> allowed?

The unbind operation is used to switch the mode of a namespace between
fsdax and devdax. There is no way to fail unbind. At most it can be
delayed for a short while to perform cleanup, but it can't be blocked
indefinitely. There is the option to specify 'suppress_bind_attrs' to
the driver to preclude software triggered device removal, but that
would disable mode changes for the device.

> If "unbinding" is rejected when the filesystem is mounted with dax
> enabled, what is inconvenience?

It would be interesting (read difficult) to introduce the concept of
dynamic 'suppress_bind_attrs'. Today the decision is static at driver
registration time, not in response to how the device is being used.

I think global invalidation of all inodes that might be affected by a
dax-capable device being ripped away from the filesystem is sufficient
and avoids per-fs enabling, but I'm willing to be convinced that
->corrupted_range() is the proper vehicle for this.

>
> I can imagine if a device like usb memory stick is removed surprisingly,
> kernel/filesystem need to reject writeback at the time, and discard page
> cache. Then, I can understand that unbinding operation is essential for
> such case.

For usb the system is protected by the fact that all future block-i/o
submissions to the old block-device will fail, and a new usb-device
being plugged in will get a new block-device. I.e. the old security
model is invalidated / all holes are closed by blk_cleanup_queue().

> But I don't know why PMEM device/namespace allows unbinding operation
> like surprising removal event.

DAX hands direct mappings to physical pages, if the security model
fronting those physical pages changes the mappings attained via the
old security model need to be invalidated. blk_cleanup_queue() does
not invalidate DAX mappings.

The practical value of fixing that hole is small given that physical
unplug is not implemented for NVDIMMs today, but the get_user_pages()
path can be optimized if this invalidation is implemented, and future
hotplug-capable NVDIMM buses like CXL will need this.
Dave Chinner March 1, 2021, 10:46 p.m. UTC | #14
On Mon, Mar 01, 2021 at 12:55:53PM -0800, Dan Williams wrote:
> On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > > it points to, check if it points to the PMEM that is being removed,
> > > > grab the page it points to, map that to the relevant struct page,
> > > > run collect_procs() on that page, then kill the user processes that
> > > > map that page.
> > > >
> > > > So why can't we walk the ptescheck the physical pages that they
> > > > map to and if they map to a pmem page we go poison that
> > > > page and that kills any user process that maps it.
> > > >
> > > > i.e. I can't see how unexpected pmem device unplug is any different
> > > > to an MCE delivering a hwpoison event to a DAX mapped page.
> > >
> > > I guess the tradeoff is walking a long list of inodes vs walking a
> > > large array of pages.
> >
> > Not really. You're assuming all a filesystem has to do is invalidate
> > everything if a device goes away, and that's not true. Finding if an
> > inode has a mapping that spans a specific device in a multi-device
> > filesystem can be a lot more complex than that. Just walking inodes
> > is easy - determining whihc inodes need invalidation is the hard
> > part.
> 
> That inode-to-device level of specificity is not needed for the same
> reason that drop_caches does not need to be specific. If the wrong
> page is unmapped a re-fault will bring it back, and re-fault will fail
> for the pages that are successfully removed.
> 
> > That's where ->corrupt_range() comes in - the filesystem is already
> > set up to do reverse mapping from physical range to inode(s)
> > offsets...
> 
> Sure, but what is the need to get to that level of specificity with
> the filesystem for something that should rarely happen in the course
> of normal operation outside of a mistake?

Dan, you made this mistake with the hwpoisoning code that we're
trying to fix here. Hard coding a 1:1 physical address to
inode/offset into the DAX mapping was a bad mistake. It's also one
that should never have occurred because it's *obviously wrong* to
filesystem developers and has been for a long time.

Now we have the filesytem people providing a mechanism for the pmem
devices to tell the filesystems about physical device failures so
they can handle such failures correctly themselves. Having the
device go away unexpectedly from underneath a mounted and active
filesystem is a *device failure*, not an "unplug event".

The mistake you made was not understanding how filesystems work,
nor actually asking filesystem developers what they actually needed.
You're doing the same thing here - you're telling us what you think
the solution filesystems need is. Please listen when we say "that is
not sufficient" because we don't want to be backed into a corner
that we have to fix ourselves again before we can enable some basic
filesystem functionality that we should have been able to support on
DAX from the start...

> > > There's likely always more pages than inodes, but perhaps it's more
> > > efficient to walk the 'struct page' array than sb->s_inodes?
> >
> > I really don't see you seem to be telling us that invalidation is an
> > either/or choice. There's more ways to convert physical block
> > address -> inode file offset and mapping index than brute force
> > inode cache walks....
> 
> Yes, but I was trying to map it to an existing mechanism and the
> internals of drop_pagecache_sb() are, in coarse terms, close to what
> needs to happen here.

No.

drop_pagecache_sb() is not a relevant model for telling a filesystem
that the block device underneath has gone away, nor for a device to
ensure that access protections that *are managed by the filesystem*
are enforced/revoked sanely.

drop_pagecache_sb() is a brute-force model for invalidating user
data mappings that the filesystem performs in response to such a
notification. It only needs this brute-force approach if it has no
other way to find active DAX mappings across the range of the device
that has gone away.

But this model doesn't work for direct mapped metadata, journals or
any other internal direct filesystem mappings that aren't referenced
by inodes that the filesystem might be using. The filesystem still
needs to invalidate all those mappings and prevent further access to
them, even from within the kernel itself.

Filesystems are way more complex than pure DAX devices, and hence
handle errors and failure events differently. Unlike DAX devices, we
have both internal and external references to the DAX device, and we
can have both external and internal direct maps.  Invalidating user
data mappings is all dax devices need to do on unplug, but for
filesystems it is only a small part of what we have to do when a
range of a device goes bad.

IOWs, there is no "one size fits all" approach that works for all
filesystems, nor is there one strategy that is optimal for all
filesystems. Failure handling in a filesystem is almost always
filesystem specific...

> > > Ok, I think I set this discussion in the wrong direction implying any
> > > mapping of this action to a "filesystem dead" event. It's just a "zap
> > > all ptes" event and upper layers recover from there.
> >
> > Yes, that's exactly what ->corrupt_range() is intended for. It
> > allows the filesystem to lock out access to the bad range
> > and then recover the data. Or metadata, if that's where the bad
> > range lands. If that recovery fails, it can then report a data
> > loss/filesystem shutdown event to userspace and kill user procs that
> > span the bad range...
> >
> > FWIW, is this notification going to occur before or after the device
> > has been physically unplugged?
> 
> Before. This will be operations that happen in the pmem driver
> ->remove() callback.
> 
> > i.e. what do we do about the
> > time-of-unplug-to-time-of-invalidation window where userspace can
> > still attempt to access the missing pmem through the
> > not-yet-invalidated ptes? It may not be likely that people just yank
> > pmem nvdimms out of machines, but with NVMe persistent memory
> > spaces, there's every chance that someone pulls the wrong device...
> 
> The physical removal aspect is only theoretical today.

For actual pmem, maybe. But hotplug RAM is a thing; big numa
machines that can hotplug nodes into their fabric and so have CPUs
and memory able to come and go from a live machine. It's not a small
stretch to extend that to having PMEM in those nodes. So it's a
practical design concern right now, even ignoring that NVMe is
hotplug....

> While the pmem
> driver has a ->remove() path that's purely a software unbind
> operation. That said the vulnerability window today is if a process
> acquires a dax mapping, the pmem device hosting that filesystem goes
> through an unbind / bind cycle, and then a new filesystem is created /
> mounted. That old pte may be able to access data that is outside its
> intended protection domain.

So what is being done to prevent stale DAX mappings from being
leaked this way right now, seeing as the leak you mention here
doesn't appear in any way to be filesystem related?

> Going forward, for buses like CXL, there will be a managed physical
> remove operation via PCIE native hotplug. The flow there is that the
> PCIE hotplug driver will notify the OS of a pending removal, trigger
> ->remove() on the pmem driver, and then notify the technician (slot
> status LED) that the card is safe to pull.

That doesn't protect against pulling the wrong device, or having
someone pull the device without first running an admin command that
makes systems using DAX safe to pull the device....

And once you take into account that "pulling the wrong device" can
happen, how does the filesystem tell the difference between a
device being pulled and a drive cage just dying and so the drive
just disappears from the system? How are these accidental vs real
failures any different from the perspective of a filesystem mounted
on that device?

And then there is the elephant in the room: if there's a "human in
the loop" step needed to hot unplug a pmem device safely, then
why the hell is the filesystem on that device still mounted and the
DAX applications still running?

This just makes no sense at all from an operations perspective - if
you know that you are about to do an unplug that will result in all
your DAX apps and filesystems being killed (i.e. fatal production
environment failure) then why haven't they all been stopped by the
admin before the device unplug is done? Why does this "human in the
loop" admin task require the applications and filesystems to handle
this without warning and have to treat it as a "device failure"
event when this can all be avoided for normal, scheduled, controlled
unplug operations? The "unexpected unplug" is a catastrophic failure
event which may have severe side effects on system operation and
stability. Why would you design an unplug process that does not
start with a clean, controlled shutdown process from the top down?
If we make the assumption that planned unplugs are well planned,
organised and scheduled, then the only thing that an unplug event
needs to mean to a filesystem is "catastrophic device failure has
occurred".

So from a system level, the way you are describing the way hot
unplug events are supposed to occur and work looks completely
screwed up to me. Exactly what use case do you have for pmem device
hot-unplug from under a live filesystem that isn't considered a
*catastrophic runtime device failure* by the filesystem?

Cheers,

Dave.
Dan Williams March 2, 2021, 12:32 a.m. UTC | #15
On Mon, Mar 1, 2021 at 2:47 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, Mar 01, 2021 at 12:55:53PM -0800, Dan Williams wrote:
> > On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > > > it points to, check if it points to the PMEM that is being removed,
> > > > > grab the page it points to, map that to the relevant struct page,
> > > > > run collect_procs() on that page, then kill the user processes that
> > > > > map that page.
> > > > >
> > > > > So why can't we walk the ptes, check the physical pages that they
> > > > > map to and if they map to a pmem page we go poison that
> > > > > page and that kills any user process that maps it.
> > > > >
> > > > > i.e. I can't see how unexpected pmem device unplug is any different
> > > > > to an MCE delivering a hwpoison event to a DAX mapped page.
> > > >
> > > > I guess the tradeoff is walking a long list of inodes vs walking a
> > > > large array of pages.
> > >
> > > Not really. You're assuming all a filesystem has to do is invalidate
> > > everything if a device goes away, and that's not true. Finding if an
> > > inode has a mapping that spans a specific device in a multi-device
> > > filesystem can be a lot more complex than that. Just walking inodes
> > > is easy - determining which inodes need invalidation is the hard
> > > part.
> >
> > That inode-to-device level of specificity is not needed for the same
> > reason that drop_caches does not need to be specific. If the wrong
> > page is unmapped a re-fault will bring it back, and re-fault will fail
> > for the pages that are successfully removed.
> >
> > > That's where ->corrupt_range() comes in - the filesystem is already
> > > set up to do reverse mapping from physical range to inode(s)
> > > offsets...
> >
> > Sure, but what is the need to get to that level of specificity with
> > the filesystem for something that should rarely happen in the course
> > of normal operation outside of a mistake?
>
> Dan, you made this mistake with the hwpoisoning code that we're
> trying to fix here. Hard coding a 1:1 physical address to
> inode/offset into the DAX mapping was a bad mistake. It's also one
> that should never have occurred because it's *obviously wrong* to
> filesystem developers and has been for a long time.

I admit that mistake. The traditional memory error handling model
assumptions around page->mapping were broken by DAX, I'm not trying to
repeat that mistake. I feel we're talking past each other on the
discussion of the proposals.

> Now we have the filesystem people providing a mechanism for the pmem
> devices to tell the filesystems about physical device failures so
> they can handle such failures correctly themselves. Having the
> device go away unexpectedly from underneath a mounted and active
> filesystem is a *device failure*, not an "unplug event".

It's the same difference to the physical page, all mappings to that
page need to be torn down. I'm happy to call an fs callback and let
each filesystem do what it wants with a "every pfn in this dax device
needs to be unmapped".

I'm looking at the ->corrupted_range() patches trying to map it to
this use case and I don't see how, for example a realtime-xfs over DM
over multiple PMEM gets the notification to the right place.
bd_corrupted_range() uses get_super() which gets the wrong answer for
both realtime-xfs and DM.

I'd flip that arrangement around and have the FS tell the block device
"if something happens to you, here is the super_block to notify". So
to me this looks like a fs_dax_register_super() helper that plumbs the
superblock through an arbitrary stack of block devices to the leaf
block-device that might want to send a notification up when a global
unmap operation needs to be performed.

I naively think that "for_each_inode()
unmap_mapping_range(&inode->i_mapping)" is sufficient as a generic
implementation; that does not preclude XFS from overriding the generic
implementation and handling it directly if it so chooses.
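
A minimal sketch of that naive generic implementation, modelled on the
drop_pagecache_sb() iteration (the function name and whatever hook ends
up calling it on removal are assumptions for illustration, not code
from this series):

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical sketch: zap every userspace DAX mapping on a superblock,
 * drop_pagecache_sb()-style.  A re-fault after this either succeeds
 * (wrong inode was zapped) or fails (the pfns really are gone).
 */
static void dax_unmap_all_inodes(struct super_block *sb)
{
	struct inode *inode, *toput_inode = NULL;

	spin_lock(&sb->s_inode_list_lock);
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		spin_lock(&inode->i_lock);
		if (inode->i_state & (I_FREEING | I_WILL_FREE | I_NEW)) {
			spin_unlock(&inode->i_lock);
			continue;
		}
		__iget(inode);
		spin_unlock(&inode->i_lock);
		spin_unlock(&sb->s_inode_list_lock);

		/* zap all ptes for this inode's mapping */
		unmap_mapping_range(inode->i_mapping, 0, 0, 1);

		iput(toput_inode);
		toput_inode = inode;
		cond_resched();
		spin_lock(&sb->s_inode_list_lock);
	}
	spin_unlock(&sb->s_inode_list_lock);
	iput(toput_inode);
}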

> The mistake you made was not understanding how filesystems work,
> nor actually asking filesystem developers what they actually needed.

You're going too far here, but that's off topic.

> You're doing the same thing here - you're telling us what you think
> the solution filesystems need is.

No, I'm not, I'm trying to understand tradeoffs. I apologize if this
is coming across as not listening.

> Please listen when we say "that is
> not sufficient" because we don't want to be backed into a corner
> that we have to fix ourselves again before we can enable some basic
> filesystem functionality that we should have been able to support on
> DAX from the start...

That's some revisionist interpretation of how the discovery of the
reflink+dax+memory-error-handling collision went down.

The whole point of this discussion is to determine if
->corrupted_range() is suitable for this notification, and looking at
the code as is, it isn't. Perhaps you have a different implementation
of ->corrupted_range() in mind that allows this to be plumbed
correctly?

>
> > > > There's likely always more pages than inodes, but perhaps it's more
> > > > efficient to walk the 'struct page' array than sb->s_inodes?
> > >
> > > I really don't see why you seem to be telling us that invalidation is an
> > > either/or choice. There's more ways to convert physical block
> > > address -> inode file offset and mapping index than brute force
> > > inode cache walks....
> >
> > Yes, but I was trying to map it to an existing mechanism and the
> > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > needs to happen here.
>
> No.
>
> drop_pagecache_sb() is not a relevant model for telling a filesystem
> that the block device underneath has gone away,

Like I said I'm not trying to communicate "device has gone away", only
"unmap all dax pages". If you want those to be one in the same
mechanism I'm listening, but like I said it was my mistake for tying
global unmap to device-gone, they need not be the same given
fileystems have not historically been notified proactively of device
removal.

> nor for a device to
> ensure that access protections that *are managed by the filesystem*
> are enforced/revoked sanely.

Yes, if the fs needs / wants to do more than the generic need of unmap
all dax it's free to override the generic implementation.

> drop_pagecache_sb() is a brute-force model for invalidating user
> data mappings that the filesystem performs in response to such a
> notification. It only needs this brute-force approach if it has no
> other way to find active DAX mappings across the range of the device
> that has gone away.

Ok.

> But this model doesn't work for direct mapped metadata, journals or
> any other internal direct filesystem mappings that aren't referenced
> by inodes that the filesystem might be using. The filesystem still
> needs to invalidate all those mappings and prevent further access to
> them, even from within the kernel itself.

Agree. If the filesystem was using DAX techniques for metadata it
would want to know before the direct-map is torn down. No argument
there.

> Filesystems are way more complex than pure DAX devices, and hence
> handle errors and failure events differently. Unlike DAX devices, we
> have both internal and external references to the DAX device, and we
> can have both external and internal direct maps.  Invalidating user
> data mappings is all dax devices need to do on unplug, but for
> filesystems it is only a small part of what we have to do when a
> range of a device goes bad.
>
> IOWs, there is no "one size fits all" approach that works for all
> filesystems, nor is there one strategy that is optimal for all
> filesystems. Failure handling in a filesystem is almost always
> filesystem specific...

Point taken, if a filesystem is not using the block-layer for metadata
I/O and using DAX techniques directly it needs this event too
otherwise it will crash vs report failed operations...
->corrupted_range() does not offer the correct plumbing for that
today.

There's an additional problem this brings to mind. Device-mapper
targets like dm-writecache need this notification as well because it
is using direct physical page access via the linear map and may crash
like the filesystem if the mm-direct-map is torn down from underneath
it.

> > > > Ok, I think I set this discussion in the wrong direction implying any
> > > > mapping of this action to a "filesystem dead" event. It's just a "zap
> > > > all ptes" event and upper layers recover from there.
> > >
> > > Yes, that's exactly what ->corrupt_range() is intended for. It
> > > allows the filesystem to lock out access to the bad range
> > > and then recover the data. Or metadata, if that's where the bad
> > > range lands. If that recovery fails, it can then report a data
> > > loss/filesystem shutdown event to userspace and kill user procs that
> > > span the bad range...
> > >
> > > FWIW, is this notification going to occur before or after the device
> > > has been physically unplugged?
> >
> > Before. This will be operations that happen in the pmem driver
> > ->remove() callback.
> >
> > > i.e. what do we do about the
> > > time-of-unplug-to-time-of-invalidation window where userspace can
> > > still attempt to access the missing pmem through the
> > > not-yet-invalidated ptes? It may not be likely that people just yank
> > > pmem nvdimms out of machines, but with NVMe persistent memory
> > > spaces, there's every chance that someone pulls the wrong device...
> >
> > The physical removal aspect is only theoretical today.
>
> For actual pmem, maybe. But hotplug RAM is a thing; big numa
> machines that can hotplug nodes into their fabric and so have CPUs
> and memory able to come and go from a live machine. It's not a small
> stretch to extend that to having PMEM in those nodes. So it's a
> practical design concern right now, even ignoring that NVMe is
> hotplug....

Memory hotplug today requires the memory-device to be offlined before
the memory is unplugged and the core-mm has a chance to say "no" if it
sees even one page with an elevated reference. Block-devices in
contrast have no option to say "no" to being unplugged / ->remove()
triggered.

> > While the pmem
> > driver has a ->remove() path that's purely a software unbind
> > operation. That said the vulnerability window today is if a process
> > acquires a dax mapping, the pmem device hosting that filesystem goes
> > through an unbind / bind cycle, and then a new filesystem is created /
> > mounted. That old pte may be able to access data that is outside its
> > intended protection domain.
>
> So what is being done to prevent stale DAX mappings from being
> leaked this way right now, seeing as the leak you mention here
> doesn't appear in any way to be filesystem related?

For device-dax where there is only one inode->i_mapping to deal with,
one unmap_mapping_range() call is performed in the device shutdown
path. For filesystem-dax only the direct-map is torn down.
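
For reference, the device-dax teardown boils down to roughly this (a
sketch; the dev_dax / dax_inode() plumbing is paraphrased from
drivers/dax, not quoted):

/*
 * Sketch of the device-dax case: the whole device is backed by a
 * single inode, so one call tears down every userspace mapping.
 */
static void dev_dax_zap_user_mappings(struct dev_dax *dev_dax)
{
	struct inode *inode = dax_inode(dev_dax->dax_dev);

	/* holebegin = 0, holelen = 0 means "the whole file" */
	unmap_mapping_range(inode->i_mapping, 0, 0, 1);
}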

The user mapping teardown gap is why I'm coming at this elephant from
the user mapping perspective and not necessarily the "what does the
filesystem want to do about device removal" perspective.

> > Going forward, for buses like CXL, there will be a managed physical
> > remove operation via PCIE native hotplug. The flow there is that the
> > PCIE hotplug driver will notify the OS of a pending removal, trigger
> > ->remove() on the pmem driver, and then notify the technician (slot
> > status LED) that the card is safe to pull.
>
> That doesn't protect against pulling the wrong device, or having
> someone pull the device without first running an admin command that
> makes systems using DAX safe to pull the device....

Of course not, at some point surprise removal can't be compensated.
There are hardware mechanisms to try to contain mistakes, but those
can only go so far...

> And once you take into account that "pulling the wrong device" can
> happen, how does the filesystem tell the difference between a
> device being pulled and a drive cage just dying and so the drive
> just disappears from the system? How are these accidental vs real
> failures any different from the perspective of a filesystem mounted
> on that device?

Not even the device driver can tell you that. The Linux driver model
has no way to communicate why ->remove() is being called, it only
knows that it needs to revoke everything that was handed out since
->probe().

> And then there is the elephant in the room: if there's a "human in
> the loop" step needed to hot unplug a pmem device safely, then
> why the hell is the filesystem on that device still mounted and the
> DAX applications still running?

This goes back to Yasunori's question, can't ->remove() just be
blocked when the filesystem is mounted? The answer is similar to
asking the filesystem to allow DAX RDMA pages to be pinned
indefinitely and lock-out the filesystem from making any extent-map
changes. If the admin wants the device disabled while the filesystem
is mounted the kernel should do everything it can to honor that
request safely.

> This just makes no sense at all from an operations perspective - if
> you know that you are about to do an unplug that will result in all
> your DAX apps and filesystems being killed (i.e. fatal production
> environment failure) then why haven't they all been stopped by the
> admin before the device unplug is done? Why does this "human in the
> loop" admin task require the applications and filesystems to handle
> this without warning and have to treat it as a "device failure"
> event when this can all be avoided for normal, scheduled, controlled
> unplug operations? The "unexpected unplug" is a catastrophic failure
> event which may have severe side effects on system operation and
> stability. Why would you design an unplug process that does not
> start with a clean, controlled shutdown process from the top down?
> If we make the assumption that planned unplugs are well planned,
> organised and scheduled, then the only thing that an unplug event
> needs to mean to a filesystem is "catastrophic device failure has
> occurred".

There is a difference between the kernel saying "don't do that, bad
things will happen" and "you can't do that the entire system will
crash / security promises will be violated".

git grep -n suppress_bind_attr drivers/ata/ drivers/scsi/ drivers/nvme/

There are no block-device providers that I can find on a quick search
that forbid triggering ->remove() on the driver if a filesystem is
mounted. pmem is not the first block device driver to present this
problem.

> So from a system level, the way you are describing the way hot
> unplug events are supposed to occur and work looks completely
> screwed up to me. Exactly what use case do you have for pmem device
> hot-unplug from under a live filesystem that isn't considered a
> *catastrophic runtime device failure* by the filesystem?

I'm coming at this from the perspective of it historically always
being possible for a block-device to be ripped out from underneath a
filesystem. I seem to be just the messenger conveying that bad news.
What's different now is that DAX has expanded the problem: what was
previously constrained to something the block layer could handle with
a BLK_STS_* return value for new I/O is now a live pte that needs to
be torn down, not a page cache page that can live on indefinitely.
Dave Chinner March 2, 2021, 2:42 a.m. UTC | #16
On Mon, Mar 01, 2021 at 04:32:36PM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 2:47 PM Dave Chinner <david@fromorbit.com> wrote:
> > Now we have the filesytem people providing a mechanism for the pmem
> > devices to tell the filesystems about physical device failures so
> > they can handle such failures correctly themselves. Having the
> > device go away unexpectedly from underneath a mounted and active
> > filesystem is a *device failure*, not an "unplug event".
> 
> It's the same difference to the physical page, all mappings to that
> page need to be torn down. I'm happy to call an fs callback and let
> each filesystem do what it wants with a "every pfn in this dax device
> needs to be unmapped".

You keep talking like this is something specific to a DAX device.
It isn't - the filesystem needs to take specific actions if any type
of block device reports that it has a corrupted range, not just DAX.
A DAX device simply adds "and invalidate direct mappings" to the
list of stuff that needs to be done.

And as far as a filesystem is concerned, there is no difference
between "this 4kB range is bad" and "the range of this entire device
is bad". We have to do the same things in both situations.

> I'm looking at the ->corrupted_range() patches trying to map it to
> this use case and I don't see how, for example a realtime-xfs over DM
> over multiple PMEM gets the notification to the right place.
> bd_corrupted_range() uses get_super() which gets the wrong answer for
> both realtime-xfs and DM.

I'm not sure I follow your logic. What is generating the wrong
answer?

We already have infrastructure for the block device to look up the
superblock mounted on top of it, and DM already uses that for things
like "dmsetup suspend" to freeze the filesystem before it does
something.  This "superblock lookup" only occurs for the top level
DM device, not for the component pmem devices that make up the DM
device.


IOWs, if there's a DM device that maps multiple pmem devices, then
it should be stacking the bd_corrupted_range() callbacks to map the
physical device range to the range in the higher level DM device
that it belongs to. This mapping of ranges is what DM exists to do -
the filesystem has no clue about what devices make up a DM device,
so the DM device *must* translate ranges on its component devices
into the LBA range it exposes to the filesystem.
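
To illustrate, a linear target's part of that stacking is little more
than offset translation (the callback signature, the target fields and
the bd_corrupted_range() prototype below are illustrative assumptions,
not the patchset's actual code):

/*
 * Hypothetical sketch: translate a corrupted range reported by a
 * component device into the LBA space this DM device exposes upward.
 */
struct linear_slice {
	struct block_device *upper;	/* bdev the filesystem sees */
	sector_t upper_start;		/* where this slice starts above */
	sector_t lower_start;		/* where it starts on the component */
};

static int linear_corrupted_range(struct linear_slice *ls,
				  sector_t sector, sector_t nr_sects)
{
	sector_t upper_sector = ls->upper_start + (sector - ls->lower_start);

	/* pass the translated range up; the fs only sees its own LBAs */
	return bd_corrupted_range(ls->upper, upper_sector, nr_sects);
}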

> I'd flip that arrangement around and have the FS tell the block device
> "if something happens to you, here is the super_block to notify".

We already have a mechanism for this that the block device calls:
get_active_super(bdev). There can be only one superblock per block
device - the superblock has exclusive ownership of the block device
while the filesystem is mounted.

get_active_super() returns the superblock that sits on top of the
bdev with an active reference, allowing the caller to safely access
and operate on the superblock without having to worry about the
superblock going away in the middle of whatever operation the block
device needs to perform.
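
i.e. something along these lines, with error handling trimmed
(report_error_to_sb() is a placeholder for whatever the notification
ends up being called, not an existing interface):

#include <linux/fs.h>
#include <linux/blkdev.h>

/*
 * Sketch: a block device driver finding the superblock mounted on top
 * of it so the error can be handed to the filesystem.
 */
static void notify_mounted_fs(struct block_device *bdev,
			      loff_t offset, size_t len)
{
	struct super_block *sb = get_active_super(bdev);

	if (!sb)
		return;				/* nothing mounted here */

	report_error_to_sb(sb, offset, len);	/* placeholder hook */
	deactivate_super(sb);			/* drop the active ref */
}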

If this isn't working, then existing storage stack functionality
doesn't work as it should and this needs fixing independently of
the PMEM/DAX stuff we are talking about here.

> So
> to me this looks like a fs_dax_register_super() helper that plumbs the
> superblock through an arbitrary stack of block devices to the leaf
> block-device that might want to send a notification up when a global
> unmap operation needs to be performed.

No, this is just wrong. The filesystem has no clue what block device
is at the leaf level of a block device stack, nor what LBA block
range represents that device within the address space the stacked
block devices present to the filesystem.

> > Please listen when we say "that is
> > not sufficient" because we don't want to be backed into a corner
> > that we have to fix ourselves again before we can enable some basic
> > filesystem functionality that we should have been able to support on
> > DAX from the start...
> 
> That's some revisionist interpretation of how the discovery of the
> reflink+dax+memory-error-handling collision went down.
> 
> The whole point of this discussion is to determine if
> ->corrupted_range() is suitable for this notification, and looking at
> the code as is, it isn't. Perhaps you have a different implementation
> of ->corrupted_range() in mind that allows this to be plumbed
> correctly?

So rather than try to make the generic ->corrupted_range
infrastructure be able to report "DAX range is invalid" (which is
the very definition of a corrupted block device range!), you want
to introduce a DAX specific notification to tell us that a range in
the block device contains invalid/corrupt data?

We're talking about a patchset that is in development. The proposed
notification path is supposed to be generic and *not PMEM specific*,
and is intended to handle exactly the use case that you raised.
The implementation may not be perfect yet, so rather than trying to
say "we need something different but does the same thing", work to
ensure that the proposed -generic infrastructure- can pass the
information you want to pass to the filesystem.

> > > Yes, but I was trying to map it to an existing mechanism and the
> > > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > > needs to happen here.
> >
> > No.
> >
> > drop_pagecache_sb() is not a relevant model for telling a filesystem
> > that the block device underneath has gone away,
> 
> Like I said I'm not trying to communicate "device has gone away", only
> "unmap all dax pages".

That is the wrong thing to be communicating.  If the device has gone
away, the filesystem needs to know that the device has gone away,
not that it should just unmap DAX pages.

> If you want those to be one and the same
> mechanism I'm listening, but like I said it was my mistake for tying
> global unmap to device-gone, they need not be the same given
> filesystems have not historically been notified proactively of device
> removal.

What other circumstance is there for the device driver punching
through block device layers to tell the filesystem it should "unmap
all dax pages"? ANd if we get such an event, what does that mean for
any of the other filesystem data/metadata in that range?

You are still trying to tell the filesystem what action it must take
based on what went wrong at the device driver level, not
communicating what error just occurred to the device. The filesystem
needs to know about the error that occurred, not what some device
thinks the filesystem should do when the device detects an error.

> > Filesystems are way more complex than pure DAX devices, and hence
> > handle errors and failure events differently. Unlike DAX devices, we
> > have both internal and external references to the DAX device, and we
> > can have both external and internal direct maps.  Invalidating user
> > data mappings is all dax devices need to do on unplug, but for
> > filesystems it is only a small part of what we have to do when a
> > range of a device goes bad.
> >
> > IOWs, there is no "one size fits all" approach that works for all
> > filesystems, nor is there one strategy that is optimal for all
> > filesystems. Failure handling in a filesystem is almost always
> > filesystem specific...
> 
> Point taken, if a filesystem is not using the block-layer for metadata
> I/O and using DAX techniques directly it needs this event too
> otherwise it will crash vs report failed operations...
> ->corrupted_range() does not offer the correct plumbing for that
> today.
> 
> There's an additional problem this brings to mind. Device-mapper
> targets like dm-writecache need this notification as well because it
> is using direct physical page access via the linear map and may crash
> like the filesystem if the mm-direct-map is torn down from underneath
> it.

Yes, dm gets the notification by the ->corrupted_range() callback
from its underlying device(s). It can then do what it needs to map
the range and pass that error on to the filesystem. Fundamentally,
though, if the range is mapped into userspace and it goes away, the
user has lost data and there's nothing DM can do to recover it so
all it can do is pass the corruption up the stack to the next layer
(either another block device or the filesystem).

> > For actual pmem, maybe. But hotplug RAM is a thing; big numa
> > machines that can hotplug nodes into their fabric and so have CPUs
> > and memory able to come and go from a live machine. It's not a small
> > stretch to extend that to having PMEM in those nodes. So it's a
> > practical design concern right now, even ignoring that NVMe is
> > hotplug....
> 
> Memory hotplug today requires the memory-device to be offlined before
> the memory is unplugged and the core-mm has a chance to say "no" if it
> sees even one page with an elevated reference. Block-devices in
> contrast have no option to say "no" to being unplugged / ->remove()
> triggered.

Yes, I know that. That's my whole point - NVMe persistent regions
mean that DAX filesystems will have to handle the latter case, and
that it looks no different to normal block device failure to the
filesystem.  ->corrupted_range is exactly how these events are
intended to be sent up the storage stack to the filesystem, so why
should PMEM be handled any different?

> > > While the pmem
> > > driver has a ->remove() path that's purely a software unbind
> > > operation. That said the vulnerability window today is if a process
> > > acquires a dax mapping, the pmem device hosting that filesystem goes
> > > through an unbind / bind cycle, and then a new filesystem is created /
> > > mounted. That old pte may be able to access data that is outside its
> > > intended protection domain.
> >
> > So what is being done to prevent stale DAX mappings from being
> > leaked this way right now, seeing as the leak you mention here
> > doesn't appear in any way to be filesystem related?
> 
> For device-dax where there is only one inode->i_mapping to deal with,
> one unmap_mapping_range() call is performed in the device shutdown
> path. For filesystem-dax only the direct-map is torn down.
> 
> The user mapping teardown gap is why I'm coming at this elephant from
> the user mapping perspective and not necessarily the "what does the
> filesystem want to do about device removal" perspective.

But that doesn't help avoid the "user mapping teardown gap" at all -
that gap only gets bigger when you add a filesystem into the picture
because now we have tens to hundreds of millions of cache inodes to
walk and invalidate mappings on.

Closing this gap requires brute force purging the CPU ptes the
moment an unexpected DAX device unplug occurs. There is no other way
to do it quickly, and just waiting until the filesystem can unmap it
only increases the gap between the ptes becoming invalid and them
getting invalidated.

> > And once you take into account that "pulling the wrong device" can
> > happen, how does the filesystem tell the difference between a
> > device being pulled and a drive cage just dying and so the drive
> > just disappears from the system? How are these accidental vs real
> > failures any different from the perspective of a filesystem mounted
> > on that device?
> 
> Not even the device driver can tell you that.

Exactly my point. As there is no difference between unplug and
device failure from a filesystem perspective, the communication
should come through a single "device failure" interface, not some
special DAX-specific notification path that you are advocating for.

> This goes back to Yasunori's question, can't ->remove() just be
> blocked when the filesystem is mounted? The answer is similar to
> asking the filesystem to allow DAX RDMA pages to be pinned
> indefinitely and lock-out the filesystem from making any extent-map
> changes. If the admin wants the device disabled while the filesystem
> is mounted the kernel should do everything it can to honor that
> request safely.

Sure, but the end effect of this is that the filesystem sees that
the -device has failed- and there is no need for DAX devices to
require some special "invalidate all mappings" notification when a
"device just failed" notification tells the filesystem the same
thing and a whole lot more....

> > This just makes no sense at all from an operations perspective - if
> > you know that you are about to do an unplug that will result in all
> > your DAX apps and filesystems being killed (i.e. fatal production
> > environment failure) then why haven't they all been stopped by the
> > admin before the device unplug is done? Why does this "human in the
> > loop" admin task require the applications and filesystems to handle
> > this without warning and have to treat it as a "device failure"
> > event when this can all be avoided for normal, scheduled, controlled
> > unplug operations? The "unexpected unplug" is a catastrophic failure
> > event which may have severe side effects on system operation and
> > stability. Why would you design an unplug process that does not
> > start with a clean, controlled shutdown process from the top down?
> > If we make the assumption that planned unplugs are well planned,
> > organised and scheduled, then the only thing that an unplug event
> > needs to mean to a filesystem is "catastrophic device failure has
> > occurred".
> 
> There is a difference between the kernel saying "don't do that, bad
> things will happen" and "you can't do that the entire system will
> crash / security promises will be violated".
> 
> git grep -n suppress_bind_attr drivers/ata/ drivers/scsi/ drivers/nvme/
> 
> There are no block-device providers that I can find on a quick search
> that forbid triggering ->remove() on the driver if a filesystem is
> mounted. pmem is not the first block device driver to present this
> problem.

Yes, that's because, as you point out, pmem has unique
characteristics - DAX - that absolutely require us to handle storage
failures in this way. No other type of device requires the filesystem
to directly arbitrate userspace access to the device, and so we've
been able to get away with having the block device return EIO or
ENODEV when we try to do IO and handling the problem that way.

But we still have been wanting ENODEV notification from block
devices when they are unexpectedly unplugged, and have been wanting
that functionality for at least the last decade, if not longer.
Filesystem shutdown on device removal should be instantaneous because
device removal for most filesystems is an unrecoverable error and
delaying the shutdown until a fatal IO error occurs in the
filesystem benefits no-one.

And now, we can't even get reliable IO error reporting, because DAX.

That's the problem that this set of ->corrupted_range callbacks is
supposed to solve - it's generic enough that we can plumb
ata/scsi/nvme layers into it as well as PMEM, and the filesystem
will now get device failure notifications from all types of device
drivers and block devices.

We do not need a DAX specific mechanism to tell us "DAX device
gone", we need a generic block device interface that tells us "range
of block device is gone".

The reason that the block device is gone is irrelevant to the
filesystem. The type of block device is also irrelevant. If the
filesystem isn't using DAX (e.g. dax=never mount option) even when
it is on a DAX capable device, then it just doesn't care about
invalidating DAX mappings because it has none. But it still may care
about shutting down the filesystem because the block device is gone.

This is why we need to communicate what error occurred, not what
action a device driver thinks needs to be taken. The error is
important to the filesystem, the action might be completely
irrelevant. And, as we know now, shutdown on DAX enable filesystems
needs to imply DAX mapping invalidation in all cases, not just when
the device disappears from under the filesystem.

Cheers,

Dave.
Darrick J. Wong March 2, 2021, 3:28 a.m. UTC | #17
On Mon, Mar 01, 2021 at 12:55:53PM -0800, Dan Williams wrote:
> On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > > it points to, check if it points to the PMEM that is being removed,
> > > > grab the page it points to, map that to the relevant struct page,
> > > > run collect_procs() on that page, then kill the user processes that
> > > > map that page.
> > > >
> > > > So why can't we walk the ptes, check the physical pages that they
> > > > map to and if they map to a pmem page we go poison that
> > > > page and that kills any user process that maps it.
> > > >
> > > > i.e. I can't see how unexpected pmem device unplug is any different
> > > > to an MCE delivering a hwpoison event to a DAX mapped page.
> > >
> > > I guess the tradeoff is walking a long list of inodes vs walking a
> > > large array of pages.
> >
> > Not really. You're assuming all a filesystem has to do is invalidate
> > everything if a device goes away, and that's not true. Finding if an
> > inode has a mapping that spans a specific device in a multi-device
> > filesystem can be a lot more complex than that. Just walking inodes
> > is easy - determining which inodes need invalidation is the hard
> > part.
> 
> That inode-to-device level of specificity is not needed for the same
> reason that drop_caches does not need to be specific. If the wrong
> page is unmapped a re-fault will bring it back, and re-fault will fail
> for the pages that are successfully removed.
> 
> > That's where ->corrupt_range() comes in - the filesystem is already
> > set up to do reverse mapping from physical range to inode(s)
> > offsets...
> 
> Sure, but what is the need to get to that level of specificity with
> the filesystem for something that should rarely happen in the course
> of normal operation outside of a mistake?

I can't tell if we're conflating the "a bunch of your pmem went bad"
case with the "all your dimms fell out of the machine" case.

If, say, a single cacheline's worth of pmem goes bad on a node with 2TB
of pmem, I certainly want that level of specificity.  Just notify the
users of the dead piece, don't flush the whole machine down the drain.

> > > There's likely always more pages than inodes, but perhaps it's more
> > > efficient to walk the 'struct page' array than sb->s_inodes?
> >
> > I really don't see why you seem to be telling us that invalidation is an
> > either/or choice. There's more ways to convert physical block
> > address -> inode file offset and mapping index than brute force
> > inode cache walks....
> 
> Yes, but I was trying to map it to an existing mechanism and the
> internals of drop_pagecache_sb() are, in coarse terms, close to what
> needs to happen here.

Yes.  XFS (with rmap enabled) can do all the iteration and walking in
that function except for the invalidate_mapping_* call itself.  The goal
of this series is first to wire up a callback within both the block and
pmem subsystems so that they can take notifications and reverse-map them
through the storage stack until they reach an fs superblock.

Once the information has reached XFS, it can use its own reverse
mappings to figure out which pages of which inodes are now targeted.
The future of DAX hw error handling can be that you throw the spitwad at
us, and it's our problem to distill that into mm invalidation calls.
XFS' reverse mapping data is indexed by storage location and isn't
sharded by address_space, so (except for the DIMMs falling out), we
don't need to walk the entire inode list or scan the entire mapping.
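
In rough pseudo-kernel-C, that distillation step looks something like
the following; every helper name here is an illustrative assumption,
the real work would go through XFS's internal rmap btree query
interfaces:

/*
 * Hypothetical sketch: walk the reverse-mapping records covering the
 * bad daddr range and zap the user mappings of each owning file.
 * for_each_rmap_owner() and the cursor type are invented names.
 */
static void xfs_zap_corrupted_range(struct xfs_mount *mp,
				    xfs_daddr_t daddr, xfs_daddr_t bb_count)
{
	struct xfs_rmap_walk rw;	/* invented cursor type */
	struct xfs_inode *ip;
	loff_t off;
	u64 len;

	for_each_rmap_owner(mp, daddr, bb_count, &rw, &ip, &off, &len) {
		if (xfs_rmap_owner_is_metadata(&rw)) {
			/* metadata: repair or shut down, don't unmap */
			xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_ONDISK);
			break;
		}
		/* only the pages backing the bad extent get zapped */
		unmap_mapping_range(VFS_I(ip)->i_mapping, off, len, 1);
	}
}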

Between XFS and DAX and mm, the mm already has the invalidation calls,
xfs already has the distiller, and so all we need is that first bit.
The current mm code doesn't fully solve the problem, nor does it need
to, since it handles DRAM errors acceptably* already.

* Actually, the hwpoison code should _also_ be calling ->corrupted_range
when DRAM goes bad so that we can detect metadata failures and either
reload the buffer or (if it was dirty) shut down.

> >
> > .....
> >
> > > > IOWs, what needs to happen at this point is very filesystem
> > > > specific. Assuming that "device unplug == filesystem dead" is not
> > > > correct, nor is specifying a generic action that assumes the
> > > > filesystem is dead because a device it is using went away.
> > >
> > > Ok, I think I set this discussion in the wrong direction implying any
> > > mapping of this action to a "filesystem dead" event. It's just a "zap
> > > all ptes" event and upper layers recover from there.
> >
> > Yes, that's exactly what ->corrupt_range() is intended for. It
> > allows the filesystem to lock out access to the bad range
> > and then recover the data. Or metadata, if that's where the bad
> > range lands. If that recovery fails, it can then report a data
> > loss/filesystem shutdown event to userspace and kill user procs that
> > span the bad range...
> >
> > FWIW, is this notification going to occur before or after the device
> > has been physically unplugged?
> 
> Before. This will be operations that happen in the pmem driver
> ->remove() callback.
> 
> > i.e. what do we do about the
> > time-of-unplug-to-time-of-invalidation window where userspace can
> > still attempt to access the missing pmem through the
> > not-yet-invalidated ptes? It may not be likely that people just yank
> > pmem nvdimms out of machines, but with NVMe persistent memory
> > spaces, there's every chance that someone pulls the wrong device...
> 
> The physical removal aspect is only theoretical today. While the pmem
> driver has a ->remove() path that's purely a software unbind
> operation. That said the vulnerability window today is if a process
> acquires a dax mapping, the pmem device hosting that filesystem goes
> through an unbind / bind cycle, and then a new filesystem is created /
> mounted. That old pte may be able to access data that is outside its
> intended protection domain.
> 
> Going forward, for buses like CXL, there will be a managed physical
> remove operation via PCIE native hotplug. The flow there is that the
> PCIE hotplug driver will notify the OS of a pending removal, trigger
> ->remove() on the pmem driver, and then notify the technician (slot
> status LED) that the card is safe to pull.

Well, that's a relief.  Can we cancel longterm RDMA leases now too?
<duck>

--D
Dan Williams March 2, 2021, 3:33 a.m. UTC | #18
On Mon, Mar 1, 2021 at 6:42 PM Dave Chinner <david@fromorbit.com> wrote:
[..]
> We do not need a DAX specific mechanism to tell us "DAX device
> gone", we need a generic block device interface that tells us "range
> of block device is gone".

This is the crux of the disagreement. The block_device is going away
*and* the dax_device is going away. The dax_device removal implies one
set of actions (direct accessed pfns invalid); the block device removal
implies another (block layer sector access offline). corrupted_range
is blurring the notification for 2 different failure domains. Look at
the nascent idea to mount a filesystem on dax sans a block device.
Look at the existing plumbing for DM to map dax_operations through a
device stack. Look at the pushback Ruan got for adding a new
block_device operation for corrupted_range().

> The reason that the block device is gone is irrelevant to the
> filesystem. The type of block device is also irrelevant. If the
> filesystem isn't using DAX (e.g. dax=never mount option) even when
> it is on a DAX capable device, then it just doesn't care about
> invalidating DAX mappings because it has none. But it still may care
> about shutting down the filesystem because the block device is gone.

Sure, let's have a discussion about a block_device gone notification,
and a dax_device gone notification.

> This is why we need to communicate what error occurred, not what
> action a device driver thinks needs to be taken.

The driver is only an event producer in this model, whatever the
consumer does at the other end is not its concern. There may be a
generic consumer and a filesystem specific consumer.

> The error is
> important to the filesystem, the action might be completely
> irrelevant. And, as we know now, shutdown on DAX enable filesystems
> needs to imply DAX mapping invalidation in all cases, not just when
> the device disappears from under the filesystem.

Sure.
Dave Chinner March 2, 2021, 5:38 a.m. UTC | #19
On Mon, Mar 01, 2021 at 07:33:28PM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 6:42 PM Dave Chinner <david@fromorbit.com> wrote:
> [..]
> > We do not need a DAX specific mechanism to tell us "DAX device
> > gone", we need a generic block device interface that tells us "range
> > of block device is gone".
> 
> This is the crux of the disagreement. The block_device is going away
> *and* the dax_device is going away.

No, that is not the disagreement I have with what you are saying.
You still haven't understood that it's even more basic and generic
than devices going away. At the simplest form, all the filesystem
wants is to be notified of is when *unrecoverable media errors*
occur in the persistent storage that underlies the filesystem.

The filesystem does not care what that media is built from - PMEM,
flash, corroded spinning disks, MRAM, or any other persistent media
you can think of. It just doesn't matter.

What we care about is that the contents of a *specific LBA range* no
longer contain *valid data*. IOWs, the data in that range of the
block device has been lost, cannot be retrieved and/or cannot be
written to any more.

PMEM taking a MCE because ECC tripped is a media error because data
is lost and inaccessible until recovery actions are taken.

MD RAID failing a scrub is a media error and data is lost and
unrecoverable at that layer.

A device disappearing is a media error because the storage media is
now permanently inaccessible to the higher layers.

This "media error" categorisation is a fundamental property of
persistent storage and, as such, is a property of the block devices
used to access said persistent storage.

That's the disagreement here - that you and Christoph are saying
->corrupted_range is not a block device property because only a
pmem/DAX device currently generates it.

You both seem to be NACKing a generic interface because it's only
implemented for the first subsystem that needs it. AFAICT, you
either don't understand or are completely ignoring the architectural
need for it to be provided across the rest of the storage stack that
*block device based filesystems depend on*.

Sure, there might be dax device based filesystems around the corner.
They just require a different pmem device ->corrupted_range callout
to implement the notification - one that directs to the dax device
rather than the block device. That's simple and trivial to
implement, but such functionality for DAX devices does not replace
the need for the same generic functionality to be provided across a
*range of different block devices* as required by *block device
based filesystems*.

And that's fundamentally the problem. XFS is block device based, not
DAX device based. We require errors to be reported through block
device mechanisms. fs-dax does not change this - it is based on pmem
being presented primarily as a block device to the block device
based filesystems and only secondarily as a dax device. Hence if it
can be trivially implemented as a block device interface, that's
where it should go, because then all the other block devices that
the filesystem runs on can provide the same functionality for similar
media error events....

> The dax_device removal implies one
> set of actions (direct accessed pfns invalid); the block device removal
> implies another (block layer sector access offline).

There you go again, saying DAX requires an action, while the block
device notification is a -state change- (i.e. goes offline).

This is exactly what I said was wrong in my last email.

> corrupted_range
> is blurring the notification for 2 different failure domains. Look at
> the nascent idea to mount a filesystem on dax sans a block device.
> Look at the existing plumbing for DM to map dax_operations through a
> device stack.

Ummm, it just maps the direct_access call to the underlying device
and calls its ->direct_access method. All it's doing is LBA
mapping. That's all it needs to do for ->corrupted_range, too.
I have no clue why you think this is a problem for error
notification...

> Look at the pushback Ruan got for adding a new
> block_device operation for corrupted_range().

one person said "no". That's hardly pushback. Especially as I think
Christoph's objection about this being dax specific functionality
is simply wrong, as per above.

> > This is why we need to communicate what error occurred, not what
> > action a device driver thinks needs to be taken.
> 
> The driver is only an event producer in this model, whatever the
> consumer does at the other end is not its concern. There may be a
> generic consumer and a filesystem specific consumer.

<sigh>

That's why these are all ops functions that can provide multiple
implementations to different device types. So that when we get a new
use case, the ops function structure can be replaced with one that
directs the notification to the new user instead of to the existing
one. It's a design pattern we use all over the kernel code.

Cheers,

Dave.
Dan Williams March 2, 2021, 5:41 a.m. UTC | #20
On Mon, Mar 1, 2021 at 7:28 PM Darrick J. Wong <djwong@kernel.org> wrote:
>
> On Mon, Mar 01, 2021 at 12:55:53PM -0800, Dan Williams wrote:
> > On Sun, Feb 28, 2021 at 2:39 PM Dave Chinner <david@fromorbit.com> wrote:
> > >
> > > On Sat, Feb 27, 2021 at 03:40:24PM -0800, Dan Williams wrote:
> > > > On Sat, Feb 27, 2021 at 2:36 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > On Fri, Feb 26, 2021 at 02:41:34PM -0800, Dan Williams wrote:
> > > > > > On Fri, Feb 26, 2021 at 1:28 PM Dave Chinner <david@fromorbit.com> wrote:
> > > > > > > On Fri, Feb 26, 2021 at 12:59:53PM -0800, Dan Williams wrote:
> > > > > it points to, check if it points to the PMEM that is being removed,
> > > > > grab the page it points to, map that to the relevant struct page,
> > > > > run collect_procs() on that page, then kill the user processes that
> > > > > map that page.
> > > > >
> > > > > So why can't we walk the ptes, check the physical pages that they
> > > > > map to and if they map to a pmem page we go poison that
> > > > > page and that kills any user process that maps it.
> > > > >
> > > > > i.e. I can't see how unexpected pmem device unplug is any different
> > > > > to an MCE delivering a hwpoison event to a DAX mapped page.
> > > >
> > > > I guess the tradeoff is walking a long list of inodes vs walking a
> > > > large array of pages.
> > >
> > > Not really. You're assuming all a filesystem has to do is invalidate
> > > everything if a device goes away, and that's not true. Finding if an
> > > inode has a mapping that spans a specific device in a multi-device
> > > filesystem can be a lot more complex than that. Just walking inodes
> > > is easy - determining which inodes need invalidation is the hard
> > > part.
> >
> > That inode-to-device level of specificity is not needed for the same
> > reason that drop_caches does not need to be specific. If the wrong
> > page is unmapped a re-fault will bring it back, and re-fault will fail
> > for the pages that are successfully removed.
> >
> > > That's where ->corrupt_range() comes in - the filesystem is already
> > > set up to do reverse mapping from physical range to inode(s)
> > > offsets...
> >
> > Sure, but what is the need to get to that level of specificity with
> > the filesystem for something that should rarely happen in the course
> > of normal operation outside of a mistake?
>
> I can't tell if we're conflating the "a bunch of your pmem went bad"
> case with the "all your dimms fell out of the machine" case.

From the pmem driver perspective it has the media scanning to find
some small handful of cachelines that have gone bad, and it has the
driver ->remove() callback to tell it a bunch of pmem is now offline.
The NVDIMM device "range has gone bad" mechanism has no way to
communicate multiple terabytes have gone bad at once.

In fact I think it is an important distinction that ->remove() is not
treated as ->corrupted_range(), because I expect the level of freakout
is much worse for a "your storage is offline" notification than for a
"your storage is corrupted" notification.

> If, say, a single cacheline's worth of pmem goes bad on a node with 2TB
> of pmem, I certainly want that level of specificity.  Just notify the
> users of the dead piece, don't flush the whole machine down the drain.

Right, something like corrupted_range() is there to say, "keep going
upper layers, but note that this handful of sectors now has
indeterminate data and will return -EIO on access until repaired". The
repair for device-offline is device-online.

>
> > > > There's likely always more pages than inodes, but perhaps it's more
> > > > efficient to walk the 'struct page' array than sb->s_inodes?
> > >
> > > I really don't see why you seem to be telling us that invalidation is an
> > > either/or choice. There's more ways to convert physical block
> > > address -> inode file offset and mapping index than brute force
> > > inode cache walks....
> >
> > Yes, but I was trying to map it to an existing mechanism and the
> > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > needs to happen here.
>
> Yes.  XFS (with rmap enabled) can do all the iteration and walking in
> that function except for the invalidate_mapping_* call itself.  The goal
> of this series is first to wire up a callback within both the block and
> pmem subsystems so that they can take notifications and reverse-map them
> through the storage stack until they reach an fs superblock.

I'm chuckling because this "reverse map all the way up the block
layer" is the opposite of what Dave said at the first reaction to my
proposal, "can't the mm map pfns to fs inode  address_spaces?".

I think dax unmap is distinct from corrupted_range() precisely because
they are events happening in two different domains, block device
sectors vs dax device pfns.

Let's step back. I think a chain of ->corrupted_range() callbacks up
the block stack terminating in the filesystem with dax implications
tacked on is the wrong abstraction. Why not use the existing generic
object for communicating bad sector ranges, 'struct badblocks'?

Today, whenever the pmem driver receives a new corrupted range
notification from the lower level nvdimm
infrastructure (nd_pmem_notify), it updates the 'badblocks' instance
associated with the pmem gendisk and then notifies userspace that
there are new badblocks. This seems a perfect place to signal an upper
level stacked block device that may also be watching disk->bb. Then
each gendisk in a stacked topology is responsible for watching the
badblock notifications of the next level and storing a remapped
instance of those blocks until ultimately the filesystem mounted on
the top-level block device is responsible for registering for those
top-level disk->bb events.
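
For illustration, the recording side of that could look roughly like the
below (a hedged sketch using the badblocks_set() helper; the function and
its caller are made up, this is not the actual pmem driver code):

#include <linux/badblocks.h>
#include <linux/genhd.h>

/*
 * Illustrative sketch: record a newly-reported poison range in the
 * disk's badblocks instance so anything watching disk->bb -- userspace
 * or a stacked block device -- can discover it.
 */
static void note_poisoned_range(struct gendisk *disk, sector_t sector,
				int nr_sectors)
{
	/* acknowledged == 1: the range is known bad, not merely suspect */
	if (badblocks_set(disk->bb, sector, nr_sectors, 1))
		pr_warn("%s: failed to record %d bad sectors at %llu\n",
			disk->disk_name, nr_sectors,
			(unsigned long long)sector);
}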

The device gone notification does not map cleanly onto 'struct badblocks'.

If an upper level agent really cared about knowing about ->remove()
events before they happened it could maybe do something like:

dev = disk_to_dev(bdev->bd_disk)->parent;
bus_register_notifier(dev->bus, &disk_host_device_notifier_block)

...where it's trying to watch for events that will trigger the driver
->remove() callback on the device hosting a disk.

I still don't think that solves the need for a separate mechanism for
global dax_device pte invalidation.

I think that global dax_device invalidation needs new kernel
infrastructure to allow internal users, like dm-writecache and future
filesystems using dax for metadata, to take a fault when pmem is
offlined. They can't use the direct-map because the direct-map can't
fault, and they can't indefinitely pin metadata pages because that
blocks ->remove() from being guaranteed of forward progress.

Then an invalidation event is indeed a walk of address_space like
objects where some are fs-inodes and some are kernel-mode dax-users,
and that remains independent from remove events and badblocks
notifications because they are independent objects and events.

In contrast I think calling something like soft_offline_page() a pfn
at a time over terabytes will take forever especially when that event
need not fire if the dax_device is not mounted.

> Once the information has reached XFS, it can use its own reverse
> mappings to figure out which pages of which inodes are now targeted.

It has its own sector-based reverse mappings; it does not have a pfn reverse map.

> The future of DAX hw error handling can be that you throw the spitwad at
> us, and it's our problem to distill that into mm invalidation calls.
> XFS' reverse mapping data is indexed by storage location and isn't
> sharded by address_space, so (except for the DIMMs falling out), we
> don't need to walk the entire inode list or scan the entire mapping.

->remove() is effectively all the DIMMs falling out for all XFS knows.

> Between XFS and DAX and mm, the mm already has the invalidation calls,
> xfs already has the distiller, and so all we need is that first bit.
> The current mm code doesn't fully solve the problem, nor does it need
> to, since it handles DRAM errors acceptably* already.
>
> * Actually, the hwpoison code should _also_ be calling ->corrupted_range
> when DRAM goes bad so that we can detect metadata failures and either
> reload the buffer or (if it was dirty) shut down.
[..]
> > Going forward, for buses like CXL, there will be a managed physical
> > remove operation via PCIE native hotplug. The flow there is that the
> > PCIE hotplug driver will notify the OS of a pending removal, trigger
> > ->remove() on the pmem driver, and then notify the technician (slot
> > status LED) that the card is safe to pull.
>
> Well, that's a relief.  Can we cancel longterm RDMA leases now too?
> <duck>

Yes, all problems can be solved with more blinky lights.
Dan Williams March 2, 2021, 5:50 a.m. UTC | #21
On Mon, Mar 1, 2021 at 9:38 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, Mar 01, 2021 at 07:33:28PM -0800, Dan Williams wrote:
> > On Mon, Mar 1, 2021 at 6:42 PM Dave Chinner <david@fromorbit.com> wrote:
> > [..]
> > > We do not need a DAX specific mechanism to tell us "DAX device
> > > gone", we need a generic block device interface that tells us "range
> > > of block device is gone".
> >
> > This is the crux of the disagreement. The block_device is going away
> > *and* the dax_device is going away.
>
> No, that is not the disagreement I have with what you are saying.
> You still haven't understood that it's even more basic and generic
> than devices going away. In its simplest form, all the filesystem
> wants is to be notified when *unrecoverable media errors*
> occur in the persistent storage that underlies the filesystem.
>
> The filesystem does not care what that media is built from - PMEM,
> flash, corroded spinning disks, MRAM, or any other persistent media
> you can think of. It just doesn't matter.
>
> What we care about is that the contents of a *specific LBA range* no
> longer contain *valid data*. IOWs, the data in that range of the
> block device has been lost, cannot be retrieved and/or cannot be
> written to any more.
>
> PMEM taking a MCE because ECC tripped is a media error because data
> is lost and inaccessible until recovery actions are taken.
>
> MD RAID failing a scrub is a media error and data is lost and
> unrecoverable at that layer.
>
> A device disappearing is a media error because the storage media is
> now permanently inaccessible to the higher layers.
>
> This "media error" categorisation is a fundamental property of
> persistent storage and, as such, is a property of the block devices
> used to access said persistent storage.
>
> That's the disagreement here - that you and Christoph are saying
> ->corrupted_range is not a block device property because only a
> pmem/DAX device currently generates it.
>
> You both seem to be NACKing a generic interface because it's only
> implemented for the first subsystem that needs it. AFAICT, you
> either don't understand or are completely ignoring the architectural
> need for it to be provided across the rest of the storage stack that
> *block device based filesystems depend on*.

No I'm NAKing it because it's the wrong interface. See my 'struct
badblocks' argument in the reply to Darrick. That 'struct badblocks'
infrastructure arose from MD and is shared with PMEM.

>
> Sure, there might be dax device based filesystems around the corner.
> They just require a different pmem device ->corrupted_range callout
> to implement the notification - one that directs to the dax device
> rather than the block device. That's simple and trivial to
> implement, but such functionality for DAX devices does not replace
> the need for the same generic functionality to be provided across a
> *range of different block devices* as required by *block device
> based filesystems*.
>
> And that's fundamentally the problem. XFS is block device based, not
> DAX device based. We require errors to be reported through block
> device mechanisms. fs-dax does not change this - it is based on pmem
> being presented primarily as a block device to the block device
> based filesystems and only secondarily as a dax device. Hence if it
> can be trivially implemented as a block device interface, that's
> where it should go, because then all the other block devices that
> the filesystem runs on can provide the same functionality for similar
> media error events....

Sure, use 'struct badblocks' not struct block_device and
block_device_operations.
>
> > The dax_device removal implies one
> > set of actions (direct accessed pfns invalid) the block device removal
> > implies another (block layer sector access offline).
>
> There you go again, saying DAX requires an action, while the block
> device notification is a -state change- (i.e. goes offline).

There you go reacting to the least generous interpretation of what I said.

s/pfns invalid/pfns offline/

>
> This is exactly what I said was wrong in my last email.
>
> > corrupted_range
> > is blurring the notification for 2 different failure domains. Look at
> > the nascent idea to mount a filesystem on dax sans a block device.
> > Look at the existing plumbing for DM to map dax_operations through a
> > device stack.
>
> Ummm, it just maps the direct_access call to the underlying device
> and calls its ->direct_access method. All it's doing is LBA
> mapping. That's all it needs to do for ->corrupted_range, too.
> I have no clue why you think this is a problem for error
> notification...
>
> > Look at the pushback Ruan got for adding a new
> > block_device operation for corrupted_range().
>
> one person said "no". That's hardly pushback. Especially as I think
> Christoph's objection about this being dax specific functionality
> is simply wrong, as per above.

It's not wrong when we have a perfectly suitable object for sector
based error notification and when we're trying to disentangle 'struct
block_device' from 'struct dax_device'.
Dave Chinner March 2, 2021, 7:57 a.m. UTC | #22
On Mon, Mar 01, 2021 at 09:41:02PM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 7:28 PM Darrick J. Wong <djwong@kernel.org> wrote:
> > > > I really don't see you seem to be telling us that invalidation is an
> > > > either/or choice. There's more ways to convert physical block
> > > > address -> inode file offset and mapping index than brute force
> > > > inode cache walks....
> > >
> > > Yes, but I was trying to map it to an existing mechanism and the
> > > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > > needs to happen here.
> >
> > Yes.  XFS (with rmap enabled) can do all the iteration and walking in
> > that function except for the invalidate_mapping_* call itself.  The goal
> > of this series is first to wire up a callback within both the block and
> > pmem subsystems so that they can take notifications and reverse-map them
> > through the storage stack until they reach an fs superblock.
> 
> I'm chuckling because this "reverse map all the way up the block
> layer" is the opposite of what Dave said at the first reaction to my
> proposal, "can't the mm map pfns to fs inode  address_spaces?".

Ah, no, I never said that the filesystem can't do reverse maps. I
was asking if the mm could directly (brute-force) invalidate PTEs
pointing at physical pmem ranges without needing to walk the inode
mappings. That would be far more efficient if it could be done....

> Today whenever the pmem driver receives new corrupted range
> notification from the lower level nvdimm
> infrastructure(nd_pmem_notify) it updates the 'badblocks' instance
> associated with the pmem gendisk and then notifies userspace that
> there are new badblocks. This seems a perfect place to signal an upper
> level stacked block device that may also be watching disk->bb. Then
> each gendisk in a stacked topology is responsible for watching the
> badblock notifications of the next level and storing a remapped
> instance of those blocks until ultimately the filesystem mounted on
> the top-level block device is responsible for registering for those
> top-level disk->bb events.
> 
> The device gone notification does not map cleanly onto 'struct badblocks'.

Filesystems are not allowed to interact with the gendisk
infrastructure - that's for supporting the device side of a block
device. It's a layering violation, and many a filesystem developer
has been shouted at for trying to do this. At most we can peek
through it to query functionality support from the request queue,
but otherwise filesystems do not interact with anything under
bdev->bd_disk.

As it is, badblocks are used by devices to manage internal state.
e.g. md for recording stripes that need recovery if the system
crashes while they are being written out.

> If an upper level agent really cared about knowing about ->remove()
> events before they happened it could maybe do something like:
> 
> dev = disk_to_dev(bdev->bd_disk)->parent;
> bus_register_notifier(dev->bus, &disk_host_device_notifier_block)

Yeah, that's exactly the sort of thing that filesystems have been
aggressively discouraged from doing for years.

Part of the reason for this is that gendisk based mechanisms are not
very good for stacked device error reporting. Part of the problem
here is that every layer of the stacked device has to hook the
notifier of the block devices underneath it, then translate the
event to match the upper block device map, then regenerate the
notification for the next layer up. This isn't an efficient way to
pass a notification through a series of stacked devices and it is
messy and cumbersome to maintain.

It can be effective for getting notifications to userspace about
something that happens to a specific block device. But The userspace
still ends up having to solve the "what does this error resolve to"
problem. i.e. Userspace still needs to map that notification to a
filesystem, and for data loss events map it to objects within the
filesystem, which can be extremely expensive to do from userspace.

This is exactly the sort of userspace error reporting mess that
various projects have asked us to try to fix. Plumbing errors
internally through the kernel up to the filesystem where the
filesytem can point directly to the user data that is affected is a
simple, effective solution to the problem. Especially if we then
have a generic error notification mechanism for filesystems to emit
errors to registered userspace watchers...

> I still don't think that solves the need for a separate mechanism for
> global dax_device pte invalidation.

It's just another type of media error because.....

> I think that global dax_device invalidation needs new kernel
> infrastructure to allow internal users, like dm-writecache and future
> filesystems using dax for metadata, to take a fault when pmem is
> offlined.

.... if userspace has directly mapped into the cache, and the cache
storage goes away, the userspace app has to be killed because we
have no idea if the device going away has caused data loss or not.
IOWs, if userspace writes direct to the cache device and it hasn't
been written back to other storage when it gets yanked, we have just
caused data corruption to occur.

At minimum, we now have to tell the filesystem that the dirty data
in the cache is now bad, and direct map applications that map those
dirty ranges need to be killed because their backing store is no
longer valid nor does the backup copy contain the data they last
wrote. Nor is it accessible by direct access, which is going to be
interesting because dynamically changing dax to non-dax access can't
be done without forcibly kicking the inode out of the cache. That
requires all references to the inode to go away. And that means the
event really has to go up to the filesystem.

But I think the biggest piece of the puzzle that you haven't grokked
here is that the dm cache device isn't a linear map - it's made up of
random ranges from the underlying devices. Hence the "remove" of a dm
cache device turns into a huge number of small, sparse corrupt
ranges, not a single linear device remove event.

IOWs, device unplug/remove events are not just simple "pass it on"
events in a stacked storage setup. There can be non-trivial mappings
through the layers, and device disappearance may in fact manifest to
the user as data corruption rather than causing data to be
inaccessible.

Hence "remove" notifications just don't work in the storage stack.
They need to be translated to block ranges going bad (i.e.  media
errors), and reported to higher layers as bad ranges, not as device
removal.

The same goes for DAX devices. The moment they can be placed in
storage stacks in non-trivial configurations and/or used as cache
devices that can be directly accessed over traditional block
devices, we end up with error conditions that can only be mapped as
ranges of blocks that have gone bad.

Cheers,

Dave.
Dan Williams March 2, 2021, 5:49 p.m. UTC | #23
On Mon, Mar 1, 2021 at 11:57 PM Dave Chinner <david@fromorbit.com> wrote:
>
> On Mon, Mar 01, 2021 at 09:41:02PM -0800, Dan Williams wrote:
> > On Mon, Mar 1, 2021 at 7:28 PM Darrick J. Wong <djwong@kernel.org> wrote:
> > > > > I really don't see you seem to be telling us that invalidation is an
> > > > > either/or choice. There's more ways to convert physical block
> > > > > address -> inode file offset and mapping index than brute force
> > > > > inode cache walks....
> > > >
> > > > Yes, but I was trying to map it to an existing mechanism and the
> > > > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > > > needs to happen here.
> > >
> > > Yes.  XFS (with rmap enabled) can do all the iteration and walking in
> > > that function except for the invalidate_mapping_* call itself.  The goal
> > > of this series is first to wire up a callback within both the block and
> > > pmem subsystems so that they can take notifications and reverse-map them
> > > through the storage stack until they reach an fs superblock.
> >
> > I'm chuckling because this "reverse map all the way up the block
> > layer" is the opposite of what Dave said at the first reaction to my
> > proposal, "can't the mm map pfns to fs inode  address_spaces?".
>
> Ah, no, I never said that the filesystem can't do reverse maps. I
> was asking if the mm could directly (brute-force) invalidate PTEs
> pointing at physical pmem ranges without needing to walk the inode
> mappings. That would be far more efficient if it could be done....
>
> > Today whenever the pmem driver receives new corrupted range
> > notification from the lower level nvdimm
> > infrastructure(nd_pmem_notify) it updates the 'badblocks' instance
> > associated with the pmem gendisk and then notifies userspace that
> > there are new badblocks. This seems a perfect place to signal an upper
> > level stacked block device that may also be watching disk->bb. Then
> > each gendisk in a stacked topology is responsible for watching the
> > badblock notifications of the next level and storing a remapped
> > instance of those blocks until ultimately the filesystem mounted on
> > the top-level block device is responsible for registering for those
> > top-level disk->bb events.
> >
> > The device gone notification does not map cleanly onto 'struct badblocks'.
>
> Filesystems are not allowed to interact with the gendisk
> infrastructure - that's for supporting the device side of a block
> device. It's a layering violation, and many a filesystem developer
> has been shouted at for trying to do this. At most we can peek
> through it to query functionality support from the request queue,
> but otherwise filesystems do not interact with anything under
> bdev->bd_disk.

So let's add an API that allows the querying of badblocks by bdev and
let the block core handle the bd_disk interaction. I see other block
functionality like blk-integrity reaching through gendisk. The fs need
not interact with the gendisk directly.
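
Something with the shape of the below, say (entirely hypothetical --
bdev_check_badblocks() does not exist today, it is only meant to show where
the layering could sit; a real version would also have to translate for
partition offsets):

#include <linux/badblocks.h>
#include <linux/blkdev.h>

/*
 * Hypothetical helper: let a filesystem ask "is any of this LBA range
 * known bad?" while the block core owns the gendisk access.
 */
int bdev_check_badblocks(struct block_device *bdev, sector_t sector,
			 int nr_sectors, sector_t *first_bad, int *bad_sectors)
{
	struct badblocks *bb = bdev->bd_disk->bb;

	if (!bb)
		return 0;	/* the driver keeps no badblocks list */

	return badblocks_check(bb, sector, nr_sectors, first_bad, bad_sectors);
}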

>
> As it is, badblocks are used by devices to manage internal state.
> e.g. md for recording stripes that need recovery if the system
> crashes while they are being written out.

I know, I was there when it was invented which is why it was
top-of-mind when pmem had a need to communicate badblocks. Other block
drivers have threatened to use it for badblocks tracking, but none of
those have carried through on that initial interest.

>
> > If an upper level agent really cared about knowing about ->remove()
> > events before they happened it could maybe do something like:
> >
> > dev = disk_to_dev(bdev->bd_disk)->parent;
> > bus_register_notifier(dev->bus, &disk_host_device_notifier_block)
>
> Yeah, that's exactly the sort of thing that filesystems have been
> aggressively discouraged from doing for years.

Yup, it's a layering violation.

> Part of the reason for this is that gendisk based mechanisms are not
> very good for stacked device error reporting. Part of the problem
> here is that every layer of the stacked device has to hook the
> notifier of the block devices underneath it, then translate the
> event to match the upper block device map, then regenerate the
> notification for the next layer up. This isn't an efficient way to
> pass a notification through a series of stacked devices and it is
> messy and cumbersome to maintain.

It's been messy and cumbersome to route new infrastructure through DM
every time a new dax_operation arrives. The corrupted_range() routing
has the same burden. The advantage of badblocks over corrupted_range()
is that it solves the "what If I miss a notification" problem. Each
layer of the stack maintains its sector translation of the next level
errors.
> It can be effective for getting notifications to userspace about
> something that happens to a specific block device.

No, it's not block device specific, it's stuck at the disk level. The
user notification aspect was added for pmem at the disk layer because
IIRC it was NAKd to add it to the block_device itself.

>
> But The userspace
> still ends up having to solve the "what does this error resolve to"
> problem. i.e. Userspace still needs to map that notification to a
> filesystem, and for data loss events map it to objects within the
> filesystem, which can be extremely expensive to do from userspace.

Expensive and vulnerable to TOCTOU, this has been the motivation for
filesystem native awareness of these errors from the beginning.

> This is exactly the sort of userspace error reporting mess that
> various projects have asked us to try to fix. Plumbing errors
> internally through the kernel up to the filesystem where the
> filesytem can point directly to the user data that is affected is a
> simple, effective solution to the problem. Especially if we then
> have a generic error notification mechanism for filesystems to emit
> errors to registered userspace watchers...

Agree, that's the dream worth pursuing.

>
> > I still don't think that solves the need for a separate mechanism for
> > global dax_device pte invalidation.
>
> It's just another type of media error because.....
>
> > I think that global dax_device invalidation needs new kernel
> > infrastructure to allow internal users, like dm-writecache and future
> > filesystems using dax for metadata, to take a fault when pmem is
> > offlined.
>
> .... if userspace has directly mapped into the cache, and the cache
> storage goes away, the userspace app has to be killed because we
> have no idea if the device going away has caused data loss or not.
> IOWs, if userspace writes direct to the cache device and it hasn't
> been written back to other storage when it gets yanked, we have just
> caused data corruption to occur.

If userspace has it direct mapped dirty in the cache when the remove
fires, there is no opportunity to flush the cache. Just as there is no
opportunity today with non-DAX and the page cache. The block-queue
will be invalidated and any dirty in page cache is stranded.

> At minimum, we now have to tell the filesystem that the dirty data
> in the cache is now bad, and direct map applications that map those
> dirty ranges need to be killed because their backing store is no
> longer valid nor does the backup copy contain the data they last
> wrote. Nor is it accessible by direct access, which is going to be
> interesting because dynamically changing dax to non-dax access can't
> be done without forcibly kicking the inode out of the cache. That
> requires all references to the inode to go away. And that means the
> event really has to go up to the filesystem.
>
> But I think the biggest piece of the puzzle that you haven't grokked
> here is that the dm cache device isn't a linear map - it's made up of
> random ranges from the underlying devices. Hence the "remove" of a dm
> cache device turns into a huge number of small, sparse corrupt
> ranges, not a single linear device remove event.

I am aware that DM is non-linear. The other non-linearity is sector-to-pfn.

> IOWs, device unplug/remove events are not just simple "pass it on"
> events in a stacked storage setup. There can be non-trivial mappings
> through the layers, and device disappearance may in fact manifest to
> the user as data corruption rather than causing data to be
> inaccessible.

Even MD does not rely on component device notifications for failure
notifications, it waits for write-errors, and yes losing a component
of a raid0 is more than a data offline event.

> Hence "remove" notifications just don't work in the storage stack.
> They need to be translated to block ranges going bad (i.e.  media
> errors), and reported to higher layers as bad ranges, not as device
> removal.

Yes, the generic top-level remove event is pretty much useless for
both the dax pte invalidation and lba range offline notification. I'm
distinguishing that from knock on events that fire in response to
->remove() triggering on the disk driver which seems to be where you
are at as well with the idea to trigger ->corrupted_range(0, EOD) from
->remove().

There's 2 ways to view the "filesystems have wanted proactive
notification of remove events from storage for a long time". There's
either enough pent up demand to convince all parties to come to the
table and get something done, or there's too much momentum with the
status quo to overcome.

I do not think it is fair to ask Ruan to solve a problem with brand
new plumbing that the Linux storage community has not seen fit to
address for a decade. Not when disk->bb is already plumbed without
anyone complaining about it.

> The same goes for DAX devices. The moment they can be placed in
> storage stacks in non-trivial configurations and/or used as cache
> devices that can be directly accessed over traditional block
> devices, we end up with error conditions that can only be mapped as
> ranges of blocks that have gone bad.

I see plumbing corrupted_range() and using it to communicate removal
in addition to badblocks in addition to bad pfns as a revolutionary
change. A reuse of disk->bb for communicating poison sector discovery
events up the stack and a separate facility to invalidate dax devices
as evolutionary. The evolutionary change does not preclude the
eventual revolutionary change, but it has a better chance of making
forward progress in the near term.
Christoph Hellwig March 3, 2021, 9:39 a.m. UTC | #24
On Wed, Mar 03, 2021 at 08:45:14AM +0000, ruansy.fnst@fujitsu.com wrote:
> Sorry for the confusion. This is because I misunderstood how I should
> use iomap_apply2(). I have re-sent two new patches to fix this (PATCH 08/10)
> and the previous one (PATCH 07/10), sent in-reply-to those two patches; please
> take a look at them.  Maybe I should resend the whole patchset as a
> new one...

I haven't found any resent patch in my inbox yet, but then again
various mail servers seem to malfunction in the last days..
Christoph Hellwig March 3, 2021, 9:44 a.m. UTC | #25
On Wed, Mar 03, 2021 at 09:41:54AM +0000, ruansy.fnst@fujitsu.com wrote:
> 
> > >
> > >       if (dirty)
> > >               __mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
> > 
> > I still think the __mark_inode_dirty should just be moved into the one
> > caller that needs it.
> 
> I found that the dirty flag is used in the next few lines, so I kept
> this call inside. If I move it outside, the dirty flag would have to be passed
> in as well. 
> 
> @@ -774,6 +780,9 @@ static void *dax_insert_entry(struct xa_state *xas,
>          if (dirty)
>                  xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
>  
> +       if (cow)
> +               xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
> +
>          xas_unlock_irq(xas);
>          return entry;
> }
> 
> 
> So, may I ask what your purpose is in doing it that way?

Oh, true.  We can't just move that out as the xas needs to stay
locked.
Shiyang Ruan March 3, 2021, 9:46 a.m. UTC | #26
> 
> On Wed, Mar 03, 2021 at 08:45:14AM +0000, ruansy.fnst@fujitsu.com wrote:
> > Sorry for the confusion. This is because I misunderstood how I should
> > use iomap_apply2(). I have re-sent two new patches to fix this (PATCH 08/10)
> > and the previous one (PATCH 07/10), sent in-reply-to those two patches; please
> > take a look at them.  Maybe I should resend the whole patchset as a
> > new one...
> 
> I haven't found any resent patch in my inbox yet, but then again
> various mail servers seem to malfunction in the last days..
> 

Sorry again for that.  Let me increase the version number and resend them.


--
Thanks,
Ruan Shiyang.
Shiyang Ruan March 3, 2021, 9:57 a.m. UTC | #27
> 
> On Fri, Feb 26, 2021 at 08:20:29AM +0800, Shiyang Ruan wrote:
> >       error = iomap_zero_range(VFS_I(ip), offset, len, NULL,
> > -                     &xfs_buffered_write_iomap_ops);
> > +               IS_DAX(VFS_I(ip)) ?
> > +               &xfs_dax_write_iomap_ops : &xfs_buffered_write_iomap_ops);
> 
> Please add a xfs_zero_range helper that picks the right iomap_ops
> instead of open coding this in a few places.

OK.  I'll add it.
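
Something like this, perhaps (only a sketch built from the call sites in
this patch; xfs_dax_write_iomap_ops is the ops structure this series adds):

/*
 * Pick the right iomap_ops for zeroing based on whether the inode is
 * in DAX mode, so callers don't open-code the choice.
 */
static int
xfs_zero_range(
	struct xfs_inode	*ip,
	loff_t			pos,
	loff_t			len,
	bool			*did_zero)
{
	struct inode		*inode = VFS_I(ip);

	return iomap_zero_range(inode, pos, len, did_zero,
			IS_DAX(inode) ? &xfs_dax_write_iomap_ops
				      : &xfs_buffered_write_iomap_ops);
}
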
> 
> > +static int
> > +xfs_dax_write_iomap_end(
> > +     struct inode            *inode,
> > +     loff_t                  pos,
> > +     loff_t                  length,
> > +     ssize_t                 written,
> > +     unsigned int            flags,
> > +     struct iomap            *iomap)
> > +{
> > +     int                     error = 0;
> > +     xfs_inode_t             *ip = XFS_I(inode);
> > +
> > +     if (pos + written > i_size_read(inode)) {
> > +             i_size_write(inode, pos + written);
> > +             error = xfs_setfilesize(ip, pos, written);
> > +     }
> > +     if (xfs_is_cow_inode(ip))
> > +             error = xfs_reflink_end_cow(ip, pos, written);
> > +
> > +     return error;
> 
> What is the advantage of the iomap_end handler here?  It adds another
> indirect function call to the fast path, so if we can avoid it, I'd
> rather do that.

This code was in xfs_file_dax_write().  I moved it into the iomap_end
handler because the mmapped CoW path needs it.

I know this is not so good, but I could not find a better way. Do you
have any ideas? 

>
> Also, shouldn't we cancel the COW rather than finishing it when setting
> the file size fails?
> 

I did forget about this part.  Thanks for pointing it out.
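
Maybe something like the following for the next version (a sketch only;
whether cancelling should be xfs_reflink_cancel_cow_range() with cancel_real
set, and over exactly this range, is my assumption):

static int
xfs_dax_write_iomap_end(
	struct inode		*inode,
	loff_t			pos,
	loff_t			length,
	ssize_t			written,
	unsigned int		flags,
	struct iomap		*iomap)
{
	struct xfs_inode	*ip = XFS_I(inode);
	int			error = 0;

	if (pos + written > i_size_read(inode)) {
		i_size_write(inode, pos + written);
		error = xfs_setfilesize(ip, pos, written);
	}

	if (xfs_is_cow_inode(ip)) {
		if (error) {
			/*
			 * Updating the on-disk size failed, so throw the
			 * COW fork extents away instead of remapping them
			 * into the data fork.
			 */
			xfs_reflink_cancel_cow_range(ip, pos, written, true);
		} else {
			error = xfs_reflink_end_cow(ip, pos, written);
		}
	}

	return error;
}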


--
Thanks,
Ruan Shiyang.
Christoph Hellwig March 3, 2021, 10:43 a.m. UTC | #28
On Wed, Mar 03, 2021 at 09:57:48AM +0000, ruansy.fnst@fujitsu.com wrote:
> > What is the advantage of the iomap_end handler here?  It adds another
> > indirect function call to the fast path, so if we can avoid it, I'd
> > rather do that.
> 
> This code was in xfs_file_dax_write().  I moved it into the iomap_end
> handler because the mmapped CoW path needs it.
> 
> I know this is not so good, but I could not find a better way. Do you
> have any ideas? 

mmaped copy is the copy_edge case?  Maybe just use different iomap_ops for
that case vs plain write?
Shiyang Ruan March 4, 2021, 1:35 a.m. UTC | #29
> On Wed, Mar 03, 2021 at 09:57:48AM +0000, ruansy.fnst@fujitsu.com wrote:
> > > What is the advantage of the iomap_end handler here?  It adds another
> > > indirect function call to the fast path, so if we can avoid it, I'd
> > > rather do that.
> >
> > This code was in xfs_file_dax_write().  I moved it into the iomap_end
> > handler because the mmapped CoW path needs it.
> >
> > I know this is not so good, but I could not find a better way. Do you
> > have any ideas?
> mmaped copy is the copy_edge case?  Maybe just use different iomap_ops for
> that case vs plain write?

No, I mean mmapped CoW needs an xfs_reflink_end_cow() call to make sure the new
extent will be correctly remapped to the file.  Otherwise, the file will still
refer to the extent that srcmap points to.

We are able to call this in xfs_file_dax_write(), but cannot call it anywhere
except iomap_end in the mmap path.


--
Thanks,
Ruan Shiyang.
Darrick J. Wong March 4, 2021, 11:40 p.m. UTC | #30
On Tue, Mar 02, 2021 at 09:49:30AM -0800, Dan Williams wrote:
> On Mon, Mar 1, 2021 at 11:57 PM Dave Chinner <david@fromorbit.com> wrote:
> >
> > On Mon, Mar 01, 2021 at 09:41:02PM -0800, Dan Williams wrote:
> > > On Mon, Mar 1, 2021 at 7:28 PM Darrick J. Wong <djwong@kernel.org> wrote:
> > > > > > I really don't see you seem to be telling us that invalidation is an
> > > > > > either/or choice. There's more ways to convert physical block
> > > > > > address -> inode file offset and mapping index than brute force
> > > > > > inode cache walks....
> > > > >
> > > > > Yes, but I was trying to map it to an existing mechanism and the
> > > > > internals of drop_pagecache_sb() are, in coarse terms, close to what
> > > > > needs to happen here.
> > > >
> > > > Yes.  XFS (with rmap enabled) can do all the iteration and walking in
> > > > that function except for the invalidate_mapping_* call itself.  The goal
> > > > of this series is first to wire up a callback within both the block and
> > > > pmem subsystems so that they can take notifications and reverse-map them
> > > > through the storage stack until they reach an fs superblock.
> > >
> > > I'm chuckling because this "reverse map all the way up the block
> > > layer" is the opposite of what Dave said at the first reaction to my
> > > proposal, "can't the mm map pfns to fs inode  address_spaces?".
> >
> > Ah, no, I never said that the filesystem can't do reverse maps. I
> > was asking if the mm could directly (brute-force) invalidate PTEs
> > pointing at physical pmem ranges without needing to walk the inode
> > mappings. That would be far more efficient if it could be done....

So, uh, /can/ the kernel brute-force invalidate PTEs when the pmem
driver says that something died?  Part of what's keeping me from putting
together a coherent vision for how this would work is my relative
unfamiliarity with all things mm/.

> > > Today whenever the pmem driver receives new corrupted range
> > > notification from the lower level nvdimm
> > > infrastructure(nd_pmem_notify) it updates the 'badblocks' instance
> > > associated with the pmem gendisk and then notifies userspace that
> > > there are new badblocks. This seems a perfect place to signal an upper
> > > level stacked block device that may also be watching disk->bb. Then
> > > each gendisk in a stacked topology is responsible for watching the
> > > badblock notifications of the next level and storing a remapped
> > > instance of those blocks until ultimately the filesystem mounted on
> > > the top-level block device is responsible for registering for those
> > > top-level disk->bb events.
> > >
> > > The device gone notification does not map cleanly onto 'struct badblocks'.
> >
> > Filesystems are not allowed to interact with the gendisk
> > infrastructure - that's for supporting the device side of a block
> > device. It's a layering violation, and many a filesystem developer
> > has been shouted at for trying to do this. At most we can peek
> > through it to query functionality support from the request queue,
> > but otherwise filesystems do not interact with anything under
> > bdev->bd_disk.
> 
> So let's add an API that allows the querying of badblocks by bdev and
> let the block core handle the bd_disk interaction. I see other block
> functionality like blk-integrity reaching through gendisk. The fs need
> not interact with the gendisk directly.

(I thought it was ok for block code to fiddle with other block
internals, and it's filesystems messing with block internals that was
prohibited?)

> > As it is, badblocks are used by devices to manage internal state.
> > e.g. md for recording stripes that need recovery if the system
> > crashes while they are being written out.
> 
> I know, I was there when it was invented which is why it was
> top-of-mind when pmem had a need to communicate badblocks. Other block
> drivers have threatened to use it for badblocks tracking, but none of
> those have carried through on that initial interest.

I hadn't realized that badblocks was bolted onto gendisk nowadays, I
mistakenly thought it was still something internal to md.

Looking over badblocks, I see a major drawback in that it can only
remember a single page's worth of badblocks records.

> > > If an upper level agent really cared about knowing about ->remove()
> > > events before they happened it could maybe do something like:
> > >
> > > dev = disk_to_dev(bdev->bd_disk)->parent;
> > > bus_register_notifier(dev->bus, &disk_host_device_notifier_block)
> >
> > Yeah, that's exactly the sort of thing that filesystems have been
> > aggressively discouraged from doing for years.
> 
> Yup, it's a layering violation.
> 
> > Part of the reason for this is that gendisk based mechanisms are not
> > very good for stacked device error reporting. Part of the problem
> > here is that every layer of the stacked device has to hook the
> > notifier of the block devices underneath it, then translate the
> > event to match the upper block device map, then regenerate the
> > notification for the next layer up. This isn't an efficient way to
> > pass a notification through a series of stacked devices and it is
> > messy and cumbersome to maintain.
> 
> It's been messy and cumbersome to route new infrastructure through DM
> every time a new dax_operation arrives. The corrupted_range() routing
> has the same burden. The advantage of badblocks over corrupted_range()
> is that it solves the "what If I miss a notification" problem. Each
> layer of the stack maintains its sector translation of the next level
> errors.

Oh.  Hum.  This changes my interpretation of what you're advocating.

If I'm understanding you correctly, I think you want to handle pmem
persistence errors (aka "I lost this cache line") by ... what?  The pmem
driver marks the appropriate range in the block_device/dax_device's
badblocks list, invalidates the page tables to force fs page faults, and
the next time the fs tries to access that pmem (either via bios or by
creating a direct map) the lower level storage driver will see the
badblocks entry and fail the IO / decline the mapping?
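
(Concretely, I imagine the "decline the mapping" half looking something like
the below -- a hedged sketch, not quoted from the pmem driver; the sector
math and the surrounding plumbing are illustrative only:)

#include <linux/badblocks.h>
#include <linux/blkdev.h>
#include <linux/pfn_t.h>

/* Refuse to hand out a direct mapping over a known-bad range. */
static long direct_access_sketch(struct badblocks *bb, sector_t sector,
				 long nr_pages, void **kaddr, pfn_t *pfn)
{
	sector_t first_bad;
	int num_bad;

	if (badblocks_check(bb, sector,
			    nr_pages << (PAGE_SHIFT - SECTOR_SHIFT),
			    &first_bad, &num_bad))
		return -EIO;	/* poisoned/offlined media */

	/* ...otherwise translate sector to kaddr/pfn as usual... */
	return nr_pages;
}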

<shrug> I dunno, does that even make sense?  I thought it was pretty
easy for the kernel to invalidate a mapping to force a page fault, since
we (xfs) do that to the regular page cache all the time.
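
(For reference, the familiar per-file pattern I mean is roughly the below;
the open question is what the equivalent looks like when the starting point
is a physical pmem range instead of an inode:)

#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Zap the PTEs over a file range, then toss the cached pages/entries,
 * so the next access has to fault and re-ask the filesystem.
 */
static void invalidate_file_range(struct inode *inode, loff_t start,
				  loff_t len)
{
	struct address_space *mapping = inode->i_mapping;

	unmap_mapping_range(mapping, start, len, 1);
	invalidate_inode_pages2_range(mapping, start >> PAGE_SHIFT,
				      (start + len - 1) >> PAGE_SHIFT);
}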

Assuming I understood that part correctly, why is it objectionable to
ask for the one extra step where pmem steps through the dax_device to
call the filesystem ->memory_failure handler?  There's no pmem-mapper
layer (yet) so making this piece happen should be relatively simple
since it doesn't require translating through multiple layers of dm,
right?

Also, does your mental model of storage device error reporting center
around lower layers setting badblocks ranges and then poking filesystems
to call down into badblocks to find out what's bad?  Versus lower layers
calling filesystems with the bad ranges directly?  Or are you trying to
omit as much fs involvement as possible?

(I'll address invalidating dax devices a little further down)

> > It can be effective for getting notifications to userspace about
> > something that happens to a specific block device.
> 
> No, it's not block device specific, it's stuck at the disk level. The
> user notification aspect was added for pmem at the disk layer because
> IIRC it was NAKd to add it to the block_device itself.
> 
> >
> > But The userspace
> > still ends up having to solve the "what does this error resolve to"
> > problem. i.e. Userspace still needs to map that notification to a
> > filesystem, and for data loss events map it to objects within the
> > filesystem, which can be extremely expensive to do from userspace.
> 
> Expensive and vulnerable to TOCTOU, this has been the motivation for
> filesystem native awareness of these errors from the beginning.
> 
> > This is exactly the sort of userspace error reporting mess that
> > various projects have asked us to try to fix. Plumbing errors
> > internally through the kernel up to the filesystem where the
> > filesytem can point directly to the user data that is affected is a
> > simple, effective solution to the problem. Especially if we then
> > have a generic error notification mechanism for filesystems to emit
> > errors to registered userspace watchers...
> 
> Agree, that's the dream worth pursuing.

(Agree, the error reporting story is still a mess.)

> >
> > > I still don't think that solves the need for a separate mechanism for
> > > global dax_device pte invalidation.
> >
> > It's just another type of media error because.....
> >
> > > I think that global dax_device invalidation needs new kernel
> > > infrastructure to allow internal users, like dm-writecache and future
> > > filesystems using dax for metadata, to take a fault when pmem is
> > > offlined.
> >
> > .... if userspace has directly mapped into the cache, and the cache
> > storage goes away, the userspace app has to be killed because we
> > have no idea if the device going away has caused data loss or not.
> > IOWs, if userspace writes direct to the cache device and it hasn't
> > been written back to other storage when it gets yanked, we have just
> > caused data corruption to occur.
> 
> If userspace has it direct mapped dirty in the cache when the remove
> fires, there is no opportunity to flush the cache. Just as there is no
> opportunity today with non-DAX and the page cache. The block-queue
> will be invalidated and any dirty in page cache is stranded.

So this is the "dax device invalidation" case that you also mention
below.  How differently would you handle this case from the persistence
error case I outlined above?  It sounds like in this case all the mm can
really do is invalidate the active page table mappings and set some
"totally offline" state in the dax/block_device badblocks so that all
future io requests are declined?

Do I understand that correctly?

If so, then I guess my next question is about the coordinated
pre-removal step that I think you mentioned in connection with something
named "CXL"?  If someone /requests/ the removal of a chunk of pmem,
would you propagate that request far enough up the storage chain so that
a mounted filesystem could reject the removal attempt?

> > At minimum, we now have to tell the filesystem that the dirty data
> > in the cache is now bad, and direct map applications that map those
> > dirty ranges need to be killed because their backing store is no
> > longer valid nor does the backup copy contain the data they last
> > wrote. Nor is it accessible by direct access, which is going to be
> > interesting because dynamically changing dax to non-dax access can't
> > be done without forcibly kicking the inode out of the cache. That
> > requires all references to the inode to go away. And that means the
> > event really has to go up to the filesystem.
> >
> > But I think the biggest piece of the puzzle that you haven't grokked
> > here is that the dm cache device isn't a linear map - it's made up of
> > random ranges from the underlying devices. Hence the "remove" of a dm
> > cache device turns into a huge number of small, sparse corrupt
> > ranges, not a single linear device remove event.
> 
> I am aware that DM is non-linear. The other non-linearity is sector-to-pfn.
> 
> > IOWs, device unplug/remove events are not just simple "pass it on"
> > events in a stacked storage setup. There can be non-trivial mappings
> > through the layers, and device disappearance may in fact manifest to
> > the user as data corruption rather than causing data to be
> > inaccessible.
> 
> Even MD does not rely on component device notifications for failure
> notifications, it waits for write-errors, and yes losing a component
> of a raid0 is more than a data offline event.
> 
> > Hence "remove" notifications just don't work in the storage stack.
> > They need to be translated to block ranges going bad (i.e.  media
> > errors), and reported to higher layers as bad ranges, not as device
> > removal.
> 
> Yes, the generic top-level remove event is pretty much useless for
> both the dax pte invalidation and lba range offline notification. I'm
> distinguishing that from knock on events that fire in response to
> ->remove() triggering on the disk driver which seems to be where you
> are at as well with the idea to trigger ->corrupted_range(0, EOD) from
> ->remove().
> 
> There's 2 ways to view the "filesystems have wanted proactive
> notification of remove events from storage for a long time". There's
> either enough pent up demand to convince all parties to come to the
> table and get something done, or there's too much momentum with the
> status quo to overcome.

Don't forget my cynical product manager view: "Here's a good opportunity
to get the basics of this revolutionary change plumbed in while upper
management is still hot enough about pmem to spend engineer time". :P

> I do not think it is fair to ask Ruan to solve a problem with brand
> new plumbing that the Linux storage community has not seen fit to
> address for a decade.

Nevertheless, he's more or less built it now.  Honestly I'm pleased to
see him pushing this forward exactly /because/ nobody has seen fit to
address this for so long.

The part where we plumb notifications upwards through the storage stack
is indeed revolutionary.  However, I /do/ think it's fair to ask Ruan to
make a revolutionary change as part of adapting to recent revolutionary
changes in storage hardware.

(At the very least I think it soul-crushing to toss out Ruan's work
now that he's at least gotten the proof of concept running... but Ruan
is in the best place to say that)

> Not when disk->bb is already plumbed without anyone complaining about
> it.

...or noticing it was there, as was the case here. :/

> > The same goes for DAX devices. The moment they can be placed in
> > storage stacks in non-trivial configurations and/or used as cache
> > devices that can be directly accessed over traditional block
> > devices, we end up with error conditions that can only be mapped as
> > ranges of blocks that have gone bad.
> 
> I see plumbing corrupted_range() and using it to communicate removal
> in addition to badblocks in addition to bad pfns as a revolutionary
> change. A reuse of disk->bb for communicating poison sector discovery
> events up the stack and a separate facility to invalidate dax devices
> as evolutionary. The evolutionary change does not preclude the
> eventual revolutionary change, but it has a better chance of making
> forward progress in the near term.

And I want both. :)

But I'll end this email here to make sure I've understood what you're
going for, Dan, before working on a reply.

Hopefully it doesn't take 2 days to roundtrip a reply email like the
last week of utter vger frustration. :(

--D
Xiaoguang Wang March 9, 2021, 6:36 a.m. UTC | #31
hi,

First thanks for your patchset.
I'd like to know whether your patchset passes fstests. Thanks.

Regards,
Xiaoguang Wang

> This patchset is attempt to add CoW support for fsdax, and take XFS,
> which has both reflink and fsdax feature, as an example.
> 
> Changes from V1:
>   - Factor some helper functions to simplify dax fault code
>   - Introduce iomap_apply2() for dax_dedupe_file_range_compare()
>   - Fix mistakes and other problems
>   - Rebased on v5.11
> 
> One of the key mechanism need to be implemented in fsdax is CoW.  Copy
> the data from srcmap before we actually write data to the destance
> iomap.  And we just copy range in which data won't be changed.
> 
> Another mechanism is range comparison.  In page cache case, readpage()
> is used to load data on disk to page cache in order to be able to
> compare data.  In fsdax case, readpage() does not work.  So, we need
> another compare data with direct access support.
> 
> With the two mechanism implemented in fsdax, we are able to make reflink
> and fsdax work together in XFS.
> 
> 
> Some of the patches are picked up from Goldwyn's patchset.  I made some
> changes to adapt to this patchset.
> 
> (Rebased on v5.11)
> ==
> 
> Shiyang Ruan (10):
>    fsdax: Factor helpers to simplify dax fault code
>    fsdax: Factor helper: dax_fault_actor()
>    fsdax: Output address in dax_iomap_pfn() and rename it
>    fsdax: Introduce dax_iomap_cow_copy()
>    fsdax: Replace mmap entry in case of CoW
>    fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero
>    iomap: Introduce iomap_apply2() for operations on two files
>    fsdax: Dedup file range to use a compare function
>    fs/xfs: Handle CoW for fsdax write() path
>    fs/xfs: Add dedupe support for fsdax
> 
>   fs/dax.c               | 532 +++++++++++++++++++++++++++--------------
>   fs/iomap/apply.c       |  51 ++++
>   fs/iomap/buffered-io.c |   2 +-
>   fs/remap_range.c       |  45 +++-
>   fs/xfs/xfs_bmap_util.c |   3 +-
>   fs/xfs/xfs_file.c      |  29 ++-
>   fs/xfs/xfs_inode.c     |   8 +-
>   fs/xfs/xfs_inode.h     |   1 +
>   fs/xfs/xfs_iomap.c     |  30 ++-
>   fs/xfs/xfs_iomap.h     |   1 +
>   fs/xfs/xfs_iops.c      |  11 +-
>   fs/xfs/xfs_reflink.c   |  16 +-
>   include/linux/dax.h    |   7 +-
>   include/linux/fs.h     |  15 +-
>   include/linux/iomap.h  |   7 +-
>   15 files changed, 550 insertions(+), 208 deletions(-)
>
Goldwyn Rodrigues March 9, 2021, 4:19 p.m. UTC | #32
Hi Shiang,

Thanks for picking up this work.

On  8:20 26/02, Shiyang Ruan wrote:
> This patchset is attempt to add CoW support for fsdax, and take XFS,
> which has both reflink and fsdax feature, as an example.

How does this work for a read sequence on two different files
mapped to the same extent, both residing in DAX?

If two different files read the same shared extent, which file
would the resultant page->mapping->host point to?

This problem is listed as a TODO over dax_associate_entry() and is
still not fixed.

<snip>
Shiyang Ruan March 10, 2021, 1:26 a.m. UTC | #33
>
> Hi Shiang,
> 
> Thanks for picking up this work.
> 
> On  8:20 26/02, Shiyang Ruan wrote:
> > This patchset is attempt to add CoW support for fsdax, and take XFS,
> > which has both reflink and fsdax feature, as an example.
> 
> How does this work for a read sequence on two different files
> mapped to the same extent, both residing in DAX?
> 
> If two different files read the same shared extent, which file
> would the resultant page->mapping->host point to?
> 
> This problem is listed as a TODO over dax_associate_entry() and is
> still not fixed.

I have posted another patchset, which I called "fix dax-rmap"[1]. It is an
attempt to solve this problem, but it is still under discussion for now.

[1] https://lkml.org/lkml/2021/2/8/347

--
Thanks,
Ruan Shiyang.

> 
> <snip>
> 
> --
> Goldwyn
Shiyang Ruan March 10, 2021, 1:32 a.m. UTC | #34
> hi,
> 
> First thanks for your patchset.
> I'd like to know whether your patchset passes fstests. Thanks.

It has been tested with xfstests using the quick and clone groups.  I have not tested it with other groups yet.


--
Thanks,
Ruan Shiyang.

> 
> Regards,
> Xiaoguang Wang
> 
> > This patchset is attempt to add CoW support for fsdax, and take XFS,
> > which has both reflink and fsdax feature, as an example.
> >
> > Changes from V1:
> >   - Factor some helper functions to simplify dax fault code
> >   - Introduce iomap_apply2() for dax_dedupe_file_range_compare()
> >   - Fix mistakes and other problems
> >   - Rebased on v5.11
> >
Neal Gompa March 10, 2021, 12:30 p.m. UTC | #35
On Thu, Feb 25, 2021 at 7:23 PM Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
>
> This patchset is attempt to add CoW support for fsdax, and take XFS,
> which has both reflink and fsdax feature, as an example.
>
> Changes from V1:
>  - Factor some helper functions to simplify dax fault code
>  - Introduce iomap_apply2() for dax_dedupe_file_range_compare()
>  - Fix mistakes and other problems
>  - Rebased on v5.11
>
> One of the key mechanism need to be implemented in fsdax is CoW.  Copy
> the data from srcmap before we actually write data to the destance
> iomap.  And we just copy range in which data won't be changed.
>
> Another mechanism is range comparison.  In page cache case, readpage()
> is used to load data on disk to page cache in order to be able to
> compare data.  In fsdax case, readpage() does not work.  So, we need
> another compare data with direct access support.
>
> With the two mechanism implemented in fsdax, we are able to make reflink
> and fsdax work together in XFS.
>
>
> Some of the patches are picked up from Goldwyn's patchset.  I made some
> changes to adapt to this patchset.
>
> (Rebased on v5.11)

Forgive my ignorance, but is there a reason why this isn't wired up to
Btrfs at the same time? It seems weird to me that adding a feature
like DAX to work with CoW filesystems is not being wired into *the*
CoW filesystem in the Linux kernel that fully takes advantage of
copy-on-write. I'm aware that XFS supports reflinks and does some
datacow stuff, but I don't know if I would consider XFS integration
sufficient for integrating this feature now, especially if it's
possible that the design might not work with Btrfs (I hadn't seen any
feedback from Btrfs developers, though given how much email there is
here, it's entirely possible that I missed it).
Matthew Wilcox March 10, 2021, 1:02 p.m. UTC | #36
On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> Forgive my ignorance, but is there a reason why this isn't wired up to
> Btrfs at the same time? It seems weird to me that adding a feature

btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.

If you think about it, btrfs and DAX are diametrically opposite things.
DAX is about giving raw access to the hardware.  btrfs is about offering
extra value (RAID, checksums, ...), none of which can be done if the
filesystem isn't in the read/write path.

That's why there's no DAX support in btrfs.  If you want DAX, you have
to give up all the features you like in btrfs.  So you may as well use
a different filesystem.
Neal Gompa March 10, 2021, 1:36 p.m. UTC | #37
On Wed, Mar 10, 2021 at 8:02 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > Forgive my ignorance, but is there a reason why this isn't wired up to
> > Btrfs at the same time? It seems weird to me that adding a feature
>
> btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
>
> If you think about it, btrfs and DAX are diametrically opposite things.
> DAX is about giving raw access to the hardware.  btrfs is about offering
> extra value (RAID, checksums, ...), none of which can be done if the
> filesystem isn't in the read/write path.
>
> That's why there's no DAX support in btrfs.  If you want DAX, you have
> to give up all the features you like in btrfs.  So you may as well use
> a different filesystem.

So does that mean that DAX is incompatible with those filesystems when
layered on DM (e.g. through LVM)?

Also, based on what you're saying, that means that DAX'd resources
would not be able to use reflinks on XFS, right? That'd put it in
similar territory as swap files on Btrfs, I would think.



--
真実はいつも一つ!/ Always, there's only one truth!
Matthew Wilcox March 10, 2021, 1:55 p.m. UTC | #38
On Wed, Mar 10, 2021 at 08:36:06AM -0500, Neal Gompa wrote:
> On Wed, Mar 10, 2021 at 8:02 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > > Forgive my ignorance, but is there a reason why this isn't wired up to
> > > Btrfs at the same time? It seems weird to me that adding a feature
> >
> > btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> >
> > If you think about it, btrfs and DAX are diametrically opposite things.
> > DAX is about giving raw access to the hardware.  btrfs is about offering
> > extra value (RAID, checksums, ...), none of which can be done if the
> > filesystem isn't in the read/write path.
> >
> > That's why there's no DAX support in btrfs.  If you want DAX, you have
> > to give up all the features you like in btrfs.  So you may as well use
> > a different filesystem.
> 
> So does that mean that DAX is incompatible with those filesystems when
> layered on DM (e.g. through LVM)?

Yes.  It might be possible to work through RAID-0 or read-only through
RAID-1, but I'm not sure anybody's bothered to do that work.

> Also, based on what you're saying, that means that DAX'd resources
> would not be able to use reflinks on XFS, right? That'd put it in
> similar territory as swap files on Btrfs, I would think.

You can use DAX with reflinks because the CPU can do read-only mmaps.
On a write fault, we break the reflink, copy the data and put in a
writable PTE.
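
Conceptually it's no more than this (names are illustrative, not the helpers
the patchset actually adds):

#include <linux/mm.h>
#include <linux/pfn_t.h>
#include <linux/string.h>

/*
 * Break a reflink on a DAX write fault: copy the still-shared data from
 * the source extent into a freshly allocated private extent, then map
 * the copy writable.
 */
static vm_fault_t dax_break_reflink_sketch(struct vm_fault *vmf,
					   void *src_kaddr, void *dst_kaddr,
					   pfn_t dst_pfn, size_t len)
{
	/* CoW: preserve the data the writer is not changing. */
	memcpy(dst_kaddr, src_kaddr, len);

	/* Install a writable PTE pointing at the private copy. */
	return vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, dst_pfn);
}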
Goldwyn Rodrigues March 10, 2021, 2:21 p.m. UTC | #39
On 13:02 10/03, Matthew Wilcox wrote:
> On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > Forgive my ignorance, but is there a reason why this isn't wired up to
> > Btrfs at the same time? It seems weird to me that adding a feature
> 
> btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> 
> If you think about it, btrfs and DAX are diametrically opposite things.
> DAX is about giving raw access to the hardware.  btrfs is about offering
> extra value (RAID, checksums, ...), none of which can be done if the
> filesystem isn't in the read/write path.
> 
> That's why there's no DAX support in btrfs.  If you want DAX, you have
> to give up all the features you like in btrfs.  So you may as well use
> a different filesystem.

DAX on btrfs has been attempted[1]. Of course, we could not
have checksums or multi-device with it. However, it got stuck on
associating a shared extent with the same page mapping: basically the
TODO above dax_associate_entry().

Shiyang has proposed a way to disassociate the existing mapping, but I
don't think that is the best solution. DAX for CoW will not work until
we have a way of mapping a page to multiple inodes (page->mapping),
which would convert the 1-N inode-page mapping into an M-N inode-page mapping.

[1] https://lore.kernel.org/linux-btrfs/20190429172649.8288-1-rgoldwyn@suse.de/
Matthew Wilcox March 10, 2021, 2:26 p.m. UTC | #40
On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> On 13:02 10/03, Matthew Wilcox wrote:
> > On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > > Forgive my ignorance, but is there a reason why this isn't wired up to
> > > Btrfs at the same time? It seems weird to me that adding a feature
> > 
> > btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> > 
> > If you think about it, btrfs and DAX are diametrically opposite things.
> > DAX is about giving raw access to the hardware.  btrfs is about offering
> > extra value (RAID, checksums, ...), none of which can be done if the
> > filesystem isn't in the read/write path.
> > 
> > That's why there's no DAX support in btrfs.  If you want DAX, you have
> > to give up all the features you like in btrfs.  So you may as well use
> > a different filesystem.
> 
> DAX on btrfs has been attempted[1]. Of course, we could not

But why?  A completeness fetish?  I don't understand why you decided
to do this work.

> have checksums or multi-device with it. However, it got stuck on
> associating a shared extent with the same page mapping: basically the
> TODO above dax_associate_entry().
> 
> Shiyang has proposed a way to disassociate the existing mapping, but I
> don't think that is the best solution. DAX for CoW will not work until
> we have a way of mapping a page to multiple inodes (page->mapping),
> which would convert the 1-N inode-page mapping into an M-N inode-page mapping.

If you're still thinking in terms of pages, you're doing DAX wrong.
DAX should work without a struct page.
Goldwyn Rodrigues March 10, 2021, 5:04 p.m. UTC | #41
On 14:26 10/03, Matthew Wilcox wrote:
> On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > On 13:02 10/03, Matthew Wilcox wrote:
> > > On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > > > Forgive my ignorance, but is there a reason why this isn't wired up to
> > > > Btrfs at the same time? It seems weird to me that adding a feature
> > > 
> > > btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> > > 
> > > If you think about it, btrfs and DAX are diametrically opposite things.
> > > DAX is about giving raw access to the hardware.  btrfs is about offering
> > > extra value (RAID, checksums, ...), none of which can be done if the
> > > filesystem isn't in the read/write path.
> > > 
> > > That's why there's no DAX support in btrfs.  If you want DAX, you have
> > > to give up all the features you like in btrfs.  So you may as well use
> > > a different filesystem.
> > 
> > DAX on btrfs has been attempted[1]. Of course, we could not
> 
> But why?  A completeness fetish?  I don't understand why you decided
> to do this work.

If only I had a penny every time I heard "why would you want to do that?"

> 
> > have checksums or multi-device with it. However, it got stuck on
> > associating a shared extent with the same page mapping: basically the
> > TODO above dax_associate_entry().
> > 
> > Shiyang has proposed a way to disassociate the existing mapping, but I
> > don't think that is the best solution. DAX for CoW will not work until
> > we have a way of mapping a page to multiple inodes (page->mapping),
> > which would convert the 1-N inode-page mapping into an M-N inode-page mapping.
> 
> If you're still thinking in terms of pages, you're doing DAX wrong.
> DAX should work without a struct page.

Not pages specifically, but mappings.
fsdax needs the mappings during the page fault, and it breaks if both
files fault on the same shared extent.

For reference: WARN_ON_ONCE(page->mapping && page->mapping != mapping)
in dax_disassociate_entry().
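
To illustrate the constraint, here is a tiny self-contained toy in plain
userspace C (not kernel code; every name below is made up) that mimics
the single-owner page->mapping back-pointer and shows why the second
file's association of the same pmem page collides with the first:

#include <stdio.h>

struct toy_address_space { const char *inode_name; };

struct toy_page {
	struct toy_address_space *mapping;  /* room for exactly ONE owner */
	unsigned long index;
};

/* Mirrors the spirit of dax_associate_entry(): claim the page for a file. */
static void toy_associate(struct toy_page *page,
			  struct toy_address_space *as, unsigned long index)
{
	if (page->mapping && page->mapping != as)
		fprintf(stderr, "WARN: page already owned by %s\n",
			page->mapping->inode_name);
	page->mapping = as;
	page->index = index;
}

int main(void)
{
	struct toy_address_space file_a = { "file A" };
	struct toy_address_space file_b = { "file B" };
	struct toy_page pmem_page = { 0 };

	/* Two files reflink the same extent, so both fault on the same
	 * pmem page; the second association clobbers the first, and a
	 * later disassociation from file A trips the same kind of check. */
	toy_associate(&pmem_page, &file_a, 0);
	toy_associate(&pmem_page, &file_b, 0);

	printf("page now claims to belong to: %s\n",
	       pmem_page.mapping->inode_name);
	return 0;
}

An M-N scheme would have to replace that single back-pointer with
something that can enumerate every (inode, offset) pair sharing the page.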
Dan Williams March 11, 2021, 12:53 a.m. UTC | #42
On Wed, Mar 10, 2021 at 6:27 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > On 13:02 10/03, Matthew Wilcox wrote:
> > > On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > > > Forgive my ignorance, but is there a reason why this isn't wired up to
> > > > Btrfs at the same time? It seems weird to me that adding a feature
> > >
> > > btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> > >
> > > If you think about it, btrfs and DAX are diametrically opposite things.
> > > DAX is about giving raw access to the hardware.  btrfs is about offering
> > > extra value (RAID, checksums, ...), none of which can be done if the
> > > filesystem isn't in the read/write path.
> > >
> > > That's why there's no DAX support in btrfs.  If you want DAX, you have
> > > to give up all the features you like in btrfs.  So you may as well use
> > > a different filesystem.
> >
> > DAX on btrfs has been attempted[1]. Of course, we could not
>
> But why?  A completeness fetish?  I don't understand why you decided
> to do this work.

Isn't DAX useful for pagecache minimization on read even if it is
awkward for a copy-on-write fs?

Seems it would be a useful case to have COW'd VM images on BTRFS that
don't need superfluous page cache allocations.
Neal Gompa March 11, 2021, 8:26 a.m. UTC | #43
On Wed, Mar 10, 2021 at 7:53 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Mar 10, 2021 at 6:27 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > > On 13:02 10/03, Matthew Wilcox wrote:
> > > > On Wed, Mar 10, 2021 at 07:30:41AM -0500, Neal Gompa wrote:
> > > > > Forgive my ignorance, but is there a reason why this isn't wired up to
> > > > > Btrfs at the same time? It seems weird to me that adding a feature
> > > >
> > > > btrfs doesn't support DAX.  only ext2, ext4, XFS and FUSE have DAX support.
> > > >
> > > > If you think about it, btrfs and DAX are diametrically opposite things.
> > > > DAX is about giving raw access to the hardware.  btrfs is about offering
> > > > extra value (RAID, checksums, ...), none of which can be done if the
> > > > filesystem isn't in the read/write path.
> > > >
> > > > That's why there's no DAX support in btrfs.  If you want DAX, you have
> > > > to give up all the features you like in btrfs.  So you may as well use
> > > > a different filesystem.
> > >
> > > DAX on btrfs has been attempted[1]. Of course, we could not
> >
> > But why?  A completeness fetish?  I don't understand why you decided
> > to do this work.
>
> Isn't DAX useful for pagecache minimization on read even if it is
> awkward for a copy-on-write fs?
>
> Seems it would be a useful case to have COW'd VM images on BTRFS that
> don't need superfluous page cache allocations.

I could also see this being useful for databases (and maybe even swap
files!) on Btrfs, if I'm understanding this feature correctly.
Adam Borowski March 13, 2021, 1:07 p.m. UTC | #44
On Wed, Mar 10, 2021 at 02:26:43PM +0000, Matthew Wilcox wrote:
> On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > DAX on btrfs has been attempted[1]. Of course, we could not
> 
> But why?  A completeness fetish?  I don't understand why you decided
> to do this work.

* xfs can snapshot only single files, btrfs entire subvolumes
* btrfs-send|receive
* enumeration of changed parts of a file


Meow!
Neal Gompa March 13, 2021, 4:24 p.m. UTC | #45
On Sat, Mar 13, 2021 at 8:09 AM Adam Borowski <kilobyte@angband.pl> wrote:
>
> On Wed, Mar 10, 2021 at 02:26:43PM +0000, Matthew Wilcox wrote:
> > On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > > DAX on btrfs has been attempted[1]. Of course, we could not
> >
> > But why?  A completeness fetish?  I don't understand why you decided
> > to do this work.
>
> * xfs can snapshot only single files, btrfs entire subvolumes
> * btrfs-send|receive
> * enumeration of changed parts of a file
>

XFS cannot do snapshots since it lacks metadata COW. XFS reflinking is
primarily for space efficiency.
Adam Borowski March 13, 2021, 10 p.m. UTC | #46
On Sat, Mar 13, 2021 at 11:24:00AM -0500, Neal Gompa wrote:
> On Sat, Mar 13, 2021 at 8:09 AM Adam Borowski <kilobyte@angband.pl> wrote:
> >
> > On Wed, Mar 10, 2021 at 02:26:43PM +0000, Matthew Wilcox wrote:
> > > On Wed, Mar 10, 2021 at 08:21:59AM -0600, Goldwyn Rodrigues wrote:
> > > > DAX on btrfs has been attempted[1]. Of course, we could not
> > >
> > > But why?  A completeness fetish?  I don't understand why you decided
> > > to do this work.
> >
> > * xfs can snapshot only single files, btrfs entire subvolumes
> > * btrfs-send|receive
> > * enumeration of changed parts of a file
> 
> XFS cannot do snapshots since it lacks metadata COW. XFS reflinking is
> primarily for space efficiency.

A reflink is a single-file snapshot.

My work team really wants this very patchset -- reflinks on DAX allow
backups and/or checkpointing without stopping the world (the workload
here is a single file, "pool").

Besides, you can still get poor man's whole-subvolume (or directory)
snapshots by manually walking the tree and reflinking everything.
That's not atomic -- but rsync isn't atomic either.  That's enough for
e.g. dnf/dpkg purposes.
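
For what it's worth, here is a rough sketch of that walk (untested,
minimal error handling; the source and snapshot paths are made up, and
both trees must live on the same reflink-capable filesystem).  It uses
nftw() to visit the tree and the FICLONE ioctl to reflink each regular
file:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <ftw.h>
#include <limits.h>
#include <linux/fs.h>      /* FICLONE */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/stat.h>
#include <unistd.h>

static const char *src_root = "/mnt/pmem/data";   /* made-up paths */
static const char *dst_root = "/mnt/pmem/snap";

static int clone_one(const char *path, const struct stat *sb,
		     int type, struct FTW *ftwbuf)
{
	char dst[PATH_MAX];

	(void)ftwbuf;
	snprintf(dst, sizeof(dst), "%s%s", dst_root, path + strlen(src_root));

	if (type == FTW_D) {
		/* nftw() visits a directory before its contents, so the
		 * destination directory exists before files are cloned
		 * into it. */
		if (mkdir(dst, sb->st_mode & 0777) && errno != EEXIST)
			return -1;
		return 0;
	}
	if (type != FTW_F)
		return 0;	/* skip symlinks, devices, ... */

	int in = open(path, O_RDONLY);
	int out = open(dst, O_CREAT | O_WRONLY | O_TRUNC, sb->st_mode & 0777);
	int ret = (in < 0 || out < 0) ? -1 : ioctl(out, FICLONE, in);

	if (ret)
		perror(path);
	if (in >= 0)
		close(in);
	if (out >= 0)
		close(out);
	return ret ? -1 : 0;
}

int main(void)
{
	mkdir(dst_root, 0755);
	return nftw(src_root, clone_one, 16, FTW_PHYS) ? 1 : 0;
}

A real tool would also want to carry over ownership, timestamps and
xattrs, but for dnf/dpkg-style rollback the reflinked data is the
interesting part.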


Meow!