[0/3] dax: clear poison on the fly along pwrite

Message ID: 20210914233132.3680546-1-jane.chu@oracle.com

Jane Chu Sept. 14, 2021, 11:31 p.m. UTC
If pwrite(2) encounters poison in a pmem range, it fails with EIO.
This is unnecessary if the hardware is capable of clearing the poison.

Though not all dax backend hardware can clear poison on the fly, dax
backed by Intel DCPMEM has that capability, and it is desirable to use
it: first, to speed up repair; second, to maintain backend continuity
instead of fragmenting it in search of clean blocks.

Jane Chu (3):
  dax: introduce dax_operation dax_clear_poison
  dax: introduce dax_clear_poison to dax pwrite operation
  libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation

 drivers/dax/super.c   | 13 +++++++++++++
 drivers/nvdimm/pmem.c | 17 +++++++++++++++++
 fs/dax.c              |  9 +++++++++
 include/linux/dax.h   |  6 ++++++
 4 files changed, 45 insertions(+)

Comments

Dan Williams Sept. 15, 2021, 4:44 a.m. UTC | #1
On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
>
> If pwrite(2) encounters poison in a pmem range, it fails with EIO.
> This is unecessary if hardware is capable of clearing the poison.
>
> Though not all dax backend hardware has the capability of clearing
> poison on the fly, but dax backed by Intel DCPMEM has such capability,
> and it's desirable to, first, speed up repairing by means of it;
> second, maintain backend continuity instead of fragmenting it in
> search for clean blocks.
>
> Jane Chu (3):
>   dax: introduce dax_operation dax_clear_poison

The problem with new dax operations is that they need to be plumbed
not only through fsdax and pmem, but also through device-mapper.

In this case I think we're already covered by dax_zero_page_range().
That will ultimately trigger pmem_clear_poison() and it is routed
through device-mapper properly.

Can you clarify why the existing dax_zero_page_range() is not sufficient?

>   dax: introduce dax_clear_poison to dax pwrite operation
>   libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation
>
>  drivers/dax/super.c   | 13 +++++++++++++
>  drivers/nvdimm/pmem.c | 17 +++++++++++++++++
>  fs/dax.c              |  9 +++++++++
>  include/linux/dax.h   |  6 ++++++
>  4 files changed, 45 insertions(+)
>
> --
> 2.18.4
>
Jane Chu Sept. 15, 2021, 7:22 a.m. UTC | #2
Hi, Dan,

On 9/14/2021 9:44 PM, Dan Williams wrote:
> On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
>>
>> If pwrite(2) encounters poison in a pmem range, it fails with EIO.
>> This is unecessary if hardware is capable of clearing the poison.
>>
>> Though not all dax backend hardware has the capability of clearing
>> poison on the fly, but dax backed by Intel DCPMEM has such capability,
>> and it's desirable to, first, speed up repairing by means of it;
>> second, maintain backend continuity instead of fragmenting it in
>> search for clean blocks.
>>
>> Jane Chu (3):
>>    dax: introduce dax_operation dax_clear_poison
> 
> The problem with new dax operations is that they need to be plumbed
> not only through fsdax and pmem, but also through device-mapper.
> 
> In this case I think we're already covered by dax_zero_page_range().
> That will ultimately trigger pmem_clear_poison() and it is routed
> through device-mapper properly.
> 
> Can you clarify why the existing dax_zero_page_range() is not sufficient?

fallocate ZERO_RANGE is in itself a functionality that, applied to dax,
should zero out the media range.  So one may argue it is part of the
block operations, and not something explicitly aimed at clearing
poison.  I'm also thinking about the MOVDIR64B instruction and how it
might be used to clear poison on the fly with a single 'store'.
Of course, that means we need to figure out how to narrow down the
error blast radius first.

With respect to plumbing through device-mapper, I thought about that,
and wasn't sure.  The clear-poison work will eventually fall on the
pmem driver, so how does that play out through the DM layers?  BTW, our
customer doesn't care about creating dax volumes through DM.

thanks!
-jane


> 
>>    dax: introduce dax_clear_poison to dax pwrite operation
>>    libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation
>>
>>   drivers/dax/super.c   | 13 +++++++++++++
>>   drivers/nvdimm/pmem.c | 17 +++++++++++++++++
>>   fs/dax.c              |  9 +++++++++
>>   include/linux/dax.h   |  6 ++++++
>>   4 files changed, 45 insertions(+)
>>
>> --
>> 2.18.4
>>
Darrick J. Wong Sept. 15, 2021, 4:15 p.m. UTC | #3
On Wed, Sep 15, 2021 at 12:22:05AM -0700, Jane Chu wrote:
> Hi, Dan,
> 
> On 9/14/2021 9:44 PM, Dan Williams wrote:
> > On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
> > > 
> > > If pwrite(2) encounters poison in a pmem range, it fails with EIO.
> > > This is unecessary if hardware is capable of clearing the poison.
> > > 
> > > Though not all dax backend hardware has the capability of clearing
> > > poison on the fly, but dax backed by Intel DCPMEM has such capability,
> > > and it's desirable to, first, speed up repairing by means of it;
> > > second, maintain backend continuity instead of fragmenting it in
> > > search for clean blocks.
> > > 
> > > Jane Chu (3):
> > >    dax: introduce dax_operation dax_clear_poison
> > 
> > The problem with new dax operations is that they need to be plumbed
> > not only through fsdax and pmem, but also through device-mapper.
> > 
> > In this case I think we're already covered by dax_zero_page_range().
> > That will ultimately trigger pmem_clear_poison() and it is routed
> > through device-mapper properly.
> > 
> > Can you clarify why the existing dax_zero_page_range() is not sufficient?
> 
> fallocate ZERO_RANGE is in itself a functionality that applied to dax
> should lead to zero out the media range.  So one may argue it is part
> of a block operations, and not something explicitly aimed at clearing
> poison.

Yeah, Christoph suggested that we make the clearing operation explicit
in a related thread a few weeks ago:
https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/

I like Jane's patchset far better than the one that I sent, because it
doesn't require a block device wrapper for the pmem, and it enables us
to tell application writers that they can handle media errors by
pwrite()ing the bad region, just like they do for nvme and spinners.

> I'm also thinking about the MOVEDIR64B instruction and how it
> might be used to clear poison on the fly with a single 'store'.
> Of course, that means we need to figure out how to narrow down the
> error blast radius first.

That was one of the advantages of Shiyang Ruan's NAKed patchset to
enable byte-granularity media errors to pass upwards through the stack
back to the filesystem, which could then tell applications exactly what
they lost.

I want to get back to that, though if Dan won't withdraw the NAK then I
don't know how to move forward...

> With respect to plumbing through device-mapper, I thought about that,
> and wasn't sure. I mean the clear-poison work will eventually fall on
> the pmem driver, and thru the DM layers, how does that play out thru
> DM?

Each of the dm drivers has to add their own ->clear_poison operation
that remaps the incoming (sector, len) parameters as appropriate for
that device and then calls the lower device's ->clear_poison with the
translated parameters.

This (AFAICT) has already been done for dax_zero_page_range, so I sense
that Dan is trying to save you a bunch of code plumbing work by nudging
you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
and then you only need patches 2-3.

> BTW, our customer doesn't care about creating dax volume thru DM, so.

They might not care, but anything going upstream should work in the
general case.

--D

> thanks!
> -jane
> 
> 
> > 
> > >    dax: introduce dax_clear_poison to dax pwrite operation
> > >    libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation
> > > 
> > >   drivers/dax/super.c   | 13 +++++++++++++
> > >   drivers/nvdimm/pmem.c | 17 +++++++++++++++++
> > >   fs/dax.c              |  9 +++++++++
> > >   include/linux/dax.h   |  6 ++++++
> > >   4 files changed, 45 insertions(+)
> > > 
> > > --
> > > 2.18.4
> > >
Dan Williams Sept. 15, 2021, 8:27 p.m. UTC | #4
On Wed, Sep 15, 2021 at 9:15 AM Darrick J. Wong <djwong@kernel.org> wrote:
>
> On Wed, Sep 15, 2021 at 12:22:05AM -0700, Jane Chu wrote:
> > Hi, Dan,
> >
> > On 9/14/2021 9:44 PM, Dan Williams wrote:
> > > On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
> > > >
> > > > If pwrite(2) encounters poison in a pmem range, it fails with EIO.
> > > > This is unecessary if hardware is capable of clearing the poison.
> > > >
> > > > Though not all dax backend hardware has the capability of clearing
> > > > poison on the fly, but dax backed by Intel DCPMEM has such capability,
> > > > and it's desirable to, first, speed up repairing by means of it;
> > > > second, maintain backend continuity instead of fragmenting it in
> > > > search for clean blocks.
> > > >
> > > > Jane Chu (3):
> > > >    dax: introduce dax_operation dax_clear_poison
> > >
> > > The problem with new dax operations is that they need to be plumbed
> > > not only through fsdax and pmem, but also through device-mapper.
> > >
> > > In this case I think we're already covered by dax_zero_page_range().
> > > That will ultimately trigger pmem_clear_poison() and it is routed
> > > through device-mapper properly.
> > >
> > > Can you clarify why the existing dax_zero_page_range() is not sufficient?
> >
> > fallocate ZERO_RANGE is in itself a functionality that applied to dax
> > should lead to zero out the media range.  So one may argue it is part
> > of a block operations, and not something explicitly aimed at clearing
> > poison.
>
> Yeah, Christoph suggested that we make the clearing operation explicit
> in a related thread a few weeks ago:
> https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/

That seemed to be tied to a proposal to plumb it all the way out to an
explicit fallocate() mode, not make it a silent side effect of
pwrite(). That said, pwrite() does clear errors on hard drives in the
non-DAX case, but I like the change in direction to make it explicit
going forward.

> I like Jane's patchset far better than the one that I sent, because it
> doesn't require a block device wrapper for the pmem, and it enables us
> to tell application writers that they can handle media errors by
> pwrite()ing the bad region, just like they do for nvme and spinners.

pwrite(), hmm, so you're not onboard with the explicit clearing API
proposal, or...?

> > I'm also thinking about the MOVEDIR64B instruction and how it
> > might be used to clear poison on the fly with a single 'store'.
> > Of course, that means we need to figure out how to narrow down the
> > error blast radius first.

It turns out the MOVDIR64B error clearing idea runs into a problem with
the device's poison tracking. Without explicit notification that
software wanted the error cleared, the device may ghost-report errors
that are not there anymore. I think we should continue with explicit
error clearing and notification of the device that the error has been
cleared (by asking the device to clear it).

> That was one of the advantages of Shiyang Ruan's NAKed patchset to
> enable byte-granularity media errors

...the method of triggering reverse mapping had review feedback, I
apologize if that came across as a NAK of the whole proposal. As I
clarified to Eric this morning, I think the solution is iterating
towards upstream inclusion.

> to pass upwards through the stack
> back to the filesystem, which could then tell applications exactly what
> they lost.
>
> I want to get back to that, though if Dan won't withdraw the NAK then I
> don't know how to move forward...

No NAK in place. Let's go!

>
> > With respect to plumbing through device-mapper, I thought about that,
> > and wasn't sure. I mean the clear-poison work will eventually fall on
> > the pmem driver, and thru the DM layers, how does that play out thru
> > DM?
>
> Each of the dm drivers has to add their own ->clear_poison operation
> that remaps the incoming (sector, len) parameters as appropriate for
> that device and then calls the lower device's ->clear_poison with the
> translated parameters.
>
> This (AFAICT) has already been done for dax_zero_page_range, so I sense
> that Dan is trying to save you a bunch of code plumbing work by nudging
> you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> and then you only need patches 2-3.

Yes, but it sounds like Christoph was saying don't overload
dax_zero_page_range(). I'd be ok splitting the difference and having a
new fallocate clear poison mode map to dax_zero_page_range()
internally.

>
> > BTW, our customer doesn't care about creating dax volume thru DM, so.
>
> They might not care, but anything going upstream should work in the
> general case.

Agree.
Darrick J. Wong Sept. 16, 2021, 12:05 a.m. UTC | #5
On Wed, Sep 15, 2021 at 01:27:47PM -0700, Dan Williams wrote:
> On Wed, Sep 15, 2021 at 9:15 AM Darrick J. Wong <djwong@kernel.org> wrote:
> >
> > On Wed, Sep 15, 2021 at 12:22:05AM -0700, Jane Chu wrote:
> > > Hi, Dan,
> > >
> > > On 9/14/2021 9:44 PM, Dan Williams wrote:
> > > > On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
> > > > >
> > > > > If pwrite(2) encounters poison in a pmem range, it fails with EIO.
> > > > > This is unecessary if hardware is capable of clearing the poison.
> > > > >
> > > > > Though not all dax backend hardware has the capability of clearing
> > > > > poison on the fly, but dax backed by Intel DCPMEM has such capability,
> > > > > and it's desirable to, first, speed up repairing by means of it;
> > > > > second, maintain backend continuity instead of fragmenting it in
> > > > > search for clean blocks.
> > > > >
> > > > > Jane Chu (3):
> > > > >    dax: introduce dax_operation dax_clear_poison
> > > >
> > > > The problem with new dax operations is that they need to be plumbed
> > > > not only through fsdax and pmem, but also through device-mapper.
> > > >
> > > > In this case I think we're already covered by dax_zero_page_range().
> > > > That will ultimately trigger pmem_clear_poison() and it is routed
> > > > through device-mapper properly.
> > > >
> > > > Can you clarify why the existing dax_zero_page_range() is not sufficient?
> > >
> > > fallocate ZERO_RANGE is in itself a functionality that applied to dax
> > > should lead to zero out the media range.  So one may argue it is part
> > > of a block operations, and not something explicitly aimed at clearing
> > > poison.
> >
> > Yeah, Christoph suggested that we make the clearing operation explicit
> > in a related thread a few weeks ago:
> > https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/
> 
> That seemed to be tied to a proposal to plumb it all the way out to an
> explicit fallocate() mode, not make it a silent side effect of
> pwrite(). That said pwrite() does clear errors in hard drives in
> not-DAX mode, but I like the change in direction to make it explicit
> going forward.
> 
> > I like Jane's patchset far better than the one that I sent, because it
> > doesn't require a block device wrapper for the pmem, and it enables us
> > to tell application writers that they can handle media errors by
> > pwrite()ing the bad region, just like they do for nvme and spinners.
> 
> pwrite(), hmm, so you're not onboard with the explicit clearing API
> proposal, or...?

I don't really care either way.  I was going to send a reworked version
of that earlier patchset which would add an explicit fallocate mode and
make it work on regular block storage too, but then Jane sent this. :)

Hmm, maybe I should rework my patchset to call dax_zero_page_range
directly...?

> > > I'm also thinking about the MOVEDIR64B instruction and how it
> > > might be used to clear poison on the fly with a single 'store'.
> > > Of course, that means we need to figure out how to narrow down the
> > > error blast radius first.
> 
> It turns out the MOVDIR64B error clearing idea runs into problem with
> the device poison tracking. Without the explicit notification that
> software wanted the error cleared the device may ghost report errors
> that are not there anymore. I think we should continue explicit error
> clearing and notification of the device that the error has been
> cleared (by asking the device to clear it).

If the poison clearing is entirely OOB (i.e. you have to call ACPI
methods) and can't be made part of the memory controller, then I guess
you can't use movdir64b at all, right?

> > That was one of the advantages of Shiyang Ruan's NAKed patchset to
> > enable byte-granularity media errors
> 
> ...the method of triggering reverse mapping had review feedback, I
> apologize if that came across of a NAK of the whole proposal. As I
> clarified to Eric this morning, I think the solution is iterating
> towards upstream inclusion.
> 
> > to pass upwards through the stack
> > back to the filesystem, which could then tell applications exactly what
> > they lost.
> >
> > I want to get back to that, though if Dan won't withdraw the NAK then I
> > don't know how to move forward...
> 
> No NAK in place. Let's go!

Ok, thanks.  I'll start looking through Shiyang's patches tomorrow.

> 
> >
> > > With respect to plumbing through device-mapper, I thought about that,
> > > and wasn't sure. I mean the clear-poison work will eventually fall on
> > > the pmem driver, and thru the DM layers, how does that play out thru
> > > DM?
> >
> > Each of the dm drivers has to add their own ->clear_poison operation
> > that remaps the incoming (sector, len) parameters as appropriate for
> > that device and then calls the lower device's ->clear_poison with the
> > translated parameters.
> >
> > This (AFAICT) has already been done for dax_zero_page_range, so I sense
> > that Dan is trying to save you a bunch of code plumbing work by nudging
> > you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> > and then you only need patches 2-3.
> 
> Yes, but it sounds like Christoph was saying don't overload
> dax_zero_page_range(). I'd be ok splitting the difference and having a
> new fallocate clear poison mode map to dax_zero_page_range()
> internally.

Ok.

--D

> >
> > > BTW, our customer doesn't care about creating dax volume thru DM, so.
> >
> > They might not care, but anything going upstream should work in the
> > general case.
> 
> Agree.
Christoph Hellwig Sept. 16, 2021, 7:11 a.m. UTC | #6
On Wed, Sep 15, 2021 at 01:27:47PM -0700, Dan Williams wrote:
> > Yeah, Christoph suggested that we make the clearing operation explicit
> > in a related thread a few weeks ago:
> > https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/
> 
> That seemed to be tied to a proposal to plumb it all the way out to an
> explicit fallocate() mode, not make it a silent side effect of
> pwrite().

Yes.

> >
> > Each of the dm drivers has to add their own ->clear_poison operation
> > that remaps the incoming (sector, len) parameters as appropriate for
> > that device and then calls the lower device's ->clear_poison with the
> > translated parameters.
> >
> > This (AFAICT) has already been done for dax_zero_page_range, so I sense
> > that Dan is trying to save you a bunch of code plumbing work by nudging
> > you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> > and then you only need patches 2-3.
> 
> Yes, but it sounds like Christoph was saying don't overload
> dax_zero_page_range(). I'd be ok splitting the difference and having a
> new fallocate clear poison mode map to dax_zero_page_range()
> internally.

That was my gut feeling.  If everyone feels 100% comfortable with
zeroing as the mechanism to clear poisoning I'll cave in.  The most
important bit is that we do that through a dedicated DAX path instead
of abusing the block layer even more.

> 
> >
> > > BTW, our customer doesn't care about creating dax volume thru DM, so.
> >
> > They might not care, but anything going upstream should work in the
> > general case.
> 
> Agree.

I'm really worried about both partitions on DAX and DM passing through
DAX because they deeply bind DAX to the block layer, which is just a bad
idea.  I think we also need to sort that whole story out before removing
the EXPERIMENTAL tags.
Dan Williams Sept. 16, 2021, 6:40 p.m. UTC | #7
On Thu, Sep 16, 2021 at 12:12 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Wed, Sep 15, 2021 at 01:27:47PM -0700, Dan Williams wrote:
> > > Yeah, Christoph suggested that we make the clearing operation explicit
> > > in a related thread a few weeks ago:
> > > https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/
> >
> > That seemed to be tied to a proposal to plumb it all the way out to an
> > explicit fallocate() mode, not make it a silent side effect of
> > pwrite().
>
> Yes.
>
> > >
> > > Each of the dm drivers has to add their own ->clear_poison operation
> > > that remaps the incoming (sector, len) parameters as appropriate for
> > > that device and then calls the lower device's ->clear_poison with the
> > > translated parameters.
> > >
> > > This (AFAICT) has already been done for dax_zero_page_range, so I sense
> > > that Dan is trying to save you a bunch of code plumbing work by nudging
> > > you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> > > and then you only need patches 2-3.
> >
> > Yes, but it sounds like Christoph was saying don't overload
> > dax_zero_page_range(). I'd be ok splitting the difference and having a
> > new fallocate clear poison mode map to dax_zero_page_range()
> > internally.
>
> That was my gut feeling.  If everyone feels 100% comfortable with
> zeroingas the mechanism to clear poisoning I'll cave in.  The most
> important bit is that we do that through a dedicated DAX path instead
> of abusing the block layer even more.

...or just rename dax_zero_page_range() to dax_reset_page_range()?
Where reset == "zero + clear-poison"?

> > > > BTW, our customer doesn't care about creating dax volume thru DM, so.
> > >
> > > They might not care, but anything going upstream should work in the
> > > general case.
> >
> > Agree.
>
> I'm really worried about both patartitions on DAX and DM passing through
> DAX because they deeply bind DAX to the block layer, which is just a bad
> idea.  I think we also need to sort that whole story out before removing
> the EXPERIMENTAL tags.

I do think it was a mistake to allow for DAX on partitions of a pmemX
block-device.

DAX-reflink support may be the opportunity to start deprecating that
support: only enable DAX-reflink for direct mounting on /dev/pmemX
without partitions (later add dax-device direct mounting); change the
DAX-experimental warning to a deprecation notice for DAX on
DM/partitions; continue to fail / never fix DAX-reflink for
DM/partitions; direct people to use namespace provisioning for
sub-division of PMEM capacity; and finally look into adding
concatenation and additional software striping support to the new CXL
region creation facility.
Christoph Hellwig Sept. 17, 2021, 12:53 p.m. UTC | #8
On Thu, Sep 16, 2021 at 11:40:28AM -0700, Dan Williams wrote:
> > That was my gut feeling.  If everyone feels 100% comfortable with
> > zeroingas the mechanism to clear poisoning I'll cave in.  The most
> > important bit is that we do that through a dedicated DAX path instead
> > of abusing the block layer even more.
> 
> ...or just rename dax_zero_page_range() to dax_reset_page_range()?
> Where reset == "zero + clear-poison"?

I'd say that naming is more confusing than overloading zero.

> > I'm really worried about both patartitions on DAX and DM passing through
> > DAX because they deeply bind DAX to the block layer, which is just a bad
> > idea.  I think we also need to sort that whole story out before removing
> > the EXPERIMENTAL tags.
> 
> I do think it was a mistake to allow for DAX on partitions of a pmemX
> block-device.
> 
> DAX-reflink support may be the opportunity to start deprecating that
> support. Only enable DAX-reflink for direct mounting on /dev/pmemX
> without partitions (later add dax-device direct mounting),

I think we need to fully or almost fully sort this out.

Here is my bold suggestions:

 1) drop no drop the EXPERMINTAL on the current block layer overload
    at all
 2) add direct mounting of the nvdimm namespaces ASAP.  Because all
    the filesystems currently also need the /dev/pmem0 device, add a
    way to open the block device from the dax_device instead of our
    current way of doing the reverse
 3) deprecate DAX support through block layer mounts with a say 2 year
    deprecation period
 4) add DAX remapping devices as needed

I'll volunteer to write the initial code for 2).  And I think we should
not allow DAX+reflink on the block device shim at all.
Darrick J. Wong Sept. 17, 2021, 3:27 p.m. UTC | #9
On Fri, Sep 17, 2021 at 01:53:33PM +0100, Christoph Hellwig wrote:
> On Thu, Sep 16, 2021 at 11:40:28AM -0700, Dan Williams wrote:
> > > That was my gut feeling.  If everyone feels 100% comfortable with
> > > zeroingas the mechanism to clear poisoning I'll cave in.  The most
> > > important bit is that we do that through a dedicated DAX path instead
> > > of abusing the block layer even more.
> > 
> > ...or just rename dax_zero_page_range() to dax_reset_page_range()?
> > Where reset == "zero + clear-poison"?
> 
> I'd say that naming is more confusing than overloading zero.

How about dax_zeroinit_range() ?

To go with its fallocate flag, FALLOC_FL_ZEROINIT_RANGE (yeah, I've
been too busy sorting out -rc1 regressions to repost this), which will
reset the hardware (whatever that means) and set the contents to the
known value zero.

Userspace usage model:

void handle_media_error(int fd, loff_t pos, size_t len)
{
	int ret;

	/* yell about this for posterity's sake */

	ret = fallocate(fd, FALLOC_FL_ZEROINIT_RANGE, pos, len);
	if (ret)
		return;		/* the device is still offline */

	/* yay our disk drive / pmem / stone table engraver is online */
}

> > > I'm really worried about both patartitions on DAX and DM passing through
> > > DAX because they deeply bind DAX to the block layer, which is just a bad
> > > idea.  I think we also need to sort that whole story out before removing
> > > the EXPERIMENTAL tags.
> > 
> > I do think it was a mistake to allow for DAX on partitions of a pmemX
> > block-device.
> > 
> > DAX-reflink support may be the opportunity to start deprecating that
> > support. Only enable DAX-reflink for direct mounting on /dev/pmemX
> > without partitions (later add dax-device direct mounting),
> 
> I think we need to fully or almost fully sort this out.
> 
> Here is my bold suggestions:
> 
>  1) drop no drop the EXPERMINTAL on the current block layer overload
>     at all

I don't understand this.

>  2) add direct mounting of the nvdimm namespaces ASAP.  Because all
>     the filesystem currently also need the /dev/pmem0 device add a way
>     to open the block device by the dax_device instead of our current
>     way of doing the reverse
>  3) deprecate DAX support through block layer mounts with a say 2 year
>     deprecation period
>  4) add DAX remapping devices as needed

What devices are needed?  linear for lvm, and maybe error so we can
actually test all this stuff?

> I'll volunteer to write the initial code for 2).  And I think we should
> not allow DAX+reflink on the block device shim at all.

/me has other questions about daxreflink, but I'll ask them on shiyang's
thread.

--D
Dan Williams Sept. 17, 2021, 7:37 p.m. UTC | #10
On Fri, Sep 17, 2021 at 5:57 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Thu, Sep 16, 2021 at 11:40:28AM -0700, Dan Williams wrote:
> > > That was my gut feeling.  If everyone feels 100% comfortable with
> > > zeroingas the mechanism to clear poisoning I'll cave in.  The most
> > > important bit is that we do that through a dedicated DAX path instead
> > > of abusing the block layer even more.
> >
> > ...or just rename dax_zero_page_range() to dax_reset_page_range()?
> > Where reset == "zero + clear-poison"?
>
> I'd say that naming is more confusing than overloading zero.

Ok, I see Darrick has a better suggestion for the shed color.

>
> > > I'm really worried about both patartitions on DAX and DM passing through
> > > DAX because they deeply bind DAX to the block layer, which is just a bad
> > > idea.  I think we also need to sort that whole story out before removing
> > > the EXPERIMENTAL tags.
> >
> > I do think it was a mistake to allow for DAX on partitions of a pmemX
> > block-device.
> >
> > DAX-reflink support may be the opportunity to start deprecating that
> > support. Only enable DAX-reflink for direct mounting on /dev/pmemX
> > without partitions (later add dax-device direct mounting),
>
> I think we need to fully or almost fully sort this out.
>
> Here is my bold suggestions:
>
>  1) drop no drop the EXPERMINTAL on the current block layer overload
>     at all

s/drop no drop/do not drop/?

>  2) add direct mounting of the nvdimm namespaces ASAP.  Because all
>     the filesystem currently also need the /dev/pmem0 device add a way
>     to open the block device by the dax_device instead of our current
>     way of doing the reverse

Oh, interesting. I can get on board with that. There's currently no
/dev entry for namespaces. It's either /dev/pmemX, or /dev/daxX.Y as a
child of /sys/bus/nd/devices/namespaceX.Y. However, I see nothing
glaringly wrong with having /dev/daxX.Y always published regardless of
whether /dev/pmemX is also present.

>  3) deprecate DAX support through block layer mounts with a say 2 year
>     deprecation period
>  4) add DAX remapping devices as needed
>
> I'll volunteer to write the initial code for 2).  And I think we should
> not allow DAX+reflink on the block device shim at all.

Yeah, I think this can fly.
Dan Williams Sept. 17, 2021, 8:21 p.m. UTC | #11
On Fri, Sep 17, 2021 at 8:27 AM Darrick J. Wong <djwong@kernel.org> wrote:
>
> On Fri, Sep 17, 2021 at 01:53:33PM +0100, Christoph Hellwig wrote:
> > On Thu, Sep 16, 2021 at 11:40:28AM -0700, Dan Williams wrote:
> > > > That was my gut feeling.  If everyone feels 100% comfortable with
> > > > zeroingas the mechanism to clear poisoning I'll cave in.  The most
> > > > important bit is that we do that through a dedicated DAX path instead
> > > > of abusing the block layer even more.
> > >
> > > ...or just rename dax_zero_page_range() to dax_reset_page_range()?
> > > Where reset == "zero + clear-poison"?
> >
> > I'd say that naming is more confusing than overloading zero.
>
> How about dax_zeroinit_range() ?

Works for me.

>
> To go with its fallocate flag (yeah I've been too busy sorting out -rc1
> regressions to repost this) FALLOC_FL_ZEROINIT_RANGE that will reset the
> hardware (whatever that means) and set the contents to the known value
> zero.
>
> Userspace usage model:
>
> void handle_media_error(int fd, loff_t pos, size_t len)
> {
>         /* yell about this for posterior's sake */
>
>         ret = fallocate(fd, FALLOC_FL_ZEROINIT_RANGE, pos, len);
>
>         /* yay our disk drive / pmem / stone table engraver is online */

The fallocate mode can still be error-aware though, right? When the FS
has knowledge of the error locations the fallocate mode could be
fallocate(fd, FALLOC_FL_OVERWRITE_ERRORS, pos, len) with the semantics
of attempting to zero out any known poison extents in the given file
range? At the risk of going overboard on new fallocate modes there
could also (or instead of) be FALLOC_FL_PUNCH_ERRORS to skip trying to
clear them and just ask the FS to throw error extents away.

> }
>
> > > > I'm really worried about both patartitions on DAX and DM passing through
> > > > DAX because they deeply bind DAX to the block layer, which is just a bad
> > > > idea.  I think we also need to sort that whole story out before removing
> > > > the EXPERIMENTAL tags.
> > >
> > > I do think it was a mistake to allow for DAX on partitions of a pmemX
> > > block-device.
> > >
> > > DAX-reflink support may be the opportunity to start deprecating that
> > > support. Only enable DAX-reflink for direct mounting on /dev/pmemX
> > > without partitions (later add dax-device direct mounting),
> >
> > I think we need to fully or almost fully sort this out.
> >
> > Here are my bold suggestions:
> >
> >  1) do not drop the EXPERIMENTAL on the current block layer overload
> >     at all
>
> I don't understand this.
>
> >  2) add direct mounting of the nvdimm namespaces ASAP.  Because all
> >     the filesystem currently also need the /dev/pmem0 device add a way
> >     to open the block device by the dax_device instead of our current
> >     way of doing the reverse
> >  3) deprecate DAX support through block layer mounts with a say 2 year
> >     deprecation period
> >  4) add DAX remapping devices as needed
>
> What devices are needed?  linear for lvm, and maybe error so we can
> actually test all this stuff?

The proposal would be zero lvm support. The nvdimm namespace
definition would need to grow support for concatenation + striping.
Soft error injection could be achieved by writing to the badblocks
interface.
Darrick J. Wong Sept. 18, 2021, 12:07 a.m. UTC | #12
On Fri, Sep 17, 2021 at 01:21:25PM -0700, Dan Williams wrote:
> On Fri, Sep 17, 2021 at 8:27 AM Darrick J. Wong <djwong@kernel.org> wrote:
> >
> > On Fri, Sep 17, 2021 at 01:53:33PM +0100, Christoph Hellwig wrote:
> > > On Thu, Sep 16, 2021 at 11:40:28AM -0700, Dan Williams wrote:
> > > > > That was my gut feeling.  If everyone feels 100% comfortable with
> > > > > zeroing as the mechanism to clear poisoning I'll cave in.  The most
> > > > > important bit is that we do that through a dedicated DAX path instead
> > > > > of abusing the block layer even more.
> > > >
> > > > ...or just rename dax_zero_page_range() to dax_reset_page_range()?
> > > > Where reset == "zero + clear-poison"?
> > >
> > > I'd say that naming is more confusing than overloading zero.
> >
> > How about dax_zeroinit_range() ?
> 
> Works for me.
> 
> >
> > To go with its fallocate flag (yeah I've been too busy sorting out -rc1
> > regressions to repost this) FALLOC_FL_ZEROINIT_RANGE that will reset the
> > hardware (whatever that means) and set the contents to the known value
> > zero.
> >
> > Userspace usage model:
> >
> > void handle_media_error(int fd, loff_t pos, size_t len)
> > {
> >         /* yell about this for posterior's sake */
> >
> >         ret = fallocate(fd, FALLOC_FL_ZEROINIT_RANGE, pos, len);
> >
> >         /* yay our disk drive / pmem / stone table engraver is online */
> 
> The fallocate mode can still be error-aware though, right? When the FS
> has knowledge of the error locations the fallocate mode could be
> fallocate(fd, FALLOC_FL_OVERWRITE_ERRORS, pos, len) with the semantics
> of attempting to zero out any known poison extents in the given file
> range? At the risk of going overboard on new fallocate modes there
> could also (or instead of) be FALLOC_FL_PUNCH_ERRORS to skip trying to
> clear them and just ask the FS to throw error extents away.

It /could/ be, but for now I've stuck to what you see is what you get --
if you tell it to 'zero initialize' 1MB of pmem, it'll write zeroes and
clear the poison on all 1MB, regardless of the old contents.

IOWs, you can use it from a poison handler on just the range that it
told you about, or you could use it to bulk-clear a lot of space all at
once.

A dorky thing here is that the dax_zero_page_range function returns EIO
if you tell it to do more than one page...


> 
> > }
> >
> > > > > I'm really worried about both partitions on DAX and DM passing through
> > > > > DAX because they deeply bind DAX to the block layer, which is just a bad
> > > > > idea.  I think we also need to sort that whole story out before removing
> > > > > the EXPERIMENTAL tags.
> > > >
> > > > I do think it was a mistake to allow for DAX on partitions of a pmemX
> > > > block-device.
> > > >
> > > > DAX-reflink support may be the opportunity to start deprecating that
> > > > support. Only enable DAX-reflink for direct mounting on /dev/pmemX
> > > > without partitions (later add dax-device direct mounting),
> > >
> > > I think we need to fully or almost fully sort this out.
> > >
> > > Here are my bold suggestions:
> > >
> > >  1) do not drop the EXPERIMENTAL on the current block layer overload
> > >     at all
> >
> > I don't understand this.
> >
> > >  2) add direct mounting of the nvdimm namespaces ASAP.  Because all
> > >     the filesystem currently also need the /dev/pmem0 device add a way
> > >     to open the block device by the dax_device instead of our current
> > >     way of doing the reverse
> > >  3) deprecate DAX support through block layer mounts with a say 2 year
> > >     deprecation period
> > >  4) add DAX remapping devices as needed
> >
> > What devices are needed?  linear for lvm, and maybe error so we can
> > actually test all this stuff?
> 
> The proposal would be zero lvm support. The nvdimm namespace
> definition would need to grow support for concatenation + striping.

Ah, ok.

> Soft error injection could be achieved by writing to the badblocks
> interface.

<nod>

I'll send out an RFC of what I have currently.

--D
Jane Chu Sept. 23, 2021, 8:48 p.m. UTC | #13
On 9/15/2021 1:27 PM, Dan Williams wrote:
>>> I'm also thinking about the MOVDIR64B instruction and how it
>>> might be used to clear poison on the fly with a single 'store'.
>>> Of course, that means we need to figure out how to narrow down the
>>> error blast radius first.
> It turns out the MOVDIR64B error clearing idea runs into problem with
> the device poison tracking. Without the explicit notification that
> software wanted the error cleared the device may ghost report errors
> that are not there anymore. I think we should continue explicit error
> clearing and notification of the device that the error has been
> cleared (by asking the device to clear it).
> 

Sorry for the late response, I was out for several days.

Your concern is understood.  I wasn't thinking of an out-of-band
MOVDIR64B to clear poison; I was thinking of adding a case to
pmem_clear_poison() such that, if the CPUID feature flag shows
MOVDIR64B is supported, it is used instead of the BIOS interface
to clear poison. The advantages are: a) it's a lot faster;
b) it has a smaller blast radius.  And the driver still has a
chance to update its ->bb record.

thanks,
-jane
Jane Chu Sept. 23, 2021, 8:55 p.m. UTC | #14
On 9/15/2021 9:15 AM, Darrick J. Wong wrote:
> On Wed, Sep 15, 2021 at 12:22:05AM -0700, Jane Chu wrote:
>> Hi, Dan,
>>
>> On 9/14/2021 9:44 PM, Dan Williams wrote:
>>> On Tue, Sep 14, 2021 at 4:32 PM Jane Chu <jane.chu@oracle.com> wrote:
>>>>
>>>> If pwrite(2) encounters poison in a pmem range, it fails with EIO.
>>>> This is unnecessary if hardware is capable of clearing the poison.
>>>>
>>>> Though not all dax backend hardware has the capability of clearing
>>>> poison on the fly, dax backed by Intel DCPMEM does, and it's
>>>> desirable to, first, speed up repairing by means of it;
>>>> second, maintain backend continuity instead of fragmenting it in
>>>> search for clean blocks.
>>>>
>>>> Jane Chu (3):
>>>>     dax: introduce dax_operation dax_clear_poison
>>>
>>> The problem with new dax operations is that they need to be plumbed
>>> not only through fsdax and pmem, but also through device-mapper.
>>>
>>> In this case I think we're already covered by dax_zero_page_range().
>>> That will ultimately trigger pmem_clear_poison() and it is routed
>>> through device-mapper properly.
>>>
>>> Can you clarify why the existing dax_zero_page_range() is not sufficient?
>>
>> fallocate ZERO_RANGE is in itself a functionality that, applied to dax,
>> should zero out the media range.  So one may argue it is part of the
>> block operations, and not something explicitly aimed at clearing
>> poison.
> 
> Yeah, Christoph suggested that we make the clearing operation explicit
> in a related thread a few weeks ago:
> https://lore.kernel.org/linux-fsdevel/YRtnlPERHfMZ23Tr@infradead.org/
> 
> I like Jane's patchset far better than the one that I sent, because it
> doesn't require a block device wrapper for the pmem, and it enables us
> to tell application writers that they can handle media errors by
> pwrite()ing the bad region, just like they do for nvme and spinners.
> 
>> I'm also thinking about the MOVDIR64B instruction and how it
>> might be used to clear poison on the fly with a single 'store'.
>> Of course, that means we need to figure out how to narrow down the
>> error blast radius first.
> 
> That was one of the advantages of Shiyang Ruan's NAKed patchset to
> enable byte-granularity media errors to pass upwards through the stack
> back to the filesystem, which could then tell applications exactly what
> they lost.
> 
> I want to get back to that, though if Dan won't withdraw the NAK then I
> don't know how to move forward...
> 
>> With respect to plumbing through device-mapper, I thought about that,
>> and wasn't sure.  I mean, the clear-poison work will eventually fall to
>> the pmem driver, so how does that play out through the DM layers?
> 
> Each of the dm drivers has to add their own ->clear_poison operation
> that remaps the incoming (sector, len) parameters as appropriate for
> that device and then calls the lower device's ->clear_poison with the
> translated parameters.
> 
> This (AFAICT) has already been done for dax_zero_page_range, so I sense
> that Dan is trying to save you a bunch of code plumbing work by nudging
> you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> and then you only need patches 2-3.

Thanks Darrick for the explanation!
I don't mind adding DM layer support; it sounds straightforward.
I also like your latest patch and am wondering if the clear_poison API
is still of value.

thanks,
-jane

> 
>> BTW, our customer doesn't care about creating dax volume thru DM, so.
> 
> They might not care, but anything going upstream should work in the
> general case.
> 
> --D
> 
>> thanks!
>> -jane
>>
>>
>>>
>>>>     dax: introduce dax_clear_poison to dax pwrite operation
>>>>     libnvdimm/pmem: Provide pmem_dax_clear_poison for dax operation
>>>>
>>>>    drivers/dax/super.c   | 13 +++++++++++++
>>>>    drivers/nvdimm/pmem.c | 17 +++++++++++++++++
>>>>    fs/dax.c              |  9 +++++++++
>>>>    include/linux/dax.h   |  6 ++++++
>>>>    4 files changed, 45 insertions(+)
>>>>
>>>> --
>>>> 2.18.4
>>>>
Dan Williams Sept. 23, 2021, 9:42 p.m. UTC | #15
On Thu, Sep 23, 2021 at 1:56 PM Jane Chu <jane.chu@oracle.com> wrote:
[..]
> > This (AFAICT) has already been done for dax_zero_page_range, so I sense
> > that Dan is trying to save you a bunch of code plumbing work by nudging
> > you towards doing s/dax_clear_poison/dax_zero_page_range/ to this series
> > and then you only need patches 2-3.
>
> Thanks Darrick for the explanation!
> I don't mind to add DM layer support, it sounds straight forward.
> I also like your latest patch and am wondering if the clear_poison API
> is still of value.

No, the discussion about fallocate(...ZEROINIT...) has led to a
better solution. Instead of making error clearing a silent /
opportunistic side-effect of writes, or trying to define a new
fallocate mode, just add a new RWF_CLEAR_HWERROR flag to pwritev2(). This allows
for dax_direct_access() to map the page regardless of poison and
trigger pmem_copy_from_iter() to precisely handle sub-page poison.