
dm-raid: add RAID discard support

Message ID 1411491106-23676-1-git-send-email-heinzm@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Mike Snitzer

Commit Message

Heinz Mauelshagen Sept. 23, 2014, 4:51 p.m. UTC
From: Heinz Mauelshagen <heinzm@redhat.com>

Support discard in all RAID levels 1,10,4,5 and 6.

In case of RAID levels 4,5 and 6 we have to check for the capability
to zero data on discards to avoid stripe data corruption.

If that capability is missing on any disk in a RAID set,
we have to disable discards.


Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>

---
 drivers/md/dm-raid.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 43 insertions(+), 1 deletion(-)

Comments

Jonathan Brassow Sept. 23, 2014, 9:52 p.m. UTC | #1
On Sep 23, 2014, at 11:51 AM, heinzm@redhat.com wrote:

> From: Heinz Mauelshagen <heinzm@redhat.com>
> 
> Support discard in all RAID levels 1,10,4,5 and 6.
> 
> In case of RAID levels 4,5 and 6 we have to check for the capability
> to zero data on discards to avoid stripe data corruption.
> 
> If that capability is missing on any disk in a RAID set,
> we have to disable discards.
> 
> 
> Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>

Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>

 brassow

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Martin K. Petersen Sept. 23, 2014, 11:07 p.m. UTC | #2
>>>>> "Heinz" == heinzm  <heinzm@redhat.com> writes:

Heinz> In case of RAID levels 4,5 and 6 we have to check for the
Heinz> capability to zero data on discards to avoid stripe data
Heinz> corruption.

I'm quite concerned about relying on discard_zeroes_data since it's a
hint and not a hard guarantee.
NeilBrown Sept. 23, 2014, 11:33 p.m. UTC | #3
On Tue, 23 Sep 2014 19:07:35 -0400 "Martin K. Petersen"
<martin.petersen@oracle.com> wrote:

> >>>>> "Heinz" == heinzm  <heinzm@redhat.com> writes:
> 
> Heinz> In case of RAID levels 4,5 and 6 we have to check for the
> Heinz> capability to zero data on discards to avoid stripe data
> Heinz> corruption.
> 
> I'm quite concerned about relying on discard_zeroes_data since it's a
> hint and not a hard guarantee.
> 

If it is a "hint", then why isn't it "discard_sometimes_zeroes_data"?

md/raid5 already depends on this.  If a read from a discarded region doesn't
reliably return zeros, then raid5 cannot support discard.
Should I turn off discard support in raid5?

Also, could you help me understand when such a hint-but-no-guarantee would be
useful?

Thanks,
NeilBrown
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Martin K. Petersen Sept. 24, 2014, 2:20 a.m. UTC | #4
>>>>> "Neil" == NeilBrown  <neilb@suse.de> writes:

Neil,

Several people have pointed out the issues with the reliability of
discard_zeroes_data in the past. My objection with Heinz' patch is
purely that we are perpetuating an interface that we have come to learn
that we can't entirely trust.

Neil> If it is a "hint", then why isn't it
Neil> "discard_sometimes_zeroes_data"?

Because at the time we put it in it was expected to be a solid
guarantee. However, as it turns out the spec is vaguely worded and only
says that the device must return zeroes if a block has been successfully
discarded. But since discard itself is only a hint the device is free to
silently ignore all or parts of the request. *sigh*

There were several approaches being entertained in both T10 and T13
about how to tighten this up, including a FORCE bit that would cause the
device to return an error if parts of the request were being ignored. I
was hoping that we would be able to use that. However, the consensus at
the T10 meeting this summer was that WRITE SAME is the only reliable way
to ensure that zeroed blocks are returned. And the spec was amended to
clarify that.

Neil> If a read from a discarded region doesn't reliably return zeros,
Neil> then raid5 cannot support discard.  Should I turn of discard
Neil> support in raid5?

I've been on the fence about this for a while.

The good news is that while the spec has been vague and many battles
fought in the standards bodies, most vendors are reasonably sane when it
comes to implementing this. We haven't had any corruption horror stories
from the field that I'm aware of. The first couple of generations of
SSDs were a travesty in this department but current devices behave
reasonably well. Hardware RAID vendors generally solve the RAID5 SSD
problem by explicitly whitelisting drives that are known to do the right
thing. Sadly we don't have a good device list. And for most drives the
discards only get dropped in rare corner cases so it's not trivial to
write a test tool to generate such a list.

I was really hoping we'd get better guarantees from the standards folks
this summer. But as the spec is currently written switching to a WRITE
SAME bias in our discard heuristics may have the opposite effect of what
we desire. That's because the spec mandates that if a device can not
discard the blocks in the request it must write zeroes to them. IOW, we
get the determinism at the expense of potentially ending up writing the
blocks in question, thus increasing write wear. Probably not what we
want for the discard use case.

I am in the process of tweaking the discard code to adhere to the August
27th T10 SBC-4 draft. The net takeaway is that if you depend on reading
zeroes back you should be using blkdev_issue_zeroout() and not
blkdev_issue_discard(). raid5.c effectively needs to translate discard
requests from above to zeroouts below.

I am torn between making blkdev_issue_zeroout() unprovision if possible
or having an explicit allocate flag. Some users only care about getting
zeroes back but others use blkdev_issue_zeroout() to preallocate blocks
(which is the current behavior).

Right now we have:

  discard		Unprovision, may or may not return zeroes
  zeroout		Provision, return zeroes

We'll have to move to a model where we have a third variant:

  discard		Unprovision, may or may not return zeroes
  zeroout, deallocate	Unprovision if possible, return zeroes
  zeroout, allocate     Provision, return zeroes

The zeroout, deallocate variant would be enabled if the device supports
WRITE SAME w/UNMAP and has the LBPRZ flag set (and is not our libata
SATL).
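
For illustration, a minimal sketch of how a caller such as raid5.c could
express those three variants on top of today's interfaces. The
blkdev_issue_discard() and blkdev_issue_zeroout() calls exist with the
signatures shown; the allocate/deallocate distinction is only the
hypothetical knob discussed above, so it appears in comments rather than
as a real parameter, and the enum names are made up for this sketch.

#include <linux/blkdev.h>

enum zero_policy {
	ZERO_NOT_REQUIRED,	/* discard: may or may not read back as zeroes */
	ZERO_DEALLOCATE,	/* unprovision if possible, must read back zeroes */
	ZERO_ALLOCATE,		/* provision and zero (current zeroout behaviour) */
};

static int issue_unmap_or_zero(struct block_device *bdev, sector_t sector,
			       sector_t nr_sects, enum zero_policy policy)
{
	switch (policy) {
	case ZERO_NOT_REQUIRED:
		/* plain discard, no determinism expected on read-back */
		return blkdev_issue_discard(bdev, sector, nr_sects,
					    GFP_NOIO, 0);
	case ZERO_DEALLOCATE:
		/* would use WRITE SAME w/UNMAP when LBPRZ is set */
		return blkdev_issue_zeroout(bdev, sector, nr_sects,
					    GFP_NOIO /* , deallocate */);
	case ZERO_ALLOCATE:
	default:
		/* current blkdev_issue_zeroout() behaviour: provision + zero */
		return blkdev_issue_zeroout(bdev, sector, nr_sects,
					    GFP_NOIO /* , allocate */);
	}
}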

Discard is the bane of my existence :(
Jonathan Brassow Sept. 24, 2014, 4:05 a.m. UTC | #5
On Sep 23, 2014, at 9:20 PM, Martin K. Petersen wrote:

>>>>>> "Neil" == NeilBrown  <neilb@suse.de> writes:
> 
> Neil,
> 
> Several people have pointed out the issues with the reliability of
> discard_zeroes_data in the past. My objection with Heinz' patch is
> purely that we are perpetuating an interface that we have come to learn
> that we can't entirely trust.
> 
> Neil> If it is a "hint", then why isn't it
> Neil> "discard_sometimes_zeroes_data"?
> 
> Because at the time we put it in it was expected to be a solid
> guarantee. However, as it turns out the spec is vaguely worded and only
> says that the device must return zeroes if a block has been successfully
> discarded. But since discard itself is only a hint the device is free to
> silently ignore all or parts of the request. *sigh*
> 
> There were several approaches being entertained in both T10 and T13
> about how to tighten this up, including a FORCE bit that would cause the
> device to return an error if parts of the request were being ignored. I
> was hoping that we would be able to use that. However, the consensus at
> the T10 meeting this summer was that WRITE SAME is the only reliable way
> to ensure that zeroed blocks are returned. And the spec was amended to
> clarify that.
> 
> Neil> If a read from a discarded region doesn't reliably return zeros,
> Neil> then raid5 cannot support discard.  Should I turn of discard
> Neil> support in raid5?
> 
> I've been on the fence about this for a while.
> 
> The good news is that while the spec has been vague and many battles
> fought in the standards bodies, most vendors are reasonably sane when it
> comes to implementing this. We haven't had any corruption horror stories
> from the field that I'm aware of. The first couple of generations of
> SSDs were a travesty in this department but current devices behave
> reasonably well. Hardware RAID vendors generally solve the RAID5 SSD
> problem by explicitly whitelisting drives that are known to do the right
> thing. Sadly we don't have a good device list. And for most drives the
> discards only get dropped in rare corner cases so it's not trivial to
> write a test tool to generate such a list.
> 
> I was really hoping we'd get better guarantees from the standards folks
> this summer. But as the spec is currently written switching to a WRITE
> SAME bias in our discard heuristics may have the opposite effect of what
> we desire. That's because the spec mandates that if a device can not
> discard the blocks in the request it must write zeroes to them. IOW, we
> get the determinism at the expense of potentially ending up writing the
> blocks in question, thus increasing write wear. Probably not what we
> want for the discard use case.
> 
> I am in the process of tweaking the discard code to adhere to the August
> 27th T10 SBC-4 draft. The net takeaway is that if you depend on reading
> zeroes back you should be using blkdev_issue_zeroout() and not
> blkdev_issue_discard(). raid5.c effectively needs to translate discard
> requests from above to zeroouts below.
> 
> I am torn between making blkdev_issue_zeroout() unprovision if possible
> or having an explicit allocate flag. Some users only care about getting
> zeroes back but others use blkdev_issue_zeroout() to preallocate blocks
> (which is the current behavior).
> 
> Right now we have:
> 
>  discard		Unprovision, may or may not return zeroes
>  zeroout		Provision, return zeroes
> 
> We'll have to move to a model where we have a third variant:
> 
>  discard		Unprovision, may or may not return zeroes
>  zeroout, deallocate	Unprovision if possible, return zeroes
>  zeroout, allocate     Provision, return zeroes
> 
> The zeroout, deallocate variant would be enabled if the device supports
> WRITE SAME w/UNMAP and has the LBPRZ flag set (and is not our libata
> SATL).
> 
> Discard is the bane of my existence :(

It's not really so bad if it doesn't return zeros, is it?  As long as it returns the previous contents.  You mentioned that the discard was a hint - the drive may or may not discard the entire portion.  If that means the portions that are discarded will always return zero and the portions that aren't will always return their previous contents, is there any problem?  The real issue is randomness, no?  Could this be why "we haven't had any corruption horror stories", or is it because we got lucky with sane vendors?

 brassow



--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
NeilBrown Sept. 24, 2014, 4:21 a.m. UTC | #6
On Tue, 23 Sep 2014 23:05:51 -0500 Brassow Jonathan <jbrassow@redhat.com>
wrote:

> 
> On Sep 23, 2014, at 9:20 PM, Martin K. Petersen wrote:
> 
> >>>>>> "Neil" == NeilBrown  <neilb@suse.de> writes:
> > 
> > Neil,
> > 
> > Several people have pointed out the issues with the reliability of
> > discard_zeroes_data in the past. My objection with Heinz' patch is
> > purely that we are perpetuating an interface that we have come to learn
> > that we can't entirely trust.
> > 
> > Neil> If it is a "hint", then why isn't it
> > Neil> "discard_sometimes_zeroes_data"?
> > 
> > Because at the time we put it in it was expected to be a solid
> > guarantee. However, as it turns out the spec is vaguely worded and only
> > says that the device must return zeroes if a block has been successfully
> > discarded. But since discard itself is only a hint the device is free to
> > silently ignore all or parts of the request. *sigh*
> > 
> > There were several approaches being entertained in both T10 and T13
> > about how to tighten this up, including a FORCE bit that would cause the
> > device to return an error if parts of the request were being ignored. I
> > was hoping that we would be able to use that. However, the consensus at
> > the T10 meeting this summer was that WRITE SAME is the only reliable way
> > to ensure that zeroed blocks are returned. And the spec was amended to
> > clarify that.
> > 
> > Neil> If a read from a discarded region doesn't reliably return zeros,
> > Neil> then raid5 cannot support discard.  Should I turn of discard
> > Neil> support in raid5?
> > 
> > I've been on the fence about this for a while.
> > 
> > The good news is that while the spec has been vague and many battles
> > fought in the standards bodies, most vendors are reasonably sane when it
> > comes to implementing this. We haven't had any corruption horror stories
> > from the field that I'm aware of. The first couple of generations of
> > SSDs were a travesty in this department but current devices behave
> > reasonably well. Hardware RAID vendors generally solve the RAID5 SSD
> > problem by explicitly whitelisting drives that are known to do the right
> > thing. Sadly we don't have a good device list. And for most drives the
> > discards only get dropped in rare corner cases so it's not trivial to
> > write a test tool to generate such a list.
> > 
> > I was really hoping we'd get better guarantees from the standards folks
> > this summer. But as the spec is currently written switching to a WRITE
> > SAME bias in our discard heuristics may have the opposite effect of what
> > we desire. That's because the spec mandates that if a device can not
> > discard the blocks in the request it must write zeroes to them. IOW, we
> > get the determinism at the expense of potentially ending up writing the
> > blocks in question, thus increasing write wear. Probably not what we
> > want for the discard use case.
> > 
> > I am in the process of tweaking the discard code to adhere to the August
> > 27th T10 SBC-4 draft. The net takeaway is that if you depend on reading
> > zeroes back you should be using blkdev_issue_zeroout() and not
> > blkdev_issue_discard(). raid5.c effectively needs to translate discard
> > requests from above to zeroouts below.
> > 
> > I am torn between making blkdev_issue_zeroout() unprovision if possible
> > or having an explicit allocate flag. Some users only care about getting
> > zeroes back but others use blkdev_issue_zeroout() to preallocate blocks
> > (which is the current behavior).
> > 
> > Right now we have:
> > 
> >  discard		Unprovision, may or may not return zeroes
> >  zeroout		Provision, return zeroes
> > 
> > We'll have to move to a model where we have a third variant:
> > 
> >  discard		Unprovision, may or may not return zeroes
> >  zeroout, deallocate	Unprovision if possible, return zeroes
> >  zeroout, allocate     Provision, return zeroes
> > 
> > The zeroout, deallocate variant would be enabled if the device supports
> > WRITE SAME w/UNMAP and has the LBPRZ flag set (and is not our libata
> > SATL).
> > 
> > Discard is the bane of my existence :(

Thanks for the detailed explanation Martin!!
It seems that "discard" still comes with a strong "Caveat Emptor"
Sad.

> 
> It's not really so bad if it doesn't return zero's, is it?  As long as it returns the previous contents.  You mentioned that the discard was a hint - the drive may or may not discard the entire portion.  If that means the portions that are discarded will always return zero and the portions that aren't will always return their previous contents, is there any problem?  The real issue is randomness, no?  Could this be why "we haven't had any corruption horror stories", or is it because we got lucky with sane vendors?

For RAID5 it is not acceptable to return "zeros or the original content".
This is because of the possibility of RMW cycles to update the parity block.
As "discard" the doesn't treat all blocks in a stripe in an identical way
will cause the parity block to become incorrect, and it will stay incorrect
through multiple RMW cycles until an RCW corrects it.
A device failure while the parity block is incorrect will lead to data
corruption.
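
To make the RMW hazard concrete, here is a toy, userspace-only
illustration (not md code; the byte values are arbitrary): one data leg
silently keeps its old contents after a whole-stripe discard while the
other leg and the parity leg read back as zeroes, and a subsequent
read-modify-write then carries the bad parity forward.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* 2 data legs + parity; the whole stripe has just been discarded.
	 * Leg 1 and the parity leg read back as zeroes, leg 2 silently
	 * kept its old contents (0xbb). */
	uint8_t d1 = 0x00, d2 = 0xbb, p = 0x00;

	/* RMW update of d1: new parity = old parity ^ old d1 ^ new d1 */
	uint8_t d1_new = 0x5c;
	p ^= d1 ^ d1_new;
	d1 = d1_new;

	/* Parity no longer equals d1 ^ d2 ... */
	printf("parity on disk 0x%02x, correct parity 0x%02x\n", p, d1 ^ d2);

	/* ... so losing the d1 device now reconstructs garbage. */
	printf("reconstructed d1 0x%02x, expected 0x%02x\n",
	       (uint8_t)(p ^ d2), d1);
	return 0;
}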

The only real solution I can think of is to maintain an 'allocated' bitmap
and clear bits when DISCARDing.  When a write happens to a location that is
not 'allocated', the whole bit's worth gets resynced (or maybe zeroed out).

Grumble.

Thanks again,
NeilBrown
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Jonathan Brassow Sept. 24, 2014, 4:35 a.m. UTC | #7
On Sep 23, 2014, at 11:21 PM, NeilBrown wrote:

> 
> For RAID5 it is not acceptable to return "zeros or the original content".
> This is because of the possibility of RMW cycles to update the parity block.
> As "discard" the doesn't treat all blocks in a stripe in an identical way
> will cause the parity block to become incorrect, and it will stay incorrect
> through multiple RMW cycles until an RCW corrects it.
> A device failure while the parity block is incorrect will lead to data
> corruption.
> 
> The only real solution I can think of is to maintain an 'allocated' bitmap
> and clear bits when DISCARDing.  When a write happens to a location that is
> not 'allocated' the whole  bit worth gets resynced (or maybe zeroed-out).
> 
> Grumble.

I see, sure.

Are there problems with this 'allocated' bitmap idea when bitrot is considered?  Scrubbing (check/repair) assumes that the bitmap cannot be trusted and does a complete scan of the array - dealing with mismatches.  If bitrot happens in the 'allocated' bitmap...

 brassow


--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Heinz Mauelshagen Sept. 24, 2014, 11:02 a.m. UTC | #8
Martin,

thanks for the good explanation of the state of the discard union.
Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?

I was planning to have a followup patch for dm-raid supporting a dm-raid
table line argument to prohibit discard passdown.

In light of the fuzzy field situation wrt SSD firmware and
discard_zeroes_data support related to RAID4/5/6, we need that upstream
together with the initial patch.

That 'no_discard_passdown' table line argument can be added to dm-raid
RAID4/5/6 table lines to avoid possible data corruption, but can be
omitted on RAID1/10 table lines, because the latter do not suffer from
any discard_zeroes_data flaw.


Neil,

are you going to disable discards in RAID4/5/6 shortly
or rather go with your bitmap solution?

Heinz


On 09/24/2014 06:21 AM, NeilBrown wrote:
> On Tue, 23 Sep 2014 23:05:51 -0500 Brassow Jonathan <jbrassow@redhat.com>
> wrote:
>
>> On Sep 23, 2014, at 9:20 PM, Martin K. Petersen wrote:
>>
>>>>>>>> "Neil" == NeilBrown  <neilb@suse.de> writes:
>>> Neil,
>>>
>>> Several people have pointed out the issues with the reliability of
>>> discard_zeroes_data in the past. My objection with Heinz' patch is
>>> purely that we are perpetuating an interface that we have come to learn
>>> that we can't entirely trust.
>>>
>>> Neil> If it is a "hint", then why isn't it
>>> Neil> "discard_sometimes_zeroes_data"?
>>>
>>> Because at the time we put it in it was expected to be a solid
>>> guarantee. However, as it turns out the spec is vaguely worded and only
>>> says that the device must return zeroes if a block has been successfully
>>> discarded. But since discard itself is only a hint the device is free to
>>> silently ignore all or parts of the request. *sigh*
>>>
>>> There were several approaches being entertained in both T10 and T13
>>> about how to tighten this up, including a FORCE bit that would cause the
>>> device to return an error if parts of the request were being ignored. I
>>> was hoping that we would be able to use that. However, the consensus at
>>> the T10 meeting this summer was that WRITE SAME is the only reliable way
>>> to ensure that zeroed blocks are returned. And the spec was amended to
>>> clarify that.
>>>
>>> Neil> If a read from a discarded region doesn't reliably return zeros,
>>> Neil> then raid5 cannot support discard.  Should I turn of discard
>>> Neil> support in raid5?
>>>
>>> I've been on the fence about this for a while.
>>>
>>> The good news is that while the spec has been vague and many battles
>>> fought in the standards bodies, most vendors are reasonably sane when it
>>> comes to implementing this. We haven't had any corruption horror stories
>>> from the field that I'm aware of. The first couple of generations of
>>> SSDs were a travesty in this department but current devices behave
>>> reasonably well. Hardware RAID vendors generally solve the RAID5 SSD
>>> problem by explicitly whitelisting drives that are known to do the right
>>> thing. Sadly we don't have a good device list. And for most drives the
>>> discards only get dropped in rare corner cases so it's not trivial to
>>> write a test tool to generate such a list.
>>>
>>> I was really hoping we'd get better guarantees from the standards folks
>>> this summer. But as the spec is currently written switching to a WRITE
>>> SAME bias in our discard heuristics may have the opposite effect of what
>>> we desire. That's because the spec mandates that if a device can not
>>> discard the blocks in the request it must write zeroes to them. IOW, we
>>> get the determinism at the expense of potentially ending up writing the
>>> blocks in question, thus increasing write wear. Probably not what we
>>> want for the discard use case.
>>>
>>> I am in the process of tweaking the discard code to adhere to the August
>>> 27th T10 SBC-4 draft. The net takeaway is that if you depend on reading
>>> zeroes back you should be using blkdev_issue_zeroout() and not
>>> blkdev_issue_discard(). raid5.c effectively needs to translate discard
>>> requests from above to zeroouts below.
>>>
>>> I am torn between making blkdev_issue_zeroout() unprovision if possible
>>> or having an explicit allocate flag. Some users only care about getting
>>> zeroes back but others use blkdev_issue_zeroout() to preallocate blocks
>>> (which is the current behavior).
>>>
>>> Right now we have:
>>>
>>>   discard		Unprovision, may or may not return zeroes
>>>   zeroout		Provision, return zeroes
>>>
>>> We'll have to move to a model where we have a third variant:
>>>
>>>   discard		Unprovision, may or may not return zeroes
>>>   zeroout, deallocate	Unprovision if possible, return zeroes
>>>   zeroout, allocate     Provision, return zeroes
>>>
>>> The zeroout, deallocate variant would be enabled if the device supports
>>> WRITE SAME w/UNMAP and has the LBPRZ flag set (and is not our libata
>>> SATL).
>>>
>>> Discard is the bane of my existence :(
> Thanks for the detailed explanation Martin!!
> It seems that "discard" still comes with a strong "Caveat Emptor"
> Sad.
>
>> It's not really so bad if it doesn't return zero's, is it?  As long as it returns the previous contents.  You mentioned that the discard was a hint - the drive may or may not discard the entire portion.  If that means the portions that are discarded will always return zero and the portions that aren't will always return their previous contents, is there any problem?  The real issue is randomness, no?  Could this be why "we haven't had any corruption horror stories", or is it because we got lucky with sane vendors?
> For RAID5 it is not acceptable to return "zeros or the original content".
> This is because of the possibility of RMW cycles to update the parity block.
> As "discard" the doesn't treat all blocks in a stripe in an identical way
> will cause the parity block to become incorrect, and it will stay incorrect
> through multiple RMW cycles until an RCW corrects it.
> A device failure while the parity block is incorrect will lead to data
> corruption.
>
> The only real solution I can think of is to maintain an 'allocated' bitmap
> and clear bits when DISCARDing.  When a write happens to a location that is
> not 'allocated' the whole  bit worth gets resynced (or maybe zeroed-out).
>
> Grumble.
>
> Thanks again,
> NeilBrown
>
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
Christoph Hellwig Sept. 24, 2014, 2:21 p.m. UTC | #9
On Tue, Sep 23, 2014 at 07:07:35PM -0400, Martin K. Petersen wrote:
> >>>>> "Heinz" == heinzm  <heinzm@redhat.com> writes:
> 
> Heinz> In case of RAID levels 4,5 and 6 we have to check for the
> Heinz> capability to zero data on discards to avoid stripe data
> Heinz> corruption.
> 
> I'm quite concerned about relying on discard_zeroes_data since it's a
> hint and not a hard guarantee.

If we want to provide useful discards through raid we need to make
it a hard guarantee.  That is:

 - only set it when using a WRITE SAME variant on scsi
 - start adding a whitelist for ATA

Relying on zeroout, which also works when it doesn't unmap below, is not
the answer, especially if it may end up doing a physical write of zeroes
on disk devices.  Alternatively we could get rid of discard_zeroes_data
and stop the pretence of any sane behavior from discard, and instead
expose an unmap bit through the blkdev_issue_zeroout stack for those
callers who want to do a discard, but rely on zeroes.  The end effect
will be the same, just a different presentation.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Heinz Mauelshagen Sept. 24, 2014, 2:38 p.m. UTC | #10
On 09/24/2014 04:21 PM, Christoph Hellwig wrote:
> On Tue, Sep 23, 2014 at 07:07:35PM -0400, Martin K. Petersen wrote:
>>>>>>> "Heinz" == heinzm  <heinzm@redhat.com> writes:
>> Heinz> In case of RAID levels 4,5 and 6 we have to check for the
>> Heinz> capability to zero data on discards to avoid stripe data
>> Heinz> corruption.
>>
>> I'm quite concerned about relying on discard_zeroes_data since it's a
>> hint and not a hard guarantee.
> If we want to provide useful discards through raid we need to make
> it a hard guarantee.  That is:
>
>   - only set it when using a WRITE SAME variant on scsi
>   - start adding a whitelist for ATA
>
> Relying on zeroout which also works when it doesn't unmap below is not
> the answer, especially if it may end up doing a physical write of zeroes
> on disk devices.  Alternatively we could get rid of discard_zeroes_data
> and stop the pretence of any sane behavior from discard, and instead
> expose an unmap bit through the blkdev_issue_zeroout stack for those
> callers who want to do a discard, but rely on zeroes.  The end effect
> will be the same, just a different presentation.

Neil already came up with an unmap bitmap to add to the RAID4/5/6
personalities in order to harden MD in this regard.
In order to avoid any related data corruption with any future disk types,
that seems to be the appropriate solution.

Waiting for him to comment more on this.

Heinz

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Martin K. Petersen Sept. 24, 2014, 3:11 p.m. UTC | #11
>>>>> "Christoph" == Christoph Hellwig <hch@infradead.org> writes:

Christoph>  - start adding a whitelist for ATA

As far as I know the sanity bits have been discussed in T13 but haven't
been agreed upon yet.

The hard part about the whitelist is that bad things only happen in rare
circumstances. However, we may be able to take some hints from here:

http://www.lsi.com/downloads/Public/RAID%20Controllers/RAID%20Controllers%20Common%20Files/MegaRAID_SAS3_936x-938x_Compatibility_List.pdf

Christoph> Alternatively we could get rid of discard_zeroes_data and
Christoph> stop the pretence of any sane behavior from discard, and
Christoph> instead expose an unmap bit through the blkdev_issue_zeroout
Christoph> stack for those callers who want to do a discard, but rely on
Christoph> zeroes. The end effect will be the same, just a different
Christoph> presentation.

Yep.
NeilBrown Oct. 1, 2014, 2:56 a.m. UTC | #12
On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
wrote:

> 
> Martin,
> 
> thanks for the good explanation of the state of the discard union.
> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> 
> I was planning to have a followup patch for dm-raid supporting a dm-raid 
> table
> line argument to prohibit discard passdown.
> 
> In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data 
> support
> related to RAID4/5/6, we need that in upstream together with the initial 
> patch.
> 
> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6 
> table
> lines to avoid possible data corruption but can be avoided on RAID1/10 
> table lines,
> because the latter are not suffering from any  discard_zeroes_data flaw.
> 
> 
> Neil,
> 
> are you going to disable discards in RAID4/5/6 shortly
> or rather go with your bitmap solution?

Can I just close my eyes and hope it goes away?

The idea of a bitmap of uninitialised areas is not a short-term solution.
But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
would  mean that people with good sensible hardware wouldn't be able to use
it properly.

I would really rather that discard_zeroes_data were only set on devices where
it was actually true.  Then it wouldn't be my problem any more.

Maybe I could do a loud warning
  "Not enabling DISCARD on RAID5 because we cannot trust committees.
   Set "md_mod.willing_to_risk_discard=Y" if your devices reads discarded
   sectors as zeros"

and add an appropriate module parameter......
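
Such an opt-in could look roughly like the sketch below. This is not an
actual md patch; the parameter name is taken from the warning above, and
the setup hook is purely illustrative (it assumes the raid5.c context,
which already includes "md.h" for struct mddev and mdname()).

#include <linux/module.h>

static bool willing_to_risk_discard;
module_param(willing_to_risk_discard, bool, 0644);
MODULE_PARM_DESC(willing_to_risk_discard,
		 "Enable DISCARD on RAID4/5/6 even though discard_zeroes_data is only a hint");

/* Hypothetical hook in the raid5 setup path: */
static void raid5_maybe_enable_discard(struct mddev *mddev)
{
	if (!willing_to_risk_discard) {
		pr_warn("md/raid:%s: not enabling DISCARD; set md_mod.willing_to_risk_discard=Y if your devices read discarded sectors as zeros\n",
			mdname(mddev));
		return;
	}
	/* ...set QUEUE_FLAG_DISCARD on mddev->queue here... */
}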

While we are on the topic, maybe I should write down my thoughts about the
bitmap thing in case someone wants to contribute.

  There are 3 states that a 'region' can be in:
    1- known to be in-sync
    2- possibly not in sync, but it should  be
    3- probably not in sync, contains no valuable data.

 A read from '3' should return zeroes.
 A write to '3' should change the region to be '2'.  It could either
    write zeros before allowing the write to start, or it could just start
    a normal resync.

 Here is a question:  if a region has been discarded, are we guaranteed that
 reads are at least stable.  i.e. if I read twice will I definitely get the
 same value?

 It would be simplest to store all the states in the one bitmap. i.e. have a
 new version of the current bitmap (which distinguishes between '1' and '2')
 which allows the 3rd state.  Probably just 2 bits per region, though we could
 squeeze state for 20 regions into 32 bits if we really wanted :-)

 Then we set the bitmap to all '3's when it is created and set regions to
 '3' when discarding.
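
A minimal, illustrative encoding of those three states at 2 bits per
region (plain C, not tied to md's existing bitmap code; all names here
are made up for this sketch):

#include <stdint.h>
#include <stddef.h>
#include <string.h>

enum region_state {
	REGION_IN_SYNC     = 1,	/* 1: known to be in-sync */
	REGION_MAYBE_DIRTY = 2,	/* 2: possibly not in sync, but should be */
	REGION_UNALLOCATED = 3,	/* 3: probably not in sync, no valuable data */
};

#define REGIONS_PER_WORD (32 / 2)	/* 2 bits of state per region */

static inline enum region_state get_region(const uint32_t *map, size_t r)
{
	return (map[r / REGIONS_PER_WORD] >> (2 * (r % REGIONS_PER_WORD))) & 3;
}

static inline void set_region(uint32_t *map, size_t r, enum region_state s)
{
	uint32_t *w = &map[r / REGIONS_PER_WORD];
	unsigned int shift = 2 * (r % REGIONS_PER_WORD);

	*w = (*w & ~(3u << shift)) | ((uint32_t)s << shift);
}

/* A freshly created map starts with every region in state 3. */
static inline void init_region_map(uint32_t *map, size_t nregions)
{
	size_t nwords = (nregions + REGIONS_PER_WORD - 1) / REGIONS_PER_WORD;

	memset(map, 0xff, nwords * sizeof(uint32_t));	/* 0b11 == state 3 */
}

A discard would then be set_region(map, r, REGION_UNALLOCATED), and a
write to such a region would move it to REGION_MAYBE_DIRTY before the
resync (or zero-out) is kicked off.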

 One problem with this approach is that it forces the same region size.
 Regions for the current bitmap can conveniently be 10s of Megabytes.  For
 state '3', we ideally want one region per stripe.
 For 4TB drives with a 512K chunk size (and smaller is not uncommon) that is
 8 million regions which is a megabyte of bits.

 I guess that isn't all that much.  One problem with small regions for the
 1/2 distinction is that we end up updating the bitmap more often, but
 that could be avoided by setting multiple bits at once.
 i.e. keep the internal counters with a larger granularity, and when a
 counter (of outstanding writes) becomes non-zero, set quite a few bits in
 the bitmap.

 So that is probably what I would do:
  - new version for bitmap which has 2 bits per region and encodes 3 states
  - bitmap granularity matches chunk size (by default)
  - decouple region size of bitmap from region size for internal 'dirty'
    accounting
  - write to a 'state 3' region sets it to 'state 2' and kicks off resync
  - 'discard' sets state to '3'.

NeilBrown
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Heinz Mauelshagen Oct. 1, 2014, 11:13 a.m. UTC | #13
On 10/01/2014 04:56 AM, NeilBrown wrote:
> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
>
>> Martin,
>>
>> thanks for the good explanation of the state of the discard union.
>> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
>>
>> I was planning to have a followup patch for dm-raid supporting a dm-raid
>> table
>> line argument to prohibit discard passdown.
>>
>> In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data
>> support
>> related to RAID4/5/6, we need that in upstream together with the initial
>> patch.
>>
>> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
>> table
>> lines to avoid possible data corruption but can be avoided on RAID1/10
>> table lines,
>> because the latter are not suffering from any  discard_zeroes_data flaw.
>>
>>
>> Neil,
>>
>> are you going to disable discards in RAID4/5/6 shortly
>> or rather go with your bitmap solution?
> Can I just close my eyes and hope it goes away?

I'd think that it actually would.

Sane deployments are unlikely to base MD RAID4/5/6 mappings on 
ancient/unreliable SSDs.
Your "md_mod.willing_to_risk_discard=Y" mentioned below is the proper
way against all insane odds.

BTW:
we're referring to SSD discard_zeroes_data issues only so far it seems?
Can we be sure that SCSI unmap with the TPRZ bit set will always return
zeroes on reads for arbitrary storage devices?

>
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would  mean that people with good sensible hardware wouldn't be able to use
> it properly.
>
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true.  Then it wouldn't be my problem any more.
>
> Maybe I could do a loud warning
>    "Not enabling DISCARD on RAID5 because we cannot trust committees.
>     Set "md_mod.willing_to_risk_discard=Y" if your devices reads discarded
>     sectors as zeros"

Yes, we are thinking the same for dm-raid (like a "force_discard" 
parameter).

>
> and add an appropriate module parameter......
>
> While we are on the topic, maybe I should write down my thoughts about the
> bitmap thing in case someone wants to contribute.
>
>    There are 3 states that a 'region' can be in:
>      1- known to be in-sync
>      2- possibly not in sync, but it should  be
>      3- probably not in sync, contains no valuable data.
>
>   A read from '3' should return zeroes.
>   A write to '3' should change the region to be '2'.  It could either
>      write zeros before allowing the write to start, or it could just start
>      a normal resync.
>
>   Here is a question:  if a region has been discarded, are we guaranteed that
>   reads are at least stable.  i.e. if I read twice will I definitely get the
>   same value?

We can never be sure, taking the standards and implementation hassle
into account.
Thus RAID system OEMs have whitelists for supported drives.

>
>   It would be simplest to store all the states in the one bitmap. i.e. have a
>   new version of the current bitmap (which distinguishes between '1' and '2')
>   which allows the 3rd state.  Probably just 2 bits per region, though we could
>   squeeze state for 20 regions into 32 bits if we really wanted :-)
>
>   Then we set the bitmap to all '3's when it is created and set regions to
>   '3' when discarding.
>
>   One problem with this approach is that it forces the same region size.
>   Regions for the current bitmap can conveniently be 10s of Megabytes.  For
>   state '3', we ideally want one region per stripe.
>   For 4TB drives with a 512K chunk size (and smaller is not uncommon) that is
>   8 million regions which is a megabyte of bits.
>
>   I guess that isn't all that much.  One problem with small regions for the
>   1/2 distinction is that we end up updating the bitmap more often, but
>   that could be avoided by setting multiple bits at one.
>   i.e. keep the internal counters with a larger granularity, and when a
>   counter (of outstanding writes) becomes non-zero, set quite a few bits in
>   the bitmap.

How close would the bitmap update pattern of that new approach be
to what we have with the current one?
Will the difference be negligible, or will it impact performance?


>
>   So that is probably what I would do:
>    - new version for bitmap which has 2 bits per region and encodes 3 states
>    - bitmap granularity matches chunk size (by default)
>    - decouple region size of bitmap from region size for internal 'dirty'
>      accounting
>    - write to a 'state 3' region sets it to 'state 2' and kicks off resync
>    - 'discard' sets state to '3'.

Makes sense, but we'd rather avoid it and rely on the aforementioned
"willing" option.

Heinz

>
> NeilBrown
>
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Mike Snitzer Oct. 1, 2014, 1:32 p.m. UTC | #14
On Tue, Sep 30 2014 at 10:56pm -0400,
NeilBrown <neilb@suse.de> wrote:

> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
> 
> > 
> > Martin,
> > 
> > thanks for the good explanation of the state of the discard union.
> > Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> > 
> > I was planning to have a followup patch for dm-raid supporting a dm-raid 
> > table
> > line argument to prohibit discard passdown.
> > 
> > In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data 
> > support
> > related to RAID4/5/6, we need that in upstream together with the initial 
> > patch.
> > 
> > That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6 
> > table
> > lines to avoid possible data corruption but can be avoided on RAID1/10 
> > table lines,
> > because the latter are not suffering from any  discard_zeroes_data flaw.
> > 
> > 
> > Neil,
> > 
> > are you going to disable discards in RAID4/5/6 shortly
> > or rather go with your bitmap solution?
> 
> Can I just close my eyes and hope it goes away?
> 
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would  mean that people with good sensible hardware wouldn't be able to use
> it properly.
> 
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true.  Then it wouldn't be my problem any more.
> 
> Maybe I could do a loud warning
>   "Not enabling DISCARD on RAID5 because we cannot trust committees.
>    Set "md_mod.willing_to_risk_discard=Y" if your devices reads discarded
>    sectors as zeros"
> 
> and add an appropriate module parameter......

I had the same thought and would be happy with this too.  I was going to
update Heinz's patch to have the same default (off) but allow the user to
enable it:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=8e0cff64f35971135a6de7907bbc8c3a010aff8f

But I'd love to just follow what you arrive at with MD (using the same
name for the module param in dm-raid).

I'm open to getting this done now and included in 3.18 if you are.

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Andrey Kuzmin Oct. 1, 2014, 4 p.m. UTC | #15
On Wed, Oct 1, 2014 at 6:56 AM, NeilBrown <neilb@suse.de> wrote:
> On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> wrote:
>
>>
>> Martin,
>>
>> thanks for the good explanation of the state of the discard union.
>> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
>>
>> I was planning to have a followup patch for dm-raid supporting a dm-raid
>> table
>> line argument to prohibit discard passdown.
>>
>> In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data
>> support
>> related to RAID4/5/6, we need that in upstream together with the initial
>> patch.
>>
>> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
>> table
>> lines to avoid possible data corruption but can be avoided on RAID1/10
>> table lines,
>> because the latter are not suffering from any  discard_zeroes_data flaw.
>>
>>
>> Neil,
>>
>> are you going to disable discards in RAID4/5/6 shortly
>> or rather go with your bitmap solution?
>
> Can I just close my eyes and hope it goes away?
>
> The idea of a bitmap of uninitialised areas is not a short-term solution.
> But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> would  mean that people with good sensible hardware wouldn't be able to use
> it properly.
>
> I would really rather that discard_zeroes_data were only set on devices where
> it was actually true.  Then it wouldn't be my problem any more.
>
> Maybe I could do a loud warning
>   "Not enabling DISCARD on RAID5 because we cannot trust committees.
>    Set "md_mod.willing_to_risk_discard=Y" if your devices reads discarded
>    sectors as zeros"
>
> and add an appropriate module parameter......
>
> While we are on the topic, maybe I should write down my thoughts about the
> bitmap thing in case someone wants to contribute.
>
>   There are 3 states that a 'region' can be in:
>     1- known to be in-sync
>     2- possibly not in sync, but it should  be
>     3- probably not in sync, contains no valuable data.
>
>  A read from '3' should return zeroes.
>  A write to '3' should change the region to be '2'.  It could either
>     write zeros before allowing the write to start, or it could just start
>     a normal resync.
>
>  Here is a question:  if a region has been discarded, are we guaranteed that
>  reads are at least stable.  i.e. if I read twice will I definitely get the
>  same value?

Not sure with other specs, but an NVMe-compliant SSD that supports
discard (Dataset Management command with Deallocate attribute, in NVMe
parlance) is, per spec, required to be deterministic when deallocated
range is subsequently read. That's what the spec (1.1) says:

The value read from a deallocated LBA shall be deterministic;
specifically, the value returned by subsequent reads of that LBA shall
be the same until a write occurs to that LBA. The values read from a
deallocated LBA and its metadata (excluding protection information)
shall be all zeros, all ones, or the last data written to the
associated LBA and its metadata. The values read from an unwritten or
deallocated LBA’s protection information field shall be all ones
(indicating the protection information shall not be checked).

Regards,
Andrey

>
>  It would be simplest to store all the states in the one bitmap. i.e. have a
>  new version of the current bitmap (which distinguishes between '1' and '2')
>  which allows the 3rd state.  Probably just 2 bits per region, though we could
>  squeeze state for 20 regions into 32 bits if we really wanted :-)
>
>  Then we set the bitmap to all '3's when it is created and set regions to
>  '3' when discarding.
>
>  One problem with this approach is that it forces the same region size.
>  Regions for the current bitmap can conveniently be 10s of Megabytes.  For
>  state '3', we ideally want one region per stripe.
>  For 4TB drives with a 512K chunk size (and smaller is not uncommon) that is
>  8 million regions which is a megabyte of bits.
>
>  I guess that isn't all that much.  One problem with small regions for the
>  1/2 distinction is that we end up updating the bitmap more often, but
>  that could be avoided by setting multiple bits at one.
>  i.e. keep the internal counters with a larger granularity, and when a
>  counter (of outstanding writes) becomes non-zero, set quite a few bits in
>  the bitmap.
>
>  So that is probably what I would do:
>   - new version for bitmap which has 2 bits per region and encodes 3 states
>   - bitmap granularity matches chunk size (by default)
>   - decouple region size of bitmap from region size for internal 'dirty'
>     accounting
>   - write to a 'state 3' region sets it to 'state 2' and kicks off resync
>   - 'discard' sets state to '3'.
>
> NeilBrown
>
> --
> dm-devel mailing list
> dm-devel@redhat.com
> https://www.redhat.com/mailman/listinfo/dm-devel

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Jonathan Brassow Oct. 1, 2014, 6:57 p.m. UTC | #16
On Sep 30, 2014, at 9:56 PM, NeilBrown wrote:

> So that is probably what I would do:
>  - new version for bitmap which has 2 bits per region and encodes 3 states
>  - bitmap granularity matches chunk size (by default)
>  - decouple region size of bitmap from region size for internal 'dirty'
>    accounting
>  - write to a 'state 3' region sets it to 'state 2' and kicks off resync
>  - 'discard' sets state to '3'.

There are scenarios where we do not trust the bitmap and perform exhaustive searches (scrubbing).  When we do this, we assume that the bitmap has been correctly handled, but the data has been corrupted somehow.  When we scrub, we now could presumably use the bitmap to avoid those regions that have been discarded.  However, are there problems to be considered when the corruption is the other way around - i.e. when the bitmap/superblock are corrupted instead of the data area?  Do we consider problems like this unlikely because of the frequency of superblock writes?  (The bitrot that I am familiar with usually happens in areas that are not touched very often - especially if they exist near areas that /are/ touched very often.)

 brassow

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
NeilBrown Oct. 1, 2014, 11:15 p.m. UTC | #17
On Wed, 1 Oct 2014 20:00:45 +0400 Andrey Kuzmin <andrey.v.kuzmin@gmail.com>
wrote:

> On Wed, Oct 1, 2014 at 6:56 AM, NeilBrown <neilb@suse.de> wrote:
> > On Wed, 24 Sep 2014 13:02:28 +0200 Heinz Mauelshagen <heinzm@redhat.com>
> > wrote:
> >
> >>
> >> Martin,
> >>
> >> thanks for the good explanation of the state of the discard union.
> >> Do you have an ETA for the 'zeroout, deallocate' ... support you mentioned?
> >>
> >> I was planning to have a followup patch for dm-raid supporting a dm-raid
> >> table
> >> line argument to prohibit discard passdown.
> >>
> >> In lieu of the fuzzy field situation wrt SSD fw and discard_zeroes_data
> >> support
> >> related to RAID4/5/6, we need that in upstream together with the initial
> >> patch.
> >>
> >> That 'no_discard_passdown' table line can be added to dm-raid RAID4/5/6
> >> table
> >> lines to avoid possible data corruption but can be avoided on RAID1/10
> >> table lines,
> >> because the latter are not suffering from any  discard_zeroes_data flaw.
> >>
> >>
> >> Neil,
> >>
> >> are you going to disable discards in RAID4/5/6 shortly
> >> or rather go with your bitmap solution?
> >
> > Can I just close my eyes and hope it goes away?
> >
> > The idea of a bitmap of uninitialised areas is not a short-term solution.
> > But I'm not really keen on simply disabling discard for RAID4/5/6 either. It
> > would  mean that people with good sensible hardware wouldn't be able to use
> > it properly.
> >
> > I would really rather that discard_zeroes_data were only set on devices where
> > it was actually true.  Then it wouldn't be my problem any more.
> >
> > Maybe I could do a loud warning
> >   "Not enabling DISCARD on RAID5 because we cannot trust committees.
> >    Set "md_mod.willing_to_risk_discard=Y" if your devices reads discarded
> >    sectors as zeros"
> >
> > and add an appropriate module parameter......
> >
> > While we are on the topic, maybe I should write down my thoughts about the
> > bitmap thing in case someone wants to contribute.
> >
> >   There are 3 states that a 'region' can be in:
> >     1- known to be in-sync
> >     2- possibly not in sync, but it should  be
> >     3- probably not in sync, contains no valuable data.
> >
> >  A read from '3' should return zeroes.
> >  A write to '3' should change the region to be '2'.  It could either
> >     write zeros before allowing the write to start, or it could just start
> >     a normal resync.
> >
> >  Here is a question:  if a region has been discarded, are we guaranteed that
> >  reads are at least stable.  i.e. if I read twice will I definitely get the
> >  same value?
> 
> Not sure with other specs, but an NVMe-compliant SSD that supports
> discard (Dataset Management command with Deallocate attribute, in NVMe
> parlance) is, per spec, required to be deterministic when deallocated
> range is subsequently read. That's what the spec (1.1) says:
> 
> The value read from a deallocated LBA shall be deterministic;
> specifically, the value returned by subsequent reads of that LBA shall
> be the same until a write occurs to that LBA. The values read from a
> deallocated LBA and its metadata (excluding protection information)
> shall be all zeros, all ones, or the last data written to the
> associated LBA and its metadata. The values read from an unwritten or
> deallocated LBA’s protection information field shall be all ones
> (indicating the protection information shall not be checked).
> 

That's good to know - thanks.

NeilBrown
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
NeilBrown Oct. 1, 2014, 11:18 p.m. UTC | #18
On Wed, 1 Oct 2014 13:57:24 -0500 Brassow Jonathan <jbrassow@redhat.com>
wrote:

> 
> On Sep 30, 2014, at 9:56 PM, NeilBrown wrote:
> 
> > So that is probably what I would do:
> >  - new version for bitmap which has 2 bits per region and encodes 3 states
> >  - bitmap granularity matches chunk size (by default)
> >  - decouple region size of bitmap from region size for internal 'dirty'
> >    accounting
> >  - write to a 'state 3' region sets it to 'state 2' and kicks off resync
> >  - 'discard' sets state to '3'.
> 
> There are scenarios where we do not trust the bitmap and perform exhaustive searches (scrubbing).  When we do this, we assume that the bitmap has been correctly handled, but the data has been corrupted somehow.  When we scrub, we now could presumably use the bitmap to avoid those regions that have been discarded.  However, are there problems to be considered when the corruption is the other way around - i.e. when the bitmap/superblock are corrupted instead of the data area?  Do we consider problems like this unlikely because of the frequency of superblock writes?  (The bitrot that I am familiar with usually happens in areas that are not touched very often - especially if they exist near areas that /are/ touched very often.)
> 
>  brassow

Good question....

I suspect the frequent writes that you mention would reduce the chance of
bitrot - and would make it quickly disappear if it did happen.

Maybe we could compare bitmaps across all devices as the array is assembled
and take some action if there are differences(??).

NeilBrown
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Martin K. Petersen Oct. 3, 2014, 1:09 a.m. UTC | #19
>>>>> "Neil" == NeilBrown  <neilb@suse.de> writes:

Neil> I would really rather that discard_zeroes_data were only set on
Neil> devices where it was actually true.  Then it wouldn't be my
Neil> problem any more.

Yeah. It's too late for 3.18 but I'll try to get a whitelist going for
3.19.

Neil>  Here is a question: if a region has been discarded, are we
Neil>  guaranteed that reads are at least stable.  i.e. if I read twice
Neil>  will I definitely get the same value?

If discard_zeroes_data=1, then yes.
Martin K. Petersen Oct. 3, 2014, 1:12 a.m. UTC | #20
>>>>> "Heinz" == Heinz Mauelshagen <heinzm@redhat.com> writes:

Heinz> BTW: we're referring to SSD discard_zeroes_data issues only so
Heinz> far it seems?  Can we be sure that SCSI unmap with the TPRZ bit
Heinz> set will always return zeroes on reads for arbitrary storage
Heinz> devices?

Only if we used WRITE SAME to discard.

Patch

diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 4880b69..4edb2c7 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1,6 +1,6 @@ 
 /*
  * Copyright (C) 2010-2011 Neil Brown
- * Copyright (C) 2010-2011 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2010-2014 Red Hat, Inc. All rights reserved.
  *
  * This file is released under the GPL.
  */
@@ -1150,6 +1150,43 @@  static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
 }
 
 /*
+ * Enable/disable discard support on RAID set depending
+ * on RAID level and discard properties of underlying devices
+ * (i.e. the legs of the set).
+ */
+static void raid_check_discard(struct dm_target *ti, struct raid_set *rs)
+{
+	int i;
+	bool discard_supported = true;
+	/* RAID level 4,5,6 request discard_zeroes_data for data integrity! */
+	bool raid1_or_10 = rs->md.level == 1 || rs->md.level == 10;
+	bool zeroes_data_mandatory = !raid1_or_10;
+
+	for (i = 0; i < rs->md.raid_disks; i++) {
+		struct request_queue *q = bdev_get_queue(rs->dev[i].rdev.bdev);
+
+		if (!q ||
+		    !blk_queue_discard(q) ||
+		    (zeroes_data_mandatory && !q->limits.discard_zeroes_data)) {
+			discard_supported = false;
+			break;
+		}
+	}
+
+	if (discard_supported) {
+		ti->discards_supported = true;
+		/*
+		 * raid1 and raid10 personalities require bio splitting,
+		 * raid4/5/6 don't and process large discard bios properly.
+		 */
+		ti->split_discard_bios = raid1_or_10;
+		ti->num_discard_bios = 1;
+
+	} else
+		ti->discards_supported = false;
+}
+
+/*
  * Construct a RAID4/5/6 mapping:
  * Args:
  *	<raid_type> <#raid_params> <raid_params>		\
@@ -1231,6 +1268,11 @@  static int raid_ctr(struct dm_target *ti, unsigned argc, char **argv)
 	ti->private = rs;
 	ti->num_flush_bios = 1;
 
+	/*
+	 * Disable/enable discard support on RAID set.
+	 */
+	raid_check_discard(ti, rs);
+
 	mutex_lock(&rs->md.reconfig_mutex);
 	ret = md_run(&rs->md);
 	rs->md.in_sync = 0; /* Assume already marked dirty */