[3/3] Btrfs: make raid6 rebuild retry more

Message ID 20171204224037.7556-4-bo.li.liu@oracle.com (mailing list archive)
State New, archived

Commit Message

Liu Bo Dec. 4, 2017, 10:40 p.m. UTC
There is a scenario in which the rebuild process can fail to return
good content: suppose that all disks can be read without problems, but
the content that was read out doesn't match its checksum.  Currently,
for raid6, btrfs retries at most twice:

- the 1st retry rebuilds from all other stripes, which eventually ends
  up being a raid5 xor rebuild,
- if the 1st retry fails, the 2nd retry deliberately fails parity P so
  that a raid6-style rebuild is done.

However, chances are that another non-parity stripe is also corrupted,
in which case the above retries cannot return correct content, and
users will see this as data loss.  More seriously, if the loss happens
on some important internal btree roots, the filesystem may refuse to
mount.

This patch extends btrfs to do more retries, where each retry fails
only one stripe.  Since raid6 can tolerate 2 disk failures, as long as
there is at most one more failure besides the one we're recovering
from, this always works.

The worst case is to retry as many times as there are raid6 disks, but
given that such a scenario is really rare in practice, this is still
acceptable.
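
To make the mapping concrete, here is an illustrative sketch (not part
of the patch itself) of how a given mirror_num translates into the
stripe that gets deliberately failed.  It assumes a 6-device raid6
stripe (4 data stripes plus P and Q, i.e. real_stripes == 6), and
pick_failb() is a hypothetical standalone helper written only for this
illustration:

/*
 * Illustration only, not part of the patch.
 *
 * For real_stripes == 6 (data stripes 0-3, P == 4, Q == 5):
 *   mirror_num == 1 -> plain read, no rebuild
 *   mirror_num == 2 -> rebuild from the other available stripes
 *   mirror_num == 3 -> failb = 6 - 2 = 4, i.e. fail P, rebuild via Q
 *   mirror_num == 4 -> failb = 6 - 3 = 3, i.e. fail a data stripe
 *   ...
 *   mirror_num == 6 -> failb = 6 - 5 = 1
 *
 * If failb collides with (or falls below) faila, i.e. the stripe that
 * is being recovered, failb is shifted down by one so that a different
 * stripe is failed on every retry.
 */
static int pick_failb(int real_stripes, int mirror_num, int faila)
{
	int failb = real_stripes - (mirror_num - 1);

	if (failb <= faila)
		failb--;
	return failb;
}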

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
---
 fs/btrfs/raid56.c  | 18 ++++++++++++++----
 fs/btrfs/volumes.c |  9 ++++++++-
 2 files changed, 22 insertions(+), 5 deletions(-)

Comments

Qu Wenruo Dec. 5, 2017, 8:07 a.m. UTC | #1
On 2017年12月05日 06:40, Liu Bo wrote:
> There is a scenario that can end up with rebuild process failing to
> return good content, i.e.
> suppose that all disks can be read without problems and if the content
> that was read out doesn't match its checksum, currently for raid6
> btrfs at most retries twice,
> 
> - the 1st retry is to rebuild with all other stripes, it'll eventually
>   be a raid5 xor rebuild,
> - if the 1st fails, the 2nd retry will deliberately fail parity p so
>   that it will do raid6 style rebuild,
> 
> however, the chances are that another non-parity stripe content also
> has something corrupted, so that the above retries are not able to
> return correct content, and users will think of this as data loss.
> More seriouly, if the loss happens on some important internal btree
> roots, it could refuse to mount.
> 
> This extends btrfs to do more retries and each retry fails only one
> stripe.  Since raid6 can tolerate 2 disk failures, if there is one
> more failure besides the failure on which we're recovering, this can
> always work.

This should be the correct behavior for RAID6: try all possible
combinations until they are exhausted or the correct data can be
recovered.

> 
> The worst case is to retry as many times as the number of raid6 disks,
> but given the fact that such a scenario is really rare in practice,
> it's still acceptable.

And even if we retry that many times, I don't think it will be a big
problem.  Since most of this happens purely in memory, it should be
fast enough that no obvious impact can be observed.

Apart from a small nitpick inlined below, the idea looks pretty good
to me.

Reviewed-by: Qu Wenruo <wqu@suse.com>

> 
> Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
> ---
>  fs/btrfs/raid56.c  | 18 ++++++++++++++----
>  fs/btrfs/volumes.c |  9 ++++++++-
>  2 files changed, 22 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 8d09535..064d5bc 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
>  	}
>  
>  	/*
> -	 * reconstruct from the q stripe if they are
> -	 * asking for mirror 3
> +	 * Loop retry:
> +	 * for 'mirror == 2', reconstruct from all other stripes.

What about using a macro to make the reassemble method more human
readable?

And for the mirror == 2 case, by "rebuild from all" do you mean
rebuilding from all remaining data stripes + P?  The word "all" here is
a little confusing.

Thanks,
Qu

> +	 * for 'mirror_num > 2', select a stripe to fail on every retry.
>  	 */
> -	if (mirror_num == 3)
> -		rbio->failb = rbio->real_stripes - 2;
> +	if (mirror_num > 2) {
> +		/*
> +		 * 'mirror == 3' is to fail the p stripe and
> +		 * reconstruct from the q stripe.  'mirror > 3' is to
> +		 * fail a data stripe and reconstruct from p+q stripe.
> +		 */
> +		rbio->failb = rbio->real_stripes - (mirror_num - 1);
> +		ASSERT(rbio->failb > 0);
> +		if (rbio->failb <= rbio->faila)
> +			rbio->failb--;
> +	}
>  
>  	ret = lock_stripe_add(rbio);
>  
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index b397375..95371f8 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -5094,7 +5094,14 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len)
>  	else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
>  		ret = 2;
>  	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
> -		ret = 3;
> +		/*
> +		 * There could be two corrupted data stripes, we need
> +		 * to loop retry in order to rebuild the correct data.
> +		 * 
> +		 * Fail a stripe at a time on every retry except the
> +		 * stripe under reconstruction.
> +		 */
> +		ret = map->num_stripes;
>  	else
>  		ret = 1;
>  	free_extent_map(em);
>
Liu Bo Dec. 5, 2017, 6:04 p.m. UTC | #3
On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
> 
> 
> On 2017年12月05日 06:40, Liu Bo wrote:
> > There is a scenario that can end up with rebuild process failing to
> > return good content, i.e.
> > suppose that all disks can be read without problems and if the content
> > that was read out doesn't match its checksum, currently for raid6
> > btrfs at most retries twice,
> > 
> > - the 1st retry is to rebuild with all other stripes, it'll eventually
> >   be a raid5 xor rebuild,
> > - if the 1st fails, the 2nd retry will deliberately fail parity p so
> >   that it will do raid6 style rebuild,
> > 
> > however, the chances are that another non-parity stripe content also
> > has something corrupted, so that the above retries are not able to
> > return correct content, and users will think of this as data loss.
> > More seriouly, if the loss happens on some important internal btree
> > roots, it could refuse to mount.
> > 
> > This extends btrfs to do more retries and each retry fails only one
> > stripe.  Since raid6 can tolerate 2 disk failures, if there is one
> > more failure besides the failure on which we're recovering, this can
> > always work.
> 
> This should be the correct behavior for RAID6, try all possible
> combination until all combination is exhausted or correct data can be
> recovered.
> 
> > 
> > The worst case is to retry as many times as the number of raid6 disks,
> > but given the fact that such a scenario is really rare in practice,
> > it's still acceptable.
> 
> And even we tried that much times, I don't think it will be a big problem.
> Since most of the that happens purely in memory, it should be so fast
> that no obvious impact can be observed.
>

It's basically a while loop, so it may cause some delay or even a
hang, but anyway such a case is rare.

> While with some small nitpick inlined below, the idea looks pretty good
> to me.
> 
> Reviewed-by: Qu Wenruo <wqu@suse.com>
> 
> > 
> > Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
> > ---
> >  fs/btrfs/raid56.c  | 18 ++++++++++++++----
> >  fs/btrfs/volumes.c |  9 ++++++++-
> >  2 files changed, 22 insertions(+), 5 deletions(-)
> > 
> > diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> > index 8d09535..064d5bc 100644
> > --- a/fs/btrfs/raid56.c
> > +++ b/fs/btrfs/raid56.c
> > @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
> >  	}
> >  
> >  	/*
> > -	 * reconstruct from the q stripe if they are
> > -	 * asking for mirror 3
> > +	 * Loop retry:
> > +	 * for 'mirror == 2', reconstruct from all other stripes.
> 
> What about using macro to makes the reassemble method more human readable?
> 
> And for mirror == 2 case, "rebuild from all" do you mean rebuild using
> all remaining data stripe + P? The word "all" here is a little confusing.
>

Thank you for the comments.

It depends: if all other stripes are good to read, then it does
'data+p', which is a raid5 xor rebuild; if some disks also fail, then
it may do 'data+p+q' or 'data+q'.

Is it better to say "for mirror == 2, reconstruct from other available
stripes"?

Thanks,

-liubo

> Thanks,
> Qu
> 
> > +	 * for 'mirror_num > 2', select a stripe to fail on every retry.
> >  	 */
> > -	if (mirror_num == 3)
> > -		rbio->failb = rbio->real_stripes - 2;
> > +	if (mirror_num > 2) {
> > +		/*
> > +		 * 'mirror == 3' is to fail the p stripe and
> > +		 * reconstruct from the q stripe.  'mirror > 3' is to
> > +		 * fail a data stripe and reconstruct from p+q stripe.
> > +		 */
> > +		rbio->failb = rbio->real_stripes - (mirror_num - 1);
> > +		ASSERT(rbio->failb > 0);
> > +		if (rbio->failb <= rbio->faila)
> > +			rbio->failb--;
> > +	}
> >  
> >  	ret = lock_stripe_add(rbio);
> >  
> > diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> > index b397375..95371f8 100644
> > --- a/fs/btrfs/volumes.c
> > +++ b/fs/btrfs/volumes.c
> > @@ -5094,7 +5094,14 @@ int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len)
> >  	else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
> >  		ret = 2;
> >  	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
> > -		ret = 3;
> > +		/*
> > +		 * There could be two corrupted data stripes, we need
> > +		 * to loop retry in order to rebuild the correct data.
> > +		 * 
> > +		 * Fail a stripe at a time on every retry except the
> > +		 * stripe under reconstruction.
> > +		 */
> > +		ret = map->num_stripes;
> >  	else
> >  		ret = 1;
> >  	free_extent_map(em);
> > 
> 


David Sterba Dec. 5, 2017, 6:09 p.m. UTC | #4
On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
> > @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
> >  	}
> >  
> >  	/*
> > -	 * reconstruct from the q stripe if they are
> > -	 * asking for mirror 3
> > +	 * Loop retry:
> > +	 * for 'mirror == 2', reconstruct from all other stripes.
> 
> What about using macro to makes the reassemble method more human readable?

Yeah, that's definitely needed and should be based on
BTRFS_MAX_MIRRORS, not just hardcoded to 3.
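
A rough sketch of one possible shape of that suggestion (hypothetical
names, purely for illustration; this is not what the patch does):

/* Hypothetical naming sketch, not from the patch. */
#define BTRFS_RAID56_REBUILD_OTHERS	2	/* rebuild from the other available stripes */
#define BTRFS_RAID56_REBUILD_NO_P	3	/* fail P, reconstruct via Q */

	if (mirror_num >= BTRFS_RAID56_REBUILD_NO_P) {
		/* same computation as in the patch, only with named constants */
		rbio->failb = rbio->real_stripes - (mirror_num - 1);
		ASSERT(rbio->failb > 0);
		if (rbio->failb <= rbio->faila)
			rbio->failb--;
	}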
David Sterba Dec. 5, 2017, 7:29 p.m. UTC | #5
On Tue, Dec 05, 2017 at 11:04:03AM -0700, Liu Bo wrote:
> On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
> > 
> > 
> > On 2017年12月05日 06:40, Liu Bo wrote:
> > > There is a scenario that can end up with rebuild process failing to
> > > return good content, i.e.
> > > suppose that all disks can be read without problems and if the content
> > > that was read out doesn't match its checksum, currently for raid6
> > > btrfs at most retries twice,
> > > 
> > > - the 1st retry is to rebuild with all other stripes, it'll eventually
> > >   be a raid5 xor rebuild,
> > > - if the 1st fails, the 2nd retry will deliberately fail parity p so
> > >   that it will do raid6 style rebuild,
> > > 
> > > however, the chances are that another non-parity stripe content also
> > > has something corrupted, so that the above retries are not able to
> > > return correct content, and users will think of this as data loss.
> > > More seriouly, if the loss happens on some important internal btree
> > > roots, it could refuse to mount.
> > > 
> > > This extends btrfs to do more retries and each retry fails only one
> > > stripe.  Since raid6 can tolerate 2 disk failures, if there is one
> > > more failure besides the failure on which we're recovering, this can
> > > always work.
> > 
> > This should be the correct behavior for RAID6, try all possible
> > combination until all combination is exhausted or correct data can be
> > recovered.
> > 
> > > 
> > > The worst case is to retry as many times as the number of raid6 disks,
> > > but given the fact that such a scenario is really rare in practice,
> > > it's still acceptable.
> > 
> > And even we tried that much times, I don't think it will be a big problem.
> > Since most of the that happens purely in memory, it should be so fast
> > that no obvious impact can be observed.
> >
> 
> It's basically a while loop, so it may cause some delay/hang, anyway,
> it's rare though.
> 
> > While with some small nitpick inlined below, the idea looks pretty good
> > to me.
> > 
> > Reviewed-by: Qu Wenruo <wqu@suse.com>
> > 
> > > 
> > > Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
> > > ---
> > >  fs/btrfs/raid56.c  | 18 ++++++++++++++----
> > >  fs/btrfs/volumes.c |  9 ++++++++-
> > >  2 files changed, 22 insertions(+), 5 deletions(-)
> > > 
> > > diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> > > index 8d09535..064d5bc 100644
> > > --- a/fs/btrfs/raid56.c
> > > +++ b/fs/btrfs/raid56.c
> > > @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
> > >  	}
> > >  
> > >  	/*
> > > -	 * reconstruct from the q stripe if they are
> > > -	 * asking for mirror 3
> > > +	 * Loop retry:
> > > +	 * for 'mirror == 2', reconstruct from all other stripes.
> > 
> > What about using macro to makes the reassemble method more human readable?
> > 
> > And for mirror == 2 case, "rebuild from all" do you mean rebuild using
> > all remaining data stripe + P? The word "all" here is a little confusing.
> >
> 
> Thank you for the comments.
> 
> It depends, if all other stripes are good to read, then it'd do
> 'data+p' which is raid5 xor rebuild, if some disks also fail, then
> it'd may do 'data+p+q' or 'data+q'.
> 
> Is it better to say "for mirror == 2, reconstruct from other available
> stripes"?

Yes it is, you can also add the examples from the previous paragraph.
Liu Bo Dec. 5, 2017, 10:55 p.m. UTC | #6
On Tue, Dec 05, 2017 at 07:09:25PM +0100, David Sterba wrote:
> On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
> > > @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
> > >  	}
> > >  
> > >  	/*
> > > -	 * reconstruct from the q stripe if they are
> > > -	 * asking for mirror 3
> > > +	 * Loop retry:
> > > +	 * for 'mirror == 2', reconstruct from all other stripes.
> > 
> > What about using macro to makes the reassemble method more human readable?
> 
> Yeah, that's definetelly needed and should be based on
> BTRFS_MAX_MIRRORS, not just hardcoded to 3.

OK.

In the case of raid5/6, BTRFS_MAX_MIRRORS is a misused name; it's
really a raid1/10 concept.  Either BTRFS_RAID56_FULL_REBUILD or
BTRFS_RAID56_FULL_CHK looks better to me; which one do you guys like?

Thanks,

-liubo
Qu Wenruo Dec. 6, 2017, 12:11 a.m. UTC | #7
On 2017年12月06日 06:55, Liu Bo wrote:
> On Tue, Dec 05, 2017 at 07:09:25PM +0100, David Sterba wrote:
>> On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
>>>> @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
>>>>  	}
>>>>  
>>>>  	/*
>>>> -	 * reconstruct from the q stripe if they are
>>>> -	 * asking for mirror 3
>>>> +	 * Loop retry:
>>>> +	 * for 'mirror == 2', reconstruct from all other stripes.
>>>
>>> What about using macro to makes the reassemble method more human readable?
>>
>> Yeah, that's definetelly needed and should be based on
>> BTRFS_MAX_MIRRORS, not just hardcoded to 3.
> 
> OK.
> 
> In case of raid5/6, BTRFS_MAX_MIRRORS is an abused name, it's more a
> raid1/10 concept, either BTRFS_RAID56_FULL_REBUILD or
> BTRFS_RAID56_FULL_CHK is better to me, which one do you guys like?

For the mirror > 2 case, mirror_num is no longer a single indicator
but a ranged iterator for the later rebuild retries.

Something like set_raid_fail_from_mirror_num() seems better to me.
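
For illustration, such a helper might look roughly like the sketch
below (hypothetical only; the follow-up below argues for keeping the
logic open-coded instead):

/* Hypothetical helper, sketching the suggestion above; not merged. */
static void set_raid_fail_from_mirror_num(struct btrfs_raid_bio *rbio,
					  int mirror_num)
{
	/* mirror 3 fails P; mirror > 3 fails a data stripe. */
	rbio->failb = rbio->real_stripes - (mirror_num - 1);
	ASSERT(rbio->failb > 0);
	if (rbio->failb <= rbio->faila)
		rbio->failb--;
}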

Thanks,
Qu

> 
> Thanks,
> 
> -liubo
>
Liu Bo Dec. 7, 2017, 12:26 a.m. UTC | #8
On Wed, Dec 06, 2017 at 08:11:30AM +0800, Qu Wenruo wrote:
> 
> 
> On 2017年12月06日 06:55, Liu Bo wrote:
> > On Tue, Dec 05, 2017 at 07:09:25PM +0100, David Sterba wrote:
> >> On Tue, Dec 05, 2017 at 04:07:35PM +0800, Qu Wenruo wrote:
> >>>> @@ -2166,11 +2166,21 @@ int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
> >>>>  	}
> >>>>  
> >>>>  	/*
> >>>> -	 * reconstruct from the q stripe if they are
> >>>> -	 * asking for mirror 3
> >>>> +	 * Loop retry:
> >>>> +	 * for 'mirror == 2', reconstruct from all other stripes.
> >>>
> >>> What about using macro to makes the reassemble method more human readable?
> >>
> >> Yeah, that's definetelly needed and should be based on
> >> BTRFS_MAX_MIRRORS, not just hardcoded to 3.
> > 
> > OK.
> > 
> > In case of raid5/6, BTRFS_MAX_MIRRORS is an abused name, it's more a
> > raid1/10 concept, either BTRFS_RAID56_FULL_REBUILD or
> > BTRFS_RAID56_FULL_CHK is better to me, which one do you guys like?
> 
> For mirror > 2 case, the mirror_num is no longer a single indicator, but
> a ranged iterator for later rebuild retries.
> 
> Something like set_raid_fail_from_mirror_num() seems better to me.

I feel like having the logic open-coded, plus the necessary comments,
is probably better, as we can see directly which stripe failb is.

Those who are new to this raid6 retry logic are likely to be confused
by a helper function like set_raid_fail_from_mirror_num() and would
have to go look at the helper to understand the retry logic.

Thanks,

-liubo
Patch

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 8d09535..064d5bc 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -2166,11 +2166,21 @@  int raid56_parity_recover(struct btrfs_fs_info *fs_info, struct bio *bio,
 	}
 
 	/*
-	 * reconstruct from the q stripe if they are
-	 * asking for mirror 3
+	 * Loop retry:
+	 * for 'mirror == 2', reconstruct from all other stripes.
+	 * for 'mirror_num > 2', select a stripe to fail on every retry.
 	 */
-	if (mirror_num == 3)
-		rbio->failb = rbio->real_stripes - 2;
+	if (mirror_num > 2) {
+		/*
+		 * 'mirror == 3' is to fail the p stripe and
+		 * reconstruct from the q stripe.  'mirror > 3' is to
+		 * fail a data stripe and reconstruct from p+q stripe.
+		 */
+		rbio->failb = rbio->real_stripes - (mirror_num - 1);
+		ASSERT(rbio->failb > 0);
+		if (rbio->failb <= rbio->faila)
+			rbio->failb--;
+	}
 
 	ret = lock_stripe_add(rbio);
 
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index b397375..95371f8 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5094,7 +5094,14 @@  int btrfs_num_copies(struct btrfs_fs_info *fs_info, u64 logical, u64 len)
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
 		ret = 2;
 	else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
-		ret = 3;
+		/*
+		 * There could be two corrupted data stripes, we need
+		 * to loop retry in order to rebuild the correct data.
+		 * 
+		 * Fail a stripe at a time on every retry except the
+		 * stripe under reconstruction.
+		 */
+		ret = map->num_stripes;
 	else
 		ret = 1;
 	free_extent_map(em);
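
For context on why btrfs_num_copies() now returns map->num_stripes for
raid6: the read-repair path retries a failed read once per mirror
number, up to the value btrfs_num_copies() reports, which is what lets
the rebuild deliberately fail a different stripe on each attempt.  A
simplified, conceptual sketch of such a retry loop (illustration only,
not actual kernel code; read_block() and csum_matches() are
hypothetical helpers):

/*
 * Conceptual retry loop (illustration only).  Mirror 1 is the plain
 * read; mirrors 2..num_copies ask the raid56 layer to rebuild, each
 * time deliberately failing a different stripe.
 */
static int read_with_retries(struct btrfs_fs_info *fs_info, u64 logical,
			     u64 len, void *buf)
{
	int num_copies = btrfs_num_copies(fs_info, logical, len);
	int mirror;

	for (mirror = 1; mirror <= num_copies; mirror++) {
		if (read_block(fs_info, logical, len, buf, mirror) == 0 &&
		    csum_matches(buf, len))
			return 0;	/* good content recovered */
	}
	return -EIO;			/* all combinations exhausted */
}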