
[v3,2/5] btrfs: scrub: Fix RAID56 recovery race condition

Message ID 20170329013322.1323-3-quwenruo@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

Qu Wenruo March 29, 2017, 1:33 a.m. UTC
When scrubbing a RAID5 filesystem that has recoverable data corruption
(only one data stripe is corrupted), scrub sometimes reports more csum
errors than expected, and sometimes even reports unrecoverable errors.

The problem can be easily reproduced by the following steps:
1) Create a btrfs filesystem with the RAID5 data profile on 3 devices
2) Mount it with nospace_cache or space_cache=v2
   to avoid extra data space usage
3) Create a 128K file, sync the fs and unmount it
   Now the 128K file lies at the beginning of the data chunk
4) Locate the physical bytenr of the data chunk on dev3
   (dev3 holds the 1st data stripe)
5) Corrupt the first 64K of the data chunk stripe on dev3
6) Mount the fs and scrub it

The correct csum error count should be 16 (assuming x86_64, where the
64K of corruption covers 16 4K sectors). A larger csum error count is
reported with roughly a 1/3 chance, and an unrecoverable error with
roughly a 1/10 chance.

The root cause of the problem is that the RAID5/6 recovery code has a
race condition, due to the fact that a full scrub is initiated per
device.

For other mirror based profiles, each mirror is independent of the
others, so the race doesn't cause any big problem.

For example:
     Corrupted          |       Correct          |      Correct        |
   Scrub dev3 (D1)      |    Scrub dev2 (D2)     |    Scrub dev1 (P)   |
------------------------------------------------------------------------
Read out D1             |Read out D2             |Read full stripe     |
Check csum              |Check csum              |Check parity         |
Csum mismatch           |Csum match, continue    |Parity mismatch      |
handle_errored_block    |                        |handle_errored_block |
 Read out full stripe   |                        | Read out full stripe|
 D1 csum error(err++)   |                        | D1 csum error(err++)|
 Recover D1             |                        | Recover D1          |

So D1's csum error is accounted twice, just because
handle_errored_block() doesn't have enough protection, and the race can
happen.

In an even worse case, D1's recovery code may be re-writing D1/D2/P
while P's recovery code is still reading out the full stripe, which can
lead to an unrecoverable error.

This patch uses the previously introduced lock_full_stripe() and
unlock_full_stripe() to protect the whole scrub_handle_errored_block()
function for RAID56 recovery, so neither extra csum errors nor
unrecoverable errors are reported.
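
For reference, a minimal sketch of the idea behind lock_full_stripe() /
unlock_full_stripe() from patch 1/5 (not included here). The helpers
get_full_stripe_start() and insert_or_get_full_stripe_lock() below are
illustrative names only and may not match the actual patch:

struct full_stripe_lock {
	struct rb_node node;
	u64 logical;		/* full stripe start */
	u64 refs;
	struct mutex mutex;
};

/*
 * Illustrative only -- one lock entry per full stripe start, kept in a
 * per-block-group tree, so that concurrent recovery of the same full
 * stripe is serialized while other profiles are left untouched.
 */
static int lock_full_stripe(struct btrfs_fs_info *fs_info, u64 logical)
{
	struct btrfs_block_group_cache *bg;
	struct full_stripe_lock *fstripe_lock;
	u64 fstripe_start;
	int ret = 0;

	bg = btrfs_lookup_block_group(fs_info, logical);
	if (!bg)
		return -ENOENT;

	/* Nothing to do for non-RAID56 block groups */
	if (!(bg->flags & BTRFS_BLOCK_GROUP_RAID56_MASK))
		goto out;

	fstripe_start = get_full_stripe_start(bg, logical);
	fstripe_lock = insert_or_get_full_stripe_lock(bg, fstripe_start);
	if (IS_ERR(fstripe_lock)) {
		ret = PTR_ERR(fstripe_lock);
		goto out;
	}
	/* Serialize all recovery work on this full stripe */
	mutex_lock(&fstripe_lock->mutex);
out:
	btrfs_put_block_group(bg);
	return ret;
}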

Reported-by: Goffredo Baroncelli <kreijack@libero.it>
Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
 fs/btrfs/scrub.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

Liu Bo March 30, 2017, 12:51 a.m. UTC | #1
On Wed, Mar 29, 2017 at 09:33:19AM +0800, Qu Wenruo wrote:
[...]
> 
> Reported-by: Goffredo Baroncelli <kreijack@libero.it>
> Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
> ---
>  fs/btrfs/scrub.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index ab33b9a8aac2..f92d2512f4f3 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -1129,6 +1129,17 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  	have_csum = sblock_to_check->pagev[0]->have_csum;
>  	dev = sblock_to_check->pagev[0]->dev;
>  
> +	/*
> +	 * For RAID5/6, a race can happen between the scrub threads of
> +	 * different devices: for data corruption, the parity and the
> +	 * data scrub threads will both try to recover the data.
> +	 * Such a race can lead to doubly accounted csum errors, or
> +	 * even unrecoverable errors.
> +	 */
> +	ret = lock_full_stripe(fs_info, logical);
> +	if (ret < 0)
> +		return ret;
> +

Firstly, sctx->stat needs to be set with errors before returning errors.

Secondly, I think the critical section starts right before re-checking
the failed mirror, doesn't it?  If so, we could benefit from
sblock_bad->page, since page->recover can tell whether this is a raid56
profile, so that we may save a search for the block group.
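
(A rough sketch of that idea, assuming the recover info carries the
chunk type via bbio->map_type as the parity scrub path does; the helper
name is made up:)

/*
 * Illustrative helper: once sblock_bad is set up, its first page's
 * recover info can tell whether the block sits on a RAID56 chunk, so
 * the block group lookup could be skipped for other profiles.
 */
static inline int sblock_is_on_raid56(struct scrub_block *sblock)
{
	struct scrub_page *spage = sblock->pagev[0];

	return spage->recover &&
	       (spage->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK);
}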

>  	if (sctx->is_dev_replace && !is_metadata && !have_csum) {
>  		sblocks_for_recheck = NULL;
>  		goto nodatasum_case;
> @@ -1463,6 +1474,9 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>  		kfree(sblocks_for_recheck);
>  	}
>  
> +	ret = unlock_full_stripe(fs_info, logical);
> +	if (ret < 0)
> +		return ret;

It seems that the callers of scrub_handle_errored_block() don't care
about @ret.

And could you please set a 'locked' flag after taking the lock
successfully?  Otherwise every raid profile has to search the block
group for the raid flag.

Thanks,

-liubo
>  	return 0;
>  }
>  
> -- 
> 2.12.1
> 
> 
> 
Qu Wenruo March 30, 2017, 3:24 a.m. UTC | #2
At 03/30/2017 08:51 AM, Liu Bo wrote:
> On Wed, Mar 29, 2017 at 09:33:19AM +0800, Qu Wenruo wrote:
> [...]
>>
>> Reported-by: Goffredo Baroncelli <kreijack@libero.it>
>> Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
>> ---
>>  fs/btrfs/scrub.c | 14 ++++++++++++++
>>  1 file changed, 14 insertions(+)
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index ab33b9a8aac2..f92d2512f4f3 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -1129,6 +1129,17 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>  	have_csum = sblock_to_check->pagev[0]->have_csum;
>>  	dev = sblock_to_check->pagev[0]->dev;
>>
>> +	/*
>> +	 * For RAID5/6, a race can happen between the scrub threads of
>> +	 * different devices: for data corruption, the parity and the
>> +	 * data scrub threads will both try to recover the data.
>> +	 * Such a race can lead to doubly accounted csum errors, or
>> +	 * even unrecoverable errors.
>> +	 */
>> +	ret = lock_full_stripe(fs_info, logical);
>> +	if (ret < 0)
>> +		return ret;
>> +
>
> Firstly, sctx->stat needs to be set with errors before returning errors.

Right.

I'll update malloc_errors for the ret == -ENOMEM case and
uncorrectable_errors for the ret == -ENOENT case, e.g. along the lines
of the snippet below.
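
(A sketch of what that could look like at the lock_full_stripe() call
site in scrub_handle_errored_block(); the exact mapping of errno to
counter is only a suggestion:)

	ret = lock_full_stripe(fs_info, logical);
	if (ret < 0) {
		spin_lock(&sctx->stat_lock);
		if (ret == -ENOMEM)
			sctx->stat.malloc_errors++;
		else	/* e.g. -ENOENT: block group not found */
			sctx->stat.uncorrectable_errors++;
		spin_unlock(&sctx->stat_lock);
		return ret;
	}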

>
> Secondly, I think the critical section starts right before re-checking
> the failed mirror, doesn't it?  If so, we could benefit from
> sblock_bad->page, since page->recover can tell whether this is a raid56
> profile, so that we may save a search for the block group.

It's true we can save a block group search, but that relies on
bbio->flags.

I prefer to keep lock/unlock_full_stripe() as independent as possible,
so such a modification is not favoured.


That said, I'm looking forward to a better scrub structure which would
allow us to get the block group cache more easily.
If we could get the bg_cache more easily in this context, then
lock/unlock_full_stripe() would have no need to search for the bg_cache
by themselves.

But for now, I prefer to trade a little performance for more
independent code.

>
>>  	if (sctx->is_dev_replace && !is_metadata && !have_csum) {
>>  		sblocks_for_recheck = NULL;
>>  		goto nodatasum_case;
>> @@ -1463,6 +1474,9 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
>>  		kfree(sblocks_for_recheck);
>>  	}
>>
>> +	ret = unlock_full_stripe(fs_info, logical);
>> +	if (ret < 0)
>> +		return ret;
>
> It seems that the callers of scrub_handle_errored_block() don't care
> about @ret.
>
> And could you please set a 'locked' flag after taking the lock
> successfully?  Otherwise every raid profile has to search the block
> group for the raid flag.

I'm OK with introducing a bool as a new parameter for
lock/unlock_full_stripe() as a shortcut to avoid the bg search at
unlock (see the sketch below).

But the real fix would be a scrub structure cleanup that lets us access
the bg_cache without hassle.
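
(Something along these lines; the parameter and flag names are just for
illustration, and the flag could equally be passed to
unlock_full_stripe() itself:)

	bool full_stripe_locked = false;

	/*
	 * Hypothetical signature: only takes the lock (and sets the flag)
	 * when the block group is RAID56, so the unlock path can skip the
	 * block group search entirely for other profiles.
	 */
	ret = lock_full_stripe(fs_info, logical, &full_stripe_locked);
	if (ret < 0)
		return ret;	/* after updating sctx->stat, as above */

	/* ... recovery work ... */

	if (full_stripe_locked)
		unlock_full_stripe(fs_info, logical);
	return 0;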

Thanks for the review,
Qu

>
> Thanks,
>
> -liubo
>>  	return 0;
>>  }
>>
>> --
>> 2.12.1

Patch

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index ab33b9a8aac2..f92d2512f4f3 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1129,6 +1129,17 @@  static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 	have_csum = sblock_to_check->pagev[0]->have_csum;
 	dev = sblock_to_check->pagev[0]->dev;
 
+	/*
+	 * For RAID5/6, a race can happen between the scrub threads of
+	 * different devices: for data corruption, the parity and the
+	 * data scrub threads will both try to recover the data.
+	 * Such a race can lead to doubly accounted csum errors, or
+	 * even unrecoverable errors.
+	 */
+	ret = lock_full_stripe(fs_info, logical);
+	if (ret < 0)
+		return ret;
+
 	if (sctx->is_dev_replace && !is_metadata && !have_csum) {
 		sblocks_for_recheck = NULL;
 		goto nodatasum_case;
@@ -1463,6 +1474,9 @@  static int scrub_handle_errored_block(struct scrub_block *sblock_to_check)
 		kfree(sblocks_for_recheck);
 	}
 
+	ret = unlock_full_stripe(fs_info, logical);
+	if (ret < 0)
+		return ret;
 	return 0;
 }