
[v2] btrfs: raid56: data corruption on a device removal

Message ID CANYdAbLNAxoAXcxuC+ugpdhsCpTbh0zVjY3T5rtOcpVufnmWRA@mail.gmail.com (mailing list archive)
State New, archived
Series [v2] btrfs: raid56: data corruption on a device removal

Commit Message

Dmitriy Gorokh Dec. 14, 2018, 5:48 p.m. UTC
A RAID5 or RAID6 filesystem might get corrupted in the following scenario:

1. Create a 4-disk RAID6 filesystem
2. Preallocate 16 files of 10 GiB each
3. Run fio: 'fio --name=testload --directory=./ --size=10G
--numjobs=16 --bs=64k --iodepth=64 --rw=randrw --verify=sha256
--time_based --runtime=3600'
4. After a few minutes, pull out two drives: 'echo 1 >
/sys/block/sdc/device/delete ;  echo 1 > /sys/block/sdd/device/delete'

In about 5 out of 10 runs, the test led to silent data corruption of a
random stripe, resulting in 'IO Error' and 'csum failed' messages when
reading the affected file. It usually affects only a small portion of
the files.

It is possible that a few bios which were still in flight during the
drive removal contained a non-zero bio->bi_iter.bi_done field despite
their EIO bi_status. The bi_sector field was also advanced from its
original value by that bi_done amount. This appears to be a quite rare
condition. Subsequently, in the raid_rmw_end_io handler, such a failed
bio can be translated to the wrong stripe number, failing the wrong
rbio.

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Dmitriy Gorokh <dmitriy.gorokh@wdc.com>
---
 fs/btrfs/raid56.c | 6 ++++++
 1 file changed, 6 insertions(+)
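
To illustrate the failure mode, below is a small userspace-only mock of
the stripe lookup arithmetic; the struct, helper names and numbers are
invented for the example and are not the btrfs code. Once the failed
bio's bi_sector has been advanced past a stripe boundary, the
physical-range comparison in a find_bio_stripe-style loop selects the
neighbouring stripe.

#include <stdint.h>
#include <stdio.h>

/* Toy model only: each stripe covers [physical, physical + stripe_len)
 * bytes on its device, mirroring the range check in find_bio_stripe(). */
struct toy_stripe {
	uint64_t physical;
};

static int toy_find_stripe(const struct toy_stripe *stripes, int nr,
			   uint64_t stripe_len, uint64_t physical)
{
	for (int i = 0; i < nr; i++) {
		if (physical >= stripes[i].physical &&
		    physical < stripes[i].physical + stripe_len)
			return i;
	}
	return -1;
}

int main(void)
{
	const uint64_t stripe_len = 64 * 1024;		/* made-up 64 KiB stripes */
	struct toy_stripe stripes[2] = {
		{ .physical = 1048576 },		/* stripe 0 */
		{ .physical = 1048576 + 64 * 1024 },	/* stripe 1 */
	};
	/* bio submitted 48 KiB into stripe 0 ... */
	uint64_t submitted = stripes[0].physical + 48 * 1024;
	/* ... and 32 KiB of it completed before the EIO, advancing bi_sector */
	uint64_t advanced_by = 32 * 1024;
	uint64_t seen_in_end_io = submitted + advanced_by;

	printf("stripe at submission time: %d\n",
	       toy_find_stripe(stripes, 2, stripe_len, submitted));	/* 0 */
	printf("stripe from advanced bi_sector: %d\n",
	       toy_find_stripe(stripes, 2, stripe_len, seen_in_end_io)); /* 1 */
	return 0;
}

Rewinding physical by the number of already-completed bytes, as the
patch below does with bi_done, restores the stripe the bio was actually
submitted to.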

Comments

Liu Bo Dec. 26, 2018, 12:15 a.m. UTC | #1
On Fri, Dec 14, 2018 at 9:51 AM Dmitriy Gorokh <dmitriy.gorokh@gmail.com> wrote:
>
> A RAID5 or RAID6 filesystem might get corrupted in the following scenario:
>
> 1. Create a 4-disk RAID6 filesystem
> 2. Preallocate 16 files of 10 GiB each
> 3. Run fio: 'fio --name=testload --directory=./ --size=10G
> --numjobs=16 --bs=64k --iodepth=64 --rw=randrw --verify=sha256
> --time_based --runtime=3600'
> 4. After a few minutes, pull out two drives: 'echo 1 >
> /sys/block/sdc/device/delete ;  echo 1 > /sys/block/sdd/device/delete'
>
> In about 5 out of 10 runs, the test led to silent data corruption of a
> random stripe, resulting in 'IO Error' and 'csum failed' messages when
> reading the affected file. It usually affects only a small portion of
> the files.
>
> It is possible that a few bios which were still in flight during the
> drive removal contained a non-zero bio->bi_iter.bi_done field despite
> their EIO bi_status. The bi_sector field was also advanced from its
> original value by that bi_done amount. This appears to be a quite rare
> condition. Subsequently, in the raid_rmw_end_io handler, such a failed
> bio can be translated to the wrong stripe number, failing the wrong
> rbio.
>

Looks good.

Reviewed-by: Liu Bo <bo.liu@linux.alibaba.com>

thanks,
liubo

> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Signed-off-by: Dmitriy Gorokh <dmitriy.gorokh@wdc.com>
> ---
>  fs/btrfs/raid56.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 3c8093757497..cd2038315feb 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -1451,6 +1451,12 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
>   struct btrfs_bio_stripe *stripe;
>
>   physical <<= 9;
> + /*
> +  * Since the failed bio can return partial data, bi_sector might be
> +  * incremented by that value. We need to revert it back to the
> +  * state before the bio was submitted.
> +  */
> + physical -= bio->bi_iter.bi_done;
>
>   for (i = 0; i < rbio->bbio->num_stripes; i++) {
>   stripe = &rbio->bbio->stripes[i];
> --
> 2.17.0
David Sterba Jan. 4, 2019, 4:49 p.m. UTC | #2
On Fri, Dec 14, 2018 at 08:48:50PM +0300, Dmitriy Gorokh wrote:
> A RAID5 or RAID6 filesystem might get corrupted in the following scenario:
> 
> 1. Create a 4-disk RAID6 filesystem
> 2. Preallocate 16 files of 10 GiB each
> 3. Run fio: 'fio --name=testload --directory=./ --size=10G
> --numjobs=16 --bs=64k --iodepth=64 --rw=randrw --verify=sha256
> --time_based --runtime=3600'
> 4. After a few minutes, pull out two drives: 'echo 1 >
> /sys/block/sdc/device/delete ;  echo 1 > /sys/block/sdd/device/delete'
> 
> In about 5 out of 10 runs, the test led to silent data corruption of a
> random stripe, resulting in 'IO Error' and 'csum failed' messages when
> reading the affected file. It usually affects only a small portion of
> the files.
> 
> It is possible that a few bios which were still in flight during the
> drive removal contained a non-zero bio->bi_iter.bi_done field despite
> their EIO bi_status. The bi_sector field was also advanced from its
> original value by that bi_done amount. This appears to be a quite rare
> condition. Subsequently, in the raid_rmw_end_io handler, such a failed
> bio can be translated to the wrong stripe number, failing the wrong
> rbio.
> 
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
> Signed-off-by: Dmitriy Gorokh <dmitriy.gorokh@wdc.com>
> ---
>  fs/btrfs/raid56.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 3c8093757497..cd2038315feb 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -1451,6 +1451,12 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
>   struct btrfs_bio_stripe *stripe;
> 
>   physical <<= 9;
> + /*
> +  * Since the failed bio can return partial data, bi_sector might be
> +  * incremented by that value. We need to revert it back to the
> +  * state before the bio was submitted.
> +  */
> + physical -= bio->bi_iter.bi_done;

The bi_done member has been removed in recent block layer changes by
commit 7759eb23fd9808a2e4498cf36a798ed65cde78ae ("block: remove
bio_rewind_iter()"). I wonder what kind of block magic we need to do
here, as the iterators seem to be local and there's nothing available
in the call chain leading to find_bio_stripe. Johannes, any ideas?
Johannes Thumshirn Jan. 7, 2019, 11:03 a.m. UTC | #3
[+Cc Ming and full quote for reference]

On 04/01/2019 17:49, David Sterba wrote:
> On Fri, Dec 14, 2018 at 08:48:50PM +0300, Dmitriy Gorokh wrote:
>> A RAID5 or RAID6 filesystem might get corrupted in the following scenario:
>>
>> 1. Create a 4-disk RAID6 filesystem
>> 2. Preallocate 16 files of 10 GiB each
>> 3. Run fio: 'fio --name=testload --directory=./ --size=10G
>> --numjobs=16 --bs=64k --iodepth=64 --rw=randrw --verify=sha256
>> --time_based --runtime=3600'
>> 4. After a few minutes, pull out two drives: 'echo 1 >
>> /sys/block/sdc/device/delete ;  echo 1 > /sys/block/sdd/device/delete'
>>
>> In about 5 out of 10 runs, the test led to silent data corruption of a
>> random stripe, resulting in 'IO Error' and 'csum failed' messages when
>> reading the affected file. It usually affects only a small portion of
>> the files.
>>
>> It is possible that a few bios which were still in flight during the
>> drive removal contained a non-zero bio->bi_iter.bi_done field despite
>> their EIO bi_status. The bi_sector field was also advanced from its
>> original value by that bi_done amount. This appears to be a quite rare
>> condition. Subsequently, in the raid_rmw_end_io handler, such a failed
>> bio can be translated to the wrong stripe number, failing the wrong
>> rbio.
>>
>> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
>> Signed-off-by: Dmitriy Gorokh <dmitriy.gorokh@wdc.com>
>> ---
>>  fs/btrfs/raid56.c | 6 ++++++
>>  1 file changed, 6 insertions(+)
>>
>> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
>> index 3c8093757497..cd2038315feb 100644
>> --- a/fs/btrfs/raid56.c
>> +++ b/fs/btrfs/raid56.c
>> @@ -1451,6 +1451,12 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
>>   struct btrfs_bio_stripe *stripe;
>>
>>   physical <<= 9;
>> + /*
>> +  * Since the failed bio can return partial data, bi_sector might be
>> +  * incremented by that value. We need to revert it back to the
>> +  * state before the bio was submitted.
>> +  */
>> + physical -= bio->bi_iter.bi_done;
> 
> The bi_done member has been removed in recent block layer changes by
> commit 7759eb23fd9808a2e4498cf36a798ed65cde78ae ("block: remove
> bio_rewind_iter()"). I wonder what kind of block magic we need to do
> here, as the iterators seem to be local and there's nothing available
> in the call chain leading to find_bio_stripe. Johannes, any ideas?

Right, what we could do here is go the same way Ming did in
7759eb23fd980 ("block: remove bio_rewind_iter()") and save a bvec_iter
somewhere before submission, then check whether we returned partial
data, but this somehow feels wrong to me (at least doing it in btrfs
code instead of the block layer).

Ming, can we resurrect ->bi_done, or do you have a different suggestion
for finding out about partially written bios?
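
A minimal sketch of that save-the-iterator idea, assuming the caller has
somewhere to stash the iterator; submit_with_saved_iter, saved_iter and
physical_at_submission are invented names for the sketch, not existing
btrfs or block layer APIs. Since bvec_iter.bi_size only shrinks as the
bio advances, the difference between the saved and the current bi_size
is exactly the number of bytes bi_sector was pushed forward by:

/*
 * Sketch only, not a proposed patch: stash the full iterator before
 * submission in some per-bio context the caller already owns (held
 * here in a file-scope variable purely for brevity).
 */
static struct bvec_iter saved_iter;

static void submit_with_saved_iter(struct bio *bio)
{
	saved_iter = bio->bi_iter;	/* copy the whole iterator, not just bi_sector */
	submit_bio(bio);
}

/* In the end_io handler, recover the physical byte address the bio was
 * originally aimed at, even if it completed partially before failing. */
static u64 physical_at_submission(struct bio *bio)
{
	u64 advanced = saved_iter.bi_size - bio->bi_iter.bi_size;

	return ((u64)bio->bi_iter.bi_sector << 9) - advanced;
}

find_bio_stripe() could then be handed the rewound physical address
instead of trusting bio->bi_iter.bi_sector directly; whether such a
saved iterator should live in btrfs or in the block layer is exactly the
open question in this thread.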
David Sterba Jan. 7, 2019, 3:34 p.m. UTC | #4
On Mon, Jan 07, 2019 at 12:03:43PM +0100, Johannes Thumshirn wrote:
> >> + /*
> >> +  * Since the failed bio can return partial data, bi_sector might be
> >> +  * incremented by that value. We need to revert it back to the
> >> +  * state before the bio was submitted.
> >> +  */
> >> + physical -= bio->bi_iter.bi_done;
> > 
> > The bi_done member has been removed in recent block layer changes by
> > commit 7759eb23fd9808a2e4498cf36a798ed65cde78ae ("block: remove
> > bio_rewind_iter()"). I wonder what kind of block magic we need to do
> > here, as the iterators seem to be local and there's nothing available
> > in the call chain leading to find_bio_stripe. Johannes, any ideas?
> 
> Right, what we could do here is go the same way Ming did in
> 7759eb23fd980 ("block: remove bio_rewind_iter()") and save a bvec_iter
> somewhere before submission, then check whether we returned partial
> data, but this somehow feels wrong to me (at least doing it in btrfs
> code instead of the block layer).

> Ming, can we resurrect ->bi_done, or do you have a different suggestion
> for finding out about partially written bios?

I don't think bi_done can be resurrected
(https://marc.info/?l=linux-block&m=153549921116441&w=2) but I still am
not sure that saving the iterator is the right thing (or at least how to
do that right, not that the whole idea is wrong).
Johannes Thumshirn Jan. 10, 2019, 4:49 p.m. UTC | #5
On 07/01/2019 16:34, David Sterba wrote:
> On Mon, Jan 07, 2019 at 12:03:43PM +0100, Johannes Thumshirn wrote:
>>>> + /*
>>>> +  * Since the failed bio can return partial data, bi_sector might be
>>>> +  * incremented by that value. We need to revert it back to the
>>>> +  * state before the bio was submitted.
>>>> +  */
>>>> + physical -= bio->bi_iter.bi_done;
>>>
>>> The bi_done member has been removed in recent block layer changes by
>>> commit 7759eb23fd9808a2e4498cf36a798ed65cde78ae ("block: remove
>>> bio_rewind_iter()"). I wonder what kind of block magic we need to do
>>> here, as the iterators seem to be local and there's nothing available
>>> in the call chain leading to find_bio_stripe. Johannes, any ideas?
>>
>> Right, what we could do here is go the same way Ming did in
>> 7759eb23fd980 ("block: remove bio_rewind_iter()") and save a bvec_iter
>> somewhere before submission, then check whether we returned partial
>> data, but this somehow feels wrong to me (at least doing it in btrfs
>> code instead of the block layer).
> 
>> Ming, can we resurrect ->bi_done, or do you have a different suggestion
>> for finding out about partially written bios?
> 
> I don't think bi_done can be resurrected
> (https://marc.info/?l=linux-block&m=153549921116441&w=2) but I still am
> not sure that saving the iterator is the right thing (or at least how to
> do that right, not that the whole idea is wrong).

Right, but we're looking for the number of already completed bytes to
rewind here, so from bvec.h's docs it is bi_bvec_done.

Dmitriy can you see if this works for you:

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index e74455eb42f9..2d0e2eec5413 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1350,6 +1350,7 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
        struct btrfs_bio_stripe *stripe;

        physical <<= 9;
+       physical -= bio->bi_iter.bi_bvec_done;

        for (i = 0; i < rbio->bbio->num_stripes; i++) {
                stripe = &rbio->bbio->stripes[i];
Johannes Thumshirn Jan. 11, 2019, 8:08 a.m. UTC | #6
On 10/01/2019 17:49, Johannes Thumshirn wrote:
> Right, but we're looking for the number of already completed bytes to
> rewind here, so from bvec.h's docs it is bi_bvec_done.
> 
> Dmitriy can you see if this works for you:
> 
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index e74455eb42f9..2d0e2eec5413 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -1350,6 +1350,7 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
>         struct btrfs_bio_stripe *stripe;
> 
>         physical <<= 9;
> +       physical -= bio->bi_iter.bi_bvec_done;

OK, I talked to Hannes about this issue; he says the only way is to save
the iterator state before submitting the bio.

So the above is bogus too.

This is also what Kent said in
https://marc.info/?l=linux-block&m=153549921116441&w=2
Ming Lei Jan. 11, 2019, 9:26 a.m. UTC | #7
On Fri, Jan 11, 2019 at 09:08:27AM +0100, Johannes Thumshirn wrote:
> On 10/01/2019 17:49, Johannes Thumshirn wrote:
> > Right, but we're looking for the number of already completed bytes to
> > rewind here, so from bvec.h's docs it is bi_bvec_done.
> > 
> > Dmitriy can you see if this works for you:
> > 
> > diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> > index e74455eb42f9..2d0e2eec5413 100644
> > --- a/fs/btrfs/raid56.c
> > +++ b/fs/btrfs/raid56.c
> > @@ -1350,6 +1350,7 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
> >         struct btrfs_bio_stripe *stripe;
> > 
> >         physical <<= 9;
> > +       physical -= bio->bi_iter.bi_bvec_done;
> 
> OK, I talked to Hannes about this issue; he says the only way is to save
> the iterator state before submitting the bio.
> 
> So the above is bogus too.
> 
> This is also what Kent said in
> https://marc.info/?l=linux-block&m=153549921116441&w=2

Yeah, .bi_bvec_done can't help; it seems you need to take an approach
similar to the one in 7759eb23fd9808a2e ("block: remove
bio_rewind_iter()").

In theory, the following way might work, but it still feels like a hack:

	#define BVEC_ITER_ALL_INIT (struct bvec_iter)                  \
	{                                                              \
	        .bi_sector      = 0,                                   \
	        .bi_size        = UINT_MAX,                            \
	        .bi_idx         = 0,                                   \
	        .bi_bvec_done   = 0,                                   \
	}
	
	old_bi_sector = bio->bi_iter.bi_sector;
	bio->bi_iter = BVEC_ITER_ALL_INIT;
	bio->bi_iter.bi_sector = old_bi_sector;
	bio_advance(bio, where_to_rewind);

This bio can't be a fast-cloned one, which should be ensured by the
fs code.

Also, 'where_to_rewind' needs to be the offset relative to the start of
this bio.


Thanks,
Ming
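
For reference, a userspace mock of the bvec_iter bookkeeping shows why
bi_bvec_done on its own cannot replace the removed bi_done. The field
names below mirror include/linux/bvec.h, but the structure and advance
helper are a standalone imitation with invented values, not kernel code:
once an advance crosses a bvec boundary, bi_bvec_done wraps back while
bi_sector keeps all of the progress, so the per-bvec counter
under-reports the rewind find_bio_stripe would need.

#include <stdint.h>
#include <stdio.h>

/* Standalone imitation of struct bvec_iter bookkeeping, for illustration. */
struct mock_iter {
	uint64_t bi_sector;		/* device address in 512-byte sectors */
	unsigned int bi_size;		/* residual I/O size in bytes */
	unsigned int bi_idx;		/* current index into the bvec array */
	unsigned int bi_bvec_done;	/* bytes completed in the current bvec */
};

static void mock_advance(struct mock_iter *it, const unsigned int *bv_len,
			 unsigned int bytes)
{
	it->bi_sector += bytes >> 9;
	it->bi_size -= bytes;
	while (bytes) {
		unsigned int step = bv_len[it->bi_idx] - it->bi_bvec_done;

		if (step > bytes)
			step = bytes;
		it->bi_bvec_done += step;
		if (it->bi_bvec_done == bv_len[it->bi_idx]) {
			it->bi_idx++;
			it->bi_bvec_done = 0;	/* wraps at every bvec boundary */
		}
		bytes -= step;
	}
}

int main(void)
{
	unsigned int bv_len[4] = { 4096, 4096, 4096, 4096 };	/* four pages */
	struct mock_iter it = { .bi_sector = 2048, .bi_size = 16384 };
	struct mock_iter saved = it;	/* what a saved iterator would remember */

	mock_advance(&it, bv_len, 6144);	/* one and a half bvecs completed */

	printf("bi_bvec_done:    %u bytes\n", it.bi_bvec_done);		  /* 2048 */
	printf("really advanced: %u bytes\n", saved.bi_size - it.bi_size); /* 6144 */
	printf("bi_sector moved: %llu sectors\n",
	       (unsigned long long)(it.bi_sector - saved.bi_sector));	  /* 12 */
	return 0;
}

Only a saved copy of the iterator (or an equivalent per-bio record)
would let the end_io path rewind by the full 6144 bytes here.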

Patch

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 3c8093757497..cd2038315feb 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1451,6 +1451,12 @@  static int find_bio_stripe(struct btrfs_raid_bio *rbio,
  struct btrfs_bio_stripe *stripe;

  physical <<= 9;
+ /*
+  * Since the failed bio can return partial data, bi_sector might be
+  * incremented by that value. We need to revert it back to the
+  * state before the bio was submitted.
+  */
+ physical -= bio->bi_iter.bi_done;

  for (i = 0; i < rbio->bbio->num_stripes; i++) {
  stripe = &rbio->bbio->stripes[i];