
[2/2] btrfs: Remove unneeded missing device number check

Message ID 1442375031-18212-2-git-send-email-quwenruo@cn.fujitsu.com (mailing list archive)
State New, archived

Commit Message

Qu Wenruo Sept. 16, 2015, 3:43 a.m. UTC
As we already do a per-chunk missing device count check at read_one_chunk()
time, the global missing device count check is no longer needed.

Just remove it.

Now btrfs can handle the following case:
 # mkfs.btrfs -f -m raid1 -d single /dev/sdb /dev/sdc

 The data chunk will be located on sdb, so it should be safe to wipe sdc
 # wipefs -a /dev/sdc

 # mount /dev/sdb /mnt/btrfs -o degraded

Signed-off-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
---
 fs/btrfs/disk-io.c | 8 --------
 1 file changed, 8 deletions(-)
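
As a side note for readers of this archive, the difference between the
removed global check and the per-chunk check the commit message relies on
can be sketched with a small standalone C model. This is illustrative only,
not kernel code; the struct layout and names are invented here, and the
tolerance values simply mirror what the single and raid1 profiles allow.
-----------
/*
 * Toy model of the per-chunk vs. global missing-device check.
 * NOT kernel code; all names and numbers are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

enum profile { SINGLE, RAID1 };

struct chunk {
	const char *name;
	enum profile profile;
	int num_stripes;
	bool stripe_missing[2];	/* is the device backing stripe i gone? */
};

/* How many missing devices a chunk of this profile can tolerate. */
static int tolerated(enum profile p)
{
	return p == RAID1 ? 1 : 0;
}

/* Per-chunk check, in the spirit of the one done at read_one_chunk() time. */
static bool chunk_ok(const struct chunk *c)
{
	int missing = 0;

	for (int i = 0; i < c->num_stripes; i++)
		if (c->stripe_missing[i])
			missing++;
	return missing <= tolerated(c->profile);
}

int main(void)
{
	/* mkfs.btrfs -m raid1 -d single /dev/sdb /dev/sdc; then sdc is wiped */
	struct chunk chunks[] = {
		{ "metadata (raid1 on sdb+sdc)", RAID1, 2, { false, true } },
		{ "data (single on sdb only)",   SINGLE, 1, { false } },
	};
	int global_missing = 1;		/* sdc is gone */
	int global_tolerated = 0;	/* single data keeps the global limit at 0 */
	bool per_chunk_ok = true;

	for (unsigned int i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++)
		if (!chunk_ok(&chunks[i]))
			per_chunk_ok = false;

	printf("global check:    %s\n",
	       global_missing > global_tolerated ? "refuse rw" : "allow rw");
	printf("per-chunk check: %s\n",
	       per_chunk_ok ? "allow rw" : "refuse rw");
	return 0;
}
-----------
The global check refuses the degraded rw mount because one device is missing
while the single data profile keeps the tolerated limit at 0; the per-chunk
view accepts it because the raid1 metadata chunk can stand one missing stripe
and the data chunk sits entirely on the surviving sdb.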

Comments

Anand Jain Sept. 17, 2015, 9:43 a.m. UTC | #1
On 09/16/2015 11:43 AM, Qu Wenruo wrote:
> As we already do a per-chunk missing device count check at read_one_chunk()
> time, the global missing device count check is no longer needed.
>
> Just remove it.

However, the missing device count that we have during remount is not
fine-grained per chunk.
-----------
btrfs_remount
::
                  if (fs_info->fs_devices->missing_devices >
                      fs_info->num_tolerated_disk_barrier_failures &&
                     !(*flags & MS_RDONLY ||
                         btrfs_test_opt(root, DEGRADED))) {
                         btrfs_warn(fs_info,
                                 "too many missing devices, writeable remount is not allowed");
                         ret = -EACCES;
                         goto restore;
                 }
---------

Thanks, Anand


Qu Wenruo Sept. 17, 2015, 10:01 a.m. UTC | #2
Thanks for pointing this out.

Although the previous patch is small enough, for the remount case we need
to iterate over all the existing chunk cache.

So the fix for remount will take a little more time.
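
Roughly, the walk would look like this, written as a standalone sketch
rather than a kernel patch (the "chunk cache" layout and every name below
are invented just for illustration):
-----------
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for one cached chunk mapping. */
struct toy_chunk {
	int num_stripes;
	const bool *stripe_missing;	/* true if the backing device is gone */
	int max_tolerated;		/* 0 for single/raid0, 1 for raid1/raid10, ... */
};

/*
 * A rw (re)mount is only safe if every cached chunk individually stays
 * within the tolerance of its own profile, instead of comparing one
 * global missing-device count against one global limit.
 */
bool rw_degradable(const struct toy_chunk *chunks, size_t nr_chunks)
{
	for (size_t i = 0; i < nr_chunks; i++) {
		int missing = 0;

		for (int s = 0; s < chunks[i].num_stripes; s++)
			if (chunks[i].stripe_missing[s])
				missing++;
		if (missing > chunks[i].max_tolerated)
			return false;	/* this chunk already lost too much */
	}
	return true;
}
-----------
The remount path would call something like that before clearing MS_RDONLY,
instead of the single missing_devices vs. num_tolerated_disk_barrier_failures
comparison quoted above.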

Thanks for reviewing.
Qu

Anand Jain Sept. 18, 2015, 1:47 a.m. UTC | #3
On 09/17/2015 06:01 PM, Qu Wenruo wrote:
> Thanks for pointing this out.


> Although previous patch is small enough, but for remount case, we need
> to iterate all the existing chunk cache.

  yes indeed.

  Thinking hard on this: is there any test case that these two patches
solve which the original patch [1] did not?

  I tried to break both approaches (this patch set and [1]) but I
wasn't successful. Sorry if I am missing something.

Thanks, Anand

[1] [PATCH 23/23] Btrfs: allow -o rw,degraded for single group profile


Qu Wenruo Sept. 18, 2015, 2:06 a.m. UTC | #4
Anand Jain wrote on 2015/09/18 09:47 +0800:
>
>
> On 09/17/2015 06:01 PM, Qu Wenruo wrote:
>> Thanks for pointing this out.
>
>
>> Although previous patch is small enough, but for remount case, we need
>> to iterate all the existing chunk cache.
>
>   yes indeed.
>
>   thinking hard on this - is there any test-case that these two patches
> are solving, which the original patch [1] didn't solve ?

Yep, your patch is OK for fixing the single-chunk-on-a-healthy-disk case,
but IMHO it's a little aggressive and not as safe as the old code.

For example, suppose one uses single metadata on 2 disks, and each disk has
one metadata chunk on it.

One device goes missing later.

Then your patch will allow the fs to be mounted as rw, even though some tree
blocks may live on the missing device.
For the RO case it won't be too dangerous, but if we mount it as RW, who
knows what will happen.
(Normal tree COW should fail before the real write, but I'm not sure
about other RW operations like scrub/replace/balance and others.)

And I think that's the original design concept behind the old missing 
device number check, and it's not a bad idea to follow it anyway.
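
Concretely, as a toy illustration (again not kernel code, just the per-chunk
arithmetic for this layout):
-----------
#include <stdio.h>

/* SINGLE metadata, one metadata chunk per disk: chunk 0 on disk1, chunk 1 on disk2. */
int main(void)
{
	const int chunk_dev[] = { 1, 2 };
	const int missing_dev = 2;	/* say disk2 disappears */

	for (int i = 0; i < 2; i++) {
		int missing = (chunk_dev[i] == missing_dev);
		int tolerated = 0;	/* a SINGLE chunk tolerates no lost stripe */

		printf("metadata chunk %d: %d missing, %d tolerated -> %s\n",
		       i, missing, tolerated,
		       missing > tolerated ? "refuse rw" : "ok");
	}
	return 0;
}
-----------
Whichever of the two disks disappears, one metadata chunk exceeds its
tolerance of zero, so a per-chunk check keeps this case read-only just like
the old global check did.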

As for the patch size, I have found a good way to handle it, which should
keep the patch(set) below 200 lines.

Furthermore, it's even possible to make btrfs switch the mount option to
degraded when a device goes missing at runtime.

Thanks,
Qu
Anand Jain Sept. 18, 2015, 6:45 a.m. UTC | #5
Hi Qu,

  Thanks for the comments on patch [1].

> For example, if one use single metadata for 2 disks,
 > and each disk has one metadata chunk on it.

  how can that be achieved?

> One device got missing later.

  it would surely depend on which one of the devices? (initially only
devid 1 is mountable, with the other missing)


Thanks, Anand

Qu Wenruo Sept. 20, 2015, 12:31 a.m. UTC | #6
On 2015/09/18 14:45, Anand Jain wrote:
>
> Hi Qu,
>
>   Thanks for the comments on patch [1].
>
>> For example, if one use single metadata for 2 disks,
>  > and each disk has one metadata chunk on it.
>
>   how that can be achieved ?
By this I mean that the metadata profile is SINGLE,
and there are 2 metadata chunks.

One on disk1 and one on disk2.

As btrfs chunk allocation will always pick devices in order of available
space, it should be quite easy to reach that situation.

In that case, any missing device will be a disaster, and it's better not 
to allow RW mount.

Thanks,
Qu
Anand Jain Sept. 20, 2015, 5:37 a.m. UTC | #7
On 09/20/2015 08:31 AM, Qu Wenruo wrote:
>
>
> ? 2015?09?18? 14:45, Anand Jain ??:
>>
>> Hi Qu,
>>
>>   Thanks for the comments on patch [1].
>>
>>> For example, if one use single metadata for 2 disks,
>>  > and each disk has one metadata chunk on it.
>>
>>   how that can be achieved ?
> By this I mean that the metadata profile is SINGLE,
> and there are 2 metadata chunks.
>
> One on disk1 and one on disk2.
>
> As btrfs chunk allocation will always pick devices in order of available
> space, it should be quite easy to reach that situation.
>
> In that case, any missing device will be a disaster,

  in this case the chunk read would fail anyway, right?
  and that will lead to a mount failure.

Thanks, Anand

> and it's better not
> to allow RW mount.
>
> Thanks,
> Qu
Qu Wenruo Sept. 21, 2015, 2:09 a.m. UTC | #8
Anand Jain wrote on 2015/09/20 13:37 +0800:
>
>
> On 09/20/2015 08:31 AM, Qu Wenruo wrote:
>> In that case, any missing device will be a disaster,
>
>   in this case the read chunk would anyway fail, right ?
>   and that will lead to mount fail.
>
> Thanks, Anand

Yes, the read will fail, but I'm not completely sure whether other
operations can handle it well or not, like chunk/extent allocation or
scrub/replace/balance.

So I'd still keep it allowing RO mount only.

Thanks,
Qu


Patch

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 0b658d0..ac640ea 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2947,14 +2947,6 @@  retry_root_backup:
 	}
 	fs_info->num_tolerated_disk_barrier_failures =
 		btrfs_calc_num_tolerated_disk_barrier_failures(fs_info);
-	if (fs_info->fs_devices->missing_devices >
-	     fs_info->num_tolerated_disk_barrier_failures &&
-	    !(sb->s_flags & MS_RDONLY)) {
-		pr_warn("BTRFS: missing devices(%llu) exceeds the limit(%d), writeable mount is not allowed\n",
-			fs_info->fs_devices->missing_devices,
-			fs_info->num_tolerated_disk_barrier_failures);
-		goto fail_sysfs;
-	}
 
 	fs_info->cleaner_kthread = kthread_run(cleaner_kthread, tree_root,
 					       "btrfs-cleaner");