[3/4] btrfs: fix force usage in inc_block_group_ro

Message ID 20191125144011.146722-4-josef@toxicpanda.com (mailing list archive)
State New, archived
Series clean up how we mark block groups read only

Commit Message

Josef Bacik Nov. 25, 2019, 2:40 p.m. UTC
For some reason we've translated the do_chunk_alloc that goes into
btrfs_inc_block_group_ro to force in inc_block_group_ro, but these are
two different things.

force for inc_block_group_ro is used when we are forcing the block group
read only no matter what, for example when the underlying chunk is
marked read only.  We need to not do the space check here as this block
group needs to be read only.

btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
we need to pre-allocate a chunk before marking the block group read
only.  This has nothing to do with forcing, and in fact we _always_ want
to do the space check in this case, so unconditionally pass false for
force in this case.

Then fixup inc_block_group_ro to honor force as it's expected and
documented to do.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/block-group.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
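
To make the distinction concrete, here is a minimal userspace sketch of the
two behaviors. The structures and the space check are hypothetical
simplifications for illustration, not the real fs/btrfs/block-group.c code:

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures (hypothetical). */
struct space_info {
	long total_bytes;   /* total space in this space_info */
	long bytes_ro_used; /* bytes already read only or in use */
};

struct block_group {
	int ro;      /* read-only reference count */
	long length; /* size of the block group */
	long used;   /* bytes allocated out of it */
};

/*
 * force == true means "mark read only no matter what", e.g. because the
 * underlying chunk is read only, so the space check is skipped.
 */
static int inc_block_group_ro(struct block_group *bg,
			      struct space_info *sinfo, bool force)
{
	long num_bytes = bg->length - bg->used;

	if (bg->ro || force) {
		bg->ro++;
		return 0;
	}
	/*
	 * Marking the group read only takes its free space out of
	 * rotation; refuse if the rest of the space_info cannot absorb
	 * that.
	 */
	if (sinfo->bytes_ro_used + num_bytes > sinfo->total_bytes)
		return -ENOSPC;
	bg->ro++;
	return 0;
}

int main(void)
{
	struct space_info sinfo = { .total_bytes = 100, .bytes_ro_used = 90 };
	struct block_group bg = { .ro = 0, .length = 20, .used = 2 };

	/* 90 + 18 > 100, so the checked path refuses... */
	printf("checked: %d\n", inc_block_group_ro(&bg, &sinfo, false));
	/* ...while the forced path succeeds regardless of space. */
	printf("forced:  %d\n", inc_block_group_ro(&bg, &sinfo, true));
	return 0;
}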

Comments

Qu Wenruo Nov. 26, 2019, 2:43 a.m. UTC | #1
On 2019/11/25 10:40 PM, Josef Bacik wrote:
> For some reason we've translated the do_chunk_alloc that goes into
> btrfs_inc_block_group_ro to force in inc_block_group_ro, but these are
> two different things.
> 
> force for inc_block_group_ro is used when we are forcing the block group
> read only no matter what, for example when the underlying chunk is
> marked read only.  We need to not do the space check here as this block
> group needs to be read only.
> 
> btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
> we need to pre-allocate a chunk before marking the block group read
> only.  This has nothing to do with forcing, and in fact we _always_ want
> to do the space check in this case, so unconditionally pass false for
> force in this case.

I think the patch order makes things a little hard to grasp here.
Without the last patch, the idea itself is not correct.

The reason to force ro is that we want to avoid an empty chunk being
allocated, especially for the scrub case.


If you put the last patch before this one, it's clearer, as then we
can accept over-commit: we won't return a false ENOSPC and no empty
chunk gets created.

BTW, with the last patch applied, we can remove that @force parameter
for inc_block_group_ro().

Thanks,
Qu
> 
> Then fixup inc_block_group_ro to honor force as it's expected and
> documented to do.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/block-group.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index db539bfc5a52..3ffbc2e0af21 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1190,8 +1190,10 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
>  	spin_lock(&sinfo->lock);
>  	spin_lock(&cache->lock);
>  
> -	if (cache->ro) {
> +	if (cache->ro || force) {
>  		cache->ro++;
> +		if (list_empty(&cache->ro_list))
> +			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
>  		ret = 0;
>  		goto out;
>  	}
> @@ -2063,7 +2065,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
>  		}
>  	}
>  
> -	ret = inc_block_group_ro(cache, !do_chunk_alloc);
> +	ret = inc_block_group_ro(cache, false);
>  	if (!do_chunk_alloc)
>  		goto unlock_out;
>  	if (!ret)
>
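
To illustrate the ordering concern using the simplified model sketched after
the commit message above: scrub is exactly the caller that must not allocate
an empty chunk. A hypothetical sketch of its side (names invented for
illustration; the real code lives in scrub.c):

/* Hypothetical scrub-side caller of the simplified model above. */
static int scrub_mark_bg_ro(struct block_group *bg, struct space_info *sinfo)
{
	/*
	 * Scrub passes do_chunk_alloc == false: allocating an empty
	 * chunk just to satisfy the space check would be pure waste.
	 * Until the over-commit patch lands, this strict check can
	 * fail with a false -ENOSPC, which is the ordering concern.
	 */
	return inc_block_group_ro(bg, sinfo, false);
}
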
Qu Wenruo Nov. 26, 2019, 4:59 a.m. UTC | #2
On 2019/11/26 10:43 AM, Qu Wenruo wrote:
> 
> 
> On 2019/11/25 10:40 PM, Josef Bacik wrote:
>> For some reason we've translated the do_chunk_alloc that goes into
>> btrfs_inc_block_group_ro to force in inc_block_group_ro, but these are
>> two different things.
>>
>> force for inc_block_group_ro is used when we are forcing the block group
>> read only no matter what, for example when the underlying chunk is
>> marked read only.  We need to not do the space check here as this block
>> group needs to be read only.
>>
>> btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
>> we need to pre-allocate a chunk before marking the block group read
>> only.  This has nothing to do with forcing, and in fact we _always_ want
>> to do the space check in this case, so unconditionally pass false for
>> force in this case.
> 
> I think the patch order makes things a little hard to grasp here.
> Without the last patch, the idea itself is not correct.
> 
> The reason to force ro is that we want to avoid an empty chunk being
> allocated, especially for the scrub case.
> 
> 
> If you put the last patch before this one, it's clearer, as then we
> can accept over-commit: we won't return a false ENOSPC and no empty
> chunk gets created.
> 
> BTW, with the last patch applied, we can remove that @force parameter
> for inc_block_group_ro().

My bad, the @force parameter is still needed. I didn't notice that
until all the patches were applied.

Thanks,
Qu

> 
> Thanks,
> Qu
>>
>> Then fixup inc_block_group_ro to honor force as it's expected and
>> documented to do.
>>
>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>> ---
>>  fs/btrfs/block-group.c | 6 ++++--
>>  1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
>> index db539bfc5a52..3ffbc2e0af21 100644
>> --- a/fs/btrfs/block-group.c
>> +++ b/fs/btrfs/block-group.c
>> @@ -1190,8 +1190,10 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
>>  	spin_lock(&sinfo->lock);
>>  	spin_lock(&cache->lock);
>>  
>> -	if (cache->ro) {
>> +	if (cache->ro || force) {
>>  		cache->ro++;
>> +		if (list_empty(&cache->ro_list))
>> +			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
>>  		ret = 0;
>>  		goto out;
>>  	}
>> @@ -2063,7 +2065,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
>>  		}
>>  	}
>>  
>> -	ret = inc_block_group_ro(cache, !do_chunk_alloc);
>> +	ret = inc_block_group_ro(cache, false);
>>  	if (!do_chunk_alloc)
>>  		goto unlock_out;
>>  	if (!ret)
>>
>
Nikolay Borisov Nov. 26, 2019, 10:09 a.m. UTC | #3
On 25.11.19, 16:40, Josef Bacik wrote:
> For some reason we've translated the do_chunk_alloc that goes into
> btrfs_inc_block_group_ro to force in inc_block_group_ro, but these are
> two different things.
> 
> force for inc_block_group_ro is used when we are forcing the block group
> read only no matter what, for example when the underlying chunk is
> marked read only.  We need to not do the space check here as this block
> group needs to be read only.
> 
> btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
> we need to pre-allocate a chunk before marking the block group read
> only.  This has nothing to do with forcing, and in fact we _always_ want
> to do the space check in this case, so unconditionally pass false for
> force in this case.
> 
> Then fixup inc_block_group_ro to honor force as it's expected and
> documented to do.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>

Reviewed-by: Nikolay Borisov <nborisov@suse.com>

> ---
>  fs/btrfs/block-group.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index db539bfc5a52..3ffbc2e0af21 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1190,8 +1190,10 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
>  	spin_lock(&sinfo->lock);
>  	spin_lock(&cache->lock);
>  
> -	if (cache->ro) {
> +	if (cache->ro || force) {
>  		cache->ro++;
> +		if (list_empty(&cache->ro_list))
> +			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);

nit: This only makes sense in the case of force, e.g. just to make it
clearer perhaps the check can be modified to if (force || list_empty)?
(See the sketch after this mail.)

>  		ret = 0;
>  		goto out;
>  	}
> @@ -2063,7 +2065,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
>  		}
>  	}
>  
> -	ret = inc_block_group_ro(cache, !do_chunk_alloc);
> +	ret = inc_block_group_ro(cache, false);
>  	if (!do_chunk_alloc)
>  		goto unlock_out;
>  	if (!ret)
>
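
For reference, a sketch of the variant the nit hints at. This reads the
suggestion as force && list_empty(), on the assumption that a literal
force || list_empty() was not meant, since that could re-add a group already
on ro_bgs:

	if (cache->ro || force) {
		cache->ro++;
		/*
		 * Only a forced transition can find the group off the
		 * list (an unforced one got here via cache->ro, so it
		 * was added already); spell that out explicitly.
		 */
		if (force && list_empty(&cache->ro_list))
			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
		ret = 0;
		goto out;
	}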

Patch

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index db539bfc5a52..3ffbc2e0af21 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1190,8 +1190,10 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
 	spin_lock(&sinfo->lock);
 	spin_lock(&cache->lock);
 
-	if (cache->ro) {
+	if (cache->ro || force) {
 		cache->ro++;
+		if (list_empty(&cache->ro_list))
+			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
 		ret = 0;
 		goto out;
 	}
@@ -2063,7 +2065,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
 		}
 	}
 
-	ret = inc_block_group_ro(cache, !do_chunk_alloc);
+	ret = inc_block_group_ro(cache, false);
 	if (!do_chunk_alloc)
 		goto unlock_out;
 	if (!ret)
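
For orientation, the caller flow this hunk produces can be condensed as
follows, building on the simplified model sketched after the commit message.
This is a hypothetical rendering only; the real btrfs_inc_block_group_ro()
also handles the transaction handle, locking, and error paths elided here:

/* Condensed, hypothetical rendering of the caller flow after the fix. */
static int btrfs_inc_block_group_ro_sketch(struct block_group *bg,
					   struct space_info *sinfo,
					   bool do_chunk_alloc)
{
	int ret;

	/* The fix: never skip the space check from this path. */
	ret = inc_block_group_ro(bg, sinfo, false);
	if (!do_chunk_alloc || !ret)
		return ret;

	/*
	 * The space check failed and the caller allows allocation:
	 * force-allocate a new chunk (elided) and try once more,
	 * again without force.
	 */
	return inc_block_group_ro(bg, sinfo, false);
}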