
[2/3] btrfs: fix force usage in inc_block_group_ro

Message ID 20200117140739.42560-3-josef@toxicpanda.com (mailing list archive)
State New, archived
Series: clean up how we mark block groups read only

Commit Message

Josef Bacik Jan. 17, 2020, 2:07 p.m. UTC
For some reason we translate the do_chunk_alloc flag that is passed
into btrfs_inc_block_group_ro() into the force argument of
inc_block_group_ro(), but these are two different things.

force for inc_block_group_ro() is used when we are forcing the block
group read only no matter what, for example when the underlying chunk
is marked read only.  We must not do the space check here, as this
block group needs to be read only.
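
A (hypothetical, simplified) sketch of the kind of caller force exists
for, assuming the btrfs_chunk_readonly() helper from this kernel
version: at mount time a chunk sitting on read-only or missing devices
must be marked RO regardless of the space_info accounting.

	/*
	 * Sketch of a force user, error handling elided: the chunk
	 * itself is read only, so the block group must be marked RO
	 * no matter what the space check would say.
	 */
	if (btrfs_chunk_readonly(info, cache->start))
		inc_block_group_ro(cache, 1);	/* force: skip the space check */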

btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates
that we need to pre-allocate a chunk before marking the block group
read only.  This has nothing to do with forcing, and in fact we
_always_ want to do the space check in this case, so unconditionally
pass false for force here.
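
Put differently, with this patch the do_chunk_alloc path never bypasses
the check; it reacts to a failed check by force-allocating a new chunk
and retrying.  A condensed sketch of btrfs_inc_block_group_ro() as I
read it with this patch applied (transaction handling and error paths
elided):

	ret = inc_block_group_ro(cache, 0);	/* always run the space check */
	if (!do_chunk_alloc)
		goto unlock_out;
	if (!ret)
		goto out;
	/* Not enough free space: force-allocate a new chunk and retry. */
	alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
	ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
	if (ret < 0)
		goto out;
	ret = inc_block_group_ro(cache, 0);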

Then fix up inc_block_group_ro() to honor force as it is expected and
documented to do.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/block-group.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Patch

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 8877af541ed0..71770d04b7a3 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1213,7 +1213,7 @@  static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
 	 * Here we make sure if we mark this bg RO, we still have enough
 	 * free space as buffer.
 	 */
-	if (sinfo_used + num_bytes <= sinfo->total_bytes) {
+	if (force || (sinfo_used + num_bytes <= sinfo->total_bytes)) {
 		sinfo->bytes_readonly += num_bytes;
 		cache->ro++;
 		list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
@@ -2225,7 +2225,7 @@  int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
 		}
 	}
 
-	ret = inc_block_group_ro(cache, !do_chunk_alloc);
+	ret = inc_block_group_ro(cache, 0);
 	if (!do_chunk_alloc)
 		goto unlock_out;
 	if (!ret)