
[1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK

Message ID 20211206055258.49061-1-wqu@suse.com (mailing list archive)
State New, archived
Series: [1/2] btrfs: update SCRUB_MAX_PAGES_PER_BLOCK

Commit Message

Qu Wenruo Dec. 6, 2021, 5:52 a.m. UTC
Use BTRFS_MAX_METADATA_BLOCKSIZE and SZ_4K (the minimal sectorsize) to
calculate this value.

Also remove the stale comment on the value: with recent subpage support,
BTRFS_MAX_METADATA_BLOCKSIZE * PAGE_SIZE is already beyond
BTRFS_STRIPE_LEN; we just don't use the full page.

And since we're here, convert the BUG_ON()s related to
SCRUB_MAX_PAGES_PER_BLOCK to ASSERT()s.

Those ASSERT()s are really only there for developers to catch obvious
bugs early, not to make end users suffer.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)
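For reference, the new definition keeps the same numeric value as the old
hard-coded 16: BTRFS_MAX_METADATA_BLOCKSIZE is the largest supported
nodesize (64K) and SZ_4K is the smallest supported sectorsize, so the
quotient is 16 slots per scrub block. The minimal userspace sketch below
rechecks that arithmetic; the constant values are mirrored here by hand and
assumed to match the kernel headers (fs/btrfs/ctree.h and
include/linux/sizes.h), and the program itself is only an illustration, not
kernel code.

#include <assert.h>
#include <stdio.h>

/*
 * Values mirrored by hand from the kernel headers (assumed, not included
 * here): SZ_4K from include/linux/sizes.h, BTRFS_MAX_METADATA_BLOCKSIZE
 * (the largest supported nodesize) from fs/btrfs/ctree.h.
 */
#define SZ_4K				0x00001000
#define BTRFS_MAX_METADATA_BLOCKSIZE	65536

/* New definition from the patch: upper bound of sector slots per block */
#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)

int main(void)
{
	/* Same value as the old literal 16, but now derived from its inputs */
	static_assert(SCRUB_MAX_PAGES_PER_BLOCK == 16,
		      "derived bound must match the old hard-coded 16");
	printf("SCRUB_MAX_PAGES_PER_BLOCK = %d\n", SCRUB_MAX_PAGES_PER_BLOCK);
	return 0;
}

The point of the change is that the bound is now derived from what it
actually depends on (largest nodesize, smallest sectorsize) instead of
being a magic number tied to a particular PAGE_SIZE.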

Comments

David Sterba Dec. 6, 2021, 9:14 p.m. UTC | #1
On Mon, Dec 06, 2021 at 01:52:57PM +0800, Qu Wenruo wrote:
> Use BTRFS_MAX_METADATA_BLOCKSIZE and SZ_4K (the minimal sectorsize) to
> calculate this value.
> 
> Also remove the stale comment on the value: with recent subpage support,
> BTRFS_MAX_METADATA_BLOCKSIZE * PAGE_SIZE is already beyond
> BTRFS_STRIPE_LEN; we just don't use the full page.
> 
> And since we're here, convert the BUG_ON()s related to
> SCRUB_MAX_PAGES_PER_BLOCK to ASSERT()s.
> 
> Those ASSERT()s are really only there for developers to catch obvious
> bugs early, not to make end users suffer.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

1 and 2 added to misc-next, thanks.
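The practical difference behind the BUG_ON() to ASSERT() conversion in the
commit message: BUG_ON() crashes the kernel for every user whenever its
condition holds, while btrfs's ASSERT() only fires in builds with
CONFIG_BTRFS_ASSERT enabled and compiles down to a no-op otherwise. The
userspace sketch below shows that distinction in simplified form; the real
definitions (BUG_ON() in the generic bug headers, ASSERT() in the btrfs
headers around this kernel version) differ in detail, so treat this as an
illustration, not the kernel's implementation.

#include <stdio.h>
#include <stdlib.h>

/* Always-on check: a true condition crashes on every user's machine. */
#define BUG_ON(cond)							\
	do {								\
		if (cond) {						\
			fprintf(stderr, "kernel BUG at %s:%d!\n",	\
				__FILE__, __LINE__);			\
			abort();					\
		}							\
	} while (0)

#ifdef CONFIG_BTRFS_ASSERT
/* Developer builds: a failed assertion still crashes, loudly. */
#define ASSERT(expr)							\
	do {								\
		if (!(expr)) {						\
			fprintf(stderr, "assertion failed: %s, in %s:%d\n", \
				#expr, __FILE__, __LINE__);		\
			abort();					\
		}							\
	} while (0)
#else
/* Production builds: the check disappears entirely. */
#define ASSERT(expr)	((void)0)
#endif

int main(void)
{
	int page_index = 20;

	/* Only developers with the debug option enabled hit this. */
	ASSERT(page_index < 16);

	/* Every user hits this, which is what the patch avoids. */
	BUG_ON(page_index >= 16);
	return 0;
}

Compile the sketch with -DCONFIG_BTRFS_ASSERT to see the ASSERT() fire
first; without it, only the BUG_ON() path aborts.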

Patch

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 15a123e67108..0870d8db92cd 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -49,11 +49,10 @@  struct scrub_ctx;
 #define SCRUB_BIOS_PER_SCTX	64	/* 8MB per device in flight */
 
 /*
- * the following value times PAGE_SIZE needs to be large enough to match the
+ * The following value times PAGE_SIZE needs to be large enough to match the
  * largest node/leaf/sector size that shall be supported.
- * Values larger than BTRFS_STRIPE_LEN are not supported.
  */
-#define SCRUB_MAX_PAGES_PER_BLOCK	16	/* 64k per node/leaf/sector */
+#define SCRUB_MAX_PAGES_PER_BLOCK	(BTRFS_MAX_METADATA_BLOCKSIZE / SZ_4K)
 
 struct scrub_recover {
 	refcount_t		refs;
@@ -1313,7 +1312,7 @@  static int scrub_setup_recheck_block(struct scrub_block *original_sblock,
 		recover->bioc = bioc;
 		recover->map_length = mapped_length;
 
-		BUG_ON(page_index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(page_index < SCRUB_MAX_PAGES_PER_BLOCK);
 
 		nmirrors = min(scrub_nr_raid_mirrors(bioc), BTRFS_MAX_MIRRORS);
 
@@ -2297,7 +2296,7 @@  static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u32 len,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;
 		spage->sblock = sblock;
@@ -2631,7 +2630,7 @@  static int scrub_pages_for_parity(struct scrub_parity *sparity,
 			scrub_block_put(sblock);
 			return -ENOMEM;
 		}
-		BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);
+		ASSERT(index < SCRUB_MAX_PAGES_PER_BLOCK);
 		/* For scrub block */
 		scrub_page_get(spage);
 		sblock->pagev[index] = spage;