@@ -1824,15 +1824,19 @@ static int scrub_checksum_data(struct scrub_block *sblock)
if (!spage->have_csum)
return 0;
+ /*
+ * In scrub_pages() and scrub_pages_for_parity() we ensure
+ * each spage contains just one sector of data.
+ */
+ ASSERT(spage->page_len == sctx->fs_info->sectorsize);
kaddr = page_address(spage->page);
shash->tfm = fs_info->csum_shash;
crypto_shash_init(shash);
- crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum);
+ crypto_shash_digest(shash, kaddr, spage->page_len, csum);
if (memcmp(csum, spage->csums, sctx->csum_size))
sblock->checksum_error = 1;
-
return sblock->checksum_error;
}
Btrfs scrub is in fact much more flexible than the buffered data write
path, as we can read unaligned subpage data into page offset 0.

This ability makes subpage support much easier: we just need to check
each scrub_page::page_len and ensure we only calculate the hash for
[0, page_len) of a page, and subpage scrub is supported.

One small thing to notice: for the subpage case we still scrub sector
by sector. This means we submit one read bio per sector, resulting in
the same number of read bios as on 4K page systems.

This behavior can be considered a good thing if we want everything to
behave the same as on 4K page systems. But it also means we're wasting
the ability to submit larger bios using the 64K page size. That is a
problem to address in the future.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
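For illustration only (not part of the patch): below is a minimal
userspace C sketch of the [0, page_len) idea. crc32c(),
DEMO_PAGE_SIZE and DEMO_SECTORSIZE are hypothetical demo stand-ins
for crypto_shash_digest() and fs_info->sectorsize; the only point is
that the digest length must be the sector size, not PAGE_SIZE.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DEMO_PAGE_SIZE	65536u	/* e.g. 64K pages on ppc64/arm64 */
#define DEMO_SECTORSIZE	4096u	/* one data sector per scrub_page */

/* Bit-by-bit CRC32C, a demo stand-in for the kernel csum hash. */
static uint32_t crc32c(const uint8_t *data, size_t len)
{
	uint32_t crc = ~0u;
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= data[i];
		for (bit = 0; bit < 8; bit++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82f63b78u : 0);
	}
	return ~crc;
}

int main(void)
{
	static uint8_t page[DEMO_PAGE_SIZE];

	/* Only [0, page_len) holds the sector read in by scrub. */
	memset(page, 0xaa, DEMO_SECTORSIZE);
	/* The rest of the 64K page is unrelated to this sector. */
	memset(page + DEMO_SECTORSIZE, 0xff,
	       DEMO_PAGE_SIZE - DEMO_SECTORSIZE);

	/* Old behavior: hashing PAGE_SIZE pulls in unrelated bytes. */
	printf("csum over PAGE_SIZE: 0x%08x\n",
	       crc32c(page, DEMO_PAGE_SIZE));
	/* Patched behavior: hash only [0, page_len) == sectorsize. */
	printf("csum over page_len:  0x%08x\n",
	       crc32c(page, DEMO_SECTORSIZE));
	return 0;
}

On 4K page systems PAGE_SIZE == sectorsize, so the old PAGE_SIZE
digest length happened to be correct; with 64K pages it would fold
15 unrelated sectors into the hash, which is exactly what switching
to spage->page_len avoids.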