@@ -1872,6 +1872,8 @@ static int flush_scrub_stripes(struct scrub_ctx *sctx)
stripe = &sctx->stripes[i];
wait_scrub_stripe_io(stripe);
+ sctx->stat.last_physical = stripe->physical +
+ stripe_length(stripe);
scrub_reset_stripe(stripe);
}
out:
@@ -2337,6 +2339,8 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
stripe_logical += chunk_logical;
ret = scrub_raid56_parity_stripe(sctx, scrub_dev, bg,
map, stripe_logical);
+ sctx->stat.last_physical = min(physical + BTRFS_STRIPE_LEN,
+ physical_end);
if (ret)
goto out;
goto next;
Currently sctx->stat.last_physical is only updated in the following
cases:

- When the last stripe of a non-RAID56 chunk is scrubbed
  This implies a pitfall: if the last stripe is at the chunk boundary,
  and we finished the scrub of the whole chunk, we won't update
  last_physical at all until the next chunk.

- When a P/Q stripe of a RAID56 chunk is scrubbed

This leaves sctx->stat.last_physical not updated for a long time if
we're scrubbing a large data chunk (which can go up to 10GiB).

And if scrub is cancelled halfway, we would restart from last_physical,
but since that last_physical only points at the end of the last finished
chunk, we would re-scrub the same chunk again. This can waste a lot of
time, especially when the chunk is huge.

Fix the problem by properly updating @last_physical after each stripe is
scrubbed.

And since we're here, for the sake of consistency, use a spin lock to
protect the update of @last_physical, just like all the remaining call
sites touching sctx->stat.

Reported-by: Michel Palleau <michel.palleau@gmail.com>
Link: https://lore.kernel.org/linux-btrfs/CAMFk-+igFTv2E8svg=cQ6o3e6CrR5QwgQ3Ok9EyRaEvvthpqCQ@mail.gmail.com/
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 4 ++++
 1 file changed, 4 insertions(+)