Message ID | cover.1681364951.git.wqu@suse.com
---|---
Series | btrfs: reduce the duplicated reads during P/Q scrub
On Thu, Apr 13, 2023 at 01:57:16PM +0800, Qu Wenruo wrote:
> [PROBLEM]
> It's a known problem that btrfs scrub for RAID56 is pretty slow.
>
> [CAUSE]
> One of the causes is that we read the same data stripes at least twice
> during P/Q stripe scrub.
>
> This means that during a full fs scrub (one scrub process for each
> device), there will be quite a few extra seeks just because of this.
>
> [FIX]
> The truth is, the scrub stripes have a much better view of the data
> stripes: btrfs first verifies all the data stripes, and only continues
> scrubbing the P/Q stripes if all the data stripes are fine after any
> needed repair.
>
> So as long as there are no new RMW writes into the RAID56 block group,
> we can reuse the scrub cache for P/Q verification.
>
> This patchset fixes it by:
>
> - Ensuring the RAID56 block groups are marked read-only for scrub
>   This avoids RMW into the block group, which would otherwise make the
>   scrub cache unreliable.
>
> - Introducing a new interface to pass cached pages to the RAID56 cache
>   The only disadvantage is that we still need to do a page copy here,
>   due to the uncertain lifespan of an rbio.
>
> Qu Wenruo (2):
>   btrfs: scrub: try harder to mark RAID56 block groups read-only
>   btrfs: scrub: use recovered data stripes as cache to avoid unnecessary
>     read

Added to misc-next, thanks. I'd like to batch it with the rest of the
scrub work, but we also need to let it be tested for a while, so it'll be
part of some rc.
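The core idea behind reusing the scrub cache can be sketched outside the kernel: the P stripe in RAID5/6 is the byte-wise XOR of the data stripes, so once scrub has verified (and cached) the data stripes, P can be checked against that cache without re-reading the data from disk. This is a minimal illustration only, not btrfs code; the function names are hypothetical.

```python
def compute_parity(data_stripes):
    """P parity is the byte-wise XOR of all data stripes in the group."""
    parity = bytearray(len(data_stripes[0]))
    for stripe in data_stripes:
        for i, byte in enumerate(stripe):
            parity[i] ^= byte
    return bytes(parity)

def verify_parity(cached_stripes, ondisk_p):
    """Scrub-style check: with verified data stripes already cached,
    the on-disk P stripe can be validated with no extra data reads."""
    return compute_parity(cached_stripes) == bytes(ondisk_p)
```

This also shows why the block group must stay read-only while the cache is in use: any RMW write to a data stripe changes the expected parity, so a stale cache would flag a healthy P stripe as corrupted.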