[0/8] btrfs: scrub: support subpage scrub (completely independent version)

Message ID 20201026071115.57225-1-wqu@suse.com

Qu Wenruo Oct. 26, 2020, 7:11 a.m. UTC
To my surprise, the scrub functionality is completely independent, thus
it can support subpage much more easily than subpage read/write itself.

Thus here comes the independent scrub support for subpage.

== BACKGROUND ==
With my experimental subpage read/write support, scrub always reports
that my subpage fs is completely corrupted: every tree block and every
data sector is flagged as corrupted.

Thus there must be something wrong with the scrub and subpage.

== CAUSE ==
It turns out that scrub hardcodes PAGE_SIZE all over the place and
always assumes PAGE_SIZE == sectorsize.
The scrub_page structure is in fact more like a scrub_sector, as it
only stores a single sector.

But there is also some good news: since scrub submits its own read and
write bios, it avoids all the hassle of handling page-unaligned
sectors.
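
For reference, here is a trimmed sketch of the current scrub_page
structure (simplified from fs/btrfs/scrub.c of this era, with several
members omitted). Note that everything in it describes exactly one
sector's worth of data:

struct scrub_page {
	struct scrub_block	*sblock;
	struct page		*page;	/* holds the sector's data */
	struct btrfs_device	*dev;
	u64			logical;
	u64			physical;
	u64			physical_for_dev_replace;
	struct {
		unsigned int	mirror_num:8;
		unsigned int	have_csum:1;
		unsigned int	io_error:1;
	};
	u8			csum[BTRFS_CSUM_SIZE];	/* csum of ONE sector */
};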

== WORKAROUND ==
The workaround is pretty straightforward: always store just one sector
in each scrub_page, and teach the scrub_checksum_*() functions to
follow the sector size of each scrub_page.
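
As an illustration, a minimal sketch of the resulting data checksum
path (modeled on the existing scrub_checksum_data(), not the exact
patch code; page_len is the per-sector length that patch 5 introduces,
and field names follow the pre-series code):

/* Hash one sector's worth of data instead of a full PAGE_SIZE. */
static int scrub_checksum_data(struct scrub_block *sblock)
{
	struct scrub_ctx *sctx = sblock->sctx;
	struct btrfs_fs_info *fs_info = sctx->fs_info;
	SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
	struct scrub_page *spage = sblock->pagev[0];
	u8 csum[BTRFS_CSUM_SIZE];
	char *kaddr;

	if (!spage->have_csum)
		return 0;

	kaddr = page_address(spage->page);
	shash->tfm = fs_info->csum_shash;
	crypto_shash_init(shash);

	/* Previously hardcoded PAGE_SIZE; now follow the sector size. */
	crypto_shash_digest(shash, kaddr, spage->page_len, csum);

	if (memcmp(csum, spage->csum, sctx->csum_size))
		sblock->checksum_error = 1;

	return sblock->checksum_error;
}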

The cost is pretty obvious on 64K page size systems.
With a 4K sector size, we need a full 64K page to scrub each 4K
sector, thus allocating 16 times more space than on 4K page size
systems.

But still, the cost should be more or less acceptable for now.

== TODO ==
To properly handle the case, we should get rid of scrub_page completely.

The main objective of scrub_page is just to store the per-sector csum.
In fact all the members like logical/physical/physical_for_dev_replace
can be calculated from scrub_block.

If we can store pages/csums/csums_bitmap in scrub_block, we can easily
do a proper page-based csum check for both data and metadata, and take
advantage of the much larger page size.

But that work is beyond the scope of subpage support; I will take it
on after the subpage functionality is fully complete.
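
Purely to illustrate the direction (a hypothetical layout, not taken
from any patch), such a scrub_block could own the pages and csums
directly:

/*
 * Hypothetical future layout, for illustration only: scrub_block
 * owns the pages and per-sector csums, making scrub_page redundant.
 */
struct scrub_block {
	struct scrub_ctx	*sctx;
	u64			logical;
	u64			physical;
	u64			physical_for_dev_replace;
	int			nr_sectors;
	struct page		*pages[SCRUB_MAX_PAGES_PER_BLOCK];
	u8			*csums;		/* nr_sectors * csum_size */
	unsigned long		csums_bitmap;	/* sectors that have a csum */
};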

== PATCHSET STRUCTURE ==
Patch 01~04:	Small refactors and cleanups spotted during the
		development.
Patch 05~08:	Support for subpage scrub.

All these patches will also be included in the next subpage patchset
update, but considering they are far more independent than the current
subpage patchset, it's still worth submitting them separately.


The support won't change anything for the current sector size ==
PAGE_SIZE cases.
Both 4K and 64K page size systems have been tested.

For subpage, only basic scrub and repair have been tested, and there
are still some blockers for a full fstests run.

Qu Wenruo (8):
  btrfs: scrub: distinguish scrub_page from regular page
  btrfs: scrub: remove the @force parameter of scrub_pages()
  btrfs: scrub: use flexible array for scrub_page::csums
  btrfs: scrub: refactor scrub_find_csum()
  btrfs: scrub: introduce scrub_page::page_len for subpage support
  btrfs: scrub: always allocate one full page for one sector for RAID56
  btrfs: scrub: support subpage tree block scrub
  btrfs: scrub: support subpage data scrub

 fs/btrfs/scrub.c | 292 +++++++++++++++++++++++++++++++----------------
 1 file changed, 192 insertions(+), 100 deletions(-)