
[v2,0/5] btrfs: scrub: improve the scrub performance

Message ID cover.1691044274.git.wqu@suse.com

Message

Qu Wenruo Aug. 3, 2023, 6:33 a.m. UTC
[REPO]
https://github.com/adam900710/linux/tree/scrub_testing

[CHANGELOG]
v2:
- Fix a double accounting error in the last patch
  scrub_stripe_report_errors() is called twice, thus doubling the
  accounting.

v1:
- Rebased to latest misc-next

- Rework the read IO grouping patch
  David found some crashes mostly related to the scrub performance
  fixes; meanwhile the original grouping patch carried one extra flag,
  SCRUB_FLAG_READ_SUBMITTED, to avoid double submission.

  But that flag is unnecessary, as double submission can be avoided
  simply by properly checking the sctx->nr_stripe variable (see the
  sketch after this changelog).

  This reworked grouping read IO patch should be safer compared to the
  initial version, with better code structure.

  Unfortunately, the final performance is worse than the initial version
  (2.2GiB/s vs 2.5GiB/s), but it should be less racy and thus safer.

- Re-order the patches
  The first 3 patches are the main fixes, and I put the safer patches
  first, so even if David still finds a crash at a certain patch, the
  remaining ones can be dropped if needed.
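
For context, here is a minimal sketch of the reworked double-submission
check (the helper names are hypothetical and not taken from the actual
patch; only the sctx->nr_stripe check comes from this cover letter):

/* Sketch only: flush the currently grouped stripe reads at most once. */
static void scrub_flush_grouped_reads(struct scrub_ctx *sctx)
{
	/* Nothing queued (or already flushed), nothing can be submitted twice. */
	if (sctx->nr_stripe == 0)
		return;

	submit_grouped_stripe_reads(sctx);	/* hypothetical submit helper */
	wait_for_grouped_stripe_reads(sctx);	/* hypothetical wait helper */
	sctx->nr_stripe = 0;
}

With such a check, a SCRUB_FLAG_READ_SUBMITTED style flag is no longer
needed: a second flush call simply sees nr_stripe == 0 and returns.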

The v6.4 kernel introduced a huge scrub performance drop: for large data
extents, scrub performance is only around 1/3 of what it used to be.

There are several causes:

- Missing blk plug
  This means read requests won't be merged by the block layer, which can
  hugely reduce read performance (see the blk plug sketch after this
  list).

- Extra time spent on extent/csum tree searches
  This includes extra path allocation/freeing and tree searches.
  It is especially obvious for large data extents: previously we did one
  csum search per 512K, but now we do one csum search per 64K, an 8x
  increase in csum tree searches (see the path reuse sketch after this
  list).

- Less concurrency
  Mostly due to the fact that we're doing submit-and-wait, which means a
  much lower queue depth, hurting devices like NVMe that benefit a lot
  from high concurrency.
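
To illustrate the first cause, a minimal sketch of the blk plug pattern
(blk_start_plug()/blk_finish_plug() are the regular block layer API;
scrub_submit_stripe_reads() is a hypothetical placeholder for the code
that queues the read bios):

#include <linux/blkdev.h>

/*
 * Sketch only: keep bio submission inside a plug so the block layer can
 * merge adjacent read requests before they reach the device.
 */
static void scrub_submit_with_plug(struct scrub_ctx *sctx)
{
	struct blk_plug plug;

	blk_start_plug(&plug);
	scrub_submit_stripe_reads(sctx);	/* hypothetical: queue all stripe reads */
	blk_finish_plug(&plug);			/* merged bios are issued on unplug */
}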
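
And for the second cause, a sketch of reusing a single btrfs_path across
the per-64K csum lookups of a stripe group, instead of allocating and
freeing a path for each one (lookup_csums_for_stripe() is a hypothetical
stand-in for the real helper):

/*
 * Sketch only: one path allocation per stripe group, reused for every
 * 64K (BTRFS_STRIPE_LEN) csum lookup inside it.
 */
static int scrub_find_csums_cached(struct scrub_ctx *sctx, u64 start, u64 len)
{
	struct btrfs_path *path;
	u64 cur;
	int ret = 0;

	path = btrfs_alloc_path();
	if (!path)
		return -ENOMEM;

	for (cur = start; cur < start + len; cur += BTRFS_STRIPE_LEN) {
		/* The cached path avoids a full search from the tree root each time. */
		ret = lookup_csums_for_stripe(sctx, path, cur);	/* hypothetical */
		if (ret < 0)
			break;
	}
	btrfs_free_path(path);
	return ret;
}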

The first 3 patches greatly improve the scrub read performance, but
unfortunately it's still not as fast as the pre-6.4 kernels
(2.2GiB/s vs 3.0GiB/s). It is, however, still much better than the 6.4
kernels (2.2GiB/s vs 1.0GiB/s).

Qu Wenruo (5):
  btrfs: scrub: avoid unnecessary extent tree search preparing stripes
  btrfs: scrub: avoid unnecessary csum tree search preparing stripes
  btrfs: scrub: fix grouping of read IO
  btrfs: scrub: don't go ordered workqueue for dev-replace
  btrfs: scrub: move write back of repaired sectors into
    scrub_stripe_read_repair_worker()

 fs/btrfs/file-item.c |  33 +++---
 fs/btrfs/file-item.h |   6 +-
 fs/btrfs/raid56.c    |   4 +-
 fs/btrfs/scrub.c     | 235 ++++++++++++++++++++++++++-----------------
 4 files changed, 169 insertions(+), 109 deletions(-)

Comments

David Sterba Aug. 10, 2023, 6:09 p.m. UTC | #1
On Thu, Aug 03, 2023 at 02:33:28PM +0800, Qu Wenruo wrote:
> [REPO]
> https://github.com/adam900710/linux/tree/scrub_testing
> 
> [CHANGELOG]
> v2:
> - Fix a double accounting error in the last patch
>   scrub_stripe_report_errors() is called twice, thus doubling the
>   accounting.

I've added the series to for-next. Current plan is to get it to 6.5
eventually and then backport to 6.4. I need to review it more carefully
than last time the scrub rewrite got merged and also give it a test on
NVMe drives myself. Fallback plan is 6.6 and then do the backports.
We're approaching 6.5 final and even though it's a big regression I
don't want to introduce bugs given the remaining time to fix them.
David Sterba Aug. 15, 2023, 8:52 p.m. UTC | #2
On Thu, Aug 10, 2023 at 08:09:05PM +0200, David Sterba wrote:
> On Thu, Aug 03, 2023 at 02:33:28PM +0800, Qu Wenruo wrote:
> > [REPO]
> > https://github.com/adam900710/linux/tree/scrub_testing
> > 
> > [CHANGELOG]
> > v2:
> > - Fix a double accounting error in the last patch
> >   scrub_stripe_report_errors() is called twice, thus doubling the
> >   accounting.
> 
> I've added the series to for-next. Current plan is to get it to 6.5
> eventually and then backport to 6.4. I need to review it more carefully
> than last time the scrub rewrite got merged and also give it a test on
> NVMe drives myself. Fallback plan is 6.6 and then do the backports.
> We're approaching 6.5 final and even though it's a big regression I
> don't want to introduce bugs given the remaining time to fix them.

Moved from for-next to misc-next, with the fixup switching the scrub
context allocation to kvzalloc due to its increased size. As mentioned
on Slack, the testing was not conclusive: I can't reproduce the slow
scrub on single or raid0 profiles, both versions go up to full speed
(3G/s on a PCIe device, but in a VM so there are caching effects). But
given the time, we need to move forward, so I've added the series to
misc-next.
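
For reference, the kind of fixup mentioned above looks roughly like this
(function names are made up for illustration; the point is just that the
now-larger scrub context is allocated with kvzalloc(), which can fall
back to vmalloc, and freed with the matching kvfree()):

/* Sketch only: allocate and free a large scrub context. */
static struct scrub_ctx *scrub_alloc_ctx_sketch(void)
{
	struct scrub_ctx *sctx;

	sctx = kvzalloc(sizeof(*sctx), GFP_KERNEL);
	if (!sctx)
		return NULL;
	/* ... normal sctx initialization would go here ... */
	return sctx;
}

static void scrub_free_ctx_sketch(struct scrub_ctx *sctx)
{
	kvfree(sctx);
}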