From patchwork Mon Oct 26 07:11:08 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855801
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 1/8] btrfs: scrub: distinguish scrub_page from regular page
Date: Mon, 26 Oct 2020 15:11:08 +0800
Message-Id: <20201026071115.57225-2-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

There are several call sites where we declare something like
"struct scrub_page *page". This is just asking for trouble when
reading the code, as we also have the scrub_page::page member.

To avoid confusion, use "spage" for scrub_page structure pointers.
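To illustrate the ambiguity the rename avoids, here is a reduced user-space
sketch; the two structures are only stand-ins for the kernel types and the
function names are invented for the example:

#include <stdio.h>

/* Stand-ins for the kernel types; only the naming relationship matters. */
struct page { int dummy; };

struct scrub_page {
	struct page *page;	/* backing memory page */
	int mirror_num;
};

/* Before: "page" can mean either the scrub_page or its page member. */
static void report_confusing(struct scrub_page *page)
{
	printf("mirror %d, backing page %p\n",
	       page->mirror_num, (void *)page->page);
}

/* After: "spage" is always the scrub_page, "spage->page" the page. */
static void report_clear(struct scrub_page *spage)
{
	printf("mirror %d, backing page %p\n",
	       spage->mirror_num, (void *)spage->page);
}

int main(void)
{
	struct page p = { 0 };
	struct scrub_page sp = { .page = &p, .mirror_num = 1 };

	report_confusing(&sp);
	report_clear(&sp);
	return 0;
}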
Signed-off-by: Qu Wenruo Reviewed-by: Josef Bacik --- fs/btrfs/scrub.c | 102 +++++++++++++++++++++++------------------------ 1 file changed, 51 insertions(+), 51 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 806523515d2f..0e25f9391b93 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -256,10 +256,10 @@ static void __scrub_blocked_if_needed(struct btrfs_fs_info *fs_info); static void scrub_blocked_if_needed(struct btrfs_fs_info *fs_info); static void scrub_put_ctx(struct scrub_ctx *sctx); -static inline int scrub_is_page_on_raid56(struct scrub_page *page) +static inline int scrub_is_page_on_raid56(struct scrub_page *spage) { - return page->recover && - (page->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK); + return spage->recover && + (spage->recover->bbio->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK); } static void scrub_pending_bio_inc(struct scrub_ctx *sctx) @@ -1092,11 +1092,11 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) success = 1; for (page_num = 0; page_num < sblock_bad->page_count; page_num++) { - struct scrub_page *page_bad = sblock_bad->pagev[page_num]; + struct scrub_page *spage_bad = sblock_bad->pagev[page_num]; struct scrub_block *sblock_other = NULL; /* skip no-io-error page in scrub */ - if (!page_bad->io_error && !sctx->is_dev_replace) + if (!spage_bad->io_error && !sctx->is_dev_replace) continue; if (scrub_is_page_on_raid56(sblock_bad->pagev[0])) { @@ -1108,7 +1108,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) * sblock_for_recheck array to target device. */ sblock_other = NULL; - } else if (page_bad->io_error) { + } else if (spage_bad->io_error) { /* try to find no-io-error page in mirrors */ for (mirror_index = 0; mirror_index < BTRFS_MAX_MIRRORS && @@ -1147,7 +1147,7 @@ static int scrub_handle_errored_block(struct scrub_block *sblock_to_check) sblock_other, page_num, 0); if (0 == ret) - page_bad->io_error = 0; + spage_bad->io_error = 0; else success = 0; } @@ -1325,13 +1325,13 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, for (mirror_index = 0; mirror_index < nmirrors; mirror_index++) { struct scrub_block *sblock; - struct scrub_page *page; + struct scrub_page *spage; sblock = sblocks_for_recheck + mirror_index; sblock->sctx = sctx; - page = kzalloc(sizeof(*page), GFP_NOFS); - if (!page) { + spage = kzalloc(sizeof(*spage), GFP_NOFS); + if (!spage) { leave_nomem: spin_lock(&sctx->stat_lock); sctx->stat.malloc_errors++; @@ -1339,15 +1339,15 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, scrub_put_recover(fs_info, recover); return -ENOMEM; } - scrub_page_get(page); - sblock->pagev[page_index] = page; - page->sblock = sblock; - page->flags = flags; - page->generation = generation; - page->logical = logical; - page->have_csum = have_csum; + scrub_page_get(spage); + sblock->pagev[page_index] = spage; + spage->sblock = sblock; + spage->flags = flags; + spage->generation = generation; + spage->logical = logical; + spage->have_csum = have_csum; if (have_csum) - memcpy(page->csum, + memcpy(spage->csum, original_sblock->pagev[0]->csum, sctx->csum_size); @@ -1360,23 +1360,23 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, mirror_index, &stripe_index, &stripe_offset); - page->physical = bbio->stripes[stripe_index].physical + + spage->physical = bbio->stripes[stripe_index].physical + stripe_offset; - page->dev = bbio->stripes[stripe_index].dev; + spage->dev = bbio->stripes[stripe_index].dev; 
BUG_ON(page_index >= original_sblock->page_count); - page->physical_for_dev_replace = + spage->physical_for_dev_replace = original_sblock->pagev[page_index]-> physical_for_dev_replace; /* for missing devices, dev->bdev is NULL */ - page->mirror_num = mirror_index + 1; + spage->mirror_num = mirror_index + 1; sblock->page_count++; - page->page = alloc_page(GFP_NOFS); - if (!page->page) + spage->page = alloc_page(GFP_NOFS); + if (!spage->page) goto leave_nomem; scrub_get_recover(recover); - page->recover = recover; + spage->recover = recover; } scrub_put_recover(fs_info, recover); length -= sublen; @@ -1394,19 +1394,19 @@ static void scrub_bio_wait_endio(struct bio *bio) static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info, struct bio *bio, - struct scrub_page *page) + struct scrub_page *spage) { DECLARE_COMPLETION_ONSTACK(done); int ret; int mirror_num; - bio->bi_iter.bi_sector = page->logical >> 9; + bio->bi_iter.bi_sector = spage->logical >> 9; bio->bi_private = &done; bio->bi_end_io = scrub_bio_wait_endio; - mirror_num = page->sblock->pagev[0]->mirror_num; - ret = raid56_parity_recover(fs_info, bio, page->recover->bbio, - page->recover->map_length, + mirror_num = spage->sblock->pagev[0]->mirror_num; + ret = raid56_parity_recover(fs_info, bio, spage->recover->bbio, + spage->recover->map_length, mirror_num, 0); if (ret) return ret; @@ -1431,10 +1431,10 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info, bio_set_dev(bio, first_page->dev->bdev); for (page_num = 0; page_num < sblock->page_count; page_num++) { - struct scrub_page *page = sblock->pagev[page_num]; + struct scrub_page *spage = sblock->pagev[page_num]; - WARN_ON(!page->page); - bio_add_page(bio, page->page, PAGE_SIZE, 0); + WARN_ON(!spage->page); + bio_add_page(bio, spage->page, PAGE_SIZE, 0); } if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) { @@ -1475,24 +1475,24 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info, for (page_num = 0; page_num < sblock->page_count; page_num++) { struct bio *bio; - struct scrub_page *page = sblock->pagev[page_num]; + struct scrub_page *spage = sblock->pagev[page_num]; - if (page->dev->bdev == NULL) { - page->io_error = 1; + if (spage->dev->bdev == NULL) { + spage->io_error = 1; sblock->no_io_error_seen = 0; continue; } - WARN_ON(!page->page); + WARN_ON(!spage->page); bio = btrfs_io_bio_alloc(1); - bio_set_dev(bio, page->dev->bdev); + bio_set_dev(bio, spage->dev->bdev); - bio_add_page(bio, page->page, PAGE_SIZE, 0); - bio->bi_iter.bi_sector = page->physical >> 9; + bio_add_page(bio, spage->page, PAGE_SIZE, 0); + bio->bi_iter.bi_sector = spage->physical >> 9; bio->bi_opf = REQ_OP_READ; if (btrfsic_submit_bio_wait(bio)) { - page->io_error = 1; + spage->io_error = 1; sblock->no_io_error_seen = 0; } @@ -1548,36 +1548,36 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad, struct scrub_block *sblock_good, int page_num, int force_write) { - struct scrub_page *page_bad = sblock_bad->pagev[page_num]; - struct scrub_page *page_good = sblock_good->pagev[page_num]; + struct scrub_page *spage_bad = sblock_bad->pagev[page_num]; + struct scrub_page *spage_good = sblock_good->pagev[page_num]; struct btrfs_fs_info *fs_info = sblock_bad->sctx->fs_info; - BUG_ON(page_bad->page == NULL); - BUG_ON(page_good->page == NULL); + BUG_ON(spage_bad->page == NULL); + BUG_ON(spage_good->page == NULL); if (force_write || sblock_bad->header_error || - sblock_bad->checksum_error || page_bad->io_error) { + sblock_bad->checksum_error || 
spage_bad->io_error) { struct bio *bio; int ret; - if (!page_bad->dev->bdev) { + if (!spage_bad->dev->bdev) { btrfs_warn_rl(fs_info, "scrub_repair_page_from_good_copy(bdev == NULL) is unexpected"); return -EIO; } bio = btrfs_io_bio_alloc(1); - bio_set_dev(bio, page_bad->dev->bdev); - bio->bi_iter.bi_sector = page_bad->physical >> 9; + bio_set_dev(bio, spage_bad->dev->bdev); + bio->bi_iter.bi_sector = spage_bad->physical >> 9; bio->bi_opf = REQ_OP_WRITE; - ret = bio_add_page(bio, page_good->page, PAGE_SIZE, 0); + ret = bio_add_page(bio, spage_good->page, PAGE_SIZE, 0); if (PAGE_SIZE != ret) { bio_put(bio); return -EIO; } if (btrfsic_submit_bio_wait(bio)) { - btrfs_dev_stat_inc_and_print(page_bad->dev, + btrfs_dev_stat_inc_and_print(spage_bad->dev, BTRFS_DEV_STAT_WRITE_ERRS); atomic64_inc(&fs_info->dev_replace.num_write_errors); bio_put(bio);

From patchwork Mon Oct 26 07:11:09 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855799
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 2/8] btrfs: scrub: remove the @force parameter of scrub_pages()
Date: Mon, 26 Oct 2020 15:11:09 +0800
Message-Id: <20201026071115.57225-3-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

The @force parameter for scrub_pages() is to indicate whether we want
to force bio submission.
Currently it is only used for super block scrub, and it can easily be
determined from the @flags, so remove the parameter to make the
parameter list a little shorter.

Signed-off-by: Qu Wenruo Reviewed-by: Josef Bacik --- fs/btrfs/scrub.c | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 0e25f9391b93..3cb51d1f061f 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -237,7 +237,7 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx, struct scrub_page *spage); static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, u64 physical, struct btrfs_device *dev, u64 flags, - u64 gen, int mirror_num, u8 *csum, int force, + u64 gen, int mirror_num, u8 *csum, u64 physical_for_dev_replace); static void scrub_bio_end_io(struct bio *bio); static void scrub_bio_end_io_worker(struct btrfs_work *work); @@ -2152,12 +2152,16 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock) static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, u64 physical, struct btrfs_device *dev, u64 flags, - u64 gen, int mirror_num, u8 *csum, int force, + u64 gen, int mirror_num, u8 *csum, u64 physical_for_dev_replace) { struct scrub_block *sblock; + bool force_submit = false; int index; + if (flags & BTRFS_EXTENT_FLAG_SUPER) + force_submit = true; + sblock = kzalloc(sizeof(*sblock), GFP_KERNEL); if (!sblock) { spin_lock(&sctx->stat_lock); @@ -2231,7 +2235,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, } } - if (force) + if (force_submit) scrub_submit(sctx); } @@ -2442,7 +2446,7 @@ static int scrub_extent(struct scrub_ctx *sctx, struct map_lookup *map, ++sctx->stat.no_csum; } ret = scrub_pages(sctx, logical, l, physical, dev, flags, gen, - mirror_num, have_csum ?
csum : NULL, physical_for_dev_replace); if (ret) return ret; @@ -3707,7 +3711,7 @@ static noinline_for_stack int scrub_supers(struct scrub_ctx *sctx, ret = scrub_pages(sctx, bytenr, BTRFS_SUPER_INFO_SIZE, bytenr, scrub_dev, BTRFS_EXTENT_FLAG_SUPER, gen, i, - NULL, 1, bytenr); + NULL, bytenr); if (ret) return ret; }

From patchwork Mon Oct 26 07:11:10 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855807
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 3/8] btrfs: scrub: use flexible array for scrub_page::csums
Date: Mon, 26 Oct 2020 15:11:10 +0800
Message-Id: <20201026071115.57225-4-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

There are several factors affecting how many checksum bytes are needed
for one scrub_page:

- Sector size and page size
  For the subpage case, one page can contain several sectors, thus the
  number of csum bytes per page will differ.

- Checksum size
  Btrfs now supports different csum sizes, which can vary from 4 bytes
  for CRC32 to 32 bytes for SHA256.

So instead of using the fixed BTRFS_CSUM_SIZE, use a flexible array for
scrub_page::csums and determine the size at scrub_page allocation time.

This not only provides the basis for later subpage scrub support, but
also reduces the memory usage for default btrfs on x86_64, as the
default CRC32 csum only uses 4 bytes; we save 28 bytes for each scrub
page.
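For reference, the allocation-time sizing can be pictured with this
stand-alone user-space sketch; the structure and helper are simplified
stand-ins, and the page/sector/csum sizes are example values rather than
anything read from a real filesystem:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define EXAMPLE_PAGE_SIZE	4096u
#define EXAMPLE_SECTORSIZE	4096u
#define EXAMPLE_CSUM_SIZE	4u	/* CRC32 */

struct example_scrub_page {
	unsigned int have_csum;
	/* Flexible array member: its length is decided at allocation time. */
	unsigned char csums[];
};

static struct example_scrub_page *alloc_example_page(size_t csum_bytes)
{
	/* Base struct plus exactly the csum bytes this page needs. */
	return calloc(1, sizeof(struct example_scrub_page) + csum_bytes);
}

int main(void)
{
	size_t csum_bytes =
		(EXAMPLE_PAGE_SIZE / EXAMPLE_SECTORSIZE) * EXAMPLE_CSUM_SIZE;
	struct example_scrub_page *spage = alloc_example_page(csum_bytes);

	if (!spage)
		return 1;
	spage->have_csum = 1;
	memset(spage->csums, 0xab, csum_bytes);
	printf("base struct %zu bytes, csum area %zu bytes\n",
	       sizeof(*spage), csum_bytes);
	free(spage);
	return 0;
}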
Signed-off-by: Qu Wenruo Reviewed-by: Josef Bacik --- fs/btrfs/scrub.c | 41 ++++++++++++++++++++++++++++++----------- 1 file changed, 30 insertions(+), 11 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 3cb51d1f061f..321d6d457942 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -76,9 +76,14 @@ struct scrub_page { unsigned int have_csum:1; unsigned int io_error:1; }; - u8 csum[BTRFS_CSUM_SIZE]; - struct scrub_recover *recover; + + /* + * The csums size for the page is deteremined by page size, + * sector size and csum size. + * Thus the length has to be determined at runtime. + */ + u8 csums[]; }; struct scrub_bio { @@ -207,6 +212,19 @@ struct full_stripe_lock { struct mutex mutex; }; +static struct scrub_page *alloc_scrub_page(struct scrub_ctx *sctx, gfp_t mask) +{ + u32 sectorsize = sctx->fs_info->sectorsize; + size_t size; + + /* No support for multi-page sector size yet */ + ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize)); + + size = sizeof(struct scrub_page); + size += (PAGE_SIZE / sectorsize) * sctx->csum_size; + return kzalloc(size, mask); +} + static void scrub_pending_bio_inc(struct scrub_ctx *sctx); static void scrub_pending_bio_dec(struct scrub_ctx *sctx); static int scrub_handle_errored_block(struct scrub_block *sblock_to_check); @@ -1330,7 +1348,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, sblock = sblocks_for_recheck + mirror_index; sblock->sctx = sctx; - spage = kzalloc(sizeof(*spage), GFP_NOFS); + spage = alloc_scrub_page(sctx, GFP_NOFS); if (!spage) { leave_nomem: spin_lock(&sctx->stat_lock); @@ -1347,8 +1365,8 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, spage->logical = logical; spage->have_csum = have_csum; if (have_csum) - memcpy(spage->csum, - original_sblock->pagev[0]->csum, + memcpy(spage->csums, + original_sblock->pagev[0]->csums, sctx->csum_size); scrub_stripe_index_and_offset(logical, @@ -1800,7 +1818,7 @@ static int scrub_checksum_data(struct scrub_block *sblock) crypto_shash_init(shash); crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum); - if (memcmp(csum, spage->csum, sctx->csum_size)) + if (memcmp(csum, spage->csums, sctx->csum_size)) sblock->checksum_error = 1; return sblock->checksum_error; @@ -2180,7 +2198,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, struct scrub_page *spage; u64 l = min_t(u64, len, PAGE_SIZE); - spage = kzalloc(sizeof(*spage), GFP_KERNEL); + spage = alloc_scrub_page(sctx, GFP_KERNEL); if (!spage) { leave_nomem: spin_lock(&sctx->stat_lock); @@ -2202,7 +2220,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, spage->mirror_num = mirror_num; if (csum) { spage->have_csum = 1; - memcpy(spage->csum, csum, sctx->csum_size); + memcpy(spage->csums, csum, sctx->csum_size); } else { spage->have_csum = 0; } @@ -2487,7 +2505,9 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, struct scrub_page *spage; u64 l = min_t(u64, len, PAGE_SIZE); - spage = kzalloc(sizeof(*spage), GFP_KERNEL); + BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK); + + spage = alloc_scrub_page(sctx, GFP_KERNEL); if (!spage) { leave_nomem: spin_lock(&sctx->stat_lock); @@ -2496,7 +2516,6 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, scrub_block_put(sblock); return -ENOMEM; } - BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK); /* For scrub block */ scrub_page_get(spage); sblock->pagev[index] = spage; @@ -2512,7 +2531,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, 
spage->mirror_num = mirror_num; if (csum) { spage->have_csum = 1; - memcpy(spage->csum, csum, sctx->csum_size); + memcpy(spage->csums, csum, sctx->csum_size); } else { spage->have_csum = 0; }

From patchwork Mon Oct 26 07:11:11 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855797
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 4/8] btrfs: scrub: refactor scrub_find_csum()
Date: Mon, 26 Oct 2020 15:11:11 +0800
Message-Id: <20201026071115.57225-5-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

Function scrub_find_csum() locates the csum for bytenr @logical in
sctx->csum_list. However, it lacks comments explaining things like how
the csum_list is organized and why we need to drop csum ranges which
are before us.

Refactor the function by:

- Add more comments explaining the behavior
- Add a comment explaining why we need to drop a csum range
- Put the csum copy into the main loop

This is mostly preparation for the incoming patches, to make
scrub_find_csum() able to find multiple checksums.
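The lookup that the refactored scrub_find_csum() performs can be modelled
with the simplified user-space sketch below; an array stands in for the
kernel's bytenr-ordered csum_list, the struct is a cut-down stand-in for
btrfs_ordered_sum, and the sector/csum sizes are example values:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SECTORSIZE	4096u
#define CSUM_SIZE	4u

/* Simplified stand-in for btrfs_ordered_sum: one contiguous csum range. */
struct ordered_sum {
	uint64_t bytenr;		/* start of the range */
	uint64_t len;			/* length, multiple of SECTORSIZE */
	uint8_t sums[16 * CSUM_SIZE];	/* one csum per sector */
};

/*
 * Return true and copy the csum for @logical if some range covers it.
 * Ranges entirely before @logical would be dropped in the kernel code;
 * here we simply skip them, since scrub proceeds in bytenr order.
 */
static bool find_csum(const struct ordered_sum *sums, size_t nr,
		      uint64_t logical, uint8_t *csum)
{
	for (size_t i = 0; i < nr; i++) {
		const struct ordered_sum *sum = &sums[i];

		if (sum->bytenr > logical)
			return false;	/* ranges are ordered: no match */
		if (sum->bytenr + sum->len <= logical)
			continue;	/* range is before us, skip it */

		/* The range covers @logical: pick the per-sector csum. */
		uint64_t index = (logical - sum->bytenr) / SECTORSIZE;

		memcpy(csum, sum->sums + index * CSUM_SIZE, CSUM_SIZE);
		return true;
	}
	return false;
}

int main(void)
{
	struct ordered_sum sums[1] = { {
		.bytenr = 1 << 20,
		.len = 4 * SECTORSIZE,
		.sums = { 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4 },
	} };
	uint8_t csum[CSUM_SIZE];

	if (find_csum(sums, 1, (1 << 20) + 2 * SECTORSIZE, csum))
		printf("found csum starting with %u\n", (unsigned int)csum[0]);
	return 0;
}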
Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 70 +++++++++++++++++++++++++++++++++++------------- 1 file changed, 51 insertions(+), 19 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 321d6d457942..0d078393f986 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -2386,37 +2386,69 @@ static void scrub_block_complete(struct scrub_block *sblock) } } +static void drop_csum_range(struct scrub_ctx *sctx, + struct btrfs_ordered_sum *sum) +{ + u32 sectorsize = sctx->fs_info->sectorsize; + + sctx->stat.csum_discards += sum->len / sectorsize; + list_del(&sum->list); + kfree(sum); +} + +/* + * Find the desired csum for range [@logical, @logical + sectorsize), and + * store the csum into @csum. + * + * The search source is sctx->csum_list, which is a pre-populated list + * storing bytenr ordered csum ranges. + * We're reponsible to cleanup any range that is before @logical. + * + * Return 0 if there is no csum for the range. + * Return 1 if there is csum for the range and copied to @csum. + */ static int scrub_find_csum(struct scrub_ctx *sctx, u64 logical, u8 *csum) { - struct btrfs_ordered_sum *sum = NULL; - unsigned long index; - unsigned long num_sectors; + u32 sectorsize = sctx->fs_info->sectorsize; + u32 csum_size = sctx->csum_size; + bool found = false; while (!list_empty(&sctx->csum_list)) { + struct btrfs_ordered_sum *sum = NULL; + unsigned long index; + unsigned long num_sectors; + sum = list_first_entry(&sctx->csum_list, struct btrfs_ordered_sum, list); + /* The current csum range is beyond our range, no csum found */ if (sum->bytenr > logical) - return 0; - if (sum->bytenr + sum->len > logical) break; - ++sctx->stat.csum_discards; - list_del(&sum->list); - kfree(sum); - sum = NULL; - } - if (!sum) - return 0; + /* + * The current sum is before our bytenr, since scrub is + * always done in bytenr order, the csum will never be used + * anymore, clean it up so that later calls won't bother the + * range, and continue search the next range. 
+ */ + if (sum->bytenr + sum->len <= logical) { + drop_csum_range(sctx, sum); + continue; + } - index = div_u64(logical - sum->bytenr, sctx->fs_info->sectorsize); - ASSERT(index < UINT_MAX); + /* Now the csum range covers our bytenr, copy the csum */ + found = true; + index = div_u64(logical - sum->bytenr, sectorsize); + num_sectors = sum->len / sectorsize; - num_sectors = sum->len / sctx->fs_info->sectorsize; - memcpy(csum, sum->sums + index * sctx->csum_size, sctx->csum_size); - if (index == num_sectors - 1) { - list_del(&sum->list); - kfree(sum); + memcpy(csum, sum->sums + index * csum_size, csum_size); + + /* Cleanup the range if we're at the end of the csum range */ + if (index == num_sectors - 1) + drop_csum_range(sctx, sum); + break; } + if (!found) + return 0; return 1; }

From patchwork Mon Oct 26 07:11:12 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855811
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 5/8] btrfs: scrub: introduce scrub_page::page_len for subpage support
Date: Mon, 26 Oct 2020 15:11:12 +0800
Message-Id: <20201026071115.57225-6-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

Currently scrub_page holds only one csum for each page. This is fine if
page size == sector size; then each page has one csum for it.

But for subpage support, we could have cases where only part of the
page is utilized.
E.g. one 4K sector is read into a 64K page. In that case, we need a way
to determine which range of the page is really utilized.

This patch introduces scrub_page::page_len so that we can know where
the utilized range ends. This is especially important for subpage, as a
write bio can overwrite existing data if we just submit a full-page
bio.

Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 36 +++++++++++++++++++++++++----------- 1 file changed, 25 insertions(+), 11 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 0d078393f986..bad88c651dd4 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -72,9 +72,15 @@ struct scrub_page { u64 physical_for_dev_replace; atomic_t refs; struct { - unsigned int mirror_num:8; - unsigned int have_csum:1; - unsigned int io_error:1; + /* + * For subpage case, where only part of the page is utilized + * Note that 16 bits can only go 65535, not 65536, thus we have + * to use 17 bits here. + */ + u32 page_len:17; + u32 mirror_num:8; + u32 have_csum:1; + u32 io_error:1; }; struct scrub_recover *recover; @@ -217,6 +223,11 @@ static struct scrub_page *alloc_scrub_page(struct scrub_ctx *sctx, gfp_t mask) u32 sectorsize = sctx->fs_info->sectorsize; size_t size; + /* + * The bits in scrub_page::page_len only supports up to 64K page size. + */ + BUILD_BUG_ON(PAGE_SIZE > SZ_64K); + /* No support for multi-page sector size yet */ ASSERT(PAGE_SIZE >= sectorsize && IS_ALIGNED(PAGE_SIZE, sectorsize)); @@ -1359,6 +1370,7 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, } scrub_page_get(spage); sblock->pagev[page_index] = spage; + spage->page_len = sublen; spage->sblock = sblock; spage->flags = flags; spage->generation = generation; @@ -1452,7 +1464,7 @@ static void scrub_recheck_block_on_raid56(struct btrfs_fs_info *fs_info, struct scrub_page *spage = sblock->pagev[page_num]; WARN_ON(!spage->page); - bio_add_page(bio, spage->page, PAGE_SIZE, 0); + bio_add_page(bio, spage->page, spage->page_len, 0); } if (scrub_submit_raid56_bio_wait(fs_info, bio, first_page)) { @@ -1505,7 +1517,7 @@ static void scrub_recheck_block(struct btrfs_fs_info *fs_info, bio = btrfs_io_bio_alloc(1); bio_set_dev(bio, spage->dev->bdev); - bio_add_page(bio, spage->page, PAGE_SIZE, 0); + bio_add_page(bio, spage->page, spage->page_len, 0); bio->bi_iter.bi_sector = spage->physical >> 9; bio->bi_opf = REQ_OP_READ; @@ -1588,8 +1600,8 @@ static int scrub_repair_page_from_good_copy(struct scrub_block *sblock_bad, bio->bi_iter.bi_sector = spage_bad->physical >> 9; bio->bi_opf = REQ_OP_WRITE; - ret = bio_add_page(bio, spage_good->page, PAGE_SIZE, 0); - if (PAGE_SIZE != ret) { + ret = bio_add_page(bio, spage_good->page, spage_good->page_len, 0); + if (ret != spage_good->page_len) { bio_put(bio); return -EIO; } @@ -1685,8 +1697,8 @@ static int scrub_add_page_to_wr_bio(struct scrub_ctx *sctx, goto again; } - ret = bio_add_page(sbio->bio, spage->page, PAGE_SIZE, 0); - if (ret != PAGE_SIZE) { + ret = bio_add_page(sbio->bio, spage->page, spage->page_len, 0); + if (ret != spage->page_len) { if (sbio->page_count < 1) { bio_put(sbio->bio); sbio->bio = NULL; @@ -2033,8 +2045,8 @@ static int scrub_add_page_to_rd_bio(struct scrub_ctx *sctx, } sbio->pagev[sbio->page_count] = spage; - ret = bio_add_page(sbio->bio, spage->page, PAGE_SIZE, 0); - if (ret != PAGE_SIZE) { + ret = bio_add_page(sbio->bio, spage->page, spage->page_len, 0); + if (ret != spage->page_len) { if (sbio->page_count < 1) { bio_put(sbio->bio); sbio->bio = NULL; @@ -2210,6 +2222,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK); scrub_page_get(spage); sblock->pagev[index] = spage; + spage->page_len = l; spage->sblock = sblock; spage->dev = dev; spage->flags = flags; @@ -2554,6 +2567,7 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, /* For scrub parity */ scrub_page_get(spage); list_add_tail(&spage->list, &sparity->spages); + spage->page_len = l; spage->sblock = sblock; spage->dev = dev; spage->flags = flags;
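As a side note on the 17-bit page_len field added above, this small
user-space check (the struct and field names are chosen only for the
illustration) shows why 16 bits would not be enough once the field has to
hold a full 64K length:

#include <stdio.h>

struct len16 { unsigned int page_len:16; };	/* maximum value 65535 */
struct len17 { unsigned int page_len:17; };	/* maximum value 131071 */

int main(void)
{
	struct len16 a = { 0 };
	struct len17 b = { 0 };

	a.page_len = 65535;	/* 64K - 1 still fits in 16 bits ... */
	b.page_len = 65536;	/* ... but a full 64K length needs 17 bits */
	printf("16-bit field holds up to %u, 17-bit field holds %u\n",
	       (unsigned int)a.page_len, (unsigned int)b.page_len);
	return 0;
}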
From patchwork Mon Oct 26 07:11:13 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855803
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 6/8] btrfs: scrub: always allocate one full page for one sector for RAID56
Date: Mon, 26 Oct 2020 15:11:13 +0800
Message-Id: <20201026071115.57225-7-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

For scrub_pages() and scrub_pages_for_parity(), we currently allocate
one scrub_page structure for one page. This is fine if we only
read/write one sector at a time.

But for cases like scrubbing RAID56, we need to read/write the full
stripe, which is 64K in size. For the subpage case, we would submit the
read in just one page, which is normally a good thing, but the RAID56
code only expects to see one sector, not the full stripe, in its endio
function. This could lead to wrong parity checksums for RAID56 on
subpage.
To make the existing code work well for the subpage case, we take a
shortcut here by always allocating a full page for one sector. This
should provide the basis to make RAID56 work for the subpage case.

The cost is pretty obvious now: for one RAID56 stripe we always need 16
pages. For the subpage situation (64K page size, 4K sector size) this
means we need a full megabyte to scrub just one RAID56 stripe. And for
data scrub, each 4K sector will also need one 64K page.

This is mostly just a workaround; the proper fix is a much larger
project: using scrub_block to replace scrub_page, and allowing
scrub_block to handle multiple pages, csums and a csum_bitmap, to avoid
allocating one page for each sector.

Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index bad88c651dd4..10836aec389f 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -2186,6 +2186,7 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, u64 physical_for_dev_replace) { struct scrub_block *sblock; + u32 sectorsize = sctx->fs_info->sectorsize; bool force_submit = false; int index; @@ -2208,7 +2209,15 @@ static int scrub_pages(struct scrub_ctx *sctx, u64 logical, u64 len, for (index = 0; len > 0; index++) { struct scrub_page *spage; - u64 l = min_t(u64, len, PAGE_SIZE); + /* + * Here we will allocate one page for one sector to scrub. + * This is fine if PAGE_SIZE == sectorsize, but will cost + * more memory for PAGE_SIZE > sectorsize case. + * + * TODO: Make scrub_block to handle multiple pages and csums, + * so that we don't need scrub_page structure at all. + */ + u32 l = min_t(u32, sectorsize, len); spage = alloc_scrub_page(sctx, GFP_KERNEL); if (!spage) { @@ -2528,8 +2537,11 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, { struct scrub_ctx *sctx = sparity->sctx; struct scrub_block *sblock; + u32 sectorsize = sctx->fs_info->sectorsize; int index; + ASSERT(IS_ALIGNED(len, sectorsize)); + sblock = kzalloc(sizeof(*sblock), GFP_KERNEL); if (!sblock) { spin_lock(&sctx->stat_lock); @@ -2548,7 +2560,8 @@ static int scrub_pages_for_parity(struct scrub_parity *sparity, for (index = 0; len > 0; index++) { struct scrub_page *spage; - u64 l = min_t(u64, len, PAGE_SIZE); + /* Check scrub_page() for the reason why we use sectorsize */ + u32 l = sectorsize; BUG_ON(index >= SCRUB_MAX_PAGES_PER_BLOCK);

From patchwork Mon Oct 26 07:11:14 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855813
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 7/8] btrfs: scrub: support subpage tree block scrub
Date: Mon, 26 Oct 2020 15:11:14 +0800
Message-Id: <20201026071115.57225-8-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

To support subpage tree block scrub, scrub_checksum_tree_block() only
needs to learn two new tricks:

- Follow scrub_page::page_len
  Now that scrub_page only represents one sector, we need to follow it
  properly.

- Run the checksum on all sectors
  Since scrub_page only represents one sector, we need to run the hash
  on all sectors, no longer just on (nodesize >> PAGE_SHIFT) pages.
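The per-sector hashing loop now has roughly the following shape. This
user-space sketch substitutes a toy additive checksum for the kernel's
crypto_shash API, purely to show how the first sector skips the embedded
csum area while the remaining sectors are fed whole; the node/sector/csum
sizes are example values:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CSUM_SIZE	32u	/* matches BTRFS_CSUM_SIZE */
#define SECTORSIZE	4096u
#define NODESIZE	16384u	/* example: 4 sectors per tree block */

/* Toy stand-in for crypto_shash_update(): just sums the bytes. */
static void toy_update(uint64_t *state, const uint8_t *data, size_t len)
{
	for (size_t i = 0; i < len; i++)
		*state += data[i];
}

int main(void)
{
	static uint8_t sectors[NODESIZE / SECTORSIZE][SECTORSIZE];
	uint64_t state = 0;
	unsigned int num_sectors = NODESIZE / SECTORSIZE;

	memset(sectors, 0x5a, sizeof(sectors));

	/* First sector: skip the on-disk csum area at the start of the header. */
	toy_update(&state, sectors[0] + CSUM_SIZE, SECTORSIZE - CSUM_SIZE);

	/* Remaining sectors: feed their full (page_len-sized) content. */
	for (unsigned int i = 1; i < num_sectors; i++)
		toy_update(&state, sectors[i], SECTORSIZE);

	printf("covered %u bytes, toy checksum %llu\n",
	       NODESIZE - CSUM_SIZE, (unsigned long long)state);
	return 0;
}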
Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 10836aec389f..13355cc2b1ae 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -1841,15 +1841,21 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock) struct scrub_ctx *sctx = sblock->sctx; struct btrfs_header *h; struct btrfs_fs_info *fs_info = sctx->fs_info; + u32 sectorsize = sctx->fs_info->sectorsize; + u32 nodesize = sctx->fs_info->nodesize; SHASH_DESC_ON_STACK(shash, fs_info->csum_shash); u8 calculated_csum[BTRFS_CSUM_SIZE]; u8 on_disk_csum[BTRFS_CSUM_SIZE]; - const int num_pages = sctx->fs_info->nodesize >> PAGE_SHIFT; + const int num_sectors = nodesize / sectorsize; int i; struct scrub_page *spage; char *kaddr; BUG_ON(sblock->page_count < 1); + + /* Each pagev[] is in fact just one sector, not a full page */ + ASSERT(sblock->page_count == num_sectors); + spage = sblock->pagev[0]; kaddr = page_address(spage->page); h = (struct btrfs_header *)kaddr; @@ -1878,11 +1884,11 @@ static int scrub_checksum_tree_block(struct scrub_block *sblock) shash->tfm = fs_info->csum_shash; crypto_shash_init(shash); crypto_shash_update(shash, kaddr + BTRFS_CSUM_SIZE, - PAGE_SIZE - BTRFS_CSUM_SIZE); + spage->page_len - BTRFS_CSUM_SIZE); - for (i = 1; i < num_pages; i++) { + for (i = 1; i < num_sectors; i++) { kaddr = page_address(sblock->pagev[i]->page); - crypto_shash_update(shash, kaddr, PAGE_SIZE); + crypto_shash_update(shash, kaddr, sblock->pagev[i]->page_len); } crypto_shash_final(shash, calculated_csum);

From patchwork Mon Oct 26 07:11:15 2020
X-Patchwork-Submitter: Qu Wenruo
X-Patchwork-Id: 11855805
From: Qu Wenruo
To: linux-btrfs@vger.kernel.org
Subject: [PATCH 8/8] btrfs: scrub: support subpage data scrub
Date: Mon, 26 Oct 2020 15:11:15 +0800
Message-Id: <20201026071115.57225-9-wqu@suse.com>
In-Reply-To: <20201026071115.57225-1-wqu@suse.com>
References: <20201026071115.57225-1-wqu@suse.com>

Btrfs scrub is in fact much more flexible than the buffered data write
path, as we can read unaligned subpage data into page offset 0.

This ability makes subpage support much easier: we just need to check
each scrub_page::page_len and ensure we only calculate the hash for
[0, page_len) of a page, and we can call it a day for subpage scrub
support.

There is a small thing to notice: for the subpage case we still do
sector-by-sector scrub. This means we will submit a read bio for each
sector to scrub, resulting in the same number of read bios as on 4K
page systems. This behavior can be considered a good thing if we want
everything to behave the same as on 4K page systems. But it also means
we are wasting the ability to submit larger bios when using a 64K page
size. That is another problem to consider in the future.

Signed-off-by: Qu Wenruo --- fs/btrfs/scrub.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 13355cc2b1ae..55a357e9416e 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -1824,15 +1824,19 @@ static int scrub_checksum_data(struct scrub_block *sblock) if (!spage->have_csum) return 0; + /* + * In scrub_pages() and scrub_pages_for_parity() we ensure + * each spage only contains just one sector of data. + */ + ASSERT(spage->page_len == sctx->fs_info->sectorsize); kaddr = page_address(spage->page); shash->tfm = fs_info->csum_shash; crypto_shash_init(shash); - crypto_shash_digest(shash, kaddr, PAGE_SIZE, csum); + crypto_shash_digest(shash, kaddr, spage->page_len, csum); if (memcmp(csum, spage->csums, sctx->csum_size)) sblock->checksum_error = 1; - return sblock->checksum_error; }
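To make the [0, page_len) point concrete, here is a small user-space
sketch, again with a toy digest in place of the kernel crypto API; it shows
why hashing the whole 64K page would give a different (and wrong) result
than hashing only the sector that was actually read in:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define EXAMPLE_PAGE_SIZE	65536u	/* 64K page */
#define EXAMPLE_SECTORSIZE	4096u	/* 4K sector */

/* Toy stand-in for crypto_shash_digest(): just sums the bytes. */
static uint64_t toy_digest(const uint8_t *data, size_t len)
{
	uint64_t sum = 0;

	for (size_t i = 0; i < len; i++)
		sum += data[i];
	return sum;
}

int main(void)
{
	static uint8_t page[EXAMPLE_PAGE_SIZE];
	uint32_t page_len = EXAMPLE_SECTORSIZE;	/* only one sector was read */

	/* Real data occupies [0, page_len); the rest of the page is stale. */
	memset(page, 0x11, page_len);
	memset(page + page_len, 0xff, sizeof(page) - page_len);

	printf("digest over page_len:  %llu\n",
	       (unsigned long long)toy_digest(page, page_len));
	printf("digest over PAGE_SIZE: %llu (wrongly includes stale bytes)\n",
	       (unsigned long long)toy_digest(page, sizeof(page)));
	return 0;
}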