From patchwork Wed Jun 15 15:15:11 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12882515
From: Christoph Hellwig
To: David Sterba, Josef Bacik, Qu Wenruo
Cc: linux-btrfs@vger.kernel.org
Subject:
[PATCH 1/5] btrfs: remove a bunch of pointless stripe_len arguments
Date: Wed, 15 Jun 2022 17:15:11 +0200
Message-Id: <20220615151515.888424-2-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220615151515.888424-1-hch@lst.de>
References: <20220615151515.888424-1-hch@lst.de>

The raid56 code assumes a fixed stripe length.  To start simplifying the
bio mapping code, remove a few arguments and explicitly hard code that
fixed stripe length.

Partially based on a patch from Qu Wenruo.

Signed-off-by: Christoph Hellwig
Reviewed-by: Qu Wenruo
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/raid56.c  | 61 ++++++++++++++++++++--------------------
 fs/btrfs/raid56.h  | 12 +++------
 fs/btrfs/scrub.c   |  9 +++----
 fs/btrfs/volumes.c | 13 ++++------
 4 files changed, 39 insertions(+), 56 deletions(-)

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index f002334d244a7..e071648f2c591 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -474,9 +474,9 @@ static int rbio_is_full(struct btrfs_raid_bio *rbio)
 	int ret = 1;
 
 	spin_lock_irqsave(&rbio->bio_list_lock, flags);
-	if (size != rbio->nr_data * rbio->stripe_len)
+	if (size != rbio->nr_data * BTRFS_STRIPE_LEN)
 		ret = 0;
-	BUG_ON(size > rbio->nr_data * rbio->stripe_len);
+	BUG_ON(size > rbio->nr_data * BTRFS_STRIPE_LEN);
 	spin_unlock_irqrestore(&rbio->bio_list_lock, flags);
 
 	return ret;
@@ -913,19 +913,18 @@ static struct sector_ptr *sector_in_rbio(struct btrfs_raid_bio *rbio,
  * this does not allocate any pages for rbio->pages.
*/ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info, - struct btrfs_io_context *bioc, - u32 stripe_len) + struct btrfs_io_context *bioc) { const unsigned int real_stripes = bioc->num_stripes - bioc->num_tgtdevs; - const unsigned int stripe_npages = stripe_len >> PAGE_SHIFT; + const unsigned int stripe_npages = BTRFS_STRIPE_LEN >> PAGE_SHIFT; const unsigned int num_pages = stripe_npages * real_stripes; - const unsigned int stripe_nsectors = stripe_len >> fs_info->sectorsize_bits; + const unsigned int stripe_nsectors = + BTRFS_STRIPE_LEN >> fs_info->sectorsize_bits; const unsigned int num_sectors = stripe_nsectors * real_stripes; struct btrfs_raid_bio *rbio; int nr_data = 0; void *p; - ASSERT(IS_ALIGNED(stripe_len, PAGE_SIZE)); /* PAGE_SIZE must also be aligned to sectorsize for subpage support */ ASSERT(IS_ALIGNED(PAGE_SIZE, fs_info->sectorsize)); /* @@ -949,7 +948,6 @@ static struct btrfs_raid_bio *alloc_rbio(struct btrfs_fs_info *fs_info, INIT_LIST_HEAD(&rbio->stripe_cache); INIT_LIST_HEAD(&rbio->hash_list); rbio->bioc = bioc; - rbio->stripe_len = stripe_len; rbio->nr_pages = num_pages; rbio->nr_sectors = num_sectors; rbio->real_stripes = real_stripes; @@ -1026,7 +1024,6 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, struct sector_ptr *sector, unsigned int stripe_nr, unsigned int sector_nr, - unsigned long bio_max_len, unsigned int opf) { const u32 sectorsize = rbio->bioc->fs_info->sectorsize; @@ -1071,7 +1068,8 @@ static int rbio_add_io_sector(struct btrfs_raid_bio *rbio, } /* put a new bio on the list */ - bio = bio_alloc(stripe->dev->bdev, max(bio_max_len >> PAGE_SHIFT, 1UL), + bio = bio_alloc(stripe->dev->bdev, + max(BTRFS_STRIPE_LEN >> PAGE_SHIFT, 1), opf, GFP_NOFS); bio->bi_iter.bi_sector = disk_start >> 9; bio->bi_private = rbio; @@ -1293,8 +1291,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) } ret = rbio_add_io_sector(rbio, &bio_list, sector, stripe, - sectornr, rbio->stripe_len, - REQ_OP_WRITE); + 
sectornr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -1333,8 +1330,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio) ret = rbio_add_io_sector(rbio, &bio_list, sector, rbio->bioc->tgtdev_map[stripe], - sectornr, rbio->stripe_len, - REQ_OP_WRITE); + sectornr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -1379,7 +1375,7 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio, for (i = 0; i < rbio->bioc->num_stripes; i++) { stripe = &rbio->bioc->stripes[i]; - if (in_range(physical, stripe->physical, rbio->stripe_len) && + if (in_range(physical, stripe->physical, BTRFS_STRIPE_LEN) && stripe->dev->bdev && bio->bi_bdev == stripe->dev->bdev) { return i; } @@ -1401,7 +1397,7 @@ static int find_logical_bio_stripe(struct btrfs_raid_bio *rbio, for (i = 0; i < rbio->nr_data; i++) { u64 stripe_start = rbio->bioc->raid_map[i]; - if (in_range(logical, stripe_start, rbio->stripe_len)) + if (in_range(logical, stripe_start, BTRFS_STRIPE_LEN)) return i; } return -1; @@ -1586,8 +1582,7 @@ static int raid56_rmw_stripe(struct btrfs_raid_bio *rbio) continue; ret = rbio_add_io_sector(rbio, &bio_list, sector, - stripe, sectornr, rbio->stripe_len, - REQ_OP_READ); + stripe, sectornr, REQ_OP_READ); if (ret) goto cleanup; } @@ -1796,7 +1791,7 @@ static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio) ASSERT(orig_logical >= full_stripe_start && orig_logical + orig_len <= full_stripe_start + - rbio->nr_data * rbio->stripe_len); + rbio->nr_data * BTRFS_STRIPE_LEN); bio_list_add(&rbio->bio_list, orig_bio); rbio->bio_list_bytes += orig_bio->bi_iter.bi_size; @@ -1814,7 +1809,7 @@ static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio) /* * our main entry point for writes from the rest of the FS. 
*/ -int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc, u32 stripe_len) +int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; @@ -1822,7 +1817,7 @@ int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc, u32 stri struct blk_plug_cb *cb; int ret; - rbio = alloc_rbio(fs_info, bioc, stripe_len); + rbio = alloc_rbio(fs_info, bioc); if (IS_ERR(rbio)) { btrfs_put_bioc(bioc); return PTR_ERR(rbio); @@ -2147,8 +2142,7 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio) continue; ret = rbio_add_io_sector(rbio, &bio_list, sector, stripe, - sectornr, rbio->stripe_len, - REQ_OP_READ); + sectornr, REQ_OP_READ); if (ret < 0) goto cleanup; } @@ -2206,7 +2200,7 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio) * of the drive. */ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, - u32 stripe_len, int mirror_num, int generic_io) + int mirror_num, int generic_io) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; @@ -2217,7 +2211,7 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, btrfs_bio(bio)->mirror_num = mirror_num; } - rbio = alloc_rbio(fs_info, bioc, stripe_len); + rbio = alloc_rbio(fs_info, bioc); if (IS_ERR(rbio)) { if (generic_io) btrfs_put_bioc(bioc); @@ -2311,14 +2305,14 @@ static void read_rebuild_work(struct work_struct *work) struct btrfs_raid_bio *raid56_parity_alloc_scrub_rbio(struct bio *bio, struct btrfs_io_context *bioc, - u32 stripe_len, struct btrfs_device *scrub_dev, + struct btrfs_device *scrub_dev, unsigned long *dbitmap, int stripe_nsectors) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; int i; - rbio = alloc_rbio(fs_info, bioc, stripe_len); + rbio = alloc_rbio(fs_info, bioc); if (IS_ERR(rbio)) return NULL; bio_list_add(&rbio->bio_list, bio); @@ -2363,7 +2357,7 @@ void 
raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page, ASSERT(logical >= rbio->bioc->raid_map[0]); ASSERT(logical + sectorsize <= rbio->bioc->raid_map[0] + - rbio->stripe_len * rbio->nr_data); + BTRFS_STRIPE_LEN * rbio->nr_data); stripe_offset = (int)(logical - rbio->bioc->raid_map[0]); index = stripe_offset / sectorsize; rbio->bio_sectors[index].page = page; @@ -2519,7 +2513,7 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, sector = rbio_stripe_sector(rbio, rbio->scrubp, sectornr); ret = rbio_add_io_sector(rbio, &bio_list, sector, rbio->scrubp, - sectornr, rbio->stripe_len, REQ_OP_WRITE); + sectornr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -2533,7 +2527,7 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio, sector = rbio_stripe_sector(rbio, rbio->scrubp, sectornr); ret = rbio_add_io_sector(rbio, &bio_list, sector, bioc->tgtdev_map[rbio->scrubp], - sectornr, rbio->stripe_len, REQ_OP_WRITE); + sectornr, REQ_OP_WRITE); if (ret) goto cleanup; } @@ -2700,7 +2694,7 @@ static void raid56_parity_scrub_stripe(struct btrfs_raid_bio *rbio) continue; ret = rbio_add_io_sector(rbio, &bio_list, sector, stripe, - sectornr, rbio->stripe_len, REQ_OP_READ); + sectornr, REQ_OP_READ); if (ret) goto cleanup; } @@ -2765,13 +2759,12 @@ void raid56_parity_submit_scrub_rbio(struct btrfs_raid_bio *rbio) /* The following code is used for dev replace of a missing RAID 5/6 device. 
*/ struct btrfs_raid_bio * -raid56_alloc_missing_rbio(struct bio *bio, struct btrfs_io_context *bioc, - u64 length) +raid56_alloc_missing_rbio(struct bio *bio, struct btrfs_io_context *bioc) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; - rbio = alloc_rbio(fs_info, bioc, length); + rbio = alloc_rbio(fs_info, bioc); if (IS_ERR(rbio)) return NULL; diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h index 3b22657ca857e..9e4e0501e4e89 100644 --- a/fs/btrfs/raid56.h +++ b/fs/btrfs/raid56.h @@ -56,9 +56,6 @@ struct btrfs_raid_bio { */ enum btrfs_rbio_ops operation; - /* Size of each individual stripe on disk */ - u32 stripe_len; - /* How many pages there are for the full stripe including P/Q */ u16 nr_pages; @@ -179,21 +176,20 @@ static inline int nr_data_stripes(const struct map_lookup *map) struct btrfs_device; int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, - u32 stripe_len, int mirror_num, int generic_io); -int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc, u32 stripe_len); + int mirror_num, int generic_io); +int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc); void raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page, unsigned int pgoff, u64 logical); struct btrfs_raid_bio *raid56_parity_alloc_scrub_rbio(struct bio *bio, - struct btrfs_io_context *bioc, u32 stripe_len, + struct btrfs_io_context *bioc, struct btrfs_device *scrub_dev, unsigned long *dbitmap, int stripe_nsectors); void raid56_parity_submit_scrub_rbio(struct btrfs_raid_bio *rbio); struct btrfs_raid_bio * -raid56_alloc_missing_rbio(struct bio *bio, struct btrfs_io_context *bioc, - u64 length); +raid56_alloc_missing_rbio(struct bio *bio, struct btrfs_io_context *bioc); void raid56_submit_missing_rbio(struct btrfs_raid_bio *rbio); int btrfs_alloc_stripe_hash_table(struct btrfs_fs_info *info); diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index db700e6ec5a93..18986d062cf63 100644 --- 
a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -1216,7 +1216,6 @@ static inline int scrub_nr_raid_mirrors(struct btrfs_io_context *bioc) static inline void scrub_stripe_index_and_offset(u64 logical, u64 map_type, u64 *raid_map, - u64 mapped_length, int nstripes, int mirror, int *stripe_index, u64 *stripe_offset) @@ -1231,7 +1230,7 @@ static inline void scrub_stripe_index_and_offset(u64 logical, u64 map_type, continue; if (logical >= raid_map[i] && - logical < raid_map[i] + mapped_length) + logical < raid_map[i] + BTRFS_STRIPE_LEN) break; } @@ -1335,7 +1334,6 @@ static int scrub_setup_recheck_block(struct scrub_block *original_sblock, scrub_stripe_index_and_offset(logical, bioc->map_type, bioc->raid_map, - mapped_length, bioc->num_stripes - bioc->num_tgtdevs, mirror_index, @@ -1387,7 +1385,6 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info, mirror_num = sector->sblock->sectors[0]->mirror_num; ret = raid56_parity_recover(bio, sector->recover->bioc, - sector->recover->map_length, mirror_num, 0); if (ret) return ret; @@ -2195,7 +2192,7 @@ static void scrub_missing_raid56_pages(struct scrub_block *sblock) bio->bi_private = sblock; bio->bi_end_io = scrub_missing_raid56_end_io; - rbio = raid56_alloc_missing_rbio(bio, bioc, length); + rbio = raid56_alloc_missing_rbio(bio, bioc); if (!rbio) goto rbio_out; @@ -2829,7 +2826,7 @@ static void scrub_parity_check_and_repair(struct scrub_parity *sparity) bio->bi_private = sparity; bio->bi_end_io = scrub_parity_bio_endio; - rbio = raid56_parity_alloc_scrub_rbio(bio, bioc, length, + rbio = raid56_parity_alloc_scrub_rbio(bio, bioc, sparity->scrub_dev, &sparity->dbitmap, sparity->nsectors); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 12a6150ee19d2..07b1e005d89df 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -6459,6 +6459,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info, } } else if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) { + ASSERT(map->stripe_len == 
BTRFS_STRIPE_LEN);
 		if (need_raid_map && (need_full_stripe(op) || mirror_num > 1)) {
 			/* push stripe_nr back to the start of the full stripe */
 			stripe_nr = div64_u64(raid56_full_stripe_start,
@@ -6756,14 +6757,10 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 
 	if ((bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
 	    ((btrfs_op(bio) == BTRFS_MAP_WRITE) || (mirror_num > 1))) {
-		/* In this case, map_length has been set to the length of
-		   a single stripe; not the whole write */
-		if (btrfs_op(bio) == BTRFS_MAP_WRITE) {
-			ret = raid56_parity_write(bio, bioc, map_length);
-		} else {
-			ret = raid56_parity_recover(bio, bioc, map_length,
-						    mirror_num, 1);
-		}
+		if (btrfs_op(bio) == BTRFS_MAP_WRITE)
+			ret = raid56_parity_write(bio, bioc);
+		else
+			ret = raid56_parity_recover(bio, bioc, mirror_num, 1);
 		goto out_dec;
 	}

From patchwork Wed Jun 15 15:15:12 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12882516
From: Christoph Hellwig
To: David Sterba, Josef Bacik, Qu Wenruo
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 2/5] btrfs: return proper mapped length for RAID56 profiles in __btrfs_map_block()
Date: Wed, 15 Jun 2022 17:15:12 +0200
Message-Id: <20220615151515.888424-3-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220615151515.888424-1-hch@lst.de>
References: <20220615151515.888424-1-hch@lst.de>

From: Qu Wenruo

For profiles other than RAID56, __btrfs_map_block() returns @map_length
as min(stripe_end, logical + *length), which is also the same result
from btrfs_get_io_geometry().

But for RAID56, __btrfs_map_block() returns @map_length as stripe_len.

This strange behavior is going to hurt the incoming bio split at
btrfs_map_bio() time, as we will use @map_length as the bio split size.
Fix this behavior by returning @map_length using the same calculation as
for the other profiles.

Signed-off-by: Qu Wenruo
[hch: rebased]
Signed-off-by: Christoph Hellwig
Reviewed-by: Johannes Thumshirn
---
 fs/btrfs/volumes.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 07b1e005d89df..ff777e39d5f4a 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6469,7 +6469,10 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		num_stripes = map->num_stripes;
 		max_errors = nr_parity_stripes(map);
 
-		*length = map->stripe_len;
+		/* Return the length to the full stripe end */
+		*length = min(raid56_full_stripe_start + em->start +
+			      data_stripes * stripe_len,
+			      logical + *length) - logical;
 		stripe_index = 0;
 		stripe_offset = 0;
 	} else {

From patchwork Wed Jun 15 15:15:13 2022
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 12882517
From: Christoph Hellwig
To: David Sterba, Josef Bacik, Qu Wenruo
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 3/5] btrfs: remove the btrfs_map_bio return value
Date: Wed, 15 Jun 2022 17:15:13 +0200
Message-Id: <20220615151515.888424-4-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220615151515.888424-1-hch@lst.de>
References: <20220615151515.888424-1-hch@lst.de>

Always consume the bio and call the end_io handler on error instead of
returning an error and letting the caller handle it.  This matches what
the block layer submission does and avoids any confusion on who needs to
handle errors.

As this requires touching all the callers, rename the function to
btrfs_submit_bio, which describes the functionality much better.
Signed-off-by: Christoph Hellwig Reviewed-by: Qu Wenruo Reviewed-by: Johannes Thumshirn --- fs/btrfs/compression.c | 8 ++------ fs/btrfs/disk-io.c | 21 ++++++++++----------- fs/btrfs/inode.c | 25 ++++++++++--------------- fs/btrfs/volumes.c | 13 ++++++++----- fs/btrfs/volumes.h | 4 ++-- 5 files changed, 32 insertions(+), 39 deletions(-) diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c index 63d542961b78a..907fc8a4c092c 100644 --- a/fs/btrfs/compression.c +++ b/fs/btrfs/compression.c @@ -593,9 +593,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, } ASSERT(bio->bi_iter.bi_size); - ret = btrfs_map_bio(fs_info, bio, 0); - if (ret) - goto finish_cb; + btrfs_submit_bio(fs_info, bio, 0); bio = NULL; } cond_resched(); @@ -931,9 +929,7 @@ void btrfs_submit_compressed_read(struct inode *inode, struct bio *bio, sums += fs_info->csum_size * nr_sectors; ASSERT(comp_bio->bi_iter.bi_size); - ret = btrfs_map_bio(fs_info, comp_bio, mirror_num); - if (ret) - goto finish_cb; + btrfs_submit_bio(fs_info, comp_bio, mirror_num); comp_bio = NULL; } } diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c index 800ad3a9c68ed..5df6865428a5c 100644 --- a/fs/btrfs/disk-io.c +++ b/fs/btrfs/disk-io.c @@ -728,7 +728,6 @@ static void run_one_async_done(struct btrfs_work *work) { struct async_submit_bio *async; struct inode *inode; - blk_status_t ret; async = container_of(work, struct async_submit_bio, work); inode = async->inode; @@ -746,11 +745,7 @@ static void run_one_async_done(struct btrfs_work *work) * This changes nothing when cgroups aren't in use. 
*/ async->bio->bi_opf |= REQ_CGROUP_PUNT; - ret = btrfs_map_bio(btrfs_sb(inode->i_sb), async->bio, async->mirror_num); - if (ret) { - async->bio->bi_status = ret; - bio_endio(async->bio); - } + btrfs_submit_bio(btrfs_sb(inode->i_sb), async->bio, async->mirror_num); } static void run_one_async_free(struct btrfs_work *work) @@ -814,7 +809,7 @@ static blk_status_t btree_submit_bio_start(struct inode *inode, struct bio *bio, { /* * when we're called for a write, we're already in the async - * submission context. Just jump into btrfs_map_bio + * submission context. Just jump into btrfs_submit_bio. */ return btree_csum_one_bio(bio); } @@ -839,11 +834,15 @@ void btrfs_submit_metadata_bio(struct inode *inode, struct bio *bio, int mirror_ bio->bi_opf |= REQ_META; if (btrfs_op(bio) != BTRFS_MAP_WRITE) { - ret = btrfs_map_bio(fs_info, bio, mirror_num); - } else if (!should_async_write(fs_info, BTRFS_I(inode))) { + btrfs_submit_bio(fs_info, bio, mirror_num); + return; + } + if (!should_async_write(fs_info, BTRFS_I(inode))) { ret = btree_csum_one_bio(bio); - if (!ret) - ret = btrfs_map_bio(fs_info, bio, mirror_num); + if (!ret) { + btrfs_submit_bio(fs_info, bio, mirror_num); + return; + } } else { /* * kthread helpers are used to submit writes so that diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c index 7a54f964ff378..6f665bf59f620 100644 --- a/fs/btrfs/inode.c +++ b/fs/btrfs/inode.c @@ -2617,7 +2617,8 @@ void btrfs_submit_data_write_bio(struct inode *inode, struct bio *bio, int mirro goto out; } } - ret = btrfs_map_bio(fs_info, bio, mirror_num); + btrfs_submit_bio(fs_info, bio, mirror_num); + return; out: if (ret) { bio->bi_status = ret; @@ -2645,14 +2646,13 @@ void btrfs_submit_data_read_bio(struct inode *inode, struct bio *bio, * not, which is why we ignore skip_sum here. 
*/ ret = btrfs_lookup_bio_sums(inode, bio, NULL); - if (ret) - goto out; - ret = btrfs_map_bio(fs_info, bio, mirror_num); -out: if (ret) { bio->bi_status = ret; bio_endio(bio); + return; } + + btrfs_submit_bio(fs_info, bio, mirror_num); } /* @@ -7863,8 +7863,7 @@ static void submit_dio_repair_bio(struct inode *inode, struct bio *bio, BUG_ON(bio_op(bio) == REQ_OP_WRITE); refcount_inc(&dip->refs); - if (btrfs_map_bio(fs_info, bio, mirror_num)) - refcount_dec(&dip->refs); + btrfs_submit_bio(fs_info, bio, mirror_num); } static blk_status_t btrfs_check_read_dio_bio(struct btrfs_dio_private *dip, @@ -7972,7 +7971,8 @@ static inline blk_status_t btrfs_submit_dio_bio(struct bio *bio, file_offset - dip->file_offset); } map: - return btrfs_map_bio(fs_info, bio, 0); + btrfs_submit_bio(fs_info, bio, 0); + return BLK_STS_OK; } static void btrfs_submit_direct(const struct iomap_iter *iter, @@ -10277,7 +10277,6 @@ static blk_status_t submit_encoded_read_bio(struct btrfs_inode *inode, struct bio *bio, int mirror_num) { struct btrfs_encoded_read_private *priv = bio->bi_private; - struct btrfs_bio *bbio = btrfs_bio(bio); struct btrfs_fs_info *fs_info = inode->root->fs_info; blk_status_t ret; @@ -10288,12 +10287,8 @@ static blk_status_t submit_encoded_read_bio(struct btrfs_inode *inode, } atomic_inc(&priv->pending); - ret = btrfs_map_bio(fs_info, bio, mirror_num); - if (ret) { - atomic_dec(&priv->pending); - btrfs_bio_free_csum(bbio); - } - return ret; + btrfs_submit_bio(fs_info, bio, mirror_num); + return BLK_STS_OK; } static blk_status_t btrfs_encoded_read_verify_csum(struct btrfs_bio *bbio) diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index ff777e39d5f4a..739676944e94d 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -6724,8 +6724,8 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc, } } btrfs_debug_in_rcu(fs_info, - "btrfs_map_bio: rw %d 0x%x, sector=%llu, dev=%lu (%s id %llu), size=%u", - bio_op(bio), bio->bi_opf, bio->bi_iter.bi_sector, + "%s: rw 
%d 0x%x, sector=%llu, dev=%lu (%s id %llu), size=%u",
+		__func__, bio_op(bio), bio->bi_opf, bio->bi_iter.bi_sector,
 		(unsigned long)dev->bdev->bd_dev, rcu_str_deref(dev->name),
 		dev->devid, bio->bi_iter.bi_size);
@@ -6735,8 +6735,8 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc,
 	submit_bio(bio);
 }
 
-blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			   int mirror_num)
+void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
+		      int mirror_num)
 {
 	u64 logical = bio->bi_iter.bi_sector << 9;
 	u64 length = bio->bi_iter.bi_size;
@@ -6781,7 +6781,10 @@ blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 	}
 out_dec:
 	btrfs_bio_counter_dec(fs_info);
-	return errno_to_blk_status(ret);
+	if (ret) {
+		bio->bi_status = errno_to_blk_status(ret);
+		bio_endio(bio);
+	}
 }
 
 static bool dev_args_match_fs_devices(const struct btrfs_dev_lookup_args *args,
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 588367c76c463..c0f5bbba9c6ac 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -573,8 +573,8 @@ int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info);
 struct btrfs_block_group *btrfs_create_chunk(struct btrfs_trans_handle *trans,
 					     u64 type);
 void btrfs_mapping_tree_free(struct extent_map_tree *tree);
-blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
-			   int mirror_num);
+void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
+		      int mirror_num);
 int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
 		       fmode_t flags, void *holder);
 struct btrfs_device *btrfs_scan_one_device(const char *path,

From patchwork Wed Jun 15 15:15:14 2022
From: Christoph Hellwig
To: David Sterba, Josef Bacik, Qu Wenruo
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 4/5] btrfs: remove the raid56_parity_{write,recover} return value
Date: Wed, 15 Jun 2022 17:15:14 +0200
Message-Id: <20220615151515.888424-5-hch@lst.de>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20220615151515.888424-1-hch@lst.de>
References: <20220615151515.888424-1-hch@lst.de>
Follow the pattern of the main bio submission helper: do not return an
error, and always consume the bio.  This allows these functions to
consume the btrfs_bio_counter reference from the caller and thus avoid
additional roundtrips on that counter.

Signed-off-by: Christoph Hellwig
---
 fs/btrfs/raid56.c  | 51 ++++++++++++++++++++++++----------------------
 fs/btrfs/raid56.h  |  6 +++---
 fs/btrfs/scrub.c   | 10 ++------
 fs/btrfs/volumes.c | 19 ++++++++---------
 4 files changed, 41 insertions(+), 45 deletions(-)

diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index e071648f2c591..bd64147bd8bab 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1809,7 +1809,7 @@ static void rbio_add_bio(struct btrfs_raid_bio *rbio, struct bio *orig_bio)
 /*
  * our main entry point for writes from the rest of the FS.
  */
-int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc)
+void raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc)
 {
 	struct btrfs_fs_info *fs_info = bioc->fs_info;
 	struct btrfs_raid_bio *rbio;
@@ -1820,12 +1820,12 @@ int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc)
 	rbio = alloc_rbio(fs_info, bioc);
 	if (IS_ERR(rbio)) {
 		btrfs_put_bioc(bioc);
-		return PTR_ERR(rbio);
+		ret = PTR_ERR(rbio);
+		goto out_err;
 	}
 	rbio->operation = BTRFS_RBIO_WRITE;
 	rbio_add_bio(rbio, bio);
 
-	btrfs_bio_counter_inc_noblocked(fs_info);
 	rbio->generic_bio_cnt = 1;
 
 	/*
@@ -1835,8 +1835,8 @@ int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc)
 	if (rbio_is_full(rbio)) {
 		ret = full_stripe_write(rbio);
 		if (ret)
-			btrfs_bio_counter_dec(fs_info);
-		return ret;
+			goto out_err;
+		return;
 	}
 
 	cb = blk_check_plugged(btrfs_raid_unplug, fs_info, sizeof(*plug));
@@ -1851,9 +1851,14 @@ int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc)
 	} else {
 		ret = __raid56_parity_write(rbio);
 		if (ret)
-
btrfs_bio_counter_dec(fs_info); + goto out_err; } - return ret; + return; + +out_err: + btrfs_bio_counter_dec(fs_info); + bio->bi_status = errno_to_blk_status(ret); + bio_endio(bio); } /* @@ -2199,8 +2204,8 @@ static int __raid56_parity_recover(struct btrfs_raid_bio *rbio) * so we assume the bio they send down corresponds to a failed part * of the drive. */ -int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, - int mirror_num, int generic_io) +void raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, + int mirror_num, bool generic_io) { struct btrfs_fs_info *fs_info = bioc->fs_info; struct btrfs_raid_bio *rbio; @@ -2209,13 +2214,14 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, if (generic_io) { ASSERT(bioc->mirror_num == mirror_num); btrfs_bio(bio)->mirror_num = mirror_num; + } else { + btrfs_get_bioc(bioc); } rbio = alloc_rbio(fs_info, bioc); if (IS_ERR(rbio)) { - if (generic_io) - btrfs_put_bioc(bioc); - return PTR_ERR(rbio); + ret = PTR_ERR(rbio); + goto out_err; } rbio->operation = BTRFS_RBIO_READ_REBUILD; @@ -2227,18 +2233,12 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, "%s could not find the bad stripe in raid56 so that we cannot recover any more (bio has logical %llu len %llu, bioc has map_type %llu)", __func__, bio->bi_iter.bi_sector << 9, (u64)bio->bi_iter.bi_size, bioc->map_type); - if (generic_io) - btrfs_put_bioc(bioc); kfree(rbio); - return -EIO; + goto out_err; } - if (generic_io) { - btrfs_bio_counter_inc_noblocked(fs_info); + if (generic_io) rbio->generic_bio_cnt = 1; - } else { - btrfs_get_bioc(bioc); - } /* * Loop retry: @@ -2257,8 +2257,6 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, rbio->failb--; } - ret = lock_stripe_add(rbio); - /* * __raid56_parity_recover will end the bio with * any errors it hits. 
We don't want to return @@ -2266,15 +2264,20 @@ int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, * will end up calling bio_endio with any nonzero * return */ - if (ret == 0) + if (lock_stripe_add(rbio) == 0) __raid56_parity_recover(rbio); + /* * our rbio has been added to the list of * rbios that will be handled after the * currently lock owner is done */ - return 0; + return; +out_err: + btrfs_put_bioc(bioc); + bio->bi_status = errno_to_blk_status(ret); + bio_endio(bio); } static void rmw_work(struct work_struct *work) diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h index 9e4e0501e4e89..c94f503eb3832 100644 --- a/fs/btrfs/raid56.h +++ b/fs/btrfs/raid56.h @@ -175,9 +175,9 @@ static inline int nr_data_stripes(const struct map_lookup *map) struct btrfs_device; -int raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, - int mirror_num, int generic_io); -int raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc); +void raid56_parity_recover(struct bio *bio, struct btrfs_io_context *bioc, + int mirror_num, bool generic_io); +void raid56_parity_write(struct bio *bio, struct btrfs_io_context *bioc); void raid56_add_scrub_pages(struct btrfs_raid_bio *rbio, struct page *page, unsigned int pgoff, u64 logical); diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c index 18986d062cf63..f091e57222082 100644 --- a/fs/btrfs/scrub.c +++ b/fs/btrfs/scrub.c @@ -1376,18 +1376,12 @@ static int scrub_submit_raid56_bio_wait(struct btrfs_fs_info *fs_info, struct scrub_sector *sector) { DECLARE_COMPLETION_ONSTACK(done); - int ret; - int mirror_num; bio->bi_iter.bi_sector = sector->logical >> 9; bio->bi_private = &done; bio->bi_end_io = scrub_bio_wait_endio; - - mirror_num = sector->sblock->sectors[0]->mirror_num; - ret = raid56_parity_recover(bio, sector->recover->bioc, - mirror_num, 0); - if (ret) - return ret; + raid56_parity_recover(bio, sector->recover->bioc, + sector->sblock->sectors[0]->mirror_num, false); 
wait_for_completion_io(&done); return blk_status_to_errno(bio->bi_status); diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c index 739676944e94d..3a8c437bdd65b 100644 --- a/fs/btrfs/volumes.c +++ b/fs/btrfs/volumes.c @@ -6749,8 +6749,12 @@ void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio, btrfs_bio_counter_inc_blocked(fs_info); ret = __btrfs_map_block(fs_info, btrfs_op(bio), logical, &map_length, &bioc, mirror_num, 1); - if (ret) - goto out_dec; + if (ret) { + btrfs_bio_counter_dec(fs_info); + bio->bi_status = errno_to_blk_status(ret); + bio_endio(bio); + return; + } total_devs = bioc->num_stripes; bioc->orig_bio = bio; @@ -6761,10 +6765,10 @@ void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio, if ((bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) && ((btrfs_op(bio) == BTRFS_MAP_WRITE) || (mirror_num > 1))) { if (btrfs_op(bio) == BTRFS_MAP_WRITE) - ret = raid56_parity_write(bio, bioc); + raid56_parity_write(bio, bioc); else - ret = raid56_parity_recover(bio, bioc, mirror_num, 1); - goto out_dec; + raid56_parity_recover(bio, bioc, mirror_num, true); + return; } if (map_length < length) { @@ -6779,12 +6783,7 @@ void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio, submit_stripe_bio(bioc, bio, dev_nr, should_clone); } -out_dec: btrfs_bio_counter_dec(fs_info); - if (ret) { - bio->bi_status = errno_to_blk_status(ret); - bio_endio(bio); - } } static bool dev_args_match_fs_devices(const struct btrfs_dev_lookup_args *args, From patchwork Wed Jun 15 15:15:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christoph Hellwig X-Patchwork-Id: 12882519 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7843CCCA480 for ; Wed, 15 Jun 2022 15:15:46 +0000 (UTC) Received: 
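The core of patch 4/5 is a calling-convention change: the raid56 submission helpers no longer return an error code. Like the main bio submission path, they always consume the bio, completing it with an error status on failure, so callers need no error handling of their own. A minimal userspace sketch of that convention (the names and types here are illustrative stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for struct bio: just a status and a completion flag. */
struct model_bio {
	int status;       /* 0 = success, nonzero models bi_status */
	bool completed;   /* set when the model "bio_endio" runs */
};

/* Models bio_endio(): marks the bio as consumed. */
static void model_endio(struct model_bio *bio)
{
	bio->completed = true;
}

/*
 * Old convention: return an error and leave the bio alive, so every
 * caller must remember to end the bio itself on failure.
 */
static int submit_old(struct model_bio *bio, bool fail)
{
	if (fail)
		return -5;		/* models returning -EIO */
	model_endio(bio);
	return 0;
}

/*
 * New convention: no return value; on failure, record the error in the
 * bio and complete it here, so the caller has no error path at all.
 */
static void submit_new(struct model_bio *bio, bool fail)
{
	if (fail)
		bio->status = -5;	/* models errno_to_blk_status(ret) */
	model_endio(bio);		/* the bio is always consumed */
}

/* Self-check: the new convention consumes the bio even on failure. */
static bool new_style_consumes_on_error(void)
{
	struct model_bio b = { 0, false };

	submit_new(&b, true);
	return b.completed && b.status != 0;
}

/* Self-check: the old convention leaves a failed bio un-ended. */
static bool old_style_leaks_on_error(void)
{
	struct model_bio b = { 0, false };

	return submit_old(&b, true) != 0 && !b.completed;
}
```

This is why btrfs_submit_bio in the diff above can simply `return` after calling raid56_parity_write() or raid56_parity_recover(): any failure is reported through the bio's own completion rather than a return value.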
From: Christoph Hellwig
To: David Sterba, Josef Bacik, Qu Wenruo
Cc: linux-btrfs@vger.kernel.org
Subject: [PATCH 5/5] btrfs: remove bioc->stripes_pending
Date: Wed, 15 Jun 2022 17:15:15 +0200
Message-Id: <20220615151515.888424-6-hch@lst.de>
In-Reply-To: <20220615151515.888424-1-hch@lst.de>
References: <20220615151515.888424-1-hch@lst.de>

Replace the stripes_pending field with the pending counter in the bio.

Signed-off-by: Christoph Hellwig
---
 fs/btrfs/volumes.c | 100 ++++++++++++++++++++++-----------------------
 fs/btrfs/volumes.h |   1 -
 2 files changed, 48 insertions(+), 53 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 3a8c437bdd65b..bf80a850347cd 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5889,7 +5889,6 @@ static struct btrfs_io_context *alloc_btrfs_io_context(struct btrfs_fs_info *fs_
                sizeof(u64) * (total_stripes),
                GFP_NOFS|__GFP_NOFAIL);

-       atomic_set(&bioc->error, 0);
        refcount_set(&bioc->refs, 1);

        bioc->fs_info = fs_info;
@@ -6619,46 +6618,21 @@ static void btrfs_end_bio_work(struct work_struct *work)
        struct btrfs_bio *bbio =
                container_of(work, struct btrfs_bio, end_io_work);

-       bio_endio(&bbio->bio);
-}
-
-static void btrfs_end_bioc(struct btrfs_io_context *bioc, bool async)
-{
-       struct bio *orig_bio = bioc->orig_bio;
-       struct btrfs_bio *bbio = btrfs_bio(orig_bio);
-
-       bbio->mirror_num = bioc->mirror_num;
-       orig_bio->bi_private = bioc->private;
-       orig_bio->bi_end_io = bioc->end_io;
-
-       /*
-        * Only send an error to the higher layers if it is beyond the tolerance
-        * threshold.
-        */
-       if (atomic_read(&bioc->error) > bioc->max_errors)
-               orig_bio->bi_status = BLK_STS_IOERR;
-       else
-               orig_bio->bi_status = BLK_STS_OK;
-
-       if (btrfs_op(orig_bio) == BTRFS_MAP_READ && async) {
-               INIT_WORK(&bbio->end_io_work, btrfs_end_bio_work);
-               queue_work(btrfs_end_io_wq(bioc), &bbio->end_io_work);
-       } else {
-               bio_endio(orig_bio);
-       }
-
-       btrfs_put_bioc(bioc);
+       bbio->bio.bi_end_io(&bbio->bio);
 }

 static void btrfs_end_bio(struct bio *bio)
 {
        struct btrfs_io_stripe *stripe = bio->bi_private;
        struct btrfs_io_context *bioc = stripe->bioc;
+       struct bio *orig_bio = bioc->orig_bio;
+       struct btrfs_bio *bbio = btrfs_bio(orig_bio);

        if (bio->bi_status) {
                atomic_inc(&bioc->error);
-               if (bio->bi_status == BLK_STS_IOERR ||
-                   bio->bi_status == BLK_STS_TARGET) {
+               if (stripe->dev && stripe->dev->bdev &&
+                   (bio->bi_status == BLK_STS_IOERR ||
+                    bio->bi_status == BLK_STS_TARGET)) {
                        if (btrfs_op(bio) == BTRFS_MAP_WRITE)
                                btrfs_dev_stat_inc_and_print(stripe->dev,
                                                BTRFS_DEV_STAT_WRITE_ERRS);
@@ -6671,12 +6645,35 @@ static void btrfs_end_bio(struct bio *bio)
                }
        }

-       if (bio != bioc->orig_bio)
+       btrfs_bio_counter_dec(bioc->fs_info);
+
+       if (bio != orig_bio) {
+               bio_endio(orig_bio);
                bio_put(bio);
+               return;
+       }

-       btrfs_bio_counter_dec(bioc->fs_info);
-       if (atomic_dec_and_test(&bioc->stripes_pending))
-               btrfs_end_bioc(bioc, true);
+       /*
+        * Only send an error to the higher layers if it is beyond the tolerance
+        * threshold.
+        */
+       if (atomic_read(&bioc->error) > bioc->max_errors)
+               orig_bio->bi_status = BLK_STS_IOERR;
+       else
+               orig_bio->bi_status = BLK_STS_OK;
+
+       bbio->mirror_num = bioc->mirror_num;
+       orig_bio->bi_end_io = bioc->end_io;
+       orig_bio->bi_private = bioc->private;
+       if (btrfs_op(orig_bio) == BTRFS_MAP_READ) {
+               bbio->device = stripe->dev;
+               INIT_WORK(&bbio->end_io_work, btrfs_end_bio_work);
+               queue_work(btrfs_end_io_wq(bioc), &bbio->end_io_work);
+       } else {
+               orig_bio->bi_end_io(orig_bio);
+       }
+
+       btrfs_put_bioc(bioc);
 }

 static void submit_stripe_bio(struct btrfs_io_context *bioc,
@@ -6687,28 +6684,30 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc,
        u64 physical = bioc->stripes[dev_nr].physical;
        struct bio *bio;

-       if (!dev || !dev->bdev ||
-           test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) ||
-           (btrfs_op(orig_bio) == BTRFS_MAP_WRITE &&
-            !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))) {
-               atomic_inc(&bioc->error);
-               if (atomic_dec_and_test(&bioc->stripes_pending))
-                       btrfs_end_bioc(bioc, false);
-               return;
-       }
-
        if (clone) {
-               bio = bio_alloc_clone(dev->bdev, orig_bio, GFP_NOFS, &fs_bio_set);
+               bio = bio_alloc_clone(NULL, orig_bio, GFP_NOFS, &fs_bio_set);
+               bio_inc_remaining(orig_bio);
        } else {
                bio = orig_bio;
-               bio_set_dev(bio, dev->bdev);
-               btrfs_bio(bio)->device = dev;
        }

        bioc->stripes[dev_nr].bioc = bioc;
        bio->bi_private = &bioc->stripes[dev_nr];
        bio->bi_end_io = btrfs_end_bio;
        bio->bi_iter.bi_sector = physical >> 9;
+
+       btrfs_bio_counter_inc_noblocked(fs_info);
+
+       if (!dev || !dev->bdev ||
+           test_bit(BTRFS_DEV_STATE_MISSING, &dev->dev_state) ||
+           (btrfs_op(bio) == BTRFS_MAP_WRITE &&
+            !test_bit(BTRFS_DEV_STATE_WRITEABLE, &dev->dev_state))) {
+               bio_io_error(bio);
+               return;
+       }
+
+       bio_set_dev(bio, dev->bdev);
+
        /*
         * For zone append writing, bi_sector must point the beginning of the
         * zone
@@ -6729,8 +6728,6 @@ static void submit_stripe_bio(struct btrfs_io_context *bioc,
                        (unsigned long)dev->bdev->bd_dev, rcu_str_deref(dev->name),
                        dev->devid, bio->bi_iter.bi_size);

-       btrfs_bio_counter_inc_noblocked(fs_info);
-
        btrfsic_check_bio(bio);
        submit_bio(bio);
 }
@@ -6760,7 +6757,6 @@ void btrfs_submit_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
        bioc->orig_bio = bio;
        bioc->private = bio->bi_private;
        bioc->end_io = bio->bi_end_io;
-       atomic_set(&bioc->stripes_pending, total_devs);

        if ((bioc->map_type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
            ((btrfs_op(bio) == BTRFS_MAP_WRITE) || (mirror_num > 1))) {
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index c0f5bbba9c6ac..ecbaf92323030 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -444,7 +444,6 @@ struct btrfs_discard_stripe {
  */
 struct btrfs_io_context {
        refcount_t refs;
-       atomic_t stripes_pending;
        struct btrfs_fs_info *fs_info;
        u64 map_type; /* get from map_lookup->type */
        bio_end_io_t *end_io;
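Patch 5/5 drops the hand-rolled stripes_pending counter in favor of the block layer's own remaining-completions counting: each cloned stripe bio bumps the original bio's internal counter via bio_inc_remaining(), each completion drops it, and the original's end_io fires only when the count reaches zero. The bookkeeping this relies on can be modeled in userspace with C11 atomics; all names below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/*
 * Model of bio_inc_remaining()-style completion counting: the parent
 * starts with one pending completion (its own submission), each cloned
 * stripe bio adds one, and the parent's end_io runs only when the count
 * drops to zero. Errors are tolerated up to max_errors, mirroring the
 * threshold check in btrfs_end_bio().
 */
struct model_parent {
	atomic_int remaining;	/* models the bio's remaining counter */
	atomic_int errors;	/* models bioc->error */
	int max_errors;
	bool ended;		/* parent end_io has run */
	bool failed;		/* reported status after completion */
};

static void parent_init(struct model_parent *p, int max_errors)
{
	atomic_init(&p->remaining, 1);	/* the parent itself */
	atomic_init(&p->errors, 0);
	p->max_errors = max_errors;
	p->ended = false;
	p->failed = false;
}

/* Models bio_inc_remaining(orig_bio) when a clone is issued. */
static void clone_issued(struct model_parent *p)
{
	atomic_fetch_add(&p->remaining, 1);
}

/* Models one bio completion chaining back to the parent. */
static void one_completed(struct model_parent *p, bool io_error)
{
	if (io_error)
		atomic_fetch_add(&p->errors, 1);
	/* atomic_fetch_sub returns the previous value: 1 means last */
	if (atomic_fetch_sub(&p->remaining, 1) == 1) {
		/* only report an error beyond the tolerance threshold */
		p->failed = atomic_load(&p->errors) > p->max_errors;
		p->ended = true;
	}
}

/* Self-check: 3 stripes, one failure within tolerance, still succeeds. */
static bool tolerated_failure_succeeds(void)
{
	struct model_parent p;

	parent_init(&p, 1);		/* tolerate one failed stripe */
	clone_issued(&p);
	clone_issued(&p);		/* remaining is now 3 */
	one_completed(&p, false);
	one_completed(&p, true);	/* one stripe fails */
	one_completed(&p, false);	/* last completion ends the parent */
	return p.ended && !p.failed;
}
```

The sketch shows why the explicit `atomic_set(&bioc->stripes_pending, total_devs)` and `atomic_dec_and_test()` pair in the old code become redundant once every clone takes its own reference on the original bio's counter.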