From patchwork Sun Mar 3 14:01:40 2024
X-Patchwork-Id: 13579779
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 01/11] md: add a mddev_trace_remap helper
Date: Sun, 3 Mar 2024 07:01:40 -0700
Message-Id: <20240303140150.5435-2-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Add a helper to trace bio remapping that hides some argument
dereferences and the check for a DM-mapped MD device.
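As an illustrative before/after sketch of a call site (the names are
taken from the diff below, nothing new is introduced):

	/* before: every submission path open-coded the DM check */
	if (mddev->gendisk)
		trace_block_bio_remap(bio, disk_devt(mddev->gendisk),
				      sector);

	/* after: one helper hides the dereferences and the NULL check */
	mddev_trace_remap(mddev, bio, sector);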
Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/md.c     |  6 +-----
 drivers/md/md.h     |  8 ++++++++
 drivers/md/raid0.c  |  5 +----
 drivers/md/raid1.c  | 11 ++---------
 drivers/md/raid10.c | 10 ++--------
 drivers/md/raid5.c  | 14 +++-----------
 6 files changed, 17 insertions(+), 37 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 48ae2b1cb57af5..bbf84fdb879cd0 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -65,7 +65,6 @@
 #include
 #include
 
-#include
 #include "md.h"
 #include "md-bitmap.h"
 #include "md-cluster.h"
@@ -8662,10 +8661,7 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
 
 	bio_chain(discard_bio, bio);
 	bio_clone_blkg_association(discard_bio, bio);
-	if (mddev->gendisk)
-		trace_block_bio_remap(discard_bio,
-				disk_devt(mddev->gendisk),
-				bio->bi_iter.bi_sector);
+	mddev_trace_remap(mddev, discard_bio, bio->bi_iter.bi_sector);
 	submit_bio_noacct(discard_bio);
 }
 EXPORT_SYMBOL_GPL(md_submit_discard_bio);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index b2076a165c1050..468bccbf206b71 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include "md-cluster.h"
 
 #define MaxSector (~(sector_t)0)
@@ -874,4 +875,11 @@ int do_md_run(struct mddev *mddev);
 
 extern const struct block_device_operations md_fops;
 
+static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
+		sector_t sector)
+{
+	if (mddev->gendisk)
+		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
+}
+
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index c50a7abda744ad..aff094de974347 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -578,10 +578,7 @@ static void raid0_map_submit_bio(struct mddev *mddev, struct bio *bio)
 	bio_set_dev(bio, tmp_dev->bdev);
 	bio->bi_iter.bi_sector = sector + zone->dev_start +
 		tmp_dev->data_offset;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(bio, disk_devt(mddev->gendisk),
-				      bio_sector);
+	mddev_trace_remap(mddev, bio, bio_sector);
 	mddev_check_write_zeroes(mddev, bio);
 	submit_bio_noacct(bio);
 }
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index afca975ec7f314..421154430f241c 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1418,11 +1418,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	    test_bit(R1BIO_FailFast, &r1_bio->state))
 		read_bio->bi_opf |= MD_FAILFAST;
 	read_bio->bi_private = r1_bio;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(read_bio, disk_devt(mddev->gendisk),
-				      r1_bio->sector);
-
+	mddev_trace_remap(mddev, read_bio, r1_bio->sector);
 	submit_bio_noacct(read_bio);
 }
 
@@ -1655,10 +1651,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 
 	mbio->bi_private = r1_bio;
 	atomic_inc(&r1_bio->remaining);
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(mbio, disk_devt(mddev->gendisk),
-				      r1_bio->sector);
+	mddev_trace_remap(mddev, mbio, r1_bio->sector);
 	/* flush_pending_writes() needs access to the rdev so...*/
 	mbio->bi_bdev = (void *)rdev;
 
 	if (!raid1_add_bio_to_plug(mddev, mbio, raid1_unplug, disks)) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 8aecdb1ccc169a..9335a1620e6c10 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1235,10 +1235,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 	    test_bit(R10BIO_FailFast, &r10_bio->state))
 		read_bio->bi_opf |= MD_FAILFAST;
 	read_bio->bi_private = r10_bio;
-
-	if (mddev->gendisk)
-		trace_block_bio_remap(read_bio, disk_devt(mddev->gendisk),
-				      r10_bio->sector);
+	mddev_trace_remap(mddev, read_bio, r10_bio->sector);
 	submit_bio_noacct(read_bio);
 	return;
 }
@@ -1274,10 +1271,7 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
 	    && enough(conf, devnum))
 		mbio->bi_opf |= MD_FAILFAST;
 	mbio->bi_private = r10_bio;
-
-	if (conf->mddev->gendisk)
-		trace_block_bio_remap(mbio, disk_devt(conf->mddev->gendisk),
-				      r10_bio->sector);
+	mddev_trace_remap(mddev, mbio, r10_bio->sector);
 	/* flush_pending_writes() needs access to the rdev so...*/
 	mbio->bi_bdev = (void *)rdev;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 48129de21aecc2..db8fe9e92965be 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -1293,10 +1293,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 			if (rrdev)
 				set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
 
-			if (conf->mddev->gendisk)
-				trace_block_bio_remap(bi,
-						disk_devt(conf->mddev->gendisk),
-						sh->dev[i].sector);
+			mddev_trace_remap(conf->mddev, bi, sh->dev[i].sector);
 			if (should_defer && op_is_write(op))
 				bio_list_add(&pending_bios, bi);
 			else
@@ -1340,10 +1337,7 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 			 */
 			if (op == REQ_OP_DISCARD)
 				rbi->bi_vcnt = 0;
-			if (conf->mddev->gendisk)
-				trace_block_bio_remap(rbi,
-						disk_devt(conf->mddev->gendisk),
-						sh->dev[i].sector);
+			mddev_trace_remap(conf->mddev, rbi, sh->dev[i].sector);
 			if (should_defer && op_is_write(op))
 				bio_list_add(&pending_bios, rbi);
 			else
@@ -5521,9 +5515,7 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 		spin_unlock_irq(&conf->device_lock);
 	}
 
-	if (mddev->gendisk)
-		trace_block_bio_remap(align_bio, disk_devt(mddev->gendisk),
-				      raid_bio->bi_iter.bi_sector);
+	mddev_trace_remap(mddev, align_bio, raid_bio->bi_iter.bi_sector);
 	submit_bio_noacct(align_bio);
 	return 1;
 }
From patchwork Sun Mar 3 14:01:41 2024
X-Patchwork-Id: 13579780
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 02/11] md: add a mddev_add_trace_msg helper
Date: Sun, 3 Mar 2024 07:01:41 -0700
Message-Id: <20240303140150.5435-3-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Add a small wrapper around blk_add_trace_msg that hides some argument
dereferences and the check for a DM-mapped MD device.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/md-bitmap.c |  9 +++------
 drivers/md/md.c        |  3 +--
 drivers/md/md.h        |  6 ++++++
 drivers/md/raid1.c     | 10 ++++------
 drivers/md/raid10.c    | 15 +++++++--------
 drivers/md/raid5.c     | 14 +++++++-------
 6 files changed, 28 insertions(+), 29 deletions(-)

diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index a4976ceae8688a..059afc24c08bec 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -1046,9 +1046,8 @@ void md_bitmap_unplug(struct bitmap *bitmap)
 		if (dirty || need_write) {
 			if (!writing) {
 				md_bitmap_wait_writes(bitmap);
-				if (bitmap->mddev->queue)
-					blk_add_trace_msg(bitmap->mddev->queue,
-							  "md bitmap_unplug");
+				mddev_add_trace_msg(bitmap->mddev,
+						    "md bitmap_unplug");
 			}
 			clear_page_attr(bitmap, i, BITMAP_PAGE_PENDING);
 			filemap_write_page(bitmap, i, false);
@@ -1319,9 +1318,7 @@ void md_bitmap_daemon_work(struct mddev *mddev)
 	}
 	bitmap->allclean = 1;
 
-	if (bitmap->mddev->queue)
-		blk_add_trace_msg(bitmap->mddev->queue,
-				  "md bitmap_daemon_work");
+	mddev_add_trace_msg(bitmap->mddev, "md bitmap_daemon_work");
 
 	/* Any file-page which is PENDING now needs to be written.
 	 * So set NEEDWRITE now, then after we make any last-minute changes
diff --git a/drivers/md/md.c b/drivers/md/md.c
index bbf84fdb879cd0..6cfa6812697f51 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2865,8 +2865,7 @@ void md_update_sb(struct mddev *mddev, int force_change)
 	pr_debug("md: updating %s RAID superblock on device (in sync %d)\n",
 		 mdname(mddev), mddev->in_sync);
 
-	if (mddev->queue)
-		blk_add_trace_msg(mddev->queue, "md md_update_sb");
+	mddev_add_trace_msg(mddev, "md md_update_sb");
 rewrite:
 	md_bitmap_update_sb(mddev->bitmap);
 	rdev_for_each(rdev, mddev) {
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 468bccbf206b71..b7c2ade8260391 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -882,4 +882,10 @@ static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
 }
 
+#define mddev_add_trace_msg(mddev, fmt, args...)			\
+do {									\
+	if ((mddev)->gendisk)						\
+		blk_add_trace_msg((mddev)->queue, fmt, ##args);		\
+} while (0)
+
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 421154430f241c..05870a4565fc71 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -46,9 +46,6 @@
 static void allow_barrier(struct r1conf *conf, sector_t sector_nr);
 static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
 
-#define raid1_log(md, fmt, args...)				\
-	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid1 " fmt, ##args); } while (0)
-
 #define RAID_1_10_NAME "raid1"
 #include "raid1-10.c"
 
@@ -1196,7 +1193,7 @@ static void freeze_array(struct r1conf *conf, int extra)
 	 */
 	spin_lock_irq(&conf->resync_lock);
 	conf->array_frozen = 1;
-	raid1_log(conf->mddev, "wait freeze");
+	mddev_add_trace_msg(conf->mddev, "raid1 wait freeze");
 	wait_event_lock_irq_cmd(
 		conf->wait_barrier,
 		get_unqueued_pending(conf) == extra,
@@ -1385,7 +1382,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 		 * Reading from a write-mostly device must take care not to
 		 * over-take any writes that are 'behind'
 		 */
-		raid1_log(mddev, "wait behind writes");
+		mddev_add_trace_msg(mddev, "raid1 wait behind writes");
 		wait_event(bitmap->behind_wait,
 			   atomic_read(&bitmap->behind_writes) == 0);
 	}
@@ -1568,7 +1565,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 			bio_wouldblock_error(bio);
 			return;
 		}
-		raid1_log(mddev, "wait rdev %d blocked", blocked_rdev->raid_disk);
+		mddev_add_trace_msg(mddev, "raid1 wait rdev %d blocked",
+				    blocked_rdev->raid_disk);
 		md_wait_for_blocked_rdev(blocked_rdev, mddev);
 		wait_barrier(conf, bio->bi_iter.bi_sector, false);
 		goto retry_write;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 9335a1620e6c10..1447cb1e441455 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -76,9 +76,6 @@ static void reshape_request_write(struct mddev *mddev, struct r10bio *r10_bio);
 static void end_reshape_write(struct bio *bio);
 static void end_reshape(struct r10conf *conf);
 
-#define raid10_log(md, fmt, args...)				\
-	do { if ((md)->queue) blk_add_trace_msg((md)->queue, "raid10 " fmt, ##args); } while (0)
-
 #include "raid1-10.c"
 
 #define NULL_CMD
@@ -1019,7 +1016,7 @@ static bool wait_barrier(struct r10conf *conf, bool nowait)
 			ret = false;
 		} else {
 			conf->nr_waiting++;
-			raid10_log(conf->mddev, "wait barrier");
+			mddev_add_trace_msg(conf->mddev, "raid10 wait barrier");
 			wait_event_barrier(conf, stop_waiting_barrier(conf));
 			conf->nr_waiting--;
 		}
@@ -1138,7 +1135,7 @@ static bool regular_request_wait(struct mddev *mddev, struct r10conf *conf,
 			bio_wouldblock_error(bio);
 			return false;
 		}
-		raid10_log(conf->mddev, "wait reshape");
+		mddev_add_trace_msg(conf->mddev, "raid10 wait reshape");
 		wait_event(conf->wait_barrier,
 			   conf->reshape_progress <= bio->bi_iter.bi_sector ||
 			   conf->reshape_progress >= bio->bi_iter.bi_sector +
@@ -1336,8 +1333,9 @@ static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
 	if (unlikely(blocked_rdev)) {
 		/* Have to wait for this device to get unblocked, then retry */
 		allow_barrier(conf);
-		raid10_log(conf->mddev, "%s wait rdev %d blocked",
-			   __func__, blocked_rdev->raid_disk);
+		mddev_add_trace_msg(conf->mddev,
+				    "raid10 %s wait rdev %d blocked",
+				    __func__, blocked_rdev->raid_disk);
 		md_wait_for_blocked_rdev(blocked_rdev, mddev);
 		wait_barrier(conf, false);
 		goto retry_wait;
@@ -1392,7 +1390,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			bio_wouldblock_error(bio);
 			return;
 		}
-		raid10_log(conf->mddev, "wait reshape metadata");
+		mddev_add_trace_msg(conf->mddev,
+				    "raid10 wait reshape metadata");
 		wait_event(mddev->sb_wait,
 			   !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index db8fe9e92965be..2000fc5d01ba54 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4193,10 +4193,9 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 	set_bit(STRIPE_HANDLE, &sh->state);
 	if ((rmw < rcw || (rmw == rcw && conf->rmw_level == PARITY_PREFER_RMW)) && rmw > 0) {
 		/* prefer read-modify-write, but need to get some data */
-		if (conf->mddev->queue)
-			blk_add_trace_msg(conf->mddev->queue,
-					  "raid5 rmw %llu %d",
-					  (unsigned long long)sh->sector, rmw);
+		mddev_add_trace_msg(conf->mddev, "raid5 rmw %llu %d",
+				sh->sector, rmw);
+
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
 			if (test_bit(R5_InJournal, &dev->flags) &&
@@ -4274,9 +4273,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 		}
 	}
 	if (rcw && conf->mddev->queue)
-		blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
-				  (unsigned long long)sh->sector,
-				  rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
+		mddev_add_trace_msg(conf->mddev,
+				    "raid5 rcw %llu %d %d %d",
+				    sh->sector, rcw, qread,
+				    test_bit(STRIPE_DELAYED, &sh->state));
 	}
 
 	if (rcw > disks && rmw > disks &&
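Sketch of the resulting call-site pattern, taken from the conversions
above: the per-personality raid1_log()/raid10_log() wrappers go away
and the "raid1 "/"raid10 " prefix moves into the message itself.

	/* before */
	raid1_log(conf->mddev, "wait freeze");

	/* after */
	mddev_add_trace_msg(conf->mddev, "raid1 wait freeze");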
From patchwork Sun Mar 3 14:01:42 2024
X-Patchwork-Id: 13579781
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 03/11] md: add a mddev_is_dm helper
Date: Sun, 3 Mar 2024 07:01:42 -0700
Message-Id: <20240303140150.5435-4-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Add a helper to check for a DM-mapped MD device instead of using the
obfuscated ->gendisk or ->queue NULL checks.
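Illustrative sketch of the two spellings being replaced (both fields
are unset when the array is driven by device-mapper, so the checks
were equivalent but hid the intent):

	if (!mddev->gendisk) { ... }	/* old spelling #1 */
	if (!mddev->queue) { ... }	/* old spelling #2 */
	if (mddev_is_dm(mddev)) { ... }	/* new, self-documenting */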
Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/md.c     | 15 +++++++--------
 drivers/md/md.h     | 12 ++++++++++--
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  | 13 +++++--------
 drivers/md/raid10.c | 10 +++++-----
 drivers/md/raid5.c  | 21 ++++++++++-----------
 6 files changed, 38 insertions(+), 35 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 6cfa6812697f51..9ce4b5f2324dab 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -2419,7 +2419,7 @@ int md_integrity_register(struct mddev *mddev)
 
 	if (list_empty(&mddev->disks))
 		return 0; /* nothing to do */
-	if (!mddev->gendisk || blk_get_integrity(mddev->gendisk))
+	if (mddev_is_dm(mddev) || blk_get_integrity(mddev->gendisk))
 		return 0; /* shouldn't register, or already is */
 	rdev_for_each(rdev, mddev) {
 		/* skip spares and non-functional disks */
@@ -2472,7 +2472,7 @@ int md_integrity_add_rdev(struct md_rdev *rdev, struct mddev *mddev)
 {
 	struct blk_integrity *bi_mddev;
 
-	if (!mddev->gendisk)
+	if (mddev_is_dm(mddev))
 		return 0;
 
 	bi_mddev = blk_get_integrity(mddev->gendisk);
@@ -5957,7 +5957,7 @@ int md_run(struct mddev *mddev)
 			invalidate_bdev(rdev->bdev);
 		if (mddev->ro != MD_RDONLY && rdev_read_only(rdev)) {
 			mddev->ro = MD_RDONLY;
-			if (mddev->gendisk)
+			if (!mddev_is_dm(mddev))
 				set_disk_ro(mddev->gendisk, 1);
 		}
 
@@ -6116,7 +6116,7 @@ int md_run(struct mddev *mddev)
 		}
 	}
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		bool nonrot = true;
 
 		rdev_for_each(rdev, mddev) {
@@ -6380,7 +6380,7 @@ static void mddev_detach(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 0);
 	}
 	md_unregister_thread(mddev, &mddev->thread);
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev))
 		blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
 }
 
@@ -7336,10 +7336,9 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
 	if (!rv) {
 		if (mddev_is_clustered(mddev))
 			md_cluster_ops->update_size(mddev, old_dev_sectors);
-		else if (mddev->queue) {
+		else if (!mddev_is_dm(mddev))
 			set_capacity_and_notify(mddev->gendisk,
 						mddev->array_sectors);
-		}
 	}
 	return rv;
 }
@@ -9136,7 +9135,7 @@ void md_do_sync(struct md_thread *thread)
 			mddev->delta_disks > 0 &&
 			mddev->pers->finish_reshape &&
 			mddev->pers->size &&
-			mddev->queue) {
+			!mddev_is_dm(mddev)) {
 		mddev_lock_nointr(mddev);
 		md_set_array_sectors(mddev, mddev->pers->size(mddev, 0, 0));
 		mddev_unlock(mddev);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index b7c2ade8260391..786b0eebd1cad6 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -875,16 +875,24 @@ int do_md_run(struct mddev *mddev);
 
 extern const struct block_device_operations md_fops;
 
+/*
+ * MD devices can be used underneath by DM, in which case ->gendisk is NULL.
+ */
+static inline bool mddev_is_dm(struct mddev *mddev)
+{
+	return !mddev->gendisk;
+}
+
 static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 		sector_t sector)
 {
-	if (mddev->gendisk)
+	if (!mddev_is_dm(mddev))
 		trace_block_bio_remap(bio, disk_devt(mddev->gendisk), sector);
 }
 
 #define mddev_add_trace_msg(mddev, fmt, args...)			\
 do {									\
-	if ((mddev)->gendisk)						\
+	if (!mddev_is_dm(mddev))					\
 		blk_add_trace_msg((mddev)->queue, fmt, ##args);		\
 } while (0)
 
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index aff094de974347..9f787ae77ede88 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -399,7 +399,7 @@ static int raid0_run(struct mddev *mddev)
 		mddev->private = conf;
 	}
 	conf = mddev->private;
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		struct md_rdev *rdev;
 
 		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 05870a4565fc71..dd1393d0f08461 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1926,7 +1926,7 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 	for (mirror = first; mirror <= last; mirror++) {
 		p = conf->mirrors + mirror;
 		if (!p->rdev) {
-			if (mddev->gendisk)
+			if (!mddev_is_dm(mddev))
 				disk_stack_limits(mddev->gendisk, rdev->bdev,
 						  rdev->data_offset << 9);
 
@@ -3227,14 +3227,11 @@ static int raid1_run(struct mddev *mddev)
 	if (IS_ERR(conf))
 		return PTR_ERR(conf);
 
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev)) {
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-	rdev_for_each(rdev, mddev) {
-		if (!mddev->gendisk)
-			continue;
-		disk_stack_limits(mddev->gendisk, rdev->bdev,
-				  rdev->data_offset << 9);
+		rdev_for_each(rdev, mddev)
+			disk_stack_limits(mddev->gendisk, rdev->bdev,
+					  rdev->data_offset << 9);
 	}
 
 	mddev->degraded = 0;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 1447cb1e441455..4021cf06b3a616 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2106,7 +2106,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 			continue;
 		}
 
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 
@@ -2126,7 +2126,7 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 		set_bit(Replacement, &rdev->flags);
 		rdev->raid_disk = repl_slot;
 		err = 0;
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 		conf->fullsync = 1;
@@ -4014,7 +4014,7 @@ static int raid10_run(struct mddev *mddev)
 		}
 	}
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(conf->mddev)) {
 		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
 		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
 		raid10_set_io_opt(conf);
@@ -4048,7 +4048,7 @@ static int raid10_run(struct mddev *mddev)
 		if (first || diff < min_offset_diff)
 			min_offset_diff = diff;
 
-		if (mddev->gendisk)
+		if (!mddev_is_dm(mddev))
 			disk_stack_limits(mddev->gendisk, rdev->bdev,
 					  rdev->data_offset << 9);
 
@@ -4933,7 +4933,7 @@ static void end_reshape(struct r10conf *conf)
 	conf->reshape_safe = MaxSector;
 	spin_unlock_irq(&conf->device_lock);
 
-	if (conf->mddev->queue)
+	if (!mddev_is_dm(conf->mddev))
 		raid10_set_io_opt(conf);
 	conf->fullsync = 0;
 }
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 2000fc5d01ba54..a6350eb711fb36 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2414,12 +2414,12 @@ static int grow_stripes(struct r5conf *conf, int num)
 	size_t namelen = sizeof(conf->cache_name[0]);
 	int devs = max(conf->raid_disks, conf->previous_raid_disks);
 
-	if (conf->mddev->gendisk)
+	if (mddev_is_dm(conf->mddev))
 		snprintf(conf->cache_name[0], namelen,
-			"raid%d-%s", conf->level, mdname(conf->mddev));
+			"raid%d-%p", conf->level, conf->mddev);
 	else
 		snprintf(conf->cache_name[0], namelen,
-			"raid%d-%p", conf->level, conf->mddev);
+			"raid%d-%s", conf->level, mdname(conf->mddev));
 	snprintf(conf->cache_name[1], namelen, "%.27s-alt", conf->cache_name[0]);
 
 	conf->active_name = 0;
@@ -4272,11 +4272,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 			set_bit(STRIPE_DELAYED, &sh->state);
 		}
 	}
-	if (rcw && conf->mddev->queue)
-		mddev_add_trace_msg(conf->mddev,
-				    "raid5 rcw %llu %d %d %d",
-				    sh->sector, rcw, qread,
-				    test_bit(STRIPE_DELAYED, &sh->state));
+	if (rcw && !mddev_is_dm(conf->mddev))
+		blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
+				(unsigned long long)sh->sector,
+				rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
 	}
 
 	if (rcw > disks && rmw > disks &&
@@ -5684,7 +5683,7 @@ static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
 	}
 	release_inactive_stripe_list(conf, cb->temp_inactive_list,
 				     NR_STRIPE_HASH_LOCKS);
-	if (mddev->queue)
+	if (!mddev_is_dm(mddev))
 		trace_block_unplug(mddev->queue, cnt, !from_schedule);
 	kfree(cb);
 }
@@ -7935,7 +7934,7 @@ static int raid5_run(struct mddev *mddev)
 			mdname(mddev));
 	md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
 
-	if (mddev->queue) {
+	if (!mddev_is_dm(mddev)) {
 		int chunk_size;
 		/* read-ahead size must cover two whole stripes, which
 		 * is 2 * (datadisks) * chunksize where 'n' is the
@@ -8539,8 +8538,7 @@ static void end_reshape(struct r5conf *conf)
 		spin_unlock_irq(&conf->device_lock);
 		wake_up(&conf->wait_for_overlap);
 
-		if (conf->mddev->queue)
+		if (!mddev_is_dm(conf->mddev))
 			raid5_set_io_opt(conf);
 	}
 }
From patchwork Sun Mar 3 14:01:43 2024
X-Patchwork-Id: 13579782
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 04/11] md: add queue limit helpers
Date: Sun, 3 Mar 2024 07:01:43 -0700
Message-Id: <20240303140150.5435-5-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Add a few helpers that wrap the block queue limits API for use in MD.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/md.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 drivers/md/md.h |  3 +++
 2 files changed, 48 insertions(+)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9ce4b5f2324dab..9a7f3d2b8c2d16 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5731,6 +5731,51 @@ static const struct kobj_type md_ktype = {
 
 int mdp_major = 0;
 
+/* stack the limit for all rdevs into lim */
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim)
+{
+	struct md_rdev *rdev;
+
+	rdev_for_each(rdev, mddev) {
+		queue_limits_stack_bdev(lim, rdev->bdev, rdev->data_offset,
+					mddev->gendisk->disk_name);
+	}
+}
+EXPORT_SYMBOL_GPL(mddev_stack_rdev_limits);
+
+/* apply the extra stacking limits from a new rdev into mddev */
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev)
+{
+	struct queue_limits lim;
+
+	if (mddev_is_dm(mddev))
+		return 0;
+
+	lim = queue_limits_start_update(mddev->queue);
+	queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset,
+				mddev->gendisk->disk_name);
+	return queue_limits_commit_update(mddev->queue, &lim);
+}
+EXPORT_SYMBOL_GPL(mddev_stack_new_rdev);
+
+/* update the optimal I/O size after a reshape */
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes)
+{
+	struct queue_limits lim;
+
+	if (mddev_is_dm(mddev))
+		return;
+
+	/* don't bother updating io_opt if we can't suspend the array */
+	if (mddev_suspend(mddev, false) < 0)
+		return;
+	lim = queue_limits_start_update(mddev->gendisk->queue);
+	lim.io_opt = lim.io_min * nr_stripes;
+	queue_limits_commit_update(mddev->gendisk->queue, &lim);
+	mddev_resume(mddev);
+}
+EXPORT_SYMBOL_GPL(mddev_update_io_opt);
+
 static void mddev_delayed_delete(struct work_struct *ws)
 {
 	struct mddev *mddev = container_of(ws, struct mddev, del_work);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 786b0eebd1cad6..003db35b4b5926 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -872,6 +872,9 @@ void md_autostart_arrays(int part);
 int md_set_array_info(struct mddev *mddev, struct mdu_array_info_s *info);
 int md_add_new_disk(struct mddev *mddev, struct mdu_disk_info_s *info);
 int do_md_run(struct mddev *mddev);
+void mddev_stack_rdev_limits(struct mddev *mddev, struct queue_limits *lim);
+int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev);
+void mddev_update_io_opt(struct mddev *mddev, unsigned int nr_stripes);
 
 extern const struct block_device_operations md_fops;
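As a sketch of the intended calling convention (the function name here
is made up for illustration, but the body mirrors the raid0/raid1
conversions later in this series): a personality builds the limits on
the stack, folds in every member device, and applies the result in one
atomic commit instead of poking fields of an already-live queue.

	static int example_set_limits(struct mddev *mddev)
	{
		struct queue_limits lim;

		blk_set_stacking_limits(&lim);
		lim.io_min = mddev->chunk_sectors << 9;
		mddev_stack_rdev_limits(mddev, &lim);
		return queue_limits_set(mddev->queue, &lim);
	}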
From patchwork Sun Mar 3 14:01:44 2024
X-Patchwork-Id: 13579783
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 05/11] md/raid0: use the atomic queue limit update APIs
Date: Sun, 3 Mar 2024 07:01:44 -0700
Message-Id: <20240303140150.5435-6-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious also split the queue
limits handling into a separate helper function.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/raid0.c | 35 ++++++++++++++++++-----------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 9f787ae77ede88..f65aa6ecec0482 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -379,6 +379,19 @@ static void raid0_free(struct mddev *mddev, void *priv)
 	free_conf(mddev, conf);
 }
 
+static int raid0_set_limits(struct mddev *mddev)
+{
+	struct queue_limits lim;
+
+	blk_set_stacking_limits(&lim);
+	lim.max_hw_sectors = mddev->chunk_sectors;
+	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * mddev->raid_disks;
+	mddev_stack_rdev_limits(mddev, &lim);
+	return queue_limits_set(mddev->queue, &lim);
+}
+
 static int raid0_run(struct mddev *mddev)
 {
 	struct r0conf *conf;
@@ -400,19 +413,9 @@ static int raid0_run(struct mddev *mddev)
 	}
 	conf = mddev->private;
 	if (!mddev_is_dm(mddev)) {
-		struct md_rdev *rdev;
-
-		blk_queue_max_hw_sectors(mddev->queue, mddev->chunk_sectors);
-		blk_queue_max_write_zeroes_sectors(mddev->queue, mddev->chunk_sectors);
-
-		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-		blk_queue_io_opt(mddev->queue,
-				 (mddev->chunk_sectors << 9) * mddev->raid_disks);
-
-		rdev_for_each(rdev, mddev) {
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-		}
+		ret = raid0_set_limits(mddev);
+		if (ret)
+			goto out_free_conf;
 	}
 
 	/* calculate array device size */
@@ -426,8 +429,10 @@ static int raid0_run(struct mddev *mddev)
 
 	ret = md_integrity_register(mddev);
 	if (ret)
-		free_conf(mddev, conf);
-
+		goto out_free_conf;
+	return 0;
+out_free_conf:
+	free_conf(mddev, conf);
 	return ret;
 }
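Worked example with illustrative numbers (not from the patch): for a
raid0 array with mddev->chunk_sectors = 1024 (512 KiB chunks) and four
member disks, raid0_set_limits() yields io_min = 1024 << 9 = 524288
bytes (one chunk) and io_opt = 524288 * 4 = 2097152 bytes (one full
stripe), while max_hw_sectors and max_write_zeroes_sectors are both
capped at one chunk (1024 sectors).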
From patchwork Sun Mar 3 14:01:45 2024
X-Patchwork-Id: 13579784
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 06/11] md/raid1: use the atomic queue limit update APIs
Date: Sun, 3 Mar 2024 07:01:45 -0700
Message-Id: <20240303140150.5435-7-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious also split the queue
limits handling into a separate helper function.
Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/raid1.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index dd1393d0f08461..c3496837837720 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1926,12 +1926,11 @@ static int raid1_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 	for (mirror = first; mirror <= last; mirror++) {
 		p = conf->mirrors + mirror;
 		if (!p->rdev) {
-			if (!mddev_is_dm(mddev))
-				disk_stack_limits(mddev->gendisk, rdev->bdev,
-						  rdev->data_offset << 9);
+			err = mddev_stack_new_rdev(mddev, rdev);
+			if (err)
+				return err;
 
 			raid1_add_conf(conf, rdev, mirror, false);
-			err = 0;
 			/* As all devices are equivalent, we don't need a full recovery
 			 * if this was recently any drive of the array
 			 */
@@ -3195,12 +3194,21 @@ static struct r1conf *setup_conf(struct mddev *mddev)
 	return ERR_PTR(err);
 }
 
+static int raid1_set_limits(struct mddev *mddev)
+{
+	struct queue_limits lim;
+
+	blk_set_stacking_limits(&lim);
+	lim.max_write_zeroes_sectors = 0;
+	mddev_stack_rdev_limits(mddev, &lim);
+	return queue_limits_set(mddev->queue, &lim);
+}
+
 static void raid1_free(struct mddev *mddev, void *priv);
 
 static int raid1_run(struct mddev *mddev)
 {
 	struct r1conf *conf;
 	int i;
-	struct md_rdev *rdev;
 	int ret;
 
 	if (mddev->level != 1) {
@@ -3228,10 +3236,9 @@ static int raid1_run(struct mddev *mddev)
 		return PTR_ERR(conf);
 
 	if (!mddev_is_dm(mddev)) {
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-		rdev_for_each(rdev, mddev)
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
+		ret = raid1_set_limits(mddev);
+		if (ret)
+			goto abort;
 	}
 
 	mddev->degraded = 0;
From patchwork Sun Mar 3 14:01:46 2024
X-Patchwork-Id: 13579785
From: Christoph Hellwig
To: Jens Axboe, Mike Snitzer, Mikulas Patocka, Song Liu, Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 07/11] md/raid5: use the atomic queue limit update APIs
Date: Sun, 3 Mar 2024 07:01:46 -0700
Message-Id: <20240303140150.5435-8-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Build the queue limits outside the queue and apply them using
queue_limits_set.  To make the code more obvious also split the queue
limits handling into separate helpers.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/raid5.c | 130 ++++++++++++++++++++++-----------------------
 1 file changed, 65 insertions(+), 65 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index a6350eb711fb36..0c10fdc0dfdcf1 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7666,10 +7666,65 @@ static int only_parity(int raid_disk, int algo, int raid_disks, int max_degraded
 	return 0;
 }
 
-static void raid5_set_io_opt(struct r5conf *conf)
+static int raid5_set_limits(struct mddev *mddev)
 {
-	blk_queue_io_opt(conf->mddev->queue, (conf->chunk_sectors << 9) *
-			 (conf->raid_disks - conf->max_degraded));
+	struct r5conf *conf = mddev->private;
+	struct queue_limits lim;
+	int data_disks, stripe;
+	struct md_rdev *rdev;
+
+	/*
+	 * The read-ahead size must cover two whole stripes, which is
+	 * 2 * (datadisks) * chunksize where 'n' is the number of raid
+	 * devices.
+	 */
+	data_disks = conf->previous_raid_disks - conf->max_degraded;
+
+	/*
+	 * We can only discard a whole stripe. It doesn't make sense to
+	 * discard data disk but write parity disk
+	 */
+	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
+
+	blk_set_stacking_limits(&lim);
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
+	lim.raid_partial_stripes_expensive = 1;
+	lim.discard_granularity = stripe;
+	lim.max_write_zeroes_sectors = 0;
+	mddev_stack_rdev_limits(mddev, &lim);
+	rdev_for_each(rdev, mddev)
+		queue_limits_stack_bdev(&lim, rdev->bdev, rdev->new_data_offset,
+					mddev->gendisk->disk_name);
+
+	/*
+	 * Zeroing is required for discard, otherwise data could be lost.
+	 *
+	 * Consider a scenario: discard a stripe (the stripe could be
+	 * inconsistent if discard_zeroes_data is 0); write one disk of the
+	 * stripe (the stripe could be inconsistent again depending on which
+	 * disks are used to calculate parity); the disk is broken; The stripe
+	 * data of this disk is lost.
+	 *
+	 * We only allow DISCARD if the sysadmin has confirmed that only safe
+	 * devices are in use by setting a module parameter.  A better idea
+	 * might be to turn DISCARD into WRITE_ZEROES requests, as that is
+	 * required to be safe.
+	 */
+	if (!devices_handle_discard_safely ||
+	    lim.max_discard_sectors < (stripe >> 9) ||
+	    lim.discard_granularity < stripe)
+		lim.max_hw_discard_sectors = 0;
+
+	/*
+	 * Requests require having a bitmap for each stripe.
+	 * Limit the max sectors based on this.
+	 */
+	lim.max_hw_sectors = RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf);
+
+	/* No restrictions on the number of segments in the request */
+	lim.max_segments = USHRT_MAX;
+
+	return queue_limits_set(mddev->queue, &lim);
 }
 
 static int raid5_run(struct mddev *mddev)
@@ -7682,6 +7737,7 @@ static int raid5_run(struct mddev *mddev)
 	int i;
 	long long min_offset_diff = 0;
 	int first = 1;
+	int ret = -EIO;
 
 	if (mddev->recovery_cp != MaxSector)
 		pr_notice("md/raid:%s: not clean -- starting background reconstruction\n",
@@ -7935,65 +7991,9 @@ static int raid5_run(struct mddev *mddev)
 	md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
 
 	if (!mddev_is_dm(mddev)) {
-		int chunk_size;
-		/* read-ahead size must cover two whole stripes, which
-		 * is 2 * (datadisks) * chunksize where 'n' is the
-		 * number of raid devices
-		 */
-		int data_disks = conf->previous_raid_disks - conf->max_degraded;
-		int stripe = data_disks *
-			((mddev->chunk_sectors << 9) / PAGE_SIZE);
-
-		chunk_size = mddev->chunk_sectors << 9;
-		blk_queue_io_min(mddev->queue, chunk_size);
-		raid5_set_io_opt(conf);
-		mddev->queue->limits.raid_partial_stripes_expensive = 1;
-		/*
-		 * We can only discard a whole stripe. It doesn't make sense to
-		 * discard data disk but write parity disk
-		 */
-		stripe = stripe * PAGE_SIZE;
-		stripe = roundup_pow_of_two(stripe);
-		mddev->queue->limits.discard_granularity = stripe;
-
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-
-		rdev_for_each(rdev, mddev) {
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->new_data_offset << 9);
-		}
-
-		/*
-		 * zeroing is required, otherwise data
-		 * could be lost. Consider a scenario: discard a stripe
-		 * (the stripe could be inconsistent if
-		 * discard_zeroes_data is 0); write one disk of the
-		 * stripe (the stripe could be inconsistent again
-		 * depending on which disks are used to calculate
-		 * parity); the disk is broken; The stripe data of this
-		 * disk is lost.
-		 *
-		 * We only allow DISCARD if the sysadmin has confirmed that
-		 * only safe devices are in use by setting a module parameter.
-		 * A better idea might be to turn DISCARD into WRITE_ZEROES
-		 * requests, as that is required to be safe.
-		 */
-		if (!devices_handle_discard_safely ||
-		    mddev->queue->limits.max_discard_sectors < (stripe >> 9) ||
-		    mddev->queue->limits.discard_granularity < stripe)
-			blk_queue_max_discard_sectors(mddev->queue, 0);
-
-		/*
-		 * Requests require having a bitmap for each stripe.
-		 * Limit the max sectors based on this.
-		 */
-		blk_queue_max_hw_sectors(mddev->queue,
-			RAID5_MAX_REQ_STRIPES << RAID5_STRIPE_SHIFT(conf));
-
-		/* No restrictions on the number of segments in the request */
-		blk_queue_max_segments(mddev->queue, USHRT_MAX);
+		ret = raid5_set_limits(mddev);
+		if (ret)
+			goto abort;
 	}
 
 	if (log_init(conf, journal_dev, raid5_has_ppl(conf)))
@@ -8006,7 +8006,7 @@ static int raid5_run(struct mddev *mddev)
 	free_conf(conf);
 	mddev->private = NULL;
 	pr_warn("md/raid:%s: failed to run raid set.\n", mdname(mddev));
-	return -EIO;
+	return ret;
 }
 
 static void raid5_free(struct mddev *mddev, void *priv)
@@ -8538,8 +8538,8 @@ static void end_reshape(struct r5conf *conf)
 		spin_unlock_irq(&conf->device_lock);
 		wake_up(&conf->wait_for_overlap);
 
-		if (!mddev_is_dm(conf->mddev))
-			raid5_set_io_opt(conf);
+		mddev_update_io_opt(conf->mddev,
+				conf->raid_disks - conf->max_degraded);
 	}
 }
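Worked example for the discard granularity computation above, with
illustrative geometries: a six-device RAID6 (max_degraded = 2) with
512 KiB chunks has data_disks = 4, so stripe =
roundup_pow_of_two(4 * 524288) = 2097152 bytes; with three data disks
instead, 3 * 524288 = 1572864 rounds up to 2097152 as well.  Unless
devices_handle_discard_safely is set and the stacked limits can honour
that granularity, max_hw_discard_sectors is forced to 0 and discard is
effectively disabled on the array.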
 drivers/md/raid10.c | 60 +++++++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 27 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 4021cf06b3a616..e96fdf47319fd0 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2106,10 +2106,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 			continue;
 		}
 
-		if (!mddev_is_dm(mddev))
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-
+		err = mddev_stack_new_rdev(mddev, rdev);
+		if (err)
+			return err;
 		p->head_position = 0;
 		p->recovery_disabled = mddev->recovery_disabled - 1;
 		rdev->raid_disk = mirror;
@@ -2125,10 +2124,9 @@ static int raid10_add_disk(struct mddev *mddev, struct md_rdev *rdev)
 		clear_bit(In_sync, &rdev->flags);
 		set_bit(Replacement, &rdev->flags);
 		rdev->raid_disk = repl_slot;
-		err = 0;
-		if (!mddev_is_dm(mddev))
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
+		err = mddev_stack_new_rdev(mddev, rdev);
+		if (err)
+			return err;
 		conf->fullsync = 1;
 		WRITE_ONCE(p->replacement, rdev);
 	}
@@ -3969,14 +3967,26 @@ static struct r10conf *setup_conf(struct mddev *mddev)
 	return ERR_PTR(err);
 }
 
-static void raid10_set_io_opt(struct r10conf *conf)
+static unsigned int raid10_nr_stripes(struct r10conf *conf)
 {
-	int raid_disks = conf->geo.raid_disks;
+	unsigned int raid_disks = conf->geo.raid_disks;
+
+	if (conf->geo.raid_disks % conf->geo.near_copies)
+		return raid_disks;
+	return raid_disks / conf->geo.near_copies;
+}
 
-	if (!(conf->geo.raid_disks % conf->geo.near_copies))
-		raid_disks /= conf->geo.near_copies;
-	blk_queue_io_opt(conf->mddev->queue, (conf->mddev->chunk_sectors << 9) *
-			 raid_disks);
+static int raid10_set_queue_limits(struct mddev *mddev)
+{
+	struct r10conf *conf = mddev->private;
+	struct queue_limits lim;
+
+	blk_set_stacking_limits(&lim);
+	lim.max_write_zeroes_sectors = 0;
+	lim.io_min = mddev->chunk_sectors << 9;
+	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+	mddev_stack_rdev_limits(mddev, &lim);
+	return queue_limits_set(mddev->queue, &lim);
 }
 
 static int raid10_run(struct mddev *mddev)
@@ -3988,6 +3998,7 @@ static int raid10_run(struct mddev *mddev)
 	sector_t size;
 	sector_t min_offset_diff = 0;
 	int first = 1;
+	int ret = -EIO;
 
 	if (mddev->private == NULL) {
 		conf = setup_conf(mddev);
@@ -4014,12 +4025,6 @@ static int raid10_run(struct mddev *mddev)
 		}
 	}
 
-	if (!mddev_is_dm(conf->mddev)) {
-		blk_queue_max_write_zeroes_sectors(mddev->queue, 0);
-		blk_queue_io_min(mddev->queue, mddev->chunk_sectors << 9);
-		raid10_set_io_opt(conf);
-	}
-
 	rdev_for_each(rdev, mddev) {
 		long long diff;
 
@@ -4048,14 +4053,16 @@ static int raid10_run(struct mddev *mddev)
 		if (first || diff < min_offset_diff)
 			min_offset_diff = diff;
 
-		if (!mddev_is_dm(mddev))
-			disk_stack_limits(mddev->gendisk, rdev->bdev,
-					  rdev->data_offset << 9);
-
 		disk->head_position = 0;
 		first = 0;
 	}
 
+	if (!mddev_is_dm(conf->mddev)) {
+		ret = raid10_set_queue_limits(mddev);
+		if (ret)
+			goto out_free_conf;
+	}
+
 	/* need to check that every block has at least one working mirror */
 	if (!enough(conf, -1)) {
 		pr_err("md/raid10:%s: not enough operational mirrors.\n",
@@ -4156,7 +4163,7 @@ static int raid10_run(struct mddev *mddev)
 	raid10_free_conf(conf);
 	mddev->private = NULL;
 out:
-	return -EIO;
+	return ret;
 }
 
 static void raid10_free(struct mddev *mddev, void *priv)
@@ -4933,8 +4940,7 @@ static void end_reshape(struct r10conf *conf)
 	conf->reshape_safe = MaxSector;
 	spin_unlock_irq(&conf->device_lock);
 
-	if (!mddev_is_dm(conf->mddev))
-		raid10_set_io_opt(conf);
+	mddev_update_io_opt(conf->mddev, raid10_nr_stripes(conf));
 
 	conf->fullsync = 0;
 }
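As a worked example of the io_opt value the new raid10_set_queue_limits()
publishes (numbers invented for illustration): with four disks in a near-2
layout and 512 KiB chunks, io_min is one chunk, raid10_nr_stripes() yields
4 / 2 = 2, and io_opt advertises a 1 MiB stripe of distinct data:

    /* mirrors the raid10_nr_stripes() math, with example values */
    unsigned int raid_disks = 4, near_copies = 2;
    unsigned int io_min = 512 * 1024;                /* one chunk, bytes */
    unsigned int stripes = (raid_disks % near_copies) ?
            raid_disks : raid_disks / near_copies;   /* = 2 */
    unsigned int io_opt = io_min * stripes;          /* = 1 MiB */

When the disk count is not a multiple of near_copies, the helper falls
back to raid_disks, matching the old raid10_set_io_opt() behaviour.
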
From patchwork Sun Mar 3 14:01:48 2024
From: Christoph Hellwig
To: Jens Axboe , Mike Snitzer , Mikulas Patocka , Song Liu , Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 09/11] md: don't initialize queue limits
Date: Sun, 3 Mar 2024 07:01:48 -0700
Message-Id: <20240303140150.5435-10-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Initial queue limits are now set from ->run. Remove the superfluous
initialization in md_alloc and level_store.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 drivers/md/md.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 9a7f3d2b8c2d16..f564ad051a427d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -4173,7 +4173,6 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
 		mddev->in_sync = 1;
 		del_timer_sync(&mddev->safemode_timer);
 	}
-	blk_set_stacking_limits(&mddev->queue->limits);
 	pers->run(mddev);
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 	if (!mddev->thread)
@@ -5859,7 +5858,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	disk->private_data = mddev;
 
 	mddev->queue = disk->queue;
-	blk_set_stacking_limits(&mddev->queue->limits);
 	blk_queue_write_cache(mddev->queue, true, true);
 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
 	mddev->gendisk = disk;

From patchwork Sun Mar 3 14:01:49 2024
From: Christoph Hellwig
To: Jens Axboe , Mike Snitzer , Mikulas Patocka , Song Liu , Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 10/11] md: remove mddev->queue
Date: Sun, 3 Mar 2024 07:01:49 -0700
Message-Id: <20240303140150.5435-11-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

Just use the request_queue from the gendisk pointer in the relatively
few places that still need it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
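The first hunk below also shows the companion read-modify-write flavor of
the new API: queue_limits_start_update() returns a snapshot of the current
limits with the queue's limits lock held, and queue_limits_commit_update()
validates, publishes, and unlocks. Roughly, with an invented helper name
and an arbitrary field chosen for illustration:

    #include <linux/blkdev.h>

    /*
     * Sketch of the start/commit update pattern: take a locked snapshot
     * of the live limits, adjust it, then validate and publish it.
     */
    static int example_update_limits(struct gendisk *disk, unsigned int max)
    {
            struct queue_limits lim;

            lim = queue_limits_start_update(disk->queue);
            lim.max_hw_sectors = max;
            return queue_limits_commit_update(disk->queue, &lim);
    }
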
 drivers/md/md.c        | 22 ++++++++++++----------
 drivers/md/md.h        |  5 ++---
 drivers/md/raid0.c     |  2 +-
 drivers/md/raid1.c     |  2 +-
 drivers/md/raid10.c    |  2 +-
 drivers/md/raid5-ppl.c |  3 ++-
 drivers/md/raid5.c     | 13 +++++++------
 7 files changed, 26 insertions(+), 23 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index f564ad051a427d..8d963b887eecc7 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5750,10 +5750,10 @@ int mddev_stack_new_rdev(struct mddev *mddev, struct md_rdev *rdev)
 	if (mddev_is_dm(mddev))
 		return 0;
 
-	lim = queue_limits_start_update(mddev->queue);
+	lim = queue_limits_start_update(mddev->gendisk->queue);
 	queue_limits_stack_bdev(&lim, rdev->bdev, rdev->data_offset,
 				mddev->gendisk->disk_name);
-	return queue_limits_commit_update(mddev->queue, &lim);
+	return queue_limits_commit_update(mddev->gendisk->queue, &lim);
 }
 EXPORT_SYMBOL_GPL(mddev_stack_new_rdev);
 
@@ -5857,8 +5857,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	disk->fops = &md_fops;
 	disk->private_data = mddev;
 
-	mddev->queue = disk->queue;
-	blk_queue_write_cache(mddev->queue, true, true);
+	blk_queue_write_cache(disk->queue, true, true);
 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
 	mddev->gendisk = disk;
 	error = add_disk(disk);
@@ -6160,6 +6159,7 @@ int md_run(struct mddev *mddev)
 	}
 
 	if (!mddev_is_dm(mddev)) {
+		struct request_queue *q = mddev->gendisk->queue;
 		bool nonrot = true;
 
 		rdev_for_each(rdev, mddev) {
@@ -6171,14 +6171,14 @@ int md_run(struct mddev *mddev)
 		if (mddev->degraded)
 			nonrot = false;
 		if (nonrot)
-			blk_queue_flag_set(QUEUE_FLAG_NONROT, mddev->queue);
+			blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 		else
-			blk_queue_flag_clear(QUEUE_FLAG_NONROT, mddev->queue);
-		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, mddev->queue);
+			blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
+		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
 
 		/* Set the NOWAIT flags if all underlying devices support it */
 		if (nowait)
-			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue);
+			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
 	}
 	if (pers->sync_request) {
 		if (mddev->kobj.sd &&
@@ -6423,8 +6423,10 @@ static void mddev_detach(struct mddev *mddev)
 		mddev->pers->quiesce(mddev, 0);
 	}
 	md_unregister_thread(mddev, &mddev->thread);
+
+	/* the unplug fn references 'conf' */
 	if (!mddev_is_dm(mddev))
-		blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
+		blk_sync_queue(mddev->gendisk->queue);
 }
 
 static void __md_stop(struct mddev *mddev)
@@ -7142,7 +7144,7 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
 	if (!bdev_nowait(rdev->bdev)) {
 		pr_info("%s: Disabling nowait because %pg does not support nowait\n",
 			mdname(mddev), rdev->bdev);
-		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->queue);
+		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->gendisk->queue);
 	}
 	/*
 	 * Kick recovery, maybe this spare has to be added to the
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 003db35b4b5926..b2299924f0766a 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -480,7 +480,6 @@ struct mddev {
 	struct timer_list		safemode_timer;
 	struct percpu_ref		writes_pending;
 	int				sync_checkers;	/* # of threads checking writes_pending */
-	struct request_queue		*queue;	/* for plugging ... */
 
 	struct bitmap			*bitmap; /* the bitmap for the device */
 	struct {
@@ -833,7 +832,7 @@ static inline void mddev_check_write_zeroes(struct mddev *mddev,
 					    struct bio *bio)
 {
 	if (bio_op(bio) == REQ_OP_WRITE_ZEROES &&
 	    !bio->bi_bdev->bd_disk->queue->limits.max_write_zeroes_sectors)
-		mddev->queue->limits.max_write_zeroes_sectors = 0;
+		mddev->gendisk->queue->limits.max_write_zeroes_sectors = 0;
 }
 
 static inline int mddev_suspend_and_lock(struct mddev *mddev)
@@ -896,7 +895,7 @@ static inline void mddev_trace_remap(struct mddev *mddev, struct bio *bio,
 #define mddev_add_trace_msg(mddev, fmt, args...)			\
 do {									\
 	if (!mddev_is_dm(mddev))					\
-		blk_add_trace_msg((mddev)->queue, fmt, ##args);		\
+		blk_add_trace_msg((mddev)->gendisk->queue, fmt, ##args); \
 } while (0)
 
 #endif /* _MD_MD_H */
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index f65aa6ecec0482..c5d4aeb68404c9 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -389,7 +389,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * mddev->raid_disks;
 	mddev_stack_rdev_limits(mddev, &lim);
-	return queue_limits_set(mddev->queue, &lim);
+	return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid0_run(struct mddev *mddev)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index c3496837837720..be8ac24f50b6ad 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3201,7 +3201,7 @@ static int raid1_set_limits(struct mddev *mddev)
 	blk_set_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	mddev_stack_rdev_limits(mddev, &lim);
-	return queue_limits_set(mddev->queue, &lim);
+	return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static void raid1_free(struct mddev *mddev, void *priv);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e96fdf47319fd0..b0fd3005f5c18f 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3986,7 +3986,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
 	mddev_stack_rdev_limits(mddev, &lim);
-	return queue_limits_set(mddev->queue, &lim);
+	return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid10_run(struct mddev *mddev)
diff --git a/drivers/md/raid5-ppl.c b/drivers/md/raid5-ppl.c
index da4ba736c4f0c9..a70cbec12ed017 100644
--- a/drivers/md/raid5-ppl.c
+++ b/drivers/md/raid5-ppl.c
@@ -1393,7 +1393,8 @@ int ppl_init_log(struct r5conf *conf)
 		ppl_conf->signature = ~crc32c_le(~0, mddev->uuid, sizeof(mddev->uuid));
 		ppl_conf->block_size = 512;
 	} else {
-		ppl_conf->block_size = queue_logical_block_size(mddev->queue);
+		ppl_conf->block_size =
+			queue_logical_block_size(mddev->gendisk->queue);
 	}
 
 	for (i = 0; i < ppl_conf->count; i++) {
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0c10fdc0dfdcf1..b7515638fdcf80 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4273,9 +4273,10 @@ static int handle_stripe_dirtying(struct r5conf *conf,
 			}
 		}
 		if (rcw && !mddev_is_dm(conf->mddev))
-			blk_add_trace_msg(conf->mddev->queue, "raid5 rcw %llu %d %d %d",
-					  (unsigned long long)sh->sector,
-					  rcw, qread, test_bit(STRIPE_DELAYED, &sh->state));
+			blk_add_trace_msg(conf->mddev->gendisk->queue,
+					  "raid5 rcw %llu %d %d %d",
+					  (unsigned long long)sh->sector, rcw, qread,
+					  test_bit(STRIPE_DELAYED, &sh->state));
 	}
 
 	if (rcw > disks && rmw > disks &&
@@ -5684,7 +5685,7 @@ static void raid5_unplug(struct blk_plug_cb *blk_cb, bool from_schedule)
 	release_inactive_stripe_list(conf, cb->temp_inactive_list,
 				     NR_STRIPE_HASH_LOCKS);
 
 	if (!mddev_is_dm(mddev))
-		trace_block_unplug(mddev->queue, cnt, !from_schedule);
+		trace_block_unplug(mddev->gendisk->queue, cnt, !from_schedule);
 	kfree(cb);
 }
 
@@ -7064,7 +7065,7 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
 	if (!conf)
 		err = -ENODEV;
 	else if (new != conf->skip_copy) {
-		struct request_queue *q = mddev->queue;
+		struct request_queue *q = mddev->gendisk->queue;
 
 		conf->skip_copy = new;
 		if (new)
@@ -7724,7 +7725,7 @@ static int raid5_set_limits(struct mddev *mddev)
 	/* No restrictions on the number of segments in the request */
 	lim.max_segments = USHRT_MAX;
 
-	return queue_limits_set(mddev->queue, &lim);
+	return queue_limits_set(mddev->gendisk->queue, &lim);
 }
 
 static int raid5_run(struct mddev *mddev)

From patchwork Sun Mar 3 14:01:50 2024
From: Christoph Hellwig
To: Jens Axboe , Mike Snitzer , Mikulas Patocka , Song Liu , Yu Kuai
Cc: dm-devel@lists.linux.dev, linux-block@vger.kernel.org, linux-raid@vger.kernel.org
Subject: [PATCH 11/11] block: remove disk_stack_limits
Date: Sun, 3 Mar 2024 07:01:50 -0700
Message-Id: <20240303140150.5435-12-hch@lst.de>
In-Reply-To: <20240303140150.5435-1-hch@lst.de>
References: <20240303140150.5435-1-hch@lst.de>

disk_stack_limits is unused now, remove it.

Signed-off-by: Christoph Hellwig
Reviewed-by: Song Liu
Tested-by: Song Liu
---
 block/blk-settings.c   | 24 ------------------------
 include/linux/blkdev.h |  2 --
 2 files changed, 26 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 13865a9f89726c..3c7d8d638ab59d 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -916,30 +916,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 }
 EXPORT_SYMBOL_GPL(queue_limits_stack_bdev);
 
-/**
- * disk_stack_limits - adjust queue limits for stacked drivers
- * @disk: MD/DM gendisk (top)
- * @bdev: the underlying block device (bottom)
- * @offset: offset to beginning of data within component device
- *
- * Description:
- *    Merges the limits for a top level gendisk and a bottom level
- *    block_device.
- */
-void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-		       sector_t offset)
-{
-	struct request_queue *t = disk->queue;
-
-	if (blk_stack_limits(&t->limits, &bdev_get_queue(bdev)->limits,
-			     get_start_sect(bdev) + (offset >> 9)) < 0)
-		pr_notice("%s: Warning: Device %pg is misaligned\n",
-			  disk->disk_name, bdev);
-
-	disk_update_readahead(disk);
-}
-EXPORT_SYMBOL(disk_stack_limits);
-
 /**
  * blk_queue_update_dma_pad - update pad mask
  * @q: the request queue for the device
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 285e82723d641f..75c909865a8b7b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -926,8 +926,6 @@ extern int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 			    sector_t offset);
 void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 			     sector_t offset, const char *pfx);
-extern void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
-			      sector_t offset);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_segment_boundary(struct request_queue *, unsigned long);
 extern void blk_queue_virt_boundary(struct request_queue *, unsigned long);
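With disk_stack_limits gone, a stacking driver folds each component
device into a limits structure under construction and then applies the
result, as mddev_stack_new_rdev() does above. A minimal sketch of that
replacement (the helper name is invented; real drivers stack every
component device, not just one):

    #include <linux/blkdev.h>

    /*
     * Stack the limits of one bottom device into limits built on the
     * stack, then apply them atomically. 'offset' is the start of the
     * data area on the component device, in sectors, matching the
     * queue_limits_stack_bdev signature.
     */
    static int example_stack_one(struct gendisk *top,
                                 struct block_device *bottom,
                                 sector_t offset)
    {
            struct queue_limits lim;

            blk_set_stacking_limits(&lim);
            queue_limits_stack_bdev(&lim, bottom, offset, top->disk_name);
            return queue_limits_set(top->queue, &lim);
    }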