From patchwork Wed Dec 20 10:03:29 2023
X-Patchwork-Submitter: Hongyu Jin
X-Patchwork-Id: 13499765
From: Hongyu Jin
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, axboe@kernel.dk, ebiggers@kernel.org
Cc: zhiguo.niu@unisoc.com, ke.wang@unisoc.com, yibin.ding@unisoc.com, hongyu.jin@unisoc.com, linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, linux-block@vger.kernel.org
Subject: [PATCH v6 1/5] block: Fix bio IO priority setting
Date: Wed, 20 Dec 2023 18:03:29 +0800
Message-Id: <20231220100333.107049-2-hongyu.jin.cn@gmail.com>
In-Reply-To: <20231220100333.107049-1-hongyu.jin.cn@gmail.com>
References: <20231213104216.27845-6-hongyu.jin.cn@gmail.com> <20231220100333.107049-1-hongyu.jin.cn@gmail.com>

From: Hongyu Jin

Move bio_set_ioprio() into submit_bio():

1. Call bio_set_ioprio() only once, on the original bio; bios that are
   cloned or split from the original bio inherit its priority during
   the clone process.
2. The I/O priority is now visible to drivers that implement
   struct gendisk::fops::submit_bio, which helps resolve some of the
   I/O priority loss issues.

This patch depends on commit 82b74cac2849 ("blk-ioprio: Convert from
rqos policy to direct call").

Fixes: a78418e6a04c ("block: Always initialize bio IO priority on submit")
Co-developed-by: Yibin Ding
Signed-off-by: Yibin Ding
Signed-off-by: Hongyu Jin
---
 block/blk-core.c | 10 ++++++++++
 block/blk-mq.c   | 11 -----------
 2 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2eca76ccf4ee..d707ec056f34 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -49,6 +49,7 @@
 #include "blk-pm.h"
 #include "blk-cgroup.h"
 #include "blk-throttle.h"
+#include "blk-ioprio.h"
 
 struct dentry *blk_debugfs_root;
 
@@ -817,6 +818,14 @@ void submit_bio_noacct(struct bio *bio)
 }
 EXPORT_SYMBOL(submit_bio_noacct);
 
+static void bio_set_ioprio(struct bio *bio)
+{
+	/* Nobody set ioprio so far? Initialize it based on task's nice value */
+	if (IOPRIO_PRIO_CLASS(bio->bi_ioprio) == IOPRIO_CLASS_NONE)
+		bio->bi_ioprio = get_current_ioprio();
+	blkcg_set_ioprio(bio);
+}
+
 /**
  * submit_bio - submit a bio to the block device layer for I/O
  * @bio: The &struct bio which describes the I/O
@@ -839,6 +848,7 @@ void submit_bio(struct bio *bio)
 		count_vm_events(PGPGOUT, bio_sectors(bio));
 	}
 
+	bio_set_ioprio(bio);
 	submit_bio_noacct(bio);
 }
 EXPORT_SYMBOL(submit_bio);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ac18f802c027..351e8283eda1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -40,7 +40,6 @@
 #include "blk-stat.h"
 #include "blk-mq-sched.h"
 #include "blk-rq-qos.h"
-#include "blk-ioprio.h"
 
 static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
 static DEFINE_PER_CPU(call_single_data_t, blk_cpu_csd);
@@ -2919,14 +2918,6 @@ static bool blk_mq_can_use_cached_rq(struct request *rq, struct blk_plug *plug,
 	return true;
 }
 
-static void bio_set_ioprio(struct bio *bio)
-{
-	/* Nobody set ioprio so far? Initialize it based on task's nice value */
-	if (IOPRIO_PRIO_CLASS(bio->bi_ioprio) == IOPRIO_CLASS_NONE)
-		bio->bi_ioprio = get_current_ioprio();
-	blkcg_set_ioprio(bio);
-}
-
 /**
  * blk_mq_submit_bio - Create and send a request to block device.
  * @bio: Bio pointer.
@@ -2957,8 +2948,6 @@ void blk_mq_submit_bio(struct bio *bio)
 			return;
 	}
 
-	bio_set_ioprio(bio);
-
 	if (plug) {
 		rq = rq_list_peek(&plug->cached_rq);
 		if (rq && rq->q != q)
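To illustrate the effect of the change above: once submit_bio() initializes
bi_ioprio, a bio-based driver's ->submit_bio() already sees the submitter's
priority, and bios cloned or split from the original keep it. The following
sketch is not part of the series; the driver, its private structure, and the
pr_debug() call are invented purely for illustration.

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/ioprio.h>

/* Hypothetical per-device context of a bio-based stacking driver. */
struct example_dev {
	struct block_device *lower_bdev;
};

static void example_submit_bio(struct bio *bio)
{
	struct example_dev *ed = bio->bi_bdev->bd_disk->private_data;

	/*
	 * With this patch, bi_ioprio is already valid here: submit_bio()
	 * initialized it from the submitting task (or left the value set
	 * by an upper layer) before calling into the driver, instead of
	 * deferring that work to blk_mq_submit_bio().
	 */
	pr_debug("incoming ioprio: class %d, level %d\n",
		 IOPRIO_PRIO_CLASS(bio->bi_ioprio),
		 IOPRIO_PRIO_DATA(bio->bi_ioprio));

	/* Remap to the lower device; the priority travels with the bio. */
	bio_set_dev(bio, ed->lower_bdev);
	submit_bio_noacct(bio);
}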
From patchwork Wed Dec 20 10:03:30 2023
X-Patchwork-Submitter: Hongyu Jin
X-Patchwork-Id: 13499766
From: Hongyu Jin
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, axboe@kernel.dk, ebiggers@kernel.org
Cc: zhiguo.niu@unisoc.com, ke.wang@unisoc.com, yibin.ding@unisoc.com, hongyu.jin@unisoc.com, linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, linux-block@vger.kernel.org
Subject: [PATCH v6 2/5] dm: Support I/O priority for dm_io()
Date: Wed, 20 Dec 2023 18:03:30 +0800
Message-Id: <20231220100333.107049-3-hongyu.jin.cn@gmail.com>
In-Reply-To: <20231220100333.107049-1-hongyu.jin.cn@gmail.com>
References: <20231213104216.27845-6-hongyu.jin.cn@gmail.com> <20231220100333.107049-1-hongyu.jin.cn@gmail.com>

From: Hongyu Jin

Some I/O is dispatched from a kworker whose io_context settings differ
from those of the submitting task, so the priority may need to be
specified explicitly to avoid losing it.

Add an I/O priority parameter to dm_io().

Co-developed-by: Yibin Ding
Signed-off-by: Yibin Ding
Signed-off-by: Hongyu Jin
---
 drivers/md/dm-bufio.c           |  6 +++---
 drivers/md/dm-integrity.c       | 10 +++++-----
 drivers/md/dm-io.c              | 23 +++++++++++++----------
 drivers/md/dm-kcopyd.c          |  4 ++--
 drivers/md/dm-log.c             |  4 ++--
 drivers/md/dm-raid1.c           |  6 +++---
 drivers/md/dm-snap-persistent.c |  4 ++--
 drivers/md/dm-writecache.c      |  8 ++++----
 include/linux/dm-io.h           |  3 ++-
 9 files changed, 36 insertions(+), 32 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index f03d7dba270c..4f2808ef387f 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1315,7 +1315,7 @@ static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
 		io_req.mem.ptr.vma = (char *)b->data + offset;
 	}
 
-	r = dm_io(&io_req, 1, &region, NULL);
+	r = dm_io(&io_req, 1, &region, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r))
 		b->end_io(b, errno_to_blk_status(r));
 }
@@ -2167,7 +2167,7 @@ int dm_bufio_issue_flush(struct dm_bufio_client *c)
 	if (WARN_ON_ONCE(dm_bufio_in_request()))
 		return -EINVAL;
 
-	return dm_io(&io_req, 1, &io_reg, NULL);
+	return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
 }
 EXPORT_SYMBOL_GPL(dm_bufio_issue_flush);
 
@@ -2191,7 +2191,7 @@ int dm_bufio_issue_discard(struct dm_bufio_client *c, sector_t block, sector_t c
 	if (WARN_ON_ONCE(dm_bufio_in_request()))
 		return -EINVAL; /* discards are optional */
 
-	return dm_io(&io_req, 1, &io_reg, NULL);
+	return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
 }
 EXPORT_SYMBOL_GPL(dm_bufio_issue_discard);
 
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index e85c688fd91e..9ffd093ad6cc 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -553,7 +553,7 @@ static int sync_rw_sb(struct dm_integrity_c *ic, blk_opf_t opf)
 		}
 	}
 
-	r = dm_io(&io_req, 1, &io_loc, NULL);
+	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r))
 		return r;
 
@@ -1071,7 +1071,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf,
 	io_loc.sector = ic->start + SB_SECTORS + sector;
 	io_loc.count = n_sectors;
 
-	r = dm_io(&io_req, 1, &io_loc, NULL);
+	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r)) {
 		dm_integrity_io_error(ic, (opf & REQ_OP_MASK) == REQ_OP_READ ?
 				      "reading journal" : "writing journal", r);
@@ -1188,7 +1188,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned int section, u
 	io_loc.sector = target;
 	io_loc.count = n_sectors;
 
-	r = dm_io(&io_req, 1, &io_loc, NULL);
+	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r)) {
 		WARN_ONCE(1, "asynchronous dm_io failed: %d", r);
 		fn(-1UL, data);
@@ -1517,7 +1517,7 @@ static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_dat
 		fr.io_reg.count = 0,
 		fr.ic = ic;
 		init_completion(&fr.comp);
-		r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL);
+		r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL, IOPRIO_DEFAULT);
 		BUG_ON(r);
 	}
 
@@ -2739,7 +2739,7 @@ static void integrity_recalc(struct work_struct *w)
 	io_loc.sector = get_data_sector(ic, area, offset);
 	io_loc.count = n_sectors;
 
-	r = dm_io(&io_req, 1, &io_loc, NULL);
+	r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r)) {
 		dm_integrity_io_error(ic, "reading data", r);
 		goto err;
diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
index f053ce245814..7409490259d1 100644
--- a/drivers/md/dm-io.c
+++ b/drivers/md/dm-io.c
@@ -305,7 +305,7 @@ static void km_dp_init(struct dpages *dp, void *data)
  */
 static void do_region(const blk_opf_t opf, unsigned int region,
 		      struct dm_io_region *where, struct dpages *dp,
-		      struct io *io)
+		      struct io *io, unsigned short ioprio)
 {
 	struct bio *bio;
 	struct page *page;
@@ -354,6 +354,7 @@ static void do_region(const blk_opf_t opf, unsigned int region,
 				       &io->client->bios);
 		bio->bi_iter.bi_sector = where->sector + (where->count - remaining);
 		bio->bi_end_io = endio;
+		bio->bi_ioprio = ioprio;
 		store_io_and_region_in_bio(bio, io, region);
 
 		if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
@@ -383,7 +384,7 @@
 
 static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
 			struct dm_io_region *where, struct dpages *dp,
-			struct io *io, int sync)
+			struct io *io, int sync, unsigned short ioprio)
 {
 	int i;
 	struct dpages old_pages = *dp;
@@ -400,7 +401,7 @@ static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
 	for (i = 0; i < num_regions; i++) {
 		*dp = old_pages;
 		if (where[i].count || (opf & REQ_PREFLUSH))
-			do_region(opf, i, where + i, dp, io);
+			do_region(opf, i, where + i, dp, io, ioprio);
 	}
 
 	/*
@@ -425,7 +426,7 @@ static void sync_io_complete(unsigned long error, void *context)
 
 static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 		   struct dm_io_region *where, blk_opf_t opf, struct dpages *dp,
-		   unsigned long *error_bits)
+		   unsigned long *error_bits, unsigned short ioprio)
 {
 	struct io *io;
 	struct sync_io sio;
@@ -447,7 +448,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(opf, num_regions, where, dp, io, 1);
+	dispatch_io(opf, num_regions, where, dp, io, 1, ioprio);
 
 	wait_for_completion_io(&sio.wait);
 
@@ -459,7 +460,8 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
 
 static int async_io(struct dm_io_client *client, unsigned int num_regions,
 		    struct dm_io_region *where, blk_opf_t opf,
-		    struct dpages *dp, io_notify_fn fn, void *context)
+		    struct dpages *dp, io_notify_fn fn, void *context,
+		    unsigned short ioprio)
 {
 	struct io *io;
 
@@ -479,7 +481,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
 	io->vma_invalidate_address = dp->vma_invalidate_address;
 	io->vma_invalidate_size = dp->vma_invalidate_size;
 
-	dispatch_io(opf, num_regions, where, dp, io, 0);
+	dispatch_io(opf, num_regions, where, dp, io, 0, ioprio);
 
 	return 0;
 }
@@ -521,7 +523,8 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp,
 }
 
 int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
-	  struct dm_io_region *where, unsigned long *sync_error_bits)
+	  struct dm_io_region *where, unsigned long *sync_error_bits,
+	  unsigned short ioprio)
 {
 	int r;
 	struct dpages dp;
@@ -532,11 +535,11 @@ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
 
 	if (!io_req->notify.fn)
 		return sync_io(io_req->client, num_regions, where,
-			       io_req->bi_opf, &dp, sync_error_bits);
+			       io_req->bi_opf, &dp, sync_error_bits, ioprio);
 
 	return async_io(io_req->client, num_regions, where,
 			io_req->bi_opf, &dp, io_req->notify.fn,
-			io_req->notify.context);
+			io_req->notify.context, ioprio);
 }
 EXPORT_SYMBOL(dm_io);
 
diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
index d01807c50f20..79c65c9ad5fa 100644
--- a/drivers/md/dm-kcopyd.c
+++ b/drivers/md/dm-kcopyd.c
@@ -578,9 +578,9 @@ static int run_io_job(struct kcopyd_job *job)
 	io_job_start(job->kc->throttle);
 
 	if (job->op == REQ_OP_READ)
-		r = dm_io(&io_req, 1, &job->source, NULL);
+		r = dm_io(&io_req, 1, &job->source, NULL, IOPRIO_DEFAULT);
 	else
-		r = dm_io(&io_req, job->num_dests, job->dests, NULL);
+		r = dm_io(&io_req, job->num_dests, job->dests, NULL, IOPRIO_DEFAULT);
 
 	return r;
 }
diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index f9f84236dfcd..f7f9c2100937 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -300,7 +300,7 @@ static int rw_header(struct log_c *lc, enum req_op op)
 {
 	lc->io_req.bi_opf = op;
 
-	return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
+	return dm_io(&lc->io_req, 1, &lc->header_location, NULL, IOPRIO_DEFAULT);
 }
 
 static int flush_header(struct log_c *lc)
@@ -313,7 +313,7 @@ static int flush_header(struct log_c *lc)
 
 	lc->io_req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
 
-	return dm_io(&lc->io_req, 1, &null_location, NULL);
+	return dm_io(&lc->io_req, 1, &null_location, NULL, IOPRIO_DEFAULT);
 }
 
 static int read_header(struct log_c *log)
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
index ddcb2bc4a617..9511dae5b556 100644
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -278,7 +278,7 @@ static int mirror_flush(struct dm_target *ti)
 	}
 
 	error_bits = -1;
-	dm_io(&io_req, ms->nr_mirrors, io, &error_bits);
+	dm_io(&io_req, ms->nr_mirrors, io, &error_bits, IOPRIO_DEFAULT);
 	if (unlikely(error_bits != 0)) {
 		for (i = 0; i < ms->nr_mirrors; i++)
 			if (test_bit(i, &error_bits))
@@ -554,7 +554,7 @@ static void read_async_bio(struct mirror *m, struct bio *bio)
 	map_region(&io, m, bio);
 	bio_set_m(bio, m);
 
-	BUG_ON(dm_io(&io_req, 1, &io, NULL));
+	BUG_ON(dm_io(&io_req, 1, &io, NULL, IOPRIO_DEFAULT));
 }
 
 static inline int region_in_sync(struct mirror_set *ms, region_t region,
@@ -681,7 +681,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
 	 */
 	bio_set_m(bio, get_default_mirror(ms));
 
-	BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL));
+	BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL, IOPRIO_DEFAULT));
 }
 
 static void do_writes(struct mirror_set *ms, struct bio_list *writes)
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 15649921f2a9..568d10842b1f 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -223,7 +223,7 @@ static void do_metadata(struct work_struct *work)
 {
 	struct mdata_req *req = container_of(work, struct mdata_req, work);
 
-	req->result = dm_io(req->io_req, 1, req->where, NULL);
+	req->result = dm_io(req->io_req, 1, req->where, NULL, IOPRIO_DEFAULT);
 }
 
 /*
@@ -247,7 +247,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, blk_opf_t opf,
 	struct mdata_req req;
 
 	if (!metadata)
-		return dm_io(&io_req, 1, &where, NULL);
+		return dm_io(&io_req, 1, &where, NULL, IOPRIO_DEFAULT);
 
 	req.where = &where;
 	req.io_req = &io_req;
diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
index 074cb785eafc..6a4279bfb1e7 100644
--- a/drivers/md/dm-writecache.c
+++ b/drivers/md/dm-writecache.c
@@ -531,7 +531,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
 		req.notify.context = &endio;
 
 		/* writing via async dm-io (implied by notify.fn above) won't return an error */
-		(void) dm_io(&req, 1, &region, NULL);
+		(void) dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
 		i = j;
 	}
 
@@ -568,7 +568,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
 	req.notify.fn = NULL;
 	req.notify.context = NULL;
 
-	r = dm_io(&req, 1, &region, NULL);
+	r = dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r))
 		writecache_error(wc, r, "error writing superblock");
 }
@@ -596,7 +596,7 @@ static void writecache_disk_flush(struct dm_writecache *wc, struct dm_dev *dev)
 	req.client = wc->dm_io;
 	req.notify.fn = NULL;
 
-	r = dm_io(&req, 1, &region, NULL);
+	r = dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
 	if (unlikely(r))
 		writecache_error(wc, r, "error flushing metadata: %d", r);
 }
@@ -990,7 +990,7 @@ static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors
 	req.client = wc->dm_io;
 	req.notify.fn = NULL;
 
-	return dm_io(&req, 1, &region, NULL);
+	return dm_io(&req, 1, &region, NULL, IOPRIO_DEFAULT);
 }
 
 static void writecache_resume(struct dm_target *ti)
diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
index 7595142f3fc5..7b2968612b7e 100644
--- a/include/linux/dm-io.h
+++ b/include/linux/dm-io.h
@@ -80,7 +80,8 @@ void dm_io_client_destroy(struct dm_io_client *client);
 * error occurred doing io to the corresponding region.
 */
int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
-	  struct dm_io_region *region, unsigned int long *sync_error_bits);
+	  struct dm_io_region *region, unsigned int long *sync_error_bits,
+	  unsigned short ioprio);
 
 #endif	/* __KERNEL__ */
 #endif	/* _LINUX_DM_IO_H */
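As an illustration of the extended interface in the patch above (a sketch,
not part of the series; the helper name, its arguments, and the 4 KiB region
size are invented), a caller that wants a metadata read issued at the priority
of the bio it is serving now passes that priority instead of IOPRIO_DEFAULT:

#include <linux/bio.h>
#include <linux/dm-io.h>
#include <linux/ioprio.h>

/*
 * Hypothetical helper: synchronously read one 4 KiB block into @buf with
 * dm_io(), forwarding the priority of the bio being served.
 */
static int example_read_block(struct dm_io_client *client,
			      struct block_device *bdev, sector_t sector,
			      void *buf, struct bio *orig_bio)
{
	struct dm_io_request io_req = {
		.bi_opf = REQ_OP_READ,
		.mem.type = DM_IO_KMEM,
		.mem.ptr.addr = buf,
		.notify.fn = NULL,	/* NULL notify.fn means synchronous */
		.client = client,
	};
	struct dm_io_region region = {
		.bdev = bdev,
		.sector = sector,
		.count = 8,		/* 8 sectors == 4 KiB */
	};

	/* The new fifth argument selects the I/O priority of the request. */
	return dm_io(&io_req, 1, &region, NULL, bio_prio(orig_bio));
}

Maintenance I/O with no originating bio keeps passing IOPRIO_DEFAULT, which is
what the mechanical conversions in this patch do for the existing callers.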
From patchwork Wed Dec 20 10:03:31 2023
X-Patchwork-Submitter: Hongyu Jin
X-Patchwork-Id: 13499767
From: Hongyu Jin
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, axboe@kernel.dk, ebiggers@kernel.org
Cc: zhiguo.niu@unisoc.com, ke.wang@unisoc.com, yibin.ding@unisoc.com, hongyu.jin@unisoc.com, linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, linux-block@vger.kernel.org
Subject: [PATCH v6 3/5] dm-bufio: Support I/O priority
Date: Wed, 20 Dec 2023 18:03:31 +0800
Message-Id: <20231220100333.107049-4-hongyu.jin.cn@gmail.com>
In-Reply-To: <20231220100333.107049-1-hongyu.jin.cn@gmail.com>
References: <20231213104216.27845-6-hongyu.jin.cn@gmail.com> <20231220100333.107049-1-hongyu.jin.cn@gmail.com>

From: Hongyu Jin

Some I/O is dispatched from a kworker whose io_context settings differ
from those of the submitting task, so the priority may need to be
specified explicitly to avoid losing it.

Add an I/O priority parameter to dm_bufio_read() and dm_bufio_prefetch().

Co-developed-by: Yibin Ding
Signed-off-by: Yibin Ding
Signed-off-by: Hongyu Jin
---
 drivers/md/dm-bufio.c                         | 39 +++++++++++--------
 drivers/md/dm-ebs-target.c                    |  8 ++--
 drivers/md/dm-integrity.c                     |  2 +-
 drivers/md/dm-snap-persistent.c               |  4 +-
 drivers/md/dm-verity-fec.c                    |  4 +-
 drivers/md/dm-verity-target.c                 |  5 ++-
 drivers/md/persistent-data/dm-block-manager.c |  6 +--
 include/linux/dm-bufio.h                      |  5 ++-
 8 files changed, 40 insertions(+), 33 deletions(-)

diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
index 4f2808ef387f..a6974ecab68e 100644
--- a/drivers/md/dm-bufio.c
+++ b/drivers/md/dm-bufio.c
@@ -1292,7 +1292,8 @@ static void dmio_complete(unsigned long error, void *context)
 }
 
 static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
-		     unsigned int n_sectors, unsigned int offset)
+		     unsigned int n_sectors, unsigned int offset,
+		     unsigned short ioprio)
 {
 	int r;
 	struct dm_io_request io_req = {
@@ -1315,7 +1316,7 @@ static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
 		io_req.mem.ptr.vma = (char *)b->data + offset;
 	}
 
-	r = dm_io(&io_req, 1, &region, NULL, IOPRIO_DEFAULT);
+	r = dm_io(&io_req, 1, &region, NULL, ioprio);
 	if (unlikely(r))
 		b->end_io(b, errno_to_blk_status(r));
 }
@@ -1331,7 +1332,8 @@ static void bio_complete(struct bio *bio)
 }
 
 static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
-		    unsigned int n_sectors, unsigned int offset)
+		    unsigned int n_sectors, unsigned int offset,
+		    unsigned short ioprio)
 {
 	struct bio *bio;
 	char *ptr;
@@ -1339,13 +1341,14 @@ static void use_bio(struct dm_buffer *b, enum req_op op, sector_t sector,
 
 	bio = bio_kmalloc(1, GFP_NOWAIT | __GFP_NORETRY | __GFP_NOWARN);
 	if (!bio) {
-		use_dmio(b, op, sector, n_sectors, offset);
+		use_dmio(b, op, sector, n_sectors, offset, ioprio);
 		return;
 	}
 	bio_init(bio, b->c->bdev, bio->bi_inline_vecs, 1, op);
 	bio->bi_iter.bi_sector = sector;
 	bio->bi_end_io = bio_complete;
 	bio->bi_private = b;
+	bio->bi_ioprio = ioprio;
 
 	ptr = (char *)b->data + offset;
 	len = n_sectors << SECTOR_SHIFT;
@@ -1368,7 +1371,7 @@ static inline sector_t block_to_sector(struct dm_bufio_client *c, sector_t block
 	return sector;
 }
 
-static void submit_io(struct dm_buffer *b, enum req_op op,
+static void submit_io(struct dm_buffer *b, enum req_op op, unsigned short ioprio,
 		      void (*end_io)(struct dm_buffer *, blk_status_t))
 {
 	unsigned int n_sectors;
@@ -1398,9 +1401,9 @@ static void submit_io(struct dm_buffer *b, enum req_op op,
 	}
 
 	if (b->data_mode != DATA_MODE_VMALLOC)
-		use_bio(b, op, sector, n_sectors, offset);
+		use_bio(b, op, sector, n_sectors, offset, ioprio);
 	else
-		use_dmio(b, op, sector, n_sectors, offset);
+		use_dmio(b, op, sector, n_sectors, offset, ioprio);
 }
 
 /*
@@ -1456,7 +1459,7 @@ static void __write_dirty_buffer(struct dm_buffer *b,
 	b->write_end = b->dirty_end;
 
 	if (!write_list)
-		submit_io(b, REQ_OP_WRITE, write_endio);
+		submit_io(b, REQ_OP_WRITE, IOPRIO_DEFAULT, write_endio);
 	else
 		list_add_tail(&b->write_list, write_list);
 }
@@ -1470,7 +1473,7 @@ static void __flush_write_list(struct list_head *write_list)
 		struct dm_buffer *b =
 			list_entry(write_list->next, struct dm_buffer, write_list);
 		list_del(&b->write_list);
-		submit_io(b, REQ_OP_WRITE, write_endio);
+		submit_io(b, REQ_OP_WRITE, IOPRIO_DEFAULT, write_endio);
 		cond_resched();
 	}
 	blk_finish_plug(&plug);
@@ -1852,7 +1855,8 @@ static void read_endio(struct dm_buffer *b, blk_status_t status)
  * and uses dm_bufio_mark_buffer_dirty to write new data back).
  */
 static void *new_read(struct dm_bufio_client *c, sector_t block,
-		      enum new_flag nf, struct dm_buffer **bp)
+		      enum new_flag nf, struct dm_buffer **bp,
+		      unsigned short ioprio)
 {
 	int need_submit = 0;
 	struct dm_buffer *b;
@@ -1905,7 +1909,7 @@ static void *new_read(struct dm_bufio_client *c, sector_t block,
 		return NULL;
 
 	if (need_submit)
-		submit_io(b, REQ_OP_READ, read_endio);
+		submit_io(b, REQ_OP_READ, ioprio, read_endio);
 
 	if (nf != NF_GET)	/* we already tested this condition above */
 		wait_on_bit_io(&b->state, B_READING, TASK_UNINTERRUPTIBLE);
@@ -1926,17 +1930,17 @@ static void *new_read(struct dm_bufio_client *c, sector_t block,
 void *dm_bufio_get(struct dm_bufio_client *c, sector_t block,
 		   struct dm_buffer **bp)
 {
-	return new_read(c, block, NF_GET, bp);
+	return new_read(c, block, NF_GET, bp, IOPRIO_DEFAULT);
 }
 EXPORT_SYMBOL_GPL(dm_bufio_get);
 
 void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
-		    struct dm_buffer **bp)
+		    struct dm_buffer **bp, unsigned short ioprio)
 {
 	if (WARN_ON_ONCE(dm_bufio_in_request()))
 		return ERR_PTR(-EINVAL);
 
-	return new_read(c, block, NF_READ, bp);
+	return new_read(c, block, NF_READ, bp, ioprio);
 }
 EXPORT_SYMBOL_GPL(dm_bufio_read);
 
@@ -1946,12 +1950,13 @@ void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
 	if (WARN_ON_ONCE(dm_bufio_in_request()))
 		return ERR_PTR(-EINVAL);
 
-	return new_read(c, block, NF_FRESH, bp);
+	return new_read(c, block, NF_FRESH, bp, IOPRIO_DEFAULT);
 }
 EXPORT_SYMBOL_GPL(dm_bufio_new);
 
 void dm_bufio_prefetch(struct dm_bufio_client *c,
-		       sector_t block, unsigned int n_blocks)
+		       sector_t block, unsigned int n_blocks,
+		       unsigned short ioprio)
 {
 	struct blk_plug plug;
 
@@ -1987,7 +1992,7 @@ void dm_bufio_prefetch(struct dm_bufio_client *c,
 			dm_bufio_unlock(c);
 
 			if (need_submit)
-				submit_io(b, REQ_OP_READ, read_endio);
+				submit_io(b, REQ_OP_READ, ioprio, read_endio);
 			dm_bufio_release(b);
 
 			cond_resched();
diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
index 435b45201f4d..8198c8a7b416 100644
--- a/drivers/md/dm-ebs-target.c
+++ b/drivers/md/dm-ebs-target.c
@@ -84,7 +84,7 @@ static int __ebs_rw_bvec(struct ebs_c *ec, enum req_op op, struct bio_vec *bv,
 
 	/* Avoid reading for writes in case bio vector's page overwrites block completely. */
 	if (op == REQ_OP_READ || buf_off || bv_len < dm_bufio_get_block_size(ec->bufio))
-		ba = dm_bufio_read(ec->bufio, block, &b);
+		ba = dm_bufio_read(ec->bufio, block, &b, IOPRIO_DEFAULT);
 	else
 		ba = dm_bufio_new(ec->bufio, block, &b);
 
@@ -194,13 +194,13 @@ static void __ebs_process_bios(struct work_struct *ws)
 	bio_list_for_each(bio, &bios) {
 		block1 = __sector_to_block(ec, bio->bi_iter.bi_sector);
 		if (bio_op(bio) == REQ_OP_READ)
-			dm_bufio_prefetch(ec->bufio, block1, __nr_blocks(ec, bio));
+			dm_bufio_prefetch(ec->bufio, block1, __nr_blocks(ec, bio), IOPRIO_DEFAULT);
 		else if (bio_op(bio) == REQ_OP_WRITE && !(bio->bi_opf & REQ_PREFLUSH)) {
 			block2 = __sector_to_block(ec, bio_end_sector(bio));
 			if (__block_mod(bio->bi_iter.bi_sector, ec->u_bs))
-				dm_bufio_prefetch(ec->bufio, block1, 1);
+				dm_bufio_prefetch(ec->bufio, block1, 1, IOPRIO_DEFAULT);
 			if (__block_mod(bio_end_sector(bio), ec->u_bs) && block2 != block1)
-				dm_bufio_prefetch(ec->bufio, block2, 1);
+				dm_bufio_prefetch(ec->bufio, block2, 1, IOPRIO_DEFAULT);
 		}
 	}
 
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 9ffd093ad6cc..1e40e712bcd7 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -1418,7 +1418,7 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
 		if (unlikely(r))
 			return r;
 
-		data = dm_bufio_read(ic->bufio, *metadata_block, &b);
+		data = dm_bufio_read(ic->bufio, *metadata_block, &b, IOPRIO_DEFAULT);
 		if (IS_ERR(data))
 			return PTR_ERR(data);
 
diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 568d10842b1f..a2072b95e28c 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -524,7 +524,7 @@ static int read_exceptions(struct pstore *ps,
 			if (unlikely(pf_chunk >= dm_bufio_get_device_size(client)))
 				break;
 
-			dm_bufio_prefetch(client, pf_chunk, 1);
+			dm_bufio_prefetch(client, pf_chunk, 1, IOPRIO_DEFAULT);
 			prefetch_area++;
 			if (unlikely(!prefetch_area))
 				break;
@@ -533,7 +533,7 @@ static int read_exceptions(struct pstore *ps,
 
 		chunk = area_location(ps, ps->current_area);
 
-		area = dm_bufio_read(client, chunk, &bp);
+		area = dm_bufio_read(client, chunk, &bp, IOPRIO_DEFAULT);
 		if (IS_ERR(area)) {
 			r = PTR_ERR(area);
 			goto ret_destroy_bufio;
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index b475200d8586..49db19e537f9 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -69,7 +69,7 @@ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
 	block = div64_u64_rem(position, v->fec->io_size, &rem);
 	*offset = (unsigned int)rem;
 
-	res = dm_bufio_read(v->fec->bufio, block, buf);
+	res = dm_bufio_read(v->fec->bufio, block, buf, IOPRIO_DEFAULT);
 	if (IS_ERR(res)) {
 		DMERR("%s: FEC %llu: parity read failed (block %llu): %ld",
 		      v->data_dev->name, (unsigned long long)rsb,
@@ -248,7 +248,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
 			bufio = v->bufio;
 		}
 
-		bbuf = dm_bufio_read(bufio, block, &buf);
+		bbuf = dm_bufio_read(bufio, block, &buf, IOPRIO_DEFAULT);
 		if (IS_ERR(bbuf)) {
 			DMWARN_LIMIT("%s: FEC %llu: read failed (%llu): %ld",
 				     v->data_dev->name,
diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 14e58ae70521..4758bfe2c156 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -308,7 +308,7 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
 			return -EAGAIN;
 		}
 	} else
-		data = dm_bufio_read(v->bufio, hash_block, &buf);
+		data = dm_bufio_read(v->bufio, hash_block, &buf, IOPRIO_DEFAULT);
 
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
@@ -719,7 +719,8 @@ static void verity_prefetch_io(struct work_struct *work)
 		}
 no_prefetch_cluster:
 		dm_bufio_prefetch(v->bufio, hash_block_start,
-				  hash_block_end - hash_block_start + 1);
+				  hash_block_end - hash_block_start + 1,
+				  IOPRIO_DEFAULT);
 	}
 
 	kfree(pw);
diff --git a/drivers/md/persistent-data/dm-block-manager.c b/drivers/md/persistent-data/dm-block-manager.c
index 0e010e1204aa..86a4f73d2f3d 100644
--- a/drivers/md/persistent-data/dm-block-manager.c
+++ b/drivers/md/persistent-data/dm-block-manager.c
@@ -474,7 +474,7 @@ int dm_bm_read_lock(struct dm_block_manager *bm, dm_block_t b,
 	void *p;
 	int r;
 
-	p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result);
+	p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result, IOPRIO_DEFAULT);
 	if (IS_ERR(p))
 		return PTR_ERR(p);
 
@@ -510,7 +510,7 @@ int dm_bm_write_lock(struct dm_block_manager *bm,
 	if (dm_bm_is_read_only(bm))
 		return -EPERM;
 
-	p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result);
+	p = dm_bufio_read(bm->bufio, b, (struct dm_buffer **) result, IOPRIO_DEFAULT);
 	if (IS_ERR(p))
 		return PTR_ERR(p);
 
@@ -624,7 +624,7 @@ EXPORT_SYMBOL_GPL(dm_bm_flush);
 
 void dm_bm_prefetch(struct dm_block_manager *bm, dm_block_t b)
 {
-	dm_bufio_prefetch(bm->bufio, b, 1);
+	dm_bufio_prefetch(bm->bufio, b, 1, IOPRIO_DEFAULT);
 }
 
 bool dm_bm_is_read_only(struct dm_block_manager *bm)
diff --git a/include/linux/dm-bufio.h b/include/linux/dm-bufio.h
index 75e7d8cbb532..256a246c7b97 100644
--- a/include/linux/dm-bufio.h
+++ b/include/linux/dm-bufio.h
@@ -62,7 +62,7 @@ void dm_bufio_set_sector_offset(struct dm_bufio_client *c, sector_t start);
 * it dirty.
 */
void *dm_bufio_read(struct dm_bufio_client *c, sector_t block,
-		    struct dm_buffer **bp);
+		    struct dm_buffer **bp, unsigned short ioprio);
 
 /*
  * Like dm_bufio_read, but return buffer from cache, don't read
@@ -84,7 +84,8 @@ void *dm_bufio_new(struct dm_bufio_client *c, sector_t block,
 * I/O to finish.
 */
void dm_bufio_prefetch(struct dm_bufio_client *c,
-		       sector_t block, unsigned int n_blocks);
+		       sector_t block, unsigned int n_blocks,
+		       unsigned short ioprio);
 
 /*
 * Release a reference obtained with dm_bufio_{read,get,new}. The data
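By analogy with the dm_io() example earlier (again a sketch, not part of the
series; the helper and its arguments are invented), a dm-bufio client that
wants its cache misses issued at the priority of the bio it is processing
would use the new parameters like this, while existing callers keep passing
IOPRIO_DEFAULT:

#include <linux/bio.h>
#include <linux/dm-bufio.h>
#include <linux/err.h>
#include <linux/ioprio.h>

/*
 * Hypothetical helper: fetch one metadata block for @orig_bio, reading it
 * (and prefetching the following block) at the originating bio's priority.
 */
static int example_load_meta(struct dm_bufio_client *c, sector_t block,
			     struct bio *orig_bio)
{
	struct dm_buffer *bp;
	void *data;

	/* Warm up the next block at the same priority. */
	dm_bufio_prefetch(c, block + 1, 1, bio_prio(orig_bio));

	data = dm_bufio_read(c, block, &bp, bio_prio(orig_bio));
	if (IS_ERR(data))
		return PTR_ERR(data);

	/* ... inspect or copy the cached block here ... */

	dm_bufio_release(bp);
	return 0;
}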
From patchwork Wed Dec 20 10:03:32 2023
X-Patchwork-Submitter: Hongyu Jin
X-Patchwork-Id: 13499768
From: Hongyu Jin
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, axboe@kernel.dk, ebiggers@kernel.org
Cc: zhiguo.niu@unisoc.com, ke.wang@unisoc.com, yibin.ding@unisoc.com, hongyu.jin@unisoc.com, linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, linux-block@vger.kernel.org
Subject: [PATCH v6 4/5] dm verity: Fix I/O priority lost when read FEC and hash
Date: Wed, 20 Dec 2023 18:03:32 +0800
Message-Id: <20231220100333.107049-5-hongyu.jin.cn@gmail.com>
In-Reply-To: <20231220100333.107049-1-hongyu.jin.cn@gmail.com>
References: <20231213104216.27845-6-hongyu.jin.cn@gmail.com> <20231220100333.107049-1-hongyu.jin.cn@gmail.com>

From: Hongyu Jin

After the data is read, the verification or error-correction process may
trigger new I/O that loses the priority of the original I/O; that is,
verification of a higher-priority I/O can end up blocked behind
lower-priority I/O.

Make the verification and error-correction I/O follow the priority of the
original I/O.

Co-developed-by: Yibin Ding
Signed-off-by: Yibin Ding
Signed-off-by: Hongyu Jin
---
 drivers/md/dm-verity-fec.c    | 20 +++++++++++++++-----
 drivers/md/dm-verity-target.c | 12 ++++++++----
 2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 49db19e537f9..a9e5402e3d43 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -18,6 +18,12 @@ bool verity_fec_is_enabled(struct dm_verity *v)
 	return v->fec && v->fec->dev;
 }
 
+static inline struct dm_verity_io *verity_io(struct dm_verity *v, struct dm_verity_fec_io *fio)
+{
+	return (struct dm_verity_io *)
+		((char *)fio + sizeof(struct dm_verity_fec_io) - v->ti->per_io_data_size);
+}
+
 /*
  * Return a pointer to dm_verity_fec_io after dm_verity_io and its variable
  * length fields.
@@ -60,7 +66,8 @@ static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio,
 * to the data block. Caller is responsible for releasing buf.
 */
static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
-			   unsigned int *offset, struct dm_buffer **buf)
+			   unsigned int *offset, struct dm_buffer **buf,
+			   unsigned short ioprio)
 {
 	u64 position, block, rem;
 	u8 *res;
@@ -69,7 +76,7 @@ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
 	block = div64_u64_rem(position, v->fec->io_size, &rem);
 	*offset = (unsigned int)rem;
 
-	res = dm_bufio_read(v->fec->bufio, block, buf, IOPRIO_DEFAULT);
+	res = dm_bufio_read(v->fec->bufio, block, buf, ioprio);
 	if (IS_ERR(res)) {
 		DMERR("%s: FEC %llu: parity read failed (block %llu): %ld",
 		      v->data_dev->name, (unsigned long long)rsb,
@@ -129,8 +136,10 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
 	struct dm_buffer *buf;
 	unsigned int n, i, offset;
 	u8 *par, *block;
+	struct dm_verity_io *io = verity_io(v, fio);
+	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 
-	par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
+	par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
 	if (IS_ERR(par))
 		return PTR_ERR(par);
 
@@ -158,7 +167,7 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
 		if (offset >= v->fec->io_size) {
 			dm_bufio_release(buf);
-			par = fec_read_parity(v, rsb, block_offset, &offset, &buf);
+			par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
 			if (IS_ERR(par))
 				return PTR_ERR(par);
 		}
@@ -210,6 +219,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
 	u8 *bbuf, *rs_block;
 	u8 want_digest[HASH_MAX_DIGESTSIZE];
 	unsigned int n, k;
+	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 
 	if (neras)
 		*neras = 0;
@@ -248,7 +258,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
 			bufio = v->bufio;
 		}
 
-		bbuf = dm_bufio_read(bufio, block, &buf, IOPRIO_DEFAULT);
+		bbuf = dm_bufio_read(bufio, block, &buf, bio_prio(bio));
 		if (IS_ERR(bbuf)) {
 			DMWARN_LIMIT("%s: FEC %llu: read failed (%llu): %ld",
 				     v->data_dev->name,
diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
index 4758bfe2c156..8cbf81fc0031 100644
--- a/drivers/md/dm-verity-target.c
+++ b/drivers/md/dm-verity-target.c
@@ -51,6 +51,7 @@ static DEFINE_STATIC_KEY_FALSE(use_tasklet_enabled);
 struct dm_verity_prefetch_work {
 	struct work_struct work;
 	struct dm_verity *v;
+	unsigned short ioprio;
 	sector_t block;
 	unsigned int n_blocks;
 };
@@ -294,6 +295,7 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
 	int r;
 	sector_t hash_block;
 	unsigned int offset;
+	struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
 
 	verity_hash_at_level(v, block, level, &hash_block, &offset);
 
@@ -308,7 +310,7 @@ static int verity_verify_level(struct dm_verity *v, struct dm_verity_io *io,
 			return -EAGAIN;
 		}
 	} else
-		data = dm_bufio_read(v->bufio, hash_block, &buf, IOPRIO_DEFAULT);
+		data = dm_bufio_read(v->bufio, hash_block, &buf, bio_prio(bio));
 
 	if (IS_ERR(data))
 		return PTR_ERR(data);
 
@@ -720,13 +722,14 @@ static void verity_prefetch_io(struct work_struct *work)
 		}
no_prefetch_cluster:
 		dm_bufio_prefetch(v->bufio, hash_block_start,
 				  hash_block_end - hash_block_start + 1,
-				  IOPRIO_DEFAULT);
+				  pw->ioprio);
 	}
 
 	kfree(pw);
 }
 
-static void verity_submit_prefetch(struct dm_verity *v, struct dm_verity_io *io)
+static void verity_submit_prefetch(struct dm_verity *v, struct dm_verity_io *io,
+				   unsigned short ioprio)
 {
 	sector_t block = io->block;
 	unsigned int n_blocks = io->n_blocks;
@@ -754,6 +757,7 @@ static void verity_submit_prefetch(struct dm_verity *v, struct dm_verity_io *io)
 	pw->v = v;
 	pw->block = block;
 	pw->n_blocks = n_blocks;
+	pw->ioprio = ioprio;
 	queue_work(v->verify_wq, &pw->work);
 }
 
@@ -796,7 +800,7 @@ static int verity_map(struct dm_target *ti, struct bio *bio)
 
 	verity_fec_init_io(io);
 
-	verity_submit_prefetch(v, io);
+	verity_submit_prefetch(v, io, bio_prio(bio));
 
 	submit_bio_noacct(bio);
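The key step in the patch above is recovering the original bio, and hence its
priority, from the per-bio data that dm hands to the target. Outside dm-verity
the same pattern looks like this (a sketch only; the structure and function
are hypothetical, and per_io_data_size must have been set in the target's
constructor):

#include <linux/bio.h>
#include <linux/device-mapper.h>

/* Hypothetical per-bio state of a dm target, sized via ti->per_io_data_size. */
struct example_io {
	sector_t block;
	unsigned int n_blocks;
};

static unsigned short example_io_prio(struct dm_target *ti,
				      struct example_io *io)
{
	/*
	 * dm_bio_from_per_bio_data() walks back from the per-bio data to the
	 * bio it is embedded in, so deferred work (verification, error
	 * correction, prefetch) can reuse the submitter's priority.
	 */
	struct bio *bio = dm_bio_from_per_bio_data(io, ti->per_io_data_size);

	return bio_prio(bio);
}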
From patchwork Wed Dec 20 10:03:33 2023
X-Patchwork-Submitter: Hongyu Jin
X-Patchwork-Id: 13499769
From: Hongyu Jin
To: agk@redhat.com, snitzer@kernel.org, mpatocka@redhat.com, axboe@kernel.dk, ebiggers@kernel.org
Cc: zhiguo.niu@unisoc.com, ke.wang@unisoc.com, yibin.ding@unisoc.com, hongyu.jin@unisoc.com, linux-kernel@vger.kernel.org, dm-devel@lists.linux.dev, linux-block@vger.kernel.org
Subject: [PATCH v6 5/5] dm-crypt: Fix lost ioprio when queuing write bios
Date: Wed, 20 Dec 2023 18:03:33 +0800
Message-Id: <20231220100333.107049-6-hongyu.jin.cn@gmail.com>
In-Reply-To: <20231220100333.107049-1-hongyu.jin.cn@gmail.com>
References: <20231213104216.27845-6-hongyu.jin.cn@gmail.com> <20231220100333.107049-1-hongyu.jin.cn@gmail.com>

From: Hongyu Jin

Since dm-crypt queues writes to a different kernel thread (workqueue),
the bios are dispatched from tasks whose io_context->ioprio settings and
blkcg differ from those of the submitting task, which gives the I/O
scheduler an incorrect ioprio.

Get the original I/O priority via struct dm_crypt_io::base_bio and set
that priority on the write bio.

Link: https://lore.kernel.org/dm-devel/alpine.LRH.2.11.1612141049250.13402@mail.ewheeler.net
Signed-off-by: Hongyu Jin
---
 drivers/md/dm-crypt.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 2ae8560b6a14..ba6e794f7871 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1683,6 +1683,7 @@ static struct bio *crypt_alloc_buffer(struct dm_crypt_io *io, unsigned int size)
 				 GFP_NOIO, &cc->bs);
 	clone->bi_private = io;
 	clone->bi_end_io = crypt_endio;
+	clone->bi_ioprio = io->base_bio->bi_ioprio;
 
 	remaining_size = size;
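The same one-line pattern applies to any target that builds bios on a worker
thread on behalf of an earlier bio: copy the originating bio's priority
explicitly, because submit_bio() on the kworker would otherwise fall back to
the worker task's own, unrelated I/O priority. A generic sketch (the helper
name and parameters are illustrative, not taken from dm-crypt):

#include <linux/bio.h>

/*
 * Hypothetical helper called from a worker thread: propagate the priority
 * of the bio received in ->map() ("base_bio") to a bio allocated later.
 */
static void example_inherit_ioprio(struct bio *worker_bio,
				   struct bio *base_bio)
{
	worker_bio->bi_ioprio = base_bio->bi_ioprio;
}

In the patch above, crypt_alloc_buffer() performs exactly this assignment on
the clone that dm-crypt submits from its write workqueue.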