From patchwork Wed Oct 6 23:13:28 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12540759
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: io-uring@vger.kernel.org, Jens Axboe
Subject: [PATCH 1/3] block: bump max plugged deferred size from 16 to 32
Date: Wed, 6 Oct 2021 17:13:28 -0600
Message-Id: <20211006231330.20268-2-axboe@kernel.dk>
In-Reply-To: <20211006231330.20268-1-axboe@kernel.dk>
References: <20211006231330.20268-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

Particularly for NVMe with efficient deferred submission for many
requests, there are nice benefits to be seen by bumping the default max
plug count from 16 to 32. This is especially true for virtualized setups,
where the submit part is more expensive, but it can be noticed even on
native hardware.

Reduce the multiple queue factor from 4 to 2, since we're changing the
default size. While changing it, move the defines into the block layer
private header. These aren't values that anyone outside of the block
layer uses, or should use.

Signed-off-by: Jens Axboe
Reviewed-by: Christoph Hellwig
---
 block/blk-mq.c         | 4 ++--
 block/blk.h            | 6 ++++++
 include/linux/blkdev.h | 2 --
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index a40c94505680..5327abbefbab 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2145,14 +2145,14 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
 }
 
 /*
- * Allow 4x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
+ * Allow 2x BLK_MAX_REQUEST_COUNT requests on plug queue for multiple
  * queues. This is important for md arrays to benefit from merging
  * requests.
  */
 static inline unsigned short blk_plug_max_rq_count(struct blk_plug *plug)
 {
         if (plug->multiple_queues)
-                return BLK_MAX_REQUEST_COUNT * 4;
+                return BLK_MAX_REQUEST_COUNT * 2;
         return BLK_MAX_REQUEST_COUNT;
 }
 
diff --git a/block/blk.h b/block/blk.h
index 21283541a99f..38867b4c5c7e 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -222,6 +222,12 @@ bool blk_bio_list_merge(struct request_queue *q, struct list_head *list,
 void blk_account_io_start(struct request *req);
 void blk_account_io_done(struct request *req, u64 now);
 
+/*
+ * Plug flush limits
+ */
+#define BLK_MAX_REQUEST_COUNT 32
+#define BLK_PLUG_FLUSH_SIZE (128 * 1024)
+
 /*
  * Internal elevator interface
  */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b19172db7eef..472b4ab007c6 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -727,8 +727,6 @@ struct blk_plug {
         bool multiple_queues;
         bool nowait;
 };
-#define BLK_MAX_REQUEST_COUNT 16
-#define BLK_PLUG_FLUSH_SIZE (128 * 1024)
 
 struct blk_plug_cb;
 typedef void (*blk_plug_cb_fn)(struct blk_plug_cb *, bool);
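
For context on what the new value controls: a plugged task accumulates
requests until the per-plug budget is exhausted, at which point the whole
batch is pushed to the driver at once. The sketch below paraphrases that
gating logic; it is an illustration built from the functions and fields
this patch touches, not the literal upstream bodies of
blk_add_rq_to_plug()/blk_mq_submit_bio().

/*
 * Illustrative paraphrase only: park the request on the plug list and
 * flush the whole batch once the per-plug budget is reached. With this
 * patch the budget is 32 for a single queue and 2x32 when the plug
 * spans multiple queues.
 */
static void plug_add_and_maybe_flush(struct blk_plug *plug, struct request *rq)
{
        list_add_tail(&rq->queuelist, &plug->mq_list);
        plug->rq_count++;

        if (plug->rq_count >= blk_plug_max_rq_count(plug))
                blk_flush_plug_list(plug, false);
}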
From patchwork Wed Oct 6 23:13:29 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12540765
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: io-uring@vger.kernel.org, Jens Axboe
Subject: [PATCH 2/3] block: pre-allocate requests if plug is started and is a batch
Date: Wed, 6 Oct 2021 17:13:29 -0600
Message-Id: <20211006231330.20268-3-axboe@kernel.dk>
In-Reply-To: <20211006231330.20268-1-axboe@kernel.dk>
References: <20211006231330.20268-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

The caller typically has a good (or even exact) idea of how many requests
it needs to submit. We can make the request/tag allocation a lot more
efficient if we just allocate N requests/tags upfront when we queue the
first bio from the batch.

Provide a new plug start helper that allows the caller to specify how many
IOs are expected. This sets plug->nr_ios, and we can use that for smarter
request allocation. The plug provides a holding spot for requests, and
request allocation will check it before calling into the normal request
allocation path.

When blk_finish_plug() is called, check if there are unused requests and
free them. This should not happen in normal operations. The exception is
if we get merging; then we may be left with requests that need freeing
when done.

This raises the per-core performance on my setup from ~5.8M to ~6.1M IOPS.
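
For a sense of how the new helper is meant to be driven, here is a hedged
sketch of a submitter using it; the caller, its bios[] array, and nr_ios
are hypothetical (patch 3/3 converts the real io_uring call site):

/* Hypothetical caller, for illustration only. */
static void submit_batch(struct bio **bios, unsigned short nr_ios)
{
        struct blk_plug plug;
        unsigned short i;

        /* announce the batch size so tags can be allocated in one go */
        blk_start_plug_nr_ios(&plug, nr_ios);

        for (i = 0; i < nr_ios; i++)
                submit_bio(bios[i]);    /* can pop requests from plug->cached_rq */

        /* flushes the plug and frees any unused cached requests */
        blk_finish_plug(&plug);
}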
Signed-off-by: Jens Axboe
---
 block/blk-core.c       | 47 ++++++++++++++++------------
 block/blk-mq.c         | 70 ++++++++++++++++++++++++++++++++++--------
 block/blk-mq.h         |  5 +++
 include/linux/blk-mq.h |  5 ++-
 include/linux/blkdev.h | 15 ++++++++-
 5 files changed, 109 insertions(+), 33 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index d83e56b2f64e..9b8c70670190 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1624,6 +1624,31 @@ int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
 }
 EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
 
+void blk_start_plug_nr_ios(struct blk_plug *plug, unsigned short nr_ios)
+{
+        struct task_struct *tsk = current;
+
+        /*
+         * If this is a nested plug, don't actually assign it.
+         */
+        if (tsk->plug)
+                return;
+
+        INIT_LIST_HEAD(&plug->mq_list);
+        plug->cached_rq = NULL;
+        plug->nr_ios = min_t(unsigned short, nr_ios, BLK_MAX_REQUEST_COUNT);
+        plug->rq_count = 0;
+        plug->multiple_queues = false;
+        plug->nowait = false;
+        INIT_LIST_HEAD(&plug->cb_list);
+
+        /*
+         * Store ordering should not be needed here, since a potential
+         * preempt will imply a full memory barrier
+         */
+        tsk->plug = plug;
+}
+
 /**
  * blk_start_plug - initialize blk_plug and track it inside the task_struct
  * @plug: The &struct blk_plug that needs to be initialized
@@ -1649,25 +1674,7 @@ EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
  */
 void blk_start_plug(struct blk_plug *plug)
 {
-        struct task_struct *tsk = current;
-
-        /*
-         * If this is a nested plug, don't actually assign it.
-         */
-        if (tsk->plug)
-                return;
-
-        INIT_LIST_HEAD(&plug->mq_list);
-        INIT_LIST_HEAD(&plug->cb_list);
-        plug->rq_count = 0;
-        plug->multiple_queues = false;
-        plug->nowait = false;
-
-        /*
-         * Store ordering should not be needed here, since a potential
-         * preempt will imply a full memory barrier
-         */
-        tsk->plug = plug;
+        blk_start_plug_nr_ios(plug, 1);
 }
 EXPORT_SYMBOL(blk_start_plug);
 
@@ -1719,6 +1726,8 @@ void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 
         if (!list_empty(&plug->mq_list))
                 blk_mq_flush_plug_list(plug, from_schedule);
+        if (unlikely(!from_schedule && plug->cached_rq))
+                blk_mq_free_plug_rqs(plug);
 }
 
 /**
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 5327abbefbab..ced94eb8e297 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -352,6 +352,7 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
         struct request_queue *q = data->q;
         struct elevator_queue *e = q->elevator;
         u64 alloc_time_ns = 0;
+        struct request *rq;
         unsigned int tag;
 
         /* alloc_time includes depth and tag waits */
@@ -385,10 +386,21 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
         * case just retry the hctx assignment and tag allocation as CPU hotplug
         * should have migrated us to an online CPU by now.
         */
-        tag = blk_mq_get_tag(data);
-        if (tag == BLK_MQ_NO_TAG) {
+        do {
+                tag = blk_mq_get_tag(data);
+                if (tag != BLK_MQ_NO_TAG) {
+                        rq = blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
+                        if (!--data->nr_tags)
+                                return rq;
+                        if (e || data->hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)
+                                return rq;
+                        rq->rq_next = *data->cached_rq;
+                        *data->cached_rq = rq;
+                        data->flags |= BLK_MQ_REQ_NOWAIT;
+                        continue;
+                }
                 if (data->flags & BLK_MQ_REQ_NOWAIT)
-                        return NULL;
+                        break;
 
                 /*
                  * Give up the CPU and sleep for a random short time to ensure
@@ -397,8 +409,15 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
                  */
                 msleep(3);
                 goto retry;
+        } while (1);
+
+        if (data->cached_rq) {
+                rq = *data->cached_rq;
+                *data->cached_rq = rq->rq_next;
+                return rq;
         }
-        return blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
+
+        return NULL;
 }
 
 struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
@@ -408,6 +427,7 @@ struct request *blk_mq_alloc_request(struct request_queue *q, unsigned int op,
                 .q = q,
                 .flags = flags,
                 .cmd_flags = op,
+                .nr_tags = 1,
         };
         struct request *rq;
         int ret;
@@ -436,6 +456,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
                 .q = q,
                 .flags = flags,
                 .cmd_flags = op,
+                .nr_tags = 1,
         };
         u64 alloc_time_ns = 0;
         unsigned int cpu;
@@ -537,6 +558,18 @@ void blk_mq_free_request(struct request *rq)
 }
 EXPORT_SYMBOL_GPL(blk_mq_free_request);
 
+void blk_mq_free_plug_rqs(struct blk_plug *plug)
+{
+        while (plug->cached_rq) {
+                struct request *rq;
+
+                rq = plug->cached_rq;
+                plug->cached_rq = rq->rq_next;
+                percpu_ref_get(&rq->q->q_usage_counter);
+                blk_mq_free_request(rq);
+        }
+}
+
 inline void __blk_mq_end_request(struct request *rq, blk_status_t error)
 {
         u64 now = 0;
@@ -2178,6 +2211,7 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
         const int is_flush_fua = op_is_flush(bio->bi_opf);
         struct blk_mq_alloc_data data = {
                 .q = q,
+                .nr_tags = 1,
         };
         struct request *rq;
         struct blk_plug *plug;
@@ -2204,13 +2238,26 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
 
         hipri = bio->bi_opf & REQ_HIPRI;
 
-        data.cmd_flags = bio->bi_opf;
-        rq = __blk_mq_alloc_request(&data);
-        if (unlikely(!rq)) {
-                rq_qos_cleanup(q, bio);
-                if (bio->bi_opf & REQ_NOWAIT)
-                        bio_wouldblock_error(bio);
-                goto queue_exit;
+        plug = blk_mq_plug(q, bio);
+        if (plug && plug->cached_rq) {
+                rq = plug->cached_rq;
+                plug->cached_rq = rq->rq_next;
+                INIT_LIST_HEAD(&rq->queuelist);
+                data.hctx = rq->mq_hctx;
+        } else {
+                data.cmd_flags = bio->bi_opf;
+                if (plug) {
+                        data.nr_tags = plug->nr_ios;
+                        plug->nr_ios = 1;
+                        data.cached_rq = &plug->cached_rq;
+                }
+                rq = __blk_mq_alloc_request(&data);
+                if (unlikely(!rq)) {
+                        rq_qos_cleanup(q, bio);
+                        if (bio->bi_opf & REQ_NOWAIT)
+                                bio_wouldblock_error(bio);
+                        goto queue_exit;
+                }
         }
 
         trace_block_getrq(bio);
@@ -2229,7 +2276,6 @@ blk_qc_t blk_mq_submit_bio(struct bio *bio)
                 return BLK_QC_T_NONE;
         }
 
-        plug = blk_mq_plug(q, bio);
         if (unlikely(is_flush_fua)) {
                 /* Bypass scheduler for flush requests */
                 blk_insert_flush(rq);
diff --git a/block/blk-mq.h b/block/blk-mq.h
index 171e8cdcff54..5da970bb8865 100644
--- a/block/blk-mq.h
+++ b/block/blk-mq.h
@@ -125,6 +125,7 @@ extern int __blk_mq_register_dev(struct device *dev, struct request_queue *q);
 extern int blk_mq_sysfs_register(struct request_queue *q);
 extern void blk_mq_sysfs_unregister(struct request_queue *q);
 extern void blk_mq_hctx_kobj_init(struct blk_mq_hw_ctx *hctx);
+void blk_mq_free_plug_rqs(struct blk_plug *plug);
 
 void blk_mq_release(struct request_queue *q);
 
@@ -152,6 +153,10 @@ struct blk_mq_alloc_data {
         unsigned int shallow_depth;
         unsigned int cmd_flags;
 
+        /* allocate multiple requests/tags in one go */
+        unsigned int nr_tags;
+        struct request **cached_rq;
+
         /* input & output parameter */
         struct blk_mq_ctx *ctx;
         struct blk_mq_hw_ctx *hctx;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 75d75657df21..0e941f217578 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -90,7 +90,10 @@ struct request {
         struct bio *bio;
         struct bio *biotail;
 
-        struct list_head queuelist;
+        union {
+                struct list_head queuelist;
+                struct request *rq_next;
+        };
 
         /*
          * The hash is used inside the scheduler, and killed once the
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 472b4ab007c6..17705c970d7e 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -722,10 +722,17 @@ extern void blk_set_queue_dying(struct request_queue *);
  */
 struct blk_plug {
         struct list_head mq_list; /* blk-mq requests */
-        struct list_head cb_list; /* md requires an unplug callback */
+
+        /* if ios_left is > 1, we can batch tag/rq allocations */
+        struct request *cached_rq;
+        unsigned short nr_ios;
+
         unsigned short rq_count;
+
         bool multiple_queues;
         bool nowait;
+
+        struct list_head cb_list; /* md requires an unplug callback */
 };
 
 struct blk_plug_cb;
@@ -738,6 +745,7 @@ struct blk_plug_cb {
 extern struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug, void *data,
                                              int size);
 extern void blk_start_plug(struct blk_plug *);
+extern void blk_start_plug_nr_ios(struct blk_plug *, unsigned short);
 extern void blk_finish_plug(struct blk_plug *);
 extern void blk_flush_plug_list(struct blk_plug *, bool);
 
@@ -772,6 +780,11 @@ long nr_blockdev_pages(void);
 struct blk_plug {
 };
 
+static inline void blk_start_plug_nr_ios(struct blk_plug *plug,
+                                         unsigned short nr_ios)
+{
+}
+
 static inline void blk_start_plug(struct blk_plug *plug)
 {
 }
From patchwork Wed Oct 6 23:13:30 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12540761
From: Jens Axboe
To: linux-block@vger.kernel.org
Cc: io-uring@vger.kernel.org, Jens Axboe
Subject: [PATCH 3/3] io_uring: inform block layer of how many requests we are submitting
Date: Wed, 6 Oct 2021 17:13:30 -0600
Message-Id: <20211006231330.20268-4-axboe@kernel.dk>
In-Reply-To: <20211006231330.20268-1-axboe@kernel.dk>
References: <20211006231330.20268-1-axboe@kernel.dk>
X-Mailing-List: linux-block@vger.kernel.org

The block layer can use this knowledge to make smarter decisions on how
to handle the request, if it knows that N more may be coming. Switch to
using blk_start_plug_nr_ios() to pass in that information.

Signed-off-by: Jens Axboe
---
 fs/io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/io_uring.c b/fs/io_uring.c
index 73135c5c6168..90af264fdac6 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -316,6 +316,7 @@ struct io_submit_state {
         bool plug_started;
         bool need_plug;
+        unsigned short submit_nr;
         struct blk_plug plug;
 };
 
@@ -7027,7 +7028,7 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
         if (state->need_plug && io_op_defs[opcode].plug) {
                 state->plug_started = true;
                 state->need_plug = false;
-                blk_start_plug(&state->plug);
+                blk_start_plug_nr_ios(&state->plug, state->submit_nr);
         }
 
         req->file = io_file_get(ctx, req, READ_ONCE(sqe->fd),
@@ -7148,6 +7149,7 @@ static void io_submit_state_start(struct io_submit_state *state,
 {
         state->plug_started = false;
         state->need_plug = max_ios > 2;
+        state->submit_nr = max_ios;
         /* set only head, no need to init link_last in advance */
         state->link.head = NULL;
 }
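
The userspace side of the batching this series targets looks roughly like
the liburing sketch below: many SQEs are queued and then submitted with a
single io_uring_submit() call, which is what lets io_uring tell the block
layer how many IOs are coming. The device path, batch size, and the
omitted error/completion handling are illustrative only, not part of this
series. Build with -luring.

/* Minimal liburing sketch of batched submission; error and completion
 * handling omitted for brevity. */
#define _GNU_SOURCE
#include <liburing.h>
#include <fcntl.h>
#include <stdlib.h>

#define NR_IOS  32
#define BS      4096

int main(void)
{
        struct io_uring ring;
        int fd, i;

        fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);   /* example device */
        io_uring_queue_init(NR_IOS, &ring, 0);

        for (i = 0; i < NR_IOS; i++) {
                struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
                void *buf;

                posix_memalign(&buf, BS, BS);
                io_uring_prep_read(sqe, fd, buf, BS, (unsigned long long)i * BS);
        }

        /* one submit covers all NR_IOS requests, so the kernel can plug them */
        io_uring_submit(&ring);

        io_uring_queue_exit(&ring);
        return 0;
}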