From patchwork Thu Dec 16 16:05:34 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12681551
From: Jens Axboe
To: io-uring@vger.kernel.org, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org
Cc: Jens Axboe, Christoph Hellwig
Subject: [PATCH 1/4] block: add mq_ops->queue_rqs hook
Date: Thu, 16 Dec 2021 09:05:34 -0700
Message-Id: <20211216160537.73236-2-axboe@kernel.dk>
In-Reply-To: <20211216160537.73236-1-axboe@kernel.dk>
References: <20211216160537.73236-1-axboe@kernel.dk>

If we have a list of requests in our plug list, send it to the driver
in one go, if possible.
The driver must set mq_ops->queue_rqs() to
support this; if it is not set, the usual one-by-one path is used.

Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
---
 block/blk-mq.c         | 26 +++++++++++++++++++++++---
 include/linux/blk-mq.h |  8 ++++++++
 2 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 75154cc788db..51991232824a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2553,6 +2553,7 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 {
 	struct blk_mq_hw_ctx *this_hctx;
 	struct blk_mq_ctx *this_ctx;
+	struct request *rq;
 	unsigned int depth;
 	LIST_HEAD(list);
 
@@ -2561,7 +2562,28 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	plug->rq_count = 0;
 
 	if (!plug->multiple_queues && !plug->has_elevator && !from_schedule) {
-		struct request_queue *q = rq_list_peek(&plug->mq_list)->q;
+		struct request_queue *q;
+
+		rq = rq_list_peek(&plug->mq_list);
+		q = rq->q;
+
+		/*
+		 * Peek first request and see if we have a ->queue_rqs() hook.
+		 * If we do, we can dispatch the whole plug list in one go. We
+		 * already know at this point that all requests belong to the
+		 * same queue, caller must ensure that's the case.
+		 *
+		 * Since we pass off the full list to the driver at this point,
+		 * we do not increment the active request count for the queue.
+		 * Bypass shared tags for now because of that.
+		 */
+		if (q->mq_ops->queue_rqs &&
+		    !(rq->mq_hctx->flags & BLK_MQ_F_TAG_QUEUE_SHARED)) {
+			blk_mq_run_dispatch_ops(q,
+				q->mq_ops->queue_rqs(&plug->mq_list));
+			if (rq_list_empty(plug->mq_list))
+				return;
+		}
 
 		blk_mq_run_dispatch_ops(q,
 				blk_mq_plug_issue_direct(plug, false));
@@ -2573,8 +2595,6 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
 	this_ctx = NULL;
 	depth = 0;
 	do {
-		struct request *rq;
-
 		rq = rq_list_pop(&plug->mq_list);
 
 		if (!this_hctx) {
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 772f8f921526..550996cf419c 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -492,6 +492,14 @@ struct blk_mq_ops {
 	 */
 	void (*commit_rqs)(struct blk_mq_hw_ctx *);
 
+	/**
+	 * @queue_rqs: Queue a list of new requests. Driver is guaranteed
+	 * that each request belongs to the same queue. If the driver doesn't
+	 * empty the @rqlist completely, then the rest will be queued
+	 * individually by the block layer upon return.
+	 */
+	void (*queue_rqs)(struct request **rqlist);
+
 	/**
 	 * @get_budget: Reserve budget before queue request, once .queue_rq is
 	 * run, it is driver's responsibility to release the
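
For illustration only (not part of this patch): a minimal sketch of the
driver-side contract described by the @queue_rqs documentation above. The
driver pops requests off the list and issues what it can; anything it
cannot issue is left on the list (collected on a local requeue list here)
so the block layer falls back to the one-by-one path on return.
foo_queue_rqs() and foo_issue() are hypothetical names.

static void foo_queue_rqs(struct request **rqlist)
{
	struct request *requeue_list = NULL;

	while (!rq_list_empty(*rqlist)) {
		struct request *req = rq_list_pop(rqlist);

		/* all requests are guaranteed to be for the same queue */
		if (foo_issue(req) != BLK_STS_OK)
			rq_list_add(&requeue_list, req);
	}

	/* whatever is left in *rqlist on return is queued individually */
	*rqlist = requeue_list;
}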

From patchwork Thu Dec 16 16:05:35 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12681553
From: Jens Axboe
To: io-uring@vger.kernel.org, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org
Cc: Jens Axboe, Chaitanya Kulkarni, Hannes Reinecke, Max Gurtovoy
Subject: [PATCH 2/4] nvme: split command copy into a helper
Date: Thu, 16 Dec 2021 09:05:35 -0700
Message-Id: <20211216160537.73236-3-axboe@kernel.dk>
In-Reply-To: <20211216160537.73236-1-axboe@kernel.dk>
References: <20211216160537.73236-1-axboe@kernel.dk>

We'll need it for batched submit as well.

Reviewed-by: Chaitanya Kulkarni
Reviewed-by: Hannes Reinecke
Reviewed-by: Max Gurtovoy
Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8637538f3fd5..09ea21f75439 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -500,6 +500,15 @@ static inline void nvme_write_sq_db(struct nvme_queue *nvmeq, bool write_sq)
 	nvmeq->last_sq_tail = nvmeq->sq_tail;
 }
 
+static inline void nvme_sq_copy_cmd(struct nvme_queue *nvmeq,
+				    struct nvme_command *cmd)
+{
+	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes), cmd,
+	       sizeof(*cmd));
+	if (++nvmeq->sq_tail == nvmeq->q_depth)
+		nvmeq->sq_tail = 0;
+}
+
 /**
  * nvme_submit_cmd() - Copy a command into a queue and ring the doorbell
  * @nvmeq: The queue to use
@@ -510,10 +519,7 @@ static void nvme_submit_cmd(struct nvme_queue *nvmeq, struct nvme_command *cmd,
 			    bool write_sq)
 {
 	spin_lock(&nvmeq->sq_lock);
-	memcpy(nvmeq->sq_cmds + (nvmeq->sq_tail << nvmeq->sqes),
-	       cmd, sizeof(*cmd));
-	if (++nvmeq->sq_tail == nvmeq->q_depth)
-		nvmeq->sq_tail = 0;
+	nvme_sq_copy_cmd(nvmeq, cmd);
 	nvme_write_sq_db(nvmeq, write_sq);
 	spin_unlock(&nvmeq->sq_lock);
 }
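
Why the split helps (illustration only, not part of the patch): with the
copy separated from the doorbell write, a batched path can copy several
commands under a single sq_lock acquisition and ring the doorbell once.
foo_submit_batch(), cmds and nr are hypothetical; the real batched path
is added in patch 4 of this series.

static void foo_submit_batch(struct nvme_queue *nvmeq,
			     struct nvme_command *cmds, int nr)
{
	int i;

	spin_lock(&nvmeq->sq_lock);
	for (i = 0; i < nr; i++)
		nvme_sq_copy_cmd(nvmeq, &cmds[i]);	/* tail wrap handled inside */
	nvme_write_sq_db(nvmeq, true);			/* one doorbell write */
	spin_unlock(&nvmeq->sq_lock);
}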

From patchwork Thu Dec 16 16:05:36 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12681555
From: Jens Axboe
To: io-uring@vger.kernel.org, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org
Cc: Jens Axboe, Hannes Reinecke, Christoph Hellwig
Subject: [PATCH 3/4] nvme: separate command prep and issue
Date: Thu, 16 Dec 2021 09:05:36 -0700
Message-Id: <20211216160537.73236-4-axboe@kernel.dk>
In-Reply-To: <20211216160537.73236-1-axboe@kernel.dk>
References: <20211216160537.73236-1-axboe@kernel.dk>

Add an nvme_prep_rq() helper to set up a command, and adapt
nvme_queue_rq() to use this helper.

Reviewed-by: Hannes Reinecke
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 57 ++++++++++++++++++++++++-----------------
 1 file changed, 33 insertions(+), 24 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 09ea21f75439..6be6b1ab4285 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -918,52 +918,32 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
 	return BLK_STS_OK;
 }
 
-/*
- * NOTE: ns is NULL when called on the admin queue.
- */
-static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
-			 const struct blk_mq_queue_data *bd)
+static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
 {
-	struct nvme_ns *ns = hctx->queue->queuedata;
-	struct nvme_queue *nvmeq = hctx->driver_data;
-	struct nvme_dev *dev = nvmeq->dev;
-	struct request *req = bd->rq;
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
-	struct nvme_command *cmnd = &iod->cmd;
 	blk_status_t ret;
 
 	iod->aborted = 0;
 	iod->npages = -1;
 	iod->nents = 0;
 
-	/*
-	 * We should not need to do this, but we're still using this to
-	 * ensure we can drain requests on a dying queue.
-	 */
-	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
-		return BLK_STS_IOERR;
-
-	if (!nvme_check_ready(&dev->ctrl, req, true))
-		return nvme_fail_nonready_command(&dev->ctrl, req);
-
-	ret = nvme_setup_cmd(ns, req);
+	ret = nvme_setup_cmd(req->q->queuedata, req);
 	if (ret)
 		return ret;
 
 	if (blk_rq_nr_phys_segments(req)) {
-		ret = nvme_map_data(dev, req, cmnd);
+		ret = nvme_map_data(dev, req, &iod->cmd);
 		if (ret)
 			goto out_free_cmd;
 	}
 
 	if (blk_integrity_rq(req)) {
-		ret = nvme_map_metadata(dev, req, cmnd);
+		ret = nvme_map_metadata(dev, req, &iod->cmd);
 		if (ret)
 			goto out_unmap_data;
 	}
 
 	blk_mq_start_request(req);
-	nvme_submit_cmd(nvmeq, cmnd, bd->last);
 	return BLK_STS_OK;
 out_unmap_data:
 	nvme_unmap_data(dev, req);
@@ -972,6 +952,35 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
+/*
+ * NOTE: ns is NULL when called on the admin queue.
+ */
+static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+			 const struct blk_mq_queue_data *bd)
+{
+	struct nvme_queue *nvmeq = hctx->driver_data;
+	struct nvme_dev *dev = nvmeq->dev;
+	struct request *req = bd->rq;
+	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+	blk_status_t ret;
+
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return BLK_STS_IOERR;
+
+	if (unlikely(!nvme_check_ready(&dev->ctrl, req, true)))
+		return nvme_fail_nonready_command(&dev->ctrl, req);
+
+	ret = nvme_prep_rq(dev, req);
+	if (unlikely(ret))
+		return ret;
+	nvme_submit_cmd(nvmeq, &iod->cmd, bd->last);
+	return BLK_STS_OK;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
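
A small caller-side sketch (not part of the patch): because nvme_prep_rq()
unwinds its own failures, unmapping data and freeing the command before
returning an error, a caller only has to decide whether to issue; it never
cleans up a half-prepared request. foo_try_issue() is a hypothetical name.

static blk_status_t foo_try_issue(struct nvme_queue *nvmeq,
				  struct request *req, bool last)
{
	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
	blk_status_t ret;

	ret = nvme_prep_rq(nvmeq->dev, req);	/* setup cmd, map data/metadata */
	if (ret)
		return ret;			/* nothing to undo on failure */
	nvme_submit_cmd(nvmeq, &iod->cmd, last);	/* copy to SQ, ring doorbell */
	return BLK_STS_OK;
}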

From patchwork Thu Dec 16 16:05:37 2021
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 12681557
From: Jens Axboe
To: io-uring@vger.kernel.org, linux-block@vger.kernel.org,
    linux-nvme@lists.infradead.org
Cc: Jens Axboe, Hannes Reinecke, Keith Busch
Subject: [PATCH 4/4] nvme: add support for mq_ops->queue_rqs()
Date: Thu, 16 Dec 2021 09:05:37 -0700
Message-Id: <20211216160537.73236-5-axboe@kernel.dk>
In-Reply-To: <20211216160537.73236-1-axboe@kernel.dk>
References: <20211216160537.73236-1-axboe@kernel.dk>

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, the affected requests are left on the
passed-in list; the block layer will then handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.

Reviewed-by: Hannes Reinecke
Reviewed-by: Keith Busch
Signed-off-by: Jens Axboe
---
 drivers/nvme/host/pci.c | 58 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6be6b1ab4285..e34ad67c4c41 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -981,6 +981,63 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 	return BLK_STS_OK;
 }
 
+static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+{
+	spin_lock(&nvmeq->sq_lock);
+	while (!rq_list_empty(*rqlist)) {
+		struct request *req = rq_list_pop(rqlist);
+		struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+		nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+	}
+	nvme_write_sq_db(nvmeq, true);
+	spin_unlock(&nvmeq->sq_lock);
+}
+
+static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+{
+	/*
+	 * We should not need to do this, but we're still using this to
+	 * ensure we can drain requests on a dying queue.
+	 */
+	if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags)))
+		return false;
+	if (unlikely(!nvme_check_ready(&nvmeq->dev->ctrl, req, true)))
+		return false;
+
+	req->mq_hctx->tags->rqs[req->tag] = req;
+	return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+}
+
+static void nvme_queue_rqs(struct request **rqlist)
+{
+	struct request *req = rq_list_peek(rqlist), *prev = NULL;
+	struct request *requeue_list = NULL;
+
+	do {
+		struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+
+		if (!nvme_prep_rq_batch(nvmeq, req)) {
+			/* detach 'req' and add to remainder list */
+			if (prev)
+				prev->rq_next = req->rq_next;
+			rq_list_add(&requeue_list, req);
+		} else {
+			prev = req;
+		}
+
+		req = rq_list_next(req);
+		if (!req || (prev && req->mq_hctx != prev->mq_hctx)) {
+			/* detach rest of list, and submit */
+			prev->rq_next = NULL;
+			nvme_submit_cmds(nvmeq, rqlist);
+			*rqlist = req;
+		}
+	} while (req);
+
+	*rqlist = requeue_list;
+}
+
 static __always_inline void nvme_pci_unmap_rq(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
@@ -1678,6 +1735,7 @@ static const struct blk_mq_ops nvme_mq_admin_ops = {
 
 static const struct blk_mq_ops nvme_mq_ops = {
 	.queue_rq	= nvme_queue_rq,
+	.queue_rqs	= nvme_queue_rqs,
 	.complete	= nvme_pci_complete_rq,
 	.commit_rqs	= nvme_commit_rqs,
 	.init_hctx	= nvme_init_hctx,
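
A userspace usage sketch (not from the patch series): the batched path can
only trigger when several requests reach the block layer in one plugged
submission, for example multiple O_DIRECT reads issued with a single
io_uring_enter() call. This assumes liburing is installed (link with
-luring) and uses /dev/nvme0n1 as a stand-in device; whether the requests
actually end up in a single plug list depends on the kernel and the
submission path.

#define _GNU_SOURCE
#include <fcntl.h>
#include <liburing.h>
#include <stdlib.h>

#define BATCH	32
#define BS	4096

int main(void)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	void *buf;
	int fd, i;

	/* O_DIRECT reads against the raw device go straight to blk-mq */
	fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
	if (fd < 0 || io_uring_queue_init(BATCH, &ring, 0) < 0)
		return 1;
	if (posix_memalign(&buf, BS, BS))
		return 1;

	/* queue up BATCH reads, then submit them with one system call so
	 * the block layer can plug them and flush them as one list */
	for (i = 0; i < BATCH; i++) {
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

		io_uring_prep_read(sqe, fd, buf, BS, (unsigned long long)i * BS);
	}
	io_uring_submit(&ring);

	for (i = 0; i < BATCH; i++) {
		if (io_uring_wait_cqe(&ring, &cqe))
			return 1;
		io_uring_cqe_seen(&ring, cqe);
	}
	io_uring_queue_exit(&ring);
	return 0;
}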