From patchwork Thu Oct 22 01:02:29 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 1/6] nvme-core: add a helper to init req from nvme cmd
Date: Wed, 21 Oct 2020 18:02:29 -0700
Message-Id: <20201022010234.8304-2-chaitanya.kulkarni@wdc.com>
This is a preparation patch that adds a helper used by the next patch,
which splits nvme_alloc_request() into nvme_alloc_request_qid_any() and
nvme_alloc_request_qid(). The new functions share the code that
initializes the allocated request from the NVMe command.

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/host/core.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 56e2a22e8a02..5bc52594fe63 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -514,6 +514,14 @@ static inline void nvme_clear_nvme_request(struct request *req)
 	}
 }
 
+static inline void nvme_init_req_from_cmd(struct request *req,
+		struct nvme_command *cmd)
+{
+	req->cmd_flags |= REQ_FAILFAST_DRIVER;
+	nvme_clear_nvme_request(req);
+	nvme_req(req)->cmd = cmd;
+}
+
 struct request *nvme_alloc_request(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
@@ -529,9 +537,7 @@ struct request *nvme_alloc_request(struct request_queue *q,
 	if (IS_ERR(req))
 		return req;
 
-	req->cmd_flags |= REQ_FAILFAST_DRIVER;
-	nvme_clear_nvme_request(req);
-	nvme_req(req)->cmd = cmd;
+	nvme_init_req_from_cmd(req, cmd);
 
 	return req;
 }

From patchwork Thu Oct 22 01:02:30 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 2/6] nvme-core: split nvme_alloc_request()
Date: Wed, 21 Oct 2020 18:02:30 -0700
Message-Id: <20201022010234.8304-3-chaitanya.kulkarni@wdc.com>

Right now nvme_alloc_request() allocates a request from the block layer
based on the value of the qid: when qid is set to NVME_QID_ANY it uses
blk_mq_alloc_request(), otherwise blk_mq_alloc_request_hctx().
The function nvme_alloc_request() is called from several contexts; the
only place where it uses a non-NVME_QID_ANY value is for the fabrics
connect commands:

	nvme_submit_sync_cmd()		NVME_QID_ANY
	nvme_features()			NVME_QID_ANY
	nvme_sec_submit()		NVME_QID_ANY
	nvmf_reg_read32()		NVME_QID_ANY
	nvmf_reg_read64()		NVME_QID_ANY
	nvmf_reg_write32()		NVME_QID_ANY
	nvmf_connect_admin_queue()	NVME_QID_ANY
	nvme_submit_user_cmd()		NVME_QID_ANY	nvme_alloc_request()
	nvme_keep_alive()		NVME_QID_ANY	nvme_alloc_request()
	nvme_timeout()			NVME_QID_ANY	nvme_alloc_request()
	nvme_delete_queue()		NVME_QID_ANY	nvme_alloc_request()
	nvmet_passthru_execute_cmd()	NVME_QID_ANY	nvme_alloc_request()
	nvmf_connect_io_queue()		QID		__nvme_submit_sync_cmd()
							nvme_alloc_request()

With passthru, nvme_alloc_request() now falls into the I/O fast path,
where blk_mq_alloc_request_hctx() is never called, so the qid check
only adds an extra branch and extra code to the fast path.

Split nvme_alloc_request() into nvme_alloc_request_qid_any() and
nvme_alloc_request_qid(). Replace each call of nvme_alloc_request()
with the NVME_QID_ANY parameter with a call to the newly added
nvme_alloc_request_qid_any(). Replace the call to nvme_alloc_request()
with a QID parameter with calls to nvme_alloc_request_qid_any() or
nvme_alloc_request_qid(), based on the qid value set in
__nvme_submit_sync_cmd().
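The shape of this refactoring can be sketched in plain userspace C. The allocator names and counters below are stand-ins for the kernel APIs (blk_mq_alloc_request() and blk_mq_alloc_request_hctx()), not the real ones; the point is only that the qid branch moves out of the allocator and into the single caller that handles both cases.

```c
#include <stddef.h>

#define QID_ANY -1

/* Counters record which allocation path ran; stand-ins for the
 * block layer allocators, for illustration only. */
static int any_calls, hctx_calls;

static void *alloc_any(void)
{
	any_calls++;
	return &any_calls;
}

static void *alloc_hctx(int hctx_idx)
{
	(void)hctx_idx;
	hctx_calls++;
	return &hctx_calls;
}

/* After the split: two specialized helpers, no qid branch inside. */
static void *alloc_request_qid_any(void)
{
	return alloc_any();
}

static void *alloc_request_qid(int qid)
{
	/* qid 0 maps to hctx index 0; otherwise hctx index is qid - 1. */
	return alloc_hctx(qid ? qid - 1 : 0);
}

/* The one caller that handles both cases keeps the branch, mirroring
 * what the patch does in __nvme_submit_sync_cmd(). */
static void *submit_sync_cmd(int qid)
{
	if (qid == QID_ANY)
		return alloc_request_qid_any();
	return alloc_request_qid(qid);
}
```

Every other caller passes NVME_QID_ANY, so after the split they call the specialized helper directly and the fast path carries no branch at all.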
Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/host/core.c       | 44 +++++++++++++++++++++++-----------
 drivers/nvme/host/lightnvm.c   |  5 ++--
 drivers/nvme/host/nvme.h       |  4 ++--
 drivers/nvme/host/pci.c        |  6 ++---
 drivers/nvme/target/passthru.c |  2 +-
 5 files changed, 38 insertions(+), 23 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5bc52594fe63..87e56ef48f5d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -522,26 +522,38 @@ static inline void nvme_init_req_from_cmd(struct request *req,
 	nvme_req(req)->cmd = cmd;
 }
 
-struct request *nvme_alloc_request(struct request_queue *q,
+static inline unsigned int nvme_req_op(struct nvme_command *cmd)
+{
+	return nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
+}
+
+struct request *nvme_alloc_request_qid_any(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags)
+{
+	struct request *req;
+
+	req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
+	if (unlikely(IS_ERR(req)))
+		return req;
+
+	nvme_init_req_from_cmd(req, cmd);
+	return req;
+}
+EXPORT_SYMBOL_GPL(nvme_alloc_request_qid_any);
+
+static struct request *nvme_alloc_request_qid(struct request_queue *q,
 		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid)
 {
-	unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN;
 	struct request *req;
 
-	if (qid == NVME_QID_ANY) {
-		req = blk_mq_alloc_request(q, op, flags);
-	} else {
-		req = blk_mq_alloc_request_hctx(q, op, flags,
-				qid ? qid - 1 : 0);
-	}
+	req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
+			qid ? qid - 1 : 0);
 	if (IS_ERR(req))
 		return req;
 
 	nvme_init_req_from_cmd(req, cmd);
-
 	return req;
 }
-EXPORT_SYMBOL_GPL(nvme_alloc_request);
 
 static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable)
 {
@@ -899,7 +911,11 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 	struct request *req;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, flags, qid);
+	if (qid == NVME_QID_ANY)
+		req = nvme_alloc_request_qid_any(q, cmd, flags);
+	else
+		req = nvme_alloc_request_qid(q, cmd, flags, qid);
+
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1069,7 +1085,7 @@ static int nvme_submit_user_cmd(struct request_queue *q,
 	void *meta = NULL;
 	int ret;
 
-	req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY);
+	req = nvme_alloc_request_qid_any(q, cmd, 0);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
@@ -1143,8 +1159,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl)
 {
 	struct request *rq;
 
-	rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, BLK_MQ_REQ_RESERVED,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(ctrl->admin_q, &ctrl->ka_cmd,
+			BLK_MQ_REQ_RESERVED);
 	if (IS_ERR(rq))
 		return PTR_ERR(rq);
 
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index 8e562d0f2c30..b1ee1a0310f6 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q,
 
 	nvme_nvm_rqtocmd(rqd, ns, cmd);
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, (struct nvme_command *)cmd, 0);
 	if (IS_ERR(rq))
 		return rq;
 
@@ -767,8 +767,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q,
 	DECLARE_COMPLETION_ONSTACK(wait);
 	int ret = 0;
 
-	rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0,
-			NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, (struct nvme_command *)vcmd, 0);
 	if (IS_ERR(rq)) {
 		ret = -ENOMEM;
 		goto err_cmd;
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index cc111136a981..f39a0a387a51 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -608,8 +608,8 @@ int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);
 void nvme_start_freeze(struct nvme_ctrl *ctrl);
 
 #define NVME_QID_ANY -1
-struct request *nvme_alloc_request(struct request_queue *q,
-		struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid);
+struct request *nvme_alloc_request_qid_any(struct request_queue *q,
+		struct nvme_command *cmd, blk_mq_req_flags_t flags);
 void nvme_cleanup_cmd(struct request *req);
 blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req,
 		struct nvme_command *cmd);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index df8f3612107f..94f329b5f980 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1289,8 +1289,8 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved)
 		"I/O %d QID %d timeout, aborting\n",
 		 req->tag, nvmeq->qid);
 
-	abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd,
-			BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	abort_req = nvme_alloc_request_qid_any(dev->ctrl.admin_q, &cmd,
+			BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(abort_req)) {
 		atomic_inc(&dev->ctrl.abort_limit);
 		return BLK_EH_RESET_TIMER;
@@ -2204,7 +2204,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
 	cmd.delete_queue.opcode = opcode;
 	cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
 
-	req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	req = nvme_alloc_request_qid_any(q, &cmd, BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(req))
 		return PTR_ERR(req);
 
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 56c571052216..76affbc3bd9a 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -236,7 +236,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
 		q = ns->queue;
 	}
 
-	rq = nvme_alloc_request(q, req->cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	rq = nvme_alloc_request_qid_any(q, req->cmd, BLK_MQ_REQ_NOWAIT);
 	if (IS_ERR(rq)) {
 		status = NVME_SC_INTERNAL;
 		goto out_put_ns;
 	}

From patchwork Thu Oct 22 01:02:31 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 3/6] nvmet: remove op_flags for passthru commands
Date: Wed, 21 Oct 2020 18:02:31 -0700
Message-Id: <20201022010234.8304-4-chaitanya.kulkarni@wdc.com>

For passthru commands setting op_flags has no meaning. Remove the code
that sets the op flags in nvmet_passthru_map_sg().

Signed-off-by: Chaitanya Kulkarni
---
 drivers/nvme/target/passthru.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 76affbc3bd9a..e5b5657bcce2 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -182,18 +182,12 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
-	int op_flags = 0;
 	struct bio *bio;
 	int i, ret;
 
-	if (req->cmd->common.opcode == nvme_cmd_flush)
-		op_flags = REQ_FUA;
-	else if (nvme_is_write(req->cmd))
-		op_flags = REQ_SYNC | REQ_IDLE;
-
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 	bio->bi_end_io = bio_put;
-	bio->bi_opf = req_op(rq) | op_flags;
+	bio->bi_opf = req_op(rq);
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,

From patchwork Thu Oct 22 01:02:32 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 4/6] block: move blk_rq_bio_prep() to linux/blk-mq.h
Date: Wed, 21 Oct 2020 18:02:32 -0700
Message-Id: <20201022010234.8304-5-chaitanya.kulkarni@wdc.com>

This is a preparation patch that makes minimal block layer request-bio
append functionality available to the NVMeOF passthru driver, which
sits in the fast path and does not need the full blk_rq_append_bio().
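What the moved helper does can be illustrated with a small, compilable sketch. The struct definitions below are simplified stand-ins (field names loosely mirror the kernel's, but this is not the real blk-mq API): the helper seeds an empty request from a bio, trusting the caller's segment count instead of recounting.

```c
#include <stddef.h>

/* Simplified stand-ins for struct bio and struct request; the field
 * names mirror the kernel's but this is only an illustration. */
struct bio {
	unsigned int bi_size;	/* bytes carried by the bio */
	int bi_prio;		/* I/O priority hint */
};

struct request {
	unsigned int nr_phys_segments;
	unsigned int data_len;
	struct bio *bio, *biotail;
	int ioprio;
};

/* Analogue of blk_rq_bio_prep(): seed an empty request from a bio,
 * taking the segment count from the caller rather than walking the
 * bio's vectors again. */
static inline void rq_bio_prep(struct request *rq, struct bio *bio,
			       unsigned int nr_segs)
{
	rq->nr_phys_segments = nr_segs;
	rq->data_len = bio->bi_size;
	rq->bio = rq->biotail = bio;
	rq->ioprio = bio->bi_prio;
}
```

Because it is a static inline in a shared header, a caller that already knows the segment count pays no function-call or recount cost.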
Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 block/blk.h            | 12 ------------
 include/linux/blk-mq.h | 12 ++++++++++++
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/block/blk.h b/block/blk.h
index dfab98465db9..e05507a8d1e3 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -91,18 +91,6 @@ static inline bool bvec_gap_to_prev(struct request_queue *q,
 	return __bvec_gap_to_prev(q, bprv, offset);
 }
 
-static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
-		unsigned int nr_segs)
-{
-	rq->nr_phys_segments = nr_segs;
-	rq->__data_len = bio->bi_iter.bi_size;
-	rq->bio = rq->biotail = bio;
-	rq->ioprio = bio_prio(bio);
-
-	if (bio->bi_disk)
-		rq->rq_disk = bio->bi_disk;
-}
-
 #ifdef CONFIG_BLK_DEV_INTEGRITY
 void blk_flush_integrity(void);
 bool __bio_integrity_endio(struct bio *);
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b23eeca4d677..d1d277073761 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -591,6 +591,18 @@ static inline void blk_mq_cleanup_rq(struct request *rq)
 		rq->q->mq_ops->cleanup_rq(rq);
 }
 
+static inline void blk_rq_bio_prep(struct request *rq, struct bio *bio,
+		unsigned int nr_segs)
+{
+	rq->nr_phys_segments = nr_segs;
+	rq->__data_len = bio->bi_iter.bi_size;
+	rq->bio = rq->biotail = bio;
+	rq->ioprio = bio_prio(bio);
+
+	if (bio->bi_disk)
+		rq->rq_disk = bio->bi_disk;
+}
+
 blk_qc_t blk_mq_submit_bio(struct bio *bio);
 
 #endif

From patchwork Thu Oct 22 01:02:33 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 5/6] nvmet: use minimized version of blk_rq_append_bio
Date: Wed, 21 Oct 2020 18:02:33 -0700
Message-Id: <20201022010234.8304-6-chaitanya.kulkarni@wdc.com>

The function blk_rq_append_bio() is a generic API written for all types
of drivers (including those with bounce buffers) and for different
contexts (where the request may already carry a bio, i.e. rq->bio !=
NULL). It mainly does three things: calculate the segments, handle the
bounce queue, and either call blk_rq_bio_prep() when rq->bio == NULL or
handle the low-level merge case.
The NVMe PCIe and fabrics transports currently do not use the bounce buffer mechanism, yet blk_rq_append_bio() does this extra work in the fast path to check for it on every passthru request. When I ran I/Os with different block sizes on the passthru controller, I found that we can reuse req->sg_cnt instead of iterating over the bvecs to find nr_segs in blk_rq_append_bio(). That calculation in blk_rq_append_bio() duplicates work, given that we already have the value in req->sg_cnt (correct me here if I'm wrong).

With the NVMe passthru request-based driver we allocate a fresh request each time, so on every call to blk_rq_append_bio() rq->bio is NULL, i.e. we don't really need the second condition in blk_rq_append_bio() or the resulting error handling in its caller.

So for the NVMeOF passthru driver, recalculating the segments, the bounce check, and the ll_back_merge code are not needed, and we can get away with a minimal version of blk_rq_append_bio() that removes the error check from the fast path along with the extra variable in nvmet_passthru_map_sg(). This patch updates nvmet_passthru_map_sg() so that, in the context of the NVMeOF passthru driver, it only appends the bio to the request.
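The redundancy described above can be sketched outside the kernel. This is a minimal illustration, not kernel code: `struct seg`, `count_segments()`, and `fake_req` are hypothetical names standing in for a scatterlist entry, the per-request segment walk that blk_rq_append_bio() performs, and a request that already cached its count at map time (like req->sg_cnt).

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for one scatterlist entry. */
struct seg {
	size_t length;
};

/* Illustrative stand-in for a request whose mapping layer
 * already recorded the segment count (cf. req->sg_cnt). */
struct fake_req {
	const struct seg *segs;
	size_t sg_cnt;		/* cached when the request was mapped */
};

/*
 * The walk a generic append path performs on every request:
 * iterate over all segments just to recount them.
 */
size_t count_segments(const struct seg *segs, size_t nsegs)
{
	size_t n = 0;

	for (size_t i = 0; i < nsegs; i++)
		if (segs[i].length > 0)
			n++;
	return n;
}
```

When every segment is non-empty, the recount always equals the cached `sg_cnt`, which is why the patch can pass `req->sg_cnt` straight to blk_rq_bio_prep() and skip the walk in the fast path.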
Following are perf numbers :-

With current implementation (blk_rq_append_bio()) :-
----------------------------------------------------
+    5.80%    0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%    0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.88%    0.00%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.44%    0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.86%    0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.17%    0.00%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd

With this patch using blk_rq_bio_prep() :-
----------------------------------------------------
+    3.14%    0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    3.26%    0.01%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.37%    0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    5.18%    0.02%  kworker/0:2-eve  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.84%    0.02%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd
+    4.87%    0.01%  kworker/0:2-mm_  [nvmet]  [k] nvmet_passthru_execute_cmd

Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/target/passthru.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index e5b5657bcce2..496ffedb77dc 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -183,7 +183,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	int sg_cnt = req->sg_cnt;
 	struct scatterlist *sg;
 	struct bio *bio;
-	int i, ret;
+	int i;
 
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
 	bio->bi_end_io = bio_put;
@@ -198,11 +198,7 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 		sg_cnt--;
 	}
 
-	ret = blk_rq_append_bio(rq, &bio);
-	if (unlikely(ret)) {
-		bio_put(bio);
-		return ret;
-	}
+	blk_rq_bio_prep(rq, bio, req->sg_cnt);
 
 	return 0;
 }

From patchwork Thu Oct 22 01:02:34 2020
From: Chaitanya Kulkarni
To: linux-block@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, logang@deltatee.com
Subject: [PATCH V3 6/6] nvmet: use inline bio for passthru fast path
Date: Wed, 21 Oct 2020 18:02:34 -0700
Message-Id: <20201022010234.8304-7-chaitanya.kulkarni@wdc.com>
In-Reply-To: <20201022010234.8304-1-chaitanya.kulkarni@wdc.com>
References: <20201022010234.8304-1-chaitanya.kulkarni@wdc.com>
nvmet_passthru_execute_cmd() is a high-frequency function, and it uses bio_alloc(), which leads to a memory allocation from the fs bio pool for each I/O. For NVMeOF, the nvmet_req already has an inline_bvec allocated as part of request allocation, which can be used with a preallocated bio, since we already know the size of the request before the bio is allocated.

Introduce a bio member in the nvmet_req passthru anonymous union. In the fast path, check whether we can get away with the inline bvec and bio from nvmet_req, using a bio_init() call instead of allocating with bio_alloc(). This avoids any new memory allocation under high memory pressure and saves the extra work of allocation (bio_alloc()) versus initialization (bio_init()) when the transfer length is <= NVMET_MAX_INLINE_DATA_LEN, which the user can configure at compile time.
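The inline-versus-allocated pattern this patch applies can be sketched in plain userspace C. The names below (`small_req`, `get_buf`, `put_buf`, `INLINE_DATA_LEN`) are illustrative, not kernel APIs: a request embeds a small buffer, the fast path reuses it when the transfer fits, and the completion path frees only buffers that were actually heap-allocated, mirroring what nvmet_passthru_bio_done() does for the inline bio.

```c
#include <assert.h>
#include <stdlib.h>

#define INLINE_DATA_LEN 64	/* stands in for NVMET_MAX_INLINE_DATA_LEN */

/* Request with a preallocated inline buffer, like nvmet_req's inline_bvec. */
struct small_req {
	char inline_buf[INLINE_DATA_LEN];
	char *buf;	/* points at inline_buf or at a heap allocation */
};

/* Fast path: reuse the embedded buffer when the transfer fits,
 * and only fall back to the allocator for large transfers. */
char *get_buf(struct small_req *req, size_t len)
{
	if (len <= INLINE_DATA_LEN)
		req->buf = req->inline_buf;
	else
		req->buf = malloc(len);
	return req->buf;
}

/* Completion path: free only what was heap-allocated,
 * mirroring the bio != &req->p.inline_bio check. */
void put_buf(struct small_req *req)
{
	if (req->buf != req->inline_buf)
		free(req->buf);
	req->buf = NULL;
}
```

The design point is the same as in the patch: the common small-transfer case costs only an initialization of preallocated storage, while the allocator (and its behavior under memory pressure) is reached only for transfers that genuinely need it.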
Signed-off-by: Chaitanya Kulkarni
Reviewed-by: Logan Gunthorpe
---
 drivers/nvme/target/nvmet.h    |  1 +
 drivers/nvme/target/passthru.c | 20 ++++++++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 559a15ccc322..408a13084fb4 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -330,6 +330,7 @@ struct nvmet_req {
 			struct work_struct work;
 		} f;
 		struct {
+			struct bio inline_bio;
 			struct request *rq;
 			struct work_struct work;
 			bool use_workqueue;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 496ffedb77dc..32498b4302cc 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -178,6 +178,14 @@ static void nvmet_passthru_req_done(struct request *rq,
 	blk_mq_free_request(rq);
 }
 
+static void nvmet_passthru_bio_done(struct bio *bio)
+{
+	struct nvmet_req *req = bio->bi_private;
+
+	if (bio != &req->p.inline_bio)
+		bio_put(bio);
+}
+
 static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 {
 	int sg_cnt = req->sg_cnt;
@@ -186,13 +194,21 @@ static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
 	int i;
 
 	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
-	bio->bi_end_io = bio_put;
+	if (req->transfer_len <= NVMET_MAX_INLINE_DATA_LEN) {
+		bio = &req->p.inline_bio;
+		bio_init(bio, req->inline_bvec, ARRAY_SIZE(req->inline_bvec));
+	} else {
+		bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
+	}
+
+	bio->bi_end_io = nvmet_passthru_bio_done;
 	bio->bi_opf = req_op(rq);
+	bio->bi_private = req;
 
 	for_each_sg(req->sg, sg, req->sg_cnt, i) {
 		if (bio_add_pc_page(rq->q, bio, sg_page(sg), sg->length,
 				    sg->offset) < sg->length) {
-			bio_put(bio);
+			nvmet_passthru_bio_done(bio);
 			return -EINVAL;
 		}
 		sg_cnt--;