From patchwork Wed Oct 9 19:25:18 2019
X-Patchwork-Id: 11182003
From: Logan Gunthorpe
To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, Jens Axboe, Chaitanya Kulkarni, Max Gurtovoy, Stephen Bates, Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:18 -0600
Subject: [PATCH v9 01/12] nvme-core: introduce nvme_ctrl_get_by_path()
Message-Id: <20191009192530.13079-2-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>

nvme_ctrl_get_by_path() is analogous to blkdev_get_by_path() except it
gets a struct nvme_ctrl from the path to its char dev (/dev/nvme0). It
makes use of filp_open() to open the file and uses the private data to
obtain a pointer to the struct nvme_ctrl. If the fops of the file do
not match, -EINVAL is returned.

The purpose of this function is to support NVMe-OF target passthru.
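For illustration, a minimal sketch of how a consumer pairs the new
helper with nvme_put_ctrl() to drop the reference it takes (the target
passthru code later in this series follows this pattern; the wrapper
function here is hypothetical):

	/* Hypothetical caller: resolve a controller from a char-dev path. */
	static int example_open_ctrl(const char *path)
	{
		struct nvme_ctrl *ctrl;

		ctrl = nvme_ctrl_get_by_path(path);	/* e.g. "/dev/nvme0" */
		if (IS_ERR(ctrl))
			return PTR_ERR(ctrl);

		/* ... use ctrl ... */

		nvme_put_ctrl(ctrl);	/* drop the ref taken by the helper */
		return 0;
	}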
Signed-off-by: Logan Gunthorpe
Reviewed-by: Max Gurtovoy
Reviewed-by: Sagi Grimberg
Reviewed-by: Keith Busch
---
 drivers/nvme/host/core.c | 24 ++++++++++++++++++++++++
 drivers/nvme/host/nvme.h |  2 ++
 2 files changed, 26 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fd7dea36c3b6..30f9d91e25e3 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2950,6 +2950,30 @@ static const struct file_operations nvme_dev_fops = {
 	.compat_ioctl	= nvme_dev_ioctl,
 };
 
+struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path)
+{
+	struct nvme_ctrl *ctrl;
+	struct file *f;
+
+	f = filp_open(path, O_RDWR, 0);
+	if (IS_ERR(f))
+		return ERR_CAST(f);
+
+	if (f->f_op != &nvme_dev_fops) {
+		ctrl = ERR_PTR(-EINVAL);
+		goto out_close;
+	}
+
+	ctrl = f->private_data;
+	nvme_get_ctrl(ctrl);
+
+out_close:
+	filp_close(f, NULL);
+
+	return ctrl;
+}
+EXPORT_SYMBOL_GPL(nvme_ctrl_get_by_path);
+
 static ssize_t nvme_sysfs_reset(struct device *dev,
 				struct device_attribute *attr, const char *buf,
 				size_t count)

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 38a83ef5bcd3..f83f990e409d 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -507,6 +507,8 @@ int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp,
 extern const struct attribute_group *nvme_ns_id_attr_groups[];
 extern const struct block_device_operations nvme_ns_head_ops;
 
+struct nvme_ctrl *nvme_ctrl_get_by_path(const char *path);
+
 #ifdef CONFIG_NVME_MULTIPATH
 static inline bool nvme_ctrl_use_ana(struct nvme_ctrl *ctrl)
 {

From patchwork Wed Oct 9 19:25:19 2019
X-Patchwork-Id: 11181979
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:19 -0600
Subject: [PATCH v9 02/12] nvme-core: export existing ctrl and ns interfaces
Message-Id: <20191009192530.13079-3-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>

From: Chaitanya Kulkarni

We export the existing nvme ctrl and ns management APIs so that the
target passthru code can manage the nvme ctrl. This is a preparation
patch for implementing the NVMe over Fabrics target passthru feature.

Signed-off-by: Chaitanya Kulkarni
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
Reviewed-by: Keith Busch
---
 drivers/nvme/host/core.c | 19 ++++++++++++-------
 drivers/nvme/host/nvme.h |  7 +++++++
 2 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 30f9d91e25e3..c6303493a5b7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -422,10 +422,11 @@ static void nvme_free_ns(struct kref *kref)
 	kfree(ns);
 }
 
-static void nvme_put_ns(struct nvme_ns *ns)
+void nvme_put_ns(struct nvme_ns *ns)
 {
 	kref_put(&ns->kref, nvme_free_ns);
 }
+EXPORT_SYMBOL_GPL(nvme_put_ns);
 
 static inline void nvme_clear_nvme_request(struct request *req)
 {
@@ -1089,7 +1090,7 @@ static int nvme_identify_ns_list(struct nvme_ctrl *dev, unsigned nsid, __le32 *n
 			    NVME_IDENTIFY_DATA_SIZE);
 }
 
-static int nvme_identify_ns(struct nvme_ctrl *ctrl,
+int nvme_identify_ns(struct nvme_ctrl *ctrl,
 		unsigned nsid, struct nvme_id_ns **id)
 {
 	struct nvme_command c = { };
@@ -1112,6 +1113,7 @@ static int nvme_identify_ns(struct nvme_ctrl *ctrl,
 
 	return error;
 }
+EXPORT_SYMBOL_GPL(nvme_identify_ns);
 
 static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
 		unsigned int dword11, void *buffer, size_t buflen, u32 *result)
@@ -1263,8 +1265,8 @@ static u32 nvme_known_admin_effects(u8 opcode)
 	return 0;
 }
 
-static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
-								u8 opcode)
+u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+			u8 opcode)
 {
 	u32 effects = 0;
 
@@ -1296,6 +1298,7 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 	}
 	return effects;
 }
+EXPORT_SYMBOL_GPL(nvme_passthru_start);
 
 static void nvme_update_formats(struct nvme_ctrl *ctrl)
 {
@@ -1310,7 +1313,7 @@ static void nvme_update_formats(struct nvme_ctrl *ctrl)
 	nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
 }
 
-static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
+void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
 {
 	/*
 	 * Revalidate LBA changes prior to unfreezing. This is necessary to
@@ -1330,6 +1333,7 @@ static void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects)
 	if (effects & (NVME_CMD_EFFECTS_NIC | NVME_CMD_EFFECTS_NCC))
 		nvme_queue_scan(ctrl);
 }
+EXPORT_SYMBOL_GPL(nvme_passthru_end);
 
 static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
 			struct nvme_passthru_cmd __user *ucmd)
@@ -2844,7 +2848,7 @@ int nvme_init_identify(struct nvme_ctrl *ctrl)
 	ret = nvme_configure_apst(ctrl);
 	if (ret < 0)
 		return ret;
-
+
 	ret = nvme_configure_timestamp(ctrl);
 	if (ret < 0)
 		return ret;
@@ -3411,7 +3415,7 @@ static int ns_cmp(void *priv, struct list_head *a, struct list_head *b)
 	return nsa->head->ns_id - nsb->head->ns_id;
 }
 
-static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 {
 	struct nvme_ns *ns, *ret = NULL;
 
@@ -3429,6 +3433,7 @@ static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	up_read(&ctrl->namespaces_rwsem);
 	return ret;
 }
+EXPORT_SYMBOL_GPL(nvme_find_get_ns);
 
 static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
 {

diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index f83f990e409d..31f42610e9bb 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -459,6 +459,8 @@ void nvme_start_ctrl(struct nvme_ctrl *ctrl);
 void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
 void nvme_put_ctrl(struct nvme_ctrl *ctrl);
 int nvme_init_identify(struct nvme_ctrl *ctrl);
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned int nsid);
+void nvme_put_ns(struct nvme_ns *ns);
 
 void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
 
@@ -495,8 +497,13 @@ int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
 int nvme_get_features(struct nvme_ctrl *dev, unsigned int fid,
 		unsigned int dword11, void *buffer, size_t buflen,
 		u32 *result);
+u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+			u8 opcode);
+void nvme_passthru_end(struct nvme_ctrl *ctrl, u32 effects);
 int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count);
 void nvme_stop_keep_alive(struct nvme_ctrl *ctrl);
+int nvme_identify_ns(struct nvme_ctrl *ctrl,
+		unsigned nsid, struct nvme_id_ns **id);
 int nvme_reset_ctrl(struct nvme_ctrl *ctrl);
 int nvme_reset_ctrl_sync(struct nvme_ctrl *ctrl);
 int nvme_delete_ctrl(struct nvme_ctrl *ctrl);

From patchwork Wed Oct 9 19:25:20 2019
X-Patchwork-Id: 11182011
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:20 -0600
Subject: [PATCH v9 03/12] nvmet: add return value to nvmet_add_async_event()
Message-Id: <20191009192530.13079-4-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>

From: Chaitanya Kulkarni

Change the return value of nvmet_add_async_event(). This change is
needed for the target passthru code, which will submit async events on
namespace changes and can fail the command should adding the event
fail (on -ENOMEM).
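To make the new return value concrete, a minimal sketch of the calling
pattern this enables (it mirrors how the passthru patch later in this
series consumes it; the status variable is illustrative):

	/*
	 * Queue a namespace-change notice; on allocation failure the helper
	 * now returns -ENOMEM, so the caller can fail the command instead
	 * of silently dropping the event.
	 */
	if (nvmet_add_async_event(ctrl, NVME_AER_TYPE_NOTICE, 0, 0))
		status = NVME_SC_INTERNAL;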
Signed-off-by: Chaitanya Kulkarni
[logang@deltatee.com:
 * fleshed out commit message
 * change to using int as a return type instead of bool]
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
Reviewed-by: Keith Busch
---
 drivers/nvme/target/core.c  | 6 ++++--
 drivers/nvme/target/nvmet.h | 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 3a67e244e568..d6dcb86d8be7 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -173,14 +173,14 @@ static void nvmet_async_event_work(struct work_struct *work)
 	}
 }
 
-void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
+int nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 		u8 event_info, u8 log_page)
 {
 	struct nvmet_async_event *aen;
 
 	aen = kmalloc(sizeof(*aen), GFP_KERNEL);
 	if (!aen)
-		return;
+		return -ENOMEM;
 
 	aen->event_type = event_type;
 	aen->event_info = event_info;
@@ -191,6 +191,8 @@ void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 	mutex_unlock(&ctrl->lock);
 
 	schedule_work(&ctrl->async_event_work);
+
+	return 0;
 }
 
 static void nvmet_add_to_changed_ns_log(struct nvmet_ctrl *ctrl, __le32 nsid)

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index c51f8dd01dc4..3d313a6452cc 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -441,7 +441,7 @@ void nvmet_port_disc_changed(struct nvmet_port *port,
 		struct nvmet_subsys *subsys);
 void nvmet_subsys_disc_changed(struct nvmet_subsys *subsys,
 		struct nvmet_host *host);
-void nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
+int nvmet_add_async_event(struct nvmet_ctrl *ctrl, u8 event_type,
 		u8 event_info, u8 log_page);
 
 #define NVMET_QUEUE_SIZE	1024

From patchwork Wed Oct 9 19:25:21 2019
X-Patchwork-Id: 11181999
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:21 -0600
Message-Id: <20191009192530.13079-5-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>
Subject: [PATCH v9 04/12] nvmet: make nvmet_copy_ns_identifier() non-static

This function will be needed by the upcoming passthru code. Passthru
will need an emulated version of identify_desclist which copies the
eui64, uuid and nguid from the passed-thru controller into the request
SGL.

[chaitanya.kulkarni@wdc.com: this was factored out of a patch
 originally authored by Chaitanya]
Signed-off-by: Chaitanya Kulkarni
Signed-off-by: Logan Gunthorpe
Reviewed-by: Sagi Grimberg
Reviewed-by: Keith Busch
---
 drivers/nvme/target/admin-cmd.c | 4 ++--
 drivers/nvme/target/nvmet.h     | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
index 831a062d27cb..67b6642bb628 100644
--- a/drivers/nvme/target/admin-cmd.c
+++ b/drivers/nvme/target/admin-cmd.c
@@ -506,8 +506,8 @@ static void nvmet_execute_identify_nslist(struct nvmet_req *req)
 	nvmet_req_complete(req, status);
 }
 
-static u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
-				    void *id, off_t *off)
+u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
+			     void *id, off_t *off)
 {
 	struct nvme_ns_id_desc desc = {
 		.nidt = type,

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 3d313a6452cc..5dfd4e0ae096 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -489,6 +489,8 @@ u16 nvmet_bdev_flush(struct nvmet_req *req);
 u16 nvmet_file_flush(struct nvmet_req *req);
 void nvmet_ns_changed(struct nvmet_subsys *subsys, u32 nsid);
 
+u16 nvmet_copy_ns_identifier(struct nvmet_req *req, u8 type, u8 len,
+			     void *id, off_t *off);
 static inline u32 nvmet_rw_len(struct nvmet_req *req)
 {
 	return ((u32)le16_to_cpu(req->cmd->rw.length) + 1) <<

From patchwork Wed Oct 9 19:25:23 2019
X-Patchwork-Id: 11181975
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:23 -0600
Subject: [PATCH v9 05/12] nvmet-passthru: add passthru code to process commands
Message-Id: <20191009192530.13079-7-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>

From: Chaitanya Kulkarni

Add passthru command handling capability for the NVMeOF target and
export passthru APIs which are used to integrate the passthru code with
nvmet-core. A passthru ns member is added to the target request to hold
the ns reference for the respective commands.

The new file io-cmd-passthru.c handles passthru cmd parsing and
execution. In passthru mode, we create a block layer request from the
nvmet request and map the data onto the block layer request. For
handling the side effects of passthru admin commands we add two
functions similar to the nvme_passthru_[start|end]() functions present
in nvme-core. Only admin commands on a white list are let through,
which includes vendor unique commands.

We introduce a new passthru workqueue, similar to the one we have for
the file backend of the NVMeOF target, to execute the NVMe admin
passthru commands. All the new passthru code is enabled or disabled by
a new Kconfig option for the NVMeOF target.
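Condensed to its core, the submission flow described above dispatches
I/O and admin commands differently; a simplified sketch of the logic
added by this patch (illustration only, error handling omitted):

	/* Simplified sketch of the passthru dispatch added below. */
	static void passthru_dispatch_sketch(struct nvmet_req *req,
					     struct request *rq)
	{
		rq->end_io_data = req;

		if (req->sq->qid != 0) {
			/* I/O: execute asynchronously on the block layer */
			blk_execute_rq_nowait(rq->q, NULL, rq, 0,
					      nvmet_passthru_req_done);
		} else {
			/*
			 * Admin: run from the passthru workqueue so command
			 * side effects are handled in thread context rather
			 * than in the completion interrupt context.
			 */
			req->p.rq = rq;
			nvmet_passthru_submit_admin_cmd(req);
		}
	}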
Signed-off-by: Chaitanya Kulkarni
[logang@deltatee.com:
 * renamed passthru-cmd.c to io-cmd-passthru.c for consistency
 * squashed "update target makefile for passthru"
 * squashed "integrate passthru request processing"
 * squashed "update KConfig with config passthru option"
 * added appropriate CONFIG_NVME_TARGET_PASSTHRU #ifdefs
 * pushed passthru_wq into io-cmd-passthru.c and introduced
   nvmet_passthru_init() and nvmet_passthru_destroy() to avoid an
   inline #ifdef mess
 * renamed nvmet_passthru_ctrl() to nvmet_req_passthru_ctrl() and
   provided nvmet_passthru_ctrl() to get the ctrl from a subsys
 * fixed the failure path in nvmet_passthru_execute_cmd() to ensure we
   always complete the request (with an error when appropriate)
 * restructured nvmet_passthru_make_request() and
   nvmet_passthru_execute_cmd() to create nvmet_passthru_map_sg(),
   which makes the code simpler and more readable
 * moved the call to nvme_find_get_ns() into
   nvmet_passthru_execute_cmd() to prevent a lockdep error.
   nvme_find_get_ns() takes a lock and can sleep, but nvme_init_req()
   is called while hctx_lock() is held (in the loop transport) and
   therefore should not sleep.
 * added a check in nvmet_passthru_execute_cmd() to ensure we don't
   violate queue_max_segments or queue_max_hw_sectors
 * added nvmet_passthru_set_mdts() to prevent requests that exceed
   max_segments
 * converted the admin command blacklist to a white list with vendor
   unique commands specifically allowed
 * force setting the cmic bit to support multipath for connections to
   the target
 * dropped the le16_to_cpu() conversion in nvmet_passthru_req_done()
   as it's currently already done in nvme_end_request()
 * unabbreviated 'VUC' in a comment as it's not a commonly known
   acronym
 * removed unnecessary inline tags on static functions
 * minor edits to commit message]
Signed-off-by: Logan Gunthorpe
---
 drivers/nvme/target/Kconfig           |  10 +
 drivers/nvme/target/Makefile          |   1 +
 drivers/nvme/target/core.c            |  11 +-
 drivers/nvme/target/io-cmd-passthru.c | 567 ++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h           |  46 +++
 include/linux/nvme.h                  |   1 +
 6 files changed, 635 insertions(+), 1 deletion(-)
 create mode 100644 drivers/nvme/target/io-cmd-passthru.c

diff --git a/drivers/nvme/target/Kconfig b/drivers/nvme/target/Kconfig
index d7f48c0fb311..2478cb5a932d 100644
--- a/drivers/nvme/target/Kconfig
+++ b/drivers/nvme/target/Kconfig
@@ -15,6 +15,16 @@ config NVME_TARGET
 	  To configure the NVMe target you probably want to use the nvmetcli
 	  tool from http://git.infradead.org/users/hch/nvmetcli.git.
 
+config NVME_TARGET_PASSTHRU
+	bool "NVMe Target Passthrough support"
+	depends on NVME_CORE
+	depends on NVME_TARGET
+	help
+	  This enables target side NVMe passthru controller support for the
+	  NVMe Over Fabrics protocol. It allows for hosts to manage and
+	  directly access an actual NVMe controller residing on the target
+	  side, including executing Vendor Unique Commands.
+
 config NVME_TARGET_LOOP
 	tristate "NVMe loopback device support"
 	depends on NVME_TARGET

diff --git a/drivers/nvme/target/Makefile b/drivers/nvme/target/Makefile
index 2b33836f3d3e..bf57799fde63 100644
--- a/drivers/nvme/target/Makefile
+++ b/drivers/nvme/target/Makefile
@@ -11,6 +11,7 @@ obj-$(CONFIG_NVME_TARGET_TCP)		+= nvmet-tcp.o
 nvmet-y		+= core.o configfs.o admin-cmd.o fabrics-cmd.o \
 			discovery.o io-cmd-file.o io-cmd-bdev.o
+nvmet-$(CONFIG_NVME_TARGET_PASSTHRU)	+= io-cmd-passthru.o
 nvme-loop-y	+= loop.o
 nvmet-rdma-y	+= rdma.o
 nvmet-fc-y	+= fc.o

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index d6dcb86d8be7..256f765e772b 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -896,6 +896,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	if (unlikely(!req->sq->ctrl))
 		/* will return an error for any Non-connect command: */
 		status = nvmet_parse_connect_cmd(req);
+	else if (nvmet_req_passthru_ctrl(req))
+		status = nvmet_parse_passthru_cmd(req);
 	else if (likely(req->sq->qid != 0))
 		status = nvmet_parse_io_cmd(req);
 	else if (nvme_is_fabrics(req->cmd))
@@ -1463,11 +1465,15 @@ static int __init nvmet_init(void)
 
 	nvmet_ana_group_enabled[NVMET_DEFAULT_ANA_GRPID] = 1;
 
+	error = nvmet_passthru_init();
+	if (error)
+		goto out;
+
 	buffered_io_wq = alloc_workqueue("nvmet-buffered-io-wq",
 			WQ_MEM_RECLAIM, 0);
 	if (!buffered_io_wq) {
 		error = -ENOMEM;
-		goto out;
+		goto out_passthru_destroy;
 	}
 
 	error = nvmet_init_discovery();
@@ -1483,6 +1489,8 @@ static int __init nvmet_init(void)
 	nvmet_exit_discovery();
 out_free_work_queue:
 	destroy_workqueue(buffered_io_wq);
+out_passthru_destroy:
+	nvmet_passthru_destroy();
 out:
 	return error;
 }
@@ -1493,6 +1501,7 @@ static void __exit nvmet_exit(void)
 	nvmet_exit_discovery();
 	ida_destroy(&cntlid_ida);
 	destroy_workqueue(buffered_io_wq);
+	nvmet_passthru_destroy();
 
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_entry) != 1024);
 	BUILD_BUG_ON(sizeof(struct nvmf_disc_rsp_page_hdr) != 1024);

diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c
new file mode 100644
index 000000000000..1eb855b4071c
--- /dev/null
+++ b/drivers/nvme/target/io-cmd-passthru.c
@@ -0,0 +1,567 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * NVMe Over Fabrics Target Passthrough command implementation.
+ *
+ * Copyright (c) 2017-2018 Western Digital Corporation or its
+ * affiliates.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/module.h>
+
+#include "../host/nvme.h"
+#include "nvmet.h"
+
+static struct workqueue_struct *passthru_wq;
+
+int nvmet_passthru_init(void)
+{
+	passthru_wq = alloc_workqueue("nvmet-passthru-wq", WQ_MEM_RECLAIM, 0);
+	if (!passthru_wq)
+		return -ENOMEM;
+
+	return 0;
+}
+
+void nvmet_passthru_destroy(void)
+{
+	destroy_workqueue(passthru_wq);
+}
+
+static void nvmet_passthru_req_complete(struct nvmet_req *req,
+		struct request *rq, u16 status)
+{
+	nvmet_req_complete(req, status);
+
+	if (rq)
+		blk_put_request(rq);
+}
+
+static void nvmet_passthru_req_done(struct request *rq,
+		blk_status_t blk_status)
+{
+	struct nvmet_req *req = rq->end_io_data;
+	u16 status = nvme_req(rq)->status;
+
+	req->cqe->result.u32 = nvme_req(rq)->result.u32;
+
+	nvmet_passthru_req_complete(req, rq, status);
+}
+
+static u16 nvmet_passthru_override_format_nvm(struct nvmet_req *req)
+{
+	int lbaf = le32_to_cpu(req->cmd->format.cdw10) & 0x0000000F;
+	int nsid = le32_to_cpu(req->cmd->format.nsid);
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ns *id;
+	int ret;
+
+	ret = nvme_identify_ns(nvmet_req_passthru_ctrl(req), nsid, &id);
+	if (ret)
+		return NVME_SC_INTERNAL;
+	/*
+	 * XXX: Please update this code once NVMeOF target starts supporting
+	 * metadata. We don't support ns lba format with metadata over fabrics
+	 * right now, so report an error if format nvm cmd tries to format
+	 * a namespace with the LBA format which has metadata.
+	 */
+	if (id->lbaf[lbaf].ms)
+		status = NVME_SC_INVALID_NS;
+
+	kfree(id);
+	return status;
+}
+
+static void nvmet_passthru_set_mdts(struct nvmet_ctrl *ctrl,
+		struct nvme_id_ctrl *id)
+{
+	struct nvme_ctrl *pctrl = ctrl->subsys->passthru_ctrl;
+	u32 max_hw_sectors;
+	int page_shift;
+
+	/*
+	 * The passthru NVMe driver may have a limit on the number
+	 * of segments which depends on the host's memory fragmentation.
+	 * To solve this, ensure mdts is limited to the pages equal to
+	 * the number of segments.
+	 */
+	max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
+				      pctrl->max_hw_sectors);
+
+	page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
+
+	id->mdts = ilog2(max_hw_sectors) + 9 - page_shift;
+}
+
+static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req)
+{
+	struct nvmet_ctrl *ctrl = req->sq->ctrl;
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ctrl *id;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	status = nvmet_copy_from_sgl(req, 0, id, sizeof(*id));
+	if (status)
+		goto out_free;
+
+	id->cntlid = cpu_to_le16(ctrl->cntlid);
+	id->ver = cpu_to_le32(ctrl->subsys->ver);
+
+	nvmet_passthru_set_mdts(ctrl, id);
+
+	id->acl = 3;
+	/*
+	 * We export the aerl limit for the fabrics controller, update this
+	 * when passthru based aerl support is added.
+	 */
+	id->aerl = NVMET_ASYNC_EVENTS - 1;
+
+	/* emulate kas as most of the PCIe ctrls don't have support for kas */
+	id->kas = cpu_to_le16(NVMET_KAS);
+
+	/* don't support host memory buffer */
+	id->hmpre = 0;
+	id->hmmin = 0;
+
+	id->sqes = min_t(__u8, ((0x6 << 4) | 0x6), id->sqes);
+	id->cqes = min_t(__u8, ((0x4 << 4) | 0x4), id->cqes);
+	id->maxcmd = cpu_to_le16(NVMET_MAX_CMD);
+
+	/* don't support fuse commands */
+	id->fuses = 0;
+
+	id->sgls = cpu_to_le32(1 << 0); /* we always support SGLs */
+	if (ctrl->ops->has_keyed_sgls)
+		id->sgls |= cpu_to_le32(1 << 2);
+	if (req->port->inline_data_size)
+		id->sgls |= cpu_to_le32(1 << 20);
+
+	/*
+	 * When the passthru controller is set up using the nvme-loop
+	 * transport it will export the passthru ctrl subsysnqn (PCIe NVMe
+	 * ctrl) and will fail in nvme/host/core.c in the
+	 * nvme_init_subsystem()->nvme_active_ctrl() code path with a
+	 * duplicate ctrl subsysnqn. In order to prevent that we mask the
+	 * passthru-ctrl subsysnqn with the target ctrl subsysnqn.
+	 */
+	memcpy(id->subnqn, ctrl->subsysnqn, sizeof(id->subnqn));
+
+	/* use fabric id-ctrl values */
+	id->ioccsz = cpu_to_le32((sizeof(struct nvme_command) +
+				req->port->inline_data_size) / 16);
+	id->iorcsz = cpu_to_le32(sizeof(struct nvme_completion) / 16);
+
+	id->msdbd = ctrl->ops->msdbd;
+
+	/* Support multipath connections with fabrics */
+	id->cmic |= 1 << 1;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(struct nvme_id_ctrl));
+
+out_free:
+	kfree(id);
+out:
+	return status;
+}
+
+static u16 nvmet_passthru_override_id_ns(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_id_ns *id;
+	int i;
+
+	id = kzalloc(sizeof(*id), GFP_KERNEL);
+	if (!id) {
+		status = NVME_SC_INTERNAL;
+		goto out;
+	}
+
+	status = nvmet_copy_from_sgl(req, 0, id, sizeof(struct nvme_id_ns));
+	if (status)
+		goto out_free;
+
+	for (i = 0; i < (id->nlbaf + 1); i++)
+		if (id->lbaf[i].ms)
+			memset(&id->lbaf[i], 0, sizeof(id->lbaf[i]));
+
+	id->flbas = id->flbas & ~(1 << 4);
+	id->mc = 0;
+
+	status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
+
+out_free:
+	kfree(id);
+out:
+	return status;
+}
+
+static u16 nvmet_passthru_fixup_identify(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->identify.cns) {
+	case NVME_ID_CNS_CTRL:
+		status = nvmet_passthru_override_id_ctrl(req);
+		break;
+	case NVME_ID_CNS_NS:
+		status = nvmet_passthru_override_id_ns(req);
+		break;
+	}
+	return status;
+}
+
+static u16 nvmet_passthru_admin_passthru_start(struct nvmet_req *req)
+{
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->common.opcode) {
+	case nvme_admin_format_nvm:
+		status = nvmet_passthru_override_format_nvm(req);
+		break;
+	}
+	return status;
+}
+
+static u16 nvmet_passthru_admin_passthru_end(struct nvmet_req *req)
+{
+	u8 aer_type = NVME_AER_TYPE_NOTICE;
+	u16 status = NVME_SC_SUCCESS;
+
+	switch (req->cmd->common.opcode) {
+	case nvme_admin_identify:
+		status = nvmet_passthru_fixup_identify(req);
+		break;
+	case nvme_admin_ns_mgmt:
+	case nvme_admin_ns_attach:
+	case nvme_admin_format_nvm:
+		if (nvmet_add_async_event(req->sq->ctrl, aer_type, 0, 0))
+			status = NVME_SC_INTERNAL;
+		break;
+	}
+	return status;
+}
+
+static void nvmet_passthru_execute_admin_cmd(struct nvmet_req *req)
+{
+	u8 opcode = req->cmd->common.opcode;
+	u32 effects;
+	u16 status;
+
+	status = nvmet_passthru_admin_passthru_start(req);
+	if (status)
+		goto out;
+
+	effects = nvme_passthru_start(nvmet_req_passthru_ctrl(req), NULL,
+				      opcode);
+
+	/*
+	 * Admin commands have side effects and it is better to
+	 * handle those side effects in the submission thread context than
+	 * in the request completion path, which is in the interrupt context.
+	 * Also in this way, we keep the passthru admin command code path
+	 * consistent with the nvme/host/core.c sync command submission
+	 * APIs/IOCTLs and use nvme_passthru_start/end() to handle side
+	 * effects consistently.
+	 */
+	blk_execute_rq(req->p.rq->q, NULL, req->p.rq, 0);
+
+	nvme_passthru_end(nvmet_req_passthru_ctrl(req), effects);
+
+	status = nvmet_passthru_admin_passthru_end(req);
+out:
+	if (status == NVME_SC_SUCCESS) {
+		nvmet_set_result(req, nvme_req(req->p.rq)->result.u32);
+		status = nvme_req(req->p.rq)->status;
+	}
+
+	nvmet_passthru_req_complete(req, req->p.rq, status);
+}
+
+static int nvmet_passthru_map_sg(struct nvmet_req *req, struct request *rq)
+{
+	int sg_cnt = req->sg_cnt;
+	struct scatterlist *sg;
+	int op = REQ_OP_READ;
+	int op_flags = 0;
+	struct bio *bio;
+	int i, ret;
+
+	if (req->cmd->common.opcode == nvme_cmd_flush) {
+		op_flags = REQ_FUA;
+	} else if (nvme_is_write(req->cmd)) {
+		op = REQ_OP_WRITE;
+		op_flags = REQ_SYNC | REQ_IDLE;
+	}
+
+	bio = bio_alloc(GFP_KERNEL, min(sg_cnt, BIO_MAX_PAGES));
+	bio->bi_end_io = bio_put;
+
+	for_each_sg(req->sg, sg, req->sg_cnt, i) {
+		if (bio_add_page(bio, sg_page(sg), sg->length,
+				 sg->offset) != sg->length) {
+			ret = blk_rq_append_bio(rq, &bio);
+			if (unlikely(ret))
+				return ret;
+
+			bio_set_op_attrs(bio, op, op_flags);
+			bio = bio_alloc(GFP_KERNEL,
+					min(sg_cnt, BIO_MAX_PAGES));
+		}
+		sg_cnt--;
+	}
+
+	ret = blk_rq_append_bio(rq, &bio);
+	if (unlikely(ret))
+		return ret;
+
+	return 0;
+}
+
+static struct request *nvmet_passthru_blk_make_request(struct nvmet_req *req,
+		struct nvme_ns *ns, gfp_t gfp_mask)
+{
+	struct nvme_ctrl *passthru_ctrl = nvmet_req_passthru_ctrl(req);
+	struct nvme_command *cmd = req->cmd;
+	struct request_queue *q;
+	struct request *rq;
+	int ret;
+
+	if (ns)
+		q = ns->queue;
+	else
+		q = passthru_ctrl->admin_q;
+
+	rq = nvme_alloc_request(q, cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY);
+	if (unlikely(IS_ERR(rq)))
+		return rq;
+
+	if (req->sg_cnt) {
+		ret = nvmet_passthru_map_sg(req, rq);
+		if (unlikely(ret)) {
+			blk_put_request(rq);
+			return ERR_PTR(ret);
+		}
+	}
+
+	/*
+	 * We don't support fused cmds, also the nvme-pci driver uses its own
+	 * sgl_threshold parameter to decide whether to use SGLs or PRPs,
+	 * hence turn off those bits in the flags.
+	 */
+	req->cmd->common.flags &= ~(NVME_CMD_FUSE_FIRST | NVME_CMD_FUSE_SECOND |
+				    NVME_CMD_SGL_ALL);
+
+	return rq;
+}
+
+static void nvmet_passthru_execute_admin_work(struct work_struct *w)
+{
+	struct nvmet_req *req = container_of(w, struct nvmet_req, p.work);
+
+	nvmet_passthru_execute_admin_cmd(req);
+}
+
+static void nvmet_passthru_submit_admin_cmd(struct nvmet_req *req)
+{
+	INIT_WORK(&req->p.work, nvmet_passthru_execute_admin_work);
+	queue_work(passthru_wq, &req->p.work);
+}
+
+static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
+{
+	struct request *rq = NULL;
+	struct nvme_ns *ns = NULL;
+	u16 status;
+
+	if (likely(req->sq->qid != 0)) {
+		u32 nsid = le32_to_cpu(req->cmd->common.nsid);
+
+		ns = nvme_find_get_ns(nvmet_req_passthru_ctrl(req), nsid);
+		if (unlikely(!ns)) {
+			pr_err("failed to get passthru ns nsid:%u\n", nsid);
+			status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+			goto fail_out;
+		}
+	}
+
+	rq = nvmet_passthru_blk_make_request(req, ns, GFP_KERNEL);
+	if (unlikely(IS_ERR(rq))) {
+		rq = NULL;
+		status = NVME_SC_INTERNAL;
+		goto fail_out;
+	}
+
+	if (unlikely(blk_rq_nr_phys_segments(rq) > queue_max_segments(rq->q) ||
+	    (blk_rq_payload_bytes(rq) >> 9) > queue_max_hw_sectors(rq->q))) {
+		status = NVME_SC_INVALID_FIELD;
+		goto fail_out;
+	}
+
+	rq->end_io_data = req;
+	if (req->sq->qid != 0) {
+		blk_execute_rq_nowait(rq->q, NULL, rq, 0,
+				      nvmet_passthru_req_done);
+	} else {
+		req->p.rq = rq;
+		nvmet_passthru_submit_admin_cmd(req);
+	}
+
+	if (ns)
+		nvme_put_ns(ns);
+
+	return;
+
+fail_out:
+	if (ns)
+		nvme_put_ns(ns);
+	nvmet_passthru_req_complete(req, rq, status);
+}
+
+/*
+ * We emulate commands which are not routed through the existing target
+ * code and not supported by the passthru ctrl. E.g. consider a scenario
+ * where the passthru ctrl version is < 1.3.0 and the target fabrics ctrl
+ * version is >= 1.3.0. In that case, in order to be fabrics compliant, we
+ * need to emulate the ns-desc-list command, which is 1.3.0 compliant but
+ * not present for the passthru ctrl due to the lower version.
+ */
+static void nvmet_passthru_emulate_id_desclist(struct nvmet_req *req)
+{
+	int nsid = le32_to_cpu(req->cmd->common.nsid);
+	u16 status = NVME_SC_SUCCESS;
+	struct nvme_ns_ids *ids;
+	struct nvme_ns *ns;
+	off_t off = 0;
+
+	ns = nvme_find_get_ns(nvmet_req_passthru_ctrl(req), nsid);
+	if (unlikely(!ns)) {
+		pr_err("failed to get passthru ns nsid:%u\n", nsid);
+		status = NVME_SC_INVALID_NS | NVME_SC_DNR;
+		goto out;
+	}
+
+	/*
+	 * Instead of refactoring and creating helpers, keep it simple and
+	 * just re-use the code from admin-cmd.c ->
+	 * nvmet_execute_identify_ns_desclist().
+	 */
+	ids = &ns->head->ids;
+	if (memchr_inv(ids->eui64, 0, sizeof(ids->eui64))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_EUI64,
+						  NVME_NIDT_EUI64_LEN,
+						  &ids->eui64, &off);
+		if (status)
+			goto out_put_ns;
+	}
+	if (memchr_inv(&ids->uuid, 0, sizeof(ids->uuid))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_UUID,
+						  NVME_NIDT_UUID_LEN,
+						  &ids->uuid, &off);
+		if (status)
+			goto out_put_ns;
+	}
+	if (memchr_inv(ids->nguid, 0, sizeof(ids->nguid))) {
+		status = nvmet_copy_ns_identifier(req, NVME_NIDT_NGUID,
+						  NVME_NIDT_NGUID_LEN,
+						  &ids->nguid, &off);
+		if (status)
+			goto out_put_ns;
+	}
+
+	if (sg_zero_buffer(req->sg, req->sg_cnt, NVME_IDENTIFY_DATA_SIZE - off,
+			   off) != NVME_IDENTIFY_DATA_SIZE - off)
+		status = NVME_SC_INTERNAL | NVME_SC_DNR;
+
+out_put_ns:
+	nvme_put_ns(ns);
+out:
+	nvmet_req_complete(req, status);
+}
+
+/*
+ * In passthru mode we support three types of commands:
+ * 1. Commands which are blacklisted.
+ * 2. Commands which are routed through target code.
+ * 3. Commands which are emulated in the target code, since we can't rely
+ *    on the passthru ctrl and cannot route them through the target code.
+ */
+static u16 nvmet_parse_passthru_admin_cmd(struct nvmet_req *req)
+{
+	struct nvme_command *cmd = req->cmd;
+	u16 status = 0;
+
+	if (cmd->common.opcode >= nvme_admin_vendor_unique_start) {
+		/*
+		 * Passthru all vendor unique commands
+		 */
+		req->execute = nvmet_passthru_execute_cmd;
+		return status;
+	}
+
+	switch (cmd->common.opcode) {
+	/* 2. commands which are routed through target code */
+	case nvme_admin_async_event:
+		/*
+		 * Right now we don't monitor any events for the passthru
+		 * controller. Instead generate an async event notice for the
+		 * ns-mgmt/format/attach commands so that the host can update
+		 * its ns-inventory.
+		 */
+		/* fallthru */
+	case nvme_admin_keep_alive:
+		/*
+		 * Most PCIe ctrls don't support the keep alive cmd, so we
+		 * route keep alive to the non-passthru mode. In the future,
+		 * change this code when PCIe ctrls with keep alive support
+		 * become available.
+		 */
+		status = nvmet_parse_admin_cmd(req);
+		break;
+	case nvme_admin_set_features:
+		switch (le32_to_cpu(req->cmd->features.fid)) {
+		case NVME_FEAT_ASYNC_EVENT:
+		case NVME_FEAT_KATO:
+		case NVME_FEAT_NUM_QUEUES:
+			status = nvmet_parse_admin_cmd(req);
+			break;
+		default:
+			req->execute = nvmet_passthru_execute_cmd;
+		}
+		break;
+	/* 3. commands which are emulated in the passthru code */
+	case nvme_admin_identify:
+		switch (req->cmd->identify.cns) {
+		case NVME_ID_CNS_NS_DESC_LIST:
+			req->execute = nvmet_passthru_emulate_id_desclist;
+			break;
+		default:
+			req->execute = nvmet_passthru_execute_cmd;
+		}
+		break;
+	/* 1. By default, blacklist all admin commands */
+	default:
+		status = NVME_SC_INVALID_OPCODE | NVME_SC_DNR;
+		req->execute = NULL;
+		break;
+	}
+
+	return status;
+}
+
+u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
+{
+	int ret;
+
+	if (unlikely(req->cmd->common.opcode == nvme_fabrics_command))
+		return nvmet_parse_fabrics_cmd(req);
+	else if (unlikely(req->sq->ctrl->subsys->type == NVME_NQN_DISC))
+		return nvmet_parse_discovery_cmd(req);
+
+	ret = nvmet_check_ctrl_status(req, req->cmd);
+	if (unlikely(ret))
+		return ret;
+
+	if (unlikely(req->sq->qid == 0))
+		return nvmet_parse_passthru_admin_cmd(req);
+
+	req->execute = nvmet_passthru_execute_cmd;
+	return NVME_SC_SUCCESS;
+}

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 5dfd4e0ae096..daec1240307c 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -227,6 +227,10 @@ struct nvmet_subsys {
 
 	struct config_group	namespaces_group;
 	struct config_group	allowed_hosts_group;
+
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+	struct nvme_ctrl	*passthru_ctrl;
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
 
 static inline struct nvmet_subsys *to_subsys(struct config_item *item)
@@ -302,6 +306,10 @@ struct nvmet_req {
 			struct bio_vec		*bvec;
 			struct work_struct	work;
 		} f;
+		struct {
+			struct request		*rq;
+			struct work_struct	work;
+		} p;
 	};
 	int			sg_cnt;
 	/* data length as parsed from the command: */
@@ -497,6 +505,44 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
 			req->ns->blksize_shift;
 }
 
+#ifdef CONFIG_NVME_TARGET_PASSTHRU
+
+int nvmet_passthru_init(void);
+void nvmet_passthru_destroy(void);
+u16 nvmet_parse_passthru_cmd(struct nvmet_req *req);
+
+static inline
+struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+{
+	return subsys->passthru_ctrl;
+}
+
+#else /* CONFIG_NVME_TARGET_PASSTHRU */
+
+static inline int nvmet_passthru_init(void)
+{
+	return 0;
+}
+static inline void nvmet_passthru_destroy(void)
+{
+}
+static inline u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
+{
+	return 0;
+}
+static inline
+struct nvme_ctrl *nvmet_passthru_ctrl(struct nvmet_subsys *subsys)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_NVME_TARGET_PASSTHRU */
+
+static inline struct nvme_ctrl *nvmet_req_passthru_ctrl(struct nvmet_req *req)
+{
+	return nvmet_passthru_ctrl(req->sq->ctrl->subsys);
+}
+
 u16 errno_to_nvme_status(struct nvmet_req *req, int errno);
 
 /* Convert a 32-bit number to a 16-bit 0's based number */

diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index f61d6906e59d..94e730b5d0a3 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -816,6 +816,7 @@ enum nvme_admin_opcode {
 	nvme_admin_security_recv	= 0x82,
 	nvme_admin_sanitize_nvm		= 0x84,
 	nvme_admin_get_lba_status	= 0x86,
+	nvme_admin_vendor_unique_start	= 0xC0,
 };
 
 #define nvme_admin_opcode_name(opcode)	{ opcode, #opcode }

From patchwork Wed Oct 9 19:25:24 2019
X-Patchwork-Id: 11182015
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:24 -0600
Message-Id: <20191009192530.13079-8-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>
Subject: [PATCH v9 06/12] nvmet-passthru: add enable/disable helpers
This patch adds helper functions which are used in the NVMeOF configfs
when the user is configuring the passthru subsystem. Here we ensure
that only one subsys is assigned to each nvme_ctrl by using an xarray
indexed by cntlid.

[chaitanya.kulkarni@wdc.com: this patch is very roughly based on a
 similar one by Chaitanya]
Signed-off-by: Chaitanya Kulkarni
Signed-off-by: Logan Gunthorpe
---
 drivers/nvme/target/core.c            |  8 +++
 drivers/nvme/target/io-cmd-passthru.c | 77 +++++++++++++++++++++++++++
 drivers/nvme/target/nvmet.h           | 10 ++++
 3 files changed, 95 insertions(+)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index 256f765e772b..986b2511d284 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -520,6 +520,12 @@ int nvmet_ns_enable(struct nvmet_ns *ns)
 
 	mutex_lock(&subsys->lock);
 	ret = 0;
+
+	if (nvmet_passthru_ctrl(subsys)) {
+		pr_info("cannot enable both passthru and regular namespaces for a single subsystem");
+		goto out_unlock;
+	}
+
 	if (ns->enabled)
 		goto out_unlock;
 
@@ -1440,6 +1446,8 @@ static void nvmet_subsys_free(struct kref *ref)
 
 	WARN_ON_ONCE(!list_empty(&subsys->namespaces));
 
+	nvmet_passthru_subsys_free(subsys);
+
 	kfree(subsys->subsysnqn);
 	kfree(subsys);
 }

diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c
index 1eb855b4071c..c482e55f0fb8 100644
--- a/drivers/nvme/target/io-cmd-passthru.c
+++ b/drivers/nvme/target/io-cmd-passthru.c
@@ -11,6 +11,11 @@
 #include "../host/nvme.h"
 #include "nvmet.h"
 
+/*
+ * xarray to maintain one passthru subsystem per nvme controller.
+ */
+static DEFINE_XARRAY(passthru_subsystems);
+
 static struct workqueue_struct *passthru_wq;
 
 int nvmet_passthru_init(void)
@@ -27,6 +32,78 @@ void nvmet_passthru_destroy(void)
 	destroy_workqueue(passthru_wq);
 }
 
+int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys)
+{
+	struct nvme_ctrl *ctrl;
+	int ret = -EINVAL;
+	void *old;
+
+	mutex_lock(&subsys->lock);
+	if (!subsys->passthru_ctrl_path)
+		goto out_unlock;
+	if (subsys->passthru_ctrl)
+		goto out_unlock;
+
+	if (subsys->nr_namespaces) {
+		pr_info("cannot enable both passthru and regular namespaces for a single subsystem");
+		goto out_unlock;
+	}
+
+	ctrl = nvme_ctrl_get_by_path(subsys->passthru_ctrl_path);
+	if (IS_ERR(ctrl)) {
+		ret = PTR_ERR(ctrl);
+		pr_err("failed to open nvme controller %s\n",
+		       subsys->passthru_ctrl_path);
+		goto out_unlock;
+	}
+
+	old = xa_cmpxchg(&passthru_subsystems, ctrl->cntlid, NULL,
+			 subsys, GFP_KERNEL);
+	if (xa_is_err(old)) {
+		ret = xa_err(old);
+		goto out_put_ctrl;
+	}
+
+	if (old)
+		goto out_put_ctrl;
+
+	subsys->passthru_ctrl = ctrl;
+
+	mutex_unlock(&subsys->lock);
+	return 0;
+
+out_put_ctrl:
+	nvme_put_ctrl(ctrl);
+out_unlock:
+	mutex_unlock(&subsys->lock);
+	return ret;
+}
+
+static void __nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
+{
+	if (subsys->passthru_ctrl) {
+		xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid);
+		nvme_put_ctrl(subsys->passthru_ctrl);
+	}
+	subsys->passthru_ctrl = NULL;
+}
+
+void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
+{
+	mutex_lock(&subsys->lock);
+	__nvmet_passthru_ctrl_disable(subsys);
+	mutex_unlock(&subsys->lock);
+}
+
+void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys)
+{
+	mutex_lock(&subsys->lock);
+	__nvmet_passthru_ctrl_disable(subsys);
+	kfree(subsys->passthru_ctrl_path);
+	mutex_unlock(&subsys->lock);
+}
+
 static void nvmet_passthru_req_complete(struct nvmet_req *req,
 		struct request *rq, u16 status)
 {

diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index daec1240307c..2c287d13ed83 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -230,6 +230,7 @@ struct nvmet_subsys {
 
 #ifdef CONFIG_NVME_TARGET_PASSTHRU
 	struct nvme_ctrl	*passthru_ctrl;
+	char			*passthru_ctrl_path;
 #endif /* CONFIG_NVME_TARGET_PASSTHRU */
 };
 
@@ -509,6 +510,9 @@ static inline u32 nvmet_rw_len(struct nvmet_req *req)
 
 int nvmet_passthru_init(void);
 void nvmet_passthru_destroy(void);
+void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys);
+int nvmet_passthru_ctrl_enable(struct nvmet_subsys *subsys);
+void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys);
 u16 nvmet_parse_passthru_cmd(struct nvmet_req *req);
 
 static inline
@@ -526,6 +530,12 @@ static inline int nvmet_passthru_init(void)
 static inline void nvmet_passthru_destroy(void)
 {
 }
+static inline void nvmet_passthru_subsys_free(struct nvmet_subsys *subsys)
+{
+}
+static inline void nvmet_passthru_ctrl_disable(struct nvmet_subsys *subsys)
+{
+}
 static inline u16 nvmet_parse_passthru_cmd(struct nvmet_req *req)
 {
 	return 0;

From patchwork Wed Oct 9 19:25:25 2019
X-Patchwork-Id: 11181971
From: Logan Gunthorpe
Date: Wed, 9 Oct 2019 13:25:25 -0600
Subject: [PATCH v9 07/12] nvmet-core: don't check the data len for pt-ctrl
Message-Id: <20191009192530.13079-9-logang@deltatee.com>
In-Reply-To: <20191009192530.13079-1-logang@deltatee.com>

From: Chaitanya Kulkarni

Right now, data_len is calculated before the transfer len after we
parse the command. With the passthru interface we allow VUCs
(Vendor-Unique Commands), so in order to keep the code simple and
compact, instead of assigning the data len for each VUC in the command
parse function, just use the transfer len as it is. This may result in
an error if the expected data_len != transfer_len.
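For illustration, a minimal sketch of the difference (nvmet_rw_len() is
the existing helper the I/O parser uses; the second comment describes a
hypothetical per-VUC rule, which is exactly what this patch avoids
having to write):

	/* Known opcode: the parser can compute the expected data length. */
	req->data_len = nvmet_rw_len(req);	/* (NLB + 1) << blksize_shift */

	/*
	 * Vendor-unique opcode: no opcode-specific length rule exists, so
	 * the only usable length is the transfer length derived from the
	 * SGL. Rather than assigning data_len per VUC, the hunk below simply
	 * skips the data_len != transfer_len sanity check for passthru ctrls.
	 */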
Signed-off-by: Chaitanya Kulkarni [logang@deltatee.com: * added definition of VUC to the commit message and comment * use nvmet_req_passthru_ctrl() helper seeing we can't dereference subsys->passthru_ctrl if CONFIG_NVME_TARGET_PASSTHRU is not set] Signed-off-by: Logan Gunthorpe Reviewed-by: Sagi Grimberg Signed-off-by: Christoph Hellwig --- drivers/nvme/target/core.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c index 986b2511d284..f9d46354f9ae 100644 --- a/drivers/nvme/target/core.c +++ b/drivers/nvme/target/core.c @@ -942,7 +942,16 @@ EXPORT_SYMBOL_GPL(nvmet_req_uninit); void nvmet_req_execute(struct nvmet_req *req) { - if (unlikely(req->data_len != req->transfer_len)) { + /* + * data_len is calculated (before the transfer len) when we parse + * the command. With the passthru interface we allow VUCs + * (Vendor-Unique Commands). In order to keep the code simple and + * compact, instead of assigning the data len for each VUC in the + * command parse function, just use the transfer len as it is. This + * may result in an error if the expected data_len != transfer_len. + */ + if (!(req->sq->ctrl && nvmet_req_passthru_ctrl(req)) && + unlikely(req->data_len != req->transfer_len)) { req->error_loc = offsetof(struct nvme_common_command, dptr); nvmet_req_complete(req, NVME_SC_SGL_INVALID_DATA | NVME_SC_DNR); } else From patchwork Wed Oct 9 19:25:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11182023 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 81F0613BD for ; Wed, 9 Oct 2019 19:26:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 603EE206B6 for ; Wed, 9 Oct 2019 19:26:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731809AbfJITZi (ORCPT ); Wed, 9 Oct 2019 15:25:38 -0400 Received: from ale.deltatee.com ([207.54.116.67]:37610 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728804AbfJITZi (ORCPT ); Wed, 9 Oct 2019 15:25:38 -0400 Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89) (envelope-from ) id 1iIHaa-0002g9-6c; Wed, 09 Oct 2019 13:25:37 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1iIHaY-0003QJ-UV; Wed, 09 Oct 2019 13:25:34 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: Christoph Hellwig , Sagi Grimberg , Keith Busch , Jens Axboe , Chaitanya Kulkarni , Max Gurtovoy , Stephen Bates , Logan Gunthorpe Date: Wed, 9 Oct 2019 13:25:26 -0600 Message-Id: <20191009192530.13079-10-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191009192530.13079-1-logang@deltatee.com> References: <20191009192530.13079-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, axboe@fb.com, Chaitanya.Kulkarni@wdc.com, maxg@mellanox.com, sbates@raithlin.com, logang@deltatee.com
X-SA-Exim-Mail-From: gunthorp@deltatee.com X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ale.deltatee.com X-Spam-Level: X-Spam-Status: No, score=-8.7 required=5.0 tests=ALL_TRUSTED,BAYES_00, GREYLIST_ISWHITE,MYRULES_NO_TEXT autolearn=ham autolearn_force=no version=3.4.2 Subject: [PATCH v9 08/12] nvmet-tcp: don't check data_len in nvmet_tcp_map_data() X-SA-Exim-Version: 4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org With passthru, the data_len is no longer guaranteed to be set for all requests. Therefore, we should not check for it to be non-zero. Instead, check if the SGL length is zero and map when appropriate. None of the other transports check data_len, which is verified in core code. Signed-off-by: Logan Gunthorpe Reviewed-by: Sagi Grimberg --- drivers/nvme/target/tcp.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c index d535080b781f..1e2d92f818ad 100644 --- a/drivers/nvme/target/tcp.c +++ b/drivers/nvme/target/tcp.c @@ -320,7 +320,7 @@ static int nvmet_tcp_map_data(struct nvmet_tcp_cmd *cmd) struct nvme_sgl_desc *sgl = &cmd->req.cmd->common.dptr.sgl; u32 len = le32_to_cpu(sgl->length); - if (!cmd->req.data_len) + if (!len) return 0; if (sgl->type == ((NVME_SGL_FMT_DATA_DESC << 4) | From patchwork Wed Oct 9 19:25:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11182007 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A03611575 for ; Wed, 9 Oct 2019 19:26:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 8A52F206B6 for ; Wed, 9 Oct 2019 19:26:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732277AbfJIT02 (ORCPT ); Wed, 9 Oct 2019 15:26:28 -0400 Received: from ale.deltatee.com ([207.54.116.67]:37640 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731937AbfJITZm (ORCPT ); Wed, 9 Oct 2019 15:25:42 -0400 Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89) (envelope-from ) id 1iIHae-0002gA-7l; Wed, 09 Oct 2019 13:25:41 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1iIHaZ-0003QM-1N; Wed, 09 Oct 2019 13:25:35 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: Christoph Hellwig , Sagi Grimberg , Keith Busch , Jens Axboe , Chaitanya Kulkarni , Max Gurtovoy , Stephen Bates , Logan Gunthorpe Date: Wed, 9 Oct 2019 13:25:27 -0600 Message-Id: <20191009192530.13079-11-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191009192530.13079-1-logang@deltatee.com> References: <20191009192530.13079-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, axboe@fb.com, Chaitanya.Kulkarni@wdc.com, maxg@mellanox.com,
sbates@raithlin.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ale.deltatee.com X-Spam-Level: X-Spam-Status: No, score=-8.5 required=5.0 tests=ALL_TRUSTED,BAYES_00, GREYLIST_ISWHITE,MYRULES_FREE,MYRULES_NO_TEXT autolearn=ham autolearn_force=no version=3.4.2 Subject: [PATCH v9 09/12] nvmet-configfs: introduce passthru configfs interface X-SA-Exim-Version: 4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org When CONFIG_NVME_TARGET_PASSTHRU is enabled, a 'passthru' directory will be added to each subsystem. The directory is similar to a namespace and has two attributes: device_path and enable. The user must set the path to the nvme controller's char device and write '1' to enable the subsystem to use passthru. Any given subsystem is prevented from enabling both a regular namespace and the passthru device. If one is enabled, enabling the other will produce an error. Signed-off-by: Logan Gunthorpe Reviewed-by: Sagi Grimberg --- drivers/nvme/target/configfs.c | 99 ++++++++++++++++++++++++++++++++++ drivers/nvme/target/nvmet.h | 1 + 2 files changed, 100 insertions(+) diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c index 98613a45bd3b..302689081cd7 100644 --- a/drivers/nvme/target/configfs.c +++ b/drivers/nvme/target/configfs.c @@ -615,6 +615,103 @@ static const struct config_item_type nvmet_namespaces_type = { .ct_owner = THIS_MODULE, }; +#ifdef CONFIG_NVME_TARGET_PASSTHRU + +static ssize_t nvmet_passthru_device_path_show(struct config_item *item, + char *page) +{ + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); + + return snprintf(page, PAGE_SIZE, "%s\n", subsys->passthru_ctrl_path); +} + +static ssize_t nvmet_passthru_device_path_store(struct config_item *item, + const char *page, size_t count) +{ + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); + size_t len; + int ret; + + mutex_lock(&subsys->lock); + + ret = -EBUSY; + if (subsys->passthru_ctrl) + goto out_unlock; + + ret = -EINVAL; + len = strcspn(page, "\n"); + if (!len) + goto out_unlock; + + kfree(subsys->passthru_ctrl_path); + ret = -ENOMEM; + subsys->passthru_ctrl_path = kstrndup(page, len, GFP_KERNEL); + if (!subsys->passthru_ctrl_path) + goto out_unlock; + + mutex_unlock(&subsys->lock); + + return count; +out_unlock: + mutex_unlock(&subsys->lock); + return ret; +} +CONFIGFS_ATTR(nvmet_passthru_, device_path); + +static ssize_t nvmet_passthru_enable_show(struct config_item *item, + char *page) +{ + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); + + return sprintf(page, "%d\n", subsys->passthru_ctrl ? 1 : 0); +} + +static ssize_t nvmet_passthru_enable_store(struct config_item *item, + const char *page, size_t count) +{ + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); + bool enable; + int ret = 0; + + if (strtobool(page, &enable)) + return -EINVAL; + + if (enable) + ret = nvmet_passthru_ctrl_enable(subsys); + else + nvmet_passthru_ctrl_disable(subsys); + + return ret ?
ret : count; +} +CONFIGFS_ATTR(nvmet_passthru_, enable); + +static struct configfs_attribute *nvmet_passthru_attrs[] = { + &nvmet_passthru_attr_device_path, + &nvmet_passthru_attr_enable, + NULL, +}; + +static const struct config_item_type nvmet_passthru_type = { + .ct_attrs = nvmet_passthru_attrs, + .ct_owner = THIS_MODULE, +}; + +static void nvmet_add_passthru_group(struct nvmet_subsys *subsys) +{ + config_group_init_type_name(&subsys->passthru_group, + "passthru", &nvmet_passthru_type); + configfs_add_default_group(&subsys->passthru_group, + &subsys->group); +} + +#else /* CONFIG_NVME_TARGET_PASSTHRU */ + +static void nvmet_add_passthru_group(struct nvmet_subsys *subsys) +{ +} + +#endif /* CONFIG_NVME_TARGET_PASSTHRU */ + static int nvmet_port_subsys_allow_link(struct config_item *parent, struct config_item *target) { @@ -915,6 +1012,8 @@ static struct config_group *nvmet_subsys_make(struct config_group *group, configfs_add_default_group(&subsys->allowed_hosts_group, &subsys->group); + nvmet_add_passthru_group(subsys); + return &subsys->group; } diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h index 2c287d13ed83..ba7690979661 100644 --- a/drivers/nvme/target/nvmet.h +++ b/drivers/nvme/target/nvmet.h @@ -231,6 +231,7 @@ struct nvmet_subsys { #ifdef CONFIG_NVME_TARGET_PASSTHRU struct nvme_ctrl *passthru_ctrl; char *passthru_ctrl_path; + struct config_group passthru_group; #endif /* CONFIG_NVME_TARGET_PASSTHRU */ }; From patchwork Wed Oct 9 19:25:28 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11182019 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 307DB1864 for ; Wed, 9 Oct 2019 19:26:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1B30A20B7C for ; Wed, 9 Oct 2019 19:26:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1731882AbfJITZk (ORCPT ); Wed, 9 Oct 2019 15:25:40 -0400 Received: from ale.deltatee.com ([207.54.116.67]:37616 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728804AbfJITZj (ORCPT ); Wed, 9 Oct 2019 15:25:39 -0400 Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89) (envelope-from ) id 1iIHaa-0002gB-6e; Wed, 09 Oct 2019 13:25:38 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1iIHaZ-0003QP-4s; Wed, 09 Oct 2019 13:25:35 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: Christoph Hellwig , Sagi Grimberg , Keith Busch , Jens Axboe , Chaitanya Kulkarni , Max Gurtovoy , Stephen Bates , Logan Gunthorpe Date: Wed, 9 Oct 2019 13:25:28 -0600 Message-Id: <20191009192530.13079-12-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191009192530.13079-1-logang@deltatee.com> References: <20191009192530.13079-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, axboe@fb.com, Chaitanya.Kulkarni@wdc.com, 
maxg@mellanox.com, sbates@raithlin.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ale.deltatee.com X-Spam-Level: X-Spam-Status: No, score=-6.7 required=5.0 tests=ALL_TRUSTED,BAYES_00, MYRULES_NO_TEXT autolearn=no autolearn_force=no version=3.4.2 Subject: [PATCH v9 10/12] block: don't check blk_rq_is_passthrough() in blk_do_io_stat() X-SA-Exim-Version: 4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Instead of checking blk_rq_is_passthrough() for every call to blk_do_io_stat(), don't set RQF_IO_STAT for passthrough requests. This should be equivalent, and opens the possibility of passthrough requests specifically requesting statistics tracking. Signed-off-by: Logan Gunthorpe Reviewed-by: Sagi Grimberg --- block/blk-mq.c | 2 +- block/blk.h | 5 ++--- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/block/blk-mq.c b/block/blk-mq.c index ec791156e9cc..618814fcc8a7 100644 --- a/block/blk-mq.c +++ b/block/blk-mq.c @@ -319,7 +319,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data, rq->cmd_flags = op; if (data->flags & BLK_MQ_REQ_PREEMPT) rq->rq_flags |= RQF_PREEMPT; - if (blk_queue_io_stat(data->q)) + if (blk_queue_io_stat(data->q) && !blk_rq_is_passthrough(rq)) rq->rq_flags |= RQF_IO_STAT; INIT_LIST_HEAD(&rq->queuelist); INIT_HLIST_NODE(&rq->hash); diff --git a/block/blk.h b/block/blk.h index 47fba9362e60..6b7ebc2e61b8 100644 --- a/block/blk.h +++ b/block/blk.h @@ -243,13 +243,12 @@ int blk_dev_init(void); * * a) it's attached to a gendisk, and * b) the queue had IO stats enabled when this request was started, and - * c) it's a file system request + * c) it's a file system request (RQF_IO_STAT will not be set otherwise) */ static inline bool blk_do_io_stat(struct request *rq) { return rq->rq_disk && - (rq->rq_flags & RQF_IO_STAT) && - !blk_rq_is_passthrough(rq); + (rq->rq_flags & RQF_IO_STAT); } static inline void req_set_nomerge(struct request_queue *q, struct request *req) From patchwork Wed Oct 9 19:25:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11181983 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CED1613BD for ; Wed, 9 Oct 2019 19:26:05 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id B9B8821BE5 for ; Wed, 9 Oct 2019 19:26:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732133AbfJIT0C (ORCPT ); Wed, 9 Oct 2019 15:26:02 -0400 Received: from ale.deltatee.com ([207.54.116.67]:37692 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1732013AbfJITZq (ORCPT ); Wed, 9 Oct 2019 15:25:46 -0400 Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89) (envelope-from ) id 1iIHaa-0002gC-6f; Wed, 09 Oct 2019 13:25:45 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1iIHaZ-0003QS-8A; Wed, 09 Oct 2019 13:25:35 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: Christoph Hellwig , Sagi Grimberg , Keith Busch , Jens Axboe , Chaitanya Kulkarni , Max Gurtovoy , Stephen Bates , Logan Gunthorpe Date: Wed, 9 Oct 2019 13:25:29 -0600 Message-Id: <20191009192530.13079-13-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191009192530.13079-1-logang@deltatee.com> References: <20191009192530.13079-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, axboe@fb.com, Chaitanya.Kulkarni@wdc.com, maxg@mellanox.com, sbates@raithlin.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ale.deltatee.com X-Spam-Level: X-Spam-Status: No, score=-8.7 required=5.0 tests=ALL_TRUSTED,BAYES_00, GREYLIST_ISWHITE,MYRULES_NO_TEXT autolearn=ham autolearn_force=no version=3.4.2 Subject: [PATCH v9 11/12] block: call blk_account_io_start() in blk_execute_rq_nowait() X-SA-Exim-Version: 4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org All existing users of blk_execute_rq[_nowait]() are for passthrough commands and will thus be rejected by blk_do_io_stat(). This allows passthrough requests to opt in to IO accounting by setting RQF_IO_STAT. Signed-off-by: Logan Gunthorpe Reviewed-by: Sagi Grimberg --- block/blk-exec.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/blk-exec.c b/block/blk-exec.c index 1db44ca0f4a6..e20a852ae432 100644 --- a/block/blk-exec.c +++ b/block/blk-exec.c @@ -55,6 +55,8 @@ void blk_execute_rq_nowait(struct request_queue *q, struct gendisk *bd_disk, rq->rq_disk = bd_disk; rq->end_io = done; + blk_account_io_start(rq, true); + /* * don't check dying flag for MQ because the request won't * be reused after dying flag is set From patchwork Wed Oct 9 19:25:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Logan Gunthorpe X-Patchwork-Id: 11181995 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 89EF51575 for ; Wed, 9 Oct 2019 19:26:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 73E2F2190F for ; Wed, 9 Oct 2019 19:26:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1732195AbfJIT0O (ORCPT ); Wed, 9 Oct 2019 15:26:14 -0400 Received: from ale.deltatee.com ([207.54.116.67]:37656 "EHLO ale.deltatee.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1731974AbfJITZn (ORCPT ); Wed, 9 Oct 2019 15:25:43 -0400 Received: from cgy1-donard.priv.deltatee.com ([172.16.1.31]) by ale.deltatee.com with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89) (envelope-from ) id 1iIHaa-0002gD-6f; Wed, 09 Oct 2019 13:25:42 -0600 Received: from gunthorp by cgy1-donard.priv.deltatee.com with local (Exim 4.92) (envelope-from ) id 1iIHaZ-0003QV-B6; Wed, 09 Oct 2019 13:25:35 -0600 From: Logan Gunthorpe To: linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: Christoph Hellwig ,
Sagi Grimberg , Keith Busch , Jens Axboe , Chaitanya Kulkarni , Max Gurtovoy , Stephen Bates , Logan Gunthorpe Date: Wed, 9 Oct 2019 13:25:30 -0600 Message-Id: <20191009192530.13079-14-logang@deltatee.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20191009192530.13079-1-logang@deltatee.com> References: <20191009192530.13079-1-logang@deltatee.com> MIME-Version: 1.0 X-SA-Exim-Connect-IP: 172.16.1.31 X-SA-Exim-Rcpt-To: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org, linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org, hch@lst.de, sagi@grimberg.me, kbusch@kernel.org, axboe@fb.com, Chaitanya.Kulkarni@wdc.com, maxg@mellanox.com, sbates@raithlin.com, logang@deltatee.com X-SA-Exim-Mail-From: gunthorp@deltatee.com X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ale.deltatee.com X-Spam-Level: X-Spam-Status: No, score=-8.7 required=5.0 tests=ALL_TRUSTED,BAYES_00, GREYLIST_ISWHITE,MYRULES_NO_TEXT autolearn=ham autolearn_force=no version=3.4.2 Subject: [PATCH v9 12/12] nvmet-passthru: support block accounting X-SA-Exim-Version: 4.2.1 (built Tue, 02 Aug 2016 21:08:31 +0000) X-SA-Exim-Scanned: Yes (on ale.deltatee.com) Sender: linux-fsdevel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-fsdevel@vger.kernel.org Support block disk accounting by setting the RQF_IO_STAT flag and gendisk in the request. After this change, IO counts will be reflected correctly in /proc/diskstats for drives being used by passthru. Signed-off-by: Logan Gunthorpe --- drivers/nvme/target/io-cmd-passthru.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/nvme/target/io-cmd-passthru.c b/drivers/nvme/target/io-cmd-passthru.c index c482e55f0fb8..37d06ebcbd0f 100644 --- a/drivers/nvme/target/io-cmd-passthru.c +++ b/drivers/nvme/target/io-cmd-passthru.c @@ -413,6 +413,9 @@ static struct request *nvmet_passthru_blk_make_request(struct nvmet_req *req, if (unlikely(IS_ERR(rq))) return rq; + if (blk_queue_io_stat(q)) + rq->rq_flags |= RQF_IO_STAT; + if (req->sg_cnt) { ret = nvmet_passthru_map_sg(req, rq); if (unlikely(ret)) { @@ -477,7 +480,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req) rq->end_io_data = req; if (req->sq->qid != 0) { - blk_execute_rq_nowait(rq->q, NULL, rq, 0, + blk_execute_rq_nowait(rq->q, ns->disk, rq, 0, nvmet_passthru_req_done); } else { req->p.rq = rq;
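As a quick way to observe the accounting added by this series, the per-device I/O counters can be read back from /proc/diskstats after running I/O through the passthru target. A minimal user-space sketch (the device name "nvme0n1" is only an assumed example):

	#include <stdio.h>
	#include <string.h>

	/*
	 * Minimal sketch: print the completed read/write I/O counts for
	 * one device from /proc/diskstats. "nvme0n1" is an assumed example
	 * name; substitute the drive backing the passthru controller.
	 */
	int main(void)
	{
		char name[32], line[512];
		unsigned int major, minor;
		unsigned long long reads, writes;
		FILE *f = fopen("/proc/diskstats", "r");

		if (!f)
			return 1;

		while (fgets(line, sizeof(line), f)) {
			/* fields after major/minor/name: reads completed,
			 * reads merged, sectors read, ms reading, writes
			 * completed, ... */
			if (sscanf(line, "%u %u %31s %llu %*u %*u %*u %llu",
				   &major, &minor, name, &reads, &writes) == 5 &&
			    !strcmp(name, "nvme0n1"))
				printf("%s: reads=%llu writes=%llu\n",
				       name, reads, writes);
		}

		fclose(f);
		return 0;
	}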