From patchwork Sun Feb 7 18:13:14 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073251
From: Shai Malin
Subject: [RFC PATCH v3 01/11] nvme-tcp-offload: Add nvme-tcp-offload - NVMeTCP HW offload ULP
Date: Sun, 7 Feb 2021 20:13:14 +0200
Message-ID: <20210207181324.11429-2-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

This patch presents the structure of the NVMeTCP offload common layer
driver. The module is added under "drivers/nvme/host/", and future
offload drivers which will register to it will be placed under
"drivers/nvme/hw". The new driver is enabled by the Kconfig option
"NVM Express over Fabrics TCP offload common layer". No host-mode
changes are needed in order to support the new transport type.

Each new vendor-specific offload driver will register to this ULP during
its probe function, by filling out the nvme_tcp_ofld_dev->ops and
nvme_tcp_ofld_dev->private_data and calling nvme_tcp_ofld_register_dev
with the initialized struct.
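
For illustration only (not part of the patch): a vendor driver's probe
path would look roughly like the sketch below. The qedn_* names and the
surrounding probe boilerplate are hypothetical; only nvme_tcp_ofld_dev,
nvme_tcp_ofld_ops and nvme_tcp_ofld_register_dev() come from this patch.

	/* Hypothetical vendor registration sketch. */
	static struct nvme_tcp_ofld_ops qedn_ofld_ops = {
		.name		= "qedn",		/* hypothetical */
		.module		= THIS_MODULE,
		.claim_dev	= qedn_claim_dev,	/* all callbacks hypothetical */
		.create_queue	= qedn_create_queue,
		.drain_queue	= qedn_drain_queue,
		.destroy_queue	= qedn_destroy_queue,
		.poll_queue	= qedn_poll_queue,
		.init_req	= qedn_init_req,
		.send_req	= qedn_send_req,
		.map_sg		= qedn_map_sg,
	};

	static int qedn_probe_one(void)
	{
		struct nvme_tcp_ofld_dev *dev;

		dev = kzalloc(sizeof(*dev), GFP_KERNEL);
		if (!dev)
			return -ENOMEM;

		dev->ops = &qedn_ofld_ops;

		/* Fails with -EINVAL if any mandatory callback is missing */
		return nvme_tcp_ofld_register_dev(dev);
	}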
The internal implementation:
- tcp-offload.h:
  Includes all common structs and ops to be used and shared by offload
  drivers.
- tcp-offload.c:
  Includes the init function which registers as an NVMf transport just
  like any other transport.

Signed-off-by: Dean Balandin
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/Kconfig       |  16 +++
 drivers/nvme/host/Makefile      |   3 +
 drivers/nvme/host/tcp-offload.c | 124 ++++++++++++++++++++++++
 drivers/nvme/host/tcp-offload.h | 167 ++++++++++++++++++++++++++++++++
 4 files changed, 310 insertions(+)
 create mode 100644 drivers/nvme/host/tcp-offload.c
 create mode 100644 drivers/nvme/host/tcp-offload.h

diff --git a/drivers/nvme/host/Kconfig b/drivers/nvme/host/Kconfig
index a44d49d63968..6e869e94e67f 100644
--- a/drivers/nvme/host/Kconfig
+++ b/drivers/nvme/host/Kconfig
@@ -84,3 +84,19 @@ config NVME_TCP
 	  from https://github.com/linux-nvme/nvme-cli.
 
 	  If unsure, say N.
+
+config NVME_TCP_OFFLOAD
+	tristate "NVM Express over Fabrics TCP offload common layer"
+	default m
+	depends on INET
+	depends on BLK_DEV_NVME
+	select NVME_FABRICS
+	help
+	  This provides support for the NVMe over Fabrics protocol using
+	  the TCP offload transport. This allows you to use remote block
+	  devices exported using the NVMe protocol set.
+
+	  To configure an NVMe over Fabrics controller, use the nvme-cli
+	  tool from https://github.com/linux-nvme/nvme-cli.
+
+	  If unsure, say N.
diff --git a/drivers/nvme/host/Makefile b/drivers/nvme/host/Makefile
index d7f6a87687b8..0e7ef044cf29 100644
--- a/drivers/nvme/host/Makefile
+++ b/drivers/nvme/host/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_NVME_FABRICS) += nvme-fabrics.o
 obj-$(CONFIG_NVME_RDMA) += nvme-rdma.o
 obj-$(CONFIG_NVME_FC) += nvme-fc.o
 obj-$(CONFIG_NVME_TCP) += nvme-tcp.o
+obj-$(CONFIG_NVME_TCP_OFFLOAD) += nvme-tcp-offload.o
 
 nvme-core-y := core.o
 nvme-core-$(CONFIG_TRACING) += trace.o
@@ -26,3 +27,5 @@ nvme-rdma-y += rdma.o
 nvme-fc-y += fc.o
 
 nvme-tcp-y += tcp.o
+
+nvme-tcp-offload-y += tcp-offload.o
diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c
new file mode 100644
index 000000000000..ee3800250e47
--- /dev/null
+++ b/drivers/nvme/host/tcp-offload.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/*
+ * Copyright 2021 Marvell. All rights reserved.
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+/* Kernel includes */
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+/* Driver includes */
+#include "tcp-offload.h"
+
+static LIST_HEAD(nvme_tcp_ofld_devices);
+static DECLARE_RWSEM(nvme_tcp_ofld_devices_rwsem);
+
+/**
+ * nvme_tcp_ofld_register_dev() - NVMeTCP Offload Library registration
+ * function.
+ * @dev:	NVMeTCP offload device instance to be registered to the
+ *		common tcp offload instance.
+ *
+ * API function that registers the type of vendor specific driver
+ * being implemented to the common NVMe over TCP offload library. Part of
+ * the overall init sequence of starting up an offload driver.
+ */
+int nvme_tcp_ofld_register_dev(struct nvme_tcp_ofld_dev *dev)
+{
+	struct nvme_tcp_ofld_ops *ops = dev->ops;
+
+	if (!ops->claim_dev ||
+	    !ops->create_queue ||
+	    !ops->drain_queue ||
+	    !ops->destroy_queue ||
+	    !ops->poll_queue ||
+	    !ops->init_req ||
+	    !ops->map_sg ||
+	    !ops->send_req)
+		return -EINVAL;
+
+	down_write(&nvme_tcp_ofld_devices_rwsem);
+	list_add_tail(&dev->entry, &nvme_tcp_ofld_devices);
+	up_write(&nvme_tcp_ofld_devices_rwsem);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_tcp_ofld_register_dev);
+
+/**
+ * nvme_tcp_ofld_unregister_dev() - NVMeTCP Offload Library unregistration
+ * function.
+ * @dev:	NVMeTCP offload device instance to be unregistered from the
+ *		common tcp offload instance.
+ *
+ * API function that unregisters the type of vendor specific driver being
+ * implemented from the common NVMe over TCP offload library.
+ * Part of the overall exit sequence of unloading the implemented driver.
+ */
+void nvme_tcp_ofld_unregister_dev(struct nvme_tcp_ofld_dev *dev)
+{
+	down_write(&nvme_tcp_ofld_devices_rwsem);
+	list_del(&dev->entry);
+	up_write(&nvme_tcp_ofld_devices_rwsem);
+}
+EXPORT_SYMBOL_GPL(nvme_tcp_ofld_unregister_dev);
+
+/**
+ * nvme_tcp_ofld_report_queue_err() - NVMeTCP Offload report error event
+ * callback function. Pointed to by nvme_tcp_ofld_queue->report_err.
+ * @queue:	NVMeTCP offload queue instance on which the error has
+ *		occurred.
+ *
+ * API function that allows the vendor specific offload driver to report
+ * errors to the common offload layer, to invoke error recovery.
+ */
+int nvme_tcp_ofld_report_queue_err(struct nvme_tcp_ofld_queue *queue)
+{
+	/* Placeholder - invoke error recovery flow */
+
+	return 0;
+}
+
+/**
+ * nvme_tcp_ofld_req_done() - NVMeTCP Offload request done callback
+ * function. Pointed to by nvme_tcp_ofld_req->done.
+ * @req:	NVMeTCP offload request to complete.
+ * @result:	The nvme_result.
+ * @status:	The completion status.
+ *
+ * API function that allows the vendor specific offload driver to report
+ * request completions to the common offload layer.
+ */
+void
+nvme_tcp_ofld_req_done(struct nvme_tcp_ofld_req *req,
+		       union nvme_result *result,
+		       __le16 status)
+{
+	/* Placeholder - complete request with/without error */
+}
+
+static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
+	.name		= "tcp_offload",
+	.module		= THIS_MODULE,
+	.required_opts	= NVMF_OPT_TRADDR,
+	.allowed_opts	= NVMF_OPT_TRSVCID | NVMF_OPT_DISABLE_SQFLOW |
+			  NVMF_OPT_NR_WRITE_QUEUES | NVMF_OPT_HOST_TRADDR |
+			  NVMF_OPT_CTRL_LOSS_TMO | NVMF_OPT_RECONNECT_DELAY |
+			  NVMF_OPT_HDR_DIGEST | NVMF_OPT_DATA_DIGEST |
+			  NVMF_OPT_NR_POLL_QUEUES | NVMF_OPT_TOS,
+};
+
+static int __init nvme_tcp_ofld_init_module(void)
+{
+	return nvmf_register_transport(&nvme_tcp_ofld_transport);
+}
+
+static void __exit nvme_tcp_ofld_cleanup_module(void)
+{
+	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
+}
+
+module_init(nvme_tcp_ofld_init_module);
+module_exit(nvme_tcp_ofld_cleanup_module);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/nvme/host/tcp-offload.h b/drivers/nvme/host/tcp-offload.h
new file mode 100644
index 000000000000..468617a58e34
--- /dev/null
+++ b/drivers/nvme/host/tcp-offload.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/*
+ * Copyright 2021 Marvell. All rights reserved.
+ */
+
+/* Linux includes */
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/blk-mq.h>
+#include <linux/socket.h>
+
+/* Driver includes */
+#include "nvme.h"
+#include "fabrics.h"
+
+/* Forward declarations */
+struct nvme_tcp_ofld_ops;
+
+/* Representation of a vendor-specific device. This is the struct used to
+ * register to the offload layer by the vendor-specific driver during its
+ * probe function.
+ * Allocated by the vendor-specific driver.
+ */
+struct nvme_tcp_ofld_dev {
+	struct list_head entry;
+	struct nvme_tcp_ofld_ops *ops;
+};
+
+/* Per IO struct holding the nvme_request and command
+ * Allocated by blk-mq.
+ */
+struct nvme_tcp_ofld_req {
+	struct nvme_request req;
+	struct nvme_command nvme_cmd;
+	struct nvme_tcp_ofld_queue *queue;
+
+	/* Vendor specific driver context */
+	void *private_data;
+
+	void (*done)(struct nvme_tcp_ofld_req *req,
+		     union nvme_result *result,
+		     __le16 status);
+};
+
+/* Allocated by nvme_tcp_ofld */
+struct nvme_tcp_ofld_queue {
+	/* Offload device associated to this queue */
+	struct nvme_tcp_ofld_dev *dev;
+	struct nvme_tcp_ofld_ctrl *ctrl;
+
+	/* Vendor specific driver context */
+	void *private_data;
+
+	/* Error callback function */
+	int (*report_err)(struct nvme_tcp_ofld_queue *queue);
+};
+
+/* Connectivity (routing) params used for establishing a connection */
+struct nvme_tcp_ofld_ctrl_con_params {
+	/* Input params */
+	struct sockaddr_storage remote_ip_addr;
+
+	/* If NVMF_OPT_HOST_TRADDR is provided it will be set in
+	 * local_ip_addr in nvme_tcp_ofld_create_ctrl().
+	 * If NVMF_OPT_HOST_TRADDR is not provided the local_ip_addr will be
+	 * initialized by claim_dev().
+	 */
+	struct sockaddr_storage local_ip_addr;
+
+	/* Output params */
+	struct sockaddr remote_mac_addr;
+	struct sockaddr local_mac_addr;
+	u16 vlan_id;
+};
+
+/* Allocated by nvme_tcp_ofld */
+struct nvme_tcp_ofld_ctrl {
+	struct nvme_ctrl nctrl;
+	struct nvme_tcp_ofld_dev *dev;
+
+	/* admin and IO queues */
+	struct blk_mq_tag_set tag_set;
+	struct blk_mq_tag_set admin_tag_set;
+	struct nvme_tcp_ofld_queue *queues;
+
+	/* Connectivity params */
+	struct nvme_tcp_ofld_ctrl_con_params conn_params;
+
+	/* Vendor specific driver context */
+	void *private_data;
+};
+
+struct nvme_tcp_ofld_ops {
+	const char *name;
+	struct module *module;
+
+	/* For the vendor-specific driver to report which opts it supports */
+	int required_opts; /* bitmap using enum nvmf_parsing_opts */
+	int allowed_opts; /* bitmap using enum nvmf_parsing_opts */
+
+	/**
+	 * claim_dev: Return true if addr is reachable via the offload
+	 * device.
+	 * @dev: The offload device to check.
+	 * @conn_params: ptr to routing params to be filled by the lower
+	 *		 driver. Input+Output argument.
+	 */
+	int (*claim_dev)(struct nvme_tcp_ofld_dev *dev,
+			 struct nvme_tcp_ofld_ctrl_con_params *conn_params);
+
+	/**
+	 * create_queue: Create offload queue and establish the TCP + NVMeTCP
+	 * (icreq+icresp) connection. Return 0 on successful connection.
+	 * Based on nvme_tcp_alloc_queue.
+	 * @queue: The queue itself - used as input and output.
+	 * @qid: The queue ID associated with the requested queue.
+	 * @q_size: The queue depth.
+	 */
+	int (*create_queue)(struct nvme_tcp_ofld_queue *queue, int qid,
+			    size_t q_size);
+
+	/**
+	 * drain_queue: Drain a given queue - returning from this function
+	 * ensures that no additional completions will arrive on this queue.
+	 * @queue: The queue to drain.
+	 */
+	void (*drain_queue)(struct nvme_tcp_ofld_queue *queue);
+
+	/**
+	 * destroy_queue: Close the TCP + NVMeTCP connection of a given queue
+	 * and make sure it is no longer active (no completions will arrive
+	 * on the queue).
+	 * @queue: The queue to destroy.
+	 */
+	void (*destroy_queue)(struct nvme_tcp_ofld_queue *queue);
+
+	/**
+	 * poll_queue: Poll a given queue for completions.
+	 * @queue: The queue to poll.
+	 */
+	int (*poll_queue)(struct nvme_tcp_ofld_queue *queue);
+
+	/**
+	 * init_req: Initialize vendor-specific params for a new request.
+	 * @req: Ptr to request to be initialized. Input+Output argument.
+	 */
+	int (*init_req)(struct nvme_tcp_ofld_req *req);
+
+	/**
+	 * send_req: Dispatch a request. Returns the execution status.
+	 * @req: Ptr to request to be sent.
+	 */
+	int (*send_req)(struct nvme_tcp_ofld_req *req);
+
+	/**
+	 * map_sg: Map a scatter/gather list to DMA addresses. Returns the
+	 * number of SG entries mapped successfully.
+	 * @dev: The device for which the DMA addresses are to be created.
+	 * @req: The request corresponding to the SGs, to allow the
+	 *	 vendor-specific driver to initialize additional params if
+	 *	 it needs to.
+	 */
+	int (*map_sg)(struct nvme_tcp_ofld_dev *dev,
+		      struct nvme_tcp_ofld_req *req);
+};
+
+/* Exported functions for lower vendor specific offload drivers */
+int nvme_tcp_ofld_register_dev(struct nvme_tcp_ofld_dev *dev);
+void nvme_tcp_ofld_unregister_dev(struct nvme_tcp_ofld_dev *dev);
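
For illustration only (not part of the patch): a vendor map_sg
implementation could be as thin as the sketch below, using the generic
DMA API. The my_ofld_* names and the assumption that the vendor keeps a
struct device pointer and a scatterlist in its private request context
are hypothetical.

	/* Hypothetical map_sg callback sketch. */
	static int my_ofld_map_sg(struct nvme_tcp_ofld_dev *dev,
				  struct nvme_tcp_ofld_req *req)
	{
		struct my_ofld_req *priv = req->private_data; /* hypothetical */
		struct request *rq = blk_mq_rq_from_pdu(req);
		int nents;

		/* Map the block-layer SG list for the HW's DMA engine */
		nents = dma_map_sg(priv->hwdev, priv->sgl, priv->num_sge,
				   rq_dma_dir(rq));

		return nents; /* number of SG entries mapped, 0 on failure */
	}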

From patchwork Sun Feb 7 18:13:15 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073253
From: Shai Malin
Subject: [RFC PATCH v3 02/11] nvme-fabrics: Move NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions
Date: Sun, 7 Feb 2021 20:13:15 +0200
Message-ID: <20210207181324.11429-3-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Arie Gershberg

Move the NVMF_ALLOWED_OPTS and NVMF_REQUIRED_OPTS definitions to the
header file, so they can be used by transport modules.

Signed-off-by: Arie Gershberg
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/fabrics.c | 7 -------
 drivers/nvme/host/fabrics.h | 7 +++++++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 72ac00173500..ccc626cbf6d0 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -1002,13 +1002,6 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts)
 }
 EXPORT_SYMBOL_GPL(nvmf_free_options);
 
-#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
-#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
-				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
-				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
-				 NVMF_OPT_DISABLE_SQFLOW |\
-				 NVMF_OPT_FAIL_FAST_TMO)
-
 static struct nvme_ctrl *
 nvmf_create_ctrl(struct device *dev, const char *buf)
 {
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 733010d2eafd..e44955ed6fa9 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -61,6 +61,13 @@ enum {
 	NVMF_OPT_FAIL_FAST_TMO	= 1 << 20,
 };
 
+#define NVMF_REQUIRED_OPTS	(NVMF_OPT_TRANSPORT | NVMF_OPT_NQN)
+#define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
+				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
+				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
+				 NVMF_OPT_DISABLE_SQFLOW |\
+				 NVMF_OPT_FAIL_FAST_TMO)
+
 /**
  * struct nvmf_ctrl_options - Used to hold the options specified
  * with the parsing opts enum.
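
For illustration only (not part of the patch): with the definitions
exported, a transport module can build its full option mask from the
shared base. This mirrors the per-vendor option check added later in
this series; the helper name is hypothetical.

	/* Sketch: combining the shared fabrics masks with transport additions. */
	static unsigned int my_transport_opt_mask(struct nvme_tcp_ofld_ops *ops)
	{
		/* Everything the generic fabrics layer accepts... */
		unsigned int mask = NVMF_REQUIRED_OPTS | NVMF_ALLOWED_OPTS;

		/* ...plus whatever this specific transport advertises. */
		mask |= ops->required_opts | ops->allowed_opts;

		return mask;
	}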

From patchwork Sun Feb 7 18:13:16 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073245
From: Shai Malin
Subject: [RFC PATCH v3 03/11] nvme-tcp-offload: Add device scan implementation
Date: Sun, 7 Feb 2021 20:13:16 +0200
Message-ID: <20210207181324.11429-4-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Dean Balandin

This patch implements the create_ctrl skeleton, specifically to
demonstrate how the claim_dev op will be used. The driver scans the
registered devices and calls the claim_dev op on each of them, to find
the first device that matches the connection params. Once the correct
device is found (claim_dev returns true), we raise the refcnt of that
device and return it as the device to be used for the ctrl currently
being created.
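
For illustration only (not part of the patch): a vendor claim_dev would
typically resolve a route for the destination address and accept the
controller only if the egress netdev belongs to the offload HW. A rough
IPv4-only sketch, where my_dev_holds_netdev() is a hypothetical helper:

	/* Hypothetical IPv4-only claim_dev sketch based on a route lookup. */
	static int my_claim_dev(struct nvme_tcp_ofld_dev *dev,
				struct nvme_tcp_ofld_ctrl_con_params *conn_params)
	{
		struct sockaddr_in *sin =
			(struct sockaddr_in *)&conn_params->remote_ip_addr;
		struct rtable *rt;
		int claimed;

		rt = ip_route_output(&init_net, sin->sin_addr.s_addr, 0, 0, 0);
		if (IS_ERR(rt))
			return 0;

		/* Claim only if the route egresses through this vendor's netdev */
		claimed = my_dev_holds_netdev(dev, rt->dst.dev); /* hypothetical */
		ip_rt_put(rt);

		return claimed;
	}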
Signed-off-by: Dean Balandin
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/tcp-offload.c | 82 +++++++++++++++++++++++++++++++++
 1 file changed, 82 insertions(+)

diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c
index ee3800250e47..49e11638b4fd 100644
--- a/drivers/nvme/host/tcp-offload.c
+++ b/drivers/nvme/host/tcp-offload.c
@@ -13,6 +13,12 @@
 static LIST_HEAD(nvme_tcp_ofld_devices);
 static DECLARE_RWSEM(nvme_tcp_ofld_devices_rwsem);
 
+static inline struct nvme_tcp_ofld_ctrl *
+to_tcp_ofld_ctrl(struct nvme_ctrl *nctrl)
+{
+	return container_of(nctrl, struct nvme_tcp_ofld_ctrl, nctrl);
+}
+
 /**
  * nvme_tcp_ofld_register_dev() - NVMeTCP Offload Library registration
  * function.
@@ -96,6 +102,81 @@ nvme_tcp_ofld_req_done(struct nvme_tcp_ofld_req *req,
 	/* Placeholder - complete request with/without error */
 }
 
+struct nvme_tcp_ofld_dev *
+nvme_tcp_ofld_lookup_dev(struct nvme_tcp_ofld_ctrl *ctrl)
+{
+	struct nvme_tcp_ofld_dev *dev;
+
+	down_read(&nvme_tcp_ofld_devices_rwsem);
+	list_for_each_entry(dev, &nvme_tcp_ofld_devices, entry) {
+		if (dev->ops->claim_dev(dev, &ctrl->conn_params)) {
+			/* Increase driver refcnt */
+			if (!try_module_get(dev->ops->module)) {
+				pr_err("try_module_get failed\n");
+				dev = NULL;
+			}
+
+			goto out;
+		}
+	}
+
+	dev = NULL;
+out:
+	up_read(&nvme_tcp_ofld_devices_rwsem);
+
+	return dev;
+}
+
+static int nvme_tcp_ofld_setup_ctrl(struct nvme_ctrl *nctrl, bool new)
+{
+	/* Placeholder - validates inputs and creates admin and IO queues */
+
+	return 0;
+}
+
+static struct nvme_ctrl *
+nvme_tcp_ofld_create_ctrl(struct device *ndev, struct nvmf_ctrl_options *opts)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl;
+	struct nvme_tcp_ofld_dev *dev;
+	struct nvme_ctrl *nctrl;
+	int rc = 0;
+
+	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
+	if (!ctrl)
+		return ERR_PTR(-ENOMEM);
+
+	/* Init nvme_tcp_ofld_ctrl and nvme_ctrl params based on received opts */
+
+	/* Find device that can reach the dest addr */
+	dev = nvme_tcp_ofld_lookup_dev(ctrl);
+	if (!dev) {
+		pr_info("no device found for addr %s:%s.\n",
+			opts->traddr, opts->trsvcid);
+		rc = -EINVAL;
+		goto out_free_ctrl;
+	}
+
+	ctrl->dev = dev;
+
+	/* Init queues */
+
+	/* Call nvme_init_ctrl */
+
+	rc = nvme_tcp_ofld_setup_ctrl(nctrl, true);
+	if (rc)
+		goto out_module_put;
+
+	return nctrl;
+
+out_module_put:
+	module_put(dev->ops->module);
+out_free_ctrl:
+	kfree(ctrl);
+
+	return ERR_PTR(rc);
+}
+
 static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
 	.name		= "tcp_offload",
 	.module		= THIS_MODULE,
@@ -105,6 +186,7 @@ static struct nvmf_transport_ops nvme_tcp_ofld_transport = {
 			  NVMF_OPT_CTRL_LOSS_TMO | NVMF_OPT_RECONNECT_DELAY |
 			  NVMF_OPT_HDR_DIGEST | NVMF_OPT_DATA_DIGEST |
 			  NVMF_OPT_NR_POLL_QUEUES | NVMF_OPT_TOS,
+	.create_ctrl	= nvme_tcp_ofld_create_ctrl,
 };
 
 static int __init nvme_tcp_ofld_init_module(void)

From patchwork Sun Feb 7 18:13:17 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073255
From: Shai Malin
Subject: [RFC PATCH v3 04/11] nvme-tcp-offload: Add controller level implementation
Date: Sun, 7 Feb 2021 20:13:17 +0200
Message-ID: <20210207181324.11429-5-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Arie Gershberg

In this patch we implement controller level functionality including:
- create_ctrl.
- delete_ctrl.
- free_ctrl.

The implementation is similar to other nvme fabrics modules, the main
difference being that the nvme-tcp-offload ULP calls the vendor specific
claim_dev() op with the given TCP/IP parameters to determine which device
will be used for this controller. Once found, the vendor specific device
and controller will be paired and kept in a controller list managed by
the ULP.
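
For illustration only (not part of the patch): once this patch is in
place, controller creation follows the usual fabrics path, so a connect
attempt through nvme-cli would look roughly as below. The transport name
"tcp_offload" comes from this series; the address, port and NQN are
placeholders.

	nvme connect --transport=tcp_offload \
	             --traddr=10.0.0.2 \
	             --trsvcid=4420 \
	             --nqn=nqn.2021-02.example:subsystem1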
Signed-off-by: Arie Gershberg
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/tcp-offload.c | 458 +++++++++++++++++++++++++++++++-
 drivers/nvme/host/tcp-offload.h |   1 +
 2 files changed, 453 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c
index 49e11638b4fd..da934c6fed9e 100644
--- a/drivers/nvme/host/tcp-offload.c
+++ b/drivers/nvme/host/tcp-offload.c
@@ -12,6 +12,10 @@
 
 static LIST_HEAD(nvme_tcp_ofld_devices);
 static DECLARE_RWSEM(nvme_tcp_ofld_devices_rwsem);
+static LIST_HEAD(nvme_tcp_ofld_ctrl_list);
+static DECLARE_RWSEM(nvme_tcp_ofld_ctrl_rwsem);
+static struct blk_mq_ops nvme_tcp_ofld_admin_mq_ops;
+static struct blk_mq_ops nvme_tcp_ofld_mq_ops;
 
 static inline struct nvme_tcp_ofld_ctrl *
 to_tcp_ofld_ctrl(struct nvme_ctrl *nctrl)
@@ -127,26 +131,423 @@ nvme_tcp_ofld_lookup_dev(struct nvme_tcp_ofld_ctrl *ctrl)
 	return dev;
 }
 
+static struct blk_mq_tag_set *
+nvme_tcp_ofld_alloc_tagset(struct nvme_ctrl *nctrl, bool admin)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct blk_mq_tag_set *set;
+	int rc;
+
+	if (admin) {
+		set = &ctrl->admin_tag_set;
+		memset(set, 0, sizeof(*set));
+		set->ops = &nvme_tcp_ofld_admin_mq_ops;
+		set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
+		set->reserved_tags = 2; /* connect + keep-alive */
+		set->numa_node = nctrl->numa_node;
+		set->flags = BLK_MQ_F_BLOCKING;
+		set->cmd_size = sizeof(struct nvme_tcp_ofld_req);
+		set->driver_data = ctrl;
+		set->nr_hw_queues = 1;
+		set->timeout = NVME_ADMIN_TIMEOUT;
+	} else {
+		set = &ctrl->tag_set;
+		memset(set, 0, sizeof(*set));
+		set->ops = &nvme_tcp_ofld_mq_ops;
+		set->queue_depth = nctrl->sqsize + 1;
+		set->reserved_tags = 1; /* fabric connect */
+		set->numa_node = nctrl->numa_node;
+		set->flags = BLK_MQ_F_SHOULD_MERGE | BLK_MQ_F_BLOCKING;
+		set->cmd_size = sizeof(struct nvme_tcp_ofld_req);
+		set->driver_data = ctrl;
+		set->nr_hw_queues = nctrl->queue_count - 1;
+		set->timeout = NVME_IO_TIMEOUT;
+		set->nr_maps = nctrl->opts->nr_poll_queues ?
+			HCTX_MAX_TYPES : 2;
+	}
+
+	rc = blk_mq_alloc_tag_set(set);
+	if (rc)
+		return ERR_PTR(rc);
+
+	return set;
+}
+
+static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl,
+					       bool new)
+{
+	int rc;
+
+	/* Placeholder - alloc_admin_queue */
+	if (new) {
+		nctrl->admin_tagset = nvme_tcp_ofld_alloc_tagset(nctrl, true);
+		if (IS_ERR(nctrl->admin_tagset)) {
+			rc = PTR_ERR(nctrl->admin_tagset);
+			nctrl->admin_tagset = NULL;
+			goto out_free_queue;
+		}
+
+		nctrl->fabrics_q = blk_mq_init_queue(nctrl->admin_tagset);
+		if (IS_ERR(nctrl->fabrics_q)) {
+			rc = PTR_ERR(nctrl->fabrics_q);
+			nctrl->fabrics_q = NULL;
+			goto out_free_tagset;
+		}
+
+		nctrl->admin_q = blk_mq_init_queue(nctrl->admin_tagset);
+		if (IS_ERR(nctrl->admin_q)) {
+			rc = PTR_ERR(nctrl->admin_q);
+			nctrl->admin_q = NULL;
+			goto out_cleanup_fabrics_q;
+		}
+	}
+
+	/* Placeholder - nvme_tcp_ofld_start_queue */
+
+	rc = nvme_enable_ctrl(nctrl);
+	if (rc)
+		goto out_stop_queue;
+
+	blk_mq_unquiesce_queue(nctrl->admin_q);
+
+	rc = nvme_init_identify(nctrl);
+	if (rc)
+		goto out_stop_queue;
+
+	return 0;
+
+out_stop_queue:
+	/* Placeholder - stop offload queue */
+
+out_cleanup_fabrics_q:
+	if (new)
+		blk_cleanup_queue(nctrl->fabrics_q);
+out_free_tagset:
+	if (new)
+		blk_mq_free_tag_set(nctrl->admin_tagset);
+out_free_queue:
+	/* Placeholder - free admin queue */
+
+	return rc;
+}
+
+static int
+nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new)
+{
+	int rc;
+
+	/* Placeholder - alloc_io_queues */
+
+	if (new) {
+		nctrl->tagset = nvme_tcp_ofld_alloc_tagset(nctrl, false);
+		if (IS_ERR(nctrl->tagset)) {
+			rc = PTR_ERR(nctrl->tagset);
+			nctrl->tagset = NULL;
+			goto out_free_io_queues;
+		}
+
+		nctrl->connect_q = blk_mq_init_queue(nctrl->tagset);
+		if (IS_ERR(nctrl->connect_q)) {
+			rc = PTR_ERR(nctrl->connect_q);
+			nctrl->connect_q = NULL;
+			goto out_free_tag_set;
+		}
+	}
+
+	/* Placeholder - start_io_queues */
+
+	if (!new) {
+		nvme_start_queues(nctrl);
+		if (!nvme_wait_freeze_timeout(nctrl, NVME_IO_TIMEOUT)) {
+			/*
+			 * If we timed out waiting for freeze we are likely to
+			 * be stuck.  Fail the controller initialization just
+			 * to be safe.
+			 */
+			rc = -ENODEV;
+			goto out_wait_freeze_timed_out;
+		}
+		blk_mq_update_nr_hw_queues(nctrl->tagset,
+					   nctrl->queue_count - 1);
+		nvme_unfreeze(nctrl);
+	}
+
+	return 0;
+
+out_wait_freeze_timed_out:
+	nvme_stop_queues(nctrl);
+
+	/* Placeholder - Stop IO queues */
+
+	if (new)
+		blk_cleanup_queue(nctrl->connect_q);
+out_free_tag_set:
+	if (new)
+		blk_mq_free_tag_set(nctrl->tagset);
+
+out_free_io_queues:
+	/* Placeholder - free_io_queues */
+
+	return rc;
+}
+
 static int nvme_tcp_ofld_setup_ctrl(struct nvme_ctrl *nctrl, bool new)
 {
-	/* Placeholder - validates inputs and creates admin and IO queues */
+	struct nvmf_ctrl_options *opts = nctrl->opts;
+	int rc;
+
+	rc = nvme_tcp_ofld_configure_admin_queue(nctrl, new);
+	if (rc)
+		goto destroy_admin;
+
+	if (nctrl->icdoff) {
+		dev_err(nctrl->device, "icdoff is not supported!\n");
+		rc = -EINVAL;
+		goto destroy_admin;
+	}
+
+	if (opts->queue_size > nctrl->sqsize + 1)
+		dev_warn(nctrl->device,
+			 "queue_size %zu > ctrl sqsize %u, clamping down\n",
+			 opts->queue_size, nctrl->sqsize + 1);
+
+	if (nctrl->sqsize + 1 > nctrl->maxcmd) {
+		dev_warn(nctrl->device,
+			 "sqsize %u > ctrl maxcmd %u, clamping down\n",
+			 nctrl->sqsize + 1, nctrl->maxcmd);
+		nctrl->sqsize = nctrl->maxcmd - 1;
+	}
+
+	if (nctrl->queue_count > 1) {
+		rc = nvme_tcp_ofld_configure_io_queues(nctrl, new);
+		if (rc)
+			goto destroy_admin;
+	}
+
+	if (!nvme_change_ctrl_state(nctrl, NVME_CTRL_LIVE)) {
+		/*
+		 * state change failure is ok if we started ctrl delete,
+		 * unless we're during creation of a new controller to
+		 * avoid races with teardown flow.
+		 */
+		WARN_ON_ONCE(nctrl->state != NVME_CTRL_DELETING &&
+			     nctrl->state != NVME_CTRL_DELETING_NOIO);
+		WARN_ON_ONCE(new);
+		rc = -EINVAL;
+		goto destroy_io;
+	}
+
+	nvme_start_ctrl(nctrl);
+
+	return 0;
+
+destroy_io:
+	/* Placeholder - stop and destroy io queues */
+destroy_admin:
+	/* Placeholder - stop and destroy admin queue */
 
 	return rc;
 }
 
+static int
+nvme_tcp_ofld_check_dev_opts(struct nvmf_ctrl_options *opts,
+			     struct nvme_tcp_ofld_ops *ofld_ops)
+{
+	unsigned int nvme_tcp_ofld_opt_mask = NVMF_ALLOWED_OPTS |
+			ofld_ops->allowed_opts | ofld_ops->required_opts;
+
+	if (opts->mask & ~nvme_tcp_ofld_opt_mask) {
+		pr_warn("One or more nvmf options are not supported by %s.\n",
+			ofld_ops->name);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void nvme_tcp_ofld_free_ctrl(struct nvme_ctrl *nctrl)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct nvme_tcp_ofld_dev *dev = ctrl->dev;
+
+	if (list_empty(&ctrl->list))
+		goto free_ctrl;
+
+	down_write(&nvme_tcp_ofld_ctrl_rwsem);
+	list_del(&ctrl->list);
+	up_write(&nvme_tcp_ofld_ctrl_rwsem);
+
+	nvmf_free_options(nctrl->opts);
+free_ctrl:
+	module_put(dev->ops->module);
+	kfree(ctrl->queues);
+	kfree(ctrl);
+}
+
+static void nvme_tcp_ofld_submit_async_event(struct nvme_ctrl *arg)
+{
+	/* Placeholder - submit_async_event */
+}
+
+static void
+nvme_tcp_ofld_teardown_admin_queue(struct nvme_ctrl *ctrl, bool remove)
+{
+	/* Placeholder - teardown_admin_queue */
+}
+
+static void
+nvme_tcp_ofld_teardown_io_queues(struct nvme_ctrl *nctrl, bool remove)
+{
+	/* Placeholder - teardown_io_queues */
+}
+
+static void
+nvme_tcp_ofld_teardown_ctrl(struct nvme_ctrl *nctrl, bool shutdown)
+{
+	/* Placeholder - err_work and connect_work */
+	nvme_tcp_ofld_teardown_io_queues(nctrl, shutdown);
+	blk_mq_quiesce_queue(nctrl->admin_q);
+	if (shutdown)
+		nvme_shutdown_ctrl(nctrl);
+	else
+		nvme_disable_ctrl(nctrl);
+	nvme_tcp_ofld_teardown_admin_queue(nctrl, shutdown);
+}
+
+static void
+nvme_tcp_ofld_delete_ctrl(struct nvme_ctrl *nctrl)
+{
+	nvme_tcp_ofld_teardown_ctrl(nctrl, true);
+}
+
+static int
+nvme_tcp_ofld_init_request(struct blk_mq_tag_set *set,
+			   struct request *rq,
+			   unsigned int hctx_idx,
+			   unsigned int numa_node)
+{
+	struct nvme_tcp_ofld_req *req = blk_mq_rq_to_pdu(rq);
+	struct nvme_tcp_ofld_ctrl *ctrl = set->driver_data;
+
+	/* Placeholder - init request */
+
+	req->done = nvme_tcp_ofld_req_done;
+	ctrl->dev->ops->init_req(req);
+
+	return 0;
+}
+
+static blk_status_t
+nvme_tcp_ofld_queue_rq(struct blk_mq_hw_ctx *hctx,
+		       const struct blk_mq_queue_data *bd)
+{
+	/* Call nvme_setup_cmd(...) */
+
+	/* Call ops->map_sg(...) */
+
+	return BLK_STS_OK;
+}
+
+static struct blk_mq_ops nvme_tcp_ofld_mq_ops = {
+	.queue_rq	= nvme_tcp_ofld_queue_rq,
+	.init_request	= nvme_tcp_ofld_init_request,
+	/*
+	 * All additional ops will also be implemented and registered
+	 * similarly to tcp.c
+	 */
+};
+
+static struct blk_mq_ops nvme_tcp_ofld_admin_mq_ops = {
+	.queue_rq	= nvme_tcp_ofld_queue_rq,
+	.init_request	= nvme_tcp_ofld_init_request,
+	/*
+	 * All additional ops will also be implemented and registered
+	 * similarly to tcp.c
+	 */
+};
+
+static const struct nvme_ctrl_ops nvme_tcp_ofld_ctrl_ops = {
+	.name			= "tcp_offload",
+	.module			= THIS_MODULE,
+	.flags			= NVME_F_FABRICS,
+	.reg_read32		= nvmf_reg_read32,
+	.reg_read64		= nvmf_reg_read64,
+	.reg_write32		= nvmf_reg_write32,
+	.free_ctrl		= nvme_tcp_ofld_free_ctrl,
+	.submit_async_event	= nvme_tcp_ofld_submit_async_event,
+	.delete_ctrl		= nvme_tcp_ofld_delete_ctrl,
+	.get_address		= nvmf_get_address,
+};
+
+static bool
+nvme_tcp_ofld_existing_controller(struct nvmf_ctrl_options *opts)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl;
+	bool found = false;
+
+	down_read(&nvme_tcp_ofld_ctrl_rwsem);
+	list_for_each_entry(ctrl, &nvme_tcp_ofld_ctrl_list, list) {
+		found = nvmf_ip_options_match(&ctrl->nctrl, opts);
+		if (found)
+			break;
+	}
+	up_read(&nvme_tcp_ofld_ctrl_rwsem);
+
+	return found;
+}
+
 static struct nvme_ctrl *
 nvme_tcp_ofld_create_ctrl(struct device *ndev, struct nvmf_ctrl_options *opts)
 {
+	struct nvme_tcp_ofld_queue *queue;
 	struct nvme_tcp_ofld_ctrl *ctrl;
 	struct nvme_tcp_ofld_dev *dev;
 	struct nvme_ctrl *nctrl;
-	int rc = 0;
+	int i, rc = 0;
 
 	ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
 	if (!ctrl)
 		return ERR_PTR(-ENOMEM);
 
-	/* Init nvme_tcp_ofld_ctrl and nvme_ctrl params based on received opts */
+	INIT_LIST_HEAD(&ctrl->list);
+	nctrl = &ctrl->nctrl;
+	nctrl->opts = opts;
+	nctrl->queue_count = opts->nr_io_queues + opts->nr_write_queues +
+			     opts->nr_poll_queues + 1;
+	nctrl->sqsize = opts->queue_size - 1;
+	nctrl->kato = opts->kato;
+	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
+		opts->trsvcid =
+			kstrdup(__stringify(NVME_TCP_DISC_PORT), GFP_KERNEL);
+		if (!opts->trsvcid) {
+			rc = -ENOMEM;
+			goto out_free_ctrl;
+		}
+		opts->mask |= NVMF_OPT_TRSVCID;
+	}
+
+	rc = inet_pton_with_scope(&init_net, AF_UNSPEC, opts->traddr,
+				  opts->trsvcid,
+				  &ctrl->conn_params.remote_ip_addr);
+	if (rc) {
+		pr_err("malformed address passed: %s:%s\n",
+		       opts->traddr, opts->trsvcid);
+		goto out_free_ctrl;
+	}
+
+	if (opts->mask & NVMF_OPT_HOST_TRADDR) {
+		rc = inet_pton_with_scope(&init_net, AF_UNSPEC,
+					  opts->host_traddr, NULL,
+					  &ctrl->conn_params.local_ip_addr);
+		if (rc) {
+			pr_err("malformed src address passed: %s\n",
+			       opts->host_traddr);
+			goto out_free_ctrl;
+		}
+	}
+
+	if (!opts->duplicate_connect &&
+	    nvme_tcp_ofld_existing_controller(opts)) {
+		rc = -EALREADY;
+		goto out_free_ctrl;
+	}
 
 	/* Find device that can reach the dest addr */
 	dev = nvme_tcp_ofld_lookup_dev(ctrl);
@@ -157,18 +558,55 @@ nvme_tcp_ofld_create_ctrl(struct device *ndev, struct nvmf_ctrl_options *opts)
 		goto out_free_ctrl;
 	}
 
+	rc = nvme_tcp_ofld_check_dev_opts(opts, dev->ops);
+	if (rc)
+		goto out_module_put;
+
 	ctrl->dev = dev;
 
-	/* Init queues */
+	ctrl->queues = kcalloc(nctrl->queue_count,
+			       sizeof(struct nvme_tcp_ofld_queue),
+			       GFP_KERNEL);
+	if (!ctrl->queues) {
+		rc = -ENOMEM;
+		goto out_module_put;
+	}
+
+	for (i = 0; i < nctrl->queue_count; ++i) {
+		queue = &ctrl->queues[i];
+		queue->ctrl = ctrl;
+		queue->dev = dev;
+		queue->report_err = nvme_tcp_ofld_report_queue_err;
+	}
 
-	/* Call nvme_init_ctrl */
+	rc = nvme_init_ctrl(nctrl, ndev, &nvme_tcp_ofld_ctrl_ops, 0);
+	if (rc)
+		goto out_free_queues;
+
+	if (!nvme_change_ctrl_state(nctrl, NVME_CTRL_CONNECTING)) {
+		WARN_ON_ONCE(1);
+		rc = -EINTR;
+		goto out_uninit_ctrl;
+	}
 
 	rc = nvme_tcp_ofld_setup_ctrl(nctrl, true);
 	if (rc)
-		goto out_module_put;
+		goto out_uninit_ctrl;
+
+	dev_info(nctrl->device, "new ctrl: NQN \"%s\", addr %pISp\n",
+		 opts->subsysnqn, &ctrl->conn_params.remote_ip_addr);
+
+	down_write(&nvme_tcp_ofld_ctrl_rwsem);
+	list_add_tail(&ctrl->list, &nvme_tcp_ofld_ctrl_list);
+	up_write(&nvme_tcp_ofld_ctrl_rwsem);
 
 	return nctrl;
 
+out_uninit_ctrl:
+	nvme_uninit_ctrl(nctrl);
+	nvme_put_ctrl(nctrl);
+out_free_queues:
+	kfree(ctrl->queues);
 out_module_put:
 	module_put(dev->ops->module);
 out_free_ctrl:
@@ -198,7 +636,15 @@ static int __init nvme_tcp_ofld_init_module(void)
 
 static void __exit nvme_tcp_ofld_cleanup_module(void)
 {
+	struct nvme_tcp_ofld_ctrl *ctrl;
+
 	nvmf_unregister_transport(&nvme_tcp_ofld_transport);
+
+	down_write(&nvme_tcp_ofld_ctrl_rwsem);
+	list_for_each_entry(ctrl, &nvme_tcp_ofld_ctrl_list, list)
+		nvme_delete_ctrl(&ctrl->nctrl);
+	up_write(&nvme_tcp_ofld_ctrl_rwsem);
+
+	flush_workqueue(nvme_delete_wq);
 }
 
 module_init(nvme_tcp_ofld_init_module);
diff --git a/drivers/nvme/host/tcp-offload.h b/drivers/nvme/host/tcp-offload.h
index 468617a58e34..a9bcd4f7d150 100644
--- a/drivers/nvme/host/tcp-offload.h
+++ b/drivers/nvme/host/tcp-offload.h
@@ -76,6 +76,7 @@ struct nvme_tcp_ofld_ctrl_con_params {
 /* Allocated by nvme_tcp_ofld */
 struct nvme_tcp_ofld_ctrl {
 	struct nvme_ctrl nctrl;
+	struct list_head list;
 	struct nvme_tcp_ofld_dev *dev;
 
 	/* admin and IO queues */

From patchwork Sun Feb 7 18:13:18 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073249
From: Shai Malin
Subject: [RFC PATCH v3 05/11] nvme-tcp-offload: Add controller level error recovery implementation
Date: Sun, 7 Feb 2021 20:13:18 +0200
Message-ID: <20210207181324.11429-6-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Arie Gershberg

In this patch, we implement controller level error handling and
recovery. Upon an error discovered by the ULP or a controller reset
initiated by nvme-core (using the reset_ctrl workqueue), the ULP will
initiate a controller recovery which includes teardown and re-connect
of all queues.

Signed-off-by: Arie Gershberg
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/tcp-offload.c | 121 +++++++++++++++++++++++++++++++-
 drivers/nvme/host/tcp-offload.h |  10 +++
 2 files changed, 130 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c
index da934c6fed9e..e3759c281927 100644
--- a/drivers/nvme/host/tcp-offload.c
+++ b/drivers/nvme/host/tcp-offload.c
@@ -73,6 +73,23 @@ void nvme_tcp_ofld_unregister_dev(struct nvme_tcp_ofld_dev *dev)
 }
 EXPORT_SYMBOL_GPL(nvme_tcp_ofld_unregister_dev);
 
+/**
+ * nvme_tcp_ofld_error_recovery() - NVMeTCP Offload Library error recovery
+ * function.
+ * @nctrl:	NVMe controller instance to change to resetting.
+ *
+ * API function that changes the controller state to resetting.
+ * Part of the overall controller reset sequence.
+ */
+void nvme_tcp_ofld_error_recovery(struct nvme_ctrl *nctrl)
+{
+	if (!nvme_change_ctrl_state(nctrl, NVME_CTRL_RESETTING))
+		return;
+
+	queue_work(nvme_reset_wq, &to_tcp_ofld_ctrl(nctrl)->err_work);
+}
+EXPORT_SYMBOL_GPL(nvme_tcp_ofld_error_recovery);
+
 /**
  * nvme_tcp_ofld_report_queue_err() - NVMeTCP Offload report error event
  * callback function. Pointed to by nvme_tcp_ofld_queue->report_err.
@@ -291,6 +308,27 @@ nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new)
 	return rc;
 }
 
+static void nvme_tcp_ofld_reconnect_or_remove(struct nvme_ctrl *nctrl)
+{
+	/* If we are resetting/deleting then do nothing */
+	if (nctrl->state != NVME_CTRL_CONNECTING) {
+		WARN_ON_ONCE(nctrl->state == NVME_CTRL_NEW ||
+			     nctrl->state == NVME_CTRL_LIVE);
+
+		return;
+	}
+
+	if (nvmf_should_reconnect(nctrl)) {
+		dev_info(nctrl->device, "Reconnecting in %d seconds...\n",
+			 nctrl->opts->reconnect_delay);
+		queue_delayed_work(nvme_wq,
+				   &to_tcp_ofld_ctrl(nctrl)->connect_work,
+				   nctrl->opts->reconnect_delay * HZ);
+	} else {
+		dev_info(nctrl->device, "Removing controller...\n");
+		nvme_delete_ctrl(nctrl);
+	}
+}
+
 static int nvme_tcp_ofld_setup_ctrl(struct nvme_ctrl *nctrl, bool new)
 {
 	struct nvmf_ctrl_options *opts = nctrl->opts;
@@ -399,10 +437,62 @@ nvme_tcp_ofld_teardown_io_queues(struct nvme_ctrl *nctrl, bool remove)
 	/* Placeholder - teardown_io_queues */
 }
 
+static void nvme_tcp_ofld_reconnect_ctrl_work(struct work_struct *work)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl =
+				container_of(to_delayed_work(work),
+					     struct nvme_tcp_ofld_ctrl,
+					     connect_work);
+	struct nvme_ctrl *nctrl = &ctrl->nctrl;
+
+	++nctrl->nr_reconnects;
+
+	if (nvme_tcp_ofld_setup_ctrl(nctrl, false))
+		goto requeue;
+
+	dev_info(nctrl->device, "Successfully reconnected (%d attempt)\n",
+		 nctrl->nr_reconnects);
+
+	nctrl->nr_reconnects = 0;
+
+	return;
+
+requeue:
+	dev_info(nctrl->device, "Failed reconnect attempt %d\n",
+		 nctrl->nr_reconnects);
+	nvme_tcp_ofld_reconnect_or_remove(nctrl);
+}
+
+static void nvme_tcp_ofld_error_recovery_work(struct work_struct *work)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl =
+		container_of(work, struct nvme_tcp_ofld_ctrl, err_work);
+	struct nvme_ctrl *nctrl = &ctrl->nctrl;
+
+	nvme_stop_keep_alive(nctrl);
+	nvme_tcp_ofld_teardown_io_queues(nctrl, false);
+	/* unquiesce to fail fast pending requests */
+	nvme_start_queues(nctrl);
+	nvme_tcp_ofld_teardown_admin_queue(nctrl, false);
+	blk_mq_unquiesce_queue(nctrl->admin_q);
+
+	if (!nvme_change_ctrl_state(nctrl, NVME_CTRL_CONNECTING)) {
+		/* state change failure is ok if we started nctrl delete */
+		WARN_ON_ONCE(nctrl->state != NVME_CTRL_DELETING &&
+			     nctrl->state != NVME_CTRL_DELETING_NOIO);
+
+		return;
+	}
+
+	nvme_tcp_ofld_reconnect_or_remove(nctrl);
+}
+
 static void
 nvme_tcp_ofld_teardown_ctrl(struct nvme_ctrl *nctrl, bool shutdown)
 {
-	/* Placeholder - err_work and connect_work */
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+
+	cancel_work_sync(&ctrl->err_work);
+	cancel_delayed_work_sync(&ctrl->connect_work);
 	nvme_tcp_ofld_teardown_io_queues(nctrl, shutdown);
 	blk_mq_quiesce_queue(nctrl->admin_q);
 	if (shutdown)
@@ -417,6 +507,31 @@ static void nvme_tcp_ofld_delete_ctrl(struct nvme_ctrl *nctrl)
 	nvme_tcp_ofld_teardown_ctrl(nctrl, true);
 }
 
+static void nvme_tcp_ofld_reset_ctrl_work(struct work_struct *work)
+{
+	struct nvme_ctrl *nctrl =
+		container_of(work, struct nvme_ctrl, reset_work);
+
+	nvme_stop_ctrl(nctrl);
+	nvme_tcp_ofld_teardown_ctrl(nctrl, false);
+
+	if (!nvme_change_ctrl_state(nctrl, NVME_CTRL_CONNECTING)) {
+		/* state change failure is ok if we started ctrl delete */
+		WARN_ON_ONCE(nctrl->state != NVME_CTRL_DELETING &&
+			     nctrl->state != NVME_CTRL_DELETING_NOIO);
+
+		return;
+	}
+
+	if (nvme_tcp_ofld_setup_ctrl(nctrl, false))
+		goto out_fail;
+
+	return;
+
+out_fail:
+	++nctrl->nr_reconnects;
+	nvme_tcp_ofld_reconnect_or_remove(nctrl);
+}
+
 static int
 nvme_tcp_ofld_init_request(struct blk_mq_tag_set *set,
 			   struct request *rq,
@@ -513,6 +628,10 @@ nvme_tcp_ofld_create_ctrl(struct device *ndev, struct nvmf_ctrl_options *opts)
 			     opts->nr_poll_queues + 1;
 	nctrl->sqsize = opts->queue_size - 1;
 	nctrl->kato = opts->kato;
+	INIT_DELAYED_WORK(&ctrl->connect_work,
+			  nvme_tcp_ofld_reconnect_ctrl_work);
+	INIT_WORK(&ctrl->err_work, nvme_tcp_ofld_error_recovery_work);
+	INIT_WORK(&nctrl->reset_work, nvme_tcp_ofld_reset_ctrl_work);
 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =
 			kstrdup(__stringify(NVME_TCP_DISC_PORT), GFP_KERNEL);
diff --git a/drivers/nvme/host/tcp-offload.h b/drivers/nvme/host/tcp-offload.h
index a9bcd4f7d150..0fb212f9193a 100644
--- a/drivers/nvme/host/tcp-offload.h
+++ b/drivers/nvme/host/tcp-offload.h
@@ -84,6 +84,15 @@ struct nvme_tcp_ofld_ctrl {
 	struct blk_mq_tag_set admin_tag_set;
 	struct nvme_tcp_ofld_queue *queues;
 
+	struct work_struct err_work;
+	struct delayed_work connect_work;
+
+	/*
+	 * Each entry in the array indicates the number of queues of the
+	 * corresponding type.
+	 */
+	u32 queue_type_mapping[HCTX_MAX_TYPES];
+
 	/* Connectivity params */
 	struct nvme_tcp_ofld_ctrl_con_params conn_params;
 
@@ -166,3 +175,4 @@ struct nvme_tcp_ofld_ops {
 /* Exported functions for lower vendor specific offload drivers */
 int nvme_tcp_ofld_register_dev(struct nvme_tcp_ofld_dev *dev);
 void nvme_tcp_ofld_unregister_dev(struct nvme_tcp_ofld_dev *dev);
+void nvme_tcp_ofld_error_recovery(struct nvme_ctrl *nctrl);
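
For illustration only (not part of the patch): a vendor driver's HW
event handler can funnel connection-level failures into the newly
exported helper. Everything except nvme_tcp_ofld_error_recovery() and
the queue/ctrl fields is hypothetical.

	/* Hypothetical vendor event handler kicking off ULP error recovery. */
	static void my_ofld_handle_conn_error(struct nvme_tcp_ofld_queue *queue)
	{
		/* Vendor-private cleanup of the failed HW connection goes here */

		/* Let the common layer tear down and reconnect all queues */
		nvme_tcp_ofld_error_recovery(&queue->ctrl->nctrl);
	}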

From patchwork Sun Feb 7 18:13:19 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073259
From: Shai Malin
Subject: [RFC PATCH v3 06/11] nvme-tcp-offload: Add queue level implementation
Date: Sun, 7 Feb 2021 20:13:19 +0200
Message-ID: <20210207181324.11429-7-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

From: Dean Balandin

In this patch we implement queue level functionality. The implementation
is similar to the nvme-tcp module, the main difference being that we call
the vendor specific create_queue op which creates the TCP connection and
the NVMeTCP connection, including the icreq+icresp negotiation. Once
create_queue returns successfully, we can move on to the fabrics connect.
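
For illustration only (not part of the patch): the create_queue contract
is therefore to establish the TCP connection, complete the icreq/icresp
exchange, stash the vendor context in queue->private_data, and return 0
so the ULP can proceed to the fabrics connect. A hypothetical skeleton
(all my_* helpers are placeholders):

	/* Hypothetical vendor create_queue skeleton. */
	static int my_create_queue(struct nvme_tcp_ofld_queue *queue, int qid,
				   size_t q_size)
	{
		struct my_conn *conn;
		int rc;

		/* Offloaded TCP connect to the address claim_dev resolved */
		conn = my_tcp_connect(queue->dev,
				      &queue->ctrl->conn_params, q_size);
		if (IS_ERR(conn))
			return PTR_ERR(conn);

		/* NVMeTCP initialization: send icreq, wait for icresp */
		rc = my_icreq_icresp(conn);
		if (rc) {
			my_tcp_disconnect(conn);
			return rc;
		}

		queue->private_data = conn;

		return 0; /* ULP will now issue the fabrics connect */
	}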
Signed-off-by: Dean Balandin
Signed-off-by: Michal Kalderon
Signed-off-by: Ariel Elior
Signed-off-by: Shai Malin
---
 drivers/nvme/host/tcp-offload.c | 322 ++++++++++++++++++++++++++++++--
 drivers/nvme/host/tcp-offload.h |   7 +
 2 files changed, 310 insertions(+), 19 deletions(-)

diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c
index e3759c281927..f1023f2eae8d 100644
--- a/drivers/nvme/host/tcp-offload.c
+++ b/drivers/nvme/host/tcp-offload.c
@@ -23,6 +23,11 @@ to_tcp_ofld_ctrl(struct nvme_ctrl *nctrl)
 	return container_of(nctrl, struct nvme_tcp_ofld_ctrl, nctrl);
 }
 
+static inline int nvme_tcp_ofld_qid(struct nvme_tcp_ofld_queue *queue)
+{
+	return queue - queue->ctrl->queues;
+}
+
 /**
  * nvme_tcp_ofld_register_dev() - NVMeTCP Offload Library registration
  * function.
@@ -190,12 +195,97 @@ nvme_tcp_ofld_alloc_tagset(struct nvme_ctrl *nctrl, bool admin)
 	return set;
 }
 
+static bool nvme_tcp_ofld_poll_queue(struct nvme_tcp_ofld_queue *queue)
+{
+	/* Placeholder - implement logic to determine if poll queue */
+
+	return false;
+}
+
+static void __nvme_tcp_ofld_stop_queue(struct nvme_tcp_ofld_queue *queue)
+{
+	queue->dev->ops->drain_queue(queue);
+	queue->dev->ops->destroy_queue(queue);
+
+	/* Placeholder - additional cleanup such as cancel_work_sync io_work */
+	clear_bit(NVME_TCP_OFLD_Q_LIVE, &queue->flags);
+}
+
+static void nvme_tcp_ofld_stop_queue(struct nvme_ctrl *nctrl, int qid)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct nvme_tcp_ofld_queue *queue = &ctrl->queues[qid];
+
+	if (!test_and_clear_bit(NVME_TCP_OFLD_Q_LIVE, &queue->flags))
+		return;
+
+	__nvme_tcp_ofld_stop_queue(queue);
+}
+
+static void nvme_tcp_ofld_free_queue(struct nvme_ctrl *nctrl, int qid)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct nvme_tcp_ofld_queue *queue = &ctrl->queues[qid];
+
+	if (!test_and_clear_bit(NVME_TCP_OFLD_Q_ALLOCATED, &queue->flags))
+		return;
+
+	/* Placeholder - additional queue cleanup */
+}
+
+static void
+nvme_tcp_ofld_terminate_admin_queue(struct nvme_ctrl *nctrl, bool remove)
+{
+	nvme_tcp_ofld_stop_queue(nctrl, 0);
+	if (remove) {
+		if (nctrl->admin_q && !blk_queue_dead(nctrl->admin_q))
+			blk_cleanup_queue(nctrl->admin_q);
+
+		if (nctrl->fabrics_q)
+			blk_cleanup_queue(nctrl->fabrics_q);
+
+		if (nctrl->admin_tagset)
+			blk_mq_free_tag_set(nctrl->admin_tagset);
+	}
+}
+
+static int nvme_tcp_ofld_start_queue(struct nvme_ctrl *nctrl, int qid)
+{
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct nvme_tcp_ofld_queue *queue = &ctrl->queues[qid];
+	int rc;
+
+	if (qid)
+		rc = nvmf_connect_io_queue(nctrl, qid,
+					   nvme_tcp_ofld_poll_queue(queue));
+	else
+		rc = nvmf_connect_admin_queue(nctrl);
+
+	if (!rc) {
+		set_bit(NVME_TCP_OFLD_Q_LIVE, &queue->flags);
+	} else {
+		if (test_bit(NVME_TCP_OFLD_Q_ALLOCATED, &queue->flags))
+			__nvme_tcp_ofld_stop_queue(queue);
+		dev_err(nctrl->device,
+			"failed to connect queue: %d ret=%d\n", qid, rc);
+	}
+
+	return rc;
+}
+
 static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl,
 					       bool new)
 {
+	struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl);
+	struct nvme_tcp_ofld_queue *queue = &ctrl->queues[0];
 	int rc;
 
-	/* Placeholder - alloc_admin_queue */
+	rc = ctrl->dev->ops->create_queue(queue, 0, NVME_AQ_DEPTH);
+	if (rc)
+		return rc;
+
+	set_bit(NVME_TCP_OFLD_Q_ALLOCATED, &queue->flags);
 	if (new) {
 		nctrl->admin_tagset = nvme_tcp_ofld_alloc_tagset(nctrl, true);
@@ -220,7 +310,9 @@ static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl,
 		}
 	}
 
-	/* Placeholder - nvme_tcp_ofld_start_queue */
+	rc = nvme_tcp_ofld_start_queue(nctrl, 0);
+	if (rc)
+		goto out_cleanup_queue;
 
 	rc = nvme_enable_ctrl(nctrl);
 	if (rc)
@@ -235,8 +327,10 @@ static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl,
 	return 0;
 
 out_stop_queue:
-	/* Placeholder - stop offload queue */
-
+	nvme_tcp_ofld_stop_queue(nctrl, 0);
+out_cleanup_queue:
+	if (new)
+		blk_cleanup_queue(nctrl->admin_q);
 out_cleanup_fabrics_q:
 	if (new)
 		blk_cleanup_queue(nctrl->fabrics_q);
@@ -244,7 +338,123 @@ static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl,
 	if (new)
 		blk_mq_free_tag_set(nctrl->admin_tagset);
 out_free_queue:
-	/* Placeholder - free admin queue */
+	nvme_tcp_ofld_free_queue(nctrl, 0);
 
 	return rc;
 }
 
unsigned int nvme_tcp_ofld_nr_io_queues(struct nvme_ctrl *nctrl) +{ + unsigned int nr_io_queues; + + nr_io_queues = min(nctrl->opts->nr_io_queues, num_online_cpus()); + nr_io_queues += min(nctrl->opts->nr_write_queues, num_online_cpus()); + nr_io_queues += min(nctrl->opts->nr_poll_queues, num_online_cpus()); + + return nr_io_queues; +} + +static void +nvme_tcp_ofld_set_io_queues(struct nvme_ctrl *nctrl, unsigned int nr_io_queues) +{ + struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl); + struct nvmf_ctrl_options *opts = nctrl->opts; + + if (opts->nr_write_queues && opts->nr_io_queues < nr_io_queues) { + /* + * separate read/write queues + * hand out dedicated default queues only after we have + * sufficient read queues. + */ + ctrl->io_queues[HCTX_TYPE_READ] = opts->nr_io_queues; + nr_io_queues -= ctrl->io_queues[HCTX_TYPE_READ]; + ctrl->io_queues[HCTX_TYPE_DEFAULT] = + min(opts->nr_write_queues, nr_io_queues); + nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT]; + } else { + /* + * shared read/write queues + * either no write queues were requested, or we don't have + * sufficient queue count to have dedicated default queues. + */ + ctrl->io_queues[HCTX_TYPE_DEFAULT] = + min(opts->nr_io_queues, nr_io_queues); + nr_io_queues -= ctrl->io_queues[HCTX_TYPE_DEFAULT]; + } + + if (opts->nr_poll_queues && nr_io_queues) { + /* map dedicated poll queues only if we have queues left */ + ctrl->io_queues[HCTX_TYPE_POLL] = + min(opts->nr_poll_queues, nr_io_queues); + } +} + +static void +nvme_tcp_ofld_terminate_io_queues(struct nvme_ctrl *nctrl, int start_from) +{ + int i; + + /* admin-q will be ignored because of the loop condition */ + for (i = start_from; i >= 1; i--) + nvme_tcp_ofld_stop_queue(nctrl, i); +} + +static int nvme_tcp_ofld_create_io_queues(struct nvme_ctrl *nctrl) +{ + struct nvme_tcp_ofld_ctrl *ctrl = to_tcp_ofld_ctrl(nctrl); + int i, rc; + + for (i = 1; i < nctrl->queue_count; i++) { + rc = ctrl->dev->ops->create_queue(&ctrl->queues[i], + i, nctrl->sqsize + 1); + if (rc) + goto out_free_queues; + + set_bit(NVME_TCP_OFLD_Q_ALLOCATED, &ctrl->queues[i].flags); + } + + return 0; + +out_free_queues: + nvme_tcp_ofld_terminate_io_queues(nctrl, --i); + + return rc; +} + +static int nvme_tcp_ofld_alloc_io_queues(struct nvme_ctrl *nctrl) +{ + unsigned int nr_io_queues; + int rc; + + nr_io_queues = nvme_tcp_ofld_nr_io_queues(nctrl); + rc = nvme_set_queue_count(nctrl, &nr_io_queues); + if (rc) + return rc; + + nctrl->queue_count = nr_io_queues + 1; + if (nctrl->queue_count < 2) + return 0; + + dev_info(nctrl->device, "creating %d I/O queues.\n", nr_io_queues); + nvme_tcp_ofld_set_io_queues(nctrl, nr_io_queues); + + return nvme_tcp_ofld_create_io_queues(nctrl); +} + +static int nvme_tcp_ofld_start_io_queues(struct nvme_ctrl *nctrl) +{ + int i, rc = 0; + + for (i = 1; i < nctrl->queue_count; i++) { + rc = nvme_tcp_ofld_start_queue(nctrl, i); + if (rc) + goto terminate_queues; + } + + return 0; + +terminate_queues: + nvme_tcp_ofld_terminate_io_queues(nctrl, --i); return rc; } @@ -252,9 +462,10 @@ static int nvme_tcp_ofld_configure_admin_queue(struct nvme_ctrl *nctrl, static int nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new) { - int rc; + int rc = nvme_tcp_ofld_alloc_io_queues(nctrl); - /* Placeholder - alloc_io_queues */ + if (rc) + return rc; if (new) { nctrl->tagset = nvme_tcp_ofld_alloc_tagset(nctrl, false); @@ -272,7 +483,9 @@ nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new) } } - /* Placeholder - start_io_queues */ + rc = 
nvme_tcp_ofld_start_io_queues(nctrl); + if (rc) + goto out_cleanup_connect_q; if (!new) { nvme_start_queues(nctrl); @@ -296,6 +509,7 @@ nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new) /* Placeholder - Stop IO queues */ +out_cleanup_connect_q: if (new) blk_cleanup_queue(nctrl->connect_q); out_free_tag_set: @@ -303,7 +517,7 @@ nvme_tcp_ofld_configure_io_queues(struct nvme_ctrl *nctrl, bool new) blk_mq_free_tag_set(nctrl->tagset); out_free_io_queues: - /* Placeholder - free_io_queues */ + nvme_tcp_ofld_terminate_io_queues(nctrl, nctrl->queue_count); return rc; } @@ -379,9 +593,9 @@ static int nvme_tcp_ofld_setup_ctrl(struct nvme_ctrl *nctrl, bool new) return 0; destroy_io: - /* Placeholder - stop and destroy io queues*/ + nvme_tcp_ofld_terminate_io_queues(nctrl, nctrl->queue_count); destroy_admin: - /* Placeholder - stop and destroy admin queue*/ + nvme_tcp_ofld_terminate_admin_queue(nctrl, new); return rc; } @@ -560,22 +774,92 @@ nvme_tcp_ofld_queue_rq(struct blk_mq_hw_ctx *hctx, return BLK_STS_OK; } +static void +nvme_tcp_ofld_exit_request(struct blk_mq_tag_set *set, + struct request *rq, unsigned int hctx_idx) +{ + /* Placeholder */ +} + +static int +nvme_tcp_ofld_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, + unsigned int hctx_idx) +{ + struct nvme_tcp_ofld_ctrl *ctrl = data; + struct nvme_tcp_ofld_queue *queue = &ctrl->queues[hctx_idx + 1]; + + hctx->driver_data = queue; + return 0; +} + +static int nvme_tcp_ofld_map_queues(struct blk_mq_tag_set *set) +{ + struct nvme_tcp_ofld_ctrl *ctrl = set->driver_data; + struct nvmf_ctrl_options *opts = ctrl->nctrl.opts; + + if (opts->nr_write_queues && ctrl->queue_type_mapping[HCTX_TYPE_READ]) { + /* separate read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT]; + set->map[HCTX_TYPE_DEFAULT].queue_offset = 0; + set->map[HCTX_TYPE_READ].nr_queues = + ctrl->queue_type_mapping[HCTX_TYPE_READ]; + set->map[HCTX_TYPE_READ].queue_offset = + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT]; + } else { + /* shared read/write queues */ + set->map[HCTX_TYPE_DEFAULT].nr_queues = + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT]; + set->map[HCTX_TYPE_DEFAULT].queue_offset = 0; + set->map[HCTX_TYPE_READ].nr_queues = + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT]; + set->map[HCTX_TYPE_READ].queue_offset = 0; + } + blk_mq_map_queues(&set->map[HCTX_TYPE_DEFAULT]); + blk_mq_map_queues(&set->map[HCTX_TYPE_READ]); + + if (opts->nr_poll_queues && ctrl->queue_type_mapping[HCTX_TYPE_POLL]) { + /* map dedicated poll queues only if we have queues left */ + set->map[HCTX_TYPE_POLL].nr_queues = + ctrl->queue_type_mapping[HCTX_TYPE_POLL]; + set->map[HCTX_TYPE_POLL].queue_offset = + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT] + + ctrl->queue_type_mapping[HCTX_TYPE_READ]; + blk_mq_map_queues(&set->map[HCTX_TYPE_POLL]); + } + + dev_info(ctrl->nctrl.device, + "mapped %d/%d/%d default/read/poll queues.\n", + ctrl->queue_type_mapping[HCTX_TYPE_DEFAULT], + ctrl->queue_type_mapping[HCTX_TYPE_READ], + ctrl->queue_type_mapping[HCTX_TYPE_POLL]); + + return 0; +} + +static int nvme_tcp_ofld_poll(struct blk_mq_hw_ctx *hctx) +{ + /* Placeholder - Implement polling mechanism */ + + return 0; +} + static struct blk_mq_ops nvme_tcp_ofld_mq_ops = { .queue_rq = nvme_tcp_ofld_queue_rq, .init_request = nvme_tcp_ofld_init_request, - /* - * All additional ops will be also implemented and registered similar to - * tcp.c - */ + .complete = nvme_complete_rq, + .exit_request = nvme_tcp_ofld_exit_request, + .init_hctx = 
nvme_tcp_ofld_init_hctx, + .map_queues = nvme_tcp_ofld_map_queues, + .poll = nvme_tcp_ofld_poll, }; static struct blk_mq_ops nvme_tcp_ofld_admin_mq_ops = { .queue_rq = nvme_tcp_ofld_queue_rq, .init_request = nvme_tcp_ofld_init_request, - /* - * All additional ops will be also implemented and registered similar to - * tcp.c - */ + .complete = nvme_complete_rq, + .exit_request = nvme_tcp_ofld_exit_request, + .init_hctx = nvme_tcp_ofld_init_hctx, }; static const struct nvme_ctrl_ops nvme_tcp_ofld_ctrl_ops = { diff --git a/drivers/nvme/host/tcp-offload.h b/drivers/nvme/host/tcp-offload.h index 0fb212f9193a..65c33ad97da2 100644 --- a/drivers/nvme/host/tcp-offload.h +++ b/drivers/nvme/host/tcp-offload.h @@ -42,11 +42,17 @@ struct nvme_tcp_ofld_req { __le16 status); }; +enum nvme_tcp_ofld_queue_flags { + NVME_TCP_OFLD_Q_ALLOCATED = 0, + NVME_TCP_OFLD_Q_LIVE = 1, +}; + /* Allocated by nvme_tcp_ofld */ struct nvme_tcp_ofld_queue { /* Offload device associated to this queue */ struct nvme_tcp_ofld_dev *dev; struct nvme_tcp_ofld_ctrl *ctrl; + unsigned long flags; /* Vendor specific driver context */ void *private_data; @@ -92,6 +98,7 @@ struct nvme_tcp_ofld_ctrl { * corresponding type. */ u32 queue_type_mapping[HCTX_MAX_TYPES]; + u32 io_queues[HCTX_MAX_TYPES]; /* Connectivity params */ struct nvme_tcp_ofld_ctrl_con_params conn_params; From patchwork Sun Feb 7 18:13:20 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shai Malin X-Patchwork-Id: 12073257 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4F891C43381 for ; Sun, 7 Feb 2021 18:15:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2CB5A64D9A for ; Sun, 7 Feb 2021 18:15:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229725AbhBGSPP (ORCPT ); Sun, 7 Feb 2021 13:15:15 -0500 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:49094 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S229636AbhBGSOo (ORCPT ); Sun, 7 Feb 2021 13:14:44 -0500 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 117I5vmZ029881; Sun, 7 Feb 2021 10:13:59 -0800 Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 36hsbrakmg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 07 Feb 2021 10:13:58 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:13:57 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:13:56 -0800 Received: from lbtlvb-pcie154.il.qlogic.org (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Sun, 7 Feb 2021 10:13:53 -0800 From: Shai Malin To: , , CC: , , , , , , 
, , , , Dean Balandin Subject: [RFC PATCH v3 07/11] nvme-tcp-offload: Add IO level implementation Date: Sun, 7 Feb 2021 20:13:20 +0200 Message-ID: <20210207181324.11429-8-smalin@marvell.com> X-Mailer: git-send-email 2.16.6 In-Reply-To: <20210207181324.11429-1-smalin@marvell.com> References: <20210207181324.11429-1-smalin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369,18.0.737 definitions=2021-02-07_08:2021-02-05,2021-02-07 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC From: Dean Balandin In this patch, we present the IO-level functionality. The nvme-tcp-offload shall work at the IO level, meaning the nvme-tcp-offload ULP module shall pass the request to the nvme-tcp-offload vendor driver and shall expect the request completion. No additional handling is needed in between; this design reduces CPU utilization, as described below. The nvme-tcp-offload vendor driver shall register to the nvme-tcp-offload ULP with the following IO-path ops: - init_req - map_sg - in order to map the request sg (similar to nvme_rdma_map_data) - send_req - in order to pass the request to the offload driver, which shall pass it to the vendor-specific device - poll_queue The vendor driver will manage the context from which the request will be executed and the request aggregations. Once the IO has completed, the nvme-tcp-offload vendor driver shall call command.done(), which shall invoke the nvme-tcp-offload ULP layer to complete the request. Signed-off-by: Dean Balandin Signed-off-by: Michal Kalderon Signed-off-by: Ariel Elior Signed-off-by: Shai Malin --- drivers/nvme/host/tcp-offload.c | 82 ++++++++++++++++++++++++++++++--- 1 file changed, 75 insertions(+), 7 deletions(-) diff --git a/drivers/nvme/host/tcp-offload.c b/drivers/nvme/host/tcp-offload.c index f1023f2eae8d..bf64551c7a91 100644 --- a/drivers/nvme/host/tcp-offload.c +++ b/drivers/nvme/host/tcp-offload.c @@ -113,6 +113,7 @@ int nvme_tcp_ofld_report_queue_err(struct nvme_tcp_ofld_queue *queue) /** * nvme_tcp_ofld_req_done() - NVMeTCP Offload request done callback * function. Pointed to by nvme_tcp_ofld_req->done. + * Handles both NVME_TCP_F_DATA_SUCCESS flag and NVMe CQE. * @req: NVMeTCP offload request to complete. * @result: The nvme_result. * @status: The completion status. @@ -125,7 +126,10 @@ nvme_tcp_ofld_req_done(struct nvme_tcp_ofld_req *req, union nvme_result *result, __le16 status) { - /* Placeholder - complete request with/without error */ + struct request *rq = blk_mq_rq_from_pdu(req); + + if (!nvme_try_complete_req(rq, cpu_to_le16(status << 1), *result)) + nvme_complete_rq(rq); } struct nvme_tcp_ofld_dev * @@ -754,9 +758,10 @@ nvme_tcp_ofld_init_request(struct blk_mq_tag_set *set, { struct nvme_tcp_ofld_req *req = blk_mq_rq_to_pdu(rq); struct nvme_tcp_ofld_ctrl *ctrl = set->driver_data; + int qid = (set == &ctrl->tag_set) ? hctx_idx + 1 : 0; - /* Placeholder - init request */ - + req->queue = &ctrl->queues[qid]; + nvme_req(rq)->ctrl = &ctrl->nctrl; req->done = nvme_tcp_ofld_req_done; ctrl->dev->ops->init_req(req); @@ -767,9 +772,32 @@ static blk_status_t nvme_tcp_ofld_queue_rq(struct blk_mq_hw_ctx *hctx, const struct blk_mq_queue_data *bd) { - /* Call nvme_setup_cmd(...) 
*/ + struct nvme_tcp_ofld_req *req = blk_mq_rq_to_pdu(bd->rq); + struct nvme_tcp_ofld_queue *queue = hctx->driver_data; + struct nvme_tcp_ofld_ctrl *ctrl = queue->ctrl; + struct nvme_ns *ns = hctx->queue->queuedata; + struct nvme_tcp_ofld_dev *dev = queue->dev; + struct nvme_tcp_ofld_ops *ops = dev->ops; + struct request *rq = bd->rq; + bool queue_ready; + int rc; + + queue_ready = test_bit(NVME_TCP_OFLD_Q_LIVE, &queue->flags); + if (!nvmf_check_ready(&ctrl->nctrl, rq, queue_ready)) + return nvmf_fail_nonready_command(&ctrl->nctrl, rq); + + rc = nvme_setup_cmd(ns, rq, &req->nvme_cmd); + if (unlikely(rc)) + return rc; + + blk_mq_start_request(rq); + rc = ops->map_sg(dev, req); + if (unlikely(rc)) + return rc; - /* Call ops->map_sg(...) */ + rc = ops->send_req(req); + if (unlikely(rc)) + return rc; return BLK_STS_OK; } @@ -839,9 +867,47 @@ static int nvme_tcp_ofld_map_queues(struct blk_mq_tag_set *set) static int nvme_tcp_ofld_poll(struct blk_mq_hw_ctx *hctx) { - /* Placeholder - Implement polling mechanism */ + struct nvme_tcp_ofld_queue *queue = hctx->driver_data; + struct nvme_tcp_ofld_dev *dev = queue->dev; + struct nvme_tcp_ofld_ops *ops = dev->ops; - return 0; + return ops->poll_queue(queue); +} + +static enum blk_eh_timer_return +nvme_tcp_ofld_timeout(struct request *rq, bool reserved) +{ + struct nvme_tcp_ofld_req *req = blk_mq_rq_to_pdu(rq); + struct nvme_tcp_ofld_ctrl *ctrl = req->queue->ctrl; + + /* Restart the timer if a controller reset is already scheduled. Any + * timed out request would be handled before entering the connecting + * state. + */ + if (ctrl->nctrl.state == NVME_CTRL_RESETTING) + return BLK_EH_RESET_TIMER; + + dev_warn(ctrl->nctrl.device, + "queue %d: timeout request %#x type %d\n", + nvme_tcp_ofld_qid(req->queue), rq->tag, + req->nvme_cmd.common.opcode); + + if (ctrl->nctrl.state != NVME_CTRL_LIVE) { + /* + * Teardown immediately if controller times out while starting + * or if we have already started error recovery. All outstanding + * requests are completed on shutdown, so we return BLK_EH_DONE. 
+ */ + flush_work(&ctrl->err_work); + nvme_tcp_ofld_teardown_io_queues(&ctrl->nctrl, false); + nvme_tcp_ofld_teardown_admin_queue(&ctrl->nctrl, false); + return BLK_EH_DONE; + } + + dev_warn(ctrl->nctrl.device, "starting error recovery\n"); + nvme_tcp_ofld_error_recovery(&ctrl->nctrl); + + return BLK_EH_RESET_TIMER; } static struct blk_mq_ops nvme_tcp_ofld_mq_ops = { @@ -851,6 +917,7 @@ static struct blk_mq_ops nvme_tcp_ofld_mq_ops = { .exit_request = nvme_tcp_ofld_exit_request, .init_hctx = nvme_tcp_ofld_init_hctx, .map_queues = nvme_tcp_ofld_map_queues, + .timeout = nvme_tcp_ofld_timeout, .poll = nvme_tcp_ofld_poll, }; @@ -860,6 +927,7 @@ static struct blk_mq_ops nvme_tcp_ofld_admin_mq_ops = { .complete = nvme_complete_rq, .exit_request = nvme_tcp_ofld_exit_request, .init_hctx = nvme_tcp_ofld_init_hctx, + .timeout = nvme_tcp_ofld_timeout, }; static const struct nvme_ctrl_ops nvme_tcp_ofld_ctrl_ops = { From patchwork Sun Feb 7 18:13:21 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shai Malin X-Patchwork-Id: 12073261 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-21.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,MENTIONS_GIT_HOSTING,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 309F1C433DB for ; Sun, 7 Feb 2021 18:15:29 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 03A1964D9C for ; Sun, 7 Feb 2021 18:15:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229727AbhBGSP0 (ORCPT ); Sun, 7 Feb 2021 13:15:26 -0500 Received: from mx0b-0016f401.pphosted.com ([67.231.156.173]:63932 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229690AbhBGSOx (ORCPT ); Sun, 7 Feb 2021 13:14:53 -0500 Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 117I87Oq019283; Sun, 7 Feb 2021 10:14:03 -0800 Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com with ESMTP id 36hugq2dnn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 07 Feb 2021 10:14:03 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:14:01 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:14:00 -0800 Received: from lbtlvb-pcie154.il.qlogic.org (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Sun, 7 Feb 2021 10:13:57 -0800 From: Shai Malin To: , , CC: , , , , , , , , , , Arie Gershberg Subject: [RFC PATCH v3 08/11] nvme-qedn: Add qedn - Marvell's NVMeTCP HW offload vendor driver Date: Sun, 7 Feb 2021 20:13:21 +0200 Message-ID: <20210207181324.11429-9-smalin@marvell.com> X-Mailer: git-send-email 2.16.6 In-Reply-To: <20210207181324.11429-1-smalin@marvell.com> References: <20210207181324.11429-1-smalin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: 
vendor=fsecure engine=2.50.10434:6.0.369,18.0.737 definitions=2021-02-07_08:2021-02-05,2021-02-07 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC This patch will present the skeleton of the qedn driver. The new driver will be added under "drivers/nvme/hw/qedn" and will be enabled by the Kconfig "Marvell NVM Express over Fabrics TCP offload". The internal implementation: - qedn.h: Includes all common structs to be used by the qedn vendor driver. - qedn.c Includes the qedn_init and qedn_cleanup implementation. As part of the qedn init, the driver will register as a pci device and will work with the Marvell fastlinQ NICs. As part of the probe, the driver will register to the nvme_tcp_offload (ULP). Signed-off-by: Arie Gershberg Signed-off-by: Michal Kalderon Signed-off-by: Ariel Elior Signed-off-by: Shai Malin --- MAINTAINERS | 10 ++ drivers/nvme/Kconfig | 1 + drivers/nvme/Makefile | 1 + drivers/nvme/hw/Kconfig | 8 ++ drivers/nvme/hw/Makefile | 3 + drivers/nvme/hw/qedn/Makefile | 3 + drivers/nvme/hw/qedn/qedn.c | 191 ++++++++++++++++++++++++++++++++++ drivers/nvme/hw/qedn/qedn.h | 14 +++ 8 files changed, 231 insertions(+) create mode 100644 drivers/nvme/hw/Kconfig create mode 100644 drivers/nvme/hw/Makefile create mode 100644 drivers/nvme/hw/qedn/Makefile create mode 100644 drivers/nvme/hw/qedn/qedn.c create mode 100644 drivers/nvme/hw/qedn/qedn.h diff --git a/MAINTAINERS b/MAINTAINERS index 546aa66428c9..707623d92008 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -14572,6 +14572,16 @@ S: Supported F: drivers/infiniband/hw/qedr/ F: include/uapi/rdma/qedr-abi.h +QLOGIC QL4xxx NVME-TCP-OFFLOAD DRIVER +M: Shai Malin +M: Ariel Elior +L: linux-nvme@lists.infradead.org +S: Supported +W: http://git.infradead.org/nvme.git +T: git://git.infradead.org/nvme.git +F: drivers/nvme/hw/qedn/ +F: include/linux/qed/ + QLOGIC QLA1280 SCSI DRIVER M: Michael Reed L: linux-scsi@vger.kernel.org diff --git a/drivers/nvme/Kconfig b/drivers/nvme/Kconfig index 87ae409a32b9..827c2c9f0ad1 100644 --- a/drivers/nvme/Kconfig +++ b/drivers/nvme/Kconfig @@ -3,5 +3,6 @@ menu "NVME Support" source "drivers/nvme/host/Kconfig" source "drivers/nvme/target/Kconfig" +source "drivers/nvme/hw/Kconfig" endmenu diff --git a/drivers/nvme/Makefile b/drivers/nvme/Makefile index fb42c44609a8..14c569040ef2 100644 --- a/drivers/nvme/Makefile +++ b/drivers/nvme/Makefile @@ -2,3 +2,4 @@ obj-y += host/ obj-y += target/ +obj-y += hw/ \ No newline at end of file diff --git a/drivers/nvme/hw/Kconfig b/drivers/nvme/hw/Kconfig new file mode 100644 index 000000000000..374f1f9dbd3d --- /dev/null +++ b/drivers/nvme/hw/Kconfig @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: GPL-2.0-only +config NVME_QEDN + tristate "Marvell NVM Express over Fabrics TCP offload" + depends on NVME_TCP_OFFLOAD + help + This enables the Marvell NVMe TCP offload support (qedn). + + If unsure, say N. 
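
For anyone building this RFC, the Kconfig entries in this series suggest the following .config fragment (illustrative; CONFIG_QED_NVMETCP is selected automatically once a later patch in this series adds "select QED_NVMETCP" to NVME_QEDN):

CONFIG_NVME_TCP_OFFLOAD=m
CONFIG_NVME_QEDN=m
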
diff --git a/drivers/nvme/hw/Makefile b/drivers/nvme/hw/Makefile new file mode 100644 index 000000000000..7780ea5ab520 --- /dev/null +++ b/drivers/nvme/hw/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_NVME_QEDN) += qedn/ diff --git a/drivers/nvme/hw/qedn/Makefile b/drivers/nvme/hw/qedn/Makefile new file mode 100644 index 000000000000..5f1bd2cd36da --- /dev/null +++ b/drivers/nvme/hw/qedn/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0-only + +obj-$(CONFIG_NVME_QEDN) := qedn.o diff --git a/drivers/nvme/hw/qedn/qedn.c b/drivers/nvme/hw/qedn/qedn.c new file mode 100644 index 000000000000..1fe33335989c --- /dev/null +++ b/drivers/nvme/hw/qedn/qedn.c @@ -0,0 +1,191 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright 2021 Marvell. All rights reserved. + */ + +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + + /* Kernel includes */ +#include +#include + +/* Driver includes */ +#include "qedn.h" + +#define CHIP_NUM_AHP_NVMETCP 0x8194 + +static struct pci_device_id qedn_pci_tbl[] = { + { PCI_VDEVICE(QLOGIC, CHIP_NUM_AHP_NVMETCP), 0 }, + {0, 0}, +}; + +static int +qedn_claim_dev(struct nvme_tcp_ofld_dev *dev, + struct nvme_tcp_ofld_ctrl_con_params *conn_params) +{ + /* Placeholder - qedn_claim_dev */ + + return 0; +} + +static int qedn_create_queue(struct nvme_tcp_ofld_queue *queue, int qid, + size_t q_size) +{ + /* Placeholder - qedn_create_queue */ + + return 0; +} + +static void qedn_drain_queue(struct nvme_tcp_ofld_queue *queue) +{ + /* Placeholder - qedn_drain_queue */ +} + +static void qedn_destroy_queue(struct nvme_tcp_ofld_queue *queue) +{ + /* Placeholder - qedn_destroy_queue */ +} + +static int qedn_poll_queue(struct nvme_tcp_ofld_queue *queue) +{ + /* Placeholder - qedn_poll_queue */ + + return 0; +} + +static int qedn_init_req(struct nvme_tcp_ofld_req *req) +{ + /* Placeholder - qedn_init_req */ + + return 0; +} + +static int qedn_map_sg(struct nvme_tcp_ofld_dev *dev, + struct nvme_tcp_ofld_req *req) +{ + /* Placeholder - qedn_map_sg */ + + return 0; +} + +static int qedn_send_req(struct nvme_tcp_ofld_req *req) +{ + /* Placeholder - qedn_send_req */ + + return 0; +} + +static struct nvme_tcp_ofld_ops qedn_ofld_ops = { + .name = "qedn", + .module = THIS_MODULE, + .required_opts = NVMF_OPT_TRADDR, + .allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_DISABLE_SQFLOW | + NVMF_OPT_NR_WRITE_QUEUES | NVMF_OPT_HOST_TRADDR | + NVMF_OPT_CTRL_LOSS_TMO | NVMF_OPT_RECONNECT_DELAY | + NVMF_OPT_HDR_DIGEST | NVMF_OPT_DATA_DIGEST | + NVMF_OPT_NR_POLL_QUEUES | NVMF_OPT_TOS, + .claim_dev = qedn_claim_dev, + .create_queue = qedn_create_queue, + .drain_queue = qedn_drain_queue, + .destroy_queue = qedn_destroy_queue, + .poll_queue = qedn_poll_queue, + .init_req = qedn_init_req, + .map_sg = qedn_map_sg, + .send_req = qedn_send_req, +}; + +static void __qedn_remove(struct pci_dev *pdev) +{ + struct qedn_ctx *qedn = pci_get_drvdata(pdev); + + pr_notice("Starting qedn_remove\n"); + nvme_tcp_ofld_unregister_dev(&qedn->qedn_ofld_dev); + kfree(qedn); + pr_notice("Ending qedn_remove successfully\n"); } + +static void qedn_remove(struct pci_dev *pdev) +{ + __qedn_remove(pdev); +} + +static void qedn_shutdown(struct pci_dev *pdev) +{ + __qedn_remove(pdev); +} + +static struct qedn_ctx *qedn_alloc_ctx(struct pci_dev *pdev) +{ + struct qedn_ctx *qedn = NULL; + + qedn = kzalloc(sizeof(*qedn), GFP_KERNEL); + if (!qedn) + return NULL; + + qedn->pdev = pdev; + pci_set_drvdata(pdev, qedn); + + return qedn; +} + +static int __qedn_probe(struct pci_dev *pdev) +{ + 
struct qedn_ctx *qedn; + int rc; + + pr_notice("Starting qedn probe\n"); + + qedn = qedn_alloc_ctx(pdev); + if (!qedn) + return -ENODEV; + + qedn->qedn_ofld_dev.ops = &qedn_ofld_ops; + INIT_LIST_HEAD(&qedn->qedn_ofld_dev.entry); + rc = nvme_tcp_ofld_register_dev(&qedn->qedn_ofld_dev); + if (rc) + goto release_qedn; + + return 0; +release_qedn: + kfree(qedn); + + return rc; +} + +static int qedn_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + return __qedn_probe(pdev); +} + +static struct pci_driver qedn_pci_driver = { + .name = QEDN_MODULE_NAME, + .id_table = qedn_pci_tbl, + .probe = qedn_probe, + .remove = qedn_remove, + .shutdown = qedn_shutdown, +}; + +static int __init qedn_init(void) +{ + int rc; + + rc = pci_register_driver(&qedn_pci_driver); + if (rc) { + pr_err("Failed to register pci driver\n"); + return -EINVAL; + } + + pr_notice("driver loaded successfully\n"); + + return 0; +} + +static void __exit qedn_cleanup(void) +{ + pci_unregister_driver(&qedn_pci_driver); + pr_notice("Unloading qedn ended\n"); +} + +module_init(qedn_init); +module_exit(qedn_cleanup); +MODULE_LICENSE("GPL v2"); diff --git a/drivers/nvme/hw/qedn/qedn.h b/drivers/nvme/hw/qedn/qedn.h new file mode 100644 index 000000000000..401fde08000e --- /dev/null +++ b/drivers/nvme/hw/qedn/qedn.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright 2021 Marvell. All rights reserved. + */ + +/* Driver includes */ +#include "../../host/tcp-offload.h" + +#define QEDN_MODULE_NAME "qedn" + +struct qedn_ctx { + struct pci_dev *pdev; + struct nvme_tcp_ofld_dev qedn_ofld_dev; +}; From patchwork Sun Feb 7 18:13:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shai Malin X-Patchwork-Id: 12073263 X-Patchwork-Delegate: kuba@kernel.org Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5161CC433DB for ; Sun, 7 Feb 2021 18:15:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 103CA64D9C for ; Sun, 7 Feb 2021 18:15:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229742AbhBGSPf (ORCPT ); Sun, 7 Feb 2021 13:15:35 -0500 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:17414 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S229691AbhBGSOy (ORCPT ); Sun, 7 Feb 2021 13:14:54 -0500 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 117I67Ol029915; Sun, 7 Feb 2021 10:14:06 -0800 Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 36hsbrakmn-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 07 Feb 2021 10:14:05 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:14:04 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH02.marvell.com (10.93.176.82) 
with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:14:04 -0800 Received: from lbtlvb-pcie154.il.qlogic.org (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Sun, 7 Feb 2021 10:14:01 -0800 From: Shai Malin To: , , CC: , , , , , , , , , , Dean Balandin Subject: [RFC PATCH v3 09/11] net-qed: Add NVMeTCP Offload PF Level FW and HW HSI Date: Sun, 7 Feb 2021 20:13:22 +0200 Message-ID: <20210207181324.11429-10-smalin@marvell.com> X-Mailer: git-send-email 2.16.6 In-Reply-To: <20210207181324.11429-1-smalin@marvell.com> References: <20210207181324.11429-1-smalin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369,18.0.737 definitions=2021-02-07_08:2021-02-05,2021-02-07 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-Delegate: kuba@kernel.org X-Patchwork-State: RFC This patch introduces the NVMeTCP device and PF level HSI and HSI functionality in order to initialize and interact with the HW device. This patch is based on the qede, qedr, qedi, qedf drivers HSI. Signed-off-by: Dean Balandin Signed-off-by: Michal Kalderon Signed-off-by: Shai Malin --- drivers/net/ethernet/qlogic/Kconfig | 3 + drivers/net/ethernet/qlogic/qed/Makefile | 1 + drivers/net/ethernet/qlogic/qed/qed.h | 3 + drivers/net/ethernet/qlogic/qed/qed_hsi.h | 1 + drivers/net/ethernet/qlogic/qed/qed_nvmetcp.c | 269 ++++++++++++++++++ drivers/net/ethernet/qlogic/qed/qed_nvmetcp.h | 48 ++++ drivers/net/ethernet/qlogic/qed/qed_sp.h | 2 + include/linux/qed/common_hsi.h | 1 + include/linux/qed/nvmetcp_common.h | 47 +++ include/linux/qed/qed_if.h | 22 ++ include/linux/qed/qed_nvmetcp_if.h | 71 +++++ 11 files changed, 468 insertions(+) create mode 100644 drivers/net/ethernet/qlogic/qed/qed_nvmetcp.c create mode 100644 drivers/net/ethernet/qlogic/qed/qed_nvmetcp.h create mode 100644 include/linux/qed/nvmetcp_common.h create mode 100644 include/linux/qed/qed_nvmetcp_if.h diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig index 4366c7a8de95..1ddd82d83e3f 100644 --- a/drivers/net/ethernet/qlogic/Kconfig +++ b/drivers/net/ethernet/qlogic/Kconfig @@ -109,6 +109,9 @@ config QED_RDMA config QED_ISCSI bool +config QED_NVMETCP + bool + config QED_FCOE bool diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile index 8251755ec18c..c8f4d40a6285 100644 --- a/drivers/net/ethernet/qlogic/qed/Makefile +++ b/drivers/net/ethernet/qlogic/qed/Makefile @@ -23,6 +23,7 @@ qed-y := \ qed_sp_commands.o \ qed_spq.o +qed-$(CONFIG_QED_NVMETCP) += qed_nvmetcp.o qed-$(CONFIG_QED_FCOE) += qed_fcoe.o qed-$(CONFIG_QED_ISCSI) += qed_iscsi.o qed-$(CONFIG_QED_LL2) += qed_ll2.o diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h index a20cb8a0c377..91d4635009ab 100644 --- a/drivers/net/ethernet/qlogic/qed/qed.h +++ b/drivers/net/ethernet/qlogic/qed/qed.h @@ -240,6 +240,7 @@ enum QED_FEATURE { QED_VF, QED_RDMA_CNQ, QED_ISCSI_CQ, + QED_NVMETCP_CQ = QED_ISCSI_CQ, QED_FCOE_CQ, QED_VF_L2_QUE, QED_MAX_FEATURES, @@ -592,6 +593,7 @@ struct qed_hwfn { struct qed_ooo_info *p_ooo_info; struct qed_rdma_info *p_rdma_info; struct qed_iscsi_info *p_iscsi_info; + struct qed_nvmetcp_info *p_nvmetcp_info; struct qed_fcoe_info *p_fcoe_info; struct qed_pf_params pf_params; @@ -828,6 +830,7 @@ struct qed_dev { struct qed_eth_cb_ops *eth; struct qed_fcoe_cb_ops *fcoe; struct qed_iscsi_cb_ops *iscsi; + struct 
qed_nvmetcp_cb_ops *nvmetcp; } protocol_ops; void *ops_cookie; diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h index 559df9f4d656..24472f6a83c2 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h +++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include diff --git a/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.c b/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.c new file mode 100644 index 000000000000..720b32a35820 --- /dev/null +++ b/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.c @@ -0,0 +1,269 @@ +// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB +/* + * Copyright 2021 Marvell. All rights reserved. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "qed.h" +#include "qed_cxt.h" +#include "qed_dev_api.h" +#include "qed_hsi.h" +#include "qed_hw.h" +#include "qed_int.h" +#include "qed_nvmetcp.h" +#include "qed_ll2.h" +#include "qed_mcp.h" +#include "qed_sp.h" +#include "qed_reg_addr.h" + +static int qed_nvmetcp_async_event(struct qed_hwfn *p_hwfn, u8 fw_event_code, + u16 echo, union event_ring_data *data, + u8 fw_return_code) +{ + if (p_hwfn->p_nvmetcp_info->event_cb) { + struct qed_nvmetcp_info *p_nvmetcp = p_hwfn->p_nvmetcp_info; + + return p_nvmetcp->event_cb(p_nvmetcp->event_context, + fw_event_code, data); + } else { + DP_NOTICE(p_hwfn, "nvmetcp async completion is not set\n"); + return -EINVAL; + } +} + +static int qed_sp_nvmetcp_func_start(struct qed_hwfn *p_hwfn, + enum spq_mode comp_mode, + struct qed_spq_comp_cb *p_comp_addr, + void *event_context, + nvmetcp_event_cb_t async_event_cb) +{ + struct nvmetcp_init_ramrod_params *p_ramrod = NULL; + struct scsi_init_func_queues *p_queue = NULL; + struct qed_nvmetcp_pf_params *p_params = NULL; + struct nvmetcp_spe_func_init *p_init = NULL; + struct qed_sp_init_data init_data = {}; + struct qed_spq_entry *p_ent = NULL; + int rc = 0; + u16 val; + u8 i; + + /* Get SPQ entry */ + init_data.cid = qed_spq_get_cid(p_hwfn); + init_data.opaque_fid = p_hwfn->hw_info.opaque_fid; + init_data.comp_mode = comp_mode; + init_data.p_comp_data = p_comp_addr; + + rc = qed_sp_init_request(p_hwfn, &p_ent, + NVMETCP_RAMROD_CMD_ID_INIT_FUNC, + PROTOCOLID_NVMETCP, &init_data); + if (rc) + return rc; + + p_ramrod = &p_ent->ramrod.nvmetcp_init; + p_init = &p_ramrod->nvmetcp_init_spe; + p_params = &p_hwfn->pf_params.nvmetcp_pf_params; + p_queue = &p_init->q_params; + + p_init->num_sq_pages_in_ring = p_params->num_sq_pages_in_ring; + p_init->num_r2tq_pages_in_ring = p_params->num_r2tq_pages_in_ring; + p_init->num_uhq_pages_in_ring = p_params->num_uhq_pages_in_ring; + p_init->ll2_rx_queue_id = RESC_START(p_hwfn, QED_LL2_RAM_QUEUE) + + p_params->ll2_ooo_queue_id; + + p_init->func_params.log_page_size = ilog2(PAGE_SIZE); + p_init->func_params.num_tasks = cpu_to_le16(p_params->num_tasks); + + DMA_REGPAIR_LE(p_queue->glbl_q_params_addr, + p_params->glbl_q_params_addr); + + p_queue->cq_num_entries = cpu_to_le16(QED_NVMETCP_FW_CQ_SIZE); + p_queue->num_queues = p_params->num_queues; + p_queue->queue_relative_offset = (u8)RESC_START(p_hwfn, QED_CMDQS_CQS); + p_queue->cq_sb_pi = p_params->gl_rq_pi; + + for (i = 0; i < p_params->num_queues; i++) { + val = qed_get_igu_sb_id(p_hwfn, i); + p_queue->cq_cmdq_sb_num_arr[i] = cpu_to_le16(val); + } + + /* p_ramrod->tcp_init.min_rto = cpu_to_le16(p_params->min_rto); */ + 
p_ramrod->tcp_init.two_msl_timer = cpu_to_le32(QED_TCP_TWO_MSL_TIMER); + p_ramrod->tcp_init.tx_sws_timer = cpu_to_le16(QED_TCP_SWS_TIMER); + p_init->half_way_close_timeout = cpu_to_le16(QED_TCP_HALF_WAY_CLOSE_TIMEOUT); + p_ramrod->tcp_init.max_fin_rt = QED_TCP_MAX_FIN_RT; + + p_hwfn->p_nvmetcp_info->event_context = event_context; + p_hwfn->p_nvmetcp_info->event_cb = async_event_cb; + + qed_spq_register_async_cb(p_hwfn, PROTOCOLID_NVMETCP, + qed_nvmetcp_async_event); + + return qed_spq_post(p_hwfn, p_ent, NULL); +} + +static int qed_sp_nvmetcp_func_stop(struct qed_hwfn *p_hwfn, + enum spq_mode comp_mode, + struct qed_spq_comp_cb *p_comp_addr) +{ + struct qed_spq_entry *p_ent = NULL; + struct qed_sp_init_data init_data; + int rc = 0; + + /* Get SPQ entry */ + memset(&init_data, 0, sizeof(init_data)); + init_data.cid = qed_spq_get_cid(p_hwfn); + init_data.opaque_fid = p_hwfn->hw_info.opaque_fid; + init_data.comp_mode = comp_mode; + init_data.p_comp_data = p_comp_addr; + + rc = qed_sp_init_request(p_hwfn, &p_ent, + NVMETCP_RAMROD_CMD_ID_DESTROY_FUNC, + PROTOCOLID_NVMETCP, &init_data); + if (rc) + return rc; + + rc = qed_spq_post(p_hwfn, p_ent, NULL); + + qed_spq_unregister_async_cb(p_hwfn, PROTOCOLID_NVMETCP); + return rc; +} + +static int qed_fill_nvmetcp_dev_info(struct qed_dev *cdev, + struct qed_dev_nvmetcp_info *info) +{ + struct qed_hwfn *hwfn = QED_AFFIN_HWFN(cdev); + + int rc; + + memset(info, 0, sizeof(*info)); + rc = qed_fill_dev_info(cdev, &info->common); + + info->num_cqs = FEAT_NUM(hwfn, QED_NVMETCP_CQ); + + return rc; +} + +static void qed_register_nvmetcp_ops(struct qed_dev *cdev, + struct qed_nvmetcp_cb_ops *ops, + void *cookie) +{ + cdev->protocol_ops.nvmetcp = ops; + cdev->ops_cookie = cookie; +} + +static int qed_nvmetcp_stop(struct qed_dev *cdev) +{ + int rc; + + if (!(cdev->flags & QED_FLAG_STORAGE_STARTED)) { + DP_NOTICE(cdev, "nvmetcp already stopped\n"); + return 0; + } + + if (!hash_empty(cdev->connections)) { + DP_NOTICE(cdev, + "Can't stop nvmetcp - not all connections were returned\n"); + return -EINVAL; + } + + /* Stop the nvmetcp */ + rc = qed_sp_nvmetcp_func_stop(QED_AFFIN_HWFN(cdev), QED_SPQ_MODE_EBLOCK, + NULL); + cdev->flags &= ~QED_FLAG_STORAGE_STARTED; + + return rc; +} + +static int qed_nvmetcp_start(struct qed_dev *cdev, + struct qed_nvmetcp_tid *tasks, + void *event_context, + nvmetcp_event_cb_t async_event_cb) +{ + struct qed_tid_mem *tid_info; + int rc; + + if (cdev->flags & QED_FLAG_STORAGE_STARTED) { + DP_NOTICE(cdev, "nvmetcp already started;\n"); + return 0; + } + + rc = qed_sp_nvmetcp_func_start(QED_AFFIN_HWFN(cdev), + QED_SPQ_MODE_EBLOCK, + NULL, + event_context, + async_event_cb); + if (rc) { + DP_NOTICE(cdev, "Failed to start nvmetcp\n"); + return rc; + } + + cdev->flags |= QED_FLAG_STORAGE_STARTED; + hash_init(cdev->connections); + + if (!tasks) + return 0; + + tid_info = kzalloc(sizeof(*tid_info), GFP_KERNEL); + + if (!tid_info) { + qed_nvmetcp_stop(cdev); + return -ENOMEM; + } + + rc = qed_cxt_get_tid_mem_info(QED_AFFIN_HWFN(cdev), tid_info); + if (rc) { + DP_NOTICE(cdev, "Failed to gather task information\n"); + qed_nvmetcp_stop(cdev); + kfree(tid_info); + return rc; + } + + /* Fill task information */ + tasks->size = tid_info->tid_size; + tasks->num_tids_per_block = tid_info->num_tids_per_block; + memcpy(tasks->blocks, tid_info->blocks, + MAX_TID_BLOCKS_NVMETCP * sizeof(u8 *)); + + kfree(tid_info); + + return 0; +} + +static const struct qed_nvmetcp_ops qed_nvmetcp_ops_pass = { + .common = &qed_common_ops_pass, + .ll2 = &qed_ll2_ops_pass, + 
.fill_dev_info = &qed_fill_nvmetcp_dev_info, + .register_ops = &qed_register_nvmetcp_ops, + .start = &qed_nvmetcp_start, + .stop = &qed_nvmetcp_stop, + + /* Placeholder - Connection level ops */ +}; + +const struct qed_nvmetcp_ops *qed_get_nvmetcp_ops(void) +{ + return &qed_nvmetcp_ops_pass; +} +EXPORT_SYMBOL(qed_get_nvmetcp_ops); + +void qed_put_nvmetcp_ops(void) +{ +} +EXPORT_SYMBOL(qed_put_nvmetcp_ops); diff --git a/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.h b/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.h new file mode 100644 index 000000000000..cb10d526a241 --- /dev/null +++ b/drivers/net/ethernet/qlogic/qed/qed_nvmetcp.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright 2021 Marvell. All rights reserved. + */ + +#ifndef _QED_NVMETCP_H +#define _QED_NVMETCP_H +#include +#include +#include +#include +#include +#include +#include +#include "qed.h" +#include "qed_hsi.h" +#include "qed_mcp.h" +#include "qed_sp.h" + +#define QED_NVMETCP_FW_CQ_SIZE (4 * 1024) + +/* tcp parameters */ +#define QED_TCP_TWO_MSL_TIMER 4000 +#define QED_TCP_HALF_WAY_CLOSE_TIMEOUT 10 +#define QED_TCP_MAX_FIN_RT 2 +#define QED_TCP_SWS_TIMER 5000 + +struct qed_nvmetcp_info { + spinlock_t lock; /* Connection resources. */ + struct list_head free_list; + u16 max_num_outstanding_tasks; + void *event_context; + nvmetcp_event_cb_t event_cb; +}; + +#if IS_ENABLED(CONFIG_QED_NVMETCP) +void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn); + +void qed_nvmetcp_free(struct qed_hwfn *p_hwfn); + +#else /* IS_ENABLED(CONFIG_QED_NVMETCP) */ +static inline void qed_nvmetcp_setup(struct qed_hwfn *p_hwfn) {} + +static inline void qed_nvmetcp_free(struct qed_hwfn *p_hwfn) {} + +#endif /* IS_ENABLED(CONFIG_QED_NVMETCP) */ + +#endif diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp.h b/drivers/net/ethernet/qlogic/qed/qed_sp.h index 993f1357b6fc..525159e747a5 100644 --- a/drivers/net/ethernet/qlogic/qed/qed_sp.h +++ b/drivers/net/ethernet/qlogic/qed/qed_sp.h @@ -100,6 +100,8 @@ union ramrod_data { struct iscsi_spe_conn_mac_update iscsi_conn_mac_update; struct iscsi_spe_conn_termination iscsi_conn_terminate; + struct nvmetcp_init_ramrod_params nvmetcp_init; + struct vf_start_ramrod_data vf_start; struct vf_stop_ramrod_data vf_stop; }; diff --git a/include/linux/qed/common_hsi.h b/include/linux/qed/common_hsi.h index 977807e1be53..59c5e5866607 100644 --- a/include/linux/qed/common_hsi.h +++ b/include/linux/qed/common_hsi.h @@ -703,6 +703,7 @@ enum mf_mode { /* Per-protocol connection types */ enum protocol_type { PROTOCOLID_ISCSI, + PROTOCOLID_NVMETCP = PROTOCOLID_ISCSI, PROTOCOLID_FCOE, PROTOCOLID_ROCE, PROTOCOLID_CORE, diff --git a/include/linux/qed/nvmetcp_common.h b/include/linux/qed/nvmetcp_common.h new file mode 100644 index 000000000000..0b097125efce --- /dev/null +++ b/include/linux/qed/nvmetcp_common.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright 2021 Marvell. All rights reserved. 
+ */ + +#ifndef __NVMETCP_COMMON__ +#define __NVMETCP_COMMON__ + +#include "tcp_common.h" + +/* * NVMeTCP firmware function init parameters */ +struct nvmetcp_spe_func_init { + __le16 reserved; + __le16 half_way_close_timeout; + u8 num_sq_pages_in_ring; + u8 num_r2tq_pages_in_ring; + u8 num_uhq_pages_in_ring; + u8 ll2_rx_queue_id; + u8 flags; +#define NVMETCP_SPE_FUNC_INIT_COUNTERS_EN_MASK 0x1 +#define NVMETCP_SPE_FUNC_INIT_COUNTERS_EN_SHIFT 0 +#define NVMETCP_SPE_FUNC_INIT_NVMETCP_MODE_MASK 0x1 +#define NVMETCP_SPE_FUNC_INIT_NVMETCP_MODE_SHIFT 1 +#define NVMETCP_SPE_FUNC_INIT_RESERVED0_MASK 0x3F +#define NVMETCP_SPE_FUNC_INIT_RESERVED0_SHIFT 2 + u8 debug_flags; + __le16 reserved1; + __le32 reserved2; + struct scsi_init_func_params func_params; + struct scsi_init_func_queues q_params; +}; + +/* NVMeTCP init params passed by driver to FW in NVMeTCP init ramrod. */ +struct nvmetcp_init_ramrod_params { + struct nvmetcp_spe_func_init nvmetcp_init_spe; + struct tcp_init_params tcp_init; +}; + +/* NVMeTCP Ramrod Command IDs */ +enum nvmetcp_ramrod_cmd_id { + NVMETCP_RAMROD_CMD_ID_UNUSED = 0, + NVMETCP_RAMROD_CMD_ID_INIT_FUNC = 1, + NVMETCP_RAMROD_CMD_ID_DESTROY_FUNC = 2, + MAX_NVMETCP_RAMROD_CMD_ID +}; + +#endif /* __NVMETCP_COMMON__ */ diff --git a/include/linux/qed/qed_if.h b/include/linux/qed/qed_if.h index 68d17a4fbf20..524f57821ba2 100644 --- a/include/linux/qed/qed_if.h +++ b/include/linux/qed/qed_if.h @@ -542,6 +542,26 @@ struct qed_iscsi_pf_params { u8 bdq_pbl_num_entries[3]; }; +struct qed_nvmetcp_pf_params { + u64 glbl_q_params_addr; + u16 cq_num_entries; + + u16 num_cons; + u16 num_tasks; + + u8 num_sq_pages_in_ring; + u8 num_r2tq_pages_in_ring; + u8 num_uhq_pages_in_ring; + + u8 num_queues; + u8 gl_rq_pi; + u8 gl_cmd_pi; + u8 debug_mode; + u8 ll2_ooo_queue_id; + + u16 min_rto; +}; + struct qed_rdma_pf_params { /* Supplied to QED during resource allocation (may affect the ILT and * the doorbell BAR). @@ -560,6 +580,7 @@ struct qed_pf_params { struct qed_eth_pf_params eth_pf_params; struct qed_fcoe_pf_params fcoe_pf_params; struct qed_iscsi_pf_params iscsi_pf_params; + struct qed_nvmetcp_pf_params nvmetcp_pf_params; struct qed_rdma_pf_params rdma_pf_params; }; @@ -662,6 +683,7 @@ enum qed_sb_type { enum qed_protocol { QED_PROTOCOL_ETH, QED_PROTOCOL_ISCSI, + QED_PROTOCOL_NVMETCP = QED_PROTOCOL_ISCSI, QED_PROTOCOL_FCOE, }; diff --git a/include/linux/qed/qed_nvmetcp_if.h b/include/linux/qed/qed_nvmetcp_if.h new file mode 100644 index 000000000000..4e0f4c02d347 --- /dev/null +++ b/include/linux/qed/qed_nvmetcp_if.h @@ -0,0 +1,71 @@ +/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */ +/* + * Copyright 2021 Marvell. All rights reserved. + */ + +#ifndef _QED_NVMETCP_IF_H +#define _QED_NVMETCP_IF_H +#include +#include + +typedef int (*nvmetcp_event_cb_t) (void *context, + u8 fw_event_code, void *fw_handle); + +struct qed_dev_nvmetcp_info { + struct qed_dev_info common; + + u8 num_cqs; +}; + +#define MAX_TID_BLOCKS_NVMETCP (512) +struct qed_nvmetcp_tid { + u32 size; /* In bytes per task */ + u32 num_tids_per_block; + u8 *blocks[MAX_TID_BLOCKS_NVMETCP]; +}; + +struct qed_nvmetcp_cb_ops { + struct qed_common_cb_ops common; +}; + +/** + * struct qed_nvmetcp_ops - qed NVMeTCP operations. + * @common: common operations pointer + * @ll2: light L2 operations pointer + * @fill_dev_info: fills NVMeTCP specific information + * @param cdev + * @param info + * @return 0 on success, otherwise error value. 
+ * @register_ops: register nvmetcp operations + * @param cdev + * @param ops - specified using qed_nvmetcp_cb_ops + * @param cookie - driver private + * @start: nvmetcp in FW + * @param cdev + * @param tasks - qed will fill information about tasks + * return 0 on success, otherwise error value. + * @stop: nvmetcp in FW + * @param cdev + * return 0 on success, otherwise error value. + */ +struct qed_nvmetcp_ops { + const struct qed_common_ops *common; + + const struct qed_ll2_ops *ll2; + + int (*fill_dev_info)(struct qed_dev *cdev, + struct qed_dev_nvmetcp_info *info); + + void (*register_ops)(struct qed_dev *cdev, + struct qed_nvmetcp_cb_ops *ops, void *cookie); + + int (*start)(struct qed_dev *cdev, + struct qed_nvmetcp_tid *tasks, + void *event_context, nvmetcp_event_cb_t async_event_cb); + + int (*stop)(struct qed_dev *cdev); +}; + +const struct qed_nvmetcp_ops *qed_get_nvmetcp_ops(void); +void qed_put_nvmetcp_ops(void); +#endif From patchwork Sun Feb 7 18:13:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shai Malin X-Patchwork-Id: 12073265 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-16.8 required=3.0 tests=BAYES_00, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E00E0C433E0 for ; Sun, 7 Feb 2021 18:16:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AAC4F64D9C for ; Sun, 7 Feb 2021 18:16:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229752AbhBGSPr (ORCPT ); Sun, 7 Feb 2021 13:15:47 -0500 Received: from mx0a-0016f401.pphosted.com ([67.231.148.174]:29508 "EHLO mx0b-0016f401.pphosted.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S229692AbhBGSO5 (ORCPT ); Sun, 7 Feb 2021 13:14:57 -0500 Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 117I6H9j029942; Sun, 7 Feb 2021 10:14:09 -0800 Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com with ESMTP id 36hsbrakms-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 07 Feb 2021 10:14:09 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Sun, 7 Feb 2021 10:14:08 -0800 Received: from lbtlvb-pcie154.il.qlogic.org (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Sun, 7 Feb 2021 10:14:04 -0800 From: Shai Malin To: , , CC: , , , , , , , , , , Dean Balandin Subject: [RFC PATCH v3 10/11] nvme-qedn: Add qedn probe Date: Sun, 7 Feb 2021 20:13:23 +0200 Message-ID: <20210207181324.11429-11-smalin@marvell.com> X-Mailer: git-send-email 2.16.6 In-Reply-To: <20210207181324.11429-1-smalin@marvell.com> References: <20210207181324.11429-1-smalin@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.369,18.0.737 definitions=2021-02-07_08:2021-02-05,2021-02-07 signatures=0 Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org X-Patchwork-State: RFC As part 
of the qedn init, the driver will register as a pci device and will work with the Marvell fastlinQ NICs. The HW ops- qed_nvmetcp_ops are similar to other "qed_*_ops" which are used by the qede, qedr, qedf and qedi device drivers. Struct qedn_ctx is per pci physical function (PF) container for PF-specific attributes and resources. Signed-off-by: Dean Balandin Signed-off-by: Prabhakar Kushwaha Signed-off-by: Michal Kalderon Signed-off-by: Shai Malin --- drivers/nvme/hw/Kconfig | 1 + drivers/nvme/hw/qedn/qedn.c | 193 +++++++++++++++++++++++++++++++++++- drivers/nvme/hw/qedn/qedn.h | 49 +++++++++ 3 files changed, 238 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/hw/Kconfig b/drivers/nvme/hw/Kconfig index 374f1f9dbd3d..91b1bd6f07d8 100644 --- a/drivers/nvme/hw/Kconfig +++ b/drivers/nvme/hw/Kconfig @@ -2,6 +2,7 @@ config NVME_QEDN tristate "Marvell NVM Express over Fabrics TCP offload" depends on NVME_TCP_OFFLOAD + select QED_NVMETCP help This enables the Marvell NVMe TCP offload support (qedn). diff --git a/drivers/nvme/hw/qedn/qedn.c b/drivers/nvme/hw/qedn/qedn.c index 1fe33335989c..11b31d086d27 100644 --- a/drivers/nvme/hw/qedn/qedn.c +++ b/drivers/nvme/hw/qedn/qedn.c @@ -14,6 +14,10 @@ #define CHIP_NUM_AHP_NVMETCP 0x8194 +const struct qed_nvmetcp_ops *qed_ops; + +/* Global context instance */ +struct qedn_global qedn_glb; static struct pci_device_id qedn_pci_tbl[] = { { PCI_VDEVICE(QLOGIC, CHIP_NUM_AHP_NVMETCP), 0 }, {0, 0}, @@ -94,12 +98,132 @@ static struct nvme_tcp_ofld_ops qedn_ofld_ops = { .send_req = qedn_send_req, }; +/* Initialize qedn fields, such as locks, lists, atomics, workqueues , hashes */ +static inline void qedn_init_pf_struct(struct qedn_ctx *qedn) +{ + /* Placeholder */ +} + +static inline void +qedn_init_core_probe_params(struct qed_probe_params *probe_params) +{ + memset(probe_params, 0, sizeof(*probe_params)); + probe_params->protocol = QED_PROTOCOL_NVMETCP; + probe_params->is_vf = false; + probe_params->recov_in_prog = 0; +} + +static inline int qedn_core_probe(struct qedn_ctx *qedn) +{ + struct qed_probe_params probe_params; + int rc = 0; + + qedn_init_core_probe_params(&probe_params); + pr_info("Starting QED probe\n"); + qedn->cdev = qed_ops->common->probe(qedn->pdev, &probe_params); + if (!qedn->cdev) { + rc = -ENODEV; + pr_err("QED probe failed\n"); + } + + return rc; +} + +static void qedn_add_pf_to_gl_list(struct qedn_ctx *qedn) +{ + mutex_lock(&qedn_glb.glb_mutex); + list_add_tail(&qedn->gl_pf_entry, &qedn_glb.qedn_pf_list); + mutex_unlock(&qedn_glb.glb_mutex); +} + +static void qedn_remove_pf_from_gl_list(struct qedn_ctx *qedn) +{ + mutex_lock(&qedn_glb.glb_mutex); + list_del_init(&qedn->gl_pf_entry); + mutex_unlock(&qedn_glb.glb_mutex); +} + +static int qedn_set_nvmetcp_pf_param(struct qedn_ctx *qedn) +{ + struct qed_nvmetcp_pf_params *pf_params; + + pf_params = &qedn->pf_params.nvmetcp_pf_params; + memset(pf_params, 0, sizeof(*pf_params)); + qedn->num_fw_cqs = min_t(u8, qedn->dev_info.num_cqs, num_online_cpus()); + + pf_params->num_cons = QEDN_MAX_CONNS_PER_PF; + pf_params->num_tasks = QEDN_MAX_TASKS_PER_PF; + + /* Placeholder - Initialize function level queues */ + + /* Placeholder - Initialize TCP params */ + + /* Queues */ + pf_params->num_sq_pages_in_ring = QEDN_NVMETCP_NUM_FW_SQ_PAGES * 2; + pf_params->num_r2tq_pages_in_ring = QEDN_NVMETCP_NUM_FW_SQ_PAGES; + pf_params->num_uhq_pages_in_ring = QEDN_NVMETCP_NUM_FW_SQ_PAGES; + pf_params->num_queues = qedn->num_fw_cqs; + pf_params->cq_num_entries = QEDN_FW_CQ_SIZE; + + /* the CQ SB pi */ + 
pf_params->gl_rq_pi = QEDN_PROTO_CQ_PROD_IDX; + + return 0; +} + +static inline int qedn_slowpath_start(struct qedn_ctx *qedn) +{ + struct qed_slowpath_params sp_params = {}; + int rc = 0; + + /* Start the Slowpath-process */ + sp_params.int_mode = QED_INT_MODE_MSIX; + sp_params.drv_major = QEDN_MAJOR_VERSION; + sp_params.drv_minor = QEDN_MINOR_VERSION; + sp_params.drv_rev = QEDN_REVISION_VERSION; + sp_params.drv_eng = QEDN_ENGINEERING_VERSION; + strlcpy(sp_params.name, "qedn NVMeTCP", QED_DRV_VER_STR_SIZE); + rc = qed_ops->common->slowpath_start(qedn->cdev, &sp_params); + if (rc) + pr_err("Cannot start slowpath\n"); + + return rc; +} + static void __qedn_remove(struct pci_dev *pdev) { struct qedn_ctx *qedn = pci_get_drvdata(pdev); + int rc; + + pr_notice("qedn remove started: abs PF id=%u\n", + qedn->dev_info.common.abs_pf_id); + + if (test_and_set_bit(QEDN_STATE_MODULE_REMOVE_ONGOING, &qedn->state)) { + pr_err("Remove already ongoing\n"); + + return; + } + + if (test_and_clear_bit(QEDN_STATE_REGISTERED_OFFLOAD_DEV, &qedn->state)) + nvme_tcp_ofld_unregister_dev(&qedn->qedn_ofld_dev); + + if (test_and_set_bit(QEDN_STATE_MFW_STATE, &qedn->state)) { + rc = qed_ops->common->update_drv_state(qedn->cdev, false); + if (rc) + pr_err("Failed to send drv state to MFW\n"); + } + + if (test_and_clear_bit(QEDN_STATE_GL_PF_LIST_ADDED, &qedn->state)) + qedn_remove_pf_from_gl_list(qedn); + else + pr_err("Failed to remove from global PF list\n"); + + if (test_and_clear_bit(QEDN_STATE_CORE_OPEN, &qedn->state)) + qed_ops->common->slowpath_stop(qedn->cdev); + + if (test_and_clear_bit(QEDN_STATE_CORE_PROBED, &qedn->state)) + qed_ops->common->remove(qedn->cdev); - pr_notice("Starting qedn_remove\n"); - nvme_tcp_ofld_unregister_dev(&qedn->qedn_ofld_dev); kfree(qedn); pr_notice("Ending qedn_remove successfully\n"); } @@ -139,15 +263,56 @@ static int __qedn_probe(struct pci_dev *pdev) if (!qedn) return -ENODEV; + qedn_init_pf_struct(qedn); + + /* QED probe */ + rc = qedn_core_probe(qedn); + if (rc) + goto exit_probe_and_release_mem; + + set_bit(QEDN_STATE_CORE_PROBED, &qedn->state); + + /* For global number of FW CQs */ + rc = qed_ops->fill_dev_info(qedn->cdev, &qedn->dev_info); + if (rc) { + pr_err("fill_dev_info failed\n"); + goto exit_probe_and_release_mem; + } + + qedn_add_pf_to_gl_list(qedn); + set_bit(QEDN_STATE_GL_PF_LIST_ADDED, &qedn->state); + + rc = qedn_set_nvmetcp_pf_param(qedn); + if (rc) + goto exit_probe_and_release_mem; + + qed_ops->common->update_pf_params(qedn->cdev, &qedn->pf_params); + rc = qedn_slowpath_start(qedn); + if (rc) + goto exit_probe_and_release_mem; + + set_bit(QEDN_STATE_CORE_OPEN, &qedn->state); + + rc = qed_ops->common->update_drv_state(qedn->cdev, true); + if (rc) { + pr_err("Failed to send drv state to MFW\n"); + goto exit_probe_and_release_mem; + } + + set_bit(QEDN_STATE_MFW_STATE, &qedn->state); + qedn->qedn_ofld_dev.ops = &qedn_ofld_ops; INIT_LIST_HEAD(&qedn->qedn_ofld_dev.entry); rc = nvme_tcp_ofld_register_dev(&qedn->qedn_ofld_dev); if (rc) - goto release_qedn; + goto exit_probe_and_release_mem; + + set_bit(QEDN_STATE_REGISTERED_OFFLOAD_DEV, &qedn->state); return 0; -release_qedn: - kfree(qedn); +exit_probe_and_release_mem: + __qedn_remove(pdev); + pr_err("probe ended with error\n"); return rc; } @@ -165,13 +330,30 @@ static struct pci_driver qedn_pci_driver = { .shutdown = qedn_shutdown, }; +static inline void qedn_init_global_contxt(void) +{ + INIT_LIST_HEAD(&qedn_glb.qedn_pf_list); + INIT_LIST_HEAD(&qedn_glb.ctrl_list); + mutex_init(&qedn_glb.glb_mutex); +} + static 
 static int __init qedn_init(void)
 {
 	int rc;

+	qedn_init_global_ctxt();
+
+	qed_ops = qed_get_nvmetcp_ops();
+	if (!qed_ops) {
+		pr_err("Failed to get QED NVMeTCP ops\n");
+
+		return -EINVAL;
+	}
+
 	rc = pci_register_driver(&qedn_pci_driver);
 	if (rc) {
 		pr_err("Failed to register pci driver\n");
+
 		return -EINVAL;
 	}

@@ -183,6 +365,7 @@ static int __init qedn_init(void)
 static void __exit qedn_cleanup(void)
 {
 	pci_unregister_driver(&qedn_pci_driver);
+	qed_put_nvmetcp_ops();
 	pr_notice("Unloading qedn ended\n");
 }

diff --git a/drivers/nvme/hw/qedn/qedn.h b/drivers/nvme/hw/qedn/qedn.h
index 401fde08000e..634e1217639a 100644
--- a/drivers/nvme/hw/qedn/qedn.h
+++ b/drivers/nvme/hw/qedn/qedn.h
@@ -3,12 +3,61 @@
  * Copyright 2021 Marvell. All rights reserved.
  */

+#include
+#include
+
 /* Driver includes */
 #include "../../host/tcp-offload.h"

+#define QEDN_MAJOR_VERSION 8
+#define QEDN_MINOR_VERSION 62
+#define QEDN_REVISION_VERSION 10
+#define QEDN_ENGINEERING_VERSION 0
+#define DRV_MODULE_VERSION __stringify(QEDN_MAJOR_VERSION) "."		\
+		__stringify(QEDN_MINOR_VERSION) "."			\
+		__stringify(QEDN_REVISION_VERSION) "."			\
+		__stringify(QEDN_ENGINEERING_VERSION)
+
 #define QEDN_MODULE_NAME "qedn"
+#define QEDN_MAX_TASKS_PER_PF (32 * 1024)
+#define QEDN_MAX_CONNS_PER_PF (8 * 1024)
+#define QEDN_FW_CQ_SIZE (4 * 1024)
+#define QEDN_PROTO_CQ_PROD_IDX 0
+#define QEDN_NVMETCP_NUM_FW_SQ_PAGES 4
+
+enum qedn_state {
+	QEDN_STATE_CORE_PROBED = 0,
+	QEDN_STATE_CORE_OPEN,
+	QEDN_STATE_GL_PF_LIST_ADDED,
+	QEDN_STATE_MFW_STATE,
+	QEDN_STATE_REGISTERED_OFFLOAD_DEV,
+	QEDN_STATE_MODULE_REMOVE_ONGOING,
+};
+
 struct qedn_ctx {
 	struct pci_dev *pdev;
+	struct qed_dev *cdev;
+	struct qed_dev_nvmetcp_info dev_info;
 	struct nvme_tcp_ofld_dev qedn_ofld_dev;
+	struct qed_pf_params pf_params;
+
+	/* Global PF list entry */
+	struct list_head gl_pf_entry;
+
+	/* Accessed with atomic bit ops, used with enum qedn_state */
+	unsigned long state;
+
+	/* Fast path queues */
+	u8 num_fw_cqs;
 };
+
+struct qedn_global {
+	struct list_head qedn_pf_list;
+
+	/* Host mode */
+	struct list_head ctrl_list;
+
+	/* Mutex for accessing the global struct */
+	struct mutex glb_mutex;
+};
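A note on the qedn_global structure added above: the PF list is intended to
be walked under glb_mutex. As a minimal editorial sketch of such a traversal
(the lookup helper below is hypothetical and not part of the patch; only the
struct fields and qedn_glb come from the diff above):

static struct qedn_ctx *qedn_pf_lookup_by_pdev(struct pci_dev *pdev)
{
	struct qedn_ctx *qedn;

	/* glb_mutex guards both qedn_pf_list and ctrl_list */
	mutex_lock(&qedn_glb.glb_mutex);
	list_for_each_entry(qedn, &qedn_glb.qedn_pf_list, gl_pf_entry) {
		if (qedn->pdev == pdev) {
			mutex_unlock(&qedn_glb.glb_mutex);
			return qedn;
		}
	}
	mutex_unlock(&qedn_glb.glb_mutex);

	return NULL;
}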
From patchwork Sun Feb 7 18:13:24 2021
X-Patchwork-Submitter: Shai Malin
X-Patchwork-Id: 12073267
From: Shai Malin
Subject: [RFC PATCH v3 11/11] nvme-qedn: Add IRQ and fast-path resources
 initializations
Date: Sun, 7 Feb 2021 20:13:24 +0200
Message-ID: <20210207181324.11429-12-smalin@marvell.com>
In-Reply-To: <20210207181324.11429-1-smalin@marvell.com>
References: <20210207181324.11429-1-smalin@marvell.com>
X-Mailing-List: netdev@vger.kernel.org
X-Patchwork-State: RFC

This patch adds qedn_fp_queue, a per-CPU-core element which handles all of
the connections on that CPU core. A qedn_fp_queue handles a group of
connections (NVMeoF QPs) that are processed on the same CPU core and share
the same FW-driver resources, with no requirement that they belong to the
same NVMeoF controller.

The per-qedn_fp_queue resources are the FW CQ and the FW status block:
- The FW CQ is used by the FW to notify the driver that an exchange has
  ended, passing the incoming NVMeoF CQE (if one exists) to the driver.
- The FW status block is used by the FW to notify the driver of producer
  updates on the FW CQE chain.

The FW fast-path queues are based on qed_chain.h.
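To make the producer/consumer relationship concrete, here is a rough
editorial sketch of how a fast-path handler could drain the FW CQ chain
against the status-block producer index. The handler and CQE handling are
placeholders and not part of this patch; the qed_chain accessors are the
existing qed_chain.h API:

static void qedn_fp_drain_cq(struct qedn_fp_queue *fp_q)
{
	u16 prod_idx;

	/* The FW advances the producer index in the status block memory;
	 * endianness handling is elided in this sketch.
	 */
	prod_idx = READ_ONCE(*fp_q->cq_prod);

	while (qed_chain_get_cons_idx(&fp_q->cq_chain) != prod_idx) {
		void *cqe = qed_chain_consume(&fp_q->cq_chain);

		/* Placeholder - parse the NVMeTCP FW CQE and complete
		 * the matching request
		 */
		(void)cqe;
	}
}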
Signed-off-by: Prabhakar Kushwaha
Signed-off-by: Michal Kalderon
Signed-off-by: Shai Malin
---
 drivers/nvme/hw/qedn/qedn.c | 284 +++++++++++++++++++++++++++++++++++-
 drivers/nvme/hw/qedn/qedn.h |  27 ++++
 2 files changed, 308 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/hw/qedn/qedn.c b/drivers/nvme/hw/qedn/qedn.c
index 11b31d086d27..64277dc28901 100644
--- a/drivers/nvme/hw/qedn/qedn.c
+++ b/drivers/nvme/hw/qedn/qedn.c
@@ -98,6 +98,103 @@ static struct nvme_tcp_ofld_ops qedn_ofld_ops = {
 	.send_req = qedn_send_req,
 };

+/* Fastpath IRQ handler */
+static irqreturn_t qedn_irq_handler(int irq, void *dev_id)
+{
+	/* Placeholder */
+
+	return IRQ_HANDLED;
+}
+
+static void qedn_sync_free_irqs(struct qedn_ctx *qedn)
+{
+	u16 vector_idx;
+	int i;
+
+	for (i = 0; i < qedn->num_fw_cqs; i++) {
+		vector_idx = i * qedn->dev_info.common.num_hwfns +
+			     qed_ops->common->get_affin_hwfn_idx(qedn->cdev);
+		synchronize_irq(qedn->int_info.msix[vector_idx].vector);
+		irq_set_affinity_hint(qedn->int_info.msix[vector_idx].vector,
+				      NULL);
+		free_irq(qedn->int_info.msix[vector_idx].vector,
+			 &qedn->fp_q_arr[i]);
+	}
+
+	qedn->int_info.used_cnt = 0;
+	qed_ops->common->set_fp_int(qedn->cdev, 0);
+}
+
+static int qedn_request_msix_irq(struct qedn_ctx *qedn)
+{
+	struct pci_dev *pdev = qedn->pdev;
+	struct qedn_fp_queue *fp_q = NULL;
+	int i, rc, cpu;
+	u16 vector_idx;
+	u32 vector;
+
+	cpu = cpumask_first(cpu_online_mask);
+	for (i = 0; i < qedn->num_fw_cqs; i++) {
+		fp_q = &qedn->fp_q_arr[i];
+		vector_idx = i * qedn->dev_info.common.num_hwfns +
+			     qed_ops->common->get_affin_hwfn_idx(qedn->cdev);
+		vector = qedn->int_info.msix[vector_idx].vector;
+		snprintf(fp_q->irqname, QEDN_IRQ_NAME_LEN,
+			 "qedn_queue_%x.%x.%x_%d",
+			 pdev->bus->number, PCI_SLOT(pdev->devfn),
+			 PCI_FUNC(pdev->devfn), i);
+		rc = request_irq(vector, qedn_irq_handler, QEDN_IRQ_NO_FLAGS,
+				 fp_q->irqname, fp_q);
+		if (rc) {
+			pr_err("request_irq failed.\n");
+			qedn_sync_free_irqs(qedn);
+
+			return rc;
+		}
+
+		fp_q->cpu = cpu;
+		qedn->int_info.used_cnt++;
+		rc = irq_set_affinity_hint(vector, get_cpu_mask(cpu));
+		cpu = cpumask_next_wrap(cpu, cpu_online_mask, -1, false);
+	}
+
+	return 0;
+}
+
+static int qedn_setup_irq(struct qedn_ctx *qedn)
+{
+	int rc = 0;
+	int rval;
+
+	rval = qed_ops->common->set_fp_int(qedn->cdev, qedn->num_fw_cqs);
+	if (rval <= 0) {
+		pr_err("set_fp_int returned no IRQs\n");
+
+		return -ENODEV;
+	}
+
+	if (rval < qedn->num_fw_cqs)
+		qedn->num_fw_cqs = rval;
+
+	rc = qed_ops->common->get_fp_int(qedn->cdev, &qedn->int_info);
+	if (rc) {
+		pr_err("get_fp_int failed\n");
+		goto exit_setup_int;
+	}
+
+	if (qedn->int_info.msix_cnt) {
+		rc = qedn_request_msix_irq(qedn);
+		goto exit_setup_int;
+	} else {
+		pr_err("msix_cnt = 0\n");
+		rc = -EINVAL;
+		goto exit_setup_int;
+	}
+
+exit_setup_int:
+
+	return rc;
+}
+
 /* Initialize qedn fields, such as locks, lists, atomics, workqueues, hashes */
 static inline void qedn_init_pf_struct(struct qedn_ctx *qedn)
 {
@@ -143,9 +240,151 @@ static void qedn_remove_pf_from_gl_list(struct qedn_ctx *qedn)
 	mutex_unlock(&qedn_glb.glb_mutex);
 }

+static void qedn_free_function_queues(struct qedn_ctx *qedn)
+{
+	struct qed_sb_info *sb_info = NULL;
+	struct qedn_fp_queue *fp_q;
+	int i;
+
+	/* Free workqueues */
+
+	/* Free the fast path queues */
+	for (i = 0; i < qedn->num_fw_cqs; i++) {
+		fp_q = &qedn->fp_q_arr[i];
+
+		/* Free SB */
+		sb_info = fp_q->sb_info;
+		if (sb_info && sb_info->sb_virt) {
+			qed_ops->common->sb_release(qedn->cdev, sb_info,
+						    fp_q->sb_id,
+						    QED_SB_TYPE_STORAGE);
+			dma_free_coherent(&qedn->pdev->dev,
+					  sizeof(*sb_info->sb_virt),
+					  (void *)sb_info->sb_virt,
+					  sb_info->sb_phys);
+			kfree(sb_info);
+			fp_q->sb_info = NULL;
+		}
+
+		qed_ops->common->chain_free(qedn->cdev, &fp_q->cq_chain);
+	}
+
+	if (qedn->fw_cq_array_virt)
+		dma_free_coherent(&qedn->pdev->dev,
+				  qedn->num_fw_cqs * sizeof(u64),
+				  qedn->fw_cq_array_virt,
+				  qedn->fw_cq_array_phy);
+	kfree(qedn->fp_q_arr);
+	qedn->fp_q_arr = NULL;
+}
+
+static int qedn_alloc_and_init_sb(struct qedn_ctx *qedn,
+				  struct qed_sb_info *sb_info, u16 sb_id)
+{
+	int rc = 0;
+
+	sb_info->sb_virt = dma_alloc_coherent(&qedn->pdev->dev,
+					      sizeof(struct status_block_e4),
+					      &sb_info->sb_phys, GFP_KERNEL);
+	if (!sb_info->sb_virt) {
+		pr_err("Status block allocation failed\n");
+
+		return -ENOMEM;
+	}
+
+	rc = qed_ops->common->sb_init(qedn->cdev, sb_info, sb_info->sb_virt,
+				      sb_info->sb_phys, sb_id,
+				      QED_SB_TYPE_STORAGE);
+	if (rc) {
+		pr_err("Status block initialization failed\n");
+
+		return rc;
+	}
+
+	return 0;
+}
+
+static int qedn_alloc_function_queues(struct qedn_ctx *qedn)
+{
+	struct qed_chain_init_params chain_params = {};
+	struct status_block_e4 *sb = NULL;
+	struct qedn_fp_queue *fp_q = NULL;
+	int rc = 0;
+	int i;
+
+	/* Placeholder - IO-path workqueues */
+
+	qedn->fp_q_arr = kcalloc(qedn->num_fw_cqs,
+				 sizeof(struct qedn_fp_queue), GFP_KERNEL);
+	if (!qedn->fp_q_arr)
+		return -ENOMEM;
+
+	qedn->fw_cq_array_virt = dma_alloc_coherent(&qedn->pdev->dev,
+						    qedn->num_fw_cqs * sizeof(u64),
+						    &qedn->fw_cq_array_phy,
+						    GFP_KERNEL);
+	if (!qedn->fw_cq_array_virt) {
+		rc = -ENOMEM;
+		goto mem_alloc_failure;
+	}
+
+	/* Placeholder - create task pools */
+
+	for (i = 0; i < qedn->num_fw_cqs; i++) {
+		fp_q = &qedn->fp_q_arr[i];
+		mutex_init(&fp_q->cq_mutex);
+
+		/* FW CQ */
+		chain_params.intended_use = QED_CHAIN_USE_TO_CONSUME;
+		chain_params.mode = QED_CHAIN_MODE_PBL;
+		chain_params.cnt_type = QED_CHAIN_CNT_TYPE_U16;
+		chain_params.num_elems = QEDN_FW_CQ_SIZE;
+		chain_params.elem_size = 64; /* Placeholder - sizeof(struct nvmetcp_fw_cqe) */
+
+		rc = qed_ops->common->chain_alloc(qedn->cdev,
+						  &fp_q->cq_chain,
+						  &chain_params);
+		if (rc) {
+			pr_err("CQ chain allocation failed\n");
+			goto mem_alloc_failure;
+		}
+
+		qedn->fw_cq_array_virt[i] = qed_chain_get_pbl_phys(&fp_q->cq_chain);
+
+		/* SB */
+		fp_q->sb_info = kzalloc(sizeof(*fp_q->sb_info), GFP_KERNEL);
+		if (!fp_q->sb_info) {
+			rc = -ENOMEM;
+			goto mem_alloc_failure;
+		}
+
+		fp_q->sb_id = i;
+		rc = qedn_alloc_and_init_sb(qedn, fp_q->sb_info, fp_q->sb_id);
+		if (rc) {
+			pr_err("SB allocation and initialization failed.\n");
+			goto mem_alloc_failure;
+		}
+
+		sb = fp_q->sb_info->sb_virt;
+		fp_q->cq_prod = (u16 *)&sb->pi_array[QEDN_PROTO_CQ_PROD_IDX];
+		fp_q->qedn = qedn;
+
+		/* Placeholder - Init IO-path workqueue */
+
+		/* Placeholder - Init IO-path resources */
+	}
+
+	return 0;
+
+mem_alloc_failure:
+	pr_err("Function queue allocation failed\n");
+	qedn_free_function_queues(qedn);
+
+	return rc;
+}
+
 static int qedn_set_nvmetcp_pf_param(struct qedn_ctx *qedn)
 {
 	struct qed_nvmetcp_pf_params *pf_params;
+	int rc;

 	pf_params = &qedn->pf_params.nvmetcp_pf_params;
 	memset(pf_params, 0, sizeof(*pf_params));
@@ -154,9 +393,13 @@ static int qedn_set_nvmetcp_pf_param(struct qedn_ctx *qedn)
 	pf_params->num_cons = QEDN_MAX_CONNS_PER_PF;
 	pf_params->num_tasks = QEDN_MAX_TASKS_PER_PF;

-	/* Placeholder - Initialize function level queues */
+	rc = qedn_alloc_function_queues(qedn);
+	if (rc) {
+		pr_err("Global queue allocation failed.\n");
+		goto err_alloc_mem;
+	}

-	/* Placeholder - Initialize TCP params */
+	set_bit(QEDN_STATE_FP_WORK_THREAD_SET, &qedn->state);

 	/* Queues */
 	pf_params->num_sq_pages_in_ring = QEDN_NVMETCP_NUM_FW_SQ_PAGES * 2;
@@ -164,11 +407,14 @@ static int qedn_set_nvmetcp_pf_param(struct qedn_ctx *qedn)
 	pf_params->num_uhq_pages_in_ring = QEDN_NVMETCP_NUM_FW_SQ_PAGES;
 	pf_params->num_queues = qedn->num_fw_cqs;
 	pf_params->cq_num_entries = QEDN_FW_CQ_SIZE;
+	pf_params->glbl_q_params_addr = qedn->fw_cq_array_phy;

 	/* the CQ SB pi */
 	pf_params->gl_rq_pi = QEDN_PROTO_CQ_PROD_IDX;

-	return 0;
+err_alloc_mem:
+
+	return rc;
 }

 static inline int qedn_slowpath_start(struct qedn_ctx *qedn)
@@ -218,9 +464,22 @@ static void __qedn_remove(struct pci_dev *pdev)
 	else
 		pr_err("Failed to remove from global PF list\n");

+	if (test_and_clear_bit(QEDN_STATE_IRQ_SET, &qedn->state))
+		qedn_sync_free_irqs(qedn);
+
+	if (test_and_clear_bit(QEDN_STATE_NVMETCP_OPEN, &qedn->state))
+		qed_ops->stop(qedn->cdev);
+
+	rc = qed_ops->common->update_drv_state(qedn->cdev, false);
+	if (rc)
+		pr_err("Failed to send drv state to MFW\n");
+
 	if (test_and_clear_bit(QEDN_STATE_CORE_OPEN, &qedn->state))
 		qed_ops->common->slowpath_stop(qedn->cdev);

+	if (test_and_clear_bit(QEDN_STATE_FP_WORK_THREAD_SET, &qedn->state))
+		qedn_free_function_queues(qedn);
+
 	if (test_and_clear_bit(QEDN_STATE_CORE_PROBED, &qedn->state))
 		qed_ops->common->remove(qedn->cdev);
@@ -301,6 +560,25 @@ static int __qedn_probe(struct pci_dev *pdev)

 	set_bit(QEDN_STATE_MFW_STATE, &qedn->state);

+	rc = qedn_setup_irq(qedn);
+	if (rc)
+		goto exit_probe_and_release_mem;
+
+	set_bit(QEDN_STATE_IRQ_SET, &qedn->state);
+
+	/* NVMeTCP start HW PF */
+	rc = qed_ops->start(qedn->cdev,
+			    NULL /* Placeholder for FW IO-path resources */,
+			    qedn,
+			    NULL /* Placeholder for FW Event callback */);
+	if (rc) {
+		rc = -ENODEV;
+		pr_err("Cannot start NVMeTCP Function\n");
+		goto exit_probe_and_release_mem;
+	}
+
+	set_bit(QEDN_STATE_NVMETCP_OPEN, &qedn->state);
+
 	qedn->qedn_ofld_dev.ops = &qedn_ofld_ops;
 	INIT_LIST_HEAD(&qedn->qedn_ofld_dev.entry);
 	rc = nvme_tcp_ofld_register_dev(&qedn->qedn_ofld_dev);

diff --git a/drivers/nvme/hw/qedn/qedn.h b/drivers/nvme/hw/qedn/qedn.h
index 634e1217639a..17cebf741314 100644
--- a/drivers/nvme/hw/qedn/qedn.h
+++ b/drivers/nvme/hw/qedn/qedn.h
@@ -5,6 +5,7 @@

 #include
 #include
+#include

 /* Driver includes */
 #include "../../host/tcp-offload.h"
@@ -26,18 +27,41 @@
 #define QEDN_PROTO_CQ_PROD_IDX 0
 #define QEDN_NVMETCP_NUM_FW_SQ_PAGES 4

+#define QEDN_PAGE_SIZE 4096 /* FW page size - Configurable */
+#define QEDN_MAX_GLOBAL_QUEUES_PAIRS 16
+#define QEDN_IRQ_NAME_LEN 24
+#define QEDN_IRQ_NO_FLAGS 0
+
+/* TCP defines */
+#define QEDN_TCP_RTO_DEFAULT 280
+
 enum qedn_state {
 	QEDN_STATE_CORE_PROBED = 0,
 	QEDN_STATE_CORE_OPEN,
 	QEDN_STATE_GL_PF_LIST_ADDED,
 	QEDN_STATE_MFW_STATE,
+	QEDN_STATE_NVMETCP_OPEN,
+	QEDN_STATE_IRQ_SET,
+	QEDN_STATE_FP_WORK_THREAD_SET,
 	QEDN_STATE_REGISTERED_OFFLOAD_DEV,
 	QEDN_STATE_MODULE_REMOVE_ONGOING,
 };

+struct qedn_fp_queue {
+	struct qedn_ctx *qedn;
+	struct qed_chain cq_chain;
+	struct qed_sb_info *sb_info;
+	u16 sb_id;
+	u16 *cq_prod;
+	unsigned int cpu;
+	char irqname[QEDN_IRQ_NAME_LEN];
+	struct mutex cq_mutex; /* cq handler mutex */
+};
+
 struct qedn_ctx {
 	struct pci_dev *pdev;
 	struct qed_dev *cdev;
+	struct qed_int_info int_info;
 	struct qed_dev_nvmetcp_info dev_info;
 	struct nvme_tcp_ofld_dev qedn_ofld_dev;
 	struct qed_pf_params pf_params;
@@ -50,6 +74,9 @@ struct qedn_ctx {

 	/* Fast path queues */
 	u8 num_fw_cqs;
+	struct qedn_fp_queue *fp_q_arr;
+	u64 *fw_cq_array_virt;
+	dma_addr_t fw_cq_array_phy; /* Physical address of fw_cq_array_virt */
 };

 struct qedn_global {
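A broader pattern worth illustrating from this series: every probe failure
funnels into __qedn_remove(), and the qedn->state bit operations guarantee
that only the init steps which actually completed are unwound, in reverse
order. Reduced to a skeleton, with hypothetical "foo" names that are not
part of the patches (the step functions are assumed stubs):

enum foo_state {
	FOO_STATE_CORE_PROBED = 0,
	FOO_STATE_REGISTERED,
};

struct foo_ctx {
	unsigned long state; /* accessed with atomic bit ops */
};

static void foo_remove(struct foo_ctx *ctx)
{
	/* Undo steps in reverse order; skip any step that never ran */
	if (test_and_clear_bit(FOO_STATE_REGISTERED, &ctx->state))
		foo_unregister(ctx);

	if (test_and_clear_bit(FOO_STATE_CORE_PROBED, &ctx->state))
		foo_core_remove(ctx);
}

static int foo_probe(struct foo_ctx *ctx)
{
	int rc;

	rc = foo_core_probe(ctx);
	if (rc)
		goto out_remove;
	set_bit(FOO_STATE_CORE_PROBED, &ctx->state);

	rc = foo_register(ctx);
	if (rc)
		goto out_remove;
	set_bit(FOO_STATE_REGISTERED, &ctx->state);

	return 0;

out_remove:
	foo_remove(ctx); /* safe: only completed steps are undone */
	return rc;
}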