From patchwork Wed May 29 16:00:46 2024
X-Patchwork-Submitter: Aurelien Aptel
X-Patchwork-Id: 13679164
X-Patchwork-Delegate: kuba@kernel.org
From: Aurelien Aptel
To: linux-nvme@lists.infradead.org, netdev@vger.kernel.org, sagi@grimberg.me,
 hch@lst.de, kbusch@kernel.org, axboe@fb.com, chaitanyak@nvidia.com,
 davem@davemloft.net, kuba@kernel.org
Subject: [PATCH v25 13/20] net/mlx5e: NVMEoTCP, offload initialization
Date: Wed, 29 May 2024 16:00:46 +0000
Message-Id: <20240529160053.111531-14-aaptel@nvidia.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240529160053.111531-1-aaptel@nvidia.com>
References: <20240529160053.111531-1-aaptel@nvidia.com>

From: Ben Ben-Ishay

This commit introduces the driver structures and initialization blocks
for NVMEoTCP offload. The mlx5 nvmeotcp structures are:

- queue (mlx5e_nvmeotcp_queue) - pairs 1:1 with nvme-tcp driver queues
  and handles the offloading parts. The mlx5e queue is accessed in the
  ddp ops: initialized on sk_add, used in ddp setup, teardown and resync
  and in the fast path when dealing with packets, and destroyed in the
  sk_del op.
- queue entry (mlx5e_nvmeotcp_queue_entry) - pairs 1:1 with an offloaded
  IO from its queue. Keeps pointers to the SG elements describing the
  buffers used for the IO and to its ddp context.
- queue handler (mlx5e_nvmeotcp_queue_handler) - we use an icosq per
  NVMe-TCP queue for UMR mapping as part of the ddp offload. These
  dedicated SQs are unique in the sense that they are driven directly by
  the NVMe-TCP layer to submit and invalidate ddp requests.
  Since the life-cycle of these icosqs is not tied to the channels, we
  create dedicated napi contexts for polling them, so that channels can
  be re-created while the offload is in use. The queue handler holds
  pointers to the CQ associated with the queue's SQ and to the napi
  context.
- main offload context (mlx5e_nvmeotcp) - holds an IDA and a hash table.
  Each offloaded queue gets an ID from the IDA, and the (ID, queue)
  pairs are kept in the hash table. The ID is programmed as a flow tag
  that HW sets on the completion (CQE) of every packet belonging to this
  queue (matched by 5-tuple steering). The packet fast path uses the
  flow tag to look up the hash table and retrieve the queue for
  processing; see the sketch below.

We query the nvmeotcp HW capabilities to see whether the offload can be
supported, and use 128B CQEs when it is. By default the offload is off;
it can be enabled with `ethtool --ulp-ddp <dev> nvme-tcp-ddp on`.
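As an illustration of the flow-tag lookup described above, here is a
minimal sketch of how the RX fast path can resolve a queue from a CQE
flow tag. It is not part of this patch: the helper name and the
refcount handling are illustrative assumptions, while rhash_queues and
the structures are the ones defined below.

    /* Hypothetical helper: map a CQE flow tag to its offloaded queue.
     * rhash_queues (defined in nvmeotcp.c below) hashes
     * struct mlx5e_nvmeotcp_queue by its int id.
     */
    static struct mlx5e_nvmeotcp_queue *
    mlx5e_nvmeotcp_get_queue(struct mlx5e_nvmeotcp *nvmeotcp, int id)
    {
            struct mlx5e_nvmeotcp_queue *queue;

            rcu_read_lock();
            queue = rhashtable_lookup_fast(&nvmeotcp->queue_hash, &id,
                                           rhash_queues);
            /* take a reference so the queue outlives this lookup */
            if (queue && !refcount_inc_not_zero(&queue->ref_count))
                    queue = NULL;
            rcu_read_unlock();
            return queue;
    }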
Signed-off-by: Ben Ben-Ishay
Signed-off-by: Boris Pismenny
Signed-off-by: Or Gerlitz
Signed-off-by: Yoray Zack
Signed-off-by: Shai Malin
Signed-off-by: Aurelien Aptel
Reviewed-by: Tariq Toukan
---
 .../net/ethernet/mellanox/mlx5/core/Kconfig   |  11 +
 .../net/ethernet/mellanox/mlx5/core/Makefile  |   2 +
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   4 +
 .../net/ethernet/mellanox/mlx5/core/en/fs.h   |   4 +-
 .../ethernet/mellanox/mlx5/core/en/params.c   |  12 +-
 .../ethernet/mellanox/mlx5/core/en/params.h   |   3 +
 .../mellanox/mlx5/core/en_accel/en_accel.h    |   3 +
 .../mellanox/mlx5/core/en_accel/fs_tcp.h      |   2 +-
 .../mellanox/mlx5/core/en_accel/nvmeotcp.c    | 217 ++++++++++++++++++
 .../mellanox/mlx5/core/en_accel/nvmeotcp.h    | 120 ++++++++++
 .../ethernet/mellanox/mlx5/core/en_ethtool.c  |   6 +
 .../net/ethernet/mellanox/mlx5/core/en_fs.c   |   4 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c |  16 ++
 .../net/ethernet/mellanox/mlx5/core/main.c    |   1 +
 14 files changed, 396 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 685335832a93..5935c2cdefec 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -164,6 +164,17 @@ config MLX5_EN_TLS
 	help
 	  Build support for TLS cryptography-offload acceleration in the NIC.
 
+config MLX5_EN_NVMEOTCP
+	bool "NVMEoTCP acceleration"
+	depends on ULP_DDP
+	depends on MLX5_CORE_EN
+	default y
+	help
+	  Build support for NVMEoTCP acceleration in the NIC.
+	  This includes Direct Data Placement and CRC offload.
+	  Note: Support for hardware with this capability needs to be selected
+	  for this option to become available.
+
 config MLX5_SW_STEERING
 	bool "Mellanox Technologies software-managed steering"
 	depends on MLX5_CORE_EN && MLX5_ESWITCH
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 76dc5a9b9648..41cb8f831632 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -109,6 +109,8 @@ mlx5_core-$(CONFIG_MLX5_EN_TLS) += en_accel/ktls_stats.o \
 				   en_accel/fs_tcp.o en_accel/ktls.o en_accel/ktls_txrx.o \
 				   en_accel/ktls_tx.o en_accel/ktls_rx.o
 
+mlx5_core-$(CONFIG_MLX5_EN_NVMEOTCP) += en_accel/fs_tcp.o en_accel/nvmeotcp.o
+
 mlx5_core-$(CONFIG_MLX5_SW_STEERING) += steering/dr_domain.o steering/dr_table.o \
 					steering/dr_matcher.o steering/dr_rule.o \
 					steering/dr_icm_pool.o steering/dr_buddy.o \
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index b02df4b15c97..60cbed7881cb 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -329,6 +329,7 @@ struct mlx5e_params {
 	int hard_mtu;
 	bool ptp_rx;
 	__be32 terminate_lkey_be;
+	bool nvmeotcp;
 };
 
 static inline u8 mlx5e_get_dcb_num_tc(struct mlx5e_params *params)
@@ -944,6 +945,9 @@ struct mlx5e_priv {
 #endif
 #ifdef CONFIG_MLX5_EN_TLS
 	struct mlx5e_tls *tls;
+#endif
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+	struct mlx5e_nvmeotcp *nvmeotcp;
 #endif
 	struct devlink_health_reporter *tx_reporter;
 	struct devlink_health_reporter *rx_reporter;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index 4d6225e0eec7..780e8b5ae8e0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -77,7 +77,7 @@ enum {
 	MLX5E_INNER_TTC_FT_LEVEL,
 	MLX5E_FS_TT_UDP_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
 	MLX5E_FS_TT_ANY_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
-#ifdef CONFIG_MLX5_EN_TLS
+#if defined(CONFIG_MLX5_EN_TLS) || defined(CONFIG_MLX5_EN_NVMEOTCP)
 	MLX5E_ACCEL_FS_TCP_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
 #endif
 #ifdef CONFIG_MLX5_EN_ARFS
@@ -169,7 +169,7 @@ struct mlx5e_fs_any *mlx5e_fs_get_any(struct mlx5e_flow_steering *fs);
 void mlx5e_fs_set_any(struct mlx5e_flow_steering *fs, struct mlx5e_fs_any *any);
 struct mlx5e_fs_udp *mlx5e_fs_get_udp(struct mlx5e_flow_steering *fs);
 void mlx5e_fs_set_udp(struct mlx5e_flow_steering *fs, struct mlx5e_fs_udp *udp);
-#ifdef CONFIG_MLX5_EN_TLS
+#if defined(CONFIG_MLX5_EN_TLS) || defined(CONFIG_MLX5_EN_NVMEOTCP)
 struct mlx5e_accel_fs_tcp *mlx5e_fs_get_accel_tcp(struct mlx5e_flow_steering *fs);
 void mlx5e_fs_set_accel_tcp(struct mlx5e_flow_steering *fs, struct mlx5e_accel_fs_tcp *accel_tcp);
 #endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
index ec819dfc98be..f26155bb95ac 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
@@ -819,7 +819,8 @@ static void mlx5e_build_common_cq_param(struct mlx5_core_dev *mdev,
 	void *cqc = param->cqc;
 
 	MLX5_SET(cqc, cqc, uar_page, mdev->priv.uar->index);
-	if (MLX5_CAP_GEN(mdev, cqe_128_always) && cache_line_size() >= 128)
+	if (MLX5_CAP_GEN(mdev, cqe_128_always) &&
+	    (cache_line_size() >= 128 || param->force_cqe128))
 		MLX5_SET(cqc, cqc, cqe_sz, CQE_STRIDE_128_PAD);
 }
 
@@ -849,6 +850,9 @@ static void mlx5e_build_rx_cq_param(struct mlx5_core_dev *mdev,
 	void *cqc = param->cqc;
 	u8 log_cq_size;
 
+	/* nvme-tcp offload mandates 128 byte cqes */
+	param->force_cqe128 |= IS_ENABLED(CONFIG_MLX5_EN_NVMEOTCP) && params->nvmeotcp;
+
 	switch (params->rq_wq_type) {
 	case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
 		hw_stridx = MLX5_CAP_GEN(mdev, mini_cqe_resp_stride_index);
@@ -1184,9 +1188,9 @@ static u8 mlx5e_build_async_icosq_log_wq_sz(struct mlx5_core_dev *mdev)
 	return MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
 }
 
-static void mlx5e_build_icosq_param(struct mlx5_core_dev *mdev,
-				    u8 log_wq_size,
-				    struct mlx5e_sq_param *param)
+void mlx5e_build_icosq_param(struct mlx5_core_dev *mdev,
+			     u8 log_wq_size,
+			     struct mlx5e_sq_param *param)
 {
 	void *sqc = param->sqc;
 	void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
index 749b2ec0436e..0b9a73158951 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.h
@@ -17,6 +17,7 @@ struct mlx5e_cq_param {
 	struct mlx5_wq_param wq;
 	u16 eq_ix;
 	u8 cq_period_mode;
+	bool force_cqe128;
 };
 
 struct mlx5e_rq_param {
@@ -140,6 +141,8 @@ void mlx5e_build_xdpsq_param(struct mlx5_core_dev *mdev,
 			     struct mlx5e_params *params,
 			     struct mlx5e_xsk_param *xsk,
 			     struct mlx5e_sq_param *param);
+void mlx5e_build_icosq_param(struct mlx5_core_dev *mdev,
+			     u8 log_wq_size, struct mlx5e_sq_param *param);
 int mlx5e_build_channel_param(struct mlx5_core_dev *mdev,
 			      struct mlx5e_params *params,
 			      struct mlx5e_channel_param *cparam);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
index caa34b9c161e..070dabb03bd4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h
@@ -40,6 +40,7 @@
 #include "en_accel/ktls.h"
 #include "en_accel/ktls_txrx.h"
 #include
+#include "en_accel/nvmeotcp.h"
 #include "en.h"
 #include "en/txrx.h"
 
@@ -202,11 +203,13 @@ static inline void mlx5e_accel_tx_finish(struct mlx5e_txqsq *sq,
 
 static inline int mlx5e_accel_init_rx(struct mlx5e_priv *priv)
 {
+	mlx5e_nvmeotcp_init_rx(priv);
 	return mlx5e_ktls_init_rx(priv);
 }
 
 static inline void mlx5e_accel_cleanup_rx(struct mlx5e_priv *priv)
 {
+	mlx5e_nvmeotcp_cleanup_rx(priv);
 	mlx5e_ktls_cleanup_rx(priv);
 }
 
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.h
index 7e899c716267..6714644986a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/fs_tcp.h
@@ -6,7 +6,7 @@
 
 #include "en/fs.h"
 
-#ifdef CONFIG_MLX5_EN_TLS
+#if defined(CONFIG_MLX5_EN_TLS) || defined(CONFIG_MLX5_EN_NVMEOTCP)
 int mlx5e_accel_fs_tcp_create(struct mlx5e_flow_steering *fs);
 void mlx5e_accel_fs_tcp_destroy(struct mlx5e_flow_steering *fs);
 struct mlx5_flow_handle *mlx5e_accel_fs_add_sk(struct mlx5e_flow_steering *fs,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
new file mode 100644
index 000000000000..9965757873f9
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
@@ -0,0 +1,217 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+// Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES.
+
+#include
+#include
+#include "en_accel/nvmeotcp.h"
+#include "en_accel/fs_tcp.h"
+#include "en/txrx.h"
+
+#define MAX_NUM_NVMEOTCP_QUEUES (4000)
+#define MIN_NUM_NVMEOTCP_QUEUES (1)
+
+static const struct rhashtable_params rhash_queues = {
+	.key_len = sizeof(int),
+	.key_offset = offsetof(struct mlx5e_nvmeotcp_queue, id),
+	.head_offset = offsetof(struct mlx5e_nvmeotcp_queue, hash),
+	.automatic_shrinking = true,
+	.min_size = MIN_NUM_NVMEOTCP_QUEUES,
+	.max_size = MAX_NUM_NVMEOTCP_QUEUES,
+};
+
+static int
+mlx5e_nvmeotcp_offload_limits(struct net_device *netdev,
+			      struct ulp_ddp_limits *limits)
+{
+	return 0;
+}
+
+static int
+mlx5e_nvmeotcp_queue_init(struct net_device *netdev,
+			  struct sock *sk,
+			  struct ulp_ddp_config *tconfig)
+{
+	return 0;
+}
+
+static void
+mlx5e_nvmeotcp_queue_teardown(struct net_device *netdev,
+			      struct sock *sk)
+{
+}
+
+static int
+mlx5e_nvmeotcp_ddp_setup(struct net_device *netdev,
+			 struct sock *sk,
+			 struct ulp_ddp_io *ddp)
+{
+	return 0;
+}
+
+static void
+mlx5e_nvmeotcp_ddp_teardown(struct net_device *netdev,
+			    struct sock *sk,
+			    struct ulp_ddp_io *ddp,
+			    void *ddp_ctx)
+{
+}
+
+static void
+mlx5e_nvmeotcp_ddp_resync(struct net_device *netdev,
+			  struct sock *sk, u32 seq)
+{
+}
+
+int set_ulp_ddp_nvme_tcp(struct net_device *netdev, bool enable)
+{
+	struct mlx5e_priv *priv = netdev_priv(netdev);
+	struct mlx5e_params new_params;
+	int err = 0;
+
+	/* There may be offloaded queues when a netlink callback to disable the feature is made.
+	 * Hence, we can't destroy the tcp flow-table since it may be referenced by the offload
+	 * related flows and we'll keep the 128B CQEs on the channel RQs. Also, since we don't
+	 * deref/destroy the fs tcp table when the feature is disabled, we don't ref it again
+	 * if the feature is enabled multiple times.
+	 */
+	if (!enable || priv->nvmeotcp->enabled)
+		return 0;
+
+	err = mlx5e_accel_fs_tcp_create(priv->fs);
+	if (err)
+		return err;
+
+	new_params = priv->channels.params;
+	new_params.nvmeotcp = enable;
+	err = mlx5e_safe_switch_params(priv, &new_params, NULL, NULL, true);
+	if (err)
+		goto fs_tcp_destroy;
+
+	priv->nvmeotcp->enabled = true;
+	return 0;
+
+fs_tcp_destroy:
+	mlx5e_accel_fs_tcp_destroy(priv->fs);
+	return err;
+}
+
+static int mlx5e_ulp_ddp_set_caps(struct net_device *netdev, unsigned long *new_caps,
+				  struct netlink_ext_ack *extack)
+{
+	struct mlx5e_priv *priv = netdev_priv(netdev);
+	DECLARE_BITMAP(old_caps, ULP_DDP_CAP_COUNT);
+	struct mlx5e_params *params;
+	int ret = 0;
+	int nvme = -1;
+
+	mutex_lock(&priv->state_lock);
+	params = &priv->channels.params;
+	bitmap_copy(old_caps, priv->nvmeotcp->ddp_caps.active, ULP_DDP_CAP_COUNT);
+
+	/* always handle nvme-tcp-ddp and nvme-tcp-ddgst-rx together (all or nothing) */
+
+	if (ulp_ddp_cap_turned_on(old_caps, new_caps, ULP_DDP_CAP_NVME_TCP) &&
+	    ulp_ddp_cap_turned_on(old_caps, new_caps, ULP_DDP_CAP_NVME_TCP_DDGST_RX)) {
+		if (MLX5E_GET_PFLAG(params, MLX5E_PFLAG_RX_CQE_COMPRESS)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "NVMe-TCP offload not supported when CQE compress is active. Disable rx_cqe_compress ethtool private flag first");
+			goto out;
+		}
+
+		if (netdev->features & (NETIF_F_LRO | NETIF_F_GRO_HW)) {
+			NL_SET_ERR_MSG_MOD(extack,
+					   "NVMe-TCP offload not supported when HW_GRO/LRO is active. Disable rx-gro-hw ethtool feature first");
+			goto out;
+		}
+		nvme = 1;
+	} else if (ulp_ddp_cap_turned_off(old_caps, new_caps, ULP_DDP_CAP_NVME_TCP) &&
+		   ulp_ddp_cap_turned_off(old_caps, new_caps, ULP_DDP_CAP_NVME_TCP_DDGST_RX)) {
+		nvme = 0;
+	}
+
+	if (nvme >= 0) {
+		ret = set_ulp_ddp_nvme_tcp(netdev, nvme);
+		if (ret)
+			goto out;
+		change_bit(ULP_DDP_CAP_NVME_TCP, priv->nvmeotcp->ddp_caps.active);
+		change_bit(ULP_DDP_CAP_NVME_TCP_DDGST_RX, priv->nvmeotcp->ddp_caps.active);
+	}
+
+out:
+	mutex_unlock(&priv->state_lock);
+	return ret;
+}
+
+static void mlx5e_ulp_ddp_get_caps(struct net_device *dev,
+				   struct ulp_ddp_dev_caps *caps)
+{
+	struct mlx5e_priv *priv = netdev_priv(dev);
+
+	mutex_lock(&priv->state_lock);
+	memcpy(caps, &priv->nvmeotcp->ddp_caps, sizeof(*caps));
+	mutex_unlock(&priv->state_lock);
+}
+
+const struct ulp_ddp_dev_ops mlx5e_nvmeotcp_ops = {
+	.limits = mlx5e_nvmeotcp_offload_limits,
+	.sk_add = mlx5e_nvmeotcp_queue_init,
+	.sk_del = mlx5e_nvmeotcp_queue_teardown,
+	.setup = mlx5e_nvmeotcp_ddp_setup,
+	.teardown = mlx5e_nvmeotcp_ddp_teardown,
+	.resync = mlx5e_nvmeotcp_ddp_resync,
+	.set_caps = mlx5e_ulp_ddp_set_caps,
+	.get_caps = mlx5e_ulp_ddp_get_caps,
+};
+
+void mlx5e_nvmeotcp_cleanup_rx(struct mlx5e_priv *priv)
+{
+	if (priv->nvmeotcp && priv->nvmeotcp->enabled)
+		mlx5e_accel_fs_tcp_destroy(priv->fs);
+}
+
+int mlx5e_nvmeotcp_init(struct mlx5e_priv *priv)
+{
+	struct mlx5e_nvmeotcp *nvmeotcp = NULL;
+	int ret = 0;
+
+	if (!(MLX5_CAP_GEN(priv->mdev, nvmeotcp) &&
+	      MLX5_CAP_DEV_NVMEOTCP(priv->mdev, zerocopy) &&
+	      MLX5_CAP_DEV_NVMEOTCP(priv->mdev, crc_rx) &&
+	      MLX5_CAP_GEN(priv->mdev, cqe_128_always)))
+		return 0;
+
+	nvmeotcp = kzalloc(sizeof(*nvmeotcp), GFP_KERNEL);
+
+	if (!nvmeotcp)
+		return -ENOMEM;
+
+	ida_init(&nvmeotcp->queue_ids);
+	ret = rhashtable_init(&nvmeotcp->queue_hash, &rhash_queues);
+	if (ret)
+		goto err_ida;
+
+	/* report ULP DDP as supported, but don't enable it by default */
+	set_bit(ULP_DDP_CAP_NVME_TCP, nvmeotcp->ddp_caps.hw);
+	set_bit(ULP_DDP_CAP_NVME_TCP_DDGST_RX, nvmeotcp->ddp_caps.hw);
+	nvmeotcp->enabled = false;
+	priv->nvmeotcp = nvmeotcp;
+	return 0;
+
+err_ida:
+	ida_destroy(&nvmeotcp->queue_ids);
+	kfree(nvmeotcp);
+	return ret;
+}
+
+void mlx5e_nvmeotcp_cleanup(struct mlx5e_priv *priv)
+{
+	struct mlx5e_nvmeotcp *nvmeotcp = priv->nvmeotcp;
+
+	if (!nvmeotcp)
+		return;
+
+	rhashtable_destroy(&nvmeotcp->queue_hash);
+	ida_destroy(&nvmeotcp->queue_ids);
+	kfree(nvmeotcp);
+	priv->nvmeotcp = NULL;
+}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.h
new file mode 100644
index 000000000000..29546992791f
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.h
@@ -0,0 +1,120 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. */
+#ifndef __MLX5E_NVMEOTCP_H__
+#define __MLX5E_NVMEOTCP_H__
+
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+
+#include
+#include "en.h"
+#include "en/params.h"
+
+struct mlx5e_nvmeotcp_queue_entry {
+	struct mlx5e_nvmeotcp_queue *queue;
+	u32 sgl_length;
+	u32 klm_mkey;
+	struct scatterlist *sgl;
+	u32 ccid_gen;
+	u64 size;
+
+	/* for the ddp invalidate done callback */
+	void *ddp_ctx;
+	struct ulp_ddp_io *ddp;
+};
+
+struct mlx5e_nvmeotcp_queue_handler {
+	struct napi_struct napi;
+	struct mlx5e_cq *cq;
+};
+
+/**
+ * struct mlx5e_nvmeotcp_queue - mlx5 metadata for NVMEoTCP queue
+ * @ulp_ddp_ctx: Generic ulp ddp context
+ * @tir: Destination TIR created for NVMEoTCP offload
+ * @fh: Flow handle representing the 5-tuple steering for this flow
+ * @id: Flow tag ID used to identify this queue
+ * @size: NVMEoTCP queue depth
+ * @ccid_gen: Generation ID for the CCID, used to avoid conflicts in DDP
+ * @max_klms_per_wqe: Number of KLMs per DDP operation
+ * @hash: Hash table of queues mapped by @id
+ * @pda: Padding alignment
+ * @tag_buf_table_id: Tag buffer table for CCIDs
+ * @dgst: Digest supported (header and/or data)
+ * @sq: Send queue used for posting umrs
+ * @ref_count: Reference count for this structure
+ * @after_resync_cqe: Indicate if resync occurred
+ * @ccid_table: Table holding metadata for each CC (Command Capsule)
+ * @ccid: ID of the current CC
+ * @ccsglidx: Index within the scatter-gather list (SGL) of the current CC
+ * @ccoff: Offset within the current CC
+ * @ccoff_inner: Current offset within the @ccsglidx element
+ * @channel_ix: Channel IX for this nvmeotcp_queue
+ * @sk: The socket used by the NVMe-TCP queue
+ * @crc_rx: CRC Rx offload indication for this queue
+ * @priv: mlx5e netdev priv
+ * @static_params_done: Async completion structure for the initial umr mapping
+ *			synchronization
+ * @sq_lock: Spin lock for the icosq
+ * @qh: Completion queue handler for processing umr completions
+ */
+struct mlx5e_nvmeotcp_queue {
+	struct ulp_ddp_ctx ulp_ddp_ctx;
+	struct mlx5e_tir tir;
+	struct mlx5_flow_handle *fh;
+	int id;
+	u32 size;
+	/* needed when the upper layer immediately reuses CCID + some packet loss happens */
+	u32 ccid_gen;
+	u32 max_klms_per_wqe;
+	struct rhash_head hash;
+	int pda;
+	u32 tag_buf_table_id;
+	u8 dgst;
+	struct mlx5e_icosq sq;
+
+	/* data-path section cache aligned */
+	refcount_t ref_count;
+	/* for MASK HW resync cqe */
+	bool after_resync_cqe;
+	struct mlx5e_nvmeotcp_queue_entry *ccid_table;
+	/* current ccid fields */
+	int ccid;
+	int ccsglidx;
+	off_t ccoff;
+	int ccoff_inner;
+
+	u32 channel_ix;
+	struct sock *sk;
+	u8 crc_rx:1;
+	/* for ddp invalidate flow */
+	struct mlx5e_priv *priv;
+	/* end of data-path section */
+
+	struct completion static_params_done;
+	/* spin lock for the ico sq, ULP can issue requests from multiple contexts */
+	spinlock_t sq_lock;
+	struct mlx5e_nvmeotcp_queue_handler qh;
+};
+
+struct mlx5e_nvmeotcp {
+	struct ida queue_ids;
+	struct rhashtable queue_hash;
+	struct ulp_ddp_dev_caps ddp_caps;
+	bool enabled;
+};
+
+int mlx5e_nvmeotcp_init(struct mlx5e_priv *priv);
+int set_ulp_ddp_nvme_tcp(struct net_device *netdev, bool enable);
+void mlx5e_nvmeotcp_cleanup(struct mlx5e_priv *priv);
+static inline void mlx5e_nvmeotcp_init_rx(struct mlx5e_priv *priv) {}
+void mlx5e_nvmeotcp_cleanup_rx(struct mlx5e_priv *priv);
+extern const struct ulp_ddp_dev_ops mlx5e_nvmeotcp_ops;
+#else
+
+static inline int mlx5e_nvmeotcp_init(struct mlx5e_priv *priv) { return 0; }
+static inline void mlx5e_nvmeotcp_cleanup(struct mlx5e_priv *priv) {}
+static inline int set_ulp_ddp_nvme_tcp(struct net_device *dev, bool en) { return -EOPNOTSUPP; }
+static inline void mlx5e_nvmeotcp_init_rx(struct mlx5e_priv *priv) {}
+static inline void mlx5e_nvmeotcp_cleanup_rx(struct mlx5e_priv *priv) {}
+#endif
+#endif /* __MLX5E_NVMEOTCP_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index 3320f12ba2db..50ea4a34e78c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -41,6 +41,7 @@
 #include "en/ptp.h"
 #include "lib/clock.h"
 #include "en/fs_ethtool.h"
+#include "en_accel/nvmeotcp.h"
 
 void mlx5e_ethtool_get_drvinfo(struct mlx5e_priv *priv,
 			       struct ethtool_drvinfo *drvinfo)
@@ -2127,6 +2128,11 @@ int mlx5e_modify_rx_cqe_compression_locked(struct mlx5e_priv *priv, bool new_val
 			return -EINVAL;
 	}
 
+	if (priv->channels.params.nvmeotcp) {
+		netdev_warn(priv->netdev, "Can't set CQE compression after ULP DDP NVMe-TCP offload\n");
+		return -EINVAL;
+	}
+
 	new_params = priv->channels.params;
 	MLX5E_SET_PFLAG(&new_params, MLX5E_PFLAG_RX_CQE_COMPRESS, new_val);
 	if (rx_filter)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 8c5b291a171f..a8275b348aa4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -62,7 +62,7 @@ struct mlx5e_flow_steering {
 #ifdef CONFIG_MLX5_EN_ARFS
 	struct mlx5e_arfs_tables *arfs;
 #endif
-#ifdef CONFIG_MLX5_EN_TLS
+#if defined(CONFIG_MLX5_EN_TLS) || defined(CONFIG_MLX5_EN_NVMEOTCP)
 	struct mlx5e_accel_fs_tcp *accel_tcp;
 #endif
 	struct mlx5e_fs_udp *udp;
@@ -1555,7 +1555,7 @@ void mlx5e_fs_set_any(struct mlx5e_flow_steering *fs, struct mlx5e_fs_any *any)
 	fs->any = any;
 }
 
-#ifdef CONFIG_MLX5_EN_TLS
+#if defined(CONFIG_MLX5_EN_TLS) || defined(CONFIG_MLX5_EN_NVMEOTCP)
 struct mlx5e_accel_fs_tcp *mlx5e_fs_get_accel_tcp(struct mlx5e_flow_steering *fs)
 {
 	return fs->accel_tcp;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index a29b8b6bd2ac..fd138e2dd61b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -52,6 +52,7 @@
 #include "en_accel/macsec.h"
 #include "en_accel/en_accel.h"
 #include "en_accel/ktls.h"
+#include "en_accel/nvmeotcp.h"
 #include "lib/vxlan.h"
 #include "lib/clock.h"
 #include "en/port.h"
@@ -4384,6 +4385,13 @@ static netdev_features_t mlx5e_fix_features(struct net_device *netdev,
 		features &= ~NETIF_F_NETNS_LOCAL;
 	}
 
+	if (features & (NETIF_F_LRO | NETIF_F_GRO_HW)) {
+		if (params->nvmeotcp) {
+			netdev_warn(netdev, "Disabling HW-GRO/LRO, not supported after ULP DDP NVMe-TCP offload\n");
+			features &= ~(NETIF_F_LRO | NETIF_F_GRO_HW);
+		}
+	}
+
 	mutex_unlock(&priv->state_lock);
 
 	return features;
@@ -5142,6 +5150,9 @@ const struct net_device_ops mlx5e_netdev_ops = {
 	.ndo_has_offload_stats = mlx5e_has_offload_stats,
 	.ndo_get_offload_stats = mlx5e_get_offload_stats,
 #endif
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+	.ulp_ddp_ops = &mlx5e_nvmeotcp_ops,
+#endif
 };
 
 static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout)
@@ -5492,6 +5503,10 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
 	if (err)
 		mlx5_core_err(mdev, "TLS initialization failed, %d\n", err);
 
+	err = mlx5e_nvmeotcp_init(priv);
+	if (err)
+		mlx5_core_err(mdev, "NVMEoTCP initialization failed, %d\n", err);
+
 	mlx5e_health_create_reporters(priv);
 
 	/* If netdev is already registered (e.g. move from uplink to nic profile),
@@ -5512,6 +5527,7 @@ static int mlx5e_nic_init(struct mlx5_core_dev *mdev,
 static void mlx5e_nic_cleanup(struct mlx5e_priv *priv)
 {
 	mlx5e_health_destroy_reporters(priv);
+	mlx5e_nvmeotcp_cleanup(priv);
 	mlx5e_ktls_cleanup(priv);
 	mlx5e_fs_cleanup(priv->fs);
 	debugfs_remove_recursive(priv->dfs_root);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 6574c145dc1e..24a7f2cba8c2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -1762,6 +1762,7 @@ static const int types[] = {
 	MLX5_CAP_MACSEC,
 	MLX5_CAP_ADV_VIRTUALIZATION,
 	MLX5_CAP_CRYPTO,
+	MLX5_CAP_DEV_NVMEOTCP,
 };
 
 static void mlx5_hca_caps_free(struct mlx5_core_dev *dev)
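
For reference: mlx5e_ulp_ddp_set_caps() above relies on
ulp_ddp_cap_turned_on()/ulp_ddp_cap_turned_off() from the ULP DDP
infrastructure patches earlier in this series. A minimal sketch of
their semantics, as implied by their usage in this patch (an
assumption, not the authoritative definitions):

    /* bit set in new_caps but not in old_caps: caller asked to enable it */
    static inline bool ulp_ddp_cap_turned_on(unsigned long *old_caps,
                                             unsigned long *new_caps,
                                             int cap_bit)
    {
            return !test_bit(cap_bit, old_caps) && test_bit(cap_bit, new_caps);
    }

    /* bit set in old_caps but not in new_caps: caller asked to disable it */
    static inline bool ulp_ddp_cap_turned_off(unsigned long *old_caps,
                                              unsigned long *new_caps,
                                              int cap_bit)
    {
            return test_bit(cap_bit, old_caps) && !test_bit(cap_bit, new_caps);
    }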