From patchwork Thu Jan 14 15:10:30 2021
X-Patchwork-Submitter: Boris Pismenny
X-Patchwork-Id: 12019749
X-Patchwork-Delegate: kuba@kernel.org
From: Boris Pismenny
To: kuba@kernel.org, davem@davemloft.net, saeedm@nvidia.com, hch@lst.de,
    sagi@grimberg.me, axboe@fb.com, kbusch@kernel.org,
    viro@zeniv.linux.org.uk, edumazet@google.com, dsahern@gmail.com,
    smalin@marvell.com
Cc: boris.pismenny@gmail.com, linux-nvme@lists.infradead.org,
    netdev@vger.kernel.org, benishay@nvidia.com, ogerlitz@nvidia.com,
    yorayz@nvidia.com, Or Gerlitz, Yoray Zack
Subject: [PATCH v2 net-next 18/21] net/mlx5e: NVMEoTCP ddp setup and resync
Date: Thu, 14 Jan 2021 17:10:30 +0200
Message-Id: <20210114151033.13020-19-borisp@mellanox.com>
In-Reply-To: <20210114151033.13020-1-borisp@mellanox.com>
References: <20210114151033.13020-1-borisp@mellanox.com>
X-Mailing-List: netdev@vger.kernel.org

From: Ben Ben-Ishay

NVMEoTCP offload registers a buffer for every NVMe request in order to
perform direct data placement. The registration is done via KLM UMR
WQEs. The driver's resync handler advertises the software resync
response to the hardware via a static params WQE.
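For illustration only, here is a minimal sketch of the per-request
registration described above, reduced to its DMA-mapping bookkeeping:
map the request's scatterlist, total its byte length, and record it
under the NVMe command id so the teardown path can unmap it later.
struct ccid_entry, ddp_register_buffer(), and max_sgl are hypothetical
stand-ins for the driver's ccid table and its mlx5e_get_max_sgl()
limit; the KLM UMR WQE post itself is omitted.

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/types.h>

/* Hypothetical per-command bookkeeping entry, indexed by NVMe
 * command id (ccid); mirrors the fields the patch fills in. */
struct ccid_entry {
	struct scatterlist *sgl;	/* caller-owned scatterlist */
	int sgl_length;			/* number of mapped segments */
	u32 size;			/* total payload bytes */
};

static int ddp_register_buffer(struct device *dev, struct ccid_entry *table,
			       u16 ccid, struct scatterlist *sgl, int nents,
			       int max_sgl)
{
	struct scatterlist *sg;
	u32 size = 0;
	int i, count;

	/* Map the buffer for device writes (direct data placement). */
	count = dma_map_sg(dev, sgl, nents, DMA_FROM_DEVICE);
	if (!count)
		return -EIO;

	/* Respect the device's SGL entry limit, as the patch does. */
	if (count > max_sgl) {
		dma_unmap_sg(dev, sgl, nents, DMA_FROM_DEVICE);
		return -ENOSPC;
	}

	/* Total the mapped segment lengths. */
	for_each_sg(sgl, sg, count, i)
		size += sg->length;

	/* Remember the mapping under its command id for teardown. */
	table[ccid].sgl = sgl;
	table[ccid].sgl_length = count;
	table[ccid].size = size;
	return 0;
}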
Signed-off-by: Boris Pismenny
Signed-off-by: Ben Ben-Ishay
Signed-off-by: Or Gerlitz
Signed-off-by: Yoray Zack
---
 .../mellanox/mlx5/core/en_accel/nvmeotcp.c    | 33 +++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
index 22fa1e9f3e7a..c2967cb4cb6a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
@@ -751,6 +751,30 @@ mlx5e_nvmeotcp_ddp_setup(struct net_device *netdev,
 			 struct sock *sk,
 			 struct tcp_ddp_io *ddp)
 {
+	struct mlx5e_priv *priv = netdev_priv(netdev);
+	struct scatterlist *sg = ddp->sg_table.sgl;
+	struct mlx5e_nvmeotcp_queue *queue;
+	struct mlx5_core_dev *mdev;
+	int i, size = 0, count = 0;
+
+	queue = container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
+
+	mdev = queue->priv->mdev;
+	count = dma_map_sg(mdev->device, ddp->sg_table.sgl, ddp->nents,
+			   DMA_FROM_DEVICE);
+
+	if (WARN_ON(count > mlx5e_get_max_sgl(mdev)))
+		return -ENOSPC;
+
+	for (i = 0; i < count; i++)
+		size += sg[i].length;
+
+	queue->ccid_table[ddp->command_id].size = size;
+	queue->ccid_table[ddp->command_id].ddp = ddp;
+	queue->ccid_table[ddp->command_id].sgl = sg;
+	queue->ccid_table[ddp->command_id].ccid_gen++;
+	queue->ccid_table[ddp->command_id].sgl_length = count;
+
 	return 0;
 }
 
@@ -788,11 +812,11 @@ mlx5e_nvmeotcp_ddp_teardown(struct net_device *netdev,
 			    struct tcp_ddp_io *ddp,
 			    void *ddp_ctx)
 {
-	struct mlx5e_nvmeotcp_queue *queue =
-		(struct mlx5e_nvmeotcp_queue *)tcp_ddp_get_ctx(sk);
+	struct mlx5e_nvmeotcp_queue *queue;
 	struct mlx5e_priv *priv = netdev_priv(netdev);
 	struct nvmeotcp_queue_entry *q_entry;
 
+	queue = container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
 	q_entry = &queue->ccid_table[ddp->command_id];
 	WARN_ON(q_entry->sgl_length == 0);
 
@@ -808,6 +832,11 @@ static void
 mlx5e_nvmeotcp_dev_resync(struct net_device *netdev,
 			  struct sock *sk, u32 seq)
 {
+	struct mlx5e_nvmeotcp_queue *queue =
+		container_of(tcp_ddp_get_ctx(sk), struct mlx5e_nvmeotcp_queue, tcp_ddp_ctx);
+
+	queue->after_resync_cqe = 1;
+	mlx5e_nvmeotcp_rx_post_static_params_wqe(queue, seq);
 }
 
 static const struct tcp_ddp_dev_ops mlx5e_nvmeotcp_ops = {
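For context, not part of the patch: the teardown and resync hunks
replace the old (struct mlx5e_nvmeotcp_queue *)tcp_ddp_get_ctx(sk)
cast with container_of(), which stays correct even if the embedded
tcp_ddp_ctx is not the first member of the queue structure. A minimal
sketch of that pattern, with stand-in type names:

#include <linux/kernel.h>

struct ddp_ctx {			/* stand-in for tcp_ddp_ctx */
	unsigned long opaque;
};

struct ddp_queue {			/* stand-in for mlx5e_nvmeotcp_queue */
	int after_resync_cqe;		/* deliberately not the first member */
	struct ddp_ctx ctx;
};

static struct ddp_queue *queue_from_ctx(struct ddp_ctx *ctx)
{
	/* Recover the enclosing queue from a pointer to its embedded
	 * context; a plain cast would only work if ctx were first. */
	return container_of(ctx, struct ddp_queue, ctx);
}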