From patchwork Thu Feb 11 21:10:38 2021
X-Patchwork-Submitter: Boris Pismenny
X-Patchwork-Id: 12084159
X-Patchwork-Delegate: kuba@kernel.org
From: Boris Pismenny
CC: Boris Pismenny, Or Gerlitz, Yoray Zack
Subject: [PATCH v4 net-next 15/21] net/mlx5e: NVMEoTCP use KLM UMRs
Date: Thu, 11 Feb 2021 23:10:38 +0200
Message-ID: <20210211211044.32701-16-borisp@mellanox.com>
In-Reply-To: <20210211211044.32701-1-borisp@mellanox.com>
References: <20210211211044.32701-1-borisp@mellanox.com>
X-Mailing-List: netdev@vger.kernel.org

From: Ben Ben-Ishay

NVMEoTCP offload uses buffer registration for the DDP operation. Every
request is built from an SG list whose elements may be larger than 4K,
so the appropriate way to perform the buffer registration is with KLM
UMRs.
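For context: unlike MTT entries, which each map a fixed page-sized chunk,
a KLM entry carries its own byte count alongside the key and address, so a
single entry can describe an SG element of arbitrary length. A minimal
illustrative sketch of that mapping follows; it is not part of this patch,
sg_to_klm() is a hypothetical helper, and it assumes struct mlx5_klm from
<linux/mlx5/qp.h> plus the usual scatterlist DMA accessors.

#include <linux/scatterlist.h>
#include <linux/mlx5/qp.h>

/* Illustrative only: describe one SG element with a single KLM entry. */
static void sg_to_klm(const struct scatterlist *sg, u32 lkey,
		      struct mlx5_klm *klm)
{
	/* explicit per-entry byte count, so lengths > 4K need no splitting */
	klm->bcount = cpu_to_be32(sg_dma_len(sg));
	/* memory key (e.g. the device's local key) covering the buffer */
	klm->key = cpu_to_be32(lkey);
	/* DMA address of this SG element */
	klm->va = cpu_to_be64(sg_dma_address(sg));
}
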
Signed-off-by: Boris Pismenny
Signed-off-by: Ben Ben-Ishay
Signed-off-by: Or Gerlitz
Signed-off-by: Yoray Zack
---
 drivers/net/ethernet/mellanox/mlx5/core/en.h  |   5 +-
 .../net/ethernet/mellanox/mlx5/core/en/txrx.h |   3 +
 .../mellanox/mlx5/core/en_accel/nvmeotcp.c    | 116 ++++++++++++++++++
 .../mlx5/core/en_accel/nvmeotcp_utils.h       |  12 ++
 .../net/ethernet/mellanox/mlx5/core/en_rx.c   |  12 +-
 5 files changed, 145 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp_utils.h

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 1982e0a58be7..734b54e3090a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -236,7 +236,10 @@ struct mlx5e_umr_wqe {
 	struct mlx5_wqe_ctrl_seg       ctrl;
 	struct mlx5_wqe_umr_ctrl_seg   uctrl;
 	struct mlx5_mkey_seg           mkc;
-	struct mlx5_mtt                inline_mtts[0];
+	union {
+		struct mlx5_mtt        inline_mtts[0];
+		struct mlx5_klm        inline_klms[0];
+	};
 };
 
 extern const char mlx5e_self_tests[][ETH_GSTRING_LEN];
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
index 4880f2179273..e997b7230028 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/txrx.h
@@ -34,6 +34,9 @@ enum mlx5e_icosq_wqe_type {
 	MLX5E_ICOSQ_WQE_SET_PSV_TLS,
 	MLX5E_ICOSQ_WQE_GET_PSV_TLS,
 #endif
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+	MLX5E_ICOSQ_WQE_UMR_NVME_TCP,
+#endif
 };
 
 /* General */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
index 37a86ca83913..b1c3cb11c96b 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include "en_accel/nvmeotcp.h"
+#include "en_accel/nvmeotcp_utils.h"
 #include "en_accel/fs_tcp.h"
 #include "en/txrx.h"
 
@@ -19,6 +20,121 @@ static const struct rhashtable_params rhash_queues = {
 	.max_size = MAX_NVMEOTCP_QUEUES,
 };
 
+static void
+fill_nvmeotcp_klm_wqe(struct mlx5e_nvmeotcp_queue *queue,
+		      struct mlx5e_umr_wqe *wqe, u16 ccid, u32 klm_entries,
+		      u16 klm_offset)
+{
+	struct scatterlist *sgl_mkey;
+	u32 lkey, i;
+
+	lkey = queue->priv->mdev->mlx5e_res.mkey.key;
+	for (i = 0; i < klm_entries; i++) {
+		sgl_mkey = &queue->ccid_table[ccid].sgl[i + klm_offset];
+		wqe->inline_klms[i].bcount = cpu_to_be32(sgl_mkey->length);
+		wqe->inline_klms[i].key = cpu_to_be32(lkey);
+		wqe->inline_klms[i].va = cpu_to_be64(sgl_mkey->dma_address);
+	}
+
+	for (; i < ALIGN(klm_entries, KLM_ALIGNMENT); i++) {
+		wqe->inline_klms[i].bcount = 0;
+		wqe->inline_klms[i].key = 0;
+		wqe->inline_klms[i].va = 0;
+	}
+}
+
+static void
+build_nvmeotcp_klm_umr(struct mlx5e_nvmeotcp_queue *queue,
+		       struct mlx5e_umr_wqe *wqe, u16 ccid, int klm_entries,
+		       u32 klm_offset, u32 len)
+{
+	u32 id = queue->ccid_table[ccid].klm_mkey.key;
+	struct mlx5_wqe_umr_ctrl_seg *ucseg = &wqe->uctrl;
+	struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+	struct mlx5_mkey_seg *mkc = &wqe->mkc;
+
+	u32 sqn = queue->sq->icosq.sqn;
+	u16 pc = queue->sq->icosq.pc;
+
+	cseg->opmod_idx_opcode = cpu_to_be32((pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
+					     MLX5_OPCODE_UMR);
+	cseg->qpn_ds = cpu_to_be32((sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
+				   MLX5E_KLM_UMR_DS_CNT(ALIGN(klm_entries, KLM_ALIGNMENT)));
+	cseg->general_id = cpu_to_be32(id);
+
+	if (!klm_offset) {
+		ucseg->mkey_mask |= cpu_to_be64(MLX5_MKEY_MASK_XLT_OCT_SIZE |
+						MLX5_MKEY_MASK_LEN | MLX5_MKEY_MASK_FREE);
+		mkc->xlt_oct_size = cpu_to_be32(ALIGN(len, KLM_ALIGNMENT));
+		mkc->len = cpu_to_be64(queue->ccid_table[ccid].size);
+	}
+
+	ucseg->flags = MLX5_UMR_INLINE | MLX5_UMR_TRANSLATION_OFFSET_EN;
+	ucseg->xlt_octowords = cpu_to_be16(ALIGN(klm_entries, KLM_ALIGNMENT));
+	ucseg->xlt_offset = cpu_to_be16(klm_offset);
+	fill_nvmeotcp_klm_wqe(queue, wqe, ccid, klm_entries, klm_offset);
+}
+
+static void
+mlx5e_nvmeotcp_fill_wi(struct mlx5e_nvmeotcp_queue *nvmeotcp_queue,
+		       struct mlx5e_icosq *sq, u32 wqe_bbs, u16 pi)
+{
+	struct mlx5e_icosq_wqe_info *wi = &sq->db.wqe_info[pi];
+
+	wi->num_wqebbs = wqe_bbs;
+	wi->wqe_type = MLX5E_ICOSQ_WQE_UMR_NVME_TCP;
+}
+
+static void
+post_klm_wqe(struct mlx5e_nvmeotcp_queue *queue,
+	     u16 ccid,
+	     u32 klm_length,
+	     u32 *klm_offset)
+{
+	struct mlx5e_icosq *sq = &queue->sq->icosq;
+	u32 wqe_bbs, cur_klm_entries;
+	struct mlx5e_umr_wqe *wqe;
+	u16 pi, wqe_sz;
+
+	cur_klm_entries = min_t(int, queue->max_klms_per_wqe,
+				klm_length - *klm_offset);
+	wqe_sz = MLX5E_KLM_UMR_WQE_SZ(ALIGN(cur_klm_entries, KLM_ALIGNMENT));
+	wqe_bbs = DIV_ROUND_UP(wqe_sz, MLX5_SEND_WQE_BB);
+	pi = mlx5e_icosq_get_next_pi(sq, wqe_bbs);
+	wqe = MLX5E_NVMEOTCP_FETCH_KLM_WQE(sq, pi);
+	mlx5e_nvmeotcp_fill_wi(queue, sq, wqe_bbs, pi);
+	build_nvmeotcp_klm_umr(queue, wqe, ccid, cur_klm_entries, *klm_offset,
+			       klm_length);
+	*klm_offset += cur_klm_entries;
+	sq->pc += wqe_bbs;
+	sq->doorbell_cseg = &wqe->ctrl;
+}
+
+static int
+mlx5e_nvmeotcp_post_klm_wqe(struct mlx5e_nvmeotcp_queue *queue,
+			    u16 ccid,
+			    u32 klm_length)
+{
+	u32 klm_offset = 0, wqes, wqe_sz, max_wqe_bbs, i, room;
+	struct mlx5e_icosq *sq = &queue->sq->icosq;
+
+	/* TODO: set stricter wqe_sz; using max for now */
+	wqes = DIV_ROUND_UP(klm_length, queue->max_klms_per_wqe);
+	wqe_sz = MLX5E_KLM_UMR_WQE_SZ(queue->max_klms_per_wqe);
+
+	max_wqe_bbs = DIV_ROUND_UP(wqe_sz, MLX5_SEND_WQE_BB);
+
+	room = mlx5e_stop_room_for_wqe(max_wqe_bbs) * wqes;
+	if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, room)))
+		return -ENOSPC;
+
+	for (i = 0; i < wqes; i++)
+		post_klm_wqe(queue, ccid, klm_length, &klm_offset);
+
+	mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, sq->doorbell_cseg);
+	return 0;
+}
+
 static int
 mlx5e_nvmeotcp_offload_limits(struct net_device *netdev,
 			      struct tcp_ddp_limits *limits)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp_utils.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp_utils.h
new file mode 100644
index 000000000000..329e114d6571
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/nvmeotcp_utils.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2021 Mellanox Technologies. */
+#ifndef __MLX5E_NVMEOTCP_UTILS_H__
+#define __MLX5E_NVMEOTCP_UTILS_H__
+
+#include "en.h"
+
+#define MLX5E_NVMEOTCP_FETCH_KLM_WQE(sq, pi) \
+	((struct mlx5e_umr_wqe *)\
+	 mlx5e_fetch_wqe(&(sq)->wq, pi, sizeof(struct mlx5e_umr_wqe)))
+
+#endif /* __MLX5E_NVMEOTCP_UTILS_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 4de5a97ceac6..df6dc111f6df 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -612,16 +612,20 @@ void mlx5e_free_icosq_descs(struct mlx5e_icosq *sq)
 		ci = mlx5_wq_cyc_ctr2ix(&sq->wq, sqcc);
 		wi = &sq->db.wqe_info[ci];
 		sqcc += wi->num_wqebbs;
-#ifdef CONFIG_MLX5_EN_TLS
 		switch (wi->wqe_type) {
+#ifdef CONFIG_MLX5_EN_TLS
 		case MLX5E_ICOSQ_WQE_SET_PSV_TLS:
 			mlx5e_ktls_handle_ctx_completion(wi);
 			break;
 		case MLX5E_ICOSQ_WQE_GET_PSV_TLS:
 			mlx5e_ktls_handle_get_psv_completion(wi, sq);
 			break;
-		}
 #endif
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+		case MLX5E_ICOSQ_WQE_UMR_NVME_TCP:
+			break;
+#endif
+		}
 	}
 	sq->cc = sqcc;
 }
@@ -690,6 +694,10 @@ int mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
 			case MLX5E_ICOSQ_WQE_GET_PSV_TLS:
 				mlx5e_ktls_handle_get_psv_completion(wi, sq);
 				break;
+#endif
+#ifdef CONFIG_MLX5_EN_NVMEOTCP
+			case MLX5E_ICOSQ_WQE_UMR_NVME_TCP:
+				break;
 #endif
 			default:
 				netdev_WARN_ONCE(cq->netdev,