From patchwork Mon Mar 18 12:24:16 2019
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 10857541
From: Yishai Hadas
To: linux-rdma@vger.kernel.org
Cc: yishaih@mellanox.com, guyle@mellanox.com, Alexr@mellanox.com,
    jgg@mellanox.com, majd@mellanox.com
Subject: [PATCH rdma-core 3/6] mlx5: Support inline data WR over new post send API
Date: Mon, 18 Mar 2019 14:24:16 +0200
Message-Id: <1552911859-4073-4-git-send-email-yishaih@mellanox.com>
In-Reply-To: <1552911859-4073-1-git-send-email-yishaih@mellanox.com>
References: <1552911859-4073-1-git-send-email-yishaih@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Guy Levi

This is a complementary part of the new post send API support. The inline
data setters are now supported as well:
- ibv_wr_set_inline_data()
- ibv_wr_set_inline_data_list()

Signed-off-by: Guy Levi
Signed-off-by: Yishai Hadas
---
 providers/mlx5/mlx5.h |   1 +
 providers/mlx5/qp.c   | 155 +++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 155 insertions(+), 1 deletion(-)

diff --git a/providers/mlx5/mlx5.h b/providers/mlx5/mlx5.h
index 71220a2..b31619c 100644
--- a/providers/mlx5/mlx5.h
+++ b/providers/mlx5/mlx5.h
@@ -513,6 +513,7 @@ struct mlx5_qp {
 	struct mlx5_bf *bf;
 
 	/* Start of new post send API specific fields */
+	bool inl_wqe;
 	uint8_t cur_setters_cnt;
 	uint8_t fm_cache_rb;
 	int err;
diff --git a/providers/mlx5/qp.c b/providers/mlx5/qp.c
index c933ee6..8cff584 100644
--- a/providers/mlx5/qp.c
+++ b/providers/mlx5/qp.c
@@ -1154,6 +1154,7 @@ static void mlx5_send_wr_start(struct ibv_qp_ex *ibqp)
 	mqp->fm_cache_rb = mqp->fm_cache;
 	mqp->err = 0;
 	mqp->nreq = 0;
+	mqp->inl_wqe = 0;
 }
 
 static int mlx5_send_wr_complete(struct ibv_qp_ex *ibqp)
@@ -1168,7 +1169,7 @@ static int mlx5_send_wr_complete(struct ibv_qp_ex *ibqp)
 		goto out;
 	}
 
-	post_send_db(mqp, mqp->bf, mqp->nreq, 0, mqp->cur_size,
+	post_send_db(mqp, mqp->bf, mqp->nreq, mqp->inl_wqe, mqp->cur_size,
 		     mqp->cur_ctrl);
 
 out:
@@ -1570,6 +1571,154 @@ mlx5_send_wr_set_sge_list_ud_xrc(struct ibv_qp_ex *ibqp, size_t num_sge,
 		mqp->cur_setters_cnt++;
 }
 
+static inline void memcpy_to_wqe(struct mlx5_qp *mqp, void *dest, void *src,
+				 size_t n)
+{
+	if (unlikely(dest + n > mqp->sq.qend)) {
+		size_t copy = mqp->sq.qend - dest;
+
+		memcpy(dest, src, copy);
+		src += copy;
+		n -= copy;
+		dest = mlx5_get_send_wqe(mqp, 0);
+	}
+
+	memcpy(dest, src, n);
+}
+
+static inline void memcpy_to_wqe_and_update(struct mlx5_qp *mqp, void **dest,
+					    void *src, size_t n)
+{
+	if (unlikely(*dest + n > mqp->sq.qend)) {
+		size_t copy = mqp->sq.qend - *dest;
+
+		memcpy(*dest, src, copy);
+		src += copy;
+		n -= copy;
+		*dest = mlx5_get_send_wqe(mqp, 0);
+	}
+
+	memcpy(*dest, src, n);
+
+	*dest += n;
+}
+
+static inline void
+_mlx5_send_wr_set_inline_data(struct mlx5_qp *mqp, void *addr, size_t length)
+{
+	struct mlx5_wqe_inline_seg *dseg = mqp->cur_data;
+
+	if (unlikely(length > mqp->max_inline_data)) {
+		FILE *fp = to_mctx(mqp->ibv_qp->context)->dbg_fp;
+
+		mlx5_dbg(fp, MLX5_DBG_QP_SEND,
+			 "Inline data %zu exceeds the maximum (%d)\n",
+			 length, mqp->max_inline_data);
+
+		if (!mqp->err)
+			mqp->err = ENOMEM;
+
+		return;
+	}
+
+	mqp->inl_wqe = 1; /* Encourage a BlueFlame usage */
+
+	if (unlikely(!length))
+		return;
+
+	memcpy_to_wqe(mqp, (void *)dseg + sizeof(*dseg), addr, length);
+	dseg->byte_count = htobe32(length | MLX5_INLINE_SEG);
+	mqp->cur_size += DIV_ROUND_UP(length + sizeof(*dseg), 16);
+}
+
+static void
+mlx5_send_wr_set_inline_data_rc_uc(struct ibv_qp_ex *ibqp, void *addr,
+				   size_t length)
+{
+	struct mlx5_qp *mqp = to_mqp((struct ibv_qp *)ibqp);
+
+	_mlx5_send_wr_set_inline_data(mqp, addr, length);
+	_common_wqe_finilize(mqp);
+}
+
+static void
+mlx5_send_wr_set_inline_data_ud_xrc(struct ibv_qp_ex *ibqp, void *addr,
+				    size_t length)
+{
+	struct mlx5_qp *mqp = to_mqp((struct ibv_qp *)ibqp);
+
+	_mlx5_send_wr_set_inline_data(mqp, addr, length);
+
+	if (mqp->cur_setters_cnt == WQE_REQ_SETTERS_UD_XRC - 1)
+		_common_wqe_finilize(mqp);
+	else
+		mqp->cur_setters_cnt++;
+}
+
+static inline void
+_mlx5_send_wr_set_inline_data_list(struct mlx5_qp *mqp,
+				   size_t num_buf,
+				   const struct ibv_data_buf *buf_list)
+{
+	struct mlx5_wqe_inline_seg *dseg = mqp->cur_data;
+	void *wqe = (void *)dseg + sizeof(*dseg);
+	size_t inl_size = 0;
+	int i;
+
+	for (i = 0; i < num_buf; i++) {
+		size_t length = buf_list[i].length;
+
+		inl_size += length;
+
+		if (unlikely(inl_size > mqp->max_inline_data)) {
+			FILE *fp = to_mctx(mqp->ibv_qp->context)->dbg_fp;
+
+			mlx5_dbg(fp, MLX5_DBG_QP_SEND,
+				 "Inline data %zu exceeds the maximum (%d)\n",
+				 inl_size, mqp->max_inline_data);
+
+			if (!mqp->err)
+				mqp->err = ENOMEM;
+
+			return;
+		}
+
+		memcpy_to_wqe_and_update(mqp, &wqe, buf_list[i].addr, length);
+	}
+
+	mqp->inl_wqe = 1; /* Encourage a BlueFlame usage */
+
+	if (unlikely(!inl_size))
+		return;
+
+	dseg->byte_count = htobe32(inl_size | MLX5_INLINE_SEG);
+	mqp->cur_size += DIV_ROUND_UP(inl_size + sizeof(*dseg), 16);
+}
+
+static void
+mlx5_send_wr_set_inline_data_list_rc_uc(struct ibv_qp_ex *ibqp,
+					size_t num_buf,
+					const struct ibv_data_buf *buf_list)
+{
+	struct mlx5_qp *mqp = to_mqp((struct ibv_qp *)ibqp);
+
+	_mlx5_send_wr_set_inline_data_list(mqp, num_buf, buf_list);
+	_common_wqe_finilize(mqp);
+}
+
+static void
+mlx5_send_wr_set_inline_data_list_ud_xrc(struct ibv_qp_ex *ibqp,
+					 size_t num_buf,
+					 const struct ibv_data_buf *buf_list)
+{
+	struct mlx5_qp *mqp = to_mqp((struct ibv_qp *)ibqp);
+
+	_mlx5_send_wr_set_inline_data_list(mqp, num_buf, buf_list);
+
+	if (mqp->cur_setters_cnt == WQE_REQ_SETTERS_UD_XRC - 1)
+		_common_wqe_finilize(mqp);
+	else
+		mqp->cur_setters_cnt++;
+}
+
 static void
 mlx5_send_wr_set_ud_addr(struct ibv_qp_ex *ibqp, struct ibv_ah *ah,
 			 uint32_t remote_qpn, uint32_t remote_qkey)
@@ -1664,12 +1813,16 @@ static void fill_wr_setters_rc_uc(struct ibv_qp_ex *ibqp)
 {
 	ibqp->wr_set_sge = mlx5_send_wr_set_sge_rc_uc;
 	ibqp->wr_set_sge_list = mlx5_send_wr_set_sge_list_rc_uc;
+	ibqp->wr_set_inline_data = mlx5_send_wr_set_inline_data_rc_uc;
+	ibqp->wr_set_inline_data_list = mlx5_send_wr_set_inline_data_list_rc_uc;
 }
 
 static void fill_wr_setters_ud_xrc(struct ibv_qp_ex *ibqp)
 {
 	ibqp->wr_set_sge = mlx5_send_wr_set_sge_ud_xrc;
 	ibqp->wr_set_sge_list = mlx5_send_wr_set_sge_list_ud_xrc;
+	ibqp->wr_set_inline_data = mlx5_send_wr_set_inline_data_ud_xrc;
+	ibqp->wr_set_inline_data_list = mlx5_send_wr_set_inline_data_list_ud_xrc;
 }
 
 int mlx5_qp_fill_wr_pfns(struct mlx5_qp *mqp,
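A note for reviewers: the trickiest part of the patch is the wrap-around copy in memcpy_to_wqe(), where inline data that would run past the end of the send queue buffer (sq.qend) continues from the start of the queue. The behavior can be sketched in isolation with a miniature ring; this is a hedged model, not rdma-core code, and the names (SQ_SIZE, sq_buf, sq_qend, copy_to_ring) are illustrative:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A 16-byte stand-in for the send-queue buffer; sq_qend marks one past
 * its last byte, like mqp->sq.qend in the patch. */
#define SQ_SIZE 16

static uint8_t sq_buf[SQ_SIZE];
static uint8_t * const sq_qend = sq_buf + SQ_SIZE;

/* Mirrors the two-step copy in memcpy_to_wqe(): fill up to the end of
 * the buffer first, then continue from its start (the patch gets the
 * start via mlx5_get_send_wqe(mqp, 0)). */
static void copy_to_ring(uint8_t *dest, const uint8_t *src, size_t n)
{
	if (dest + n > sq_qend) {
		size_t head = (size_t)(sq_qend - dest);

		memcpy(dest, src, head);
		src += head;
		n -= head;
		dest = sq_buf; /* wrap to the start of the queue */
	}
	memcpy(dest, src, n);
}
```

A copy of 6 bytes starting at offset 12 of the 16-byte ring therefore lands as 4 bytes at offsets 12..15 and 2 bytes at offsets 0..1, which is exactly the split the patch performs on a real WQE.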
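The size bookkeeping in _mlx5_send_wr_set_inline_data() can also be checked by hand: the inline segment header is a single 32-bit byte_count whose top bit flags inline data, and cur_size is counted in 16-byte WQE units. The sketch below assumes MLX5_INLINE_SEG is the high bit (0x80000000) and a 4-byte header (sizeof(struct mlx5_wqe_inline_seg)), matching the mlx5 headers, but treat both values as assumptions here:

```c
#include <stdint.h>

/* Assumed values, mirroring the mlx5 provider headers. */
#define MLX5_INLINE_SEG    0x80000000u
#define INL_SEG_HDR_SIZE   4u /* sizeof(struct mlx5_wqe_inline_seg) */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* byte_count as written to the WQE (before the htobe32 byte swap):
 * data length with the inline flag set in the top bit. */
static uint32_t inline_byte_count(uint32_t length)
{
	return length | MLX5_INLINE_SEG;
}

/* Number of 16-byte units added to mqp->cur_size for an inline
 * segment of the given data length, header included. */
static uint32_t wqe_units_for_inline(uint32_t length)
{
	return DIV_ROUND_UP(length + INL_SEG_HDR_SIZE, 16u);
}
```

So 28 bytes of inline data plus the 4-byte header occupy exactly two 16-byte units, while 60 bytes round up to four; this is the quantity the patch now feeds into post_send_db() via mqp->cur_size alongside the new inl_wqe hint.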