From patchwork Mon Dec 25 13:57:00 2017
X-Patchwork-Submitter: Yishai Hadas
X-Patchwork-Id: 10132565
X-Patchwork-Delegate: leon@leon.nu
From: Yishai Hadas <yishaih@mellanox.com>
To: linux-rdma@vger.kernel.org
Cc: yishaih@mellanox.com, Alexr@mellanox.com, jgg@mellanox.com,
	majd@mellanox.com
Subject: [PATCH rdma-core 8/9] mlx5: Add support for thread domain as part of
	QP creation
Date: Mon, 25 Dec 2017 15:57:00 +0200
Message-Id: <1514210221-10466-9-git-send-email-yishaih@mellanox.com>
X-Mailer: git-send-email 1.8.2.3
In-Reply-To: <1514210221-10466-1-git-send-email-yishaih@mellanox.com>
References: <1514210221-10466-1-git-send-email-yishaih@mellanox.com>
X-Mailing-List: linux-rdma@vger.kernel.org

When a thread domain (TD) is supplied as part of QP creation, use its
internal BF register instead of mapping one from the context's shared set.
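For reference, an application reaches this path through the generic verbs
thread/parent domain API. A minimal, illustrative sketch (not part of this
patch; 'ctx', 'pd' and 'cq' are assumed to be already set up, and the QP
capacities are arbitrary):

#include <infiniband/verbs.h>

/* Illustrative only: create a QP whose BF comes from a thread domain,
 * by wrapping the PD and TD in a parent domain and creating the QP on it.
 */
static struct ibv_qp *create_td_qp(struct ibv_context *ctx, struct ibv_pd *pd,
				   struct ibv_cq *cq)
{
	struct ibv_td_init_attr td_attr = { .comp_mask = 0 };
	struct ibv_td *td = ibv_alloc_td(ctx, &td_attr);
	if (!td)
		return NULL;

	struct ibv_parent_domain_init_attr pattr = {
		.pd = pd,	/* protection domain to wrap */
		.td = td,	/* thread domain supplying the dedicated BF */
		.comp_mask = 0,
	};
	struct ibv_pd *parent_pd = ibv_alloc_parent_domain(ctx, &pattr);
	if (!parent_pd) {
		ibv_dealloc_td(td);
		return NULL;
	}

	/* QPs created on parent_pd take the new mlx5 path below:
	 * MLX5_QP_FLAG_BFREG_INDEX is set and map_uuar() uses the TD's BF.
	 */
	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.cap = { .max_send_wr = 16, .max_recv_wr = 16,
			 .max_send_sge = 1, .max_recv_sge = 1 },
		.qp_type = IBV_QPT_RC,
	};
	return ibv_create_qp(parent_pd, &attr);
}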
Signed-off-by: Yishai Hadas <yishaih@mellanox.com>
---
 providers/mlx5/mlx5-abi.h |  3 ++-
 providers/mlx5/verbs.c    | 27 ++++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/providers/mlx5/mlx5-abi.h b/providers/mlx5/mlx5-abi.h
index ebe079b..7b96429 100644
--- a/providers/mlx5/mlx5-abi.h
+++ b/providers/mlx5/mlx5-abi.h
@@ -44,6 +44,7 @@ enum {
 	MLX5_QP_FLAG_SIGNATURE		= 1 << 0,
 	MLX5_QP_FLAG_SCATTER_CQE	= 1 << 1,
 	MLX5_QP_FLAG_TUNNEL_OFFLOADS	= 1 << 2,
+	MLX5_QP_FLAG_BFREG_INDEX	= 1 << 3,
 };
 
 enum {
@@ -203,7 +204,7 @@ struct mlx5_create_qp {
 	__u32				rq_wqe_shift;
 	__u32				flags;
 	__u32				uidx;
-	__u32				reserved;
+	__u32				bfreg_index;
 	/* SQ buffer address - used for Raw Packet QP */
 	__u64				sq_buf_addr;
 };
diff --git a/providers/mlx5/verbs.c b/providers/mlx5/verbs.c
index b8cbbe9..efecd93 100644
--- a/providers/mlx5/verbs.c
+++ b/providers/mlx5/verbs.c
@@ -1266,11 +1266,14 @@ static int mlx5_calc_wq_size(struct mlx5_context *ctx,
 }
 
 static void map_uuar(struct ibv_context *context, struct mlx5_qp *qp,
-		     int uuar_index)
+		     int uuar_index, struct mlx5_bf *dyn_bf)
 {
 	struct mlx5_context *ctx = to_mctx(context);
 
-	qp->bf = &ctx->bfs[uuar_index];
+	if (!dyn_bf)
+		qp->bf = &ctx->bfs[uuar_index];
+	else
+		qp->bf = dyn_bf;
 }
 
 static const char *qptype2key(enum ibv_qp_type type)
@@ -1501,7 +1504,9 @@ static struct ibv_qp *create_qp(struct ibv_context *context,
 	int32_t				usr_idx = 0;
 	uint32_t			uuar_index;
 	uint32_t			mlx5_create_flags = 0;
+	struct mlx5_bf			*bf = NULL;
 	FILE *fp = ctx->dbg_fp;
+	struct mlx5_parent_domain	*mparent_domain;
 
 	if (attr->comp_mask & ~MLX5_CREATE_QP_SUP_COMP_MASK)
 		return NULL;
@@ -1515,6 +1520,7 @@ static struct ibv_qp *create_qp(struct ibv_context *context,
 		mlx5_dbg(fp, MLX5_DBG_QP, "\n");
 		return NULL;
 	}
+
 	ibqp = (struct ibv_qp *)&qp->verbs_qp;
 	qp->ibv_qp = ibqp;
 
@@ -1645,6 +1651,15 @@ static struct ibv_qp *create_qp(struct ibv_context *context,
 		cmd.uidx = usr_idx;
 	}
 
+	mparent_domain = to_mparent_domain(attr->pd);
+	if (mparent_domain && mparent_domain->mtd)
+		bf = mparent_domain->mtd->bf;
+
+	if (bf) {
+		cmd.bfreg_index = bf->bfreg_dyn_index;
+		cmd.flags |= MLX5_QP_FLAG_BFREG_INDEX;
+	}
+
 	if (attr->comp_mask & MLX5_CREATE_QP_EX2_COMP_MASK)
 		ret = mlx5_cmd_create_qp_ex(context, attr, &cmd, qp, &resp_ex);
 	else
@@ -1670,7 +1685,7 @@ static struct ibv_qp *create_qp(struct ibv_context *context,
 		pthread_mutex_unlock(&ctx->qp_table_mutex);
 	}
 
-	map_uuar(context, qp, uuar_index);
+	map_uuar(context, qp, uuar_index, bf);
 
 	qp->rq.max_post = qp->rq.wqe_cnt;
 	if (attr->sq_sig_all)
@@ -1686,6 +1701,8 @@ static struct ibv_qp *create_qp(struct ibv_context *context,
 	qp->rsc.rsn = (ctx->cqe_version && !is_xrc_tgt(attr->qp_type)) ?
 		      usr_idx : ibqp->qp_num;
 
+	if (mparent_domain)
+		atomic_fetch_add(&mparent_domain->mpd.refcount, 1);
 	return ibqp;
 
 err_destroy:
@@ -1775,6 +1792,7 @@ int mlx5_destroy_qp(struct ibv_qp *ibqp)
 	struct mlx5_qp *qp = to_mqp(ibqp);
 	struct mlx5_context *ctx = to_mctx(ibqp->context);
 	int ret;
+	struct mlx5_parent_domain *mparent_domain = to_mparent_domain(ibqp->pd);
 
 	if (qp->rss_qp) {
 		ret = ibv_cmd_destroy_qp(ibqp);
@@ -1814,6 +1832,9 @@ int mlx5_destroy_qp(struct ibv_qp *ibqp)
 	mlx5_free_db(ctx, qp->db);
 	mlx5_free_qp_buf(qp);
 free:
+	if (mparent_domain)
+		atomic_fetch_sub(&mparent_domain->mpd.refcount, 1);
+
 	free(qp);
 
 	return 0;