From patchwork Mon Aug 1 06:57:28 2016
X-Patchwork-Submitter: Haggai Eran
X-Patchwork-Id: 9253799
From: Haggai Eran
To: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org
Cc: linux-pci@vger.kernel.org, Stephen Bates, Liran Liss, Leon Romanovsky, Artemy Kovalyov, Jerome Glisse, Yishai Hadas, Haggai Eran
Subject: [RFC 2/7] IB/mlx5: Support registration and invalidate operations on the UMR QP
Date: Mon, 1 Aug 2016 09:57:28 +0300
Message-Id: <1470034653-9097-3-git-send-email-haggaie@mellanox.com>
In-Reply-To: <1470034653-9097-1-git-send-email-haggaie@mellanox.com>
References: <1470034653-9097-1-git-send-email-haggaie@mellanox.com>

Currently the UMR QP only supports UMR work requests. There is nothing
preventing it from also supporting fast registration and local invalidate
operations. This allows internal driver operations to use the fast
registration API without having to create an RC QP.

Use ib_create_qp() instead of mlx5_ib_create_qp() so that the PD field of
the new UMR QP is properly initialized.
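For illustration only (this sketch is not part of the patch): once the UMR QP
accepts IB_WR_REG_MR, an internal driver consumer could post a fast
registration work request directly on it instead of creating a dedicated RC
QP. The helper name below is hypothetical, the UMR QP is assumed to be
reachable as dev->umrc.qp, and completion handling is omitted.

/*
 * Hypothetical example, not part of this patch: post a fast registration
 * on the driver's UMR QP (assumed to live in dev->umrc.qp).  Completion
 * polling, locking and error recovery are left out for brevity.
 */
static int mlx5_ib_post_internal_reg_mr(struct mlx5_ib_dev *dev,
					struct ib_mr *mr, int access)
{
	struct ib_reg_wr wr = {};
	struct ib_send_wr *bad_wr;

	wr.wr.opcode = IB_WR_REG_MR;		/* now valid on the UMR QP */
	wr.wr.send_flags = IB_SEND_SIGNALED;
	wr.mr = mr;
	wr.key = mr->rkey;
	wr.access = access;

	return ib_post_send(dev->umrc.qp, &wr.wr, &bad_wr);
}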
Signed-off-by: Haggai Eran
---
 drivers/infiniband/hw/mlx5/main.c |  6 +---
 drivers/infiniband/hw/mlx5/qp.c   | 71 +++++++++++++++++++++++++++------------
 2 files changed, 50 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index b48ad85315dc..f41254f3689a 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -2023,16 +2023,12 @@ static int create_umr_res(struct mlx5_ib_dev *dev)
 	init_attr->cap.max_send_sge = 1;
 	init_attr->qp_type = MLX5_IB_QPT_REG_UMR;
 	init_attr->port_num = 1;
-	qp = mlx5_ib_create_qp(pd, init_attr, NULL);
+	qp = ib_create_qp(pd, init_attr);
 	if (IS_ERR(qp)) {
 		mlx5_ib_dbg(dev, "Couldn't create sync UMR QP\n");
 		ret = PTR_ERR(qp);
 		goto error_3;
 	}
-	qp->device = &dev->ib_dev;
-	qp->real_qp = qp;
-	qp->uobject = NULL;
-	qp->qp_type = MLX5_IB_QPT_REG_UMR;
 
 	attr->qp_state = IB_QPS_INIT;
 	attr->port_num = 1;
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index ce0a7ab35a22..861ec1f6a20b 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3237,8 +3237,8 @@ static int set_psv_wr(struct ib_sig_domain *domain,
 	return 0;
 }
 
-static int set_reg_wr(struct mlx5_ib_qp *qp,
-		      struct ib_reg_wr *wr,
+static int set_reg_wr(struct mlx5_ib_qp *qp, struct ib_reg_wr *wr,
+		      unsigned int idx, struct mlx5_wqe_ctrl_seg *ctrl,
 		      void **seg, int *size)
 {
 	struct mlx5_ib_mr *mr = to_mmr(wr->mr);
@@ -3266,10 +3266,15 @@ static int set_reg_wr(struct mlx5_ib_qp *qp,
 	*seg += sizeof(struct mlx5_wqe_data_seg);
 	*size += (sizeof(struct mlx5_wqe_data_seg) / 16);
 
+	qp->sq.wr_data[idx] = IB_WR_REG_MR;
+	ctrl->imm = cpu_to_be32(wr->key);
+
 	return 0;
 }
 
-static void set_linv_wr(struct mlx5_ib_qp *qp, void **seg, int *size)
+static void set_linv_wr(struct mlx5_ib_qp *qp, struct ib_send_wr *wr,
+			unsigned int idx, struct mlx5_wqe_ctrl_seg *ctrl,
+			void **seg, int *size)
 {
 	set_linv_umr_seg(*seg);
 	*seg += sizeof(struct mlx5_wqe_umr_ctrl_seg);
@@ -3281,6 +3286,9 @@ static void set_linv_wr(struct mlx5_ib_qp *qp, void **seg, int *size)
 	*size += sizeof(struct mlx5_mkey_seg) / 16;
 	if (unlikely((*seg == qp->sq.qend)))
 		*seg = mlx5_get_send_wqe(qp, 0);
+
+	qp->sq.wr_data[idx] = IB_WR_LOCAL_INV;
+	ctrl->imm = cpu_to_be32(wr->ex.invalidate_rkey);
 }
 
 static void dump_wqe(struct mlx5_ib_qp *qp, int idx, int size_16)
@@ -3476,17 +3484,14 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		case IB_WR_LOCAL_INV:
 			next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
-			qp->sq.wr_data[idx] = IB_WR_LOCAL_INV;
-			ctrl->imm = cpu_to_be32(wr->ex.invalidate_rkey);
-			set_linv_wr(qp, &seg, &size);
+			set_linv_wr(qp, wr, idx, ctrl, &seg, &size);
 			num_sge = 0;
 			break;
 
 		case IB_WR_REG_MR:
 			next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
-			qp->sq.wr_data[idx] = IB_WR_REG_MR;
-			ctrl->imm = cpu_to_be32(reg_wr(wr)->key);
-			err = set_reg_wr(qp, reg_wr(wr), &seg, &size);
+			err = set_reg_wr(qp, reg_wr(wr), idx, ctrl,
+					 &seg, &size);
 			if (err) {
 				*bad_wr = wr;
 				goto out;
 			}
@@ -3613,23 +3618,45 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 			}
 			break;
 		case MLX5_IB_QPT_REG_UMR:
-			if (wr->opcode != MLX5_IB_WR_UMR) {
+			switch (wr->opcode) {
+			case MLX5_IB_WR_UMR:
+				qp->sq.wr_data[idx] = MLX5_IB_WR_UMR;
+				ctrl->imm = cpu_to_be32(umr_wr(wr)->mkey);
+				set_reg_umr_segment(seg, wr);
+				seg += sizeof(struct mlx5_wqe_umr_ctrl_seg);
+				size += sizeof(struct mlx5_wqe_umr_ctrl_seg)
+					/ 16;
+				if (unlikely((seg == qend)))
+					seg = mlx5_get_send_wqe(qp, 0);
+				set_reg_mkey_segment(seg, wr);
+				seg += sizeof(struct mlx5_mkey_seg);
+				size += sizeof(struct mlx5_mkey_seg) / 16;
+				if (unlikely((seg == qend)))
+					seg = mlx5_get_send_wqe(qp, 0);
+				break;
+
+			case IB_WR_LOCAL_INV:
+				next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
+				set_linv_wr(qp, wr, idx, ctrl, &seg, &size);
+				num_sge = 0;
+				break;
+
+			case IB_WR_REG_MR:
+				next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
+				err = set_reg_wr(qp, reg_wr(wr), idx, ctrl,
+						 &seg, &size);
+				if (err) {
+					*bad_wr = wr;
+					goto out;
+				}
+				num_sge = 0;
+				break;
+
+			default:
 				err = -EINVAL;
 				mlx5_ib_warn(dev, "bad opcode\n");
 				goto out;
 			}
-			qp->sq.wr_data[idx] = MLX5_IB_WR_UMR;
-			ctrl->imm = cpu_to_be32(umr_wr(wr)->mkey);
-			set_reg_umr_segment(seg, wr);
-			seg += sizeof(struct mlx5_wqe_umr_ctrl_seg);
-			size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16;
-			if (unlikely((seg == qend)))
-				seg = mlx5_get_send_wqe(qp, 0);
-			set_reg_mkey_segment(seg, wr);
-			seg += sizeof(struct mlx5_mkey_seg);
-			size += sizeof(struct mlx5_mkey_seg) / 16;
-			if (unlikely((seg == qend)))
-				seg = mlx5_get_send_wqe(qp, 0);
 			break;
 
 		default: