From patchwork Thu Oct 27 13:36:37 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Leon Romanovsky
X-Patchwork-Id: 9399335
From: Leon Romanovsky
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, Maor Gottlieb, Eli Cohen
Subject: [PATCH rdma-rc 02/12] IB/mlx5: Fix atomic cap in indirect UMR
Date: Thu, 27 Oct 2016 16:36:37 +0300
Message-Id: <1477575407-20562-3-git-send-email-leon@kernel.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1477575407-20562-1-git-send-email-leon@kernel.org>
References: <1477575407-20562-1-git-send-email-leon@kernel.org>

From: Maor Gottlieb

Remove the driver-side limitation, imposed by a firmware check, that
disallows changing atomic permissions for indirect UMRs. To avoid
failures on old firmware, ask for a change of atomic permissions only
when atomic operations are supported.
Fixes: 968e78dd9644 ('IB/mlx5: Enhance UMR support to allow partial page table update')
Signed-off-by: Eli Cohen
Signed-off-by: Maor Gottlieb
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/hw/mlx5/qp.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 00cffbf..5bdf20c 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -3071,7 +3071,7 @@ static void set_linv_umr_seg(struct mlx5_wqe_umr_ctrl_seg *umr)
 	umr->flags = MLX5_UMR_INLINE;
 }
 
-static __be64 get_umr_reg_mr_mask(void)
+static __be64 get_umr_reg_mr_mask(int atomic)
 {
 	u64 result;
 
@@ -3084,9 +3084,11 @@ static __be64 get_umr_reg_mr_mask(void)
 		 MLX5_MKEY_MASK_KEY |
 		 MLX5_MKEY_MASK_RR |
 		 MLX5_MKEY_MASK_RW |
-		 MLX5_MKEY_MASK_A |
 		 MLX5_MKEY_MASK_FREE;
 
+	if (atomic)
+		result |= MLX5_MKEY_MASK_A;
+
 	return cpu_to_be64(result);
 }
 
@@ -3147,7 +3149,7 @@ static __be64 get_umr_update_pd_mask(void)
 }
 
 static void set_reg_umr_segment(struct mlx5_wqe_umr_ctrl_seg *umr,
-				struct ib_send_wr *wr)
+				struct ib_send_wr *wr, int atomic)
 {
 	struct mlx5_umr_wr *umrwr = umr_wr(wr);
 
@@ -3172,7 +3174,7 @@ static void set_reg_umr_segment(struct mlx5_wqe_umr_ctrl_seg *umr,
 		if (wr->send_flags & MLX5_IB_SEND_UMR_UPDATE_PD)
 			umr->mkey_mask |= get_umr_update_pd_mask();
 		if (!umr->mkey_mask)
-			umr->mkey_mask = get_umr_reg_mr_mask();
+			umr->mkey_mask = get_umr_reg_mr_mask(atomic);
 	} else {
 		umr->mkey_mask = get_umr_unreg_mr_mask();
 	}
@@ -4025,7 +4027,7 @@ int mlx5_ib_post_send(struct ib_qp *ibqp, struct ib_send_wr *wr,
 		}
 		qp->sq.wr_data[idx] = MLX5_IB_WR_UMR;
 		ctrl->imm = cpu_to_be32(umr_wr(wr)->mkey);
-		set_reg_umr_segment(seg, wr);
+		set_reg_umr_segment(seg, wr, !!(MLX5_CAP_GEN(mdev, atomic)));
 		seg += sizeof(struct mlx5_wqe_umr_ctrl_seg);
 		size += sizeof(struct mlx5_wqe_umr_ctrl_seg) / 16;
 		if (unlikely((seg == qend)))
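
To see the shape of the fix in isolation, here is a minimal,
self-contained C sketch of the capability-gated mask pattern the patch
applies: build the mkey mask unconditionally, then OR in the atomic
permission bit only when the device reports atomic support. The bit
values, the uint64_t return type, and the demo main() below are
illustrative stand-ins, not the driver's real MLX5_MKEY_MASK_*
definitions or __be64 handling.

#include <stdio.h>
#include <stdint.h>

/* Illustrative stand-ins for the driver's MLX5_MKEY_MASK_* bits. */
#define MKEY_MASK_LEN	(1ULL << 0)
#define MKEY_MASK_RR	(1ULL << 1)
#define MKEY_MASK_RW	(1ULL << 2)
#define MKEY_MASK_A	(1ULL << 3)	/* atomic permission bit */
#define MKEY_MASK_FREE	(1ULL << 4)

/* Mirrors the patched get_umr_reg_mr_mask(): build the mask
 * unconditionally, then request the atomic bit only when the
 * capability check passed, so old firmware is never asked to
 * change atomic permissions it does not support. */
static uint64_t get_umr_reg_mr_mask(int atomic)
{
	uint64_t result = MKEY_MASK_LEN |
			  MKEY_MASK_RR |
			  MKEY_MASK_RW |
			  MKEY_MASK_FREE;

	if (atomic)
		result |= MKEY_MASK_A;

	return result;
}

int main(void)
{
	/* 0/1 stands in for !!(MLX5_CAP_GEN(mdev, atomic)). */
	printf("mask, atomics unsupported: 0x%llx\n",
	       (unsigned long long)get_umr_reg_mr_mask(0));
	printf("mask, atomics supported:   0x%llx\n",
	       (unsigned long long)get_umr_reg_mr_mask(1));
	return 0;
}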