From patchwork Thu Oct 11 14:46:16 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10636725
From: Lijun Ou
To: ,
CC: ,
Subject: [PATCH V2 rdma-core 2/2] libhns: Bugfix for atomic operation in user mode
Date: Thu, 11 Oct 2018 22:46:16 +0800
Message-ID: <1539269176-144961-3-git-send-email-oulijun@huawei.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1539269176-144961-1-git-send-email-oulijun@huawei.com>
References: <1539269176-144961-1-git-send-email-oulijun@huawei.com>
X-Mailing-List: linux-rdma@vger.kernel.org

Atomic operations do not support inline data. In addition, a standard
atomic operation supports only a single sge, and that sge is placed in
the wqe itself. This patch adjusts the code accordingly.

Fixes: d92b0f5 ("libhns: Add atomic support for hip08 user mode")
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_hw_v2.c | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/providers/hns/hns_roce_u_hw_v2.c b/providers/hns/hns_roce_u_hw_v2.c
index 1efd97f..4c86332 100644
--- a/providers/hns/hns_roce_u_hw_v2.c
+++ b/providers/hns/hns_roce_u_hw_v2.c
@@ -705,8 +705,6 @@ static int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			rc_sq_wqe->rkey = htole32(wr->wr.atomic.rkey);
 			rc_sq_wqe->va =
 				htole64(wr->wr.atomic.remote_addr);
-			wqe += sizeof(struct hns_roce_v2_wqe_data_seg);
-			set_atomic_seg(wqe, wr);
 			break;

 		case IBV_WR_ATOMIC_FETCH_AND_ADD:
@@ -717,8 +715,6 @@ static int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			rc_sq_wqe->rkey = htole32(wr->wr.atomic.rkey);
 			rc_sq_wqe->va =
 				htole64(wr->wr.atomic.remote_addr);
-			wqe += sizeof(struct hns_roce_v2_wqe_data_seg);
-			set_atomic_seg(wqe, wr);
 			break;
 		default:
 			roce_set_field(rc_sq_wqe->byte_4,
@@ -737,14 +733,13 @@ static int hns_roce_u_v2_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			break;
 		}

+		dseg = wqe;
 		if (wr->opcode == IBV_WR_ATOMIC_FETCH_AND_ADD ||
-		    wr->opcode == IBV_WR_ATOMIC_CMP_AND_SWP)
-			dseg = wqe - sizeof(struct hns_roce_v2_wqe_data_seg);
-		else
-			dseg = wqe;
-
-		/* Inline */
-		if (wr->send_flags & IBV_SEND_INLINE && wr->num_sge) {
+		    wr->opcode == IBV_WR_ATOMIC_CMP_AND_SWP) {
+			set_data_seg_v2(dseg, wr->sg_list);
+			wqe += sizeof(struct hns_roce_v2_wqe_data_seg);
+			set_atomic_seg(wqe, wr);
+		} else if (wr->send_flags & IBV_SEND_INLINE && wr->num_sge) {
 			if (le32toh(rc_sq_wqe->msg_len) > qp->max_inline_data) {
 				ret = EINVAL;
 				*bad_wr = wr;