From patchwork Thu Feb 21 14:49:44 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10824161
From: Lijun Ou
Subject: [PATCH V3 rdma-core 1/5] libhns: CQ depth does not support 0
Date: Thu, 21 Feb 2019 22:49:44 +0800
Message-ID: <1550760588-204074-2-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550760588-204074-1-git-send-email-oulijun@huawei.com>

From: chenglang

When the user configures a CQ depth smaller than 64, the driver rounds
the depth up to 64. However, the hip0x series does not support a
user-configured depth of 0, so reject out-of-range values instead of
rounding them up. This unifies the accepted parameter range in the
user-mode driver.
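Condensed, the validation order after this change behaves as in the
compilable sketch below. The names are stand-ins, not the driver's exact
symbols: SKETCH_MIN_CQE_NUM plays the role of HNS_ROCE_MIN_CQE_NUM, and
the device limit is passed in rather than read from the context.

#include <stdbool.h>
#include <stdio.h>

#define SKETCH_MIN_CQE_NUM 64   /* stands in for HNS_ROCE_MIN_CQE_NUM */

/* The range check now runs first, so a requested depth of 0 (or one
 * above the device limit) is rejected instead of being silently
 * rounded up on HW version 1. */
static int sketch_verify_cq(int *cqe, int max_cqe, bool is_hw_v1)
{
    if (*cqe < 1 || *cqe > max_cqe)
        return -1;

    if (is_hw_v1 && *cqe < SKETCH_MIN_CQE_NUM)
        *cqe = SKETCH_MIN_CQE_NUM;  /* legacy floor still applies */

    return 0;
}

int main(void)
{
    int depth = 0;
    int ret;

    ret = sketch_verify_cq(&depth, 1024, true);
    printf("depth 0  -> ret %d\n", ret);            /* -1: rejected */

    depth = 10;
    ret = sketch_verify_cq(&depth, 1024, true);
    printf("depth 10 -> ret %d, depth %d\n", ret, depth); /* 0, 64 */
    return 0;
}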
Signed-off-by: chenglang
---
 providers/hns/hns_roce_u_verbs.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 05c2a8e..e2e27a6 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -304,6 +304,9 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
     struct hns_roce_device *hr_dev =
         to_hr_dev(context->ibv_ctx.context.device);
 
+    if (*cqe < 1 || *cqe > context->max_cqe)
+        return -1;
+
     if (hr_dev->hw_version == HNS_ROCE_HW_VER1)
         if (*cqe < HNS_ROCE_MIN_CQE_NUM) {
             fprintf(stderr,
@@ -312,9 +315,6 @@ static int hns_roce_verify_cq(int *cqe, struct hns_roce_context *context)
         *cqe = HNS_ROCE_MIN_CQE_NUM;
     }
 
-    if (*cqe > context->max_cqe)
-        return -1;
-
     return 0;
 }

From patchwork Thu Feb 21 14:49:45 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10824159
From: Lijun Ou
Subject: [PATCH V3 rdma-core 2/5] libhns: Fix errors detected by Cppcheck tool
Date: Thu, 21 Feb 2019 22:49:45 +0800
Message-ID: <1550760588-204074-3-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550760588-204074-1-git-send-email-oulijun@huawei.com>

From: chenglang

The driver passes the address of one member of the resp structure to the
IB core, which uses container_of() to recover the whole structure and
initialize all of its members; the driver then reads another member of
resp. The static analysis tool Cppcheck cannot see that initialization
path and reports an uninitStructMember defect. Zero-initialize resp in
the driver to remove this hidden dependence.
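The pattern Cppcheck trips over, and the fix, reduce to the following
self-contained sketch. The struct and helper names are hypothetical; in
the real driver the fill happens inside the verbs command interface.

#include <stddef.h>
#include <stdio.h>

struct sketch_resp {
    int a;  /* member handed down to the "core" */
    int b;  /* member read back by the "driver" afterwards */
};

/* Stand-in for the IB core: given a pointer to one member, it recovers
 * the enclosing struct with container_of()-style arithmetic and fills
 * every member. Static analysis cannot follow this across the call. */
static void sketch_core_fill(int *member_a)
{
    struct sketch_resp *r = (struct sketch_resp *)
        ((char *)member_a - offsetof(struct sketch_resp, a));

    r->a = 1;
    r->b = 2;
}

int main(void)
{
    /* The fix: zero-init at declaration ("= {}" as in the patch, a
     * GCC/C23 spelling of "= {0}"). Harmless when the core fills it. */
    struct sketch_resp resp = {};

    sketch_core_fill(&resp.a);
    printf("b = %d\n", resp.b); /* defined even if the fill is opaque */
    return 0;
}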
Signed-off-by: chenglang
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u.c       | 2 +-
 providers/hns/hns_roce_u_verbs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/providers/hns/hns_roce_u.c b/providers/hns/hns_roce_u.c
index 8113c00..15e52f6 100644
--- a/providers/hns/hns_roce_u.c
+++ b/providers/hns/hns_roce_u.c
@@ -92,7 +92,7 @@ static struct verbs_context *hns_roce_alloc_context(struct ibv_device *ibdev,
     struct ibv_get_context cmd;
     struct ibv_device_attr dev_attrs;
     struct hns_roce_context *context;
-    struct hns_roce_alloc_ucontext_resp resp;
+    struct hns_roce_alloc_ucontext_resp resp = {};
     struct hns_roce_device *hr_dev = to_hr_dev(ibdev);
 
     context = verbs_init_and_alloc_context(ibdev, cmd_fd, context, ibv_ctx,
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index e2e27a6..4c60375 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -89,7 +89,7 @@ struct ibv_pd *hns_roce_u_alloc_pd(struct ibv_context *context)
 {
     struct ibv_alloc_pd cmd;
     struct hns_roce_pd *pd;
-    struct hns_roce_alloc_pd_resp resp;
+    struct hns_roce_alloc_pd_resp resp = {};
 
     pd = (struct hns_roce_pd *)malloc(sizeof(*pd));
     if (!pd)

From patchwork Thu Feb 21 14:49:46 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10824157
From: Lijun Ou
Subject: [PATCH V3 rdma-core 3/5] libhns: Package some lines for calculating qp buffer size
Date: Thu, 21 Feb 2019 22:49:46 +0800
Message-ID: <1550760588-204074-4-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550760588-204074-1-git-send-email-oulijun@huawei.com>

For readability, move the lines that calculate the qp buffer size into an
independent function, and likewise move the lines that allocate the rq
inline buffer into an independent function.

Signed-off-by: Lijun Ou
---
V2->V3:
1. Remove the casts on the output of malloc
---
 providers/hns/hns_roce_u_verbs.c | 97 +++++++++++++++++++++++-----------------
 1 file changed, 57 insertions(+), 40 deletions(-)
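As a rough map of the diff that follows, the refactor separates concerns
as in this hypothetical, compilable miniature (illustrative types, not
the driver's): each helper owns one job, and the caller unwinds exactly
what it allocated, in reverse order, when a later step fails.

#include <stdlib.h>

struct sketch_qp {
    unsigned long *sq_wrid;
    unsigned long *rq_wrid;
    unsigned int sq_cnt, rq_cnt;
};

/* The buffer-size math now lives by itself and reports failure upward. */
static int sketch_calc_buf_size(struct sketch_qp *qp)
{
    return qp->sq_cnt ? 0 : -1;
}

static int sketch_alloc_qp_buf(struct sketch_qp *qp)
{
    qp->sq_wrid = malloc(qp->sq_cnt * sizeof(*qp->sq_wrid));
    if (!qp->sq_wrid)
        return -1;

    if (qp->rq_cnt) {
        qp->rq_wrid = malloc(qp->rq_cnt * sizeof(*qp->rq_wrid));
        if (!qp->rq_wrid) {
            free(qp->sq_wrid);
            return -1;
        }
    }

    /* mirror of the unwind order in the real patch */
    if (sketch_calc_buf_size(qp)) {
        if (qp->rq_cnt)
            free(qp->rq_wrid);
        free(qp->sq_wrid);
        return -1;
    }

    return 0;
}

int main(void)
{
    struct sketch_qp qp = { .sq_cnt = 8, .rq_cnt = 4 };

    if (sketch_alloc_qp_buf(&qp))
        return 1;

    free(qp.rq_wrid);
    free(qp.sq_wrid);
    return 0;
}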
diff --git a/providers/hns/hns_roce_u_verbs.c b/providers/hns/hns_roce_u_verbs.c
index 4c60375..3bc63ac 100644
--- a/providers/hns/hns_roce_u_verbs.c
+++ b/providers/hns/hns_roce_u_verbs.c
@@ -658,25 +658,41 @@ static int hns_roce_verify_qp(struct ibv_qp_init_attr *attr,
     return 0;
 }
 
-static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
-                                 enum ibv_qp_type type, struct hns_roce_qp *qp)
+static int hns_roce_alloc_recv_inl_buf(struct ibv_qp_cap *cap,
+                                       struct hns_roce_qp *qp)
 {
     int i;
-    int page_size = to_hr_dev(pd->context->device)->page_size;
 
-    qp->sq.wrid =
-        (unsigned long *)malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
-    if (!qp->sq.wrid)
+    qp->rq_rinl_buf.wqe_list = calloc(1, qp->rq.wqe_cnt *
+                                      sizeof(struct hns_roce_rinl_wqe));
+    if (!qp->rq_rinl_buf.wqe_list)
         return -1;
 
-    if (qp->rq.wqe_cnt) {
-        qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
-        if (!qp->rq.wrid) {
-            free(qp->sq.wrid);
-            return -1;
-        }
+    qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
+
+    qp->rq_rinl_buf.wqe_list[0].sg_list = calloc(1, qp->rq.wqe_cnt *
+            cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
+    if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
+        free(qp->rq_rinl_buf.wqe_list);
+        return -1;
+    }
+
+    for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
+        int wqe_size = i * cap->max_recv_sge;
+
+        qp->rq_rinl_buf.wqe_list[i].sg_list =
+            &(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
     }
 
+    return 0;
+}
+
+static int hns_roce_calc_qp_buff_size(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+                                      enum ibv_qp_type type,
+                                      struct hns_roce_qp *qp)
+{
+    int page_size = to_hr_dev(pd->context->device)->page_size;
+
     if (to_hr_dev(pd->context->device)->hw_version == HNS_ROCE_HW_VER1) {
         for (qp->rq.wqe_shift = 4;
              1 << qp->rq.wqe_shift < sizeof(struct hns_roce_rc_send_wqe);
              qp->rq.wqe_shift++)
@@ -704,35 +720,9 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
     else
         qp->sge.sge_shift = 0;
 
-    /* alloc recv inline buf*/
-    qp->rq_rinl_buf.wqe_list =
-        (struct hns_roce_rinl_wqe *)calloc(1, qp->rq.wqe_cnt *
-                sizeof(struct hns_roce_rinl_wqe));
-    if (!qp->rq_rinl_buf.wqe_list) {
-        if (qp->rq.wqe_cnt)
-            free(qp->rq.wrid);
-        free(qp->sq.wrid);
+    /* alloc recv inline buf */
+    if (hns_roce_alloc_recv_inl_buf(cap, qp))
         return -1;
-    }
-
-    qp->rq_rinl_buf.wqe_cnt = qp->rq.wqe_cnt;
-
-    qp->rq_rinl_buf.wqe_list[0].sg_list =
-        (struct hns_roce_rinl_sge *)calloc(1, qp->rq.wqe_cnt *
-                cap->max_recv_sge * sizeof(struct hns_roce_rinl_sge));
-    if (!qp->rq_rinl_buf.wqe_list[0].sg_list) {
-        if (qp->rq.wqe_cnt)
-            free(qp->rq.wrid);
-        free(qp->sq.wrid);
-        free(qp->rq_rinl_buf.wqe_list);
-        return -1;
-    }
-    for (i = 0; i < qp->rq_rinl_buf.wqe_cnt; i++) {
-        int wqe_size = i * cap->max_recv_sge;
-
-        qp->rq_rinl_buf.wqe_list[i].sg_list =
-            &(qp->rq_rinl_buf.wqe_list[0].sg_list[wqe_size]);
-    }
 
     qp->buf_size = align((qp->sq.wqe_cnt << qp->sq.wqe_shift),
                          page_size) +
@@ -755,6 +745,33 @@ static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
         }
     }
 
+    return 0;
+}
+
+static int hns_roce_alloc_qp_buf(struct ibv_pd *pd, struct ibv_qp_cap *cap,
+                                 enum ibv_qp_type type, struct hns_roce_qp *qp)
+{
+    int page_size = to_hr_dev(pd->context->device)->page_size;
+
+    qp->sq.wrid = malloc(qp->sq.wqe_cnt * sizeof(uint64_t));
+    if (!qp->sq.wrid)
+        return -1;
+
+    if (qp->rq.wqe_cnt) {
+        qp->rq.wrid = malloc(qp->rq.wqe_cnt * sizeof(uint64_t));
+        if (!qp->rq.wrid) {
+            free(qp->sq.wrid);
+            return -1;
+        }
+    }
+
+    if (hns_roce_calc_qp_buff_size(pd, cap, type, qp)) {
+        if (qp->rq.wqe_cnt)
+            free(qp->rq.wrid);
+        free(qp->sq.wrid);
+        return -1;
+    }
+
     if (hns_roce_alloc_buf(&qp->buf, align(qp->buf_size, page_size),
                            to_hr_dev(pd->context->device)->page_size)) {
         if (qp->rq.wqe_cnt)

From patchwork Thu Feb 21 14:49:47 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10824163
From: Lijun Ou
Subject: [PATCH V3 rdma-core 4/5] libhns: Package for polling cqe function
Date: Thu, 21 Feb 2019 22:49:47 +0800
Message-ID: <1550760588-204074-5-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550760588-204074-1-git-send-email-oulijun@huawei.com>

To reduce complexity, split the body of hns_roce_v2_poll_one() into
separate functions that follow the flow of polling a cqe.
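After the split, the success/error flow of poll-one reduces to roughly
the compilable sketch below. Every name is a stand-in for one of the
helpers added in the hunks that follow, not the driver's exact
signatures.

#include <stdbool.h>

enum { SKETCH_CQ_OK = 0, SKETCH_CQ_POLL_ERR = -2 };

struct sketch_cqe { int status; int opcode; };
struct sketch_wc  { int status; int opcode; };

static void sketch_handle_error_cqe(struct sketch_cqe *c, struct sketch_wc *w)
{
    w->status = c->status;  /* map HW status to a wc status */
}

static int sketch_flush_cqe(struct sketch_wc *w)
{
    (void)w;                /* would move the QP to IBV_QPS_ERR */
    return SKETCH_CQ_OK;
}

static void sketch_opcode_from_sender(struct sketch_cqe *c, struct sketch_wc *w)
{
    w->opcode = c->opcode;  /* SQ-side opcode/flag translation */
}

static void sketch_opcode_from_receiver(struct sketch_cqe *c, struct sketch_wc *w)
{
    w->opcode = c->opcode;  /* RQ/SRQ-side opcode/flag translation */
}

static int sketch_handle_recv_inl_wqe(struct sketch_wc *w)
{
    (void)w;                /* would scatter an inlined payload */
    return 0;
}

/* With the helpers split out, poll-one reads as a short dispatch. */
static int sketch_poll_one(struct sketch_cqe *cqe, struct sketch_wc *wc,
                           bool is_send)
{
    if (cqe->status != 0) {             /* error completion */
        sketch_handle_error_cqe(cqe, wc);
        return sketch_flush_cqe(wc);
    }

    if (is_send) {
        sketch_opcode_from_sender(cqe, wc);
    } else {
        sketch_opcode_from_receiver(cqe, wc);
        if (sketch_handle_recv_inl_wqe(wc))
            return SKETCH_CQ_POLL_ERR;
    }
    return SKETCH_CQ_OK;
}

int main(void)
{
    struct sketch_cqe cqe = { 0, 1 };
    struct sketch_wc wc = { 0, 0 };

    return sketch_poll_one(&cqe, &wc, true) == SKETCH_CQ_OK ? 0 : 1;
}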
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_hw_v2.c | 300 +++++++++++++++++++++------------------
 1 file changed, 163 insertions(+), 137 deletions(-)

diff --git a/providers/hns/hns_roce_u_hw_v2.c b/providers/hns/hns_roce_u_hw_v2.c
index 7938b96..0a49437 100644
--- a/providers/hns/hns_roce_u_hw_v2.c
+++ b/providers/hns/hns_roce_u_hw_v2.c
@@ -273,6 +273,159 @@ static void hns_roce_v2_clear_qp(struct hns_roce_context *ctx, uint32_t qpn)
 static int hns_roce_u_v2_modify_qp(struct ibv_qp *qp, struct ibv_qp_attr *attr,
                                    int attr_mask);
 
+static int hns_roce_flush_cqe(struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
+{
+    struct ibv_qp_attr attr;
+    int attr_mask;
+    int ret;
+
+    if ((wc->status != IBV_WC_SUCCESS) &&
+        (wc->status != IBV_WC_WR_FLUSH_ERR)) {
+        attr_mask = IBV_QP_STATE;
+        attr.qp_state = IBV_QPS_ERR;
+        ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
+                                      &attr, attr_mask);
+        if (ret) {
+            fprintf(stderr, PFX "failed to modify qp!\n");
+            return ret;
+        }
+        (*cur_qp)->ibv_qp.state = IBV_QPS_ERR;
+    }
+
+    return V2_CQ_OK;
+}
+
+static void hns_roce_v2_get_opcode_from_sender(struct hns_roce_v2_cqe *cqe,
+                                               struct ibv_wc *wc)
+{
+    /* Get opcode and flag before update the tail point for send */
+    switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
+            CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
+    case HNS_ROCE_SQ_OP_SEND:
+        wc->opcode = IBV_WC_SEND;
+        wc->wc_flags = 0;
+        break;
+    case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
+        wc->opcode = IBV_WC_SEND;
+        wc->wc_flags = IBV_WC_WITH_IMM;
+        break;
+    case HNS_ROCE_SQ_OP_SEND_WITH_INV:
+        wc->opcode = IBV_WC_SEND;
+        break;
+    case HNS_ROCE_SQ_OP_RDMA_READ:
+        wc->opcode = IBV_WC_RDMA_READ;
+        wc->byte_len = le32toh(cqe->byte_cnt);
+        wc->wc_flags = 0;
+        break;
+    case HNS_ROCE_SQ_OP_RDMA_WRITE:
+        wc->opcode = IBV_WC_RDMA_WRITE;
+        wc->wc_flags = 0;
+        break;
+
+    case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
+        wc->opcode = IBV_WC_RDMA_WRITE;
+        wc->wc_flags = IBV_WC_WITH_IMM;
+        break;
+    case HNS_ROCE_SQ_OP_LOCAL_INV:
+        wc->opcode = IBV_WC_LOCAL_INV;
+        wc->wc_flags = IBV_WC_WITH_INV;
+        break;
+    case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
+        wc->opcode = IBV_WC_COMP_SWAP;
+        wc->byte_len = 8;
+        wc->wc_flags = 0;
+        break;
+    case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
+        wc->opcode = IBV_WC_FETCH_ADD;
+        wc->byte_len = 8;
+        wc->wc_flags = 0;
+        break;
+    case HNS_ROCE_SQ_OP_BIND_MW:
+        wc->opcode = IBV_WC_BIND_MW;
+        wc->wc_flags = 0;
+        break;
+    default:
+        wc->status = IBV_WC_GENERAL_ERR;
+        wc->wc_flags = 0;
+        break;
+    }
+}
+
+static void hns_roce_v2_get_opcode_from_receiver(struct hns_roce_v2_cqe *cqe,
+                                                 struct ibv_wc *wc,
+                                                 uint32_t opcode)
+{
+    switch (opcode) {
+    case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
+        wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
+        wc->wc_flags = IBV_WC_WITH_IMM;
+        wc->imm_data = htobe32(le32toh(cqe->immtdata));
+        break;
+    case HNS_ROCE_RECV_OP_SEND:
+        wc->opcode = IBV_WC_RECV;
+        wc->wc_flags = 0;
+        break;
+    case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
+        wc->opcode = IBV_WC_RECV;
+        wc->wc_flags = IBV_WC_WITH_IMM;
+        wc->imm_data = htobe32(le32toh(cqe->immtdata));
+        break;
+    case HNS_ROCE_RECV_OP_SEND_WITH_INV:
+        wc->opcode = IBV_WC_RECV;
+        wc->wc_flags = IBV_WC_WITH_INV;
+        wc->invalidated_rkey = le32toh(cqe->rkey);
+        break;
+    default:
+        wc->status = IBV_WC_GENERAL_ERR;
+        break;
+    }
+}
+
+static int hns_roce_handle_recv_inl_wqe(struct hns_roce_v2_cqe *cqe,
+                                        struct hns_roce_qp **cur_qp,
+                                        struct ibv_wc *wc, uint32_t opcode)
+{
+    if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
+         (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
+        (opcode == HNS_ROCE_RECV_OP_SEND ||
+         opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
+         opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
+        (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
+        struct hns_roce_rinl_sge *sge_list;
+        uint32_t wr_num, wr_cnt, sge_num, data_len;
+        uint8_t *wqe_buf;
+        uint32_t sge_cnt, size;
+
+        wr_num = (uint16_t)roce_get_field(cqe->byte_4,
+                                          CQE_BYTE_4_WQE_IDX_M,
+                                          CQE_BYTE_4_WQE_IDX_S) & 0xffff;
+        wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
+
+        sge_list = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
+        sge_num = (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
+        wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
+        data_len = wc->byte_len;
+
+        for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
+             sge_cnt++) {
+            size = sge_list[sge_cnt].len < data_len ?
+                   sge_list[sge_cnt].len : data_len;
+
+            memcpy((void *)sge_list[sge_cnt].addr,
+                   (void *)wqe_buf, size);
+            data_len -= size;
+            wqe_buf += size;
+        }
+
+        if (data_len) {
+            wc->status = IBV_WC_LOC_LEN_ERR;
+            return V2_CQ_POLL_ERR;
+        }
+    }
+
+    return V2_CQ_OK;
+}
+
 static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
                                 struct hns_roce_qp **cur_qp, struct ibv_wc *wc)
 {
@@ -282,11 +435,8 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
     uint32_t local_qpn;
     struct hns_roce_wq *wq = NULL;
     struct hns_roce_v2_cqe *cqe = NULL;
-    struct hns_roce_rinl_sge *sge_list;
     struct hns_roce_srq *srq = NULL;
     uint32_t opcode;
-    struct ibv_qp_attr attr;
-    int attr_mask;
     int ret;
 
     /* According to CI, find the relative cqe */
@@ -361,18 +511,7 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
     if (roce_get_field(cqe->byte_4, CQE_BYTE_4_STATUS_M,
                        CQE_BYTE_4_STATUS_S) != HNS_ROCE_V2_CQE_SUCCESS) {
         hns_roce_v2_handle_error_cqe(cqe, wc);
-
-        /* flush cqe */
-        if ((wc->status != IBV_WC_SUCCESS) &&
-            (wc->status != IBV_WC_WR_FLUSH_ERR)) {
-            attr_mask = IBV_QP_STATE;
-            attr.qp_state = IBV_QPS_ERR;
-            ret = hns_roce_u_v2_modify_qp(&(*cur_qp)->ibv_qp,
-                                          &attr, attr_mask);
-            if (ret)
-                return ret;
-        }
-        return V2_CQ_OK;
+        return hns_roce_flush_cqe(cur_qp, wc);
     }
 
     wc->status = IBV_WC_SUCCESS;
@@ -382,132 +521,19 @@ static int hns_roce_v2_poll_one(struct hns_roce_cq *cq,
      * information of wc
      */
     if (is_send) {
-        /* Get opcode and flag before update the tail point for send */
-        switch (roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-            CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK) {
-        case HNS_ROCE_SQ_OP_SEND:
-            wc->opcode = IBV_WC_SEND;
-            wc->wc_flags = 0;
-            break;
-
-        case HNS_ROCE_SQ_OP_SEND_WITH_IMM:
-            wc->opcode = IBV_WC_SEND;
-            wc->wc_flags = IBV_WC_WITH_IMM;
-            break;
-
-        case HNS_ROCE_SQ_OP_SEND_WITH_INV:
-            wc->opcode = IBV_WC_SEND;
-            break;
-
-        case HNS_ROCE_SQ_OP_RDMA_READ:
-            wc->opcode = IBV_WC_RDMA_READ;
-            wc->byte_len = le32toh(cqe->byte_cnt);
-            wc->wc_flags = 0;
-            break;
-
-        case HNS_ROCE_SQ_OP_RDMA_WRITE:
-            wc->opcode = IBV_WC_RDMA_WRITE;
-            wc->wc_flags = 0;
-            break;
-
-        case HNS_ROCE_SQ_OP_RDMA_WRITE_WITH_IMM:
-            wc->opcode = IBV_WC_RDMA_WRITE;
-            wc->wc_flags = IBV_WC_WITH_IMM;
-            break;
-        case HNS_ROCE_SQ_OP_LOCAL_INV:
-            wc->opcode = IBV_WC_LOCAL_INV;
-            wc->wc_flags = IBV_WC_WITH_INV;
-            break;
-        case HNS_ROCE_SQ_OP_ATOMIC_COMP_AND_SWAP:
-            wc->opcode = IBV_WC_COMP_SWAP;
-            wc->byte_len = 8;
-            wc->wc_flags = 0;
-            break;
-        case HNS_ROCE_SQ_OP_ATOMIC_FETCH_AND_ADD:
-            wc->opcode = IBV_WC_FETCH_ADD;
-            wc->byte_len = 8;
-            wc->wc_flags = 0;
-            break;
-        case HNS_ROCE_SQ_OP_BIND_MW:
-            wc->opcode = IBV_WC_BIND_MW;
-            wc->wc_flags = 0;
-            break;
-        default:
-            wc->status = IBV_WC_GENERAL_ERR;
-            wc->wc_flags = 0;
-            break;
-        }
+        hns_roce_v2_get_opcode_from_sender(cqe, wc);
     } else {
         /* Get opcode and flag in rq&srq */
         wc->byte_len = le32toh(cqe->byte_cnt);
         opcode = roce_get_field(cqe->byte_4, CQE_BYTE_4_OPCODE_M,
-                   CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
-        switch (opcode) {
-        case HNS_ROCE_RECV_OP_RDMA_WRITE_IMM:
-            wc->opcode = IBV_WC_RECV_RDMA_WITH_IMM;
-            wc->wc_flags = IBV_WC_WITH_IMM;
-            wc->imm_data = htobe32(le32toh(cqe->immtdata));
-            break;
-
-        case HNS_ROCE_RECV_OP_SEND:
-            wc->opcode = IBV_WC_RECV;
-            wc->wc_flags = 0;
-            break;
-
-        case HNS_ROCE_RECV_OP_SEND_WITH_IMM:
-            wc->opcode = IBV_WC_RECV;
-            wc->wc_flags = IBV_WC_WITH_IMM;
-            wc->imm_data = htobe32(le32toh(cqe->immtdata));
-            break;
-
-        case HNS_ROCE_RECV_OP_SEND_WITH_INV:
-            wc->opcode = IBV_WC_RECV;
-            wc->wc_flags = IBV_WC_WITH_INV;
-            wc->invalidated_rkey = le32toh(cqe->rkey);
-            break;
-        default:
-            wc->status = IBV_WC_GENERAL_ERR;
-            break;
-        }
-
-        if (((*cur_qp)->ibv_qp.qp_type == IBV_QPT_RC ||
-             (*cur_qp)->ibv_qp.qp_type == IBV_QPT_UC) &&
-            (opcode == HNS_ROCE_RECV_OP_SEND ||
-             opcode == HNS_ROCE_RECV_OP_SEND_WITH_IMM ||
-             opcode == HNS_ROCE_RECV_OP_SEND_WITH_INV) &&
-            (roce_get_bit(cqe->byte_4, CQE_BYTE_4_RQ_INLINE_S))) {
-            uint32_t wr_num, wr_cnt, sge_num, data_len;
-            uint8_t *wqe_buf;
-            uint32_t sge_cnt, size;
-
-            wr_num = (uint16_t)roce_get_field(cqe->byte_4,
-                                CQE_BYTE_4_WQE_IDX_M,
-                                CQE_BYTE_4_WQE_IDX_S) & 0xffff;
-            wr_cnt = wr_num & ((*cur_qp)->rq.wqe_cnt - 1);
-
-            sge_list =
-                (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sg_list;
-            sge_num =
-                (*cur_qp)->rq_rinl_buf.wqe_list[wr_cnt].sge_cnt;
-            wqe_buf = (uint8_t *)get_recv_wqe_v2(*cur_qp, wr_cnt);
-            data_len = wc->byte_len;
-
-            for (sge_cnt = 0; (sge_cnt < sge_num) && (data_len);
-                 sge_cnt++) {
-                size = sge_list[sge_cnt].len < data_len ?
-                       sge_list[sge_cnt].len : data_len;
-
-                memcpy((void *)sge_list[sge_cnt].addr,
-                       (void *)wqe_buf, size);
-                data_len -= size;
-                wqe_buf += size;
-            }
-
-            if (data_len) {
-                wc->status = IBV_WC_LOC_LEN_ERR;
-                return V2_CQ_POLL_ERR;
-            }
-        }
+                    CQE_BYTE_4_OPCODE_S) & HNS_ROCE_V2_CQE_OPCODE_MASK;
+        hns_roce_v2_get_opcode_from_receiver(cqe, wc, opcode);
+
+        ret = hns_roce_handle_recv_inl_wqe(cqe, cur_qp, wc, opcode);
+        if (ret) {
+            fprintf(stderr,
+                    PFX "failed to handle recv inline wqe!\n");
+            return ret;
+        }
     }
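The scatter loop that moved into hns_roce_handle_recv_inl_wqe() can be
exercised in isolation with a sketch like the one below (hypothetical
types; the real code reads the SGE list and WQE buffer from the QP's
inline-receive bookkeeping): copy the inlined payload from the receive
WQE buffer into the posted SGEs, and fail if it does not fit.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct sketch_sge {
    void    *addr;
    uint32_t len;
};

static int scatter_inline(const uint8_t *wqe_buf, uint32_t data_len,
                          struct sketch_sge *sge_list, uint32_t sge_num)
{
    uint32_t i, size;

    for (i = 0; i < sge_num && data_len; i++) {
        /* fill each SGE up to its capacity, then move on */
        size = sge_list[i].len < data_len ? sge_list[i].len : data_len;
        memcpy(sge_list[i].addr, wqe_buf, size);
        data_len -= size;
        wqe_buf += size;
    }

    return data_len ? -1 : 0;   /* leftover bytes -> IBV_WC_LOC_LEN_ERR */
}

int main(void)
{
    uint8_t payload[8] = "inline!";
    uint8_t dst_a[4], dst_b[4];
    struct sketch_sge sges[] = {
        { dst_a, sizeof(dst_a) },
        { dst_b, sizeof(dst_b) },
    };

    printf("%d\n", scatter_inline(payload, sizeof(payload), sges, 2)); /*  0 */
    printf("%d\n", scatter_inline(payload, sizeof(payload), sges, 1)); /* -1 */
    return 0;
}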
From patchwork Thu Feb 21 14:49:48 2019
X-Patchwork-Submitter: Lijun Ou
X-Patchwork-Id: 10824153
From: Lijun Ou
Subject: [PATCH V3 rdma-core 5/5] libhns: Bugfix for using buffer length
Date: Thu, 21 Feb 2019 22:49:48 +0800
Message-ID: <1550760588-204074-6-git-send-email-oulijun@huawei.com>
In-Reply-To: <1550760588-204074-1-git-send-email-oulijun@huawei.com>

ibv_dontfork_range() should be passed the buffer length that was aligned
up from the input size, not the raw input size, so that it covers the
whole mmap()ed region.

Fixes: c24583975044 ("libhns: Add verbs of qp support")
Signed-off-by: Lijun Ou
---
 providers/hns/hns_roce_u_buf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/providers/hns/hns_roce_u_buf.c b/providers/hns/hns_roce_u_buf.c
index f92ea65..27ed90c 100644
--- a/providers/hns/hns_roce_u_buf.c
+++ b/providers/hns/hns_roce_u_buf.c
@@ -46,7 +46,7 @@ int hns_roce_alloc_buf(struct hns_roce_buf *buf, unsigned int size,
     if (buf->buf == MAP_FAILED)
         return errno;
 
-    ret = ibv_dontfork_range(buf->buf, size);
+    ret = ibv_dontfork_range(buf->buf, buf->length);
     if (ret)
         munmap(buf->buf, buf->length);
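The mismatch this fixes can be shown arithmetically with a small
stand-alone program; align_up() here is a hypothetical stand-in for the
align() helper used by hns_roce_alloc_buf().

#include <stdio.h>

static unsigned int align_up(unsigned int v, unsigned int a)
{
    return (v + a - 1) & ~(a - 1);  /* a must be a power of two */
}

int main(void)
{
    unsigned int page_size = 4096;
    unsigned int size = 5000;                       /* caller's request */
    unsigned int length = align_up(size, page_size); /* what mmap() got */

    /* Marking only `size` bytes would leave `length - size` bytes of
     * the mapping still subject to copy-on-write after fork(), which
     * is why the dontfork call must use buf->length. */
    printf("requested %u, mapped %u, uncovered %u\n",
           size, length, length - size);
    return 0;
}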