From patchwork Wed May 5 17:10:53 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12240775
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma
Subject: [V2 rdma-core 1/4] bnxt_re/lib: Check AH handler validity before use
Date: Wed, 5 May 2021 22:40:53 +0530
Message-Id: <20210505171056.514204-2-devesh.sharma@broadcom.com>
In-Reply-To: <20210505171056.514204-1-devesh.sharma@broadcom.com>
References: <20210505171056.514204-1-devesh.sharma@broadcom.com>

The provider library should check the validity of the AH handle before referencing it. Fix the AH validity check when initializing the UD SQE from the AH.
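The ordering this patch enforces — validate the handle first, touch dependent SQE fields only afterwards — can be sketched in isolation. This is a minimal, self-contained sketch with hypothetical types (`ud_wr`, `ud_sqe`, `build_ud_sqe` are illustrative stand-ins, not the rdma-core structures):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-ins for the provider's WR and SQE layouts. */
struct ud_wr { void *ah; uint32_t remote_qkey; uint32_t remote_qpn; };
struct ud_sqe { uint32_t qkey; uint32_t dst_qp; uint32_t avid; };

static int build_ud_sqe(struct ud_sqe *sqe, const struct ud_wr *wr)
{
	if (!wr->ah)		/* reject before dereferencing anything */
		return -22;	/* -EINVAL */
	sqe->qkey = wr->remote_qkey;
	sqe->dst_qp = wr->remote_qpn;
	/* stand-in for ah->avid; only the low 20 bits are valid */
	sqe->avid = *(uint32_t *)wr->ah & 0xFFFFF;
	return 0;
}
```

The point of the reordering in the real patch is the same: with the NULL check hoisted above every use, a bad work request fails cleanly with `-EINVAL` instead of writing partial UD fields into the queue entry.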
Fixes: 60ce22c59eaa ("Enable-UD-control-path-and-wqe-posting")
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/verbs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index ca561662..a015bed7 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1193,13 +1193,13 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 	int len;

 	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
-	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
-	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	if (!wr->wr.ud.ah) {
 		len = -EINVAL;
 		goto bail;
 	}
 	ah = to_bnxt_re_ah(wr->wr.ud.ah);
+	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
+	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	sqe->avid = htole32(ah->avid & 0xFFFFF);
 bail:
 	return len;

From patchwork Wed May 5 17:10:54 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12240777
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma
Subject: [V2 rdma-core 2/4] bnxt_re/lib: align base sq entry structure to 16B boundary
Date: Wed, 5 May 2021 22:40:54 +0530
Message-Id: <20210505171056.514204-3-devesh.sharma@broadcom.com>
In-Reply-To: <20210505171056.514204-1-devesh.sharma@broadcom.com>
References: <20210505171056.514204-1-devesh.sharma@broadcom.com>

Break the send_wqe and recv_wqe hardware-specific interface into smaller chunks instead of fixed 128B chunks. This makes both post-send and post-recv flexible.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/bnxt_re-abi.h | 24 ++++++++++--------------
 providers/bnxt_re/verbs.c       | 26 ++++++++++++--------------
 2 files changed, 22 insertions(+), 28 deletions(-)

diff --git a/providers/bnxt_re/bnxt_re-abi.h b/providers/bnxt_re/bnxt_re-abi.h
index c6998e85..c82019e8 100644
--- a/providers/bnxt_re/bnxt_re-abi.h
+++ b/providers/bnxt_re/bnxt_re-abi.h
@@ -234,9 +234,16 @@ struct bnxt_re_term_cqe {
 	__le64 rsvd1;
 };

+union lower_shdr {
+	__le64 qkey_len;
+	__le64 lkey_plkey;
+	__le64 rva;
+};
+
 struct bnxt_re_bsqe {
 	__le32 rsv_ws_fl_wt;
 	__le32 key_immd;
+	union lower_shdr lhdr;
 };

 struct bnxt_re_psns {
@@ -262,42 +269,33 @@ struct bnxt_re_sge {
 #define BNXT_RE_MAX_INLINE_SIZE 0x60

 struct bnxt_re_send {
-	__le32 length;
-	__le32 qkey;
 	__le32 dst_qp;
 	__le32 avid;
 	__le64 rsvd;
 };

 struct bnxt_re_raw {
-	__le32 length;
-	__le32 rsvd1;
 	__le32 cfa_meta;
 	__le32 rsvd2;
 	__le64 rsvd3;
 };

 struct bnxt_re_rdma {
-	__le32 length;
-	__le32 rsvd1;
 	__le64 rva;
 	__le32 rkey;
 	__le32 rsvd2;
 };

 struct bnxt_re_atomic {
-	__le64 rva;
 	__le64 swp_dt;
 	__le64 cmp_dt;
 };
 struct bnxt_re_inval {
-	__le64 rsvd[3];
+	__le64 rsvd[2];
 };

 struct bnxt_re_bind {
-	__le32 plkey;
-	__le32 lkey;
 	__le64 va;
 	__le64 len; /* only 40 bits are valid */
 };
@@ -305,17 +303,15 @@ struct bnxt_re_bind {
 struct bnxt_re_brqe {
 	__le32 rsv_ws_fl_wt;
 	__le32 rsvd;
+	__le32 wrid;
+	__le32 rsvd1;
 };

 struct bnxt_re_rqe {
-	__le32 wrid;
-	__le32 rsvd1;
 	__le64 rsvd[2];
 };

 struct bnxt_re_srqe {
-	__le32 srq_tag; /* 20 bits are valid */
-	__le32 rsvd1;
 	__le64 rsvd[2];
 };
 #endif

diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index a015bed7..760e840a 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1150,17 +1150,16 @@ static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, struct ibv_send_wr *wr,
 static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
 				  struct ibv_send_wr *wr, uint8_t is_inline)
 {
-	struct bnxt_re_bsqe *hdr = wqe;
-	struct bnxt_re_send *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
 	struct bnxt_re_sge *sge = ((void *)wqe + bnxt_re_get_sqe_hdr_sz());
+	struct bnxt_re_bsqe *hdr = wqe;
 	uint32_t wrlen, hdrval = 0;
-	int len;
 	uint8_t opcode, qesize;
+	int len;

 	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, is_inline);
 	if (len < 0)
 		return len;
-	sqe->length = htole32(len);
+	hdr->lhdr.qkey_len = htole64((uint64_t)len);

 	/* Fill Header */
 	opcode = bnxt_re_ibv_to_bnxt_wr_opcd(wr->opcode);
@@ -1189,7 +1188,9 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 				struct ibv_send_wr *wr, uint8_t is_inline)
 {
 	struct bnxt_re_send *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
+	struct bnxt_re_bsqe *hdr = wqe;
 	struct bnxt_re_ah *ah;
+	uint64_t qkey;
 	int len;

 	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
@@ -1198,7 +1199,8 @@ static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
 		goto bail;
 	}
 	ah = to_bnxt_re_ah(wr->wr.ud.ah);
-	sqe->qkey = htole32(wr->wr.ud.remote_qkey);
+	qkey = wr->wr.ud.remote_qkey;
+	hdr->lhdr.qkey_len |= htole64(qkey << 32);
 	sqe->dst_qp = htole32(wr->wr.ud.remote_qpn);
 	sqe->avid = htole32(ah->avid & 0xFFFFF);
 bail:
@@ -1228,7 +1230,7 @@ static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp, void *wqe,
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
-	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
 	sqe->swp_dt = htole64(wr->wr.atomic.swap);
@@ -1245,7 +1247,7 @@ static int bnxt_re_build_fna_sqe(struct bnxt_re_qp *qp, void *wqe,
 	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
-	sqe->rva = htole64(wr->wr.atomic.remote_addr);
+	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);

 	return len;
@@ -1368,13 +1370,11 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 			     void *rqe)
 {
 	struct bnxt_re_brqe *hdr = rqe;
-	struct bnxt_re_rqe *rwr;
-	struct bnxt_re_sge *sge;
 	struct bnxt_re_wrid *wrid;
+	struct bnxt_re_sge *sge;
 	int wqe_sz, len;
 	uint32_t hdrval;

-	rwr = (rqe + sizeof(struct bnxt_re_brqe));
 	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
 	wrid = &qp->rwrid[qp->rqq->tail];
@@ -1388,7 +1388,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	rwr->wrid = htole32(qp->rqq->tail);
+	hdr->wrid = htole32(qp->rqq->tail);

 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;
@@ -1586,13 +1586,11 @@ static int bnxt_re_build_srqe(struct bnxt_re_srq *srq,
 			      struct ibv_recv_wr *wr, void *srqe)
 {
 	struct bnxt_re_brqe *hdr = srqe;
-	struct bnxt_re_rqe *rwr;
 	struct bnxt_re_sge *sge;
 	struct bnxt_re_wrid *wrid;
 	int wqe_sz, len, next;
 	uint32_t hdrval = 0;

-	rwr = (srqe + sizeof(struct bnxt_re_brqe));
 	sge = (srqe + bnxt_re_get_srqe_hdr_sz());
 	next = srq->start_idx;
 	wrid = &srq->srwrid[next];
@@ -1602,7 +1600,7 @@ static int bnxt_re_build_srqe(struct bnxt_re_srq *srq,
 	wqe_sz = wr->num_sge + (bnxt_re_get_srqe_hdr_sz() >> 4); /* 16B align */
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	rwr->wrid = htole32((uint32_t)next);
+	hdr->wrid = htole32((uint32_t)next);

 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;

From patchwork Wed May 5 17:10:55 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12240779
From: Devesh Sharma
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma
Subject: [V2 rdma-core 3/4] bnxt_re/lib: consolidate hwque and swque in common structure
Date: Wed, 5 May 2021 22:40:55 +0530
Message-Id: <20210505171056.514204-4-devesh.sharma@broadcom.com>
In-Reply-To: <20210505171056.514204-1-devesh.sharma@broadcom.com>
References: <20210505171056.514204-1-devesh.sharma@broadcom.com>

Consolidate the hardware queue (hwque) and software queue (swque) under a single bookkeeping data structure, bnxt_re_joint_queue. This eases hardware and software queue management and further reduces the size of the bnxt_re_qp structure.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/db.c    |   6 +-
 providers/bnxt_re/main.h  |  13 ++--
 providers/bnxt_re/verbs.c | 131 +++++++++++++++++++++-----------------
 3 files changed, 85 insertions(+), 65 deletions(-)

diff --git a/providers/bnxt_re/db.c b/providers/bnxt_re/db.c
index 85da182e..3c797573 100644
--- a/providers/bnxt_re/db.c
+++ b/providers/bnxt_re/db.c
@@ -63,7 +63,8 @@ void bnxt_re_ring_rq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;

-	bnxt_re_init_db_hdr(&hdr, qp->rqq->tail, qp->qpid, BNXT_RE_QUE_TYPE_RQ);
+	bnxt_re_init_db_hdr(&hdr, qp->jrqq->hwque->tail,
+			    qp->qpid, BNXT_RE_QUE_TYPE_RQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }

@@ -71,7 +72,8 @@ void bnxt_re_ring_sq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;

-	bnxt_re_init_db_hdr(&hdr, qp->sqq->tail, qp->qpid, BNXT_RE_QUE_TYPE_SQ);
+	bnxt_re_init_db_hdr(&hdr, qp->jsqq->hwque->tail,
+			    qp->qpid, BNXT_RE_QUE_TYPE_SQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }

diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index 368297e6..d470e30a 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -120,13 +120,18 @@ struct bnxt_re_srq {
 	bool arm_req;
 };

+struct bnxt_re_joint_queue {
+	struct bnxt_re_queue *hwque;
+	struct bnxt_re_wrid *swque;
+	uint32_t start_idx;
+	uint32_t last_idx;
+};
+
 struct bnxt_re_qp {
 	struct ibv_qp ibvqp;
 	struct bnxt_re_chip_ctx *cctx;
-	struct bnxt_re_queue *sqq;
-	struct bnxt_re_wrid *swrid;
-	struct bnxt_re_queue *rqq;
-	struct bnxt_re_wrid *rwrid;
+	struct bnxt_re_joint_queue *jsqq;
+	struct bnxt_re_joint_queue *jrqq;
 	struct bnxt_re_srq *srq;
 	struct bnxt_re_cq *scq;
 	struct bnxt_re_cq *rcq;

diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 760e840a..59a57f72 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -242,7 +242,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 				     struct bnxt_re_bcqe *hdr,
 				     struct bnxt_re_req_cqe *scqe, int *cnt)
 {
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_context *cntx;
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
@@ -252,7 +252,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 	scq = to_bnxt_re_cq(qp->ibvqp.send_cq);
 	cntx = to_bnxt_re_context(scq->ibvcq.context);
-	swrid = &qp->swrid[head];
+	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;

 	*cnt = 1;
@@ -267,7 +267,7 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 		  BNXT_RE_PSNS_OPCD_MASK;
 	ibvwc->byte_len = 0;

-	bnxt_re_incr_head(qp->sqq);
+	bnxt_re_incr_head(sq);

 	if (qp->qpst != IBV_QPS_ERR)
 		qp->qpst = IBV_QPS_ERR;
@@ -284,14 +284,14 @@ static uint8_t bnxt_re_poll_success_scqe(struct bnxt_re_qp *qp,
 					 struct bnxt_re_req_cqe *scqe, int *cnt)
 {
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
-	uint8_t pcqe = false;
 	uint32_t head = sq->head;
+	uint8_t pcqe = false;
 	uint32_t cindx;

-	swrid = &qp->swrid[head];
+	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;
 	cindx = le32toh(scqe->con_indx);
@@ -361,8 +361,8 @@ static int bnxt_re_poll_err_rcqe(struct bnxt_re_qp *qp, struct ibv_wc *ibvwc,
 	cntx = to_bnxt_re_context(rcq->ibvcq.context);

 	if (!qp->srq) {
-		rq = qp->rqq;
-		ibvwc->wr_id = qp->rwrid[rq->head].wrid;
+		rq = qp->jrqq->hwque;
+		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;
@@ -423,8 +423,8 @@ static void bnxt_re_poll_success_rcqe(struct bnxt_re_qp *qp,
 	rcqe = cqe;
 	if (!qp->srq) {
-		rq = qp->rqq;
-		ibvwc->wr_id = qp->rwrid[rq->head].wrid;
+		rq = qp->jrqq->hwque;
+		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;
@@ -648,13 +648,13 @@ static int bnxt_re_poll_flush_wqes(struct bnxt_re_cq *cq,
 		if (sq_list) {
 			qp = container_of(cur, struct bnxt_re_qp, snode);
-			que = qp->sqq;
-			wridp = qp->swrid;
+			que = qp->jsqq->hwque;
+			wridp = qp->jsqq->swque;
 		} else {
 			qp = container_of(cur, struct bnxt_re_qp, rnode);
-			que = qp->rqq;
-			wridp = qp->rwrid;
+			que = qp->jrqq->hwque;
+			wridp = qp->jrqq->swque;
 		}
 		if (bnxt_re_is_que_empty(que))
 			continue;
@@ -802,55 +802,66 @@ static int bnxt_re_check_qp_limits(struct bnxt_re_context *cntx,
 static void bnxt_re_free_queue_ptr(struct bnxt_re_qp *qp)
 {
-	if (qp->rqq)
-		free(qp->rqq);
-	if (qp->sqq)
-		free(qp->sqq);
+	free(qp->jrqq->hwque);
+	free(qp->jrqq);
+	free(qp->jsqq->hwque);
+	free(qp->jsqq);
 }

 static int bnxt_re_alloc_queue_ptr(struct bnxt_re_qp *qp,
 				   struct ibv_qp_init_attr *attr)
 {
-	qp->sqq = calloc(1, sizeof(struct bnxt_re_queue));
-	if (!qp->sqq)
-		return -ENOMEM;
+	int rc = -ENOMEM;
+
+	qp->jsqq = calloc(1, sizeof(struct bnxt_re_joint_queue));
+	if (!qp->jsqq)
+		return rc;
+	qp->jsqq->hwque = calloc(1, sizeof(struct bnxt_re_queue));
+	if (!qp->jsqq->hwque)
+		goto fail;
+
 	if (!attr->srq) {
-		qp->rqq = calloc(1, sizeof(struct bnxt_re_queue));
-		if (!qp->rqq) {
-			free(qp->sqq);
-			return -ENOMEM;
+		qp->jrqq = calloc(1, sizeof(struct bnxt_re_joint_queue));
+		if (!qp->jrqq) {
+			free(qp->jsqq);
+			goto fail;
 		}
+		qp->jrqq->hwque = calloc(1, sizeof(struct bnxt_re_queue));
+		if (!qp->jrqq->hwque)
+			goto fail;
 	}

 	return 0;
+fail:
+	bnxt_re_free_queue_ptr(qp);
+	return rc;
 }

 static void bnxt_re_free_queues(struct bnxt_re_qp *qp)
 {
-	if (qp->rqq) {
-		if (qp->rwrid)
-			free(qp->rwrid);
-		pthread_spin_destroy(&qp->rqq->qlock);
-		bnxt_re_free_aligned(qp->rqq);
+	if (qp->jrqq) {
+		free(qp->jrqq->swque);
+		pthread_spin_destroy(&qp->jrqq->hwque->qlock);
+		bnxt_re_free_aligned(qp->jrqq->hwque);
 	}

-	if (qp->swrid)
-		free(qp->swrid);
-	pthread_spin_destroy(&qp->sqq->qlock);
-	bnxt_re_free_aligned(qp->sqq);
+	free(qp->jsqq->swque);
+	pthread_spin_destroy(&qp->jsqq->hwque->qlock);
+	bnxt_re_free_aligned(qp->jsqq->hwque);
 }

 static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 				struct ibv_qp_init_attr *attr,
 				uint32_t pg_size)
 {
 	struct bnxt_re_psns_ext *psns_ext;
+	struct bnxt_re_wrid *swque;
 	struct bnxt_re_queue *que;
 	struct bnxt_re_psns *psns;
 	uint32_t psn_depth;
 	uint32_t psn_size;
 	int ret, indx;

-	que = qp->sqq;
+	que = qp->jsqq->hwque;
 	que->stride = bnxt_re_get_sqe_sz();
 	/* 8916 adjustment */
 	que->depth = roundup_pow_of_two(attr->cap.max_send_wr + 1 +
@@ -870,7 +881,7 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	 * is UD-qp. UD-qp use this memory to maintain WC-opcode.
 	 * See definition of bnxt_re_fill_psns() for the use case.
 	 */
-	ret = bnxt_re_alloc_aligned(qp->sqq, pg_size);
+	ret = bnxt_re_alloc_aligned(que, pg_size);
 	if (ret)
 		return ret;
 	/* exclude psns depth*/
@@ -878,36 +889,38 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	/* start of spsn space sizeof(struct bnxt_re_psns) each. */
 	psns = (que->va + que->stride * que->depth);
 	psns_ext = (struct bnxt_re_psns_ext *)psns;
-	pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
-	qp->swrid = calloc(que->depth, sizeof(struct bnxt_re_wrid));
-	if (!qp->swrid) {
+	swque = calloc(que->depth, sizeof(struct bnxt_re_wrid));
+	if (!swque) {
 		ret = -ENOMEM;
 		goto fail;
 	}

 	for (indx = 0 ; indx < que->depth; indx++, psns++)
-		qp->swrid[indx].psns = psns;
+		swque[indx].psns = psns;
 	if (bnxt_re_is_chip_gen_p5(qp->cctx)) {
 		for (indx = 0 ; indx < que->depth; indx++, psns_ext++) {
-			qp->swrid[indx].psns_ext = psns_ext;
-			qp->swrid[indx].psns = (struct bnxt_re_psns *)psns_ext;
+			swque[indx].psns_ext = psns_ext;
+			swque[indx].psns = (struct bnxt_re_psns *)psns_ext;
 		}
 	}
+	qp->jsqq->swque = swque;
 	qp->cap.max_swr = que->depth;
+	pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);

-	if (qp->rqq) {
-		que = qp->rqq;
+	if (qp->jrqq) {
+		que = qp->jrqq->hwque;
 		que->stride = bnxt_re_get_rqe_sz();
 		que->depth = roundup_pow_of_two(attr->cap.max_recv_wr + 1);
 		que->diff = que->depth - attr->cap.max_recv_wr;
-		ret = bnxt_re_alloc_aligned(qp->rqq, pg_size);
+		ret = bnxt_re_alloc_aligned(que, pg_size);
 		if (ret)
 			goto fail;
 		pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
 		/* For RQ only bnxt_re_wri.wrid is used. */
-		qp->rwrid = calloc(que->depth, sizeof(struct bnxt_re_wrid));
-		if (!qp->rwrid) {
+		qp->jrqq->swque = calloc(que->depth,
+					 sizeof(struct bnxt_re_wrid));
+		if (!qp->jrqq->swque) {
 			ret = -ENOMEM;
 			goto fail;
 		}
@@ -946,8 +959,8 @@ struct ibv_qp *bnxt_re_create_qp(struct ibv_pd *ibvpd,
 		goto failq;
 	/* Fill ibv_cmd */
 	cap = &qp->cap;
-	req.qpsva = (uintptr_t)qp->sqq->va;
-	req.qprva = qp->rqq ? (uintptr_t)qp->rqq->va : 0;
+	req.qpsva = (uintptr_t)qp->jsqq->hwque->va;
+	req.qprva = qp->jrqq ? (uintptr_t)qp->jrqq->hwque->va : 0;
 	req.qp_handle = (uintptr_t)qp;
 	if (ibv_cmd_create_qp(ibvpd, &qp->ibvqp, attr, &req.ibv_cmd, sizeof(req),
@@ -995,11 +1008,11 @@ int bnxt_re_modify_qp(struct ibv_qp *ibvqp, struct ibv_qp_attr *attr,
 		qp->qpst = attr->qp_state;
 		/* transition to reset */
 		if (qp->qpst == IBV_QPS_RESET) {
-			qp->sqq->head = 0;
-			qp->sqq->tail = 0;
-			if (qp->rqq) {
-				qp->rqq->head = 0;
-				qp->rqq->tail = 0;
+			qp->jsqq->hwque->head = 0;
+			qp->jsqq->hwque->tail = 0;
+			if (qp->jrqq) {
+				qp->jrqq->hwque->head = 0;
+				qp->jrqq->hwque->tail = 0;
 			}
 		}
 	}
@@ -1257,7 +1270,7 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		      struct ibv_send_wr **bad)
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
-	struct bnxt_re_queue *sq = qp->sqq;
+	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *wrid;
 	uint8_t is_inline = false;
 	struct bnxt_re_bsqe *hdr;
@@ -1289,7 +1302,7 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		}

 		sqe = (void *)(sq->va + (sq->tail * sq->stride));
-		wrid = &qp->swrid[sq->tail];
+		wrid = &qp->jsqq->swque[sq->tail];

 		memset(sqe, 0, bnxt_re_get_sqe_sz());
 		hdr = sqe;
@@ -1376,7 +1389,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	uint32_t hdrval;

 	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
-	wrid = &qp->rwrid[qp->rqq->tail];
+	wrid = &qp->jrqq->swque[qp->jrqq->hwque->tail];

 	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, false);
 	wqe_sz = wr->num_sge + (bnxt_re_get_rqe_hdr_sz() >> 4); /* 16B align */
@@ -1388,7 +1401,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	hdr->wrid = htole32(qp->rqq->tail);
+	hdr->wrid = htole32(qp->jrqq->hwque->tail);

 	/* Fill wrid */
 	wrid->wrid = wr->wr_id;
@@ -1402,7 +1415,7 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 		      struct ibv_recv_wr **bad)
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
-	struct bnxt_re_queue *rq = qp->rqq;
+	struct bnxt_re_queue *rq = qp->jrqq->hwque;
 	void *rqe;
 	int ret;

From patchwork Wed May 5 17:10:56 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12240781
b=a7Vs/VAja7YLpyZU6hwslhprxT6QIRxYeBzhguWDFR4hO/TKpVA3L8ckOjegIST7Fb gUbYV+zpKCxAjmUi5f1q9mBEzgv7dnZ0RsLdB56tJQClglc4rtxoSX9Zfurc6LiGkNqp KpfrSVg8ATsENNoduvHUjsgwCRzuSzlze7ic0= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version; bh=xovdW97TbBC+dsfTcVdKGImohu2qf4Rmll/wB6h4QwI=; b=Gp8vadw398bZNKKRedBXjaUX6MrnNaJG67fm3toyWWZasi6DewWKrCNBnXfDQKTqJ3 kQNPUWTTIsEiYhXUzRpPdnTOTuKvdP4SOFnXhDzgjupbMFRy59/RUNSKlNYCwFbyP973 739SnX/lSmY7nm6nqUdXYd6JeVp5IUvzYH1iGwrP9yd7oZRR+QNLWjBZA0kjUc8l7p1c g7Y2IjjXCToI8CAnBACPq4e+7ObEDXWfeOyRhUKr2PrLg1pzi0m1KlpsQMQ61xmdUQEK 7J9g9zLSi7TcbGfShWzrlOw6khHc4+u5OVcUtUrtzUpfio+FcJr8mXleSMTywV6ZuSgt U6sQ== X-Gm-Message-State: AOAM530MI2IPMroGjRg1bQhwPE6DFi0aFbgkjjh3QYaNjbLJSYWU7Kda Zj8BhFMkt3GOeYapg7kwrGDoZLkCYuS+tN1PRY0n+nKbh3PvW7srgqYIcfsLVMMEravFbFgKNPS eeLKUJmPSod9Ic9N3H/idHIpmokk0hO4Ijz74smg0Mnoya/YXiiUELMxSujCvJiprl4k7YSN5h+ y3XqgU+99l X-Google-Smtp-Source: ABdhPJzYpvR9zvAtAgO+/JTiY0WHjW8v37aH7HEXEeMHgoZH5dZ2ZLqtgRW9ONMsd+xjxdH36Gx3gg== X-Received: by 2002:a17:90a:414a:: with SMTP id m10mr12127085pjg.63.1620234678914; Wed, 05 May 2021 10:11:18 -0700 (PDT) Received: from dev01.dhcp.broadcom.net ([192.19.234.250]) by smtp.gmail.com with ESMTPSA id u21sm15381614pfm.89.2021.05.05.10.11.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 05 May 2021 10:11:18 -0700 (PDT) From: Devesh Sharma To: linux-rdma@vger.kernel.org Cc: Devesh Sharma Subject: [V2 rdma-core 4/4] bnxt_re/lib: query device attributes only once and store Date: Wed, 5 May 2021 22:40:56 +0530 Message-Id: <20210505171056.514204-5-devesh.sharma@broadcom.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210505171056.514204-1-devesh.sharma@broadcom.com> References: <20210505171056.514204-1-devesh.sharma@broadcom.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org Making a change to query device 
attributes only once during context initialization. The context
structure stores the attributes for future reference. This avoids
multiple user-to-kernel context switches during QP creation.

Fixes: d2745fe2ab86 ("Add support for posting and polling")
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/main.c  | 31 ++++++++++++++++++-------------
 providers/bnxt_re/main.h  |  2 ++
 providers/bnxt_re/verbs.c | 25 +++++++++++--------------
 3 files changed, 31 insertions(+), 27 deletions(-)

diff --git a/providers/bnxt_re/main.c b/providers/bnxt_re/main.c
index a78e6b98..1779e1ec 100644
--- a/providers/bnxt_re/main.c
+++ b/providers/bnxt_re/main.c
@@ -129,10 +129,11 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 						   int cmd_fd, void *private_data)
 {
-	struct ibv_get_context cmd;
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(vdev);
 	struct ubnxt_re_cntx_resp resp;
-	struct bnxt_re_dev *dev = to_bnxt_re_dev(vdev);
 	struct bnxt_re_context *cntx;
+	struct ibv_get_context cmd;
+	int ret;
 
 	cntx = verbs_init_and_alloc_context(vdev, cmd_fd, cntx, ibvctx,
 					    RDMA_DRIVER_BNXT_RE);
@@ -146,9 +147,9 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 
 	cntx->dev_id = resp.dev_id;
 	cntx->max_qp = resp.max_qp;
-	dev->pg_size = resp.pg_size;
-	dev->cqe_size = resp.cqe_sz;
-	dev->max_cq_depth = resp.max_cqd;
+	rdev->pg_size = resp.pg_size;
+	rdev->cqe_size = resp.cqe_sz;
+	rdev->max_cq_depth = resp.max_cqd;
 	if (resp.comp_mask & BNXT_RE_UCNTX_CMASK_HAVE_CCTX) {
 		cntx->cctx.chip_num = resp.chip_id0 & 0xFFFF;
 		cntx->cctx.chip_rev = (resp.chip_id0 >>
@@ -159,7 +160,7 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 	}
 	pthread_spin_init(&cntx->fqlock, PTHREAD_PROCESS_PRIVATE);
 	/* mmap shared page. */
-	cntx->shpg = mmap(NULL, dev->pg_size, PROT_READ | PROT_WRITE,
+	cntx->shpg = mmap(NULL, rdev->pg_size, PROT_READ | PROT_WRITE,
 			  MAP_SHARED, cmd_fd, 0);
 	if (cntx->shpg == MAP_FAILED) {
 		cntx->shpg = NULL;
@@ -168,6 +169,10 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 	pthread_mutex_init(&cntx->shlock, NULL);
 	verbs_set_ops(&cntx->ibvctx, &bnxt_re_cntx_ops);
 
+	cntx->rdev = rdev;
+	ret = ibv_query_device(&cntx->ibvctx.context, &rdev->devattr);
+	if (ret)
+		goto failed;
 
 	return &cntx->ibvctx;
 
@@ -180,19 +185,19 @@ failed:
 static void bnxt_re_free_context(struct ibv_context *ibvctx)
 {
 	struct bnxt_re_context *cntx = to_bnxt_re_context(ibvctx);
-	struct bnxt_re_dev *dev = to_bnxt_re_dev(ibvctx->device);
+	struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibvctx->device);
 
 	/* Unmap if anything device specific was mapped in init_context. */
 	pthread_mutex_destroy(&cntx->shlock);
 	if (cntx->shpg)
-		munmap(cntx->shpg, dev->pg_size);
+		munmap(cntx->shpg, rdev->pg_size);
 	pthread_spin_destroy(&cntx->fqlock);
 
 	/* Un-map DPI only for the first PD that was
 	 * allocated in this context.
 	 */
 	if (cntx->udpi.dbpage && cntx->udpi.dbpage != MAP_FAILED) {
-		munmap(cntx->udpi.dbpage, dev->pg_size);
+		munmap(cntx->udpi.dbpage, rdev->pg_size);
 		cntx->udpi.dbpage = NULL;
 	}
 
@@ -203,13 +208,13 @@ static void bnxt_re_free_context(struct ibv_context *ibvctx)
 static struct verbs_device *
 bnxt_re_device_alloc(struct verbs_sysfs_dev *sysfs_dev)
 {
-	struct bnxt_re_dev *dev;
+	struct bnxt_re_dev *rdev;
 
-	dev = calloc(1, sizeof(*dev));
-	if (!dev)
+	rdev = calloc(1, sizeof(*rdev));
+	if (!rdev)
 		return NULL;
 
-	return &dev->vdev;
+	return &rdev->vdev;
 }
 
 static const struct verbs_device_ops bnxt_re_dev_ops = {
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index d470e30a..a63719e8 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -166,10 +166,12 @@ struct bnxt_re_dev {
 	uint32_t cqe_size;
 	uint32_t max_cq_depth;
+	struct ibv_device_attr devattr;
 };
 
 struct bnxt_re_context {
 	struct verbs_context ibvctx;
+	struct bnxt_re_dev *rdev;
 	uint32_t dev_id;
 	uint32_t max_qp;
 	struct bnxt_re_chip_ctx cctx;
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 59a57f72..fb2cf5ac 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -777,25 +777,22 @@ int bnxt_re_arm_cq(struct ibv_cq *ibvcq, int flags)
 static int bnxt_re_check_qp_limits(struct bnxt_re_context *cntx,
 				   struct ibv_qp_init_attr *attr)
 {
-	struct ibv_device_attr devattr;
-	int ret;
+	struct ibv_device_attr *devattr;
+	struct bnxt_re_dev *rdev;
 
-	ret = bnxt_re_query_device(
-			&cntx->ibvctx.context, NULL,
-			container_of(&devattr, struct ibv_device_attr_ex,
-				     orig_attr),
-			sizeof(devattr));
-	if (ret)
-		return ret;
-	if (attr->cap.max_send_sge > devattr.max_sge)
+	rdev = cntx->rdev;
+	devattr = &rdev->devattr;
+
+	if (attr->cap.max_send_sge > devattr->max_sge)
 		return EINVAL;
-	if (attr->cap.max_recv_sge > devattr.max_sge)
+	if (attr->cap.max_recv_sge > devattr->max_sge)
 		return EINVAL;
 	if (attr->cap.max_inline_data > BNXT_RE_MAX_INLINE_SIZE)
 		return EINVAL;
-
-	if (attr->cap.max_send_wr > devattr.max_qp_wr)
-		attr->cap.max_send_wr = devattr.max_qp_wr;
-	if (attr->cap.max_recv_wr > devattr.max_qp_wr)
-		attr->cap.max_recv_wr = devattr.max_qp_wr;
+	if (attr->cap.max_send_wr > devattr->max_qp_wr)
+		attr->cap.max_send_wr = devattr->max_qp_wr;
+	if (attr->cap.max_recv_wr > devattr->max_qp_wr)
+		attr->cap.max_recv_wr = devattr->max_qp_wr;
 
 	return 0;
 }