From patchwork Thu Jun 10 10:49:06 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12312643
From: Devesh Sharma <devesh.sharma@broadcom.com>
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Subject: [PATCH V4 rdma-core 1/5] Update kernel headers
Date: Thu, 10 Jun 2021 16:19:06 +0530
Message-Id: <20210610104910.1147756-2-devesh.sharma@broadcom.com>
In-Reply-To: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
References: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
To commit ?? ("RDMA/bnxt_re: update ABI to pass wqe-mode to user space").

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 kernel-headers/rdma/bnxt_re-abi.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel-headers/rdma/bnxt_re-abi.h b/kernel-headers/rdma/bnxt_re-abi.h
index dc52e3cf..52205ed2 100644
--- a/kernel-headers/rdma/bnxt_re-abi.h
+++ b/kernel-headers/rdma/bnxt_re-abi.h
@@ -49,7 +49,8 @@
 #define BNXT_RE_CHIP_ID0_CHIP_MET_SFT	0x18

 enum {
-	BNXT_RE_UCNTX_CMASK_HAVE_CCTX = 0x1ULL
+	BNXT_RE_UCNTX_CMASK_HAVE_CCTX = 0x1ULL,
+	BNXT_RE_UCNTX_CMASK_HAVE_MODE = 0x02ULL
 };

 struct bnxt_re_uctx_resp {
@@ -62,6 +63,8 @@ struct bnxt_re_uctx_resp {
 	__aligned_u64 comp_mask;
 	__u32 chip_id0;
 	__u32 chip_id1;
+	__u32 mode;
+	__u32 rsvd1; /* padding */
 };

 /*
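
The comp_mask exchange above is the usual extensible-ABI handshake: the kernel sets a bit for each optional response field it actually filled in, and the library tests that bit before trusting the field, so old kernels and new libraries keep working together. A minimal standalone sketch of the consumer side (the struct and values here are illustrative stand-ins, not the real ucontext exchange):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the flags added by this patch. */
enum {
	BNXT_RE_UCNTX_CMASK_HAVE_CCTX = 0x1ULL,
	BNXT_RE_UCNTX_CMASK_HAVE_MODE = 0x02ULL,
};

struct uctx_resp {		/* illustrative stand-in */
	uint64_t comp_mask;	/* which optional fields are valid */
	uint32_t mode;		/* valid only if HAVE_MODE is set */
};

static uint32_t read_mode(const struct uctx_resp *resp)
{
	/* Default to 0 (static wqe mode) when the kernel is too
	 * old to report a mode at all.
	 */
	if (resp->comp_mask & BNXT_RE_UCNTX_CMASK_HAVE_MODE)
		return resp->mode;
	return 0;
}

int main(void)
{
	struct uctx_resp old_kernel = {
		.comp_mask = BNXT_RE_UCNTX_CMASK_HAVE_CCTX,
	};
	struct uctx_resp new_kernel = {
		.comp_mask = BNXT_RE_UCNTX_CMASK_HAVE_CCTX |
			     BNXT_RE_UCNTX_CMASK_HAVE_MODE,
		.mode = 1,	/* hypothetical: variable wqe mode */
	};

	printf("old kernel -> mode %u\n", read_mode(&old_kernel));
	printf("new kernel -> mode %u\n", read_mode(&new_kernel));
	return 0;
}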

From patchwork Thu Jun 10 10:49:07 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12312645
From: Devesh Sharma <devesh.sharma@broadcom.com>
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Subject: [PATCH V4 rdma-core 2/5] bnxt_re/lib: Read wqe mode from the driver
Date: Thu, 10 Jun 2021 16:19:07 +0530
Message-Id: <20210610104910.1147756-3-devesh.sharma@broadcom.com>
In-Reply-To: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
References: <20210610104910.1147756-1-devesh.sharma@broadcom.com>

During bnxt_re device context creation, read the wqe mode from the driver
and store it in the device context structure and in the QP structure. The
wqe mode is required to switch between the fixed-size and variable-size
wqe formats of the SQ/RQ on gen-p5 or newer devices.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/bnxt_re-abi.h | 5 +++++
 providers/bnxt_re/main.c        | 4 ++++
 providers/bnxt_re/main.h        | 2 ++
 providers/bnxt_re/verbs.c       | 1 +
 4 files changed, 12 insertions(+)

diff --git a/providers/bnxt_re/bnxt_re-abi.h b/providers/bnxt_re/bnxt_re-abi.h
index c82019e8..d138cd9c 100644
--- a/providers/bnxt_re/bnxt_re-abi.h
+++ b/providers/bnxt_re/bnxt_re-abi.h
@@ -196,6 +196,11 @@ enum bnxt_re_ud_cqe_mask {
 	BNXT_RE_UD_CQE_SRCQPLO_SHIFT = 0x30
 };

+enum bnxt_re_modes {
+	BNXT_RE_WQE_MODE_STATIC = 0x00,
+	BNXT_RE_WQE_MODE_VARIABLE = 0x01
+};
+
 struct bnxt_re_db_hdr {
 	__le32 indx;
 	__le32 typ_qid; /* typ: 4, qid:20*/
diff --git a/providers/bnxt_re/main.c b/providers/bnxt_re/main.c
index 1779e1ec..ee9edd7d 100644
--- a/providers/bnxt_re/main.c
+++ b/providers/bnxt_re/main.c
@@ -158,6 +158,10 @@ static struct verbs_context *bnxt_re_alloc_context(struct ibv_device *vdev,
 					 BNXT_RE_CHIP_ID0_CHIP_MET_SFT) & 0xFF;
 	}
+
+	if (resp.comp_mask & BNXT_RE_UCNTX_CMASK_HAVE_MODE)
+		cntx->wqe_mode = resp.mode;
+
 	pthread_spin_init(&cntx->fqlock, PTHREAD_PROCESS_PRIVATE);
 	/* mmap shared page. */
 	cntx->shpg = mmap(NULL, rdev->pg_size, PROT_READ | PROT_WRITE,
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index a63719e8..dc8166f2 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -146,6 +146,7 @@ struct bnxt_re_qp {
 	uint64_t wqe_cnt;
 	uint16_t mtu;
 	uint16_t qpst;
+	uint32_t qpmode;
 	uint8_t qptyp;
 	/* irdord? */
 };
@@ -178,6 +179,7 @@ struct bnxt_re_context {
 	uint32_t max_srq;
 	struct bnxt_re_dpi udpi;
 	void *shpg;
+	uint32_t wqe_mode;
 	pthread_mutex_t shlock;
 	pthread_spinlock_t fqlock;
 };
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index fb2cf5ac..11c01574 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -952,6 +952,7 @@ struct ibv_qp *bnxt_re_create_qp(struct ibv_pd *ibvpd,
 		goto fail;
 	/* alloc queues */
 	qp->cctx = &cntx->cctx;
+	qp->qpmode = cntx->wqe_mode & BNXT_RE_WQE_MODE_VARIABLE;
 	if (bnxt_re_alloc_queues(qp, attr, dev->pg_size))
 		goto failq;
 	/* Fill ibv_cmd */
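
Because BNXT_RE_WQE_MODE_VARIABLE is bit 0, masking the context-wide mode with it (as bnxt_re_create_qp() does above) collapses any unrecognized future mode value to either static or variable. A small sketch of that reduction, with hypothetical future mode values:

#include <stdint.h>
#include <stdio.h>

enum bnxt_re_modes {
	BNXT_RE_WQE_MODE_STATIC   = 0x00,
	BNXT_RE_WQE_MODE_VARIABLE = 0x01,
};

/* qpmode keeps only the static/variable bit, as create_qp does. */
static uint32_t qp_mode_from_ctx(uint32_t wqe_mode)
{
	return wqe_mode & BNXT_RE_WQE_MODE_VARIABLE;
}

int main(void)
{
	uint32_t m;

	for (m = 0; m < 4; m++)	/* hypothetical future mode values */
		printf("ctx mode %u -> qp mode %s\n", m,
		       qp_mode_from_ctx(m) == BNXT_RE_WQE_MODE_VARIABLE ?
		       "variable" : "static");
	return 0;
}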

From patchwork Thu Jun 10 10:49:08 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12312647
From: Devesh Sharma <devesh.sharma@broadcom.com>
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Subject: [PATCH V4 rdma-core 3/5] bnxt_re/lib: add a function to initialize software queue
Date: Thu, 10 Jun 2021 16:19:08 +0530
Message-Id: <20210610104910.1147756-4-devesh.sharma@broadcom.com>
In-Reply-To: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
References: <20210610104910.1147756-1-devesh.sharma@broadcom.com>

Split the shadow software queue initialization into a separate function.
The same function is called for both the RQ and the SQ during create QP.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/main.h  |  3 ++
 providers/bnxt_re/verbs.c | 65 ++++++++++++++++++++++++---------------
 2 files changed, 44 insertions(+), 24 deletions(-)

diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index dc8166f2..94d42958 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -96,7 +96,10 @@ struct bnxt_re_wrid {
 	uint64_t wrid;
 	uint32_t bytes;
 	int next_idx;
+	uint32_t st_slot_idx;
+	uint8_t slots;
 	uint8_t sig;
+
 };

 struct bnxt_re_qpcap {
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 11c01574..e0e6e045 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -847,9 +847,27 @@ static void bnxt_re_free_queues(struct bnxt_re_qp *qp)
 	bnxt_re_free_aligned(qp->jsqq->hwque);
 }

+static int bnxt_re_alloc_init_swque(struct bnxt_re_joint_queue *jqq, int nwr)
+{
+	int indx;
+
+	jqq->swque = calloc(nwr, sizeof(struct bnxt_re_wrid));
+	if (!jqq->swque)
+		return -ENOMEM;
+	jqq->start_idx = 0;
+	jqq->last_idx = nwr - 1;
+	for (indx = 0; indx < nwr; indx++)
+		jqq->swque[indx].next_idx = indx + 1;
+	jqq->swque[jqq->last_idx].next_idx = 0;
+	jqq->last_idx = 0;
+
+	return 0;
+}
+
 static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 				struct ibv_qp_init_attr *attr,
-				uint32_t pg_size) {
+				uint32_t pg_size)
+{
 	struct bnxt_re_psns_ext *psns_ext;
 	struct bnxt_re_wrid *swque;
 	struct bnxt_re_queue *que;
@@ -857,22 +875,23 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	uint32_t psn_depth;
 	uint32_t psn_size;
 	int ret, indx;
+	uint32_t nswr;

 	que = qp->jsqq->hwque;
 	que->stride = bnxt_re_get_sqe_sz();
 	/* 8916 adjustment */
-	que->depth = roundup_pow_of_two(attr->cap.max_send_wr + 1 +
-					BNXT_RE_FULL_FLAG_DELTA);
-	que->diff = que->depth - attr->cap.max_send_wr;
+	nswr = roundup_pow_of_two(attr->cap.max_send_wr + 1 +
+				  BNXT_RE_FULL_FLAG_DELTA);
+	que->diff = nswr - attr->cap.max_send_wr;

 	/* psn_depth extra entries of size que->stride */
 	psn_size = bnxt_re_is_chip_gen_p5(qp->cctx) ?
 					sizeof(struct bnxt_re_psns_ext) :
 					sizeof(struct bnxt_re_psns);
-	psn_depth = (que->depth * psn_size) / que->stride;
-	if ((que->depth * psn_size) % que->stride)
+	psn_depth = (nswr * psn_size) / que->stride;
+	if ((nswr * psn_size) % que->stride)
 		psn_depth++;
-	que->depth += psn_depth;
+	que->depth = nswr + psn_depth;
 	/* PSN-search memory is allocated without checking for
 	 * QP-Type. Kenrel driver do not map this memory if it
 	 * is UD-qp. UD-qp use this memory to maintain WC-opcode.
@@ -884,44 +903,42 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	/* exclude psns depth*/
 	que->depth -= psn_depth;
 	/* start of spsn space sizeof(struct bnxt_re_psns) each. */
-	psns = (que->va + que->stride * que->depth);
+	psns = (que->va + que->stride * nswr);
 	psns_ext = (struct bnxt_re_psns_ext *)psns;
-	swque = calloc(que->depth, sizeof(struct bnxt_re_wrid));
-	if (!swque) {
+
+	ret = bnxt_re_alloc_init_swque(qp->jsqq, nswr);
+	if (ret) {
 		ret = -ENOMEM;
 		goto fail;
 	}

-	for (indx = 0 ; indx < que->depth; indx++, psns++)
+	swque = qp->jsqq->swque;
+	for (indx = 0 ; indx < nswr; indx++, psns++)
 		swque[indx].psns = psns;
 	if (bnxt_re_is_chip_gen_p5(qp->cctx)) {
-		for (indx = 0 ; indx < que->depth; indx++, psns_ext++) {
+		for (indx = 0 ; indx < nswr; indx++, psns_ext++) {
 			swque[indx].psns_ext = psns_ext;
 			swque[indx].psns = (struct bnxt_re_psns *)psns_ext;
 		}
 	}
-	qp->jsqq->swque = swque;
-
-	qp->cap.max_swr = que->depth;
+	qp->cap.max_swr = nswr;
 	pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);

 	if (qp->jrqq) {
 		que = qp->jrqq->hwque;
 		que->stride = bnxt_re_get_rqe_sz();
-		que->depth = roundup_pow_of_two(attr->cap.max_recv_wr + 1);
-		que->diff = que->depth - attr->cap.max_recv_wr;
+		nswr = roundup_pow_of_two(attr->cap.max_recv_wr + 1);
+		que->depth = nswr;
+		que->diff = nswr - attr->cap.max_recv_wr;
 		ret = bnxt_re_alloc_aligned(que, pg_size);
 		if (ret)
 			goto fail;
-		pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
 		/* For RQ only bnxt_re_wri.wrid is used. */
-		qp->jrqq->swque = calloc(que->depth,
-					 sizeof(struct bnxt_re_wrid));
-		if (!qp->jrqq->swque) {
-			ret = -ENOMEM;
+		ret = bnxt_re_alloc_init_swque(qp->jrqq, nswr);
+		if (ret)
 			goto fail;
-		}
-		qp->cap.max_rwr = que->depth;
+		pthread_spin_init(&que->qlock, PTHREAD_PROCESS_PRIVATE);
+		qp->cap.max_rwr = nswr;
 	}

 	return 0;
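
The new helper threads the shadow entries into a circular singly linked list via next_idx, with start_idx and last_idx both left at slot 0. A standalone sketch of the same initialization, under stand-in type names, walking the ring once to show that the chain wraps:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct swq_ent {		/* stand-in for struct bnxt_re_wrid */
	uint64_t wrid;
	int next_idx;
};

struct joint_queue {		/* stand-in for struct bnxt_re_joint_queue */
	struct swq_ent *swque;
	uint32_t start_idx;	/* next free entry (producer side) */
	uint32_t last_idx;	/* oldest posted entry (consumer side) */
};

/* Same shape as bnxt_re_alloc_init_swque() in this patch. */
static int alloc_init_swque(struct joint_queue *jqq, int nwr)
{
	int indx;

	jqq->swque = calloc(nwr, sizeof(*jqq->swque));
	if (!jqq->swque)
		return -ENOMEM;
	jqq->start_idx = 0;
	jqq->last_idx = nwr - 1;
	for (indx = 0; indx < nwr; indx++)
		jqq->swque[indx].next_idx = indx + 1;
	jqq->swque[jqq->last_idx].next_idx = 0;	/* close the ring */
	jqq->last_idx = 0;

	return 0;
}

int main(void)
{
	struct joint_queue jqq;
	int i, idx;

	if (alloc_init_swque(&jqq, 4))
		return 1;
	/* Walk the ring once: prints 0 -> 1 -> 2 -> 3 -> 0. */
	idx = jqq.start_idx;
	for (i = 0; i <= 4; i++) {
		printf("%d%s", idx, i < 4 ? " -> " : "\n");
		idx = jqq.swque[idx].next_idx;
	}
	free(jqq.swque);
	return 0;
}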

From patchwork Thu Jun 10 10:49:09 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12312649
From: Devesh Sharma <devesh.sharma@broadcom.com>
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Subject: [PATCH V4 rdma-core 4/5] bnxt_re/lib: Use separate indices for shadow queue
Date: Thu, 10 Jun 2021 16:19:09 +0530
Message-Id: <20210610104910.1147756-5-devesh.sharma@broadcom.com>
In-Reply-To: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
References: <20210610104910.1147756-1-devesh.sharma@broadcom.com>

The shadow queue is used for wrid and flush-wqe management. The indices
used in this queue are independent of the indices actually used by the
hardware, so detach the shadow queue indices from the hardware queue
indices. This becomes even more useful when the hardware queue indices
are aligned to something other than the wqe boundary.
Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/main.h   |  20 ++++++
 providers/bnxt_re/memory.h |   8 +--
 providers/bnxt_re/verbs.c  | 128 ++++++++++++++++++++++---------------
 3 files changed, 101 insertions(+), 55 deletions(-)

diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index 94d42958..ad660e1a 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -437,4 +437,24 @@ static inline void bnxt_re_change_cq_phase(struct bnxt_re_cq *cq)
 	if (!cq->cqq.head)
 		cq->phase = (~cq->phase & BNXT_RE_BCQE_PH_MASK);
 }
+
+static inline void *bnxt_re_get_swqe(struct bnxt_re_joint_queue *jqq,
+				     uint32_t *wqe_idx)
+{
+	if (wqe_idx)
+		*wqe_idx = jqq->start_idx;
+	return &jqq->swque[jqq->start_idx];
+}
+
+static inline void bnxt_re_jqq_mod_start(struct bnxt_re_joint_queue *jqq,
+					 uint32_t idx)
+{
+	jqq->start_idx = jqq->swque[idx].next_idx;
+}
+
+static inline void bnxt_re_jqq_mod_last(struct bnxt_re_joint_queue *jqq,
+					uint32_t idx)
+{
+	jqq->last_idx = jqq->swque[idx].next_idx;
+}
 #endif
diff --git a/providers/bnxt_re/memory.h b/providers/bnxt_re/memory.h
index 75564c43..5bcdef9a 100644
--- a/providers/bnxt_re/memory.h
+++ b/providers/bnxt_re/memory.h
@@ -97,14 +97,14 @@ static inline uint32_t bnxt_re_incr(uint32_t val, uint32_t max)
 	return (++val & (max - 1));
 }

-static inline void bnxt_re_incr_tail(struct bnxt_re_queue *que)
+static inline void bnxt_re_incr_tail(struct bnxt_re_queue *que, uint8_t cnt)
 {
-	que->tail = bnxt_re_incr(que->tail, que->depth);
+	que->tail = (que->tail + cnt) & (que->depth - 1);
 }

-static inline void bnxt_re_incr_head(struct bnxt_re_queue *que)
+static inline void bnxt_re_incr_head(struct bnxt_re_queue *que, uint8_t cnt)
 {
-	que->head = bnxt_re_incr(que->head, que->depth);
+	que->head = (que->head + cnt) & (que->depth - 1);
 }

 #endif
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index e0e6e045..268f443c 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -247,10 +247,12 @@ static uint8_t bnxt_re_poll_err_scqe(struct bnxt_re_qp *qp,
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
 	struct bnxt_re_cq *scq;
-	uint32_t head = sq->head;
 	uint8_t status;
+	uint32_t head;

 	scq = to_bnxt_re_cq(qp->ibvqp.send_cq);
+
+	head = qp->jsqq->last_idx;
 	cntx = to_bnxt_re_context(scq->ibvcq.context);
 	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;
@@ -267,7 +269,8 @@
 		      BNXT_RE_PSNS_OPCD_MASK;
 	ibvwc->byte_len = 0;

-	bnxt_re_incr_head(sq);
+	bnxt_re_incr_head(sq, swrid->slots);
+	bnxt_re_jqq_mod_last(qp->jsqq, head);
 	if (qp->qpst != IBV_QPS_ERR)
 		qp->qpst = IBV_QPS_ERR;
@@ -287,13 +290,14 @@ static uint8_t bnxt_re_poll_success_scqe(struct bnxt_re_qp *qp,
 	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *swrid;
 	struct bnxt_re_psns *spsn;
-	uint32_t head = sq->head;
 	uint8_t pcqe = false;
 	uint32_t cindx;
+	uint32_t head;

+	head = qp->jsqq->last_idx;
 	swrid = &qp->jsqq->swque[head];
 	spsn = swrid->psns;
-	cindx = le32toh(scqe->con_indx);
+	cindx = le32toh(scqe->con_indx) & (qp->cap.max_swr - 1);

 	if (!(swrid->sig & IBV_SEND_SIGNALED)) {
 		*cnt = 0;
@@ -313,8 +317,10 @@ static uint8_t bnxt_re_poll_success_scqe(struct bnxt_re_qp *qp,
 		*cnt = 1;
 	}

-	bnxt_re_incr_head(sq);
-	if (sq->head != cindx)
+	bnxt_re_incr_head(sq, swrid->slots);
+	bnxt_re_jqq_mod_last(qp->jsqq, head);
+
+	if (qp->jsqq->last_idx != cindx)
 		pcqe = true;

 	return pcqe;
@@ -352,23 +358,29 @@ static void bnxt_re_release_srqe(struct bnxt_re_srq *srq, int tag)
 static int bnxt_re_poll_err_rcqe(struct bnxt_re_qp *qp, struct ibv_wc *ibvwc,
 				 struct bnxt_re_bcqe *hdr, void *cqe)
 {
+	struct bnxt_re_context *cntx;
+	struct bnxt_re_wrid *swque;
 	struct bnxt_re_queue *rq;
+	uint8_t status, cnt = 0;
 	struct bnxt_re_cq *rcq;
-	struct bnxt_re_context *cntx;
-	uint8_t status;
+	uint32_t head = 0;

 	rcq = to_bnxt_re_cq(qp->ibvqp.recv_cq);
 	cntx = to_bnxt_re_context(rcq->ibvcq.context);

 	if (!qp->srq) {
 		rq = qp->jrqq->hwque;
-		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
+		head = qp->jrqq->last_idx;
+		swque = &qp->jrqq->swque[head];
+		ibvwc->wr_id = swque->wrid;
+		cnt = swque->slots;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;

 		srq = qp->srq;
 		rq = srq->srqq;
+		cnt = 1;
 		tag = le32toh(hdr->qphi_rwrid) & BNXT_RE_BCQE_RWRID_MASK;
 		ibvwc->wr_id = srq->srwrid[tag].wrid;
 		bnxt_re_release_srqe(srq, tag);
@@ -387,7 +399,10 @@ static int bnxt_re_poll_err_rcqe(struct bnxt_re_qp *qp, struct ibv_wc *ibvwc,
 	ibvwc->wc_flags = 0;
 	if (qp->qptyp == IBV_QPT_UD)
 		ibvwc->src_qp = 0;
-	bnxt_re_incr_head(rq);
+
+	if (!qp->srq)
+		bnxt_re_jqq_mod_last(qp->jrqq, head);
+	bnxt_re_incr_head(rq, cnt);

 	if (!qp->srq) {
 		pthread_spin_lock(&cntx->fqlock);
@@ -417,14 +432,20 @@ static void bnxt_re_poll_success_rcqe(struct bnxt_re_qp *qp,
 				      struct ibv_wc *ibvwc,
 				      struct bnxt_re_bcqe *hdr, void *cqe)
 {
-	struct bnxt_re_queue *rq;
-	struct bnxt_re_rc_cqe *rcqe;
 	uint8_t flags, is_imm, is_rdma;
+	struct bnxt_re_rc_cqe *rcqe;
+	struct bnxt_re_wrid *swque;
+	struct bnxt_re_queue *rq;
+	uint32_t head = 0;
+	uint8_t cnt = 0;

 	rcqe = cqe;
 	if (!qp->srq) {
 		rq = qp->jrqq->hwque;
-		ibvwc->wr_id = qp->jrqq->swque[rq->head].wrid;
+		head = qp->jrqq->last_idx;
+		swque = &qp->jrqq->swque[head];
+		ibvwc->wr_id = swque->wrid;
+		cnt = swque->slots;
 	} else {
 		struct bnxt_re_srq *srq;
 		int tag;
@@ -433,6 +454,7 @@ static void bnxt_re_poll_success_rcqe(struct bnxt_re_qp *qp,
 		rq = srq->srqq;
 		tag = le32toh(hdr->qphi_rwrid) & BNXT_RE_BCQE_RWRID_MASK;
 		ibvwc->wr_id = srq->srwrid[tag].wrid;
+		cnt = 1;
 		bnxt_re_release_srqe(srq, tag);
 	}
@@ -463,7 +485,9 @@ static void bnxt_re_poll_success_rcqe(struct bnxt_re_qp *qp,
 	if (qp->qptyp == IBV_QPT_UD)
 		bnxt_re_fill_ud_cqe(ibvwc, hdr, cqe);

-	bnxt_re_incr_head(rq);
+	if (!qp->srq)
+		bnxt_re_jqq_mod_last(qp->jrqq, head);
+	bnxt_re_incr_head(rq, cnt);
 }

 static uint8_t bnxt_re_poll_rcqe(struct bnxt_re_qp *qp, struct ibv_wc *ibvwc,
@@ -575,7 +599,7 @@ static int bnxt_re_poll_one(struct bnxt_re_cq *cq, int nwc, struct ibv_wc *wc)
 			*qp_handle = 0x0ULL; /* mark cqe as read */
 			qp_handle = NULL;
 		}
-		bnxt_re_incr_head(&cq->cqq);
+		bnxt_re_incr_head(&cq->cqq, 1);
 		bnxt_re_change_cq_phase(cq);
skipp_real:
 		if (cnt) {
@@ -592,21 +616,21 @@ skipp_real:
 	return dqed;
 }

-static int bnxt_re_poll_flush_wcs(struct bnxt_re_queue *que,
-				  struct bnxt_re_wrid *wridp,
+static int bnxt_re_poll_flush_wcs(struct bnxt_re_joint_queue *jqq,
 				  struct ibv_wc *ibvwc, uint32_t qpid, int nwc)
 {
+	uint8_t opcode = IBV_WC_RECV;
+	struct bnxt_re_queue *que;
 	struct bnxt_re_wrid *wrid;
 	struct bnxt_re_psns *psns;
-	uint32_t cnt = 0, head;
-	uint8_t opcode = IBV_WC_RECV;
+	uint32_t cnt = 0;

+	que = jqq->hwque;
 	while (nwc) {
 		if (bnxt_re_is_que_empty(que))
 			break;
-		head = que->head;
-		wrid = &wridp[head];
+		wrid = &jqq->swque[jqq->last_idx];
 		if (wrid->psns) {
 			psns = wrid->psns;
 			opcode = (le32toh(psns->opc_spsn) >>
@@ -621,7 +645,8 @@
 		ibvwc->byte_len = 0;
 		ibvwc->wc_flags = 0;

-		bnxt_re_incr_head(que);
+		bnxt_re_jqq_mod_last(jqq, jqq->last_idx);
+		bnxt_re_incr_head(que, wrid->slots);
 		nwc--;
 		cnt++;
 		ibvwc++;
@@ -636,8 +661,7 @@
 static int bnxt_re_poll_flush_wqes(struct bnxt_re_cq *cq, int32_t nwc)
 {
 	struct bnxt_re_fque_node *cur, *tmp;
-	struct bnxt_re_wrid *wridp;
-	struct bnxt_re_queue *que;
+	struct bnxt_re_joint_queue *jqq;
 	struct bnxt_re_qp *qp;
 	bool sq_list = false;
 	uint32_t polled = 0;
@@ -648,18 +672,15 @@ static int bnxt_re_poll_flush_wqes(struct bnxt_re_cq *cq, int32_t nwc)
 		if (sq_list) {
 			qp = container_of(cur, struct bnxt_re_qp, snode);
-			que = qp->jsqq->hwque;
-			wridp = qp->jsqq->swque;
+			jqq = qp->jsqq;
 		} else {
 			qp = container_of(cur, struct bnxt_re_qp, rnode);
-			que = qp->jrqq->hwque;
-			wridp = qp->jrqq->swque;
+			jqq = qp->jrqq;
 		}
-		if (bnxt_re_is_que_empty(que))
+		if (bnxt_re_is_que_empty(jqq->hwque))
 			continue;
-		polled += bnxt_re_poll_flush_wcs(que, wridp,
-						 ibvwc + polled,
+		polled += bnxt_re_poll_flush_wcs(jqq, ibvwc + polled,
 						 qp->qpid, nwc - polled);
 		if (!(nwc - polled))
@@ -1165,14 +1186,17 @@ static void bnxt_re_fill_psns(struct bnxt_re_qp *qp, struct bnxt_re_wrid *wrid,
 		psns_ext->st_slot_idx = 0;
 }

-static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, struct ibv_send_wr *wr,
-			      uint32_t len, uint8_t sqsig)
+static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, uint64_t wr_id,
+			      uint32_t len, uint8_t sqsig, uint32_t st_idx,
+			      uint8_t slots)
 {
-	wrid->wrid = wr->wr_id;
+	wrid->wrid = wr_id;
 	wrid->bytes = len;
 	wrid->sig = 0;
-	if (wr->send_flags & IBV_SEND_SIGNALED || sqsig)
+	if (sqsig)
 		wrid->sig = IBV_SEND_SIGNALED;
+	wrid->st_slot_idx = st_idx;
+	wrid->slots = slots;
 }

 static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
@@ -1291,6 +1315,8 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 	struct bnxt_re_bsqe *hdr;
 	int ret = 0, bytes = 0;
 	bool ring_db = false;
+	uint32_t swq_idx;
+	uint32_t sig;
 	void *sqe;

 	pthread_spin_lock(&sq->qlock);
@@ -1317,8 +1343,6 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		}

 		sqe = (void *)(sq->va + (sq->tail * sq->stride));
-		wrid = &qp->jsqq->swque[sq->tail];
-
 		memset(sqe, 0, bnxt_re_get_sqe_sz());
 		hdr = sqe;
 		is_inline = bnxt_re_set_hdr_flags(hdr, wr->send_flags,
@@ -1366,9 +1390,12 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			break;
 		}

-		bnxt_re_fill_wrid(wrid, wr, bytes, qp->cap.sqsig);
+		wrid = bnxt_re_get_swqe(qp->jsqq, &swq_idx);
+		sig = ((wr->send_flags & IBV_SEND_SIGNALED) || qp->cap.sqsig);
+		bnxt_re_fill_wrid(wrid, wr->wr_id, bytes, sig, sq->tail, 1);
 		bnxt_re_fill_psns(qp, wrid, wr->opcode, bytes);
-		bnxt_re_incr_tail(sq);
+		bnxt_re_jqq_mod_start(qp->jsqq, swq_idx);
+		bnxt_re_incr_tail(sq, 1);
 		qp->wqe_cnt++;
 		wr = wr->next;
 		ring_db = true;
@@ -1395,16 +1422,14 @@ bad_wr:
 }

 static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
-			     void *rqe)
+			     void *rqe, uint32_t idx)
 {
 	struct bnxt_re_brqe *hdr = rqe;
-	struct bnxt_re_wrid *wrid;
 	struct bnxt_re_sge *sge;
 	int wqe_sz, len;
 	uint32_t hdrval;

 	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
-	wrid = &qp->jrqq->swque[qp->jrqq->hwque->tail];

 	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, false);
 	wqe_sz = wr->num_sge + (bnxt_re_get_rqe_hdr_sz() >> 4); /* 16B align */
@@ -1416,12 +1441,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	hdr->wrid = htole32(qp->jrqq->hwque->tail);
-
-	/* Fill wrid */
-	wrid->wrid = wr->wr_id;
-	wrid->bytes = len; /* N.A. for RQE */
-	wrid->sig = 0; /* N.A. for RQE */
+	hdr->wrid = htole32(idx);

 	return len;
 }
@@ -1431,6 +1451,8 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
 	struct bnxt_re_queue *rq = qp->jrqq->hwque;
+	struct bnxt_re_wrid *swque;
+	uint32_t swq_idx;
 	void *rqe;
 	int ret;

@@ -1452,14 +1474,18 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 		rqe = (void *)(rq->va + (rq->tail * rq->stride));
 		memset(rqe, 0, bnxt_re_get_rqe_sz());

-		ret = bnxt_re_build_rqe(qp, wr, rqe);
+		swque = bnxt_re_get_swqe(qp->jrqq, &swq_idx);
+		ret = bnxt_re_build_rqe(qp, wr, rqe, swq_idx);
 		if (ret < 0) {
 			pthread_spin_unlock(&rq->qlock);
 			*bad = wr;
 			return ENOMEM;
 		}

-		bnxt_re_incr_tail(rq);
+		swque = bnxt_re_get_swqe(qp->jrqq, NULL);
+		bnxt_re_fill_wrid(swque, wr->wr_id, ret, 0, rq->tail, 1);
+		bnxt_re_jqq_mod_start(qp->jrqq, swq_idx);
+		bnxt_re_incr_tail(rq, 1);
 		wr = wr->next;
 		bnxt_re_ring_rq_db(qp);
 	}
@@ -1667,7 +1693,7 @@ int bnxt_re_post_srq_recv(struct ibv_srq *ibvsrq, struct ibv_recv_wr *wr,
 		}

 		srq->start_idx = srq->srwrid[srq->start_idx].next_idx;
-		bnxt_re_incr_tail(rq);
+		bnxt_re_incr_tail(rq, 1);
 		wr = wr->next;
 		bnxt_re_ring_srq_db(srq);
 		count++;
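
With these helpers, the posting path takes the shadow entry at start_idx and advances it along next_idx, while the polling path retires the entry at last_idx; the hardware queue's head and tail move by their own slot counts. A reduced sketch of that producer/consumer split (no hardware involved; the wr_id values are made up):

#include <stdint.h>
#include <stdio.h>

#define NWR 4

struct swq_ent {
	uint64_t wrid;
	int next_idx;
};

static struct swq_ent swque[NWR];
static uint32_t start_idx;	/* where the next post goes */
static uint32_t last_idx;	/* where the next completion is taken */

static void init_ring(void)
{
	int i;

	for (i = 0; i < NWR; i++)
		swque[i].next_idx = (i + 1) % NWR;
	start_idx = 0;
	last_idx = 0;
}

/* post: mirrors bnxt_re_get_swqe() + bnxt_re_jqq_mod_start() */
static void post(uint64_t wr_id)
{
	uint32_t idx = start_idx;

	swque[idx].wrid = wr_id;
	start_idx = swque[idx].next_idx;
	printf("posted wr_id %llu at swq[%u]\n",
	       (unsigned long long)wr_id, idx);
}

/* complete: mirrors the poll path's use of bnxt_re_jqq_mod_last() */
static void complete(void)
{
	uint32_t idx = last_idx;

	printf("completed wr_id %llu from swq[%u]\n",
	       (unsigned long long)swque[idx].wrid, idx);
	last_idx = swque[idx].next_idx;
}

int main(void)
{
	init_ring();
	post(100);	/* hypothetical wr_id values */
	post(101);
	complete();	/* retires 100 */
	post(102);
	complete();	/* retires 101 */
	return 0;
}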

From patchwork Thu Jun 10 10:49:10 2021
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 12312651
From: Devesh Sharma <devesh.sharma@broadcom.com>
To: linux-rdma@vger.kernel.org
Cc: Devesh Sharma <devesh.sharma@broadcom.com>
Subject: [PATCH V4 rdma-core 5/5] bnxt_re/lib: Move hardware queue to 16B aligned indices
Date: Thu, 10 Jun 2021 16:19:10 +0530
Message-Id: <20210610104910.1147756-6-devesh.sharma@broadcom.com>
In-Reply-To: <20210610104910.1147756-1-devesh.sharma@broadcom.com>
References: <20210610104910.1147756-1-devesh.sharma@broadcom.com>

Move the SQ and RQ indices from wqe-boundary to 16B-boundary alignment
and change the SQ wqe-posting algorithm accordingly. With the new
alignment, posting a wqe pulls 16B slots from the hardware queue and
initializes the wqe into the hardware buffer 16B at a time. The number
of 16B slots to pull is calculated from the maximum wqe size supported
by the hardware; currently a 128B wqe is supported, which requires 8
slots.

Signed-off-by: Devesh Sharma <devesh.sharma@broadcom.com>
---
 providers/bnxt_re/db.c     |  10 +-
 providers/bnxt_re/main.h   |   1 +
 providers/bnxt_re/memory.h |  33 +++-
 providers/bnxt_re/verbs.c  | 371 ++++++++++++++++++++++++++-----------
 4 files changed, 294 insertions(+), 121 deletions(-)

diff --git a/providers/bnxt_re/db.c b/providers/bnxt_re/db.c
index 3c797573..e99b7b62 100644
--- a/providers/bnxt_re/db.c
+++ b/providers/bnxt_re/db.c
@@ -62,18 +62,20 @@ static void bnxt_re_init_db_hdr(struct bnxt_re_db_hdr *hdr, uint32_t indx,
 void bnxt_re_ring_rq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;
+	uint32_t tail;

-	bnxt_re_init_db_hdr(&hdr, qp->jrqq->hwque->tail,
-			    qp->qpid, BNXT_RE_QUE_TYPE_RQ);
+	tail = qp->jrqq->hwque->tail / qp->jrqq->hwque->max_slots;
+	bnxt_re_init_db_hdr(&hdr, tail, qp->qpid, BNXT_RE_QUE_TYPE_RQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }

 void bnxt_re_ring_sq_db(struct bnxt_re_qp *qp)
 {
 	struct bnxt_re_db_hdr hdr;
+	uint32_t tail;

-	bnxt_re_init_db_hdr(&hdr, qp->jsqq->hwque->tail,
-			    qp->qpid, BNXT_RE_QUE_TYPE_SQ);
+	tail = qp->jsqq->hwque->tail / qp->jsqq->hwque->max_slots;
+	bnxt_re_init_db_hdr(&hdr, tail, qp->qpid, BNXT_RE_QUE_TYPE_SQ);
 	bnxt_re_ring_db(qp->udpi, &hdr);
 }
diff --git a/providers/bnxt_re/main.h b/providers/bnxt_re/main.h
index ad660e1a..ab7ac521 100644
--- a/providers/bnxt_re/main.h
+++ b/providers/bnxt_re/main.h
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
diff --git a/providers/bnxt_re/memory.h b/providers/bnxt_re/memory.h
index 5bcdef9a..ebbc3c51 100644
--- a/providers/bnxt_re/memory.h
+++ b/providers/bnxt_re/memory.h
@@ -57,6 +57,8 @@ struct bnxt_re_queue {
 	 * and the consumer indices in the queue
 	 */
 	uint32_t diff;
+	uint32_t esize;
+	uint32_t max_slots;
 	pthread_spinlock_t qlock;
 };

@@ -82,29 +84,44 @@ int bnxt_re_alloc_aligned(struct bnxt_re_queue *que, uint32_t pg_size);
 void bnxt_re_free_aligned(struct bnxt_re_queue *que);

 /* Basic queue operation */
-static inline uint32_t bnxt_re_is_que_full(struct bnxt_re_queue *que)
+static inline void *bnxt_re_get_hwqe(struct bnxt_re_queue *que, uint32_t idx)
 {
-	return (((que->diff + que->tail) & (que->depth - 1)) == que->head);
+	idx += que->tail;
+	if (idx >= que->depth)
+		idx -= que->depth;
+	return (void *)(que->va + (idx << 4));
 }

-static inline uint32_t bnxt_re_is_que_empty(struct bnxt_re_queue *que)
+static inline uint32_t bnxt_re_is_que_full(struct bnxt_re_queue *que,
+					   uint32_t slots)
 {
-	return que->tail == que->head;
+	int32_t avail, head, tail;
+
+	head = que->head;
+	tail = que->tail;
+	avail = head - tail;
+	if (head <= tail)
+		avail += que->depth;
+	return avail <= (slots + que->diff);
 }

-static inline uint32_t bnxt_re_incr(uint32_t val, uint32_t max)
+static inline uint32_t bnxt_re_is_que_empty(struct bnxt_re_queue *que)
 {
-	return (++val & (max - 1));
+	return que->tail == que->head;
 }

 static inline void bnxt_re_incr_tail(struct bnxt_re_queue *que, uint8_t cnt)
 {
-	que->tail = (que->tail + cnt) & (que->depth - 1);
+	que->tail += cnt;
+	if (que->tail >= que->depth)
+		que->tail %= que->depth;
 }

 static inline void bnxt_re_incr_head(struct bnxt_re_queue *que, uint8_t cnt)
 {
-	que->head = (que->head + cnt) & (que->depth - 1);
+	que->head += cnt;
+	if (que->head >= que->depth)
+		que->head %= que->depth;
 }

 #endif
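
Note how the rewritten bnxt_re_is_que_full() no longer assumes a power-of-two depth: it counts free slots directly and keeps que->diff slots in reserve, so a full ring's tail can never catch up with head (which would be indistinguishable from empty). A standalone sketch of the same arithmetic with a toy 16-slot ring:

#include <stdint.h>
#include <stdio.h>

struct ring {
	uint32_t head;	/* consumer index, in 16B slots */
	uint32_t tail;	/* producer index, in 16B slots */
	uint32_t depth;	/* total slots; no longer forced to 2^n */
	uint32_t diff;	/* reserved slots, as in struct bnxt_re_queue */
};

/* Same logic as the patched bnxt_re_is_que_full(). */
static uint32_t is_full(const struct ring *q, uint32_t slots)
{
	int32_t avail, head, tail;

	head = q->head;
	tail = q->tail;
	avail = head - tail;
	if (head <= tail)
		avail += q->depth;
	return avail <= (int32_t)(slots + q->diff);
}

/* Same logic as the patched bnxt_re_incr_tail(). */
static void incr_tail(struct ring *q, uint8_t cnt)
{
	q->tail += cnt;
	if (q->tail >= q->depth)
		q->tail %= q->depth;
}

int main(void)
{
	struct ring q = { .head = 0, .tail = 0, .depth = 16, .diff = 0 };
	int posted = 0;

	/* Post 8-slot wqes until the ring reports full; the last
	 * group stays in reserve so tail never lands on head.
	 */
	while (!is_full(&q, 8)) {
		incr_tail(&q, 8);
		posted++;
	}
	printf("posted %d x 8-slot wqes into %u slots\n", posted, q.depth);
	return 0;
}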
diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 268f443c..4daa8944 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -885,7 +885,80 @@ static int bnxt_re_alloc_init_swque(struct bnxt_re_joint_queue *jqq, int nwr)
 	return 0;
 }

-static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
+static int bnxt_re_calc_wqe_sz(int nsge)
+{
+	/* This is used for both sq and rq. In case hdr size differs
+	 * in future move to individual functions.
+	 */
+	return sizeof(struct bnxt_re_sge) * nsge + bnxt_re_get_sqe_hdr_sz();
+}
+
+static int bnxt_re_get_rq_slots(struct bnxt_re_dev *rdev,
+				struct bnxt_re_qp *qp, uint32_t nrwr,
+				uint32_t nsge)
+{
+	uint32_t max_wqesz;
+	uint32_t wqe_size;
+	uint32_t stride;
+	uint32_t slots;
+
+	stride = sizeof(struct bnxt_re_sge);
+	max_wqesz = bnxt_re_calc_wqe_sz(rdev->devattr.max_sge);
+
+	wqe_size = bnxt_re_calc_wqe_sz(nsge);
+	if (wqe_size > max_wqesz)
+		return -EINVAL;
+
+	if (qp->qpmode == BNXT_RE_WQE_MODE_STATIC)
+		wqe_size = bnxt_re_calc_wqe_sz(6);
+
+	qp->jrqq->hwque->esize = wqe_size;
+	qp->jrqq->hwque->max_slots = wqe_size / stride;
+
+	slots = (nrwr * wqe_size) / stride;
+	return slots;
+}
+
+static int bnxt_re_get_sq_slots(struct bnxt_re_dev *rdev,
+				struct bnxt_re_qp *qp, uint32_t nswr,
+				uint32_t nsge, uint32_t *ils)
+{
+	uint32_t max_wqesz;
+	uint32_t wqe_size;
+	uint32_t cal_ils;
+	uint32_t stride;
+	uint32_t ilsize;
+	uint32_t hdr_sz;
+	uint32_t slots;
+
+	hdr_sz = bnxt_re_get_sqe_hdr_sz();
+	stride = sizeof(struct bnxt_re_sge);
+	max_wqesz = bnxt_re_calc_wqe_sz(rdev->devattr.max_sge);
+	ilsize = get_aligned(*ils, hdr_sz);
+
+	wqe_size = bnxt_re_calc_wqe_sz(nsge);
+	if (ilsize) {
+		cal_ils = hdr_sz + ilsize;
+		wqe_size = MAX(cal_ils, wqe_size);
+		wqe_size = get_aligned(wqe_size, hdr_sz);
+	}
+	if (wqe_size > max_wqesz)
+		return -EINVAL;
+
+	if (qp->qpmode == BNXT_RE_WQE_MODE_STATIC)
+		wqe_size = bnxt_re_calc_wqe_sz(6);
+
+	if (*ils)
+		*ils = wqe_size - hdr_sz;
+	qp->jsqq->hwque->esize = wqe_size;
+	qp->jsqq->hwque->max_slots = (qp->qpmode == BNXT_RE_WQE_MODE_STATIC) ?
+		wqe_size / stride : 1;
+	slots = (nswr * wqe_size) / stride;
+	return slots;
+}
+
+static int bnxt_re_alloc_queues(struct bnxt_re_dev *dev,
+				struct bnxt_re_qp *qp,
 				struct ibv_qp_init_attr *attr,
 				uint32_t pg_size)
 {
@@ -893,17 +966,27 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	struct bnxt_re_wrid *swque;
 	struct bnxt_re_queue *que;
 	struct bnxt_re_psns *psns;
+	uint32_t nswr, diff;
 	uint32_t psn_depth;
 	uint32_t psn_size;
+	uint32_t nsge;
 	int ret, indx;
-	uint32_t nswr;
+	int nslots;

 	que = qp->jsqq->hwque;
-	que->stride = bnxt_re_get_sqe_sz();
-	/* 8916 adjustment */
-	nswr = roundup_pow_of_two(attr->cap.max_send_wr + 1 +
-				  BNXT_RE_FULL_FLAG_DELTA);
-	que->diff = nswr - attr->cap.max_send_wr;
+	diff = (qp->qpmode == BNXT_RE_WQE_MODE_VARIABLE) ?
+		0 : BNXT_RE_FULL_FLAG_DELTA;
+	nswr = roundup_pow_of_two(attr->cap.max_send_wr + 1 + diff);
+	nsge = attr->cap.max_send_sge;
+	if (nsge % 2)
+		nsge++;
+	nslots = bnxt_re_get_sq_slots(dev, qp, nswr, nsge,
+				      &attr->cap.max_inline_data);
+	if (nslots < 0)
+		return nslots;
+	que->stride = sizeof(struct bnxt_re_sge);
+	que->depth = nslots;
+	que->diff = (diff * que->esize) / que->stride;

 	/* psn_depth extra entries of size que->stride */
 	psn_size = bnxt_re_is_chip_gen_p5(qp->cctx) ?
@@ -912,7 +995,7 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	psn_depth = (nswr * psn_size) / que->stride;
 	if ((nswr * psn_size) % que->stride)
 		psn_depth++;
-	que->depth = nswr + psn_depth;
+	que->depth += psn_depth;
 	/* PSN-search memory is allocated without checking for
 	 * QP-Type. Kenrel driver do not map this memory if it
 	 * is UD-qp. UD-qp use this memory to maintain WC-opcode.
@@ -924,7 +1007,7 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,
 	/* exclude psns depth*/
 	que->depth -= psn_depth;
 	/* start of spsn space sizeof(struct bnxt_re_psns) each. */
-	psns = (que->va + que->stride * nswr);
+	psns = (que->va + que->stride * que->depth);
 	psns_ext = (struct bnxt_re_psns_ext *)psns;

 	ret = bnxt_re_alloc_init_swque(qp->jsqq, nswr);
@@ -947,10 +1030,19 @@ static int bnxt_re_alloc_queues(struct bnxt_re_qp *qp,

 	if (qp->jrqq) {
 		que = qp->jrqq->hwque;
-		que->stride = bnxt_re_get_rqe_sz();
 		nswr = roundup_pow_of_two(attr->cap.max_recv_wr + 1);
-		que->depth = nswr;
-		que->diff = nswr - attr->cap.max_recv_wr;
+		nsge = attr->cap.max_recv_sge;
+		if (nsge % 2)
+			nsge++;
+		nslots = bnxt_re_get_rq_slots(dev, qp, nswr, nsge);
+		if (nslots < 0) {
+			ret = nslots;
+			goto fail;
+		}
+		que->stride = sizeof(struct bnxt_re_sge);
+		que->depth = nslots;
+		que->diff = 0;
+
 		ret = bnxt_re_alloc_aligned(que, pg_size);
 		if (ret)
 			goto fail;
@@ -971,10 +1063,10 @@ fail:
 struct ibv_qp *bnxt_re_create_qp(struct ibv_pd *ibvpd,
 				 struct ibv_qp_init_attr *attr)
 {
-	struct bnxt_re_qp *qp;
-	struct ubnxt_re_qp req;
 	struct ubnxt_re_qp_resp resp;
 	struct bnxt_re_qpcap *cap;
+	struct ubnxt_re_qp req;
+	struct bnxt_re_qp *qp;
 	struct bnxt_re_context *cntx = to_bnxt_re_context(ibvpd->context);
 	struct bnxt_re_dev *dev = to_bnxt_re_dev(cntx->ibvctx.context.device);
@@ -991,7 +1083,7 @@ struct ibv_qp *bnxt_re_create_qp(struct ibv_pd *ibvpd,
 	/* alloc queues */
 	qp->cctx = &cntx->cctx;
 	qp->qpmode = cntx->wqe_mode & BNXT_RE_WQE_MODE_VARIABLE;
-	if (bnxt_re_alloc_queues(qp, attr, dev->pg_size))
+	if (bnxt_re_alloc_queues(dev, qp, attr, dev->pg_size))
 		goto failq;
 	/* Fill ibv_cmd */
 	cap = &qp->cap;
@@ -1095,8 +1187,44 @@ int bnxt_re_destroy_qp(struct ibv_qp *ibvqp)
 	return 0;
 }
+static int bnxt_re_calc_inline_len(struct ibv_send_wr *swr, uint32_t max_ils)
+{
+	int illen, indx;
+
+	illen = 0;
+	for (indx = 0; indx < swr->num_sge; indx++)
+		illen += swr->sg_list[indx].length;
+	if (illen > max_ils)
+		illen = max_ils;
+	return illen;
+}
+
+static int bnxt_re_calc_posted_wqe_slots(struct bnxt_re_queue *que, void *wr,
+					 uint32_t max_ils, bool is_rq)
+{
+	struct ibv_send_wr *swr;
+	struct ibv_recv_wr *rwr;
+	uint32_t wqe_byte;
+	uint32_t nsge;
+	int ilsize;
+
+	swr = wr;
+	rwr = wr;
+
+	nsge = is_rq ? rwr->num_sge : swr->num_sge;
+	wqe_byte = bnxt_re_calc_wqe_sz(nsge);
+	if (!is_rq && (swr->send_flags & IBV_SEND_INLINE)) {
+		ilsize = bnxt_re_calc_inline_len(swr, max_ils);
+		wqe_byte = get_aligned(ilsize, sizeof(struct bnxt_re_sge));
+		wqe_byte += sizeof(struct bnxt_re_bsqe);
+	}
+
+	return (wqe_byte / que->stride);
+}
+
 static inline uint8_t bnxt_re_set_hdr_flags(struct bnxt_re_bsqe *hdr,
-					    uint32_t send_flags, uint8_t sqsig)
+					    uint32_t send_flags, uint8_t sqsig,
+					    uint32_t slots)
 {
 	uint8_t is_inline = false;
 	uint32_t hdrval = 0;
@@ -1117,36 +1245,38 @@ static inline uint8_t bnxt_re_set_hdr_flags(struct bnxt_re_bsqe *hdr,
 			  << BNXT_RE_HDR_FLAGS_SHIFT);
 		is_inline = true;
 	}
+	hdrval |= (slots & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT;
 	hdr->rsv_ws_fl_wt = htole32(hdrval);

 	return is_inline;
 }

-static int bnxt_re_build_sge(struct bnxt_re_sge *sge, struct ibv_sge *sg_list,
-			     uint32_t num_sge, uint8_t is_inline) {
+static int bnxt_re_build_sge(struct bnxt_re_queue *que, struct ibv_sge *sg_list,
+			     uint32_t num_sge, uint8_t is_inline,
+			     uint32_t *idx)
+{
+	struct bnxt_re_sge *sge;
 	int indx, length = 0;
 	void *dst;

-	if (!num_sge) {
-		memset(sge, 0, sizeof(*sge));
+	if (!num_sge)
 		return 0;
-	}

 	if (is_inline) {
-		dst = sge;
 		for (indx = 0; indx < num_sge; indx++) {
+			dst = bnxt_re_get_hwqe(que, *idx);
+			(*idx)++;
 			length += sg_list[indx].length;
-			if (length > BNXT_RE_MAX_INLINE_SIZE)
-				return -ENOMEM;
 			memcpy(dst, (void *)(uintptr_t)sg_list[indx].addr,
 			       sg_list[indx].length);
-			dst = dst + sg_list[indx].length;
 		}
 	} else {
 		for (indx = 0; indx < num_sge; indx++) {
-			sge[indx].pa = htole64(sg_list[indx].addr);
-			sge[indx].lkey = htole32(sg_list[indx].lkey);
-			sge[indx].length = htole32(sg_list[indx].length);
+			sge = bnxt_re_get_hwqe(que, *idx);
+			(*idx)++;
+			sge->pa = htole64(sg_list[indx].addr);
+			sge->lkey = htole32(sg_list[indx].lkey);
+			sge->length = htole32(sg_list[indx].length);
 			length += sg_list[indx].length;
 		}
 	}
@@ -1164,6 +1294,7 @@ static void bnxt_re_fill_psns(struct bnxt_re_qp *qp, struct bnxt_re_wrid *wrid,

 	psns = wrid->psns;
 	psns_ext = wrid->psns_ext;
+	len = wrid->bytes;

 	if (qp->qptyp == IBV_QPT_RC) {
 		opc_spsn = qp->sq_psn & BNXT_RE_PSNS_SPSN_MASK;
@@ -1183,7 +1314,7 @@ static void bnxt_re_fill_psns(struct bnxt_re_qp *qp, struct bnxt_re_wrid *wrid,
 	psns->opc_spsn = htole32(opc_spsn);
 	psns->flg_npsn = htole32(flg_npsn);
 	if (bnxt_re_is_chip_gen_p5(qp->cctx))
-		psns_ext->st_slot_idx = 0;
+		psns_ext->st_slot_idx = wrid->st_slot_idx;
 }

 static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, uint64_t wr_id,
@@ -1199,16 +1330,19 @@ static void bnxt_re_fill_wrid(struct bnxt_re_wrid *wrid, uint64_t wr_id,
 	wrid->slots = slots;
 }
-static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
-				  struct ibv_send_wr *wr, uint8_t is_inline)
+static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp,
+				  struct ibv_send_wr *wr,
+				  struct bnxt_re_bsqe *hdr,
+				  uint8_t is_inline, uint32_t *idx)
 {
-	struct bnxt_re_sge *sge = ((void *)wqe + bnxt_re_get_sqe_hdr_sz());
-	struct bnxt_re_bsqe *hdr = wqe;
-	uint32_t wrlen, hdrval = 0;
-	uint8_t opcode, qesize;
+	struct bnxt_re_queue *que;
+	uint32_t hdrval = 0;
+	uint8_t opcode;
 	int len;

-	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, is_inline);
+	que = qp->jsqq->hwque;
+	len = bnxt_re_build_sge(que, wr->sg_list, wr->num_sge,
+				is_inline, idx);
 	if (len < 0)
 		return len;
 	hdr->lhdr.qkey_len = htole64((uint64_t)len);
@@ -1218,34 +1352,22 @@ static int bnxt_re_build_send_sqe(struct bnxt_re_qp *qp, void *wqe,
 	if (opcode == BNXT_RE_WR_OPCD_INVAL)
 		return -EINVAL;
 	hdrval = (opcode & BNXT_RE_HDR_WT_MASK);
-
-	if (is_inline) {
-		wrlen = get_aligned(len, 16);
-		qesize = wrlen >> 4;
-	} else {
-		qesize = wr->num_sge;
-	}
-	/* HW requires wqe size has room for atleast one sge even if none was
-	 * supplied by application
-	 */
-	if (!wr->num_sge)
-		qesize++;
-	qesize += (bnxt_re_get_sqe_hdr_sz() >> 4);
-	hdrval |= (qesize & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT;
 	hdr->rsv_ws_fl_wt |= htole32(hdrval);

 	return len;
 }

-static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, void *wqe,
-				struct ibv_send_wr *wr, uint8_t is_inline)
+static int bnxt_re_build_ud_sqe(struct bnxt_re_qp *qp, struct ibv_send_wr *wr,
+				struct bnxt_re_bsqe *hdr, uint8_t is_inline,
+				uint32_t *idx)
 {
-	struct bnxt_re_send *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
-	struct bnxt_re_bsqe *hdr = wqe;
+	struct bnxt_re_send *sqe;
 	struct bnxt_re_ah *ah;
 	uint64_t qkey;
 	int len;

-	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
+	sqe = bnxt_re_get_hwqe(qp->jsqq->hwque, *idx);
+	(*idx)++;
+	len = bnxt_re_build_send_sqe(qp, wr, hdr, is_inline, idx);
 	if (!wr->wr.ud.ah) {
 		len = -EINVAL;
 		goto bail;
@@ -1259,28 +1381,33 @@ bail:
 	return len;
 }

-static int bnxt_re_build_rdma_sqe(struct bnxt_re_qp *qp, void *wqe,
-				  struct ibv_send_wr *wr, uint8_t is_inline)
+static int bnxt_re_build_rdma_sqe(struct bnxt_re_qp *qp,
+				  struct bnxt_re_bsqe *hdr,
+				  struct ibv_send_wr *wr,
+				  uint8_t is_inline, uint32_t *idx)
 {
-	struct bnxt_re_rdma *sqe = ((void *)wqe + sizeof(struct bnxt_re_bsqe));
+	struct bnxt_re_rdma *sqe;
 	int len;

-	len = bnxt_re_build_send_sqe(qp, wqe, wr, is_inline);
+	sqe = bnxt_re_get_hwqe(qp->jsqq->hwque, *idx);
+	(*idx)++;
+	len = bnxt_re_build_send_sqe(qp, wr, hdr, is_inline, idx);
 	sqe->rva = htole64(wr->wr.rdma.remote_addr);
 	sqe->rkey = htole32(wr->wr.rdma.rkey);

 	return len;
 }

-static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp, void *wqe,
-				 struct ibv_send_wr *wr)
+static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp,
+				 struct bnxt_re_bsqe *hdr,
+				 struct ibv_send_wr *wr, uint32_t *idx)
 {
-	struct bnxt_re_bsqe *hdr = wqe;
-	struct bnxt_re_atomic *sqe = ((void *)wqe +
-				      sizeof(struct bnxt_re_bsqe));
+	struct bnxt_re_atomic *sqe;
 	int len;

-	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
+	sqe = bnxt_re_get_hwqe(qp->jsqq->hwque, *idx);
+	(*idx)++;
+	len = bnxt_re_build_send_sqe(qp, wr, hdr, false, idx);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
 	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
@@ -1289,15 +1416,16 @@ static int bnxt_re_build_cns_sqe(struct bnxt_re_qp *qp, void *wqe,
 	return len;
 }

-static int bnxt_re_build_fna_sqe(struct bnxt_re_qp *qp, void *wqe,
-				 struct ibv_send_wr *wr)
+static int bnxt_re_build_fna_sqe(struct bnxt_re_qp *qp,
+				 struct bnxt_re_bsqe *hdr,
+				 struct ibv_send_wr *wr, uint32_t *idx)
 {
-	struct bnxt_re_bsqe *hdr = wqe;
-	struct bnxt_re_atomic *sqe = ((void *)wqe +
-				      sizeof(struct bnxt_re_bsqe));
+	struct bnxt_re_atomic *sqe;
 	int len;

-	len = bnxt_re_build_send_sqe(qp, wqe, wr, false);
+	sqe = bnxt_re_get_hwqe(qp->jsqq->hwque, *idx);
+	(*idx)++;
+	len = bnxt_re_build_send_sqe(qp, wr, hdr, false, idx);
 	hdr->key_immd = htole32(wr->wr.atomic.rkey);
 	hdr->lhdr.rva = htole64(wr->wr.atomic.remote_addr);
 	sqe->cmp_dt = htole64(wr->wr.atomic.compare_add);
@@ -1311,13 +1439,16 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
 	struct bnxt_re_queue *sq = qp->jsqq->hwque;
 	struct bnxt_re_wrid *wrid;
+	struct bnxt_re_send *sqe;
 	uint8_t is_inline = false;
 	struct bnxt_re_bsqe *hdr;
+	uint32_t swq_idx, slots;
 	int ret = 0, bytes = 0;
 	bool ring_db = false;
-	uint32_t swq_idx;
-	uint32_t sig;
-	void *sqe;
+	uint32_t wqe_size;
+	uint32_t max_ils;
+	uint8_t sig = 0;
+	uint32_t idx;

 	pthread_spin_lock(&sq->qlock);
 	while (wr) {
@@ -1335,18 +1466,20 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			goto bad_wr;
 		}

-		if (bnxt_re_is_que_full(sq) ||
+		max_ils = qp->cap.max_inline;
+		wqe_size = bnxt_re_calc_posted_wqe_slots(sq, wr, max_ils, false);
+		slots = (qp->qpmode == BNXT_RE_WQE_MODE_STATIC) ? 8 : wqe_size;
+		if (bnxt_re_is_que_full(sq, slots) ||
 		    wr->num_sge > qp->cap.max_ssge) {
 			*bad = wr;
 			ret = ENOMEM;
 			goto bad_wr;
 		}

-		sqe = (void *)(sq->va + (sq->tail * sq->stride));
-		memset(sqe, 0, bnxt_re_get_sqe_sz());
-		hdr = sqe;
+		idx = 0;
+		hdr = bnxt_re_get_hwqe(sq, idx++);
 		is_inline = bnxt_re_set_hdr_flags(hdr, wr->send_flags,
-						  qp->cap.sqsig);
+						  qp->cap.sqsig, wqe_size);
 		switch (wr->opcode) {
 		case IBV_WR_SEND_WITH_IMM:
 			/* Since our h/w is LE and user supplies raw-data in
@@ -1357,27 +1490,31 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 			hdr->key_immd = htole32(be32toh(wr->imm_data));
 			SWITCH_FALLTHROUGH;
 		case IBV_WR_SEND:
-			if (qp->qptyp == IBV_QPT_UD)
-				bytes = bnxt_re_build_ud_sqe(qp, sqe, wr,
-							     is_inline);
-			else
-				bytes = bnxt_re_build_send_sqe(qp, sqe, wr,
-							       is_inline);
+			if (qp->qptyp == IBV_QPT_UD) {
+				bytes = bnxt_re_build_ud_sqe(qp, wr, hdr,
+							     is_inline, &idx);
+			} else {
+				sqe = bnxt_re_get_hwqe(sq, idx++);
+				memset(sqe, 0, sizeof(struct bnxt_re_send));
+				bytes = bnxt_re_build_send_sqe(qp, wr, hdr,
+							       is_inline,
+							       &idx);
+			}
 			break;
 		case IBV_WR_RDMA_WRITE_WITH_IMM:
 			hdr->key_immd = htole32(be32toh(wr->imm_data));
 			SWITCH_FALLTHROUGH;
 		case IBV_WR_RDMA_WRITE:
-			bytes = bnxt_re_build_rdma_sqe(qp, sqe, wr, is_inline);
+			bytes = bnxt_re_build_rdma_sqe(qp, hdr, wr, is_inline, &idx);
 			break;
 		case IBV_WR_RDMA_READ:
-			bytes = bnxt_re_build_rdma_sqe(qp, sqe, wr, false);
+			bytes = bnxt_re_build_rdma_sqe(qp, hdr, wr, false, &idx);
 			break;
 		case IBV_WR_ATOMIC_CMP_AND_SWP:
-			bytes = bnxt_re_build_cns_sqe(qp, sqe, wr);
+			bytes = bnxt_re_build_cns_sqe(qp, hdr, wr, &idx);
 			break;
 		case IBV_WR_ATOMIC_FETCH_AND_ADD:
-			bytes = bnxt_re_build_fna_sqe(qp, sqe, wr);
+			bytes = bnxt_re_build_fna_sqe(qp, hdr, wr, &idx);
 			break;
 		default:
 			bytes = -EINVAL;
@@ -1392,10 +1529,11 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,

 		wrid = bnxt_re_get_swqe(qp->jsqq, &swq_idx);
 		sig = ((wr->send_flags & IBV_SEND_SIGNALED) || qp->cap.sqsig);
-		bnxt_re_fill_wrid(wrid, wr->wr_id, bytes, sig, sq->tail, 1);
+		bnxt_re_fill_wrid(wrid, wr->wr_id, bytes,
+				  sig, sq->tail, slots);
 		bnxt_re_fill_psns(qp, wrid, wr->opcode, bytes);
 		bnxt_re_jqq_mod_start(qp->jsqq, swq_idx);
-		bnxt_re_incr_tail(sq, 1);
+		bnxt_re_incr_tail(sq, slots);
 		qp->wqe_cnt++;
 		wr = wr->next;
 		ring_db = true;
@@ -1421,17 +1559,14 @@ bad_wr:
 	return ret;
 }

-static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
-			     void *rqe, uint32_t idx)
+static int bnxt_re_build_rqe(struct bnxt_re_queue *rq, struct ibv_recv_wr *wr,
+			     struct bnxt_re_brqe *hdr, uint32_t wqe_sz,
+			     uint32_t *idx, uint32_t wqe_idx)
 {
-	struct bnxt_re_brqe *hdr = rqe;
-	struct bnxt_re_sge *sge;
-	int wqe_sz, len;
 	uint32_t hdrval;
+	int len;

-	sge = (rqe + bnxt_re_get_rqe_hdr_sz());
-
-	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, false);
+	len = bnxt_re_build_sge(rq, wr->sg_list, wr->num_sge, false, idx);
 	wqe_sz = wr->num_sge + (bnxt_re_get_rqe_hdr_sz() >> 4); /* 16B align */
 	/* HW requires wqe size has room for atleast one sge even if none was
 	 * supplied by application
@@ -1441,7 +1576,7 @@ static int bnxt_re_build_rqe(struct bnxt_re_qp *qp, struct ibv_recv_wr *wr,
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
 	hdr->rsv_ws_fl_wt = htole32(hdrval);
-	hdr->wrid = htole32(idx);
+	hdr->wrid = htole32(wqe_idx);

 	return len;
 }
@@ -1452,8 +1587,11 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 {
 	struct bnxt_re_qp *qp = to_bnxt_re_qp(ibvqp);
 	struct bnxt_re_queue *rq = qp->jrqq->hwque;
 	struct bnxt_re_wrid *swque;
-	uint32_t swq_idx;
-	void *rqe;
+	struct bnxt_re_brqe *hdr;
+	struct bnxt_re_rqe *rqe;
+	uint32_t slots, swq_idx;
+	uint32_t wqe_size;
+	uint32_t idx = 0;
 	int ret;

 	pthread_spin_lock(&rq->qlock);
@@ -1465,17 +1603,24 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 			return EINVAL;
 		}

-		if (bnxt_re_is_que_full(rq) ||
+		wqe_size = bnxt_re_calc_posted_wqe_slots(rq, wr, 0, true);
+		slots = rq->max_slots;
+		if (bnxt_re_is_que_full(rq, slots) ||
 		    wr->num_sge > qp->cap.max_rsge) {
 			pthread_spin_unlock(&rq->qlock);
 			*bad = wr;
 			return ENOMEM;
 		}

-		rqe = (void *)(rq->va + (rq->tail * rq->stride));
-		memset(rqe, 0, bnxt_re_get_rqe_sz());
+		idx = 0;
 		swque = bnxt_re_get_swqe(qp->jrqq, &swq_idx);
-		ret = bnxt_re_build_rqe(qp, wr, rqe, swq_idx);
+		hdr = bnxt_re_get_hwqe(rq, idx++);
+		/* Just to build clean rqe */
+		rqe = bnxt_re_get_hwqe(rq, idx++);
+		memset(rqe, 0, sizeof(struct bnxt_re_rqe));
+		/* Fill SGEs */
+
+		ret = bnxt_re_build_rqe(rq, wr, hdr, wqe_size, &idx, swq_idx);
 		if (ret < 0) {
 			pthread_spin_unlock(&rq->qlock);
 			*bad = wr;
@@ -1483,9 +1628,9 @@ int bnxt_re_post_recv(struct ibv_qp *ibvqp, struct ibv_recv_wr *wr,
 		}

 		swque = bnxt_re_get_swqe(qp->jrqq, NULL);
-		bnxt_re_fill_wrid(swque, wr->wr_id, ret, 0, rq->tail, 1);
+		bnxt_re_fill_wrid(swque, wr->wr_id, ret, 0, rq->tail, slots);
 		bnxt_re_jqq_mod_start(qp->jrqq, swq_idx);
-		bnxt_re_incr_tail(rq, 1);
+		bnxt_re_incr_tail(rq, slots);
 		wr = wr->next;
 		bnxt_re_ring_rq_db(qp);
 	}
@@ -1644,12 +1789,20 @@ static int bnxt_re_build_srqe(struct bnxt_re_srq *srq,
 	struct bnxt_re_wrid *wrid;
 	int wqe_sz, len, next;
 	uint32_t hdrval = 0;
+	int indx;

 	sge = (srqe + bnxt_re_get_srqe_hdr_sz());
 	next = srq->start_idx;
 	wrid = &srq->srwrid[next];
-	len = bnxt_re_build_sge(sge, wr->sg_list, wr->num_sge, false);
+	len = 0;
+	for (indx = 0; indx < wr->num_sge; indx++, sge++) {
+		sge->pa = htole64(wr->sg_list[indx].addr);
+		sge->lkey = htole32(wr->sg_list[indx].lkey);
+		sge->length = htole32(wr->sg_list[indx].length);
+		len += wr->sg_list[indx].length;
+	}
+
 	hdrval = BNXT_RE_WR_OPCD_RECV;
 	wqe_sz = wr->num_sge + (bnxt_re_get_srqe_hdr_sz() >> 4); /* 16B align */
 	hdrval |= ((wqe_sz & BNXT_RE_HDR_WS_MASK) << BNXT_RE_HDR_WS_SHIFT);
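
Taken together, the slot accounting in this series comes down to simple arithmetic: the stride is one 16B SGE, a wqe of a header plus n SGEs occupies calc_wqe_sz(n)/16 slots, static mode always charges the full 6-SGE size, and the doorbell index is the slot tail divided by max_slots. A worked sketch of those conversions, assuming a 32B SQ header (bsqe plus send header) so that the 6-SGE wqe is the 128B/8-slot case the commit message mentions:

#include <stdint.h>
#include <stdio.h>

#define SLOT_SZ	16u	/* stride: one struct bnxt_re_sge */
#define SQE_HDR	32u	/* assumed bsqe + send header size, 2 slots */

/* Same shape as bnxt_re_calc_wqe_sz(). */
static uint32_t calc_wqe_sz(uint32_t nsge)
{
	return SQE_HDR + SLOT_SZ * nsge;
}

int main(void)
{
	uint32_t nsge, tail, max_slots;

	for (nsge = 0; nsge <= 6; nsge += 2)
		printf("%u sge -> %3uB wqe -> %u slots\n", nsge,
		       calc_wqe_sz(nsge), calc_wqe_sz(nsge) / SLOT_SZ);

	/* Static mode: every wqe is charged the full 6-SGE size. */
	max_slots = calc_wqe_sz(6) / SLOT_SZ;	/* 128B / 16B = 8 */

	/* Doorbell index: slot tail converted back to a wqe index,
	 * as bnxt_re_ring_sq_db() now does.
	 */
	tail = 24;				/* hypothetical slot tail */
	printf("max_slots=%u, slot tail %u -> doorbell index %u\n",
	       max_slots, tail, tail / max_slots);
	return 0;
}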