From patchwork Thu Sep 20 17:08:31 2018
X-Patchwork-Submitter: Devesh Sharma
X-Patchwork-Id: 10608237
From: Devesh Sharma
To: jgg@mellanox.com, dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, NMoreyChaisemartin@suse.de, Devesh Sharma, Jonathan Richardson, JD Zheng
Subject: [PATCH rdma-core 1/4] bnxt_re/lib: Reduce memory barrier calls
Date: Thu, 20 Sep 2018 13:08:31 -0400
Message-Id: <1537463314-7807-2-git-send-email-devesh.sharma@broadcom.com>
In-Reply-To: <1537463314-7807-1-git-send-email-devesh.sharma@broadcom.com>
References: <1537463314-7807-1-git-send-email-devesh.sharma@broadcom.com>
List-ID: X-Mailing-List: linux-rdma@vger.kernel.org

Move wmb calls (ring doorbell) out of the loop when processing work
requests in post send. This reduces the number of calls and increases
performance.
In some cases it improves performance by 35%.

Signed-off-by: Jonathan Richardson
Signed-off-by: JD Zheng
Signed-off-by: Devesh Sharma
---
 providers/bnxt_re/verbs.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/providers/bnxt_re/verbs.c b/providers/bnxt_re/verbs.c
index 0036cc5..9ce1454 100644
--- a/providers/bnxt_re/verbs.c
+++ b/providers/bnxt_re/verbs.c
@@ -1222,31 +1222,32 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 	struct bnxt_re_bsqe *hdr;
 	struct bnxt_re_wrid *wrid;
 	struct bnxt_re_psns *psns;
-	void *sqe;
-	int ret = 0, bytes = 0;
 	uint8_t is_inline = false;
+	int ret = 0, bytes = 0;
+	bool ring_db = false;
+	void *sqe;
 
 	pthread_spin_lock(&sq->qlock);
 	while (wr) {
 		if ((qp->qpst != IBV_QPS_RTS) &&
 		    (qp->qpst != IBV_QPS_SQD)) {
 			*bad = wr;
-			pthread_spin_unlock(&sq->qlock);
-			return EINVAL;
+			ret = EINVAL;
+			goto bad_wr;
 		}
 
 		if ((qp->qptyp == IBV_QPT_UD) &&
 		    (wr->opcode != IBV_WR_SEND &&
 		     wr->opcode != IBV_WR_SEND_WITH_IMM)) {
 			*bad = wr;
-			pthread_spin_unlock(&sq->qlock);
-			return EINVAL;
+			ret = EINVAL;
+			goto bad_wr;
 		}
 
 		if (bnxt_re_is_que_full(sq) ||
 		    wr->num_sge > qp->cap.max_ssge) {
 			*bad = wr;
-			pthread_spin_unlock(&sq->qlock);
-			return ENOMEM;
+			ret = ENOMEM;
+			goto bad_wr;
 		}
 
 		sqe = (void *)(sq->va + (sq->tail * sq->stride));
@@ -1305,9 +1306,10 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		bnxt_re_incr_tail(sq);
 		qp->wqe_cnt++;
 		wr = wr->next;
-		bnxt_re_ring_sq_db(qp);
-		if (qp->wqe_cnt == BNXT_RE_UD_QP_HW_STALL && qp->qptyp ==
-		    IBV_QPT_UD) {
+		ring_db = true;
+
+		if (qp->wqe_cnt == BNXT_RE_UD_QP_HW_STALL &&
+		    qp->qptyp == IBV_QPT_UD) {
 			/* Move RTS to RTS since it is time. */
 			struct ibv_qp_attr attr;
 			int attr_mask;
@@ -1319,6 +1321,10 @@ int bnxt_re_post_send(struct ibv_qp *ibvqp, struct ibv_send_wr *wr,
 		}
 	}
 
+bad_wr:
+	if (ring_db)
+		bnxt_re_ring_sq_db(qp);
+
 	pthread_spin_unlock(&sq->qlock);
 	return ret;
 }