From patchwork Wed Jun 14 10:26:31 2017
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 9786071
From: Selvin Xavier
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, Devesh Sharma, Kalesh AP, Selvin Xavier
Subject: [PATCH V5 for-next 10/14] RDMA/bnxt_re: Fix RQE posting logic
Date: Wed, 14 Jun 2017 03:26:31 -0700
Message-Id: <1497435995-20480-11-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1497435995-20480-1-git-send-email-selvin.xavier@broadcom.com>
References: <1497435995-20480-1-git-send-email-selvin.xavier@broadcom.com>
X-Mailing-List: linux-rdma@vger.kernel.org

From: Devesh Sharma

This patch adds code to ring the RQ doorbell aggressively so that the
adapter can start DMAing RQ buffers sooner, instead of DMAing all the
WQEs in the post_recv WR list together at the end of the post_recv verb.
Also, use a spinlock to serialize RQ posting.

Signed-off-by: Kalesh AP
Signed-off-by: Devesh Sharma
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/bnxt_re.h  |  2 ++
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 18 +++++++++++++++++-
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |  1 +
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index 277c2da..12950ec 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -58,6 +58,8 @@
 
 #define BNXT_RE_UD_QP_HW_STALL		0x400000
 
+#define BNXT_RE_RQ_WQE_THRESHOLD	32
+
 struct bnxt_re_work {
 	struct work_struct	work;
 	unsigned long		event;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 2ad2876..5118cf3 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -1218,6 +1218,7 @@ struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
 	qp->ib_qp.qp_num = qp->qplib_qp.id;
 
 	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
 
 	if (udata) {
 		struct bnxt_re_qp_resp resp;
@@ -2250,7 +2251,10 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, struct ib_recv_wr *wr,
 	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
 	struct bnxt_qplib_swqe wqe;
 	int rc = 0, payload_sz = 0;
+	unsigned long flags;
+	u32 count = 0;
 
+	spin_lock_irqsave(&qp->rq_lock, flags);
 	while (wr) {
 		/* House keeping */
 		memset(&wqe, 0, sizeof(wqe));
@@ -2279,9 +2283,21 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, struct ib_recv_wr *wr,
 			*bad_wr = wr;
 			break;
 		}
+
+		/* Ring DB if the RQEs posted reaches a threshold value */
+		if (++count >= BNXT_RE_RQ_WQE_THRESHOLD) {
+			bnxt_qplib_post_recv_db(&qp->qplib_qp);
+			count = 0;
+		}
+
 		wr = wr->next;
 	}
-	bnxt_qplib_post_recv_db(&qp->qplib_qp);
+
+	if (count)
+		bnxt_qplib_post_recv_db(&qp->qplib_qp);
+
+	spin_unlock_irqrestore(&qp->rq_lock, flags);
+
 	return rc;
 }
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index 279eceb..1456ccd 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -73,6 +73,7 @@ struct bnxt_re_qp {
 	struct bnxt_re_dev	*rdev;
 	struct ib_qp		ib_qp;
 	spinlock_t		sq_lock;	/* protect sq */
+	spinlock_t		rq_lock;	/* protect rq */
 	struct bnxt_qplib_qp	qplib_qp;
 	struct ib_umem		*sumem;
 	struct ib_umem		*rumem;