From patchwork Mon May 22 10:15:40 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Selvin Xavier
X-Patchwork-Id: 9739871
From: Selvin Xavier
To: dledford@redhat.com
Cc: linux-rdma@vger.kernel.org, Devesh Sharma, Kalesh AP, Selvin Xavier
Subject: [PATCH V4 for-next 10/14] RDMA/bnxt_re: Fix RQE posting logic
Date: Mon, 22 May 2017 03:15:40 -0700
Message-Id: <1495448144-18966-11-git-send-email-selvin.xavier@broadcom.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1495448144-18966-1-git-send-email-selvin.xavier@broadcom.com>
References: <1495448144-18966-1-git-send-email-selvin.xavier@broadcom.com>
From: Devesh Sharma

This patch adds code to ring the RQ doorbell aggressively so that the
adapter can DMA RQ buffers sooner, instead of DMAing all WQEs in the
post_recv WR list together only at the end of the post_recv verb. Also
use a spinlock to serialize RQ posting.

Signed-off-by: Kalesh AP
Signed-off-by: Devesh Sharma
Signed-off-by: Selvin Xavier
---
 drivers/infiniband/hw/bnxt_re/bnxt_re.h  |  2 ++
 drivers/infiniband/hw/bnxt_re/ib_verbs.c | 18 +++++++++++++++++-
 drivers/infiniband/hw/bnxt_re/ib_verbs.h |  1 +
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
index 277c2da..12950ec 100644
--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
@@ -58,6 +58,8 @@
 
 #define BNXT_RE_UD_QP_HW_STALL		0x400000
 
+#define BNXT_RE_RQ_WQE_THRESHOLD	32
+
 struct bnxt_re_work {
 	struct work_struct	work;
 	unsigned long		event;
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
index 43f7d66..ebacc5a 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
@@ -1217,6 +1217,7 @@ struct ib_qp *bnxt_re_create_qp(struct ib_pd *ib_pd,
 	qp->ib_qp.qp_num = qp->qplib_qp.id;
 
 	spin_lock_init(&qp->sq_lock);
+	spin_lock_init(&qp->rq_lock);
 
 	if (udata) {
 		struct bnxt_re_qp_resp resp;
@@ -2248,7 +2249,10 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, struct ib_recv_wr *wr,
 	struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
 	struct bnxt_qplib_swqe wqe;
 	int rc = 0, payload_sz = 0;
+	unsigned long flags;
+	u32 count = 0;
 
+	spin_lock_irqsave(&qp->rq_lock, flags);
 	while (wr) {
 		/* House keeping */
 		memset(&wqe, 0, sizeof(wqe));
@@ -2277,9 +2281,21 @@ int bnxt_re_post_recv(struct ib_qp *ib_qp, struct ib_recv_wr *wr,
 			*bad_wr = wr;
 			break;
 		}
+
+		/* Ring DB if the RQEs posted reaches a threshold value */
+		if (++count >= BNXT_RE_RQ_WQE_THRESHOLD) {
+			bnxt_qplib_post_recv_db(&qp->qplib_qp);
+			count = 0;
+		}
+
 		wr = wr->next;
 	}
-	bnxt_qplib_post_recv_db(&qp->qplib_qp);
+
+	if (count)
+		bnxt_qplib_post_recv_db(&qp->qplib_qp);
+
+	spin_unlock_irqrestore(&qp->rq_lock, flags);
+
 	return rc;
 }
 
diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
index f0be321..a89ab3f 100644
--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
@@ -73,6 +73,7 @@ struct bnxt_re_qp {
 	struct bnxt_re_dev	*rdev;
 	struct ib_qp		ib_qp;
 	spinlock_t		sq_lock;	/* protect sq */
+	spinlock_t		rq_lock;	/* protect rq */
 	struct bnxt_qplib_qp	qplib_qp;
 	struct ib_umem		*sumem;
 	struct ib_umem		*rumem;
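
For readers new to the pattern, here is a minimal, self-contained user-space
sketch of the batched-doorbell logic this patch introduces in
bnxt_re_post_recv(). It is illustration only: rq_ctx, recv_wr, post_one_rqe(),
ring_rq_doorbell() and post_recv_batched() are made-up stand-ins, not driver
or kernel APIs, and the rq_lock serialization is omitted for brevity; only the
threshold-and-final-flush structure mirrors the patch.

/*
 * Sketch of posting a receive WR chain while ringing the doorbell every
 * RQ_WQE_THRESHOLD entries, plus a final flush for any partial batch.
 * All types and helpers below are placeholders for illustration.
 */
#include <stdio.h>
#include <stddef.h>

#define RQ_WQE_THRESHOLD 32		/* mirrors BNXT_RE_RQ_WQE_THRESHOLD */

struct recv_wr {			/* stand-in for struct ib_recv_wr */
	struct recv_wr *next;
	int id;
};

struct rq_ctx {				/* stand-in for the QP's RQ state */
	unsigned int posted;
	unsigned int doorbells;
};

static int post_one_rqe(struct rq_ctx *rq, struct recv_wr *wr)
{
	(void)wr;			/* a real driver copies the WR's SGEs */
	rq->posted++;			/* pretend the WQE landed in the ring */
	return 0;
}

static void ring_rq_doorbell(struct rq_ctx *rq)
{
	rq->doorbells++;		/* pretend we told the HW to DMA RQEs */
}

/* Post a WR chain, ringing the doorbell every RQ_WQE_THRESHOLD entries. */
static int post_recv_batched(struct rq_ctx *rq, struct recv_wr *wr,
			     struct recv_wr **bad_wr)
{
	unsigned int count = 0;
	int rc = 0;

	while (wr) {
		rc = post_one_rqe(rq, wr);
		if (rc) {
			*bad_wr = wr;
			break;
		}
		if (++count >= RQ_WQE_THRESHOLD) {
			ring_rq_doorbell(rq);
			count = 0;
		}
		wr = wr->next;
	}
	if (count)			/* flush the partial batch, if any */
		ring_rq_doorbell(rq);
	return rc;
}

int main(void)
{
	struct recv_wr wrs[70], *bad = NULL;
	struct rq_ctx rq = { 0, 0 };
	int i;

	for (i = 0; i < 70; i++) {	/* build a 70-entry WR chain */
		wrs[i].id = i;
		wrs[i].next = (i + 1 < 70) ? &wrs[i + 1] : NULL;
	}
	post_recv_batched(&rq, wrs, &bad);
	/* 70 WQEs => doorbell at 32, at 64, and once more for the final 6 */
	printf("posted=%u doorbells=%u\n", rq.posted, rq.doorbells);
	return 0;
}

With a 70-entry chain the doorbell is rung after 32 and 64 posted WQEs and
once more for the remaining 6, so the adapter can start DMA on the first
buffers without waiting for the whole chain, while the final flush still
covers any partial batch.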