From patchwork Fri May 18 07:36:17 2018
X-Patchwork-Submitter: Raju Rangoju
X-Patchwork-Id: 10408345
X-Patchwork-Delegate: jgg@ziepe.ca
From: Raju Rangoju
To: jgg@mellanox.com, dledford@redhat.com, linux-rdma@vger.kernel.org
Cc: swise@opengridcomputing.com, bharat@chelsio.com, rajur@chelsio.com
Subject: [PATCH rdma-core 2/2] cxgb4: Atomically flush per QP HW CQEs
Date: Fri, 18 May 2018 13:06:17 +0530
Message-Id: <20180518073617.26404-3-rajur@chelsio.com>
X-Mailer: git-send-email 2.12.0
In-Reply-To: <20180518073617.26404-1-rajur@chelsio.com>
References: <20180518073617.26404-1-rajur@chelsio.com>
List-ID: <linux-rdma.vger.kernel.org>

From: Potnuri Bharat Teja

When a CQ is shared by multiple QPs, c4iw_flush_hw_cq() needs to
acquire the corresponding QP lock before moving the CQEs into its
corresponding SW queue and accessing the SQ contents for completing
a WR. Ignore CQEs if the corresponding QP is already flushed.

Signed-off-by: Potnuri Bharat Teja
Reviewed-by: Steve Wise
Signed-off-by: Raju Rangoju
---
 providers/cxgb4/cq.c       | 20 +++++++++++++++++++-
 providers/cxgb4/libcxgb4.h |  2 +-
 providers/cxgb4/qp.c       |  4 ++--
 3 files changed, 22 insertions(+), 4 deletions(-)

diff --git a/providers/cxgb4/cq.c b/providers/cxgb4/cq.c
index be6cf2f2..478c596a 100644
--- a/providers/cxgb4/cq.c
+++ b/providers/cxgb4/cq.c
@@ -196,7 +196,7 @@ static void advance_oldest_read(struct t4_wq *wq)
  * Deal with out-of-order and/or completions that complete
  * prior unsignalled WRs.
  */
-void c4iw_flush_hw_cq(struct c4iw_cq *chp)
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 {
 	struct t4_cqe *hw_cqe, *swcqe, read_cqe;
 	struct c4iw_qp *qhp;
@@ -220,6 +220,14 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 		if (qhp == NULL)
 			goto next_cqe;
 
+		if (flush_qhp != qhp) {
+			pthread_spin_lock(&qhp->lock);
+
+			if (qhp->wq.flushed == 1) {
+				goto next_cqe;
+			}
+		}
+
 		if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE)
 			goto next_cqe;
 
@@ -279,6 +287,8 @@ void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 next_cqe:
 		t4_hwcq_consume(&chp->cq);
 		ret = t4_next_hw_cqe(&chp->cq, &hw_cqe);
+		if (qhp && flush_qhp != qhp)
+			pthread_spin_unlock(&qhp->lock);
 	}
 }
 
@@ -372,6 +382,14 @@ static int poll_cq(struct t4_wq *wq, struct t4_cq *cq, struct t4_cqe *cqe,
 	}
 
 	/*
+	 * skip HW cqe's if wq is already flushed.
+	 */
+	if (wq->flushed && !SW_CQE(hw_cqe)) {
+		ret = -EAGAIN;
+		goto skip_cqe;
+	}
+
+	/*
 	 * Gotta tweak READ completions:
 	 *	1) the cqe doesn't contain the sq_wptr from the wr.
 	 *	2) opcode not reflected from the wr.
diff --git a/providers/cxgb4/libcxgb4.h b/providers/cxgb4/libcxgb4.h
index 893bd85d..8eda822e 100644
--- a/providers/cxgb4/libcxgb4.h
+++ b/providers/cxgb4/libcxgb4.h
@@ -225,7 +225,7 @@ int c4iw_attach_mcast(struct ibv_qp *qp, const union ibv_gid *gid,
 int c4iw_detach_mcast(struct ibv_qp *qp, const union ibv_gid *gid,
 		      uint16_t lid);
 void c4iw_async_event(struct ibv_async_event *event);
-void c4iw_flush_hw_cq(struct c4iw_cq *chp);
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp);
 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);
 void c4iw_flush_sq(struct c4iw_qp *qhp);
 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count);
diff --git a/providers/cxgb4/qp.c b/providers/cxgb4/qp.c
index 46806341..5d90510c 100644
--- a/providers/cxgb4/qp.c
+++ b/providers/cxgb4/qp.c
@@ -517,12 +517,12 @@ void c4iw_flush_qp(struct c4iw_qp *qhp)
 
 	update_qp_state(qhp);
 
-	c4iw_flush_hw_cq(rchp);
+	c4iw_flush_hw_cq(rchp, qhp);
 	c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 	c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
 	if (schp != rchp)
-		c4iw_flush_hw_cq(schp);
+		c4iw_flush_hw_cq(schp, qhp);
 	c4iw_flush_sq(qhp);