
iw_cxgb4: Atomically flush per QP HW CQEs

Message ID 20180427111116.5047-1-bharat@chelsio.com (mailing list archive)
State Accepted

Commit Message

Potnuri Bharat Teja April 27, 2018, 11:11 a.m. UTC
When a CQ is shared by multiple QPs, c4iw_flush_hw_cq() needs to acquire
the corresponding QP's lock before moving that QP's CQEs into its SW
queue and accessing the SQ contents to complete a WR.
Ignore CQEs if the corresponding QP has already been flushed.

Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
---
 drivers/infiniband/hw/cxgb4/cq.c       | 11 ++++++++++-
 drivers/infiniband/hw/cxgb4/iw_cxgb4.h |  2 +-
 drivers/infiniband/hw/cxgb4/qp.c       |  4 ++--
 3 files changed, 13 insertions(+), 4 deletions(-)
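The locking rule the patch enforces can be sketched in isolation: when the flush loop encounters a CQE owned by a QP other than the one driving the flush, it must take that QP's lock first, and must ignore the CQE entirely if that QP has already been flushed. The sketch below is illustrative only, under assumed simplified types (`struct qp`, `process_cqe` are hypothetical stand-ins for the driver's `c4iw_qp` and the body of `c4iw_flush_hw_cq`), using a pthread mutex in place of the kernel spinlock:

```c
#include <pthread.h>

/* Hypothetical, simplified stand-in for struct c4iw_qp. */
struct qp {
	pthread_mutex_t lock;
	int flushed;
};

/*
 * Process one CQE owned by qhp while flush_qhp drives the flush.
 * flush_qhp's own lock is assumed to already be held by the caller,
 * so it is only taken for *other* QPs sharing the CQ.
 * Returns 1 if the CQE was completed, 0 if it was ignored.
 */
static int process_cqe(struct qp *qhp, struct qp *flush_qhp)
{
	int completed = 0;

	if (qhp != flush_qhp) {
		/* Another QP shares this CQ: serialize against its own
		 * flush path before touching its SQ / SW queue. */
		pthread_mutex_lock(&qhp->lock);
		if (qhp->flushed)	/* already flushed: ignore CQE */
			goto next_cqe;
	}

	completed = 1;	/* move CQE to the SW queue / complete the WR */

next_cqe:
	if (qhp != flush_qhp)
		pthread_mutex_unlock(&qhp->lock);
	return completed;
}
```

Note the unlock sits on the shared `next_cqe` exit path, mirroring the patch, so the lock is released whether the CQE was completed or skipped.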

Comments

Steve Wise April 27, 2018, 12:51 p.m. UTC | #1
> -----Original Message-----
> From: Potnuri Bharat Teja <bharat@chelsio.com>
> Sent: Friday, April 27, 2018 6:11 AM
> To: jgg@ziepe.ca; dledford@redhat.com; leon@kernel.org
> Cc: linux-rdma@vger.kernel.org; swise@opengridcomputing.com;
> bharat@chelsio.com
> Subject: [PATCH] iw_cxgb4: Atomically flush per QP HW CQEs
> 
> When a CQ is shared by multiple QPs, c4iw_flush_hw_cq() needs to acquire
> corresponding QP lock before moving the CQEs into its corresponding SW
> queue and accessing the SQ contents for completing a WR.
> Ignore CQEs if corresponding QP is already flushed.
> 
> Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>

Looks good.  This needs a stable tag.

Reviewed-by: Steve Wise <swise@opengridcomputing.com>


--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Doug Ledford April 27, 2018, 6:40 p.m. UTC | #2
On Fri, 2018-04-27 at 07:51 -0500, Steve Wise wrote:
> > -----Original Message-----
> > From: Potnuri Bharat Teja <bharat@chelsio.com>
> > Sent: Friday, April 27, 2018 6:11 AM
> > To: jgg@ziepe.ca; dledford@redhat.com; leon@kernel.org
> > Cc: linux-rdma@vger.kernel.org; swise@opengridcomputing.com;
> > bharat@chelsio.com
> > Subject: [PATCH] iw_cxgb4: Atomically flush per QP HW CQEs
> > 
> > When a CQ is shared by multiple QPs, c4iw_flush_hw_cq() needs to acquire
> > corresponding QP lock before moving the CQEs into its corresponding SW
> > queue and accessing the SQ contents for completing a WR.
> > Ignore CQEs if corresponding QP is already flushed.
> > 
> > Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
> 
> Looks good.   This needs a stable tag.
> 
> Reviewed-by: Steve Wise <swise@opengridcomputing.com>
> 
> 

I added a generic stable tag.  However, this didn't apply cleanly to my
tree, so I'm not going to hold my breath that what I submit to Linus
will apply cleanly to the stable trees.
Steve Wise April 27, 2018, 7 p.m. UTC | #3
> On Fri, 2018-04-27 at 07:51 -0500, Steve Wise wrote:
> > > -----Original Message-----
> > > From: Potnuri Bharat Teja <bharat@chelsio.com>
> > > Sent: Friday, April 27, 2018 6:11 AM
> > > To: jgg@ziepe.ca; dledford@redhat.com; leon@kernel.org
> > > Cc: linux-rdma@vger.kernel.org; swise@opengridcomputing.com;
> > > bharat@chelsio.com
> > > Subject: [PATCH] iw_cxgb4: Atomically flush per QP HW CQEs
> > >
> > > When a CQ is shared by multiple QPs, c4iw_flush_hw_cq() needs to
> acquire
> > > corresponding QP lock before moving the CQEs into its corresponding SW
> > > queue and accessing the SQ contents for completing a WR.
> > > Ignore CQEs if corresponding QP is already flushed.
> > >
> > > Signed-off-by: Potnuri Bharat Teja <bharat@chelsio.com>
> >
> > Looks good.   This needs a stable tag.
> >
> > Reviewed-by: Steve Wise <swise@opengridcomputing.com>
> >
> >
> 
> I added a generic stable tag.  However, this didn't apply cleanly to my
> tree, so I'm not going to hold my breath that what I submit to Linus
> will apply cleanly to the stable trees.

Thanks Doug.  When the time comes, we'll provide the backport(s) to linux-stable if needed.

Steve.



Patch

diff --git a/drivers/infiniband/hw/cxgb4/cq.c b/drivers/infiniband/hw/cxgb4/cq.c
index 1ccef76e2ae5..e8e8ba95a260 100644
--- a/drivers/infiniband/hw/cxgb4/cq.c
+++ b/drivers/infiniband/hw/cxgb4/cq.c
@@ -317,7 +317,7 @@  static void advance_oldest_read(struct t4_wq *wq)
  * Deal with out-of-order and/or completions that complete
  * prior unsignalled WRs.
  */
-void c4iw_flush_hw_cq(struct c4iw_cq *chp)
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp)
 {
 	struct t4_cqe *hw_cqe, *swcqe, read_cqe;
 	struct c4iw_qp *qhp;
@@ -341,6 +341,13 @@  void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 		if (qhp == NULL)
 			goto next_cqe;
 
+		if (flush_qhp != qhp) {
+			spin_lock(&qhp->lock);
+
+			if (qhp->wq.flushed == 1)
+				goto next_cqe;
+		}
+
 		if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE)
 			goto next_cqe;
 
@@ -392,6 +399,8 @@  void c4iw_flush_hw_cq(struct c4iw_cq *chp)
 next_cqe:
 		t4_hwcq_consume(&chp->cq);
 		ret = t4_next_hw_cqe(&chp->cq, &hw_cqe);
+		if (qhp && flush_qhp != qhp)
+			spin_unlock(&qhp->lock);
 	}
 }
 
diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
index c2cf0d1d968a..3297df6b84dd 100644
--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
@@ -1081,7 +1081,7 @@  u32 c4iw_pblpool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size);
 u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size);
 void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size);
-void c4iw_flush_hw_cq(struct c4iw_cq *chp);
+void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp);
 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count);
 int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp);
 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);
diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
index a488db1e3bf0..10b02680ed76 100644
--- a/drivers/infiniband/hw/cxgb4/qp.c
+++ b/drivers/infiniband/hw/cxgb4/qp.c
@@ -1627,13 +1627,13 @@  static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
 	qhp->wq.flushed = 1;
 	t4_set_wq_in_error(&qhp->wq, 0);
 
-	c4iw_flush_hw_cq(rchp);
+	c4iw_flush_hw_cq(rchp, qhp);
 	if (!qhp->srq) {
 		c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count);
 		rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count);
 	}
 	if (schp != rchp)
-		c4iw_flush_hw_cq(schp);
+		c4iw_flush_hw_cq(schp, qhp);
 	sq_flushed = c4iw_flush_sq(qhp);
 
 	spin_unlock(&qhp->lock);