
RDMA/rxe: Send last wqe reached event on qp cleanup

Message ID 20230602164042.9240-1-rpearsonhpe@gmail.com (mailing list archive)
State Superseded
Series RDMA/rxe: Send last wqe reached event on qp cleanup

Commit Message

Bob Pearson June 2, 2023, 4:40 p.m. UTC
The IBA requires:
	o11-5.2.5: If the HCA supports SRQ, for RC and UD service,
	the CI shall generate a Last WQE Reached Affiliated Asynchronous
	Event on a QP that is in the Error State and is associated with
	an SRQ when either:
		• a CQE is generated for the last WQE, or
		• the QP gets in the Error State and there are no more
		  WQEs on the RQ.

This patch implements this behavior in flush_recv_queue(), which is
called from rxe_qp_error() whenever the qp is put into the error
state. The rxe responder executes SRQ WQEs directly from the SRQ,
so there are never any remaining WQEs on the RQ.

Signed-off-by: Bob Pearson <rpearsonhpe@gmail.com>
---
 drivers/infiniband/sw/rxe/rxe_resp.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Bob Pearson June 2, 2023, 4:43 p.m. UTC | #1
Sorry, ignore this one. I forgot the for-next.

Bob


Patch

diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
index 172c8f916470..0c24facd12cb 100644
--- a/drivers/infiniband/sw/rxe/rxe_resp.c
+++ b/drivers/infiniband/sw/rxe/rxe_resp.c
@@ -1492,8 +1492,17 @@ static void flush_recv_queue(struct rxe_qp *qp, bool notify)
 	struct rxe_recv_wqe *wqe;
 	int err;
 
-	if (qp->srq)
+	if (qp->srq) {
+		if (notify && qp->ibqp.event_handler) {
+			struct ib_event ev;
+
+			ev.device = qp->ibqp.device;
+			ev.element.qp = &qp->ibqp;
+			ev.event = IB_EVENT_QP_LAST_WQE_REACHED;
+			qp->ibqp.event_handler(&ev, qp->ibqp.qp_context);
+		}
 		return;
+	}
 
 	while ((wqe = queue_head(q, q->type))) {
 		if (notify) {