
[net-next] Revert "net/smc: don't req_notify until all CQEs drained"

Message ID 20220304091719.48340-1-dust.li@linux.alibaba.com (mailing list archive)
State Accepted
Commit 925a24213b5cc80fcef8858e6dc1f97ea2b17afb
Delegated to: Netdev Maintainers
Series [net-next] Revert "net/smc: don't req_notify until all CQEs drained"

Checks

Context Check Description
netdev/tree_selection success Clearly marked for net-next
netdev/fixes_present success Fixes tag not required for -next series
netdev/subject_prefix success Link
netdev/cover_letter success Single patches do not need cover letters
netdev/patch_count success Link
netdev/header_inline success No static functions without inline keyword in header files
netdev/build_32bit success Errors and warnings before: 0 this patch: 0
netdev/cc_maintainers success CCed 5 of 5 maintainers
netdev/build_clang success Errors and warnings before: 0 this patch: 0
netdev/module_param success Was 0 now: 0
netdev/verify_signedoff success Signed-off-by tag matches author and committer
netdev/verify_fixes success No Fixes tag
netdev/build_allmodconfig_warn success Errors and warnings before: 0 this patch: 0
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 77 lines checked
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/source_inline success Was 0 now: 0

Commit Message

Dust Li March 4, 2022, 9:17 a.m. UTC
This reverts commit a505cce6f7cfaf2aa2385aab7286063c96444526.

Leon says:
  We already discussed that. SMC should be changed to use
  RDMA CQ pool API
  drivers/infiniband/core/cq.c.
  ib_poll_handler() has a much better implementation (tracing,
  IRQ rescheduling, proper error handling) than this SMC variant.

Since we will switch to ib_poll_handler() in the future,
revert this patch.

Link: https://lore.kernel.org/netdev/20220301105332.GA9417@linux.alibaba.com/
Suggested-by: Leon Romanovsky <leon@kernel.org>
Suggested-by: Karsten Graul <kgraul@linux.ibm.com>
Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
---
 net/smc/smc_wr.c | 49 +++++++++++++++++++++---------------------------
 1 file changed, 21 insertions(+), 28 deletions(-)
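
For reference, the CQ pool API mentioned above lives in
drivers/infiniband/core/cq.c. A minimal sketch of what a conversion might
look like follows; the smc_* names and the per-WR context struct are
illustrative assumptions, not code from this series. Only
ib_cq_pool_get()/ib_cq_pool_put(), struct ib_cqe, IB_POLL_SOFTIRQ and
ib_poll_handler() are real kernel API.

/* Hypothetical sketch only -- not part of this patch. With a pool CQ,
 * ib_poll_handler() owns polling, budgeting and re-arming, and dispatches
 * each completion through wc->wr_cqe->done().
 */
#include <rdma/ib_verbs.h>
#include "smc_ib.h"			/* assumed: struct smc_ib_device */

struct smc_wr_tx_pend {			/* assumed per-WR context */
	struct ib_cqe cqe;		/* found again via wc->wr_cqe */
	u64 wr_id;
};

static void smc_wr_tx_done(struct ib_cq *cq, struct ib_wc *wc)
{
	struct smc_wr_tx_pend *pend =
		container_of(wc->wr_cqe, struct smc_wr_tx_pend, cqe);

	/* Only per-completion work is left here; the polling loop, budget
	 * and IB_CQ_REPORT_MISSED_EVENTS handling all live in
	 * ib_poll_handler().
	 */
	pr_debug("tx cqe: wr_id %llu status %d\n", pend->wr_id, wc->status);
}

static int smc_wr_get_send_cq(struct smc_ib_device *dev, unsigned int nr_cqe)
{
	/* IB_POLL_SOFTIRQ keeps completion processing in softirq context,
	 * comparable to today's tasklet; -1 lets the core pick a comp
	 * vector.
	 */
	dev->roce_cq_send = ib_cq_pool_get(dev->ibdev, nr_cqe, -1,
					   IB_POLL_SOFTIRQ);
	return PTR_ERR_OR_ZERO(dev->roce_cq_send);
}

Each send WR would then set pend->cqe.done = smc_wr_tx_done and
wr.wr_cqe = &pend->cqe before posting, instead of carrying a wr_id.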

Comments

Leon Romanovsky March 4, 2022, 5:12 p.m. UTC | #1
On Fri, Mar 04, 2022 at 05:17:19PM +0800, Dust Li wrote:
> This reverts commit a505cce6f7cfaf2aa2385aab7286063c96444526.
> 
> Leon says:
>   We already discussed that. SMC should be changed to use
>   RDMA CQ pool API
>   drivers/infiniband/core/cq.c.
>   ib_poll_handler() has a much better implementation (tracing,
>   IRQ rescheduling, proper error handling) than this SMC variant.
> 
> Since we will switch to ib_poll_handler() in the future,
> revert this patch.
> 
> Link: https://lore.kernel.org/netdev/20220301105332.GA9417@linux.alibaba.com/
> Suggested-by: Leon Romanovsky <leon@kernel.org>
> Suggested-by: Karsten Graul <kgraul@linux.ibm.com>
> Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
> ---
>  net/smc/smc_wr.c | 49 +++++++++++++++++++++---------------------------
>  1 file changed, 21 insertions(+), 28 deletions(-)
> 

Thanks,
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
patchwork-bot+netdevbpf@kernel.org March 6, 2022, 11 a.m. UTC | #2
Hello:

This patch was applied to netdev/net-next.git (master)
by David S. Miller <davem@davemloft.net>:

On Fri,  4 Mar 2022 17:17:19 +0800 you wrote:
> This reverts commit a505cce6f7cfaf2aa2385aab7286063c96444526.
> 
> Leon says:
>   We already discussed that. SMC should be changed to use
>   RDMA CQ pool API
>   drivers/infiniband/core/cq.c.
>   ib_poll_handler() has a much better implementation (tracing,
>   IRQ rescheduling, proper error handling) than this SMC variant.
> 
> [...]

Here is the summary with links:
  - [net-next] Revert "net/smc: don't req_notify until all CQEs drained"
    https://git.kernel.org/netdev/net-next/c/925a24213b5c

You are awesome, thank you!

Patch

diff --git a/net/smc/smc_wr.c b/net/smc/smc_wr.c
index 34d616406d51..24be1d03fef9 100644
--- a/net/smc/smc_wr.c
+++ b/net/smc/smc_wr.c
@@ -137,28 +137,25 @@  static void smc_wr_tx_tasklet_fn(struct tasklet_struct *t)
 {
 	struct smc_ib_device *dev = from_tasklet(dev, t, send_tasklet);
 	struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
-	int i, rc;
+	int i = 0, rc;
+	int polled = 0;
 
 again:
+	polled++;
 	do {
 		memset(&wc, 0, sizeof(wc));
 		rc = ib_poll_cq(dev->roce_cq_send, SMC_WR_MAX_POLL_CQE, wc);
+		if (polled == 1) {
+			ib_req_notify_cq(dev->roce_cq_send,
+					 IB_CQ_NEXT_COMP |
+					 IB_CQ_REPORT_MISSED_EVENTS);
+		}
+		if (!rc)
+			break;
 		for (i = 0; i < rc; i++)
 			smc_wr_tx_process_cqe(&wc[i]);
-		if (rc < SMC_WR_MAX_POLL_CQE)
-			/* If < SMC_WR_MAX_POLL_CQE, the CQ should have been
-			 * drained, no need to poll again. --Guangguan Wang
-			 */
-			break;
 	} while (rc > 0);
-
-	/* IB_CQ_REPORT_MISSED_EVENTS make sure if ib_req_notify_cq() returns
-	 * 0, it is safe to wait for the next event.
-	 * Else we must poll the CQ again to make sure we won't miss any event
-	 */
-	if (ib_req_notify_cq(dev->roce_cq_send,
-			     IB_CQ_NEXT_COMP |
-			     IB_CQ_REPORT_MISSED_EVENTS))
+	if (polled == 1)
 		goto again;
 }
 
@@ -481,28 +478,24 @@  static void smc_wr_rx_tasklet_fn(struct tasklet_struct *t)
 {
 	struct smc_ib_device *dev = from_tasklet(dev, t, recv_tasklet);
 	struct ib_wc wc[SMC_WR_MAX_POLL_CQE];
+	int polled = 0;
 	int rc;
 
 again:
+	polled++;
 	do {
 		memset(&wc, 0, sizeof(wc));
 		rc = ib_poll_cq(dev->roce_cq_recv, SMC_WR_MAX_POLL_CQE, wc);
-		if (rc > 0)
-			smc_wr_rx_process_cqes(&wc[0], rc);
-		if (rc < SMC_WR_MAX_POLL_CQE)
-			/* If < SMC_WR_MAX_POLL_CQE, the CQ should have been
-			 * drained, no need to poll again. --Guangguan Wang
-			 */
+		if (polled == 1) {
+			ib_req_notify_cq(dev->roce_cq_recv,
+					 IB_CQ_SOLICITED_MASK
+					 | IB_CQ_REPORT_MISSED_EVENTS);
+		}
+		if (!rc)
 			break;
+		smc_wr_rx_process_cqes(&wc[0], rc);
 	} while (rc > 0);
-
-	/* IB_CQ_REPORT_MISSED_EVENTS make sure if ib_req_notify_cq() returns
-	 * 0, it is safe to wait for the next event.
-	 * Else we must poll the CQ again to make sure we won't miss any event
-	 */
-	if (ib_req_notify_cq(dev->roce_cq_recv,
-			     IB_CQ_SOLICITED_MASK |
-			     IB_CQ_REPORT_MISSED_EVENTS))
+	if (polled == 1)
 		goto again;
 }