
[v2,02/10] xprtrdma: Cap req_cqinit

Message ID 20141109011420.8806.1849.stgit@manet.1015granger.net (mailing list archive)
State New, archived
Headers show

Commit Message

Chuck Lever Nov. 9, 2014, 1:14 a.m. UTC
Recent work made FRMR registration and invalidation completions
unsignaled. This greatly reduces the adapter interrupt rate.

Every so often, however, a posted send Work Request is allowed to
signal. Otherwise, the provider's Work Queue will wrap and the
workload will hang.

The number of Work Requests that are allowed to remain unsignaled is
determined by the value of rep_cqinit. Currently, this is set to the
size of the send Work Queue divided by two, minus 1.

For FRMR, the send Work Queue is the maximum number of concurrent
RPCs (currently 32) times the maximum number of Work Requests an
RPC might use (currently 7, though some adapters may need more).

For mlx4, this is 224 entries. This leaves completion signaling
disabled for 111 send Work Requests.

Some providers hold back dispatching Work Requests until a CQE is
generated.  If completions are disabled, then no CQEs are generated
for quite some time, and that can stall the Work Queue.

I've seen this occur running xfstests generic/113 over NFSv4, where
eventually, posting a FAST_REG_MR Work Request fails with -ENOMEM
because the Work Queue has overflowed. The connection is dropped
and re-established.

Cap the rep_cqinit setting so completion signaling is not left
disabled for too long.

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=269
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
---
 net/sunrpc/xprtrdma/verbs.c     |    4 +++-
 net/sunrpc/xprtrdma/xprt_rdma.h |    6 ++++++
 2 files changed, 9 insertions(+), 1 deletion(-)


--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Sagi Grimberg Nov. 9, 2014, 10:13 a.m. UTC | #1
On 11/9/2014 3:14 AM, Chuck Lever wrote:
> Recent work made FRMR registration and invalidation completions
> unsignaled. This greatly reduces the adapter interrupt rate.
>
> Every so often, however, a posted send Work Request is allowed to
> signal. Otherwise, the provider's Work Queue will wrap and the
> workload will hang.
>
> The number of Work Requests that are allowed to remain unsignaled is
> determined by the value of rep_cqinit. Currently, this is set to the
> size of the send Work Queue divided by two, minus 1.
>
> For FRMR, the send Work Queue is the maximum number of concurrent
> RPCs (currently 32) times the maximum number of Work Requests an
> RPC might use (currently 7, though some adapters may need more).
>
> For mlx4, this is 224 entries. This leaves completion signaling
> disabled for 111 send Work Requests.
>
> Some providers hold back dispatching Work Requests until a CQE is
> generated.  If completions are disabled, then no CQEs are generated
> for quite some time, and that can stall the Work Queue.
>
> I've seen this occur running xfstests generic/113 over NFSv4, where
> eventually, posting a FAST_REG_MR Work Request fails with -ENOMEM
> because the Work Queue has overflowed. The connection is dropped
> and re-established.

Hey Chuck,

As you know, I've seen this issue too...
Looking into this is definitely on my todo list.

Does this happen if you run a simple dd (single request-response inflight)?

Sagi.
Chuck Lever Nov. 9, 2014, 9:43 p.m. UTC | #2
On Nov 9, 2014, at 5:13 AM, Sagi Grimberg <sagig@dev.mellanox.co.il> wrote:

> On 11/9/2014 3:14 AM, Chuck Lever wrote:
>> Recent work made FRMR registration and invalidation completions
>> unsignaled. This greatly reduces the adapter interrupt rate.
>> 
>> Every so often, however, a posted send Work Request is allowed to
>> signal. Otherwise, the provider's Work Queue will wrap and the
>> workload will hang.
>> 
>> The number of Work Requests that are allowed to remain unsignaled is
>> determined by the value of rep_cqinit. Currently, this is set to the
>> size of the send Work Queue divided by two, minus 1.
>> 
>> For FRMR, the send Work Queue is the maximum number of concurrent
>> RPCs (currently 32) times the maximum number of Work Requests an
>> RPC might use (currently 7, though some adapters may need more).
>> 
>> For mlx4, this is 224 entries. This leaves completion signaling
>> disabled for 111 send Work Requests.
>> 
>> Some providers hold back dispatching Work Requests until a CQE is
>> generated.  If completions are disabled, then no CQEs are generated
>> for quite some time, and that can stall the Work Queue.
>> 
>> I've seen this occur running xfstests generic/113 over NFSv4, where
>> eventually, posting a FAST_REG_MR Work Request fails with -ENOMEM
>> because the Work Queue has overflowed. The connection is dropped
>> and re-established.
> 
> Hey Chuck,
> 
> As you know, I've seen this issue too...
> Looking into this is definitely on my todo list.
> 
> Does this happen if you run a simple dd (single request-response inflight)?

Hi Sagi-

I typically run dbench, iozone, and xfstests when preparing patches for
upstream. The generic/113 test I mention in the patch description is the
only test where I saw this issue. I expect single-thread won’t drive
enough Work Queue activity to push the provider into WQ overflow.

--
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




Patch

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 6ea2942..af45cf3 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -733,7 +733,9 @@  rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 
 	/* set trigger for requesting send completion */
 	ep->rep_cqinit = ep->rep_attr.cap.max_send_wr/2 - 1;
-	if (ep->rep_cqinit <= 2)
+	if (ep->rep_cqinit > RPCRDMA_MAX_UNSIGNALED_SENDS)
+		ep->rep_cqinit = RPCRDMA_MAX_UNSIGNALED_SENDS;
+	else if (ep->rep_cqinit <= 2)
 		ep->rep_cqinit = 0;
 	INIT_CQCOUNT(ep);
 	ep->rep_ia = ia;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index ac7fc9a..b799041 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -97,6 +97,12 @@  struct rpcrdma_ep {
 	struct ib_wc		rep_recv_wcs[RPCRDMA_POLLSIZE];
 };
 
+/*
+ * Force a signaled SEND Work Request every so often,
+ * in case the provider needs to do some housekeeping.
+ */
+#define RPCRDMA_MAX_UNSIGNALED_SENDS	(32)
+
 #define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)
 #define DECR_CQCOUNT(ep) atomic_sub_return(1, &(ep)->rep_cqcount)