From patchwork Mon Nov 30 22:24:49 2015
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 7730721
Subject: [PATCH v2 3/7] svcrdma: Define maximum number of backchannel requests
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Mon, 30 Nov 2015 17:24:49 -0500
Message-ID: <20151130222449.13029.35573.stgit@klimt.1015granger.net>
In-Reply-To: <20151130222141.13029.98664.stgit@klimt.1015granger.net>
References: <20151130222141.13029.98664.stgit@klimt.1015granger.net>
User-Agent: StGit/0.17.1-dirty

Extra resources for handling backchannel requests have to be
pre-allocated when a transport instance is created. Set a limit.
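
For a rough sense of scale (an illustration, not part of the patch itself):
with the defaults introduced below, each connection reserves
RPCRDMA_MAX_BC_REQUESTS = 2 extra receive slots on top of the 32
forward-channel slots, so queue sizing grows only slightly. A minimal
standalone sketch of that arithmetic, assuming the default module
parameters and a device whose max_qp_wr is not the limiting factor:

/* sizing_sketch.c - illustration only, not part of the patch.
 * Reproduces the queue-sizing arithmetic from svc_rdma_accept()
 * using the default constants defined in svc_rdma.h. */
#include <stdio.h>

#define RPCRDMA_SQ_DEPTH_MULT	8
#define RPCRDMA_MAX_REQUESTS	32
#define RPCRDMA_MAX_BC_REQUESTS	2

int main(void)
{
	int sc_max_requests = RPCRDMA_MAX_REQUESTS;	/* assumes max_qp_wr >= 32 */
	int sc_max_bc_requests = RPCRDMA_MAX_BC_REQUESTS;
	int total_reqs = sc_max_requests + sc_max_bc_requests;

	printf("total receive slots: %d\n", total_reqs);	/* 34 */
	printf("send queue depth:    %d\n",
	       total_reqs * RPCRDMA_SQ_DEPTH_MULT);		/* 272 */
	return 0;
}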
Signed-off-by: Chuck Lever
---

 include/linux/sunrpc/svc_rdma.h          |    2 ++
 net/sunrpc/xprtrdma/svc_rdma_transport.c |   14 +++++++++-----
 2 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index cc69551..c189fbd 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -137,6 +137,7 @@ struct svcxprt_rdma {
 	int		     sc_max_requests;	/* Depth of RQ */
 	int		     sc_max_req_size;	/* Size of each RQ WR buf */
+	int		     sc_max_bc_requests;
 
 	struct ib_pd	     *sc_pd;
 
@@ -178,6 +179,7 @@ struct svcxprt_rdma {
 #define RPCRDMA_SQ_DEPTH_MULT	8
 #define RPCRDMA_MAX_REQUESTS	32
 #define RPCRDMA_MAX_REQ_SIZE	4096
+#define RPCRDMA_MAX_BC_REQUESTS	2
 
 #define RPCSVC_MAXPAYLOAD_RDMA	RPCSVC_MAXPAYLOAD
 
diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 94b8d4c..643402e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -541,6 +541,7 @@ static struct svcxprt_rdma *rdma_create_xprt(struct svc_serv *serv,
 	cma_xprt->sc_max_req_size = svcrdma_max_req_size;
 	cma_xprt->sc_max_requests = svcrdma_max_requests;
+	cma_xprt->sc_max_bc_requests = RPCRDMA_MAX_BC_REQUESTS;
 	cma_xprt->sc_sq_depth = svcrdma_max_requests * RPCRDMA_SQ_DEPTH_MULT;
 	atomic_set(&cma_xprt->sc_sq_count, 0);
 	atomic_set(&cma_xprt->sc_ctxt_used, 0);
@@ -897,6 +898,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	struct ib_device_attr devattr;
 	int uninitialized_var(dma_mr_acc);
 	int need_dma_mr = 0;
+	int total_reqs;
 	int ret;
 	int i;
 
@@ -932,8 +934,10 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	newxprt->sc_max_sge_rd = min_t(size_t, devattr.max_sge_rd,
 				       RPCSVC_MAXPAGES);
 	newxprt->sc_max_requests = min((size_t)devattr.max_qp_wr,
-				       (size_t)svcrdma_max_requests);
-	newxprt->sc_sq_depth = RPCRDMA_SQ_DEPTH_MULT * newxprt->sc_max_requests;
+				       (size_t)svcrdma_max_requests);
+	newxprt->sc_max_bc_requests = RPCRDMA_MAX_BC_REQUESTS;
+	total_reqs = newxprt->sc_max_requests + newxprt->sc_max_bc_requests;
+	newxprt->sc_sq_depth = total_reqs * RPCRDMA_SQ_DEPTH_MULT;
 
 	/*
 	 * Limit ORD based on client limit, local device limit, and
@@ -957,7 +961,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 		dprintk("svcrdma: error creating SQ CQ for connect request\n");
 		goto errout;
 	}
-	cq_attr.cqe = newxprt->sc_max_requests;
+	cq_attr.cqe = total_reqs;
 	newxprt->sc_rq_cq = ib_create_cq(newxprt->sc_cm_id->device,
 					 rq_comp_handler,
 					 cq_event_handler,
@@ -972,7 +976,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 	qp_attr.event_handler = qp_event_handler;
 	qp_attr.qp_context = &newxprt->sc_xprt;
 	qp_attr.cap.max_send_wr = newxprt->sc_sq_depth;
-	qp_attr.cap.max_recv_wr = newxprt->sc_max_requests;
+	qp_attr.cap.max_recv_wr = total_reqs;
 	qp_attr.cap.max_send_sge = newxprt->sc_max_sge;
 	qp_attr.cap.max_recv_sge = newxprt->sc_max_sge;
 	qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
@@ -1068,7 +1072,7 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
 			newxprt->sc_cm_id->device->local_dma_lkey;
 
 	/* Post receive buffers */
-	for (i = 0; i < newxprt->sc_max_requests; i++) {
+	for (i = 0; i < total_reqs; i++) {
 		ret = svc_rdma_post_recv(newxprt);
 		if (ret) {
 			dprintk("svcrdma: failure posting receive buffers\n");
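
To make the relationship explicit, here is a hedged sketch of the sizing
the accept path performs after this patch; the helper function is
hypothetical and only restates what the diff does inline, while the struct
fields and constant are the ones defined above.

/* Sketch only: restates the sizing performed inline in
 * svc_rdma_accept() after this patch.  The helper itself is
 * hypothetical; the fields and constant come from the diff. */
static unsigned int svc_rdma_total_reqs(struct svcxprt_rdma *newxprt)
{
	/* forward-channel receive slots plus the fixed backchannel allowance */
	return newxprt->sc_max_requests + newxprt->sc_max_bc_requests;
}

/* That same total feeds the RQ completion queue size,
 * qp_attr.cap.max_recv_wr, and the number of pre-posted receives,
 * while the send queue gets RPCRDMA_SQ_DEPTH_MULT times as many entries. */

Keeping sc_max_bc_requests separate from sc_max_requests leaves the
forward-channel request accounting untouched; only the queue and CQ sizing
grow by the fixed backchannel allowance.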