From: Chuck Lever
Subject: [PATCH v5 13/24] xprtrdma: Reduce calls to ib_poll_cq() in completion handlers
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Anna.Schumaker@netapp.com
Date: Wed, 28 May 2014 10:33:42 -0400
Message-ID: <20140528143342.23214.16257.stgit@manet.1015granger.net>
In-Reply-To: <20140528142521.23214.39655.stgit@manet.1015granger.net>
References: <20140528142521.23214.39655.stgit@manet.1015granger.net>

Change the completion handlers to grab up to 16 work completions per
ib_poll_cq() call. When a poll returns fewer than 16 items, the CQ has
been drained, so no extra ib_poll_cq() call is needed.
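In outline, the batched polling loop looks like this (an annotated
restatement of the send-side path from the diff below; the receive side
is identical except that it uses ep->rep_recv_wcs and
rpcrdma_recvcq_process_wc()):

	static int
	rpcrdma_sendcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
	{
		struct ib_wc *wcs;
		int count, rc;

		do {
			/* Reuse a per-endpoint WC array instead of a
			 * single on-stack struct ib_wc. */
			wcs = ep->rep_send_wcs;

			/* Grab up to RPCRDMA_POLLSIZE (16) completions
			 * in one call. */
			rc = ib_poll_cq(cq, RPCRDMA_POLLSIZE, wcs);
			if (rc <= 0)
				return rc;	/* CQ empty, or error */

			count = rc;
			while (count-- > 0)
				rpcrdma_sendcq_process_wc(wcs++);

			/* A full batch means more completions may be
			 * waiting; a short batch means the CQ is
			 * already drained. */
		} while (rc == RPCRDMA_POLLSIZE);
		return 0;
	}

The cost is two 16-entry struct ib_wc arrays per endpoint; in exchange,
a burst of completions is handled with one ib_poll_cq() call instead of
one call per work completion.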
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/verbs.c     |   56 ++++++++++++++++++++++++++-------------
 net/sunrpc/xprtrdma/xprt_rdma.h |    4 +++
 2 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index c7d5281..b8caee9 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -162,14 +162,23 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
 }
 
 static int
-rpcrdma_sendcq_poll(struct ib_cq *cq)
+rpcrdma_sendcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
 {
-	struct ib_wc wc;
-	int rc;
+	struct ib_wc *wcs;
+	int count, rc;
 
-	while ((rc = ib_poll_cq(cq, 1, &wc)) == 1)
-		rpcrdma_sendcq_process_wc(&wc);
-	return rc;
+	do {
+		wcs = ep->rep_send_wcs;
+
+		rc = ib_poll_cq(cq, RPCRDMA_POLLSIZE, wcs);
+		if (rc <= 0)
+			return rc;
+
+		count = rc;
+		while (count-- > 0)
+			rpcrdma_sendcq_process_wc(wcs++);
+	} while (rc == RPCRDMA_POLLSIZE);
+	return 0;
 }
 
 /*
@@ -183,9 +192,10 @@ rpcrdma_sendcq_poll(struct ib_cq *cq)
 static void
 rpcrdma_sendcq_upcall(struct ib_cq *cq, void *cq_context)
 {
+	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
 	int rc;
 
-	rc = rpcrdma_sendcq_poll(cq);
+	rc = rpcrdma_sendcq_poll(cq, ep);
 	if (rc) {
 		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
 			__func__, rc);
@@ -202,7 +212,7 @@ rpcrdma_sendcq_upcall(struct ib_cq *cq, void *cq_context)
 		return;
 	}
 
-	rpcrdma_sendcq_poll(cq);
+	rpcrdma_sendcq_poll(cq, ep);
 }
 
 static void
@@ -241,14 +251,23 @@ out_schedule:
 }
 
 static int
-rpcrdma_recvcq_poll(struct ib_cq *cq)
+rpcrdma_recvcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
 {
-	struct ib_wc wc;
-	int rc;
+	struct ib_wc *wcs;
+	int count, rc;
 
-	while ((rc = ib_poll_cq(cq, 1, &wc)) == 1)
-		rpcrdma_recvcq_process_wc(&wc);
-	return rc;
+	do {
+		wcs = ep->rep_recv_wcs;
+
+		rc = ib_poll_cq(cq, RPCRDMA_POLLSIZE, wcs);
+		if (rc <= 0)
+			return rc;
+
+		count = rc;
+		while (count-- > 0)
+			rpcrdma_recvcq_process_wc(wcs++);
+	} while (rc == RPCRDMA_POLLSIZE);
+	return 0;
 }
 
 /*
@@ -266,9 +285,10 @@ rpcrdma_recvcq_poll(struct ib_cq *cq)
 static void
 rpcrdma_recvcq_upcall(struct ib_cq *cq, void *cq_context)
 {
+	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
 	int rc;
 
-	rc = rpcrdma_recvcq_poll(cq);
+	rc = rpcrdma_recvcq_poll(cq, ep);
 	if (rc) {
 		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
 			__func__, rc);
@@ -285,7 +305,7 @@ rpcrdma_recvcq_upcall(struct ib_cq *cq, void *cq_context)
 		return;
 	}
 
-	rpcrdma_recvcq_poll(cq);
+	rpcrdma_recvcq_poll(cq, ep);
 }
 
 #ifdef RPC_DEBUG
@@ -721,7 +741,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 	INIT_DELAYED_WORK(&ep->rep_connect_worker, rpcrdma_connect_worker);
 
 	sendcq = ib_create_cq(ia->ri_id->device, rpcrdma_sendcq_upcall,
-				  rpcrdma_cq_async_error_upcall, NULL,
+				  rpcrdma_cq_async_error_upcall, ep,
 				  ep->rep_attr.cap.max_send_wr + 1, 0);
 	if (IS_ERR(sendcq)) {
 		rc = PTR_ERR(sendcq);
@@ -738,7 +758,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 	}
 
 	recvcq = ib_create_cq(ia->ri_id->device, rpcrdma_recvcq_upcall,
-				  rpcrdma_cq_async_error_upcall, NULL,
+				  rpcrdma_cq_async_error_upcall, ep,
 				  ep->rep_attr.cap.max_recv_wr + 1, 0);
 	if (IS_ERR(recvcq)) {
 		rc = PTR_ERR(recvcq);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 334ab6e..cb4c882 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -74,6 +74,8 @@ struct rpcrdma_ia {
  * RDMA Endpoint -- one per transport instance
  */
 
+#define RPCRDMA_POLLSIZE	(16)
+
 struct rpcrdma_ep {
 	atomic_t		rep_cqcount;
 	int			rep_cqinit;
@@ -88,6 +90,8 @@ struct rpcrdma_ep {
 	struct rdma_conn_param	rep_remote_cma;
 	struct sockaddr_storage	rep_remote_addr;
 	struct delayed_work	rep_connect_worker;
+	struct ib_wc		rep_send_wcs[RPCRDMA_POLLSIZE];
+	struct ib_wc		rep_recv_wcs[RPCRDMA_POLLSIZE];
 };
 
 #define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)