From patchwork Wed Apr 30 19:31:21 2014
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 4095881
From: Chuck Lever
Subject: [PATCH V3 13/17] xprtrdma: Reduce calls to ib_poll_cq() in completion handlers
To: linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org
Cc: Anna.Schumaker@netapp.com
Date: Wed, 30 Apr 2014 15:31:21 -0400
Message-ID: <20140430193121.5663.62284.stgit@manet.1015granger.net>
In-Reply-To: <20140430191433.5663.16217.stgit@manet.1015granger.net>
References: <20140430191433.5663.16217.stgit@manet.1015granger.net>
User-Agent: StGIT/0.14.3
X-Mailing-List: linux-rdma@vger.kernel.org

Change the completion handlers to grab up to 16 completions per ib_poll_cq() call. If fewer than 16 are returned, no extra ib_poll_cq() call is needed to detect that the queue is drained.
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/verbs.c     |   56 ++++++++++++++++++++++++++-------------
 net/sunrpc/xprtrdma/xprt_rdma.h |    4 +++
 2 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index c7d5281..b8caee9 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -162,14 +162,23 @@ rpcrdma_sendcq_process_wc(struct ib_wc *wc)
 }
 
 static int
-rpcrdma_sendcq_poll(struct ib_cq *cq)
+rpcrdma_sendcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
 {
-	struct ib_wc wc;
-	int rc;
+	struct ib_wc *wcs;
+	int count, rc;
 
-	while ((rc = ib_poll_cq(cq, 1, &wc)) == 1)
-		rpcrdma_sendcq_process_wc(&wc);
-	return rc;
+	do {
+		wcs = ep->rep_send_wcs;
+
+		rc = ib_poll_cq(cq, RPCRDMA_POLLSIZE, wcs);
+		if (rc <= 0)
+			return rc;
+
+		count = rc;
+		while (count-- > 0)
+			rpcrdma_sendcq_process_wc(wcs++);
+	} while (rc == RPCRDMA_POLLSIZE);
+	return 0;
 }
 
 /*
@@ -183,9 +192,10 @@ rpcrdma_sendcq_poll(struct ib_cq *cq)
 static void
 rpcrdma_sendcq_upcall(struct ib_cq *cq, void *cq_context)
 {
+	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
 	int rc;
 
-	rc = rpcrdma_sendcq_poll(cq);
+	rc = rpcrdma_sendcq_poll(cq, ep);
 	if (rc) {
 		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
 			__func__, rc);
@@ -202,7 +212,7 @@ rpcrdma_sendcq_upcall(struct ib_cq *cq, void *cq_context)
 		return;
 	}
 
-	rpcrdma_sendcq_poll(cq);
+	rpcrdma_sendcq_poll(cq, ep);
 }
 
 static void
@@ -241,14 +251,23 @@ out_schedule:
 }
 
 static int
-rpcrdma_recvcq_poll(struct ib_cq *cq)
+rpcrdma_recvcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
 {
-	struct ib_wc wc;
-	int rc;
+	struct ib_wc *wcs;
+	int count, rc;
 
-	while ((rc = ib_poll_cq(cq, 1, &wc)) == 1)
-		rpcrdma_recvcq_process_wc(&wc);
-	return rc;
+	do {
+		wcs = ep->rep_recv_wcs;
+
+		rc = ib_poll_cq(cq, RPCRDMA_POLLSIZE, wcs);
+		if (rc <= 0)
+			return rc;
+
+		count = rc;
+		while (count-- > 0)
+			rpcrdma_recvcq_process_wc(wcs++);
+	} while (rc == RPCRDMA_POLLSIZE);
+	return 0;
 }
 
 /*
@@ -266,9 +285,10 @@ rpcrdma_recvcq_poll(struct ib_cq *cq)
 static void
 rpcrdma_recvcq_upcall(struct ib_cq *cq, void *cq_context)
 {
+	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
 	int rc;
 
-	rc = rpcrdma_recvcq_poll(cq);
+	rc = rpcrdma_recvcq_poll(cq, ep);
 	if (rc) {
 		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
 			__func__, rc);
@@ -285,7 +305,7 @@ rpcrdma_recvcq_upcall(struct ib_cq *cq, void *cq_context)
 		return;
 	}
 
-	rpcrdma_recvcq_poll(cq);
+	rpcrdma_recvcq_poll(cq, ep);
 }
 
 #ifdef RPC_DEBUG
@@ -721,7 +741,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 	INIT_DELAYED_WORK(&ep->rep_connect_worker, rpcrdma_connect_worker);
 
 	sendcq = ib_create_cq(ia->ri_id->device, rpcrdma_sendcq_upcall,
-				  rpcrdma_cq_async_error_upcall, NULL,
+				  rpcrdma_cq_async_error_upcall, ep,
 				  ep->rep_attr.cap.max_send_wr + 1, 0);
 	if (IS_ERR(sendcq)) {
 		rc = PTR_ERR(sendcq);
@@ -738,7 +758,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 	}
 
 	recvcq = ib_create_cq(ia->ri_id->device, rpcrdma_recvcq_upcall,
-				  rpcrdma_cq_async_error_upcall, NULL,
+				  rpcrdma_cq_async_error_upcall, ep,
 				  ep->rep_attr.cap.max_recv_wr + 1, 0);
 	if (IS_ERR(recvcq)) {
 		rc = PTR_ERR(recvcq);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 334ab6e..cb4c882 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -74,6 +74,8 @@ struct rpcrdma_ia {
  * RDMA Endpoint -- one per transport instance
  */
 
+#define RPCRDMA_POLLSIZE	(16)
+
 struct rpcrdma_ep {
 	atomic_t		rep_cqcount;
 	int			rep_cqinit;
@@ -88,6 +90,8 @@ struct rpcrdma_ep {
 	struct rdma_conn_param	rep_remote_cma;
 	struct sockaddr_storage	rep_remote_addr;
 	struct delayed_work	rep_connect_worker;
+	struct ib_wc		rep_send_wcs[RPCRDMA_POLLSIZE];
+	struct ib_wc		rep_recv_wcs[RPCRDMA_POLLSIZE];
 };
 
 #define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)