From patchwork Wed Nov 9 19:05:13 2016
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 9420045
Subject: [PATCH v1 03/14] xprtrdma: Make FRWR send queue entry accounting more accurate
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Wed, 09 Nov 2016 14:05:13 -0500
Message-ID: <20161109190513.15007.7060.stgit@manet.1015granger.net>
In-Reply-To: <20161109184735.15007.96507.stgit@manet.1015granger.net>
References: <20161109184735.15007.96507.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-dirty

Verbs providers may perform housekeeping on a send queue during each
signaled send completion. It is therefore necessary for a verbs consumer
(like xprtrdma) to occasionally force a signaled send completion if it
runs unsignaled some of the time.

xprtrdma does not need signaled completions for Send or FastReg Work
Requests. So it forces a signal about halfway through the send queue by
counting the number of Send Queue Entries (SQEs) it consumes. It
currently does this by counting each ib_post_send as one SQE.

Commit c9918ff56dfb ("xprtrdma: Add ro_unmap_sync method for FRWR")
introduced the ability for frwr_op_unmap_sync to post more than one WR
with a single ib_post_send. Thus the underlying assumption of one WR per
ib_post_send is no longer true.

Also, FastReg is currently never signaled. It should be signaled once in
a while to keep the accounting of consumed SQEs accurate.

While we're here, convert the CQCOUNT macros to inline functions, the
currently preferred kernel coding style.

Fixes: c9918ff56dfb ("xprtrdma: Add ro_unmap_sync method for FRWR")
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/frwr_ops.c  |   13 ++++++++++---
 net/sunrpc/xprtrdma/verbs.c     |   10 ++--------
 net/sunrpc/xprtrdma/xprt_rdma.h |   20 ++++++++++++++++++--
 3 files changed, 30 insertions(+), 13 deletions(-)
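For reviewers who want to see the accounting scheme in isolation, here is
a minimal, self-contained userspace sketch of the idea described above.
It is not xprtrdma code: fake_ep, init_cqcount, set_signaled, and
SEND_SIGNALED are illustrative stand-ins for rpcrdma_ep,
rpcrdma_init_cqcount(), rpcrdma_set_signaled(), and IB_SEND_SIGNALED.

/* Standalone sketch (not kernel code): roughly every half send queue's
 * worth of unsignaled posts, one Work Request is posted signaled so the
 * provider can reap completed Send Queue Entries. Names are illustrative.
 */
#include <stdio.h>

#define SEND_SIGNALED 0x1		/* stand-in for IB_SEND_SIGNALED */

struct fake_ep {
	int cqinit;			/* ~max_send_wr / 2 - 1 */
	int cqcount;			/* SQEs left before a forced signal */
};

/* Re-arm the countdown; "count" is the number of SQEs the signaled
 * post itself consumes (0 when posting a single WR).
 */
static void init_cqcount(struct fake_ep *ep, int count)
{
	ep->cqcount = ep->cqinit - count;
}

/* Decide whether this WR must be signaled; returns the send flags. */
static int set_signaled(struct fake_ep *ep)
{
	if (--ep->cqcount <= 0) {
		init_cqcount(ep, 0);
		return SEND_SIGNALED;
	}
	return 0;
}

int main(void)
{
	struct fake_ep ep = { .cqinit = 4 };	/* pretend max_send_wr is 10 */

	init_cqcount(&ep, 0);
	for (int wr = 1; wr <= 10; wr++)
		printf("WR %2d: %s\n", wr,
		       set_signaled(&ep) & SEND_SIGNALED ?
		       "signaled" : "unsignaled");
	return 0;
}

The actual helpers added by the patch below do the same countdown with
atomic_sub_return() on ep->rep_cqcount and set IB_SEND_SIGNALED directly
on the WR being posted.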
diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 26b26be..adbf52c 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -421,7 +421,7 @@
 			 IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
 			 IB_ACCESS_REMOTE_READ;
 
-	DECR_CQCOUNT(&r_xprt->rx_ep);
+	rpcrdma_set_signaled(&r_xprt->rx_ep, &reg_wr->wr);
 	rc = ib_post_send(ia->ri_id->qp, &reg_wr->wr, &bad_wr);
 	if (rc)
 		goto out_senderr;
@@ -486,7 +486,7 @@
 	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
 	struct rpcrdma_mw *mw, *tmp;
 	struct rpcrdma_frmr *f;
-	int rc;
+	int count, rc;
 
 	dprintk("RPC: %s: req %p\n", __func__, req);
@@ -496,6 +496,7 @@
 	 * a single ib_post_send() call.
 	 */
 	f = NULL;
+	count = 0;
 	invalidate_wrs = pos = prev = NULL;
 	list_for_each_entry(mw, &req->rl_registered, mw_list) {
 		if ((rep->rr_wc_flags & IB_WC_WITH_INVALIDATE) &&
@@ -505,6 +506,7 @@
 		}
 
 		pos = __frwr_prepare_linv_wr(mw);
+		count++;
 
 		if (!invalidate_wrs)
 			invalidate_wrs = pos;
@@ -523,7 +525,12 @@
 	f->fr_invwr.send_flags = IB_SEND_SIGNALED;
 	f->fr_cqe.done = frwr_wc_localinv_wake;
 	reinit_completion(&f->fr_linv_done);
-	INIT_CQCOUNT(&r_xprt->rx_ep);
+
+	/* Initialize CQ count, since there is always a signaled
+	 * WR being posted here. The new cqcount depends on how
+	 * many SQEs are about to be consumed.
+	 */
+	rpcrdma_init_cqcount(&r_xprt->rx_ep, count);
 
 	/* Transport disconnect drains the receive CQ before it
 	 * replaces the QP. The RPC reply handler won't call us
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index ec74289..451f5f2 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -532,7 +532,7 @@ static void rpcrdma_destroy_id(struct rdma_cm_id *id)
 	ep->rep_cqinit = ep->rep_attr.cap.max_send_wr/2 - 1;
 	if (ep->rep_cqinit <= 2)
 		ep->rep_cqinit = 0;	/* always signal? */
-	INIT_CQCOUNT(ep);
+	rpcrdma_init_cqcount(ep, 0);
 
 	init_waitqueue_head(&ep->rep_connect_wait);
 	INIT_DELAYED_WORK(&ep->rep_connect_worker, rpcrdma_connect_worker);
@@ -1311,13 +1311,7 @@ struct rpcrdma_regbuf *
 	dprintk("RPC: %s: posting %d s/g entries\n",
 		__func__, send_wr->num_sge);
 
-	if (DECR_CQCOUNT(ep) > 0)
-		send_wr->send_flags = 0;
-	else { /* Provider must take a send completion every now and then */
-		INIT_CQCOUNT(ep);
-		send_wr->send_flags = IB_SEND_SIGNALED;
-	}
-
+	rpcrdma_set_signaled(ep, send_wr);
 	rc = ib_post_send(ia->ri_id->qp, send_wr, &send_wr_fail);
 	if (rc)
 		goto out_postsend_err;
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 6e1bba3..f6ae1b2 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -95,8 +95,24 @@ struct rpcrdma_ep {
 	struct delayed_work	rep_connect_worker;
 };
 
-#define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)
-#define DECR_CQCOUNT(ep) atomic_sub_return(1, &(ep)->rep_cqcount)
+static inline void
+rpcrdma_init_cqcount(struct rpcrdma_ep *ep, int count)
+{
+	atomic_set(&ep->rep_cqcount, ep->rep_cqinit - count);
+}
+
+/* To update send queue accounting, provider must take a
+ * send completion every now and then.
+ */
+static inline void
+rpcrdma_set_signaled(struct rpcrdma_ep *ep, struct ib_send_wr *send_wr)
+{
+	send_wr->send_flags = 0;
+	if (unlikely(atomic_sub_return(1, &ep->rep_cqcount) <= 0)) {
+		rpcrdma_init_cqcount(ep, 0);
+		send_wr->send_flags = IB_SEND_SIGNALED;
+	}
+}
 
 /* Pre-allocate extra Work Requests for handling backward receives
  * and sends. This is a fixed value because the Work Queues are
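A closing note on why the count argument matters: under the old macros,
the unmap_sync path debited the countdown by one per ib_post_send even
though the chained post could consume several SQEs, so the accounting of
consumed SQEs drifted and the forced signal could arrive later than
intended. A toy comparison of the two schemes, with illustrative numbers
only (not kernel code):

/* Standalone sketch contrasting old and new accounting when a chain of
 * WRs goes out through one ib_post_send(): the old scheme debits one SQE
 * per post, the new scheme debits one per WR. Numbers are illustrative.
 */
#include <stdio.h>

#define CQINIT 4			/* ~max_send_wr / 2 - 1 */

int main(void)
{
	int chain = 3;			/* e.g. three LOCAL_INV WRs in one post */
	int old_cqcount = CQINIT - 1;	/* old: one post counted as one SQE */
	int new_cqcount = CQINIT - chain;	/* new: every WR counted as an SQE */

	printf("old accounting thinks %d SQEs remain; new accounting %d\n",
	       old_cqcount, new_cqcount);
	return 0;
}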