From patchwork Fri Oct 20 14:48:45 2017
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 10020471
Subject: [PATCH 9/9] xprtrdma: Remove atomic send completion counting
From: Chuck Lever
To: anna.schumaker@netapp.com
Cc: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Fri, 20 Oct 2017 10:48:45 -0400
Message-ID: <20171020144844.14869.66895.stgit@manet.1015granger.net>
In-Reply-To: <20171020143635.14869.15714.stgit@manet.1015granger.net>
References: <20171020143635.14869.15714.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-dirty

The sendctx circular queue now guarantees that xprtrdma cannot overflow
the Send Queue, so remove the remaining bits of the original Send WQE
counting mechanism.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/frwr_ops.c  |    8 --------
 net/sunrpc/xprtrdma/verbs.c     |    4 ----
 net/sunrpc/xprtrdma/xprt_rdma.h |   21 ---------------------
 3 files changed, 33 deletions(-)

diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
index 3053fb0..404166a 100644
--- a/net/sunrpc/xprtrdma/frwr_ops.c
+++ b/net/sunrpc/xprtrdma/frwr_ops.c
@@ -419,7 +419,6 @@
 			IB_ACCESS_REMOTE_WRITE | IB_ACCESS_LOCAL_WRITE :
 			IB_ACCESS_REMOTE_READ;
 
-	rpcrdma_set_signaled(&r_xprt->rx_ep, &reg_wr->wr);
 	rc = ib_post_send(ia->ri_id->qp, &reg_wr->wr, &bad_wr);
 	if (rc)
 		goto out_senderr;
@@ -507,12 +506,6 @@
 	f->fr_cqe.done = frwr_wc_localinv_wake;
 	reinit_completion(&f->fr_linv_done);
 
-	/* Initialize CQ count, since there is always a signaled
-	 * WR being posted here.  The new cqcount depends on how
-	 * many SQEs are about to be consumed.
-	 */
-	rpcrdma_init_cqcount(&r_xprt->rx_ep, count);
-
 	/* Transport disconnect drains the receive CQ before it
 	 * replaces the QP. The RPC reply handler won't call us
 	 * unless ri_id->qp is a valid pointer.
@@ -545,7 +538,6 @@
 	/* Find and reset the MRs in the LOCAL_INV WRs that did not
 	 * get posted.
 	 */
-	rpcrdma_init_cqcount(&r_xprt->rx_ep, -count);
 	while (bad_wr) {
 		f = container_of(bad_wr, struct rpcrdma_frmr,
 				 fr_invwr);
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 9a824fe..22128a8 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -553,10 +553,6 @@
 	ep->rep_send_batch = min_t(unsigned int, RPCRDMA_MAX_SEND_BATCH,
 				   cdata->max_requests >> 2);
 	ep->rep_send_count = ep->rep_send_batch;
-	ep->rep_cqinit = ep->rep_attr.cap.max_send_wr/2 - 1;
-	if (ep->rep_cqinit <= 2)
-		ep->rep_cqinit = 0;	/* always signal? */
-	rpcrdma_init_cqcount(ep, 0);
 	init_waitqueue_head(&ep->rep_connect_wait);
 	INIT_DELAYED_WORK(&ep->rep_connect_worker,
 			  rpcrdma_connect_worker);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index bccd5d8..6e64c82 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -95,8 +95,6 @@ enum {
 struct rpcrdma_ep {
 	unsigned int		rep_send_count;
 	unsigned int		rep_send_batch;
-	atomic_t		rep_cqcount;
-	int			rep_cqinit;
 	int			rep_connected;
 	struct ib_qp_init_attr	rep_attr;
 	wait_queue_head_t	rep_connect_wait;
@@ -106,25 +104,6 @@ struct rpcrdma_ep {
 	struct delayed_work	rep_connect_worker;
 };
 
-static inline void
-rpcrdma_init_cqcount(struct rpcrdma_ep *ep, int count)
-{
-	atomic_set(&ep->rep_cqcount, ep->rep_cqinit - count);
-}
-
-/* To update send queue accounting, provider must take a
- * send completion every now and then.
- */
-static inline void
-rpcrdma_set_signaled(struct rpcrdma_ep *ep, struct ib_send_wr *send_wr)
-{
-	send_wr->send_flags = 0;
-	if (unlikely(atomic_sub_return(1, &ep->rep_cqcount) <= 0)) {
-		rpcrdma_init_cqcount(ep, 0);
-		send_wr->send_flags = IB_SEND_SIGNALED;
-	}
-}
-
 /* Pre-allocate extra Work Requests for handling backward receives
  * and sends. This is a fixed value because the Work Queues are
  * allocated when the forward channel is set up.