From patchwork Wed Dec 16 22:23:20 2015
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 7866381
Subject: [PATCH v4 10/10] xprtrdma: Revert commit e7104a2a9606 ('xprtrdma:
 Cap req_cqinit').
From: Chuck Lever
To: anna.schumaker@netapp.com
Cc: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Wed, 16 Dec 2015 17:23:20 -0500
Message-ID: <20151216222320.13110.10891.stgit@manet.1015granger.net>
In-Reply-To: <20151216221444.13110.43538.stgit@manet.1015granger.net>
References: <20151216221444.13110.43538.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-dirty
X-Mailing-List: linux-nfs@vger.kernel.org

The root of the problem was that sends (especially unsignalled
FASTREG and LOCAL_INV Work Requests) were not properly flow-
controlled, which allowed a send queue overrun.

Now that the RPC/RDMA reply handler waits for invalidation to
complete, the send queue is properly flow-controlled. Thus this
limit is no longer necessary.
Signed-off-by: Chuck Lever
Tested-by: Devesh Sharma
---
 net/sunrpc/xprtrdma/verbs.c     |    6 ++----
 net/sunrpc/xprtrdma/xprt_rdma.h |    6 ------
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index f23f3d6..1867e3a 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -608,10 +608,8 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
 
 	/* set trigger for requesting send completion */
 	ep->rep_cqinit = ep->rep_attr.cap.max_send_wr/2 - 1;
-	if (ep->rep_cqinit > RPCRDMA_MAX_UNSIGNALED_SENDS)
-		ep->rep_cqinit = RPCRDMA_MAX_UNSIGNALED_SENDS;
-	else if (ep->rep_cqinit <= 2)
-		ep->rep_cqinit = 0;
+	if (ep->rep_cqinit <= 2)
+		ep->rep_cqinit = 0;	/* always signal? */
 	INIT_CQCOUNT(ep);
 	init_waitqueue_head(&ep->rep_connect_wait);
 	INIT_DELAYED_WORK(&ep->rep_connect_worker, rpcrdma_connect_worker);
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index b8bac41..a563ffc 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -87,12 +87,6 @@ struct rpcrdma_ep {
 	struct delayed_work	rep_connect_worker;
 };
 
-/*
- * Force a signaled SEND Work Request every so often,
- * in case the provider needs to do some housekeeping.
- */
-#define RPCRDMA_MAX_UNSIGNALED_SENDS	(32)
-
 #define INIT_CQCOUNT(ep) atomic_set(&(ep)->rep_cqcount, (ep)->rep_cqinit)
 #define DECR_CQCOUNT(ep) atomic_sub_return(1, &(ep)->rep_cqcount)
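
[Editor's note] For readers outside the xprtrdma code: the cap being reverted
limited how many Send Work Requests could be posted without requesting a
completion. The sketch below is a minimal user-space illustration of the
signalling policy that rep_cqinit and the INIT_CQCOUNT/DECR_CQCOUNT macros
(visible in the hunk above) implement, as I understand the posting path uses
them; it is not kernel code, and the struct, helper names, and the queue depth
of 128 are hypothetical stand-ins for fields of struct rpcrdma_ep.

/*
 * Illustrative sketch only: roughly one in every (rep_cqinit + 1)
 * Send Work Requests asks for a completion, so the provider still
 * sees periodic send completions without the transport having to
 * reap every one.
 */
#include <stdbool.h>
#include <stdio.h>

struct ep_sketch {
	int rep_cqinit;		/* signalling threshold */
	int rep_cqcount;	/* countdown to the next signalled send */
};

/* Mirrors the post-revert setup in rpcrdma_ep_create() */
static void ep_init(struct ep_sketch *ep, unsigned int max_send_wr)
{
	ep->rep_cqinit = max_send_wr / 2 - 1;
	if (ep->rep_cqinit <= 2)
		ep->rep_cqinit = 0;		/* tiny queue: signal every send */
	ep->rep_cqcount = ep->rep_cqinit;	/* like INIT_CQCOUNT() */
}

/* Returns true when the next Send WR should be posted signalled */
static bool next_send_is_signalled(struct ep_sketch *ep)
{
	if (--ep->rep_cqcount > 0)		/* like DECR_CQCOUNT() */
		return false;			/* unsignalled send */
	ep->rep_cqcount = ep->rep_cqinit;	/* reset and signal this one */
	return true;
}

int main(void)
{
	struct ep_sketch ep;
	int i, signalled = 0;

	ep_init(&ep, 128);	/* hypothetical send queue depth */
	for (i = 0; i < 256; i++)
		if (next_send_is_signalled(&ep))
			signalled++;
	printf("256 sends -> %d signalled completions\n", signalled);
	return 0;
}

With the cap gone, rep_cqinit again scales with the negotiated send queue
depth, so deep queues request proportionally fewer completions, while very
small queues (rep_cqinit <= 2) fall back to signalling every send.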