From patchwork Wed Jul 9 16:57:47 2014
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 4518441
Subject: [PATCH v2 09/21] xprtrdma: Chain together all MWs in same buffer pool
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Wed, 09 Jul 2014 12:57:47 -0400
Message-ID: <20140709165747.3496.70681.stgit@manet.1015granger.net>
In-Reply-To: <20140709163326.3496.37893.stgit@manet.1015granger.net>
References: <20140709163326.3496.37893.stgit@manet.1015granger.net>

During connection loss recovery, the transport needs to visit every MW in a
buffer pool. Any MW that is in use by an RPC will not be on the rb_mws free
list, so chain all MWs together on a second, all-inclusive list (rb_all) as
well.
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/verbs.c     |    4 ++++
 net/sunrpc/xprtrdma/xprt_rdma.h |    4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 14d24ec..aa14a36 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -1074,6 +1074,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 	p += cdata->padding;
 
 	INIT_LIST_HEAD(&buf->rb_mws);
+	INIT_LIST_HEAD(&buf->rb_all);
 	r = (struct rpcrdma_mw *)p;
 	switch (ia->ri_memreg_strategy) {
 	case RPCRDMA_FRMR:
@@ -1098,6 +1099,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 				ib_dereg_mr(r->r.frmr.fr_mr);
 				goto out;
 			}
+			list_add(&r->mw_all, &buf->rb_all);
 			list_add(&r->mw_list, &buf->rb_mws);
 			++r;
 		}
@@ -1116,6 +1118,7 @@ rpcrdma_buffer_create(struct rpcrdma_buffer *buf, struct rpcrdma_ep *ep,
 					" failed %i\n", __func__, rc);
 				goto out;
 			}
+			list_add(&r->mw_all, &buf->rb_all);
 			list_add(&r->mw_list, &buf->rb_mws);
 			++r;
 		}
@@ -1225,6 +1228,7 @@ rpcrdma_buffer_destroy(struct rpcrdma_buffer *buf)
 	while (!list_empty(&buf->rb_mws)) {
 		r = list_entry(buf->rb_mws.next,
 			struct rpcrdma_mw, mw_list);
+		list_del(&r->mw_all);
 		list_del(&r->mw_list);
 		switch (ia->ri_memreg_strategy) {
 		case RPCRDMA_FRMR:
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 84c3455..c1d8652 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -151,7 +151,7 @@ struct rpcrdma_rep {
  * An external memory region is any buffer or page that is registered
  * on the fly (ie, not pre-registered).
  *
- * Each rpcrdma_buffer has a list of these anchored in rb_mws. During
+ * Each rpcrdma_buffer has a list of free MWs anchored in rb_mws. During
  * call_allocate, rpcrdma_buffer_get() assigns one to each segment in
  * an rpcrdma_req. Then rpcrdma_register_external() grabs these to keep
  * track of registration metadata while each RPC is pending.
@@ -175,6 +175,7 @@ struct rpcrdma_mw {
 		struct rpcrdma_frmr	frmr;
 	} r;
 	struct list_head	mw_list;
+	struct list_head	mw_all;
 };
 
 /*
@@ -246,6 +247,7 @@ struct rpcrdma_buffer {
 	atomic_t	rb_credits;	/* most recent server credits */
 	int		rb_max_requests;/* client max requests */
 	struct list_head rb_mws;	/* optional memory windows/fmrs/frmrs */
+	struct list_head rb_all;
 	int		rb_send_index;
 	struct rpcrdma_req **rb_send_bufs;
 	int		rb_recv_index;
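
For illustration only, not part of this patch: with every MW chained onto
rb_all at creation time, a connection-recovery path can walk the whole pool
regardless of whether an MW is sitting on the rb_mws free list or is held by
an in-flight RPC. A minimal sketch, assuming a hypothetical helper name and
omitting the strategy-specific reset work:

/*
 * Hypothetical example: visit every MW in the pool, including MWs that
 * are not on the rb_mws free list because an RPC is using them.
 */
static void
rpcrdma_visit_all_mws(struct rpcrdma_buffer *buf)
{
	struct rpcrdma_mw *r;

	list_for_each_entry(r, &buf->rb_all, mw_all)
		; /* reset or re-register r as the recovery path requires */
}

The design follows the common kernel pattern of embedding two list_heads in
one object: mw_list tracks availability (free vs. in use), while mw_all gives
an always-complete view of the pool for teardown and recovery.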