From patchwork Thu Sep 17 20:46:39 2015
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 7210471
Subject: [PATCH v1 18/18] xprtrdma: Add class for RDMA backwards direction transport
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Thu, 17 Sep 2015 16:46:39 -0400
Message-ID: <20150917204639.19671.49626.stgit@manet.1015granger.net>
In-Reply-To: <20150917202829.19671.90044.stgit@manet.1015granger.net>
References: <20150917202829.19671.90044.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f
X-Mailing-List: linux-nfs@vger.kernel.org

To support the server-side of an NFSv4.1 backchannel on RDMA
connections, add a transport class for backwards direction
operation.
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/xprt.h              |    1 
 net/sunrpc/xprt.c                        |    1 
 net/sunrpc/xprtrdma/svc_rdma_transport.c |   14 +-
 net/sunrpc/xprtrdma/transport.c          |  243 ++++++++++++++++++++++++++++++
 net/sunrpc/xprtrdma/xprt_rdma.h          |    2 
 5 files changed, 256 insertions(+), 5 deletions(-)
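
[ Note for reviewers, not part of the patch: the new xprt_class is
  reached through the normal rpc_create() path. Below is a minimal
  sketch of how a server's NFSv4.1 callback setup might select it;
  the variable names (clp, conn, timeparms, cb_program) are
  illustrative stand-ins for the NFSD plumbing in
  fs/nfsd/nfs4callback.c. The two essential details are
  .protocol = XPRT_TRANSPORT_BC_RDMA, which selects the
  "rdma backchannel" class registered by this patch, and .bc_xprt,
  which hands xprt_setup_rdma_bc() the svc_xprt of the existing
  client-initiated connection.

	struct rpc_create_args args = {
		.net		= clp->net,
		.address	= (struct sockaddr *)&conn->cb_addr,
		.addrsize	= conn->cb_addrlen,
		.timeout	= &timeparms,
		.program	= &cb_program,
		.version	= 1,
		.flags		= RPC_CLNT_CREATE_NOPING,
		/* select the backchannel transport class */
		.protocol	= XPRT_TRANSPORT_BC_RDMA,
		/* the connection the client opened to us */
		.bc_xprt	= conn->cb_xprt,
	};
	struct rpc_clnt *client = rpc_create(&args);

	if (IS_ERR(client))
		return PTR_ERR(client);

  Because the backchannel rides on that existing connection,
  xprt_setup_rdma_bc() marks the new rpc_xprt bound and connected
  up front; there is no separate bind or connect step. ]
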
(%d)\n", atomic_read(&rdma->sc_dma_used)); + /* Final put of backchannel client transport */ + if (xprt->xpt_bc_xprt) { + xprt_put(xprt->xpt_bc_xprt); + xprt->xpt_bc_xprt = NULL; + } + /* De-allocate fastreg mr */ rdma_dealloc_frmr_q(rdma); diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c index 7d6c06f..1030425 100644 --- a/net/sunrpc/xprtrdma/transport.c +++ b/net/sunrpc/xprtrdma/transport.c @@ -51,6 +51,7 @@ #include #include #include +#include #include "xprt_rdma.h" @@ -148,7 +149,10 @@ static struct ctl_table sunrpc_table[] = { #define RPCRDMA_MAX_REEST_TO (30U * HZ) #define RPCRDMA_IDLE_DISC_TO (5U * 60 * HZ) -static struct rpc_xprt_ops xprt_rdma_procs; /* forward reference */ +static struct rpc_xprt_ops xprt_rdma_procs; +#if defined(CONFIG_SUNRPC_BACKCHANNEL) +static struct rpc_xprt_ops xprt_rdma_bc_procs; +#endif static void xprt_rdma_format_addresses4(struct rpc_xprt *xprt, struct sockaddr *sap) @@ -500,7 +504,7 @@ xprt_rdma_allocate(struct rpc_task *task, size_t size) if (req == NULL) return NULL; - flags = GFP_NOIO | __GFP_NOWARN; + flags = RPCRDMA_DEF_GFP; if (RPC_IS_SWAPPER(task)) flags = __GFP_MEMALLOC | GFP_NOWAIT | __GFP_NOWARN; @@ -685,6 +689,199 @@ xprt_rdma_disable_swap(struct rpc_xprt *xprt) { } +#if defined(CONFIG_SUNRPC_BACKCHANNEL) + +/* Server-side transport endpoint wants a whole page for its send + * buffer. The client RPC code constructs the RPC header in this + * buffer before it invokes ->send_request. + */ +static void * +xprt_rdma_bc_allocate(struct rpc_task *task, size_t size) +{ + struct rpc_rqst *rqst = task->tk_rqstp; + struct svc_rdma_op_ctxt *ctxt; + struct svcxprt_rdma *rdma; + struct svc_xprt *sxprt; + struct page *page; + + if (size > PAGE_SIZE) { + WARN_ONCE(1, "failed to handle buffer allocation (size %zu)\n", + size); + return NULL; + } + + page = alloc_page(RPCRDMA_DEF_GFP); + if (!page) + return NULL; + + sxprt = rqst->rq_xprt->bc_xprt; + rdma = container_of(sxprt, struct svcxprt_rdma, sc_xprt); + ctxt = svc_rdma_get_context_gfp(rdma, RPCRDMA_DEF_GFP); + if (!ctxt) { + put_page(page); + return NULL; + } + + rqst->rq_privdata = ctxt; + ctxt->pages[0] = page; + ctxt->count = 1; + return page_address(page); +} + +static void +xprt_rdma_bc_free(void *buffer) +{ + /* No-op: ctxt and page have already been freed. */ +} + +static int +rpcrdma_bc_send_request(struct svcxprt_rdma *rdma, struct rpc_rqst *rqst) +{ + struct rpc_xprt *xprt = rqst->rq_xprt; + struct rpcrdma_xprt *r_xprt = rpcx_to_rdmax(xprt); + struct rpcrdma_msg *headerp = (struct rpcrdma_msg *)rqst->rq_buffer; + struct svc_rdma_op_ctxt *ctxt; + int rc; + + /* Space in the send buffer for an RPC/RDMA header is reserved + * via xprt->tsh_size */ + headerp->rm_xid = rqst->rq_xid; + headerp->rm_vers = rpcrdma_version; + headerp->rm_credit = cpu_to_be32(r_xprt->rx_buf.rb_bc_max_requests); + headerp->rm_type = rdma_msg; + headerp->rm_body.rm_chunks[0] = xdr_zero; + headerp->rm_body.rm_chunks[1] = xdr_zero; + headerp->rm_body.rm_chunks[2] = xdr_zero; + + pr_info("%s: %*ph\n", __func__, 64, rqst->rq_buffer); + + ctxt = (struct svc_rdma_op_ctxt *)rqst->rq_privdata; + rc = svc_rdma_bc_post_send(rdma, ctxt, &rqst->rq_snd_buf); + if (rc) + goto drop_connection; + return rc; + +drop_connection: + pr_info("Failed to send backwards request\n"); + svc_rdma_put_context(ctxt, 1); + xprt_disconnect_done(xprt); + return -ENOTCONN; +} + +/* Take an RPC request and sent it on the passive end of a + * transport connection. 
+static int
+xprt_rdma_bc_send_request(struct rpc_task *task)
+{
+	struct rpc_rqst *rqst = task->tk_rqstp;
+	struct svc_xprt *sxprt = rqst->rq_xprt->bc_xprt;
+	struct svcxprt_rdma *rdma;
+	int len;
+
+	pr_info("%s: sending request with xid: %08x\n",
+		__func__, be32_to_cpu(rqst->rq_xid));
+
+	if (!mutex_trylock(&sxprt->xpt_mutex)) {
+		rpc_sleep_on(&sxprt->xpt_bc_pending, task, NULL);
+		if (!mutex_trylock(&sxprt->xpt_mutex))
+			return -EAGAIN;
+		rpc_wake_up_queued_task(&sxprt->xpt_bc_pending, task);
+	}
+
+	len = -ENOTCONN;
+	rdma = container_of(sxprt, struct svcxprt_rdma, sc_xprt);
+	if (!test_bit(XPT_DEAD, &sxprt->xpt_flags))
+		len = rpcrdma_bc_send_request(rdma, rqst);
+
+	mutex_unlock(&sxprt->xpt_mutex);
+
+	if (len < 0)
+		return len;
+	return 0;
+}
+
+static void
+xprt_rdma_bc_close(struct rpc_xprt *xprt)
+{
+	pr_info("RPC:       %s: xprt %p\n", __func__, xprt);
+}
+
+static void
+xprt_rdma_bc_put(struct rpc_xprt *xprt)
+{
+	pr_info("RPC:       %s: xprt %p\n", __func__, xprt);
+
+	xprt_free(xprt);
+	module_put(THIS_MODULE);
+}
+
+/* It shouldn't matter if the number of backchannel session slots
+ * doesn't match the number of RPC/RDMA credits. That just means
+ * one or the other will have extra slots that aren't used.
+ */
+static struct rpc_xprt *
+xprt_setup_rdma_bc(struct xprt_create *args)
+{
+	struct rpc_xprt *xprt;
+	struct rpcrdma_xprt *new_xprt;
+
+	if (args->addrlen > sizeof(xprt->addr)) {
+		dprintk("RPC:       %s: address too large\n", __func__);
+		return ERR_PTR(-EBADF);
+	}
+
+	xprt = xprt_alloc(args->net, sizeof(*new_xprt),
+			  RPCRDMA_MAX_BC_REQUESTS,
+			  RPCRDMA_MAX_BC_REQUESTS);
+	if (xprt == NULL) {
+		dprintk("RPC:       %s: couldn't allocate rpc_xprt\n",
+			__func__);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	xprt->timeout = &xprt_rdma_default_timeout;
+	xprt_set_bound(xprt);
+	xprt_set_connected(xprt);
+	xprt->bind_timeout = RPCRDMA_BIND_TO;
+	xprt->reestablish_timeout = RPCRDMA_INIT_REEST_TO;
+	xprt->idle_timeout = RPCRDMA_IDLE_DISC_TO;
+
+	xprt->prot = XPRT_TRANSPORT_BC_RDMA;
+	xprt->tsh_size = RPCRDMA_HDRLEN_MIN / sizeof(__be32);
+	xprt->ops = &xprt_rdma_bc_procs;
+
+	memcpy(&xprt->addr, args->dstaddr, args->addrlen);
+	xprt->addrlen = args->addrlen;
+	xprt_rdma_format_addresses(xprt, (struct sockaddr *)&xprt->addr);
+	xprt->resvport = 0;
+
+	xprt->max_payload = xprt_rdma_max_inline_read;
+
+	new_xprt = rpcx_to_rdmax(xprt);
+	new_xprt->rx_buf.rb_bc_max_requests = xprt->max_reqs;
+
+	xprt_get(xprt);
+	args->bc_xprt->xpt_bc_xprt = xprt;
+	xprt->bc_xprt = args->bc_xprt;
+
+	if (!try_module_get(THIS_MODULE))
+		goto out_fail;
+
+	/* Final put for backchannel xprt is in __svc_rdma_free */
+	xprt_get(xprt);
+	return xprt;
+
+out_fail:
+	xprt_rdma_free_addresses(xprt);
+	args->bc_xprt->xpt_bc_xprt = NULL;
+	xprt_put(xprt);
+	xprt_free(xprt);
+	return ERR_PTR(-EINVAL);
+}
+
+#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+
 /*
  * Plumbing for rpc transport switch and kernel module
  */
@@ -722,6 +919,32 @@ static struct xprt_class xprt_rdma = {
 	.setup			= xprt_setup_rdma,
 };
 
+#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+
+static struct rpc_xprt_ops xprt_rdma_bc_procs = {
+	.reserve_xprt		= xprt_reserve_xprt_cong,
+	.release_xprt		= xprt_release_xprt_cong,
+	.alloc_slot		= xprt_alloc_slot,
+	.release_request	= xprt_release_rqst_cong,
+	.buf_alloc		= xprt_rdma_bc_allocate,
+	.buf_free		= xprt_rdma_bc_free,
+	.send_request		= xprt_rdma_bc_send_request,
+	.set_retrans_timeout	= xprt_set_retrans_timeout_def,
+	.close			= xprt_rdma_bc_close,
+	.destroy		= xprt_rdma_bc_put,
+	.print_stats		= xprt_rdma_print_stats
+};
+
+static struct xprt_class xprt_rdma_bc = {
+	.list			= LIST_HEAD_INIT(xprt_rdma_bc.list),
+	.name			= "rdma backchannel",
+	.owner			= THIS_MODULE,
+	.ident			= XPRT_TRANSPORT_BC_RDMA,
+	.setup			= xprt_setup_rdma_bc,
+};
+
+#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+
 void xprt_rdma_cleanup(void)
 {
 	int rc;
@@ -739,6 +962,13 @@ void xprt_rdma_cleanup(void)
 			__func__, rc);
 
 	frwr_destroy_recovery_wq();
+
+#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+	rc = xprt_unregister_transport(&xprt_rdma_bc);
+	if (rc)
+		dprintk("RPC:       %s: xprt_unregister(bc) returned %i\n",
+			__func__, rc);
+#endif
 }
 
 int xprt_rdma_init(void)
@@ -755,6 +985,15 @@ int xprt_rdma_init(void)
 		return rc;
 	}
 
+#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+	rc = xprt_register_transport(&xprt_rdma_bc);
+	if (rc) {
+		xprt_unregister_transport(&xprt_rdma);
+		frwr_destroy_recovery_wq();
+		return rc;
+	}
+#endif
+
 	dprintk("RPCRDMA Module Init, register RPC RDMA transport\n");
 	dprintk("Defaults:\n");
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index 45a8b70..b4855de 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -148,6 +148,8 @@ rdmab_to_msg(struct rpcrdma_regbuf *rb)
 	return (struct rpcrdma_msg *)rb->rg_base;
 }
 
+#define RPCRDMA_DEF_GFP		(GFP_NOIO | __GFP_NOWARN)
+
 /*
  * struct rpcrdma_rep -- this structure encapsulates state required to recv
  * and complete a reply, asychronously. It needs several pieces of