From patchwork Mon Sep 11 15:18:49 2017
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 9947541
Subject: [PATCH v1 5/7] xprtrdma: Manage RDMA Send arguments via lock-free circular queue
From: Chuck Lever
To: jgunthorpe@obsidianresearch.com, sagi@grimberg.me
Cc: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Mon, 11 Sep 2017 11:18:49 -0400
Message-ID: <150514312482.1307.8481866545401169354.stgit@lebasque.1015granger.net>
In-Reply-To: <150514269466.1307.1991885208569456320.stgit@lebasque.1015granger.net>
References: <150514269466.1307.1991885208569456320.stgit@lebasque.1015granger.net>
User-Agent: StGit/0.17.1-24-g0acc

Rather than using the SGE array in struct rpcrdma_req, use a
sendctx to manage Send SGEs. Unmap these SGEs in the Send
completion handler instead of in xprt_rdma_free. This allows Send
WRs to outlive RPCs.

Reported-by: Sagi Grimberg
Suggested-by: Jason Gunthorpe
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/rpc_rdma.c  |   31 ++++++++++++-------------------
 net/sunrpc/xprtrdma/transport.c |    1 -
 net/sunrpc/xprtrdma/verbs.c     |   24 ++++++++++++++++--------
 net/sunrpc/xprtrdma/xprt_rdma.h |    6 +-----
 4 files changed, 29 insertions(+), 33 deletions(-)
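A note for review: struct rpcrdma_sendctx and the circular-queue helpers
rpcrdma_sendctx_get_locked() and rpcrdma_sendctx_put_locked() are consumed
but not defined by the hunks below; they are presumably added earlier in
this series. As a reading aid, here is a minimal sketch of the structure,
reconstructed only from the fields these hunks touch; the exact layout and
the flexible array member are assumptions:

/* Sketch only: reconstructed from the fields used in this patch
 * (sc_wr, sc_cqe, sc_sges, sc_unmap_count, sc_device); not copied
 * from the series.
 */
struct rpcrdma_sendctx {
	struct ib_send_wr	sc_wr;		/* WR posted via ib_post_send */
	struct ib_cqe		sc_cqe;		/* hooks rpcrdma_wc_send */
	struct ib_device	*sc_device;	/* device that mapped sc_sges */
	unsigned int		sc_unmap_count;	/* SGEs to unmap on completion */
	struct ib_sge		sc_sges[];	/* Send SGE array */
};

One sendctx owns one Send WR and its SGE array, so a Send can complete,
and its buffers can be unmapped, without touching the rpcrdma_req after
the RPC itself has finished.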
diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 545f5d0..8e71d03 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -517,8 +517,9 @@ static bool
 rpcrdma_prepare_hdr_sge(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
			u32 len)
 {
+	struct rpcrdma_sendctx *sc = req->rl_sendctx;
 	struct rpcrdma_regbuf *rb = req->rl_rdmabuf;
-	struct ib_sge *sge = &req->rl_send_sge[0];
+	struct ib_sge *sge = sc->sc_sges;
 
 	if (!rpcrdma_dma_map_regbuf(ia, rb))
 		goto out_regbuf;
@@ -528,7 +529,7 @@ rpcrdma_prepare_hdr_sge(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	ib_dma_sync_single_for_device(rdmab_device(rb),
 				      sge->addr, sge->length,
 				      DMA_TO_DEVICE);
-	req->rl_send_wr.num_sge++;
+	sc->sc_wr.num_sge++;
 	return true;
 
 out_regbuf:
@@ -543,10 +544,11 @@ static bool
 rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
			 struct xdr_buf *xdr, enum rpcrdma_chunktype rtype)
 {
+	struct rpcrdma_sendctx *sc = req->rl_sendctx;
 	unsigned int sge_no, page_base, len, remaining;
 	struct rpcrdma_regbuf *rb = req->rl_sendbuf;
 	struct ib_device *device = ia->ri_device;
-	struct ib_sge *sge = req->rl_send_sge;
+	struct ib_sge *sge = sc->sc_sges;
 	u32 lkey = ia->ri_pd->local_dma_lkey;
 	struct page *page, **ppages;
 
@@ -637,8 +639,9 @@ rpcrdma_prepare_msg_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req,
 	}
 
 out:
-	req->rl_send_wr.num_sge += sge_no;
-	req->rl_mapped_sges = sge_no - 1;
+	sc->sc_wr.num_sge += sge_no;
+	sc->sc_unmap_count = sge_no - 1;
+	sc->sc_device = device;
 	return true;
 
 out_regbuf:
@@ -669,8 +672,11 @@ rpcrdma_prepare_send_sges(struct rpcrdma_xprt *r_xprt,
			  struct rpcrdma_req *req, u32 hdrlen,
			  struct xdr_buf *xdr,
			  enum rpcrdma_chunktype rtype)
 {
-	req->rl_send_wr.num_sge = 0;
+	req->rl_sendctx = rpcrdma_sendctx_get_locked(&r_xprt->rx_buf);
+	if (!req->rl_sendctx)
+		return -ENOBUFS;
+	req->rl_sendctx->sc_wr.num_sge = 0;
 
 	if (!rpcrdma_prepare_hdr_sge(&r_xprt->rx_ia, req, hdrlen))
 		return -EIO;
@@ -681,19 +687,6 @@ rpcrdma_prepare_send_sges(struct rpcrdma_xprt *r_xprt,
 	return 0;
 }
 
-void
-rpcrdma_unmap_sges(struct rpcrdma_ia *ia, struct rpcrdma_req *req)
-{
-	struct ib_device *device = ia->ri_device;
-	struct ib_sge *sge;
-	int count;
-
-	sge = &req->rl_send_sge[2];
-	for (count = req->rl_mapped_sges; count--; sge++)
-		ib_dma_unmap_page(device, sge->addr, sge->length,
-				  DMA_TO_DEVICE);
-}
-
 /**
  * __rpcrdma_unmap_send_sges - DMA-unmap Send buffers
  * @sc: send_ctx containing SGEs to unmap
diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
index 18cb8b4..4cca4a7 100644
--- a/net/sunrpc/xprtrdma/transport.c
+++ b/net/sunrpc/xprtrdma/transport.c
@@ -688,7 +688,6 @@ xprt_rdma_free(struct rpc_task *task)
 	rpcrdma_remove_req(&r_xprt->rx_buf, req);
 	if (!list_empty(&req->rl_registered))
 		ia->ri_ops->ro_unmap_safe(r_xprt, req, !RPC_IS_ASYNC(task));
-	rpcrdma_unmap_sges(ia, req);
 	rpcrdma_buffer_put(req);
 }
 
diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 219f2d8..3409de8 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -128,11 +128,17 @@ rpcrdma_qp_async_error_upcall(struct ib_event *event, void *context)
 static void
 rpcrdma_wc_send(struct ib_cq *cq, struct ib_wc *wc)
 {
+	struct ib_cqe *cqe = wc->wr_cqe;
+	struct rpcrdma_sendctx *sc =
+		container_of(cqe, struct rpcrdma_sendctx, sc_cqe);
+
 	/* WARNING: Only wr_cqe and status are reliable at this point */
 	if (wc->status != IB_WC_SUCCESS && wc->status != IB_WC_WR_FLUSH_ERR)
 		pr_err("rpcrdma: Send: %s (%u/0x%x)\n",
 		       ib_wc_status_msg(wc->status),
 		       wc->status, wc->vendor_err);
+
+	rpcrdma_sendctx_put_locked(sc);
 }
 
 /* Perform basic sanity checking to avoid using garbage
@@ -1104,13 +1110,8 @@ rpcrdma_create_req(struct rpcrdma_xprt *r_xprt)
 	spin_lock(&buffer->rb_reqslock);
 	list_add(&req->rl_all, &buffer->rb_allreqs);
 	spin_unlock(&buffer->rb_reqslock);
-	req->rl_cqe.done = rpcrdma_wc_send;
 	req->rl_buffer = &r_xprt->rx_buf;
 	INIT_LIST_HEAD(&req->rl_registered);
-	req->rl_send_wr.next = NULL;
-	req->rl_send_wr.wr_cqe = &req->rl_cqe;
-	req->rl_send_wr.sg_list = req->rl_send_sge;
-	req->rl_send_wr.opcode = IB_WR_SEND;
 
 	return req;
 }
@@ -1401,7 +1402,6 @@ rpcrdma_buffer_put(struct rpcrdma_req *req)
 	struct rpcrdma_buffer *buffers = req->rl_buffer;
 	struct rpcrdma_rep *rep = req->rl_reply;
 
-	req->rl_send_wr.num_sge = 0;
 	req->rl_reply = NULL;
 
 	spin_lock(&buffers->rb_lock);
@@ -1533,7 +1533,8 @@ rpcrdma_ep_post(struct rpcrdma_ia *ia,
		struct rpcrdma_ep *ep,
		struct rpcrdma_req *req)
 {
-	struct ib_send_wr *send_wr = &req->rl_send_wr;
+	struct rpcrdma_sendctx *sc = req->rl_sendctx;
+	struct ib_send_wr *send_wr = &sc->sc_wr;
 	struct ib_send_wr *send_wr_fail;
 	int rc;
 
@@ -1547,10 +1548,17 @@ rpcrdma_ep_post(struct rpcrdma_ia *ia,
 	dprintk("RPC:       %s: posting %d s/g entries\n",
		__func__, send_wr->num_sge);
 
-	rpcrdma_set_signaled(ep, send_wr);
+	if (!ep->rep_send_count) {
+		send_wr->send_flags |= IB_SEND_SIGNALED;
+		ep->rep_send_count = ep->rep_send_batch;
+	} else {
+		send_wr->send_flags &= ~IB_SEND_SIGNALED;
+		--ep->rep_send_count;
+	}
 	rc = ib_post_send(ia->ri_id->qp, send_wr, &send_wr_fail);
 	if (rc)
 		goto out_postsend_err;
+
 	return 0;
 
 out_postsend_err:
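The rpcrdma_ep_post() hunk above sets IB_SEND_SIGNALED on only every
rep_send_batch-th Send. Unsignaled Sends generate no completion, so when
rpcrdma_wc_send() does run, releasing just the signaled sendctx would leak
the unsignaled contexts queued before it. A minimal consumer-side sketch
of how the put path might walk the queue, assuming hypothetical fields
rb_sc_ctxs, rb_sc_tail, and rb_sc_size in struct rpcrdma_buffer:

/* Sketch only: consumer side of a circular sendctx queue. Field
 * names are assumptions; the real helper is added earlier in this
 * series. Calls are serialized by the Send completion handler, so
 * only the tail update needs a release barrier.
 */
static void rpcrdma_sendctx_put_sketch(struct rpcrdma_buffer *buf,
				       struct rpcrdma_sendctx *sc)
{
	unsigned long next_tail = buf->rb_sc_tail;

	/* Release @sc and every unsignaled context queued before it. */
	do {
		next_tail = (next_tail + 1) % buf->rb_sc_size;
		__rpcrdma_unmap_send_sges(buf->rb_sc_ctxs[next_tail]);
	} while (buf->rb_sc_ctxs[next_tail] != sc);

	/* Paired with a READ_ONCE in the get path */
	smp_store_release(&buf->rb_sc_tail, next_tail);
}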
diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
index f9ac1fd..64cfc11 100644
--- a/net/sunrpc/xprtrdma/xprt_rdma.h
+++ b/net/sunrpc/xprtrdma/xprt_rdma.h
@@ -367,19 +367,16 @@ struct rpcrdma_buffer;
 struct rpcrdma_req {
 	struct list_head	rl_list;
 	__be32			rl_xid;
-	unsigned int		rl_mapped_sges;
 	unsigned int		rl_connect_cookie;
 	struct rpcrdma_buffer	*rl_buffer;
 	struct rpcrdma_rep	*rl_reply;
 	struct xdr_stream	rl_stream;
 	struct xdr_buf		rl_hdrbuf;
-	struct ib_send_wr	rl_send_wr;
-	struct ib_sge		rl_send_sge[RPCRDMA_MAX_SEND_SGES];
+	struct rpcrdma_sendctx	*rl_sendctx;
 	struct rpcrdma_regbuf	*rl_rdmabuf;	/* xprt header */
 	struct rpcrdma_regbuf	*rl_sendbuf;	/* rq_snd_buf */
 	struct rpcrdma_regbuf	*rl_recvbuf;	/* rq_rcv_buf */
-	struct ib_cqe		rl_cqe;
 	struct list_head	rl_all;
 	bool			rl_backchannel;
@@ -678,7 +675,6 @@ int rpcrdma_prepare_send_sges(struct rpcrdma_xprt *r_xprt,
			      struct rpcrdma_req *req, u32 hdrlen,
			      struct xdr_buf *xdr,
			      enum rpcrdma_chunktype rtype);
-void rpcrdma_unmap_sges(struct rpcrdma_ia *, struct rpcrdma_req *);
 void __rpcrdma_unmap_send_sges(struct rpcrdma_sendctx *sc);
 int rpcrdma_marshal_req(struct rpcrdma_xprt *r_xprt, struct rpc_rqst *rqst);
 void rpcrdma_set_max_header_sizes(struct rpcrdma_xprt *);
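Finally, rpcrdma_prepare_send_sges() returns -ENOBUFS when no sendctx can
be acquired, i.e. when the queue's head would catch up with its tail. A
matching producer-side sketch under the same assumed field names; the
transport's send path is serialized by the caller, so only the tail,
which the completion handler advances, needs an ordered read:

/* Sketch only: producer side. Callers are serialized by the
 * transport's send lock, so the head can be advanced without
 * atomics; only rb_sc_tail needs READ_ONCE.
 */
static struct rpcrdma_sendctx *
rpcrdma_sendctx_get_sketch(struct rpcrdma_buffer *buf)
{
	unsigned long next_head = (buf->rb_sc_head + 1) % buf->rb_sc_size;

	if (next_head == READ_ONCE(buf->rb_sc_tail))
		return NULL;	/* no free context: caller returns -ENOBUFS */

	buf->rb_sc_head = next_head;
	return buf->rb_sc_ctxs[next_head];
}

Because each side updates only its own index, get and put never contend
on a shared lock, which is what makes the queue in the subject line
effectively lock-free.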