From patchwork Mon Jun 12 14:10:01 2023
X-Patchwork-Id: 13276578
Subject: [PATCH v2 1/5] SUNRPC: Revert cc93ce9529a6 ("svcrdma: Retain the page backing rq_res.head[0].iov_base")
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: Chuck Lever, linux-rdma@vger.kernel.org, tom@talpey.com
Date: Mon, 12 Jun 2023 10:10:01 -0400
Message-ID: <168657900128.5619.7769165526407423007.stgit@manet.1015granger.net>
In-Reply-To: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>
References: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>

From: Chuck Lever

This is a prerequisite for releasing pages in the send completion
handler. Reverted by hand: "patch -R" would not apply cleanly.

Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_sendto.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index a35d1e055b1a..8e7ccef74207 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -975,11 +975,6 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
 	ret = svc_rdma_send_reply_msg(rdma, sctxt, rctxt, rqstp);
 	if (ret < 0)
 		goto put_ctxt;
-	/* Prevent svc_xprt_release() from releasing the page backing
-	 * rq_res.head[0].iov_base. It's no longer being accessed by
-	 * the I/O device.
-	 */
-	rqstp->rq_respages++;
 	return 0;
 
 reply_chunk:
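The hunk above removes the rq_respages bump that kept svc_xprt_release()
away from the page backing rq_res.head[0].iov_base. A minimal
stand-alone model of that mechanism follows: svc_xprt_release() frees
every page in the [rq_respages, rq_next_page) window, so advancing
rq_respages by one excludes the head page from the sweep. This is
user-space C with malloc'd buffers standing in for struct page; the
names mirror the kernel fields, but none of it is kernel code.

#include <stdio.h>
#include <stdlib.h>

#define NPAGES 4

int main(void)
{
	void *pages[NPAGES];
	void **respages, **next_page;
	int i;

	for (i = 0; i < NPAGES; i++)
		pages[i] = malloc(4096);

	respages = &pages[0];
	next_page = &pages[NPAGES];

	/* What the reverted hunk did: bump the window start so the
	 * generic sweep below skips the head page. */
	respages++;

	/* svc_xprt_release()-style sweep: frees pages[1..3] only */
	while (respages < next_page)
		free(*respages++);

	/* The skipped head page must be freed elsewhere -- after
	 * this series, by the send completion handler. */
	free(pages[0]);
	printf("head page was excluded from the generic sweep\n");
	return 0;
}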
From patchwork Mon Jun 12 14:10:07 2023
X-Patchwork-Id: 13276579
Subject: [PATCH v2 2/5] SUNRPC: Revert 579900670ac7 ("svcrdma: Remove unused sc_pages field")
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: Chuck Lever, linux-rdma@vger.kernel.org, tom@talpey.com
Date: Mon, 12 Jun 2023 10:10:07 -0400
Message-ID: <168657900778.5619.16189837402481584636.stgit@manet.1015granger.net>
In-Reply-To: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>
References: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>

From: Chuck Lever

This is a prerequisite for releasing pages in the send completion
handler. Reverted by hand: "patch -R" would not apply cleanly.
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc_rdma.h       |    3 ++-
 net/sunrpc/xprtrdma/svc_rdma_sendto.c |   25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index a0f3ea357977..8e654da55170 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -158,8 +158,9 @@ struct svc_rdma_send_ctxt {
 	struct xdr_buf sc_hdrbuf;
 	struct xdr_stream sc_stream;
 	void *sc_xprt_buf;
+	int sc_page_count;
 	int sc_cur_sge_no;
-
+	struct page *sc_pages[RPCSVC_MAXPAGES];
 	struct ib_sge sc_sges[];
 };

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 8e7ccef74207..4c62bc41ea40 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -213,6 +213,7 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
 
 	ctxt->sc_send_wr.num_sge = 0;
 	ctxt->sc_cur_sge_no = 0;
+	ctxt->sc_page_count = 0;
 	return ctxt;
 
 out_empty:
@@ -227,6 +228,8 @@ struct svc_rdma_send_ctxt *svc_rdma_send_ctxt_get(struct svcxprt_rdma *rdma)
  * svc_rdma_send_ctxt_put - Return send_ctxt to free list
  * @rdma: controlling svcxprt_rdma
  * @ctxt: object to return to the free list
+ *
+ * Pages left in sc_pages are DMA unmapped and released.
  */
 void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
@@ -234,6 +237,9 @@ void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
 	struct ib_device *device = rdma->sc_cm_id->device;
 	unsigned int i;
 
+	for (i = 0; i < ctxt->sc_page_count; ++i)
+		put_page(ctxt->sc_pages[i]);
+
 	/* The first SGE contains the transport header, which
 	 * remains mapped until @ctxt is destroyed.
 	 */
@@ -798,6 +804,25 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
 				      svc_rdma_xb_dma_map, &args);
 }
 
+/* The svc_rqst and all resources it owns are released as soon as
+ * svc_rdma_sendto returns. Transfer pages under I/O to the ctxt
+ * so they are released by the Send completion handler.
+ */
+static inline void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+					  struct svc_rdma_send_ctxt *ctxt)
+{
+	int i, pages = rqstp->rq_next_page - rqstp->rq_respages;
+
+	ctxt->sc_page_count += pages;
+	for (i = 0; i < pages; i++) {
+		ctxt->sc_pages[i] = rqstp->rq_respages[i];
+		rqstp->rq_respages[i] = NULL;
+	}
+
+	/* Prevent svc_xprt_release from releasing pages in rq_pages */
+	rqstp->rq_next_page = rqstp->rq_respages;
+}
+
 /* Prepare the portion of the RPC Reply that will be transmitted
  * via RDMA Send. The RPC-over-RDMA transport header is prepared
  * in sc_sges[0], and the RPC xdr_buf is prepared in following sges.
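svc_rdma_save_io_pages() is the heart of this revert: ownership of the
pages in the [rq_respages, rq_next_page) window moves to the send ctxt,
and the window is then emptied so svc_xprt_release() finds nothing to
free. The sketch below is a user-space model of that handoff with
hypothetical struct names and malloc standing in for page allocation;
it mirrors the hunk above but is not the kernel implementation.

#include <stdio.h>
#include <stdlib.h>

#define MAXPAGES 8

struct rqst {
	void *rq_pages[MAXPAGES];
	void **rq_respages;	/* first page under I/O */
	void **rq_next_page;	/* one past the last page under I/O */
};

struct send_ctxt {
	void *sc_pages[MAXPAGES];
	int sc_page_count;
};

/* Mirrors svc_rdma_save_io_pages(); like the kernel code, it assumes
 * the ctxt starts with sc_page_count == 0. */
static void save_io_pages(struct rqst *rqstp, struct send_ctxt *ctxt)
{
	int i, pages = rqstp->rq_next_page - rqstp->rq_respages;

	ctxt->sc_page_count += pages;
	for (i = 0; i < pages; i++) {
		ctxt->sc_pages[i] = rqstp->rq_respages[i];
		rqstp->rq_respages[i] = NULL;
	}

	/* Empty window: the caller's release loop now frees nothing */
	rqstp->rq_next_page = rqstp->rq_respages;
}

int main(void)
{
	struct rqst r = { { 0 } };
	struct send_ctxt c = { { 0 }, 0 };
	int i;

	for (i = 0; i < 3; i++)
		r.rq_pages[i] = malloc(4096);
	r.rq_respages = &r.rq_pages[0];
	r.rq_next_page = &r.rq_pages[3];

	save_io_pages(&r, &c);
	printf("ctxt now owns %d pages\n", c.sc_page_count);

	/* "Send completion": the ctxt, not the rqst, frees the pages */
	for (i = 0; i < c.sc_page_count; i++)
		free(c.sc_pages[i]);
	return 0;
}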
From patchwork Mon Jun 12 14:10:14 2023
X-Patchwork-Id: 13276580
Subject: [PATCH v2 3/5] svcrdma: Revert 2a1e4f21d841 ("svcrdma: Normalize Send page handling")
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: Chuck Lever, linux-rdma@vger.kernel.org, tom@talpey.com
Date: Mon, 12 Jun 2023 10:10:14 -0400
Message-ID: <168657901418.5619.12840449463788983763.stgit@manet.1015granger.net>
In-Reply-To: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>
References: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>

From: Chuck Lever

Get rid of the completion wait in svc_rdma_sendto(), and release
pages in the send completion handler again. A subsequent patch will
handle releasing those pages more efficiently.

Reverted by hand: "patch -R" would not apply cleanly.
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc_rdma.h            |    1 -
 net/sunrpc/xprtrdma/svc_rdma_backchannel.c |    8 +-------
 net/sunrpc/xprtrdma/svc_rdma_sendto.c      |   27 ++++++++++++---------------
 3 files changed, 13 insertions(+), 23 deletions(-)

diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
index 8e654da55170..a5ee0af2a310 100644
--- a/include/linux/sunrpc/svc_rdma.h
+++ b/include/linux/sunrpc/svc_rdma.h
@@ -154,7 +154,6 @@ struct svc_rdma_send_ctxt {
 
 	struct ib_send_wr sc_send_wr;
 	struct ib_cqe sc_cqe;
-	struct completion sc_done;
 	struct xdr_buf sc_hdrbuf;
 	struct xdr_stream sc_stream;
 	void *sc_xprt_buf;

diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
index aa2227a7e552..7420a2c990c7 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
@@ -93,13 +93,7 @@ static int svc_rdma_bc_sendto(struct svcxprt_rdma *rdma,
 	 */
 	get_page(virt_to_page(rqst->rq_buffer));
 	sctxt->sc_send_wr.opcode = IB_WR_SEND;
-	ret = svc_rdma_send(rdma, sctxt);
-	if (ret < 0)
-		return ret;
-
-	ret = wait_for_completion_killable(&sctxt->sc_done);
-	svc_rdma_send_ctxt_put(rdma, sctxt);
-	return ret;
+	return svc_rdma_send(rdma, sctxt);
 }
 
 /* Server-side transport endpoint wants a whole page for its send

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 4c62bc41ea40..1ae4236d04a3 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -147,7 +147,6 @@ svc_rdma_send_ctxt_alloc(struct svcxprt_rdma *rdma)
 	ctxt->sc_send_wr.wr_cqe = &ctxt->sc_cqe;
 	ctxt->sc_send_wr.sg_list = ctxt->sc_sges;
 	ctxt->sc_send_wr.send_flags = IB_SEND_SIGNALED;
-	init_completion(&ctxt->sc_done);
 	ctxt->sc_cqe.done = svc_rdma_wc_send;
 	ctxt->sc_xprt_buf = buffer;
 	xdr_buf_init(&ctxt->sc_hdrbuf, ctxt->sc_xprt_buf,
@@ -286,12 +285,12 @@ static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc)
 		container_of(cqe, struct svc_rdma_send_ctxt, sc_cqe);
 
 	svc_rdma_wake_send_waiters(rdma, 1);
-	complete(&ctxt->sc_done);
 
 	if (unlikely(wc->status != IB_WC_SUCCESS))
 		goto flushed;
 
 	trace_svcrdma_wc_send(wc, &ctxt->sc_cid);
+	svc_rdma_send_ctxt_put(rdma, ctxt);
 	return;
 
 flushed:
@@ -299,6 +298,7 @@ static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc)
 		trace_svcrdma_wc_send_err(wc, &ctxt->sc_cid);
 	else
 		trace_svcrdma_wc_send_flush(wc, &ctxt->sc_cid);
+	svc_rdma_send_ctxt_put(rdma, ctxt);
 	svc_xprt_deferred_close(&rdma->sc_xprt);
 }
 
@@ -315,8 +315,6 @@ int svc_rdma_send(struct svcxprt_rdma *rdma, struct svc_rdma_send_ctxt *ctxt)
 	struct ib_send_wr *wr = &ctxt->sc_send_wr;
 	int ret;
 
-	reinit_completion(&ctxt->sc_done);
-
 	/* Sync the transport header buffer */
 	ib_dma_sync_single_for_device(rdma->sc_pd->device,
 				      wr->sg_list[0].addr,
@@ -808,8 +806,8 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
 * svc_rdma_sendto returns. Transfer pages under I/O to the ctxt
 * so they are released by the Send completion handler.
 */
-static inline void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
-					  struct svc_rdma_send_ctxt *ctxt)
+static void svc_rdma_save_io_pages(struct svc_rqst *rqstp,
+				   struct svc_rdma_send_ctxt *ctxt)
 {
 	int i, pages = rqstp->rq_next_page - rqstp->rq_respages;
 
@@ -852,6 +850,8 @@ static int svc_rdma_send_reply_msg(struct svcxprt_rdma *rdma,
 	if (ret < 0)
 		return ret;
 
+	svc_rdma_save_io_pages(rqstp, sctxt);
+
 	if (rctxt->rc_inv_rkey) {
 		sctxt->sc_send_wr.opcode = IB_WR_SEND_WITH_INV;
 		sctxt->sc_send_wr.ex.invalidate_rkey = rctxt->rc_inv_rkey;
@@ -859,13 +859,7 @@ static int svc_rdma_send_reply_msg(struct svcxprt_rdma *rdma,
 		sctxt->sc_send_wr.opcode = IB_WR_SEND;
 	}
 
-	ret = svc_rdma_send(rdma, sctxt);
-	if (ret < 0)
-		return ret;
-
-	ret = wait_for_completion_killable(&sctxt->sc_done);
-	svc_rdma_send_ctxt_put(rdma, sctxt);
-	return ret;
+	return svc_rdma_send(rdma, sctxt);
 }
 
 /**
@@ -931,8 +925,7 @@ void svc_rdma_send_error_msg(struct svcxprt_rdma *rdma,
 	sctxt->sc_sges[0].length = sctxt->sc_hdrbuf.len;
 	if (svc_rdma_send(rdma, sctxt))
 		goto put_ctxt;
-
-	wait_for_completion_killable(&sctxt->sc_done);
+	return;
 
 put_ctxt:
 	svc_rdma_send_ctxt_put(rdma, sctxt);
@@ -1006,6 +999,10 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
 	if (ret != -E2BIG && ret != -EINVAL)
 		goto put_ctxt;
 
+	/* Send completion releases payload pages that were part
+	 * of previously posted RDMA Writes.
+	 */
+	svc_rdma_save_io_pages(rqstp, sctxt);
 	svc_rdma_send_error_msg(rdma, sctxt, rctxt, ret);
 	return 0;
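The control-flow change in this revert: svc_rdma_sendto() no longer
parks an nfsd thread on sc_done; instead, svc_rdma_wc_send() puts the
ctxt (and, via the previous patch, its pages) when the Send actually
completes. Below is a pthread-based sketch of the resulting shape,
where a thread stands in for the RDMA completion queue and every name
is hypothetical; it models the pattern only, not the kernel code.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct ctxt {
	int pages;
};

static void ctxt_put(struct ctxt *c)
{
	printf("releasing %d pages from the completion path\n", c->pages);
	free(c);
}

/* Stands in for svc_rdma_wc_send(): cleanup moved here, so there is
 * no complete()/wait_for_completion_killable() pair anymore. */
static void *wc_send(void *arg)
{
	ctxt_put(arg);
	return NULL;
}

static int send_post(struct ctxt *c, pthread_t *cq)
{
	/* Post the Send; the completion fires asynchronously */
	return pthread_create(cq, NULL, wc_send, c);
}

int main(void)
{
	struct ctxt *c = malloc(sizeof(*c));
	pthread_t cq;

	c->pages = 3;
	if (send_post(c, &cq))	/* the sender returns without waiting */
		return 1;
	pthread_join(cq, NULL);	/* only so the demo exits cleanly */
	return 0;
}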
From patchwork Mon Jun 12 14:10:20 2023
X-Patchwork-Id: 13276581
Subject: [PATCH v2 4/5] svcrdma: Prevent page release when nothing was received
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: Chuck Lever, linux-rdma@vger.kernel.org, tom@talpey.com
Date: Mon, 12 Jun 2023 10:10:20 -0400
Message-ID: <168657902069.5619.1670908567999246810.stgit@manet.1015granger.net>
In-Reply-To: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>
References: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>

From: Chuck Lever

I noticed that svc_rqst_release_pages() was still unnecessarily
releasing a page when svc_rdma_recvfrom() returns zero.

Fixes: a53d5cb0646a ("svcrdma: Avoid releasing a page in svc_xprt_release()")
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 46a719ba4917..5bd16d19b16e 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -804,6 +804,12 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
 	clear_bit(XPT_DATA, &xprt->xpt_flags);
 	spin_unlock(&rdma_xprt->sc_rq_dto_lock);
 
+	/* Prevent svc_xprt_release() from releasing pages in rq_pages
+	 * when returning 0 or an error.
+	 */
+	rqstp->rq_respages = rqstp->rq_pages;
+	rqstp->rq_next_page = rqstp->rq_respages;
+
 	/* Unblock the transport for the next receive */
 	svc_xprt_received(xprt);
 	if (!ctxt)
@@ -815,12 +821,6 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
 				   DMA_FROM_DEVICE);
 	svc_rdma_build_arg_xdr(rqstp, ctxt);
 
-	/* Prevent svc_xprt_release from releasing pages in rq_pages
-	 * if we return 0 or an error.
-	 */
-	rqstp->rq_respages = rqstp->rq_pages;
-	rqstp->rq_next_page = rqstp->rq_respages;
-
 	ret = svc_rdma_xdr_decode_req(&rqstp->rq_arg, ctxt);
 	if (ret < 0)
 		goto out_err;
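The fix is purely one of ordering: the rq_respages/rq_next_page reset
now happens before the "if (!ctxt)" exit, so the nothing-received path
is covered as well as the error paths. A toy model of why the placement
matters is sketched below, in user-space C with hypothetical names; the
function merely imitates the early-return structure of
svc_rdma_recvfrom().

#include <stdio.h>

#define NPAGES 4

struct rqst {
	int rq_pages[NPAGES];
	int *rq_respages;
	int *rq_next_page;
};

/* Imitates svc_rdma_recvfrom(); returns 0 when nothing arrived */
static int recvfrom_model(struct rqst *r, int have_ctxt)
{
	/* Reset FIRST, so every return path below is covered */
	r->rq_respages = r->rq_pages;
	r->rq_next_page = r->rq_respages;

	if (!have_ctxt)
		return 0;	/* early return: window already empty */

	/* ... build the argument buffer, widening the window ... */
	return 1;
}

int main(void)
{
	struct rqst r;
	int n;

	recvfrom_model(&r, 0);
	n = (int)(r.rq_next_page - r.rq_respages);
	printf("pages the release sweep would free: %d\n", n); /* 0 */
	return 0;
}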
From patchwork Mon Jun 12 14:10:27 2023
X-Patchwork-Id: 13276582
Subject: [PATCH v2 5/5] SUNRPC: Optimize page release in svc_rdma_sendto()
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: Chuck Lever, Tom Talpey, linux-rdma@vger.kernel.org, tom@talpey.com
Date: Mon, 12 Jun 2023 10:10:27 -0400
Message-ID: <168657902707.5619.1368921836287929972.stgit@manet.1015granger.net>
In-Reply-To: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>
References: <168657879115.5619.5573632864481586166.stgit@manet.1015granger.net>

From: Chuck Lever

Now that we have bulk page allocation and release APIs, it's more
efficient to use those than it is for nfsd threads to wait for send
completions. Previous patches have eliminated the calls to
wait_for_completion() and complete(), in order to avoid scheduler
overhead. Now release pages-under-I/O in the send completion handler
using the efficient bulk release API.

I've measured a 7% reduction in cumulative CPU utilization in
svc_rdma_sendto(), svc_rdma_wc_send(), and svc_xprt_release(). In
particular, using release_pages() instead of complete() cuts the time
per svc_rdma_wc_send() call by two-thirds. This helps improve
scalability because svc_rdma_wc_send() is single-threaded per
connection.

Signed-off-by: Chuck Lever
Reviewed-by: Tom Talpey
---
 net/sunrpc/xprtrdma/svc_rdma_sendto.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 1ae4236d04a3..24228f3611e8 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -236,8 +236,8 @@ void svc_rdma_send_ctxt_put(struct svcxprt_rdma *rdma,
 	struct ib_device *device = rdma->sc_cm_id->device;
 	unsigned int i;
 
-	for (i = 0; i < ctxt->sc_page_count; ++i)
-		put_page(ctxt->sc_pages[i]);
+	if (ctxt->sc_page_count)
+		release_pages(ctxt->sc_pages, ctxt->sc_page_count);
 
 	/* The first SGE contains the transport header, which
 	 * remains mapped until @ctxt is destroyed.
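The bulk API used here is mm's release_pages(), which drops the
refcounts on a whole array of pages in one pass rather than paying the
put_page() entry cost once per page. As a loose user-space analog of
the batching idea only: bulk_free() below is a made-up stand-in, not
release_pages(), and the malloc'd buffers are not struct pages.

#include <stdio.h>
#include <stdlib.h>

/* Made-up stand-in for a bulk release API: one call, one pass,
 * fixed overhead amortized over nr items. */
static void bulk_free(void **items, int nr)
{
	while (nr--)
		free(items[nr]);
}

int main(void)
{
	void *pages[16];
	int i, count = 16;

	for (i = 0; i < count; i++)
		pages[i] = malloc(4096);

	/* Before: for (i = 0; i < count; ++i) put_page(pages[i]); */
	if (count)
		bulk_free(pages, count);	/* after: one bulk call */

	printf("released %d pages in one call\n", count);
	return 0;
}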