From patchwork Mon Jul 20 19:03:49 2015
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 6829971
Subject: [PATCH v3 09/15] xprtrdma: Always provide a write list when sending NFS READ
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Mon, 20 Jul 2015 15:03:49 -0400
Message-ID: <20150720190349.10997.53285.stgit@manet.1015granger.net>
In-Reply-To: <20150720185624.10997.51574.stgit@manet.1015granger.net>
References: <20150720185624.10997.51574.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f

The client has been setting up a reply chunk for NFS READs that are
smaller than the inline threshold. This is not efficient: both the
server and client CPUs have to copy the reply's data payload into and
out of the memory region that is then transferred via RDMA.

Using the write list, the data payload is moved by the device and no
extra data copying is necessary.
Signed-off-by: Chuck Lever
Reviewed-by: Devesh Sharma
Reviewed-by: Sagi Grimberg
Tested-by: Devesh Sharma
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 950b654..e7cf976 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -418,28 +418,15 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 	/*
 	 * Chunks needed for results?
 	 *
+	 * o Read ops return data as write chunk(s), header as inline.
 	 * o If the expected result is under the inline threshold, all ops
 	 *   return as inline (but see later).
 	 * o Large non-read ops return as a single reply chunk.
-	 * o Large read ops return data as write chunk(s), header as inline.
-	 *
-	 * Note: the NFS code sending down multiple result segments implies
-	 * the op is one of read, readdir[plus], readlink or NFSv4 getacl.
-	 */
-
-	/*
-	 * This code can handle read chunks, write chunks OR reply
-	 * chunks -- only one type. If the request is too big to fit
-	 * inline, then we will choose read chunks. If the request is
-	 * a READ, then use write chunks to separate the file data
-	 * into pages; otherwise use reply chunks.
 	 */
-	if (rpcrdma_results_inline(rqst))
-		wtype = rpcrdma_noch;
-	else if (rqst->rq_rcv_buf.page_len == 0)
-		wtype = rpcrdma_replych;
-	else if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
+	if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
 		wtype = rpcrdma_writech;
+	else if (rpcrdma_results_inline(rqst))
+		wtype = rpcrdma_noch;
 	else
 		wtype = rpcrdma_replych;
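
[Editor's note] For readers skimming the archive, here is roughly how the
result-chunk selection in rpcrdma_marshal_req() reads once the hunk above is
applied. This is a minimal reconstruction assembled from the diff, not a
verbatim copy of the surrounding function; the identifiers (wtype,
rpcrdma_noch, rpcrdma_writech, rpcrdma_replych, rpcrdma_results_inline) all
come from the existing xprtrdma code shown in the patch.

	/* Result chunk selection after this patch:
	 *
	 * 1. Replies carrying READ-class data (XDRBUF_READ set on
	 *    rq_rcv_buf) always get a write list, so the adapter places
	 *    the payload directly into the client's pages -- no CPU copy,
	 *    regardless of payload size.
	 * 2. Otherwise, replies that fit under the inline threshold are
	 *    returned inline.
	 * 3. Everything else falls back to a reply chunk.
	 */
	if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
		wtype = rpcrdma_writech;
	else if (rpcrdma_results_inline(rqst))
		wtype = rpcrdma_noch;
	else
		wtype = rpcrdma_replych;

The ordering is the point of the patch: the XDRBUF_READ test now comes before
the inline-threshold test, so even small READ replies use the write list
instead of a reply chunk.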