From patchwork Thu Jul 9 20:42:37 2015
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 6759001
Subject: [PATCH v1 06/12] xprtrdma: Always provide a write list when sending NFS READ
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Thu, 09 Jul 2015 16:42:37 -0400
Message-ID: <20150709204237.26247.297.stgit@manet.1015granger.net>
In-Reply-To: <20150709203242.26247.4848.stgit@manet.1015granger.net>
References: <20150709203242.26247.4848.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f

The client has been setting up a reply chunk for NFS READs that are
smaller than the inline threshold. This is not efficient: both the
server and client CPUs have to copy the reply's data payload into and
out of the memory region that is then transferred via RDMA.

Using the write list, the data payload is moved by the device and no
extra data copying is necessary.
Signed-off-by: Chuck Lever
Reviewed-by: Sagi Grimberg
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index 8cf9402..e569da4 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -427,28 +427,15 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 	/*
 	 * Chunks needed for results?
 	 *
+	 * o Read ops return data as write chunk(s), header as inline.
 	 * o If the expected result is under the inline threshold, all ops
 	 *   return as inline (but see later).
 	 * o Large non-read ops return as a single reply chunk.
-	 * o Large read ops return data as write chunk(s), header as inline.
-	 *
-	 * Note: the NFS code sending down multiple result segments implies
-	 * the op is one of read, readdir[plus], readlink or NFSv4 getacl.
-	 */
-
-	/*
-	 * This code can handle read chunks, write chunks OR reply
-	 * chunks -- only one type. If the request is too big to fit
-	 * inline, then we will choose read chunks. If the request is
-	 * a READ, then use write chunks to separate the file data
-	 * into pages; otherwise use reply chunks.
 	 */
-	if (rpcrdma_results_inline(rqst))
-		wtype = rpcrdma_noch;
-	else if (rqst->rq_rcv_buf.page_len == 0)
-		wtype = rpcrdma_replych;
-	else if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
+	if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
 		wtype = rpcrdma_writech;
+	else if (rpcrdma_results_inline(rqst))
+		wtype = rpcrdma_noch;
 	else
 		wtype = rpcrdma_replych;