From patchwork Mon Jul 13 16:31:17 2015
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 6780481
Subject: [PATCH v2 10/14] xprtrdma: Don't provide a reply chunk when expecting a short reply
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Mon, 13 Jul 2015 12:31:17 -0400
Message-ID: <20150713163117.17630.57213.stgit@manet.1015granger.net>
In-Reply-To: <20150713160617.17630.97475.stgit@manet.1015granger.net>
References: <20150713160617.17630.97475.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f

Currently Linux always offers a reply chunk, even when the reply can be
sent inline (i.e., when it is smaller than 1KB).

On the client, registering a memory region can be expensive. A server
may choose not to use the reply chunk, wasting the cost of the
registration.

This change affects only RPC replies smaller than 1KB which the server
constructs in the RPC reply send buffer. Because the elements of the
reply must be XDR encoded, a copy-free data transfer has no benefit in
this case.
Signed-off-by: Chuck Lever
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index e7cf976..62150ae 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -420,7 +420,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
	 *
	 * o Read ops return data as write chunk(s), header as inline.
	 * o If the expected result is under the inline threshold, all ops
-	 *   return as inline (but see later).
+	 *   return as inline.
	 * o Large non-read ops return as a single reply chunk.
	 */
	if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
@@ -476,17 +476,6 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
		headerp->rm_body.rm_nochunks.rm_empty[2] = xdr_zero;
		/* new length after pullup */
		rpclen = rqst->rq_svec[0].iov_len;
-		/* Currently we try to not actually use read inline.
-		 * Reply chunks have the desirable property that
-		 * they land, packed, directly in the target buffers
-		 * without headers, so they require no fixup. The
-		 * additional RDMA Write op sends the same amount
-		 * of data, streams on-the-wire and adds no overhead
-		 * on receive. Therefore, we request a reply chunk
-		 * for non-writes wherever feasible and efficient.
-		 */
-		if (wtype == rpcrdma_noch)
-			wtype = rpcrdma_replych;
	}

	if (rtype != rpcrdma_noch) {