From patchwork Thu Jul 9 20:42:46 2015
X-Patchwork-Submitter: Chuck Lever III
X-Patchwork-Id: 6759041
Subject: [PATCH v1 07/12] xprtrdma: Don't provide a reply chunk when
 expecting a short reply
From: Chuck Lever
To: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Date: Thu, 09 Jul 2015 16:42:46 -0400
Message-ID: <20150709204246.26247.10367.stgit@manet.1015granger.net>
In-Reply-To: <20150709203242.26247.4848.stgit@manet.1015granger.net>
References: <20150709203242.26247.4848.stgit@manet.1015granger.net>
User-Agent: StGit/0.17.1-3-g7d0f
List-ID: <linux-rdma.vger.kernel.org>

Currently Linux always offers a reply chunk, even for small replies
(unless a read or write list is needed for the RPC operation). A
comment in rpcrdma_marshal_req() reads:

> Currently we try to not actually use read inline.
> Reply chunks have the desirable property that
> they land, packed, directly in the target buffers
> without headers, so they require no fixup. The
> additional RDMA Write op sends the same amount
> of data, streams on-the-wire and adds no overhead
> on receive. Therefore, we request a reply chunk
> for non-writes wherever feasible and efficient.

This considers only the network bandwidth cost of sending the RPC
reply. For replies of only a few dozen bytes, this is typically not
a good trade-off.
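The decision this patch changes can be sketched as a small cost model.
This is a hypothetical, simplified illustration only: the function and
constant names below (choose_reply_type, INLINE_THRESHOLD) are not the
xprtrdma identifiers, and the threshold value is an assumption, not the
transport's actual inline limit.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch of the reply-type decision discussed above.
 * Before this patch, the "small non-read reply" case fell through to
 * a reply chunk; after it, small replies are simply expected inline. */
enum reply_type { REPLY_INLINE, REPLY_WRITE_CHUNK, REPLY_CHUNK };

#define INLINE_THRESHOLD 1024	/* assumed inline receive limit, bytes */

static enum reply_type
choose_reply_type(size_t expected_reply_len, bool is_read_op)
{
	if (is_read_op)
		return REPLY_WRITE_CHUNK;	/* READ payload via write list */
	if (expected_reply_len <= INLINE_THRESHOLD)
		return REPLY_INLINE;		/* small reply: no MR needed */
	return REPLY_CHUNK;			/* large non-read reply */
}
```

Under this model, a few-dozen-byte reply never pays for a memory
registration or an extra RDMA WRITE, which is the point of the patch.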
If the server chooses to return the reply inline:

- The client has registered and invalidated a memory region to catch
  the reply, which is then not used.

If the server chooses to use the reply chunk:

- The server sends a few bytes using a heavyweight RDMA WRITE
  operation. The entire RPC reply is conveyed in two RDMA operations
  (WRITE_ONLY, SEND) instead of one.

Note that both the server and client have to prepare or copy the
reply data anyway to construct these replies. There's no benefit to
using an RDMA transfer since the host CPU has to be involved.

Signed-off-by: Chuck Lever
Reviewed-by: Sagi Grimberg
---
 net/sunrpc/xprtrdma/rpc_rdma.c |   14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
index e569da4..8ac1448c 100644
--- a/net/sunrpc/xprtrdma/rpc_rdma.c
+++ b/net/sunrpc/xprtrdma/rpc_rdma.c
@@ -429,7 +429,7 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 	 *
 	 * o Read ops return data as write chunk(s), header as inline.
 	 * o If the expected result is under the inline threshold, all ops
-	 *   return as inline (but see later).
+	 *   return as inline.
 	 * o Large non-read ops return as a single reply chunk.
 	 */
 	if (rqst->rq_rcv_buf.flags & XDRBUF_READ)
@@ -503,18 +503,6 @@ rpcrdma_marshal_req(struct rpc_rqst *rqst)
 		headerp->rm_body.rm_nochunks.rm_empty[2] = xdr_zero;
 		/* new length after pullup */
 		rpclen = rqst->rq_svec[0].iov_len;
-		/*
-		 * Currently we try to not actually use read inline.
-		 * Reply chunks have the desirable property that
-		 * they land, packed, directly in the target buffers
-		 * without headers, so they require no fixup. The
-		 * additional RDMA Write op sends the same amount
-		 * of data, streams on-the-wire and adds no overhead
-		 * on receive. Therefore, we request a reply chunk
-		 * for non-writes wherever feasible and efficient.
-		 */
-		if (wtype == rpcrdma_noch)
-			wtype = rpcrdma_replych;
 	}
 }
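The "two RDMA operations instead of one" argument in the commit message
can be made concrete with a toy accounting sketch. This is not xprtrdma
code; the struct and function names are invented for illustration, and
the counts come straight from the description above (one SEND for an
inline reply; one RDMA WRITE plus one SEND, and one client memory
registration, for a reply-chunk reply).

```c
/* Toy per-reply cost accounting for the two reply paths discussed in
 * the commit message. Names are hypothetical, not kernel identifiers. */
struct reply_cost {
	int rdma_ops;		/* operations on the wire */
	int registrations;	/* client MRs registered and invalidated */
};

static struct reply_cost cost_inline_reply(void)
{
	/* One SEND carries header and payload; no MR is needed. */
	return (struct reply_cost){ .rdma_ops = 1, .registrations = 0 };
}

static struct reply_cost cost_reply_chunk(void)
{
	/* An RDMA WRITE (WRITE_ONLY) for the payload plus a SEND for the
	 * header, and the client registered an MR to catch the reply. */
	return (struct reply_cost){ .rdma_ops = 2, .registrations = 1 };
}
```

For a reply of a few dozen bytes, both paths touch the data with the
host CPU anyway, so the reply-chunk path's extra operation and extra
registration buy nothing.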