From patchwork Tue Jul 15 14:11:34 2014
From: Chuck Lever
Subject: [PATCH] svcrdma: Add zero padding if the client doesn't send it
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Tue, 15 Jul 2014 10:11:34 -0400
Message-ID: <20140715140417.23046.27242.stgit@klimt.1015granger.net>

See RFC 5666 section 3.7: clients don't have to send zero XDR padding.

BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=246
Signed-off-by: Chuck Lever
---

Hi Bruce-

This is an alternative solution to changing the sanity check in the
XDR WRITE decoder. It adjusts the incoming xdr_buf to include a zero
pad just after the transport has received each RPC request.

Thoughts?
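For reference, the pad computed below is just the XDR quad round-up of
the xdr_buf's page_len; nothing transport-specific. A stand-alone
sketch of the arithmetic (user-space only, with a made-up payload
length; not part of the patch):

#include <stdio.h>

/* Same rounding as the kernel's XDR_QUADLEN() macro */
#define XDR_QUADLEN(l)	(((l) + 3) >> 2)

int main(void)
{
	unsigned int page_len = 4093;	/* hypothetical WRITE payload sent unpadded */
	unsigned int size = (XDR_QUADLEN(page_len) << 2) - page_len;

	/* 4093 rounds up to 4096, so 3 zero bytes must be supplied */
	printf("pad needed: %u bytes\n", size);
	return 0;
}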
 net/sunrpc/xprtrdma/svc_rdma_recvfrom.c |   30 ++++++++++++++++++++++++++++++
 1 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
index 8f92a61..9a3465d 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
@@ -43,6 +43,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -435,6 +436,34 @@ static int rdma_read_chunks(struct svcxprt_rdma *xprt,
 	return ret;
 }
 
+/*
+ * To avoid a separate RDMA READ just for a handful of zero bytes,
+ * RFC 5666 section 3.7 allows the client to omit the XDR zero pad
+ * in chunk lists.
+ */
+static void
+rdma_fix_xdr_pad(struct xdr_buf *buf)
+{
+	unsigned int page_len = buf->page_len;
+	unsigned int size = (XDR_QUADLEN(page_len) << 2) - page_len;
+	unsigned int offset, pg_no;
+	char *p;
+
+	if (size == 0)
+		return;
+
+	offset = page_len & ~PAGE_MASK;
+	pg_no = page_len >> PAGE_SHIFT;
+
+	p = kmap_atomic(buf->pages[pg_no]);
+	memset(p + offset, 0, size);
+	kunmap_atomic(p);
+
+	buf->page_len += size;
+	buf->buflen += size;
+	buf->len += size;
+}
+
 static int rdma_read_complete(struct svc_rqst *rqstp,
 			      struct svc_rdma_op_ctxt *head)
 {
@@ -449,6 +478,7 @@ static int rdma_read_complete(struct svc_rqst *rqstp,
 		rqstp->rq_pages[page_no] = head->pages[page_no];
 	}
 	/* Point rq_arg.pages past header */
+	rdma_fix_xdr_pad(&head->arg);
 	rqstp->rq_arg.pages = &rqstp->rq_pages[head->hdr_count];
 	rqstp->rq_arg.page_len = head->arg.page_len;
 	rqstp->rq_arg.page_base = head->arg.page_base;
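To illustrate where the memset lands (again user-space only, assuming
4 KiB pages and a made-up page_len; not part of the patch): because
page boundaries are themselves quad-aligned, the one-to-three pad
bytes always fit in the page that holds the last payload byte.

#include <stdio.h>

#define PAGE_SHIFT	12		/* assumed 4 KiB pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define XDR_QUADLEN(l)	(((l) + 3) >> 2)

int main(void)
{
	unsigned int page_len = 8189;	/* hypothetical: ends 3 bytes short of a quad */
	unsigned int size = (XDR_QUADLEN(page_len) << 2) - page_len;
	unsigned int pg_no = page_len >> PAGE_SHIFT;
	unsigned int offset = page_len & ~PAGE_MASK;

	/* prints: pad 3 bytes into page 1 at offset 4093 */
	printf("pad %u bytes into page %u at offset %u\n", size, pg_no, offset);
	return 0;
}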