From patchwork Wed Jan 29 16:09:27 2020
Subject: [PATCH RFC 1/8] nfsd: Fix NFSv4 READ on RDMA when using readv
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:27 -0500
Message-ID: <20200129160927.3024.90505.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>

svcrdma expects that the payload falls precisely into the xdr_buf
page vector. Adding "xdr->iov = NULL" forces xdr_reserve_space to
always use pages from xdr->buf->pages when calling nfsd_readv.

This code is called only when fops->splice_read is missing or when
RQ_SPLICE_OK is clear, so it's not a noticeable problem in many
common cases.
Fixes: b04209806384 ("nfsd4: allow exotic read compounds")
Buglink: https://bugzilla.kernel.org/show_bug.cgi?id=198053
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4xdr.c |   15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 75d1fc13a9c9..92a6ada60932 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3521,17 +3521,14 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 	u32 zzz = 0;
 	int pad;
 
+	/* Ensure xdr_reserve_space behaves itself */
+	if (xdr->iov == xdr->buf->head) {
+		xdr->iov = NULL;
+		xdr->end = xdr->p;
+	}
+
 	len = maxcount;
 	v = 0;
-
-	thislen = min_t(long, len, ((void *)xdr->end - (void *)xdr->p));
-	p = xdr_reserve_space(xdr, (thislen+3)&~3);
-	WARN_ON_ONCE(!p);
-	resp->rqstp->rq_vec[v].iov_base = p;
-	resp->rqstp->rq_vec[v].iov_len = thislen;
-	v++;
-	len -= thislen;
-
 	while (len) {
 		thislen = min_t(long, len, PAGE_SIZE);
 		p = xdr_reserve_space(xdr, (thislen+3)&~3);
From patchwork Wed Jan 29 16:09:33 2020
Subject: [PATCH RFC 2/8] SUNRPC: Add XDR infrastructure for automatically padding xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:33 -0500
Message-ID: <20200129160933.3024.87495.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>

Server/client agnostic API changes to support padding xdr_buf::pages
automatically.

Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/xdr.h |   74 ++++++++++++++++++++++++++++++++++----------
 net/sunrpc/svc.c           |    2 +
 net/sunrpc/svc_xprt.c      |   14 +++++---
 3 files changed, 67 insertions(+), 23 deletions(-)

diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
index b41f34977995..47f55aad5a1e 100644
--- a/include/linux/sunrpc/xdr.h
+++ b/include/linux/sunrpc/xdr.h
@@ -24,6 +24,34 @@
  */
 #define XDR_QUADLEN(l)	(((l) + 3) >> 2)
 
+/**
+ * xdr_align_size - Calculate padded size of an object
+ * @n: Size of an object being XDR encoded (in bytes)
+ *
+ * Return value:
+ *   Size (in bytes) of the object including xdr padding
+ */
+static inline size_t
+xdr_align_size(size_t n)
+{
+	const size_t mask = sizeof(__u32) - 1;
+
+	return (n + mask) & ~mask;
+}
+
+/**
+ * xdr_pad_size - Calculate XDR padding size
+ * @len: Actual size of object being XDR encoded (in bytes)
+ *
+ * Return value:
+ *   Size (in bytes) of needed XDR padding
+ */
+static inline size_t
+xdr_pad_size(size_t len)
+{
+	return xdr_align_size(len) - len;
+}
+
 /*
  * Generic opaque `network object.' At the kernel level, this type
  * is used only by lockd.
@@ -39,9 +67,6 @@ struct xdr_netobj {
  * Features a header (for a linear buffer containing RPC headers
  * and the data payload for short messages), and then an array of
  * pages.
- * The tail iovec allows you to append data after the page array. Its
- * main interest is for appending padding to the pages in order to
- * satisfy the int_32-alignment requirements in RFC1832.
 *
 * For the future, we might want to string several of these together
 * in a list if anybody wants to make use of NFSv4 COMPOUND
@@ -55,6 +80,7 @@ struct xdr_buf {
	struct page **	pages;		/* Array of pages */
	unsigned int	page_base,	/* Start of page data */
			page_len,	/* Length of page data */
+			page_pad,	/* XDR padding needed for pages */
			flags;		/* Flags for data disposition */
 #define XDRBUF_READ		0x01	/* target of file read */
 #define XDRBUF_WRITE		0x02	/* source of file write */
@@ -72,11 +98,39 @@
	buf->tail[0].iov_len = 0;
	buf->pages = NULL;
	buf->page_len = 0;
+	buf->page_pad = 0;
	buf->flags = 0;
	buf->len = 0;
	buf->buflen = len;
 }
 
+/**
+ * xdr_buf_set_pagelen - Set the length of the page list
+ * @buf: XDR buffer containing a message
+ * @len: Size of @buf's page list (in bytes)
+ *
+ */
+static inline void
+xdr_buf_set_pagelen(struct xdr_buf *buf, size_t len)
+{
+	buf->page_len = len;
+	buf->page_pad = xdr_pad_size(len);
+}
+
+/**
+ * xdr_buf_msglen - Return the length of the content in @buf
+ * @buf: XDR buffer containing an XDR-encoded message
+ *
+ * Return value:
+ *   Size (in bytes) of the content in @buf
+ */
+static inline size_t
+xdr_buf_msglen(const struct xdr_buf *buf)
+{
+	return buf->head[0].iov_len + buf->page_len + buf->page_pad +
+	       buf->tail[0].iov_len;
+}
+
 /*
  * pre-xdr'ed macros.
  */
@@ -285,20 +339,6 @@ ssize_t xdr_stream_decode_string(struct xdr_stream *xdr, char *str,
		size_t size);
 ssize_t xdr_stream_decode_string_dup(struct xdr_stream *xdr, char **str,
		size_t maxlen, gfp_t gfp_flags);
-/**
- * xdr_align_size - Calculate padded size of an object
- * @n: Size of an object being XDR encoded (in bytes)
- *
- * Return value:
- *   Size (in bytes) of the object including xdr padding
- */
-static inline size_t
-xdr_align_size(size_t n)
-{
-	const size_t mask = sizeof(__u32) - 1;
-
-	return (n + mask) & ~mask;
-}
 
 /**
  * xdr_stream_encode_u32 - Encode a 32-bit integer
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index 187dd4e73d64..63a3afac7100 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -1516,7 +1516,7 @@ static __printf(2,3) void svc_printk(struct svc_rqst *rqstp, const char *fmt, ..
	rqstp->rq_res.pages = rqstp->rq_respages + 1;
	rqstp->rq_res.len = 0;
	rqstp->rq_res.page_base = 0;
-	rqstp->rq_res.page_len = 0;
+	xdr_buf_set_pagelen(&rqstp->rq_res, 0);
	rqstp->rq_res.buflen = PAGE_SIZE;
	rqstp->rq_res.tail[0].iov_base = NULL;
	rqstp->rq_res.tail[0].iov_len = 0;
diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
index de3c077733a7..032c3bc91d43 100644
--- a/net/sunrpc/svc_xprt.c
+++ b/net/sunrpc/svc_xprt.c
@@ -510,7 +510,7 @@ static void svc_xprt_release(struct svc_rqst *rqstp)
	rqstp->rq_deferred = NULL;
 
	svc_free_res_pages(rqstp);
-	rqstp->rq_res.page_len = 0;
+	xdr_buf_set_pagelen(&rqstp->rq_res, 0);
	rqstp->rq_res.page_base = 0;
 
	/* Reset response buffer and release
@@ -666,7 +666,7 @@ static int svc_alloc_arg(struct svc_rqst *rqstp)
	arg->pages = rqstp->rq_pages + 1;
	arg->page_base = 0;
	/* save at least one page for response */
-	arg->page_len = (pages-2)*PAGE_SIZE;
+	xdr_buf_set_pagelen(arg, (pages - 2) << PAGE_SHIFT);
	arg->len = (pages-1)*PAGE_SIZE;
	arg->tail[0].iov_len = 0;
	return 0;
@@ -902,9 +902,13 @@ int svc_send(struct svc_rqst *rqstp)
 
	/* calculate over-all length */
	xb = &rqstp->rq_res;
-	xb->len = xb->head[0].iov_len +
-		xb->page_len +
-		xb->tail[0].iov_len;
+	xb->len = xdr_buf_msglen(xb);
+	if ((xb->head[0].iov_len & 3) != 0)
+		trace_printk("head=[%p,%zu] page=%u/%u tail=[%p,%zu] msglen=%zu buflen=%u",
+			     xb->head[0].iov_base, xb->head[0].iov_len,
+			     xb->page_len, xb->page_pad,
+			     xb->tail[0].iov_base, xb->tail[0].iov_len,
+			     xdr_buf_msglen(xb), xb->buflen);
 
	/* Grab mutex to serialize outgoing data. */
	mutex_lock(&xprt->xpt_mutex);
From patchwork Wed Jan 29 16:09:40 2020
Subject: [PATCH RFC 3/8] SUNRPC: TCP transport support for automated padding of xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:40 -0500
Message-ID: <20200129160939.3024.63670.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>

Signed-off-by: Chuck Lever
---
 net/sunrpc/svcsock.c |   39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 2934dd711715..966ea431f845 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -187,14 +187,12 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
	size_t base = xdr->page_base;
	unsigned int pglen = xdr->page_len;
	unsigned int flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
-	int slen;
+	int slen = xdr_buf_msglen(xdr);
	int len = 0;
 
-	slen = xdr->len;
-
	/* send head */
	if (slen == xdr->head[0].iov_len)
-		flags = 0;
+		flags = MSG_EOR;
	len = kernel_sendpage(sock, headpage, headoffset,
				  xdr->head[0].iov_len, flags);
	if (len != xdr->head[0].iov_len)
@@ -207,7 +205,7 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
	size = PAGE_SIZE - base < pglen ? PAGE_SIZE - base : pglen;
	while (pglen > 0) {
		if (slen == size)
-			flags = 0;
+			flags = MSG_EOR;
		result = kernel_sendpage(sock, *ppage, base, size, flags);
		if (result > 0)
			len += result;
@@ -219,11 +217,21 @@ int svc_send_common(struct socket *sock, struct xdr_buf *xdr,
		base = 0;
		ppage++;
	}
+	if (xdr->page_pad) {
+		if (!xdr->tail[0].iov_len)
+			flags = MSG_EOR;
+		result = kernel_sendpage(sock, ZERO_PAGE(0), 0,
+					 xdr->page_pad, flags);
+		if (result > 0)
+			len += result;
+		if (result != xdr->page_pad)
+			goto out;
+	}
 
	/* send tail */
	if (xdr->tail[0].iov_len) {
		result = kernel_sendpage(sock, tailpage, tailoffset,
-					 xdr->tail[0].iov_len, 0);
+					 xdr->tail[0].iov_len, MSG_EOR);
		if (result > 0)
			len += result;
	}
@@ -272,9 +280,9 @@ static int svc_sendto(struct svc_rqst *rqstp, struct xdr_buf *xdr)
			rqstp->rq_respages[0], tailoff);
 
 out:
-	dprintk("svc: socket %p sendto([%p %zu... ], %d) = %d (addr %s)\n",
+	dprintk("svc: socket %p sendto([%p %zu... ], %zu) = %d (addr %s)\n",
		svsk, xdr->head[0].iov_base, xdr->head[0].iov_len,
-		xdr->len, len, svc_print_addr(rqstp, buf, sizeof(buf)));
+		xdr_buf_msglen(xdr), len, svc_print_addr(rqstp, buf, sizeof(buf)));
	return len;
 }
 
@@ -1134,24 +1142,25 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 static int svc_tcp_sendto(struct svc_rqst *rqstp)
 {
	struct xdr_buf *xbufp = &rqstp->rq_res;
+	u32 reclen = xdr_buf_msglen(xbufp);
+	__be32 marker;
	int sent;
-	__be32 reclen;
 
	/* Set up the first element of the reply kvec.
	 * Any other kvecs that may be in use have been taken
	 * care of by the server implementation itself.
	 */
-	reclen = htonl(0x80000000|((xbufp->len ) - 4));
-	memcpy(xbufp->head[0].iov_base, &reclen, 4);
+	marker = cpu_to_be32(0x80000000 | (reclen - sizeof(marker)));
+	memcpy(xbufp->head[0].iov_base, &marker, sizeof(marker));
 
	sent = svc_sendto(rqstp, &rqstp->rq_res);
-	if (sent != xbufp->len) {
+	if (sent != reclen) {
		printk(KERN_NOTICE
-		       "rpc-srv/tcp: %s: %s %d when sending %d bytes "
+		       "rpc-srv/tcp: %s: %s %d when sending %u bytes "
		       "- shutting down socket\n",
		       rqstp->rq_xprt->xpt_server->sv_name,
-		       (sent<0)?"got error":"sent only",
-		       sent, xbufp->len);
+		       (sent<0)?"got error":"sent",
+		       sent, reclen);
		set_bit(XPT_CLOSE, &rqstp->rq_xprt->xpt_flags);
		svc_xprt_enqueue(rqstp->rq_xprt);
		sent = -EAGAIN;
From patchwork Wed Jan 29 16:09:46 2020
Subject: [PATCH RFC 4/8] svcrdma: RDMA transport support for automated padding of xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:46 -0500
Message-ID: <20200129160946.3024.22245.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>

---
 net/sunrpc/xprtrdma/svc_rdma_rw.c     |   13 +++++++++++++
 net/sunrpc/xprtrdma/svc_rdma_sendto.c |   27 +++++++++++++++++++--------
 2 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
index 467d40a1dffa..a7fb886ea136 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
@@ -14,6 +14,8 @@
 #include "xprt_rdma.h"
 #include <trace/events/rpcrdma.h>
 
+static const __be32 xdr_padding = xdr_zero;
+
 static void svc_rdma_write_done(struct ib_cq *cq, struct ib_wc *wc);
 static void svc_rdma_wc_read_done(struct ib_cq *cq, struct ib_wc *wc);
 
@@ -559,6 +561,9 @@ int svc_rdma_send_reply_chunk(struct svcxprt_rdma *rdma, __be32 *rp_ch,
 {
	struct svc_rdma_write_info *info;
	int consumed, ret;
+	struct kvec pad = {
+		.iov_base = (void *)&xdr_padding,
+	};
 
	info = svc_rdma_write_info_alloc(rdma, rp_ch);
	if (!info)
@@ -577,6 +582,14 @@ int svc_rdma_send_reply_chunk(struct svcxprt_rdma *rdma, __be32 *rp_ch,
		if (ret < 0)
			goto out_err;
		consumed += xdr->page_len;
+
+		if (xdr->page_pad) {
+			pad.iov_len = xdr->page_pad;
+			ret = svc_rdma_send_xdr_kvec(info, &pad);
+			if (ret < 0)
+				goto out_err;
+			consumed += pad.iov_len;
+		}
	}
 
	if (xdr->tail[0].iov_len) {
diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
index 33f817519964..d0f9acfe60a6 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
@@ -112,6 +112,8 @@
 #include "xprt_rdma.h"
 #include <trace/events/rpcrdma.h>
 
+static const __be32 xdr_padding = xdr_zero;
+
 static void svc_rdma_wc_send(struct ib_cq *cq, struct ib_wc *wc);
 
 static inline struct svc_rdma_send_ctxt *
@@ -320,11 +322,6 @@ int svc_rdma_send(struct svcxprt_rdma *rdma, struct ib_send_wr *wr)
	return ret;
 }
 
-static u32 xdr_padsize(u32 len)
-{
-	return (len & 3) ? (4 - (len & 3)) : 0;
-}
-
 /* Returns length of transport header, in bytes.
 */
 static unsigned int svc_rdma_reply_hdr_len(__be32 *rdma_resp)
@@ -561,6 +558,8 @@ static bool svc_rdma_pull_up_needed(struct svcxprt_rdma *rdma,
				     remaining);
			pageoff = 0;
		}
+		if (xdr->page_pad)
+			++elements;
	}
 
	/* xdr->tail */
@@ -593,7 +592,7 @@ static int svc_rdma_pull_up_reply_msg(struct svcxprt_rdma *rdma,
	if (wr_lst) {
		u32 xdrpad;
 
-		xdrpad = xdr_padsize(xdr->page_len);
+		xdrpad = xdr_pad_size(xdr->page_len);
		if (taillen && xdrpad) {
			tailbase += xdrpad;
			taillen -= xdrpad;
@@ -614,12 +613,16 @@ static int svc_rdma_pull_up_reply_msg(struct svcxprt_rdma *rdma,
			dst += len;
			pageoff = 0;
		}
+		if (xdr->page_pad) {
+			memcpy(dst, &xdr_padding, xdr->page_pad);
+			dst += xdr->page_pad;
+		}
	}
	if (taillen)
		memcpy(dst, tailbase, taillen);
 
-	ctxt->sc_sges[0].length += xdr->len;
+	ctxt->sc_sges[0].length += xdr_buf_msglen(xdr);
	ib_dma_sync_single_for_device(rdma->sc_pd->device,
				      ctxt->sc_sges[0].addr,
				      ctxt->sc_sges[0].length,
@@ -668,7 +671,7 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
	if (wr_lst) {
		base = xdr->tail[0].iov_base;
		len = xdr->tail[0].iov_len;
-		xdr_pad = xdr_padsize(xdr->page_len);
+		xdr_pad = xdr_pad_size(xdr->page_len);
 
		if (len && xdr_pad) {
			base += xdr_pad;
@@ -693,6 +696,14 @@ int svc_rdma_map_reply_msg(struct svcxprt_rdma *rdma,
		remaining -= len;
		page_off = 0;
	}
+	if (xdr->page_pad) {
+		++ctxt->sc_cur_sge_no;
+		ret = svc_rdma_dma_map_buf(rdma, ctxt,
+					   (unsigned char *)&xdr_padding,
+					   xdr->page_pad);
+		if (ret < 0)
+			return ret;
+	}
 
	base = xdr->tail[0].iov_base;
	len = xdr->tail[0].iov_len;
From patchwork Wed Jan 29 16:09:52 2020
Subject: [PATCH RFC 5/8] NFSD: NFSv2 support for automated padding of xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:52 -0500
Message-ID: <20200129160952.3024.44348.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>

Signed-off-by: Chuck Lever
---
 fs/nfsd/nfsxdr.c |   20 ++++----------------
 1 file changed, 4 insertions(+), 16 deletions(-)

diff --git a/fs/nfsd/nfsxdr.c b/fs/nfsd/nfsxdr.c
index b51fe515f06f..06e3a021b87a 100644
--- a/fs/nfsd/nfsxdr.c
+++ b/fs/nfsd/nfsxdr.c
@@ -455,13 +455,7 @@ __be32 *nfs2svc_encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *f
	*p++ = htonl(resp->len);
	xdr_ressize_check(rqstp, p);
 
-	rqstp->rq_res.page_len = resp->len;
-	if (resp->len & 3) {
-		/* need to pad the tail */
-		rqstp->rq_res.tail[0].iov_base = p;
-		*p = 0;
-		rqstp->rq_res.tail[0].iov_len = 4 - (resp->len&3);
-	}
+	xdr_buf_set_pagelen(&rqstp->rq_res, resp->len);
	return 1;
 }
 
@@ -475,13 +469,7 @@ __be32 *nfs2svc_encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *f
	xdr_ressize_check(rqstp, p);
 
	/* now update rqstp->rq_res to reflect data as well */
-	rqstp->rq_res.page_len = resp->count;
-	if (resp->count & 3) {
-		/* need to pad the tail */
-		rqstp->rq_res.tail[0].iov_base = p;
-		*p = 0;
-		rqstp->rq_res.tail[0].iov_len = 4 - (resp->count&3);
-	}
+	xdr_buf_set_pagelen(&rqstp->rq_res, resp->count);
	return 1;
 }
 
@@ -494,8 +482,8 @@ __be32 *nfs2svc_encode_fattr(struct svc_rqst *rqstp, __be32 *p, struct svc_fh *f
	p = resp->buffer;
	*p++ = 0;			/* no more entries */
	*p++ = htonl((resp->common.err == nfserr_eof));
-	rqstp->rq_res.page_len = (((unsigned long)p-1) & ~PAGE_MASK)+1;
-
+	xdr_buf_set_pagelen(&rqstp->rq_res,
+			    (((unsigned long)p - 1) & ~PAGE_MASK) + 1);
	return 1;
 }
3EjY13WtIMUKxQ3bU/5VboK9/OdrQpbN0p6zf26b3m9z9TBueSHva7AUT8BreigUWUyi +hhA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:sender:subject:from:to:cc:date:message-id :in-reply-to:references:user-agent:mime-version :content-transfer-encoding; bh=FvixT/cv88cGCv79br0TFn8gyk6T4ue7cRMWPBqvE6Y=; b=tLRgsEn+TvdZy0fW2PVG6x91O9PdM0lQ9AirD/LdGUq25vcve1fY8D9Pnq4pHPI8ai dJN9EUhg0P60v/TCFZfaTd4njprwD1KHEW6WzibDh6/s9YOtK9YJ/5CRAhyGRxXFqYMi hI+KL6ITk/OHueMYIcvZAwp9ikXbiGkfNvucbDFFRUE3R7+X9VtHv/ih0KKkxc7HHXCC iyhQYT+m3cOMIfVkp8Uxq2aq9GowvFNfq3gW8rKORo7jfHp5UkEgcI44vAE8ygv/kn4x v9GZ9SbMaND5XjWxIs5k/Yoi5a3YGenNU52gCaEk+ciFZd2GYVFjRZcsDz8+VZKBQGAk Q59A== X-Gm-Message-State: APjAAAVDupcVzdU/ULUw+Xqh6GYxYdeJSNVC8dtiU6jHiPr9jvMmGbST W42SHMs5xnQzCSmzbVCrhx0= X-Google-Smtp-Source: APXvYqyxWnc9GByHliEPMPTF+5T3ITRrG1mKLQTEDfrEoz7IOOYVeIRZA3vROCGWkJ4bSpG0gQFkJA== X-Received: by 2002:a81:334a:: with SMTP id z71mr20239356ywz.238.1580314199906; Wed, 29 Jan 2020 08:09:59 -0800 (PST) Received: from bazille.1015granger.net (c-68-61-232-219.hsd1.mi.comcast.net. 
[68.61.232.219]) by smtp.gmail.com with ESMTPSA id h193sm1129635ywc.88.2020.01.29.08.09.59 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 29 Jan 2020 08:09:59 -0800 (PST)
Subject: [PATCH RFC 6/8] NFSD: NFSv3 support for automated padding of xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:09:58 -0500
Message-ID: <20200129160958.3024.25842.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Sender: linux-nfs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-nfs@vger.kernel.org

Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs3xdr.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index 195ab7a0fc89..91d96a329033 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -709,13 +709,7 @@ void fill_post_wcc(struct svc_fh *fhp)
 	if (resp->status == 0) {
 		*p++ = htonl(resp->len);
 		xdr_ressize_check(rqstp, p);
-		rqstp->rq_res.page_len = resp->len;
-		if (resp->len & 3) {
-			/* need to pad the tail */
-			rqstp->rq_res.tail[0].iov_base = p;
-			*p = 0;
-			rqstp->rq_res.tail[0].iov_len = 4 - (resp->len&3);
-		}
+		xdr_buf_set_pagelen(&rqstp->rq_res, resp->len);
 		return 1;
 	} else
 		return xdr_ressize_check(rqstp, p);
@@ -733,14 +727,7 @@ void fill_post_wcc(struct svc_fh *fhp)
 		*p++ = htonl(resp->eof);
 		*p++ = htonl(resp->count);	/* xdr opaque count */
 		xdr_ressize_check(rqstp, p);
-		/* now update rqstp->rq_res to reflect data as well */
-		rqstp->rq_res.page_len = resp->count;
-		if (resp->count & 3) {
-			/* need to pad the tail */
-			rqstp->rq_res.tail[0].iov_base = p;
-			*p = 0;
-			rqstp->rq_res.tail[0].iov_len = 4 - (resp->count & 3);
-		}
+		xdr_buf_set_pagelen(&rqstp->rq_res, resp->count);
 		return 1;
 	} else
 		return xdr_ressize_check(rqstp, p);
@@ -817,7 +804,7 @@ void fill_post_wcc(struct svc_fh *fhp)
 	xdr_ressize_check(rqstp, p);
 	if (rqstp->rq_res.head[0].iov_len + (2<<2) > PAGE_SIZE)
 		return 1; /*No room for trailer */
-	rqstp->rq_res.page_len = (resp->count) << 2;
+	xdr_buf_set_pagelen(&rqstp->rq_res, resp->count << 2);

 	/* add the 'tail' to the end of the 'head' page - page 0. */
 	rqstp->rq_res.tail[0].iov_base = p;

From patchwork Wed Jan 29 16:10:05 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 11356449
Received: from bazille.1015granger.net (c-68-61-232-219.hsd1.mi.comcast.net.
[68.61.232.219]) by smtp.gmail.com with ESMTPSA id j68sm1124817ywg.6.2020.01.29.08.10.05 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 29 Jan 2020 08:10:05 -0800 (PST)
Subject: [PATCH RFC 7/8] sunrpc: Add new contractual constraint on xdr_buf API
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:10:05 -0500
Message-ID: <20200129161005.3024.19820.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Sender: linux-nfs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-nfs@vger.kernel.org

Server-side only.

Let's move some XDR padding logic into the transports, where it can be
handled automatically. Essentially, all three sections of the xdr_buf
always begin on XDR alignment, and the RPC layer inserts padding between
the sections to enforce that guarantee as the buffer is transmitted.

o head[0] has always begun on an XDR boundary, so no change there.

o Insertion should be needed "almost never" for the boundary between
  head[0] and the page list. This is determined by checking whether
  head[0].iov_len is XDR-aligned.

o Insertion might be somewhat frequent for the boundary between the
  page list and tail[0]. This is determined by checking whether
  (head[0].iov_len + page_len) is XDR-aligned.

Whatever is contained in each section of the xdr_buf remains the
responsibility of the upper layer to form correctly. So if
nfsd4_encode_readv wants to stuff a bunch of XDR items into the page
list, it is free to do so without confusing the transport. The only
alignment concern is the last item in the page list, which is handled
automatically by RPC and the transport layers.

One benefit of this change is that padding logic duplicated throughout
the NFSD Reply encoders can be eliminated. Those changes will occur in
subsequent patches.
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfs4xdr.c | 55 +++++++++++++----------------------------------------
 net/sunrpc/xdr.c  | 12 ++++++------
 2 files changed, 19 insertions(+), 48 deletions(-)

diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index 92a6ada60932..c4d5595dd38e 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -3457,10 +3457,6 @@ static __be32 nfsd4_encode_splice_read(
 	__be32 nfserr;
 	__be32 *p = xdr->p - 2;

-	/* Make sure there will be room for padding if needed */
-	if (xdr->end - xdr->p < 1)
-		return nfserr_resource;
-
 	nfserr = nfsd_splice_read(read->rd_rqstp, read->rd_fhp, file,
 				  read->rd_offset, &maxcount, &eof);
 	read->rd_length = maxcount;
@@ -3470,32 +3466,18 @@ static __be32 nfsd4_encode_splice_read(
 		 * page length; reset it so as not to confuse
 		 * xdr_truncate_encode:
 		 */
-		buf->page_len = 0;
+		xdr_buf_set_pagelen(buf, 0);
 		return nfserr;
 	}

 	*(p++) = htonl(eof);
 	*(p++) = htonl(maxcount);

-	buf->page_len = maxcount;
-	buf->len += maxcount;
+	xdr_buf_set_pagelen(buf, maxcount);
+	buf->len = xdr_buf_msglen(buf);
 	xdr->page_ptr += (buf->page_base + maxcount + PAGE_SIZE - 1)
 							/ PAGE_SIZE;

-	/* Use rest of head for padding and remaining ops: */
-	buf->tail[0].iov_base = xdr->p;
-	buf->tail[0].iov_len = 0;
-	xdr->iov = buf->tail;
-	if (maxcount&3) {
-		int pad = 4 - (maxcount&3);
-
-		*(xdr->p++) = 0;
-
-		buf->tail[0].iov_base += maxcount&3;
-		buf->tail[0].iov_len = pad;
-		buf->len += pad;
-	}
-
 	space_left = min_t(int, (void *)xdr->end - (void *)xdr->p,
 				buf->buflen - buf->len);
 	buf->buflen = buf->len + space_left;
@@ -3518,8 +3500,6 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 	__be32 nfserr;
 	__be32 tmp;
 	__be32 *p;
-	u32 zzz = 0;
-	int pad;

 	/* Ensure xdr_reserve_space behaves itself */
 	if (xdr->iov == xdr->buf->head) {
@@ -3531,7 +3511,7 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 	v = 0;
 	while (len) {
 		thislen = min_t(long, len, PAGE_SIZE);
-		p = xdr_reserve_space(xdr, (thislen+3)&~3);
+		p = xdr_reserve_space(xdr, thislen);
 		WARN_ON_ONCE(!p);
 		resp->rqstp->rq_vec[v].iov_base = p;
 		resp->rqstp->rq_vec[v].iov_len = thislen;
@@ -3540,23 +3520,18 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 	}
 	read->rd_vlen = v;

-	len = maxcount;
 	nfserr = nfsd_readv(resp->rqstp, read->rd_fhp, file, read->rd_offset,
 			    resp->rqstp->rq_vec, read->rd_vlen, &maxcount,
 			    &eof);
 	read->rd_length = maxcount;
 	if (nfserr)
 		return nfserr;
-	xdr_truncate_encode(xdr, starting_len + 8 + ((maxcount+3)&~3));
+	xdr_truncate_encode(xdr, starting_len + 8 + maxcount);

 	tmp = htonl(eof);
 	write_bytes_to_xdr_buf(xdr->buf, starting_len    , &tmp, 4);
 	tmp = htonl(maxcount);
 	write_bytes_to_xdr_buf(xdr->buf, starting_len + 4, &tmp, 4);
-
-	pad = (maxcount&3) ? 4 - (maxcount&3) : 0;
-	write_bytes_to_xdr_buf(xdr->buf, starting_len + 8 + maxcount,
-								&zzz, pad);
 	return 0;
 }
@@ -3610,8 +3585,7 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 nfsd4_encode_readlink(struct nfsd4_compoundres *resp, __be32 nfserr,
 		      struct nfsd4_readlink *readlink)
 {
 	int maxcount;
-	__be32 wire_count;
-	int zero = 0;
+	__be32 tmp;
 	struct xdr_stream *xdr = &resp->xdr;
 	int length_offset = xdr->buf->len;
 	__be32 *p;
@@ -3621,9 +3595,12 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 		return nfserr_resource;
 	maxcount = PAGE_SIZE;

+	/* XXX: This is probably going to result
+	 * in the same bad behavior of RPC/RDMA */
 	p = xdr_reserve_space(xdr, maxcount);
 	if (!p)
 		return nfserr_resource;
+
 	/*
 	 * XXX: By default, vfs_readlink() will truncate symlinks if they
 	 * would overflow the buffer.  Is this kosher in NFSv4?  If not, one
@@ -3639,12 +3616,9 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 		return nfserr;
 	}

-	wire_count = htonl(maxcount);
-	write_bytes_to_xdr_buf(xdr->buf, length_offset, &wire_count, 4);
-	xdr_truncate_encode(xdr, length_offset + 4 + ALIGN(maxcount, 4));
-	if (maxcount & 3)
-		write_bytes_to_xdr_buf(xdr->buf, length_offset + 4 + maxcount,
-						&zero, 4 - (maxcount&3));
+	tmp = cpu_to_be32(maxcount);
+	write_bytes_to_xdr_buf(xdr->buf, length_offset, &tmp, 4);
+	xdr_truncate_encode(xdr, length_offset + 4 + xdr_align_size(maxcount));
 	return 0;
 }
@@ -3718,6 +3692,7 @@ static __be32 nfsd4_encode_readv(struct nfsd4_compoundres *resp,
 	}
 	if (nfserr)
 		goto err_no_verf;
+	WARN_ON(maxcount != xdr_align_size(maxcount));

 	if (readdir->cookie_offset) {
 		wire_offset = cpu_to_be64(offset);
@@ -4568,10 +4543,6 @@ void nfsd4_release_compoundargs(struct svc_rqst *rqstp)
 	 * All that remains is to write the tag and operation count...
 	 */
 	struct nfsd4_compoundres *resp = rqstp->rq_resp;
-	struct xdr_buf *buf = resp->xdr.buf;
-
-	WARN_ON_ONCE(buf->len != buf->head[0].iov_len + buf->page_len +
-		     buf->tail[0].iov_len);

 	rqstp->rq_next_page = resp->xdr.page_ptr + 1;
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index f3104be8ff5d..d2dadb200024 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -675,15 +675,15 @@ void xdr_truncate_encode(struct xdr_stream *xdr, size_t len)
 	int fraglen;
 	int new;

-	if (len > buf->len) {
+	if (len > xdr_buf_msglen(buf)) {
 		WARN_ON_ONCE(1);
 		return;
 	}
 	xdr_commit_encode(xdr);

-	fraglen = min_t(int, buf->len - len, tail->iov_len);
+	fraglen = min_t(int, xdr_buf_msglen(buf) - len, tail->iov_len);
 	tail->iov_len -= fraglen;
-	buf->len -= fraglen;
+	buf->len = xdr_buf_msglen(buf);
 	if (tail->iov_len) {
 		xdr->p = tail->iov_base + tail->iov_len;
 		WARN_ON_ONCE(!xdr->end);
@@ -691,12 +691,12 @@ void xdr_truncate_encode(struct xdr_stream *xdr, size_t len)
 		return;
 	}
 	WARN_ON_ONCE(fraglen);
-	fraglen = min_t(int, buf->len - len, buf->page_len);
+	fraglen = min_t(int, xdr_buf_msglen(buf) - len, buf->page_len);
 	buf->page_len -= fraglen;
-	buf->len -= fraglen;
+	buf->page_pad = xdr_pad_size(buf->page_len);
+	buf->len = xdr_buf_msglen(buf);

 	new = buf->page_base + buf->page_len;
-
 	xdr->page_ptr = buf->pages + (new >> PAGE_SHIFT);
 	if (buf->page_len) {

From patchwork Wed Jan 29 16:10:11 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 11356451
Received: from bazille.1015granger.net (c-68-61-232-219.hsd1.mi.comcast.net.
[68.61.232.219]) by smtp.gmail.com with ESMTPSA id d137sm1152575ywd.86.2020.01.29.08.10.11 (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128); Wed, 29 Jan 2020 08:10:12 -0800 (PST)
Subject: [PATCH RFC 8/8] SUNRPC: GSS support for automated padding of xdr_buf::pages
From: Chuck Lever
To: bfields@fieldses.org
Cc: linux-nfs@vger.kernel.org
Date: Wed, 29 Jan 2020 11:10:11 -0500
Message-ID: <20200129161011.3024.26645.stgit@bazille.1015granger.net>
In-Reply-To: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
References: <20200129155516.3024.56575.stgit@bazille.1015granger.net>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Sender: linux-nfs-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-nfs@vger.kernel.org

Signed-off-by: Chuck Lever
---
 net/sunrpc/auth_gss/gss_krb5_crypto.c | 13 ++++----
 net/sunrpc/auth_gss/gss_krb5_wrap.c   | 11 ++++---
 net/sunrpc/auth_gss/svcauth_gss.c     | 51 +++++++++++++++++++--------------
 net/sunrpc/xdr.c                      |  3 ++
 4 files changed, 45 insertions(+), 33 deletions(-)

diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
index 6f2d30d7b766..eb5a43b61b42 100644
--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
+++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
@@ -412,8 +412,9 @@
 	err = crypto_ahash_init(req);
 	if (err)
 		goto out;
-	err = xdr_process_buf(body, body_offset, body->len - body_offset,
-			      checksummer, req);
+	err = xdr_process_buf(body, body_offset,
+			      xdr_buf_msglen(body) - body_offset, checksummer,
+			      req);
 	if (err)
 		goto out;
 	if (header != NULL) {
@@ -682,12 +683,10 @@ struct decryptor_desc {
 	SYNC_SKCIPHER_REQUEST_ON_STACK(req, cipher);
 	u8 *data;
 	struct page **save_pages;
-	u32 len = buf->len - offset;
+	u32 len = xdr_buf_msglen(buf) - offset;

-	if (len > GSS_KRB5_MAX_BLOCKSIZE * 2) {
-		WARN_ON(0);
+	if (len > GSS_KRB5_MAX_BLOCKSIZE * 2)
 		return -ENOMEM;
-	}
 	data = kmalloc(GSS_KRB5_MAX_BLOCKSIZE * 2, GFP_NOFS);
 	if (!data)
 		return -ENOMEM;
@@ -800,7 +799,7 @@ struct decryptor_desc {
 	if (err)
 		return GSS_S_FAILURE;

-	nbytes = buf->len - offset - GSS_KRB5_TOK_HDR_LEN;
+	nbytes = xdr_buf_msglen(buf) - offset - GSS_KRB5_TOK_HDR_LEN;
 	nblocks = (nbytes + blocksize - 1) / blocksize;
 	cbcbytes = 0;
 	if (nblocks > 2)
diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
index 14a0aff0cd84..8d71d561f430 100644
--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
+++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
@@ -405,12 +405,13 @@ static void rotate_buf_a_little(struct xdr_buf *buf, unsigned int shift)
 	BUG_ON(shift > LOCAL_BUF_LEN);

 	read_bytes_from_xdr_buf(buf, 0, head, shift);
-	for (i = 0; i + shift < buf->len; i += LOCAL_BUF_LEN) {
-		this_len = min(LOCAL_BUF_LEN, buf->len - (i + shift));
+	for (i = 0; i + shift < xdr_buf_msglen(buf); i += LOCAL_BUF_LEN) {
+		this_len = min_t(unsigned int, LOCAL_BUF_LEN,
+				 xdr_buf_msglen(buf) - (i + shift));
 		read_bytes_from_xdr_buf(buf, i+shift, tmp, this_len);
 		write_bytes_to_xdr_buf(buf, i, tmp, this_len);
 	}
-	write_bytes_to_xdr_buf(buf, buf->len - shift, head, shift);
+	write_bytes_to_xdr_buf(buf, xdr_buf_msglen(buf) - shift, head, shift);
 }

 static void _rotate_left(struct xdr_buf *buf, unsigned int shift)
@@ -418,7 +419,7 @@ static void _rotate_left(struct xdr_buf *buf, unsigned int shift)
 	int shifted = 0;
 	int this_shift;

-	shift %= buf->len;
+	shift %= xdr_buf_msglen(buf);
 	while (shifted < shift) {
 		this_shift = min(shift - shifted, LOCAL_BUF_LEN);
 		rotate_buf_a_little(buf, this_shift);
@@ -430,7 +431,7 @@ static void rotate_left(u32 base, struct xdr_buf *buf, unsigned int shift)
 {
 	struct xdr_buf subbuf;

-	xdr_buf_subsegment(buf, &subbuf, base, buf->len - base);
+	xdr_buf_subsegment(buf, &subbuf, base, xdr_buf_msglen(buf) - base);
 	_rotate_left(&subbuf, shift);
 }
diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
index c62d1f10978b..893b9114cb8a 100644
--- a/net/sunrpc/auth_gss/svcauth_gss.c
+++ b/net/sunrpc/auth_gss/svcauth_gss.c
@@ -907,12 +907,6 @@ u32 svcauth_gss_flavor(struct auth_domain *dom)
 	return stat;
 }

-static inline int
-total_buf_len(struct xdr_buf *buf)
-{
-	return buf->head[0].iov_len + buf->page_len + buf->tail[0].iov_len;
-}
-
 static void
 fix_priv_head(struct xdr_buf *buf, int pad)
 {
@@ -941,7 +935,7 @@ u32 svcauth_gss_flavor(struct auth_domain *dom)
 	/* buf->len is the number of bytes from the original start of the
 	 * request to the end, where head[0].iov_len is just the bytes
 	 * not yet read from the head, so these two values are different: */
-	remaining_len = total_buf_len(buf);
+	remaining_len = xdr_buf_msglen(buf);
 	if (priv_len > remaining_len)
 		return -EINVAL;
 	pad = remaining_len - priv_len;
@@ -961,7 +955,7 @@ u32 svcauth_gss_flavor(struct auth_domain *dom)
 	/* XXX: This is very inefficient. It would be better to either do
 	 * this while we encrypt, or maybe in the receive code, if we can peak
 	 * ahead and work out the service and mechanism there. */
-	offset = buf->head[0].iov_len % 4;
+	offset = buf->head[0].iov_len & 3;
 	if (offset) {
 		buf->buflen = RPCSVC_MAXPAYLOAD;
 		xdr_shift_buf(buf, offset);
@@ -1671,12 +1665,30 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 	int integ_offset, integ_len;
 	int stat = -EINVAL;

+	/* Fill in pad bytes for xdr_buf::pages: */
+	if (resbuf->page_pad) {
+		if (resbuf->tail[0].iov_base != NULL) {
+			memmove((u8 *)resbuf->tail[0].iov_base + sizeof(__be32),
+				resbuf->tail[0].iov_base,
+				resbuf->tail[0].iov_len);
+		} else {
+			resbuf->tail[0].iov_base =
+				(u8 *)resbuf->head[0].iov_base +
+				resbuf->head[0].iov_len;
+			resbuf->tail[0].iov_len = 0;
+		}
+		memset(resbuf->tail[0].iov_base, 0, sizeof(__be32));
+		resbuf->tail[0].iov_base = (u8 *)resbuf->tail[0].iov_base +
+			(4 - resbuf->page_pad);
+		resbuf->tail[0].iov_len += resbuf->page_pad;
+		resbuf->page_pad = 0;
+	}
+
 	p = svcauth_gss_prepare_to_wrap(resbuf, gsd);
 	if (p == NULL)
 		goto out;
 	integ_offset = (u8 *)(p + 1) - (u8 *)resbuf->head[0].iov_base;
-	integ_len = resbuf->len - integ_offset;
-	BUG_ON(integ_len % 4);
+	integ_len = xdr_buf_msglen(resbuf) - integ_offset;
 	*p++ = htonl(integ_len);
 	*p++ = htonl(gc->gc_seq);
 	if (xdr_buf_subsegment(resbuf, &integ_buf, integ_offset, integ_len)) {
@@ -1716,7 +1728,6 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 	struct page **inpages = NULL;
 	__be32 *p, *len;
 	int offset;
-	int pad;

 	p = svcauth_gss_prepare_to_wrap(resbuf, gsd);
 	if (p == NULL)
@@ -1735,7 +1746,7 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 	 * there is RPC_MAX_AUTH_SIZE slack space available in
 	 * both the head and tail.
 	 */
-	if (resbuf->tail[0].iov_base) {
+	if (resbuf->tail[0].iov_base != NULL) {
 		BUG_ON(resbuf->tail[0].iov_base >= resbuf->head[0].iov_base
							+ PAGE_SIZE);
 		BUG_ON(resbuf->tail[0].iov_base < resbuf->head[0].iov_base);
@@ -1746,6 +1757,7 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 			resbuf->tail[0].iov_base,
 			resbuf->tail[0].iov_len);
 		resbuf->tail[0].iov_base += RPC_MAX_AUTH_SIZE;
+		/* XXX: insert padding for resbuf->pages */
 	}
 	/*
 	 * If there is no current tail data, make sure there is
@@ -1754,21 +1766,18 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 	 * is RPC_MAX_AUTH_SIZE slack space available in both the
 	 * head and tail.
 	 */
-	if (resbuf->tail[0].iov_base == NULL) {
+	else {
 		if (resbuf->head[0].iov_len + 2*RPC_MAX_AUTH_SIZE > PAGE_SIZE)
 			return -ENOMEM;
 		resbuf->tail[0].iov_base = resbuf->head[0].iov_base +
 			resbuf->head[0].iov_len + RPC_MAX_AUTH_SIZE;
-		resbuf->tail[0].iov_len = 0;
+		memset(resbuf->tail[0].iov_base, 0, sizeof(__be32));
+		resbuf->tail[0].iov_len = resbuf->page_pad;
+		resbuf->page_pad = 0;
 	}
 	if (gss_wrap(gsd->rsci->mechctx, offset, resbuf, inpages))
 		return -ENOMEM;
-	*len = htonl(resbuf->len - offset);
-	pad = 3 - ((resbuf->len - offset - 1)&3);
-	p = (__be32 *)(resbuf->tail[0].iov_base + resbuf->tail[0].iov_len);
-	memset(p, 0, pad);
-	resbuf->tail[0].iov_len += pad;
-	resbuf->len += pad;
+	*len = cpu_to_be32(xdr_buf_msglen(resbuf) - offset);
 	return 0;
 }
@@ -1789,7 +1798,7 @@ static void destroy_use_gss_proxy_proc_entry(struct net *net) {}
 	/* normally not set till svc_send, but we need it here: */
 	/* XXX: what for?  Do we mess it up the moment we call svc_putu32
 	 * or whatever? */
-	resbuf->len = total_buf_len(resbuf);
+	resbuf->len = xdr_buf_msglen(resbuf);
 	switch (gc->gc_svc) {
 	case RPC_GSS_SVC_NONE:
 		break;
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index d2dadb200024..798ebb406058 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -1132,6 +1132,9 @@ void xdr_enter_page(struct xdr_stream *xdr, unsigned int len)
 		base -= buf->page_len;
 		subbuf->page_len = 0;
 	}
+	/* XXX: Still need to deal with case where buf->page_pad is non-zero */
+	WARN_ON(buf->page_pad);
+	subbuf->page_pad = 0;
 	if (base < buf->tail[0].iov_len) {
 		subbuf->tail[0].iov_base = buf->tail[0].iov_base + base;