From patchwork Fri Nov 10 16:28:33 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13452647
Subject: [PATCH v2 1/3] NFSD: Fix "start of NFS reply" pointer passed to
 nfsd_cache_update()
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Date: Fri, 10 Nov 2023 11:28:33 -0500
Message-ID: <169963371324.5404.3057239228897633466.stgit@bazille.1015granger.net>
In-Reply-To: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>
References: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>
User-Agent: StGit/1.5
X-Mailing-List: linux-nfs@vger.kernel.org

From: Chuck Lever

The "statp + 1" pointer that is passed to nfsd_cache_update() is
supposed to point to the start of the egress NFS Reply header. In
fact, it does point there for AUTH_SYS and RPCSEC_GSS_KRB5 requests.
But both krb5i and krb5p add fields between the RPC header's
accept_stat field and the start of the NFS Reply header. In those
cases, "statp + 1" points at the extra fields instead of the Reply.
The result is that nfsd_cache_update() caches what looks to the
client like garbage.

A connection break can occur for a number of reasons, but the most
common reason when using krb5i/p is a GSS sequence number window
underrun. When an underrun is detected, the server is obliged to
drop the RPC and the connection to force a retransmit with a fresh
GSS sequence number. The client presents the same XID, it hits in
the server's DRC, and the server returns the garbage cache entry.

The "statp + 1" argument has been used since the oldest changeset in
the kernel history repo, so it has been in nfsd_dispatch() literally
since before history began. The problem arose only when the
server-side GSS implementation was added twenty years ago.

This particular patch applies cleanly to v6.5 and later, but needs
some context adjustment to apply to earlier kernels. Before v5.16,
nfsd_dispatch() does not use xdr_stream, so saving the NFS header
pointer before calling ->pc_encode is still an appropriate fix, but
it needs to be implemented differently.
Cc: # v5.16+
Signed-off-by: Chuck Lever
Tested-by: Jeff Layton
---
 fs/nfsd/nfssvc.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index d6122bb2d167..60aacca2bca6 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -981,6 +981,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	const struct svc_procedure *proc = rqstp->rq_procinfo;
 	__be32 *statp = rqstp->rq_accept_statp;
 	struct nfsd_cacherep *rp;
+	__be32 *nfs_reply;
 
 	/*
 	 * Give the xdr decoder a chance to change this if it wants
@@ -1014,6 +1015,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	if (test_bit(RQ_DROPME, &rqstp->rq_flags))
 		goto out_update_drop;
 
+	nfs_reply = xdr_inline_decode(&rqstp->rq_res_stream, 0);
 	if (!proc->pc_encode(rqstp, &rqstp->rq_res_stream))
 		goto out_encode_err;
 
@@ -1023,7 +1025,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	 */
 	smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter + 1);
 
-	nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, statp + 1);
+	nfsd_cache_update(rqstp, rp, rqstp->rq_cachetype, nfs_reply);
 
 out_cached_reply:
 	return 1;

From patchwork Fri Nov 10 16:28:39 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13452732
Subject: [PATCH v2 2/3] NFSD: Update nfsd_cache_append() to use xdr_stream
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Date: Fri, 10 Nov 2023 11:28:39 -0500
Message-ID: <169963371954.5404.15414594140516305111.stgit@bazille.1015granger.net>
In-Reply-To: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>
References: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>

From: Chuck Lever

When inserting a DRC-cached response into the reply buffer, ensure
that the buffer's xdr_stream is updated properly. Otherwise the
server will send a garbage response.

This should have been part of last year's series to handle NFS
response encoding using the xdr_stream utilities.
Cc: # v5.16+
Signed-off-by: Chuck Lever
---
 fs/nfsd/nfscache.c |   23 ++++++++---------------
 1 file changed, 8 insertions(+), 15 deletions(-)

diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index fd56a52aa5fb..9046331e0e5e 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -641,24 +641,17 @@ void nfsd_cache_update(struct svc_rqst *rqstp, struct nfsd_cacherep *rp,
 	return;
 }
 
-/*
- * Copy cached reply to current reply buffer. Should always fit.
- * FIXME as reply is in a page, we should just attach the page, and
- * keep a refcount....
- */
 static int
 nfsd_cache_append(struct svc_rqst *rqstp, struct kvec *data)
 {
-	struct kvec *vec = &rqstp->rq_res.head[0];
-
-	if (vec->iov_len + data->iov_len > PAGE_SIZE) {
-		printk(KERN_WARNING "nfsd: cached reply too large (%zd).\n",
-				data->iov_len);
-		return 0;
-	}
-	memcpy((char*)vec->iov_base + vec->iov_len, data->iov_base, data->iov_len);
-	vec->iov_len += data->iov_len;
-	return 1;
+	__be32 *p;
+
+	p = xdr_reserve_space(&rqstp->rq_res_stream, data->iov_len);
+	if (unlikely(!p))
+		return false;
+	memcpy(p, data->iov_base, data->iov_len);
+	xdr_commit_encode(&rqstp->rq_res_stream);
+	return true;
 }
 
 /*

From patchwork Fri Nov 10 16:28:45 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13452770
Subject: [PATCH v2 3/3] NFSD: Fix checksum mismatches in the duplicate reply
 cache
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Date: Fri, 10 Nov 2023 11:28:45 -0500
Message-ID: <169963372584.5404.2548015812135074126.stgit@bazille.1015granger.net>
In-Reply-To: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>
References: <169963305585.5404.9796036538735192053.stgit@bazille.1015granger.net>

From: Chuck Lever

nfsd_cache_csum() currently assumes that the server's RPC layer has
been advancing rq_arg.head[0].iov_base as it decodes an incoming
request, because that's the way it used to work. On entry, it
expects that buf->head[0].iov_base points to the start of the NFS
header, and excludes the already-decoded RPC header.

These days, however, head[0].iov_base points to the start of the RPC
header during all processing. It no longer points at the NFS Call
header when execution arrives at nfsd_cache_csum().
In a retransmitted RPC the XID and the NFS header are supposed to
be the same as the original message, but the contents of the
retransmitted RPC header can be different. For example, for krb5,
the GSS sequence number will be different between the two. Thus if
the RPC header is always included in the DRC checksum computation,
the checksum of the retransmitted message might not match the
checksum of the original message, even though the NFS part of these
messages is identical. The result is that, even if a matching XID
is found in the DRC, the checksum mismatch causes the server to
execute the retransmitted RPC transaction again.

It might be appropriate to consider applying this fix to kernels as
early as v6.3, if someone can make it apply cleanly and test it
properly.

Cc: # v6.6+
Signed-off-by: Chuck Lever
---
 fs/nfsd/cache.h    |    4 ++-
 fs/nfsd/nfscache.c |   64 +++++++++++++++++++++++++++++++++++-----------------
 fs/nfsd/nfssvc.c   |   10 +++++++-
 3 files changed, 54 insertions(+), 24 deletions(-)

diff --git a/fs/nfsd/cache.h b/fs/nfsd/cache.h
index 929248c6ca84..4cbe0434cbb8 100644
--- a/fs/nfsd/cache.h
+++ b/fs/nfsd/cache.h
@@ -84,8 +84,8 @@ int nfsd_net_reply_cache_init(struct nfsd_net *nn);
 void nfsd_net_reply_cache_destroy(struct nfsd_net *nn);
 int nfsd_reply_cache_init(struct nfsd_net *);
 void nfsd_reply_cache_shutdown(struct nfsd_net *);
-int nfsd_cache_lookup(struct svc_rqst *rqstp,
-		      struct nfsd_cacherep **cacherep);
+int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
+		      unsigned int len, struct nfsd_cacherep **cacherep);
 void nfsd_cache_update(struct svc_rqst *rqstp, struct nfsd_cacherep *rp,
 		       int cachetype, __be32 *statp);
 int nfsd_reply_cache_stats_show(struct seq_file *m, void *v);
diff --git a/fs/nfsd/nfscache.c b/fs/nfsd/nfscache.c
index 9046331e0e5e..d3273a396659 100644
--- a/fs/nfsd/nfscache.c
+++ b/fs/nfsd/nfscache.c
@@ -369,33 +369,52 @@ nfsd_reply_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
 	return freed;
 }
 
-/*
- * Walk an xdr_buf and get a CRC for at most the first RC_CSUMLEN bytes
+/**
+ * nfsd_cache_csum - Checksum incoming NFS Call arguments
+ * @buf: buffer containing a whole RPC Call message
+ * @start: starting byte of the NFS Call header
+ * @remaining: size of the NFS Call header, in bytes
+ *
+ * Compute a weak checksum of the leading bytes of an NFS procedure
+ * call header to help verify that a retransmitted Call matches an
+ * entry in the duplicate reply cache.
+ *
+ * To avoid assumptions about how the RPC message is laid out in
+ * @buf and what else it might contain (eg, a GSS MIC suffix), the
+ * caller passes us the exact location and length of the NFS Call
+ * header.
+ *
+ * Returns a 32-bit checksum value, as defined in RFC 793.
 */
-static __wsum
-nfsd_cache_csum(struct svc_rqst *rqstp)
+static __wsum nfsd_cache_csum(struct xdr_buf *buf, unsigned int start,
+			      unsigned int remaining)
 {
+	unsigned int base, len;
+	struct xdr_buf subbuf;
+	__wsum csum = 0;
+	void *p;
 	int idx;
-	unsigned int base;
-	__wsum csum;
-	struct xdr_buf *buf = &rqstp->rq_arg;
-	const unsigned char *p = buf->head[0].iov_base;
-	size_t csum_len = min_t(size_t, buf->head[0].iov_len + buf->page_len,
-				RC_CSUMLEN);
-	size_t len = min(buf->head[0].iov_len, csum_len);
+
+	if (remaining > RC_CSUMLEN)
+		remaining = RC_CSUMLEN;
+	if (xdr_buf_subsegment(buf, &subbuf, start, remaining))
+		return csum;
 
 	/* rq_arg.head first */
-	csum = csum_partial(p, len, 0);
-	csum_len -= len;
+	if (subbuf.head[0].iov_len) {
+		len = min_t(unsigned int, subbuf.head[0].iov_len, remaining);
+		csum = csum_partial(subbuf.head[0].iov_base, len, csum);
+		remaining -= len;
+	}
 
 	/* Continue into page array */
-	idx = buf->page_base / PAGE_SIZE;
-	base = buf->page_base & ~PAGE_MASK;
-	while (csum_len) {
-		p = page_address(buf->pages[idx]) + base;
-		len = min_t(size_t, PAGE_SIZE - base, csum_len);
+	idx = subbuf.page_base / PAGE_SIZE;
+	base = subbuf.page_base & ~PAGE_MASK;
+	while (remaining) {
+		p = page_address(subbuf.pages[idx]) + base;
+		len = min_t(unsigned int, PAGE_SIZE - base, remaining);
 		csum = csum_partial(p, len, csum);
-		csum_len -= len;
+		remaining -= len;
 		base = 0;
 		++idx;
 	}
@@ -466,6 +485,8 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct nfsd_cacherep *key,
 /**
  * nfsd_cache_lookup - Find an entry in the duplicate reply cache
  * @rqstp: Incoming Call to find
+ * @start: starting byte in @rqstp->rq_arg of the NFS Call header
+ * @len: size of the NFS Call header, in bytes
  * @cacherep: OUT: DRC entry for this request
  *
  * Try to find an entry matching the current call in the cache. When none
@@ -479,7 +500,8 @@ nfsd_cache_insert(struct nfsd_drc_bucket *b, struct nfsd_cacherep *key,
  * %RC_REPLY: Reply from cache
  * %RC_DROPIT: Do not process the request further
  */
-int nfsd_cache_lookup(struct svc_rqst *rqstp, struct nfsd_cacherep **cacherep)
+int nfsd_cache_lookup(struct svc_rqst *rqstp, unsigned int start,
+		      unsigned int len, struct nfsd_cacherep **cacherep)
 {
 	struct nfsd_net *nn;
 	struct nfsd_cacherep *rp, *found;
@@ -495,7 +517,7 @@ int nfsd_cache_lookup(struct svc_rqst *rqstp, struct nfsd_cacherep **cacherep)
 		goto out;
 	}
 
-	csum = nfsd_cache_csum(rqstp);
+	csum = nfsd_cache_csum(&rqstp->rq_arg, start, len);
 
 	/*
 	 * Since the common case is a cache miss followed by an insert,
diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
index 60aacca2bca6..d8eeb760b5d9 100644
--- a/fs/nfsd/nfssvc.c
+++ b/fs/nfsd/nfssvc.c
@@ -981,6 +981,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	const struct svc_procedure *proc = rqstp->rq_procinfo;
 	__be32 *statp = rqstp->rq_accept_statp;
 	struct nfsd_cacherep *rp;
+	unsigned int start, len;
 	__be32 *nfs_reply;
 
 	/*
@@ -989,6 +990,13 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	 */
 	rqstp->rq_cachetype = proc->pc_cachetype;
 
+	/*
+	 * ->pc_decode advances the argument stream past the NFS
+	 * Call header, so grab the header's starting location and
+	 * size now for the call to nfsd_cache_lookup().
+	 */
+	start = xdr_stream_pos(&rqstp->rq_arg_stream);
+	len = xdr_stream_remaining(&rqstp->rq_arg_stream);
 	if (!proc->pc_decode(rqstp, &rqstp->rq_arg_stream))
 		goto out_decode_err;
 
@@ -1002,7 +1010,7 @@ int nfsd_dispatch(struct svc_rqst *rqstp)
 	smp_store_release(&rqstp->rq_status_counter, rqstp->rq_status_counter | 1);
 
 	rp = NULL;
-	switch (nfsd_cache_lookup(rqstp, &rp)) {
+	switch (nfsd_cache_lookup(rqstp, start, len, &rp)) {
 	case RC_DOIT:
 		break;
 	case RC_REPLY: