From patchwork Fri Mar 17 23:01:25 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13179564
Subject: [PATCH v1 1/3] nfsd: don't replace page in rq_pages if it's a
 continuation of last page
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: viro@zeniv.linux.org.uk, dcritch@redhat.com, d.lesca@solinos.it
Date: Fri, 17 Mar 2023 19:01:25 -0400
Message-ID: <167909408510.1672.11351348839166110078.stgit@klimt.1015granger.net>
In-Reply-To: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
References: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
User-Agent: StGit/1.5

From: Jeff Layton

The splice read code calls nfsd_splice_actor to put the pages
containing file data into the svc_rqst->rq_pages array. It's possible,
however, to get a splice result that only has a partial page at the
end, if (e.g.) the filesystem hands back a short read that doesn't
cover the whole page.

nfsd_splice_actor will plop the partial page into its rq_pages array
and return. Later, when nfsd_splice_actor is called again, the
remainder of the page may end up being filled out, and
nfsd_splice_actor will put the page into the array _again_, corrupting
the reply. If this is done enough times, rq_next_page will overrun the
array and corrupt the trailing fields -- the rq_respages and
rq_next_page pointers themselves.

If we've already added the page to the array in the last pass, don't
add it to the array a second time when dealing with a splice
continuation. This was originally handled properly in
nfsd_splice_actor, but commit 91e23b1c3982 ("NFSD: Clean up
nfsd_splice_actor()") removed the check for it.
Fixes: 91e23b1c3982 ("NFSD: Clean up nfsd_splice_actor()")
Cc: Al Viro
Reported-by: Dario Lesca
Tested-by: David Critch
Link: https://bugzilla.redhat.com/show_bug.cgi?id=2150630
Signed-off-by: Jeff Layton
Signed-off-by: Chuck Lever
---
 fs/nfsd/vfs.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 502e1b7742db..5783209f17fc 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -941,8 +941,15 @@ nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
 	struct page *last_page;
 
 	last_page = page + (offset + sd->len - 1) / PAGE_SIZE;
-	for (page += offset / PAGE_SIZE; page <= last_page; page++)
+	for (page += offset / PAGE_SIZE; page <= last_page; page++) {
+		/*
+		 * Skip page replacement when extending the contents
+		 * of the current page.
+		 */
+		if (page == *(rqstp->rq_next_page - 1))
+			continue;
 		svc_rqst_replace_page(rqstp, page);
+	}
 	if (rqstp->rq_res.page_len == 0)	// first call
 		rqstp->rq_res.page_base = offset % PAGE_SIZE;
 	rqstp->rq_res.page_len += sd->len;

From patchwork Fri Mar 17 23:01:31 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13179567
Subject: [PATCH v1 2/3] SUNRPC: add bounds checking to svc_rqst_replace_page
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: viro@zeniv.linux.org.uk, dcritch@redhat.com, d.lesca@solinos.it
Date: Fri, 17 Mar 2023 19:01:31 -0400
Message-ID: <167909409175.1672.9527418654666808791.stgit@klimt.1015granger.net>
In-Reply-To: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
References: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
User-Agent: StGit/1.5

From: Chuck Lever

There have been several bugs over the years where the NFSD splice
actor has attempted to write outside the rq_pages array.
Suggested-by: Jeff Layton
Signed-off-by: Chuck Lever
---
 include/linux/sunrpc/svc.h    |    2 +-
 include/trace/events/sunrpc.h |   25 +++++++++++++++++++++++++
 net/sunrpc/svc.c              |   15 ++++++++++++++-
 3 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
index 877891536c2f..f5af055280ff 100644
--- a/include/linux/sunrpc/svc.h
+++ b/include/linux/sunrpc/svc.h
@@ -422,7 +422,7 @@ struct svc_serv *svc_create(struct svc_program *, unsigned int,
 					   int (*threadfn)(void *data));
 struct svc_rqst *svc_rqst_alloc(struct svc_serv *serv,
 					struct svc_pool *pool, int node);
-void		   svc_rqst_replace_page(struct svc_rqst *rqstp,
+bool		   svc_rqst_replace_page(struct svc_rqst *rqstp,
 					 struct page *page);
 void		   svc_rqst_free(struct svc_rqst *);
 void		   svc_exit_thread(struct svc_rqst *);
diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
index 3ca54536f8f7..5a3bb42e1f50 100644
--- a/include/trace/events/sunrpc.h
+++ b/include/trace/events/sunrpc.h
@@ -1790,6 +1790,31 @@ DEFINE_EVENT(svc_rqst_status, svc_send,
 	TP_PROTO(const struct svc_rqst *rqst, int status),
 	TP_ARGS(rqst, status));
 
+TRACE_EVENT(svc_replace_page_err,
+	TP_PROTO(const struct svc_rqst *rqst),
+
+	TP_ARGS(rqst),
+
+	TP_STRUCT__entry(
+		SVC_RQST_ENDPOINT_FIELDS(rqst)
+
+		__field(const void *, begin)
+		__field(const void *, respages)
+		__field(const void *, nextpage)
+	),
+
+	TP_fast_assign(
+		SVC_RQST_ENDPOINT_ASSIGNMENTS(rqst);
+
+		__entry->begin = rqst->rq_pages;
+		__entry->respages = rqst->rq_respages;
+		__entry->nextpage = rqst->rq_next_page;
+	),
+
+	TP_printk(SVC_RQST_ENDPOINT_FORMAT " begin=%p respages=%p nextpage=%p",
+		SVC_RQST_ENDPOINT_VARARGS,
+		__entry->begin, __entry->respages, __entry->nextpage)
+);
+
 TRACE_EVENT(svc_stats_latency,
 	TP_PROTO(
 		const struct svc_rqst *rqst
diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
index fea7ce8fba14..633aa1eb476b 100644
--- a/net/sunrpc/svc.c
+++ b/net/sunrpc/svc.c
@@ -842,9 +842,21 @@ EXPORT_SYMBOL_GPL(svc_set_num_threads);
  *
  * When replacing a page in rq_pages, batch the release of the
  * replaced pages to avoid hammering the page allocator.
+ *
+ * Return values:
+ *   %true: page replaced
+ *   %false: array bounds checking failed
  */
-void svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
+bool svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
 {
+	struct page **begin = rqstp->rq_pages;
+	struct page **end = &rqstp->rq_pages[RPCSVC_MAXPAGES];
+
+	if (unlikely(rqstp->rq_next_page < begin || rqstp->rq_next_page > end)) {
+		trace_svc_replace_page_err(rqstp);
+		return false;
+	}
+
 	if (*rqstp->rq_next_page) {
 		if (!pagevec_space(&rqstp->rq_pvec))
 			__pagevec_release(&rqstp->rq_pvec);
@@ -853,6 +865,7 @@ void svc_rqst_replace_page(struct svc_rqst *rqstp, struct page *page)
 
 	get_page(page);
 	*(rqstp->rq_next_page++) = page;
+	return true;
 }
 EXPORT_SYMBOL_GPL(svc_rqst_replace_page);

From patchwork Fri Mar 17 23:01:38 2023
X-Patchwork-Submitter: Chuck Lever
X-Patchwork-Id: 13179566
Subject: [PATCH v1 3/3] NFSD: Watch for rq_pages bounds checking errors in
 nfsd_splice_actor()
From: Chuck Lever
To: linux-nfs@vger.kernel.org
Cc: viro@zeniv.linux.org.uk, dcritch@redhat.com, d.lesca@solinos.it
Date: Fri, 17 Mar 2023 19:01:38 -0400
Message-ID: <167909409842.1672.18284039985769877598.stgit@klimt.1015granger.net>
In-Reply-To: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
References: <167909365790.1672.13118429954916842449.stgit@klimt.1015granger.net>
User-Agent: StGit/1.5

From: Chuck Lever

This is a "should never happen" condition, but if for some reason the
pipe splice actor should attempt to walk past the end of rq_pages, it
needs to terminate the READ operation to prevent corruption of the
pointer addresses in the fields just beyond the array. A server crash
is thus prevented.

Since the code is not behaving as expected, the READ operation returns
-EIO to the client. None of the READ payload data can be trusted if
the splice actor isn't operating as expected.
Suggested-by: Jeff Layton
Signed-off-by: Chuck Lever
---
 fs/nfsd/vfs.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
index 5783209f17fc..10aa68ca82ef 100644
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -930,6 +930,9 @@ nfsd_open_verified(struct svc_rqst *rqstp, struct svc_fh *fhp, int may_flags,
  * Grab and keep cached pages associated with a file in the svc_rqst
  * so that they can be passed to the network sendmsg/sendpage routines
  * directly. They will be released after the sending has completed.
+ *
+ * Return values: Number of bytes consumed, or -EIO if there are no
+ * remaining pages in rqstp->rq_pages.
  */
 static int
 nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
@@ -948,7 +951,8 @@ nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
 		 */
 		if (page == *(rqstp->rq_next_page - 1))
 			continue;
-		svc_rqst_replace_page(rqstp, page);
+		if (unlikely(!svc_rqst_replace_page(rqstp, page)))
+			return -EIO;
 	}
 	if (rqstp->rq_res.page_len == 0)	// first call
 		rqstp->rq_res.page_base = offset % PAGE_SIZE;