From patchwork Sat Oct 5 18:23:03 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823405
From: "Matthew Wilcox (Oracle)"
To: David Howells
Cc: "Matthew Wilcox (Oracle)", netfs@lists.linux.dev, linux-fsdevel@vger.kernel.org
Subject: [PATCH 1/3] netfs: Remove call to folio_index()
Date: Sat, 5 Oct 2024 19:23:03 +0100
Message-ID: <20241005182307.3190401-2-willy@infradead.org>
In-Reply-To: <20241005182307.3190401-1-willy@infradead.org>
References: <20241005182307.3190401-1-willy@infradead.org>

Calling folio_index() is pointless overhead; directly dereferencing
folio->index is fine.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: David Howells
---
 include/trace/events/netfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h
index 1d7c52821e55..72a208fd4496 100644
--- a/include/trace/events/netfs.h
+++ b/include/trace/events/netfs.h
@@ -451,7 +451,7 @@ TRACE_EVENT(netfs_folio,
 		    struct address_space *__m = READ_ONCE(folio->mapping);
 		    __entry->ino = __m ? __m->host->i_ino : 0;
 		    __entry->why = why;
-		    __entry->index = folio_index(folio);
+		    __entry->index = folio->index;
 		    __entry->nr = folio_nr_pages(folio);
			     ),
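To see why the accessor is pure overhead here, consider what an accessor like folio_index() historically did: branch on a swapcache check, then fall back to a plain field read. For folios that can never be in the swapcache (as with the folios traced here), the branch buys nothing. Below is a minimal userspace sketch of that shape; `struct folio_model` and `model_folio_index` are hypothetical stand-ins, not the real kernel definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct folio: just the fields the sketch needs. */
struct folio_model {
	unsigned long index;	/* offset within the file, in pages */
	bool swapcache;		/* stand-in for the swapcache flag */
};

/* Stand-in for the old accessor: a branch, then a plain field read.
 * When the folio is known never to be in the swapcache, the branch is
 * dead weight and folio->index can be dereferenced directly. */
static unsigned long model_folio_index(const struct folio_model *folio)
{
	if (folio->swapcache)
		return 0;	/* real code would look up the swap entry */
	return folio->index;
}
```

For the non-swapcache case the accessor and the direct read agree, which is exactly why the patch can drop the call.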
From patchwork Sat Oct 5 18:23:04 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823403
From: "Matthew Wilcox (Oracle)"
To: David Howells
Cc: "Matthew Wilcox (Oracle)", netfs@lists.linux.dev, linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/3] netfs: Fix a few minor bugs in netfs_page_mkwrite()
Date: Sat, 5 Oct 2024 19:23:04 +0100
Message-ID: <20241005182307.3190401-3-willy@infradead.org>
In-Reply-To: <20241005182307.3190401-1-willy@infradead.org>
References: <20241005182307.3190401-1-willy@infradead.org>

We can't return with VM_FAULT_SIGBUS | VM_FAULT_LOCKED; the core code
will not unlock the folio in this instance.  Introduce a new "unlock"
error exit to handle this case.  Use it to handle the "folio is
truncated" check, and change the "writeback interrupted by a fatal
signal" path to do a NOPAGE exit instead of letting the core code
install the folio currently under writeback before killing the process.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: David Howells
---
 fs/netfs/buffered_write.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index b3910dfcb56d..ff2814da88b1 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -491,7 +491,9 @@ EXPORT_SYMBOL(netfs_file_write_iter);

 /*
  * Notification that a previously read-only page is about to become writable.
- * Note that the caller indicates a single page of a multipage folio.
+ * The caller indicates the precise page that needs to be written to, but
+ * we only track group on a per-folio basis, so we block more often than
+ * we might otherwise.
  */
 vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_group)
 {
@@ -501,7 +503,7 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	struct address_space *mapping = file->f_mapping;
 	struct inode *inode = file_inode(file);
 	struct netfs_inode *ictx = netfs_inode(inode);
-	vm_fault_t ret = VM_FAULT_RETRY;
+	vm_fault_t ret = VM_FAULT_NOPAGE;
 	int err;

 	_enter("%lx", folio->index);
@@ -510,21 +512,15 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 	if (folio_lock_killable(folio) < 0)
 		goto out;

-	if (folio->mapping != mapping) {
-		folio_unlock(folio);
-		ret = VM_FAULT_NOPAGE;
-		goto out;
-	}
-
-	if (folio_wait_writeback_killable(folio)) {
-		ret = VM_FAULT_LOCKED;
-		goto out;
-	}
+	if (folio->mapping != mapping)
+		goto unlock;
+	if (folio_wait_writeback_killable(folio) < 0)
+		goto unlock;

 	/* Can we see a streaming write here? */
 	if (WARN_ON(!folio_test_uptodate(folio))) {
-		ret = VM_FAULT_SIGBUS | VM_FAULT_LOCKED;
-		goto out;
+		ret = VM_FAULT_SIGBUS;
+		goto unlock;
 	}

 	group = netfs_folio_group(folio);
@@ -559,5 +555,8 @@ vm_fault_t netfs_page_mkwrite(struct vm_fault *vmf, struct netfs_group *netfs_gr
 out:
	sb_end_pagefault(inode->i_sb);
	return ret;
+unlock:
+	folio_unlock(folio);
+	goto out;
 }
 EXPORT_SYMBOL(netfs_page_mkwrite);
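The control-flow shape this patch introduces can be seen in isolation: error paths that must drop the folio lock jump to an "unlock" label, which releases the lock and then falls back through the common "out" exit, so no path can return SIGBUS with the lock still held. Here is a hedged userspace sketch of that shape; `mkwrite_sketch`, `FAULT_*`, and the `locked` flag are illustrative stand-ins, not the kernel API.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for the vm_fault_t return codes the sketch distinguishes. */
enum fault_ret { FAULT_NOPAGE, FAULT_SIGBUS };

static int locked;	/* stand-in for the folio lock state, observable by tests */

/* Sketch of the patched netfs_page_mkwrite() error-exit structure:
 * default to NOPAGE, and route every early failure through "unlock"
 * so the lock is always released before returning. */
static enum fault_ret mkwrite_sketch(bool truncated, bool uptodate)
{
	enum fault_ret ret = FAULT_NOPAGE;

	locked = 1;			/* folio_lock_killable() succeeded */

	if (truncated)			/* folio->mapping != mapping */
		goto unlock;
	if (!uptodate) {		/* streaming write: SIGBUS, but not LOCKED */
		ret = FAULT_SIGBUS;
		goto unlock;
	}
	locked = 0;			/* normal path unlocks before the exit */
out:
	return ret;
unlock:
	locked = 0;			/* drop the lock, then take the common exit */
	goto out;
}
```

The payoff is that the old bug (returning VM_FAULT_SIGBUS | VM_FAULT_LOCKED, which the core fault code would not unlock) becomes structurally impossible: SIGBUS can only be reached via the unlocking exit.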
From patchwork Sat Oct 5 18:23:05 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13823404
From: "Matthew Wilcox (Oracle)"
To: David Howells
Cc: "Matthew Wilcox (Oracle)", netfs@lists.linux.dev, linux-fsdevel@vger.kernel.org
Subject: [PATCH 3/3] netfs: Remove unnecessary references to pages
Date: Sat, 5 Oct 2024 19:23:05 +0100
Message-ID: <20241005182307.3190401-4-willy@infradead.org>
In-Reply-To: <20241005182307.3190401-1-willy@infradead.org>
References: <20241005182307.3190401-1-willy@infradead.org>

These places should all use folios instead of pages.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: David Howells
---
 fs/netfs/buffered_read.c  |  8 ++++----
 fs/netfs/buffered_write.c | 14 +++++++-------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
index c40e226053cc..17aaec00002b 100644
--- a/fs/netfs/buffered_read.c
+++ b/fs/netfs/buffered_read.c
@@ -646,7 +646,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 	if (unlikely(always_fill)) {
 		if (pos - offset + len <= i_size)
 			return false; /* Page entirely before EOF */
-		zero_user_segment(&folio->page, 0, plen);
+		folio_zero_segment(folio, 0, plen);
 		folio_mark_uptodate(folio);
 		return true;
 	}
@@ -665,7 +665,7 @@ static bool netfs_skip_folio_read(struct folio *folio, loff_t pos, size_t len,
 	return false;

 zero_out:
-	zero_user_segments(&folio->page, 0, offset, offset + len, plen);
+	folio_zero_segments(folio, 0, offset, offset + len, plen);
 	return true;
 }
@@ -732,7 +732,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 	if (folio_test_uptodate(folio))
 		goto have_folio;

-	/* If the page is beyond the EOF, we want to clear it - unless it's
+	/* If the folio is beyond the EOF, we want to clear it - unless it's
 	 * within the cache granule containing the EOF, in which case we need
 	 * to preload the granule.
 	 */
@@ -792,7 +792,7 @@ int netfs_write_begin(struct netfs_inode *ctx,
 EXPORT_SYMBOL(netfs_write_begin);

 /*
- * Preload the data into a page we're proposing to write into.
+ * Preload the data into a folio we're proposing to write into.
  */
 int netfs_prefetch_for_write(struct file *file, struct folio *folio,
			     size_t offset, size_t len)
diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c
index ff2814da88b1..b4826360a411 100644
--- a/fs/netfs/buffered_write.c
+++ b/fs/netfs/buffered_write.c
@@ -83,13 +83,13 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode,
  * netfs_perform_write - Copy data into the pagecache.
  * @iocb: The operation parameters
  * @iter: The source buffer
- * @netfs_group: Grouping for dirty pages (eg. ceph snaps).
+ * @netfs_group: Grouping for dirty folios (eg. ceph snaps).
  *
- * Copy data into pagecache pages attached to the inode specified by @iocb.
+ * Copy data into pagecache folios attached to the inode specified by @iocb.
  * The caller must hold appropriate inode locks.
  *
- * Dirty pages are tagged with a netfs_folio struct if they're not up to date
- * to indicate the range modified. Dirty pages may also be tagged with a
+ * Dirty folios are tagged with a netfs_folio struct if they're not up to date
+ * to indicate the range modified. Dirty folios may also be tagged with a
  * netfs-specific grouping such that data from an old group gets flushed before
  * a new one is started.
  */
@@ -223,11 +223,11 @@ ssize_t netfs_perform_write(struct kiocb *iocb, struct iov_iter *iter,
			 * we try to read it.
			 */
			if (fpos >= ctx->zero_point) {
-				zero_user_segment(&folio->page, 0, offset);
+				folio_zero_segment(folio, 0, offset);
				copied = copy_folio_from_iter_atomic(folio, offset, part, iter);
				if (unlikely(copied == 0))
					goto copy_failed;
-				zero_user_segment(&folio->page, offset + copied, flen);
+				folio_zero_segment(folio, offset + copied, flen);
				__netfs_set_group(folio, netfs_group);
				folio_mark_uptodate(folio);
				trace_netfs_folio(folio, netfs_modify_and_clear);
@@ -407,7 +407,7 @@ EXPORT_SYMBOL(netfs_perform_write);
 * netfs_buffered_write_iter_locked - write data to a file
 * @iocb: IO state structure (file, offset, etc.)
 * @from: iov_iter with data to write
- * @netfs_group: Grouping for dirty pages (eg. ceph snaps).
+ * @netfs_group: Grouping for dirty folios (eg. ceph snaps).
 *
 * This function does all the work needed for actually writing data to a
 * file. It does all basic checks, removes SUID from the file, updates
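The zeroing conversions in this patch preserve behavior: folio_zero_segment(folio, s, e) zeroes the byte range [s, e) of a folio, and the two-range variant zeroes two such ranges, which is how partial writes clear the bytes on either side of the copied data. The following userspace sketch models that semantics on a plain byte buffer; `zero_segments_sketch` is an illustrative stand-in, not the kernel implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the two-range zeroing pattern used around a partial
 * write: zero [start1, end1) and [start2, end2), leaving the bytes
 * in between (the data just copied in) untouched. */
static void zero_segments_sketch(unsigned char *buf,
				 size_t start1, size_t end1,
				 size_t start2, size_t end2)
{
	memset(buf + start1, 0, end1 - start1);	/* bytes before the write */
	memset(buf + start2, 0, end2 - start2);	/* bytes after the write */
}
```

In the patched code this is exactly the `folio_zero_segment(folio, 0, offset)` / `folio_zero_segment(folio, offset + copied, flen)` pair: zero up to the write position, copy, then zero from the end of the copy to the end of the folio.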