diff mbox series

[7/7] netfs: Remove outdated comments about prefaulting

Message ID 20250129181802.6E1E4149@davehans-spike.ostc.intel.com (mailing list archive)
State New
Series Move prefaulting into write slow paths

Commit Message

Dave Hansen Jan. 29, 2025, 6:18 p.m. UTC
From: Dave Hansen <dave.hansen@linux.intel.com>

I originally set out to make netfs_perform_write() behavior more
consistent with generic_perform_write(). However, netfs currently
treats a failure to make forward progress as a hard error and does not
retry, whereas the generic code loops around and retries.
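For context, the generic_perform_write() pattern being contrasted here
is: attempt an atomic usercopy (page faults disabled), and on a short
copy, fault the source in and retry rather than failing outright. A
minimal userspace sketch of that retry loop follows; all names here
(sim_copy_atomic, sim_fault_in, write_loop) are illustrative stand-ins,
not the actual kernel APIs:

```c
/* Hypothetical userspace model of the generic_perform_write() retry
 * pattern. sim_copy_atomic() stands in for an atomic usercopy such as
 * copy_folio_from_iter_atomic(): with faults "disabled" it returns a
 * short copy instead of deadlocking. sim_fault_in() stands in for
 * fault_in_iov_iter_readable(). */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool source_resident; /* models whether the user page is faulted in */

/* Atomic copy: fails (copies 0 bytes) if the source is not resident. */
static size_t sim_copy_atomic(char *dst, const char *src, size_t len)
{
	if (!source_resident)
		return 0;	/* short copy: no forward progress */
	memcpy(dst, src, len);
	return len;
}

/* Fault the source in; returns the number of bytes NOT faulted in
 * (0 on success), mirroring fault_in_iov_iter_readable()'s convention. */
static size_t sim_fault_in(size_t len)
{
	(void)len;
	source_resident = true;
	return 0;
}

/* The retry loop: a short atomic copy triggers fault-in + retry;
 * only a completely unreadable source aborts the write. */
static size_t write_loop(char *dst, const char *src, size_t len)
{
	size_t done = 0;

	while (done < len) {
		size_t copied = sim_copy_atomic(dst + done, src + done,
						len - done);

		if (copied == 0) {
			if (sim_fault_in(len - done) == len - done)
				break;	/* truly inaccessible: -EFAULT */
			continue;	/* retry after faulting in */
		}
		done += copied;
	}
	return done;
}
```

The point of the patch series is that netfs_perform_write() does not
take the `continue` path above; it bails on the first short copy, which
is why the old comment about prefaulting "first" no longer matches how
forward progress is actually guaranteed.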

Instead of a major code restructuring, just focus on improving the
comments.

The comment refers to a possible deadlock and to userspace address
checks. Neither of those things is a concern when using
copy_folio_from_iter_atomic() for atomic usercopies. It prevents
deadlocks by disabling page faults, and it leverages user copy
functions that have their own access_ok() checks.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jeff Layton <jlayton@kernel.org>
Cc: netfs@lists.linux.dev
---

 b/fs/netfs/buffered_write.c |   13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

Patch

diff -puN fs/netfs/buffered_write.c~netfs-postfault fs/netfs/buffered_write.c
--- a/fs/netfs/buffered_write.c~netfs-postfault	2025-01-29 09:03:38.167859749 -0800
+++ b/fs/netfs/buffered_write.c	2025-01-29 09:03:38.171860082 -0800
@@ -152,16 +152,9 @@  ssize_t netfs_perform_write(struct kiocb
 		offset = pos & (max_chunk - 1);
 		part = min(max_chunk - offset, iov_iter_count(iter));
 
-		/* Bring in the user pages that we will copy from _first_ lest
-		 * we hit a nasty deadlock on copying from the same page as
-		 * we're writing to, without it being marked uptodate.
-		 *
-		 * Not only is this an optimisation, but it is also required to
-		 * check that the address is actually valid, when atomic
-		 * usercopies are used below.
-		 *
-		 * We rely on the page being held onto long enough by the LRU
-		 * that we can grab it below if this causes it to be read.
+		/* Bring in the user folios that are copied from before taking
+		 * locks on the mapping folios. This helps ensure forward
+		 * progress if they are the same folios.
 		 */
 		ret = -EFAULT;
 		if (unlikely(fault_in_iov_iter_readable(iter, part) == part))