[2/3] afs: Fix afs_write_end() to handle short writes

Message ID: 162367682522.460125.5652091227576721609.stgit@warthog.procyon.org.uk
State: New, archived
Series: [1/3] afs: Handle len being extending over page end in write_begin/write_end

Commit Message

David Howells June 14, 2021, 1:20 p.m. UTC
Fix afs_write_end() to correctly handle a short copy into the intended
write region of the page.  Two things are necessary:

 (1) If the page is not up to date, then we should just return 0
     (ie. indicating a zero-length copy).  The loop in
     generic_perform_write() will go around again, possibly breaking up the
     iterator into discrete chunks.

     This is analogous to commit b9de313cf05fe08fa59efaf19756ec5283af672a
     for ceph.

 (2) The page should not have been set uptodate if it wasn't completely set
     up by netfs_write_begin() (this will be fixed in the next page), so we
     need to set PG_uptodate here in such a case.

Fixes: 3003bbd0697b ("afs: Use the netfs_write_begin() helper")
Reported-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: linux-afs@lists.infradead.org
---

 fs/afs/write.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

Comments

Matthew Wilcox June 14, 2021, 1:28 p.m. UTC | #1
On Mon, Jun 14, 2021 at 02:20:25PM +0100, David Howells wrote:
> Fix afs_write_end() to correctly handle a short copy into the intended
> write region of the page.  Two things are necessary:
> 
>  (1) If the page is not up to date, then we should just return 0
>      (ie. indicating a zero-length copy).  The loop in
>      generic_perform_write() will go around again, possibly breaking up the
>      iterator into discrete chunks.

Does this actually work?  What about the situation where you're reading
the last page of a file and thus (almost) always reading fewer bytes
than a PAGE_SIZE?

>      This is analogous to commit b9de313cf05fe08fa59efaf19756ec5283af672a
>      for ceph.
> 
>  (2) The page should not have been set uptodate if it wasn't completely set
>      up by netfs_write_begin() (this will be fixed in the next page), so we
>      need to set PG_uptodate here in such a case.

s/page/patch/

and you have a really bad habit of breaking the layering and referring
to PG_foo.  Please just refer to PageFoo.  Filesystems shouldn't
care what the implementation of PageFoo is.

> +	len = min_t(size_t, len, thp_size(page) - from);
> +	if (!PageUptodate(page)) {
> +		if (copied < len) {
> +			copied = 0;
> +			goto out;
> +		}
> +
> +		SetPageUptodate(page);
> +	}
David Howells June 14, 2021, 1:37 p.m. UTC | #2
Matthew Wilcox <willy@infradead.org> wrote:

> >  (1) If the page is not up to date, then we should just return 0
> >      (ie. indicating a zero-length copy).  The loop in
> >      generic_perform_write() will go around again, possibly breaking up the
> >      iterator into discrete chunks.
> 
> Does this actually work?  What about the situation where you're reading
> the last page of a file and thus (almost) always reading fewer bytes
> than a PAGE_SIZE?

Al Viro made such a change for Ceph - and we're writing, not reading.

I was thinking that it would break if reading from a pipe, but Jeff pointed
out that the iov_iter_advance() in generic_perform_write() uses the return
value of ->write_end() to advance the iterator.  So it might loop endlessly,
but it doesn't appear it will corrupt your data.
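
For reference, the relevant part of that loop looks roughly like this
(condensed from generic_perform_write() in mm/filemap.c of this era;
fault-in, error handling and dirty-page throttling elided, so treat it
as a sketch rather than the verbatim code):

	do {
		offset = pos & (PAGE_SIZE - 1);
		bytes = min_t(unsigned long, PAGE_SIZE - offset,
			      iov_iter_count(i));
again:
		status = a_ops->write_begin(file, mapping, pos, bytes,
					    flags, &page, &fsdata);
		copied = iov_iter_copy_from_user_atomic(page, i,
							offset, bytes);
		status = a_ops->write_end(file, mapping, pos, bytes,
					  copied, page, fsdata);
		copied = status;
		/* Advance by what ->write_end() claimed, not by what
		 * was physically copied into the page. */
		iov_iter_advance(i, copied);
		if (unlikely(copied == 0)) {
			/* Retry the same spot with a smaller chunk. */
			bytes = min_t(unsigned long, PAGE_SIZE - offset,
				      iov_iter_single_seg_count(i));
			goto again;
		}
		pos += copied;
		written += copied;
	} while (iov_iter_count(i));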

David
Matthew Wilcox June 14, 2021, 1:46 p.m. UTC | #3
On Mon, Jun 14, 2021 at 02:37:12PM +0100, David Howells wrote:
> Matthew Wilcox <willy@infradead.org> wrote:
> 
> > >  (1) If the page is not up to date, then we should just return 0
> > >      (ie. indicating a zero-length copy).  The loop in
> > >      generic_perform_write() will go around again, possibly breaking up the
> > >      iterator into discrete chunks.
> > 
> > Does this actually work?  What about the situation where you're reading
> > the last page of a file and thus (almost) always reading fewer bytes
> > than a PAGE_SIZE?
> 
> Al Viro made such a change for Ceph - and we're writing, not reading.

I'd feel better if you said "xfstests doesn't show any new problems"
than arguing to authority.

I know the operation which triggers this path is a call to write(),
but if, say, the file is 32 bytes long, not in cache, and you write
bytes 32-63, the client must READ bytes 0-31 from the server, which
is less than a full page.
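
To spell the example out: whether ->write_begin() can skip the pre-read
comes down to a check along these lines (an illustrative sketch only -
the helper name is invented and the real netfs logic handles more cases):

	/* Hypothetical helper, not actual netfs/afs code. */
	static bool can_skip_read(struct inode *inode, loff_t pos,
				  unsigned int len)
	{
		loff_t page_start = round_down(pos, PAGE_SIZE);

		/* Whole page overwritten: old contents don't matter. */
		if (pos == page_start && len >= PAGE_SIZE)
			return true;

		/* Page lies entirely beyond EOF: nothing to read. */
		if (page_start >= i_size_read(inode))
			return true;

		return false;
	}

In the 32-byte-file case above, pos = 32 is mid-page and page_start = 0
is below EOF, so neither condition holds and bytes 0-31 must be fetched.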
David Howells June 14, 2021, 2:04 p.m. UTC | #4
Matthew Wilcox <willy@infradead.org> wrote:

> > Al Viro made such a change for Ceph - and we're writing, not reading.
> 
> I'd feel better if you said "xfstests doesn't show any new problems"
> than arguing to authority.

I'm kind of referring it to Al - I added him to the to: list.  And xfstests
doesn't show any new problems - but that doesn't mean that this path got
tested.

David
Matthew Wilcox June 14, 2021, 2:37 p.m. UTC | #5
On Mon, Jun 14, 2021 at 02:20:25PM +0100, David Howells wrote:
> @@ -135,8 +145,6 @@ int afs_write_end(struct file *file, struct address_space *mapping,
>  		write_sequnlock(&vnode->cb_lock);
>  	}
>  
> -	ASSERT(PageUptodate(page));
> -
>  	if (PagePrivate(page)) {
>  		priv = page_private(page);
>  		f = afs_page_dirty_from(page, priv);

Why are you removing this assertion?  Does AFS now support dirty,
partially-uptodate pages?  If so, a subsequent read() to that
page is going to need to be careful to only read the parts of the page
from the server that haven't been written ...
David Howells June 14, 2021, 3:38 p.m. UTC | #6
Matthew Wilcox <willy@infradead.org> wrote:

> > -	ASSERT(PageUptodate(page));
> > -
> >  	if (PagePrivate(page)) {
> >  		priv = page_private(page);
> >  		f = afs_page_dirty_from(page, priv);
> 
> Why are you removing this assertion?  Does AFS now support dirty,
> partially-uptodate pages?  If so, a subsequent read() to that
> page is going to need to be careful to only read the parts of the page
> from the server that haven't been written ...

Because the previous hunk in the patch:

	+	if (!PageUptodate(page)) {
	+		if (copied < len) {
	+			copied = 0;
	+			goto out;
	+		}
	+
	+		SetPageUptodate(page);
	+	}

means you can't get there unless PageUptodate() is true by that point.

David
Matthew Wilcox June 14, 2021, 3:43 p.m. UTC | #7
On Mon, Jun 14, 2021 at 04:38:21PM +0100, David Howells wrote:
> Matthew Wilcox <willy@infradead.org> wrote:
> 
> > > -	ASSERT(PageUptodate(page));
> > > -
> > >  	if (PagePrivate(page)) {
> > >  		priv = page_private(page);
> > >  		f = afs_page_dirty_from(page, priv);
> > 
> > Why are you removing this assertion?  Does AFS now support dirty,
> > partially-uptodate pages?  If so, a subsequent read() to that
> > page is going to need to be careful to only read the parts of the page
> > from the server that haven't been written ...
> 
> Because the previous hunk in the patch:
> 
> 	+	if (!PageUptodate(page)) {
> 	+		if (copied < len) {
> 	+			copied = 0;
> 	+			goto out;
> 	+		}
> 	+
> 	+		SetPageUptodate(page);
> 	+	}
> 
> means you can't get there unless PageUptodate() is true by that point.

Isn't the point of an assertion to check that this is true?
David Howells June 14, 2021, 9:11 p.m. UTC | #8
Matthew Wilcox <willy@infradead.org> wrote:

> > means you can't get there unless PageUptodate() is true by that point.
> 
> Isn't the point of an assertion to check that this is true?

The assertion was meant to check that it was true given that the page was
set uptodate somewhere else before this function was even called.  With this
patch, however, it's now set in this function if it wasn't already right at
the top - so the assertion should now be redundant.  I can put it back if you
really insist.

David
Al Viro June 18, 2021, 3:40 a.m. UTC | #9
On Mon, Jun 14, 2021 at 02:46:03PM +0100, Matthew Wilcox wrote:
> On Mon, Jun 14, 2021 at 02:37:12PM +0100, David Howells wrote:
> > Matthew Wilcox <willy@infradead.org> wrote:
> > 
> > > >  (1) If the page is not up to date, then we should just return 0
> > > >      (ie. indicating a zero-length copy).  The loop in
> > > >      generic_perform_write() will go around again, possibly breaking up the
> > > >      iterator into discrete chunks.
> > > 
> > > Does this actually work?  What about the situation where you're reading
> > > the last page of a file and thus (almost) always reading fewer bytes
> > > than a PAGE_SIZE?
> > 
> > Al Viro made such a change for Ceph - and we're writing, not reading.
> 
> I'd feel better if you said "xfstests doesn't show any new problems"
> than arguing to authority.
> 
> I know the operation which triggers this path is a call to write(),
> but if, say, the file is 32 bytes long, not in cache, and you write
> bytes 32-63, the client must READ bytes 0-31 from the server, which
> is less than a full page.

[as commented on IRC several days ago]

A short copy has nothing to do with the destination; it's about failures
on the source side - e.g. a source page we'd prefaulted in before locking
the destination got evicted by the time we got around to copying.  We
can't afford page faults while holding pages locked, so we do the copy
with pagefault_disable() and get a short copy on #PF.

The story with short copies is this:
	* write() is about to copy the next chunk of data into
the page cache of the file we are writing into.  We have decided what
part of the destination page will be copied over, faulted the
source in and locked the destination page.

	* if the page is not uptodate, we might need to read
some parts before copying new data into it; the work that needs
to be done depends upon the part of the page we are going to overwrite.
E.g. if we are going to copy over the entire thing, we do
_not_ want to bother reading anything into it - if copying
works, we'll destroy the previous contents anyway.
	That's what ->write_begin() is about - it should
do whatever's needed in preparation for copying new data.

	* NOW we can copy the data.  Hopefully the copy will
be successful (i.e. we don't run into evicted source pages,
memory errors, races with munmap(), etc.), but it might fail
halfway through - we are doing that part with page faults
disabled.

	* finally we can do the write to disk/server/whatnot.
That's what ->write_end() is for.  Ideally, it'll just
send the newly copied data on its way.  However, in the case of
a short copy we might have problems.  Consider e.g. a block
filesystem that has 4 blocks per page; the chunk we were
going to write went from the middle of the 1st to the
middle of the 4th block.  ->write_begin() made sure that the
1st and 4th blocks were uptodate.  It had not bothered
with the 2nd and the 3rd blocks, since we were going to
overwrite them anyway.  And had the copy succeeded, we'd
be fine - page fully uptodate, can write the data to
disk and be done with that.  However, the copy failed
halfway through the 3rd block.  What do we have?
1st block: uptodate, partly old data, partly new one.
2nd block: uptodate, new data
3rd block: beginning is filled with new data, garbage in the rest
4th block: uptodate, old data.
What to do?  Everything up to the beginning of the 3rd block
is fine, but the 3rd one is a hopeless mess.  We can't write it
out - the garbage would end up on disk.  We can't replace the
garbage with valid data without reading it from disk - and that'll
lose the new data we'd managed to copy there.

The best we can do in such a situation is to treat that as
having advanced to the beginning of the third block, despite
having copied more than that.  The caller (generic_perform_write())
will choose the next chunk starting at that point (beginning of
the 3rd block) and repeat the whole sequence for that chunk,
including the fault-in.

So ->write_end() gets 3 numbers - two describing the range we
prepared for (what ->write_begin() had received) and the third
telling how much was actually copied.  Again, "short copy"
here does not refer to any preparations done by ->write_begin() -
it's about having told ->write_begin() we would copy over a given
range and only managing to fill a part of that range.

Note that if the page is uptodate, we are fine - _everything_
in that page matches what we want in the file, so we can deal with
sending it to disk/server/whatnot.  If there'd been a short
copy, the caller will obviously need to continue from the point
where the copy stopped, but that's not our problem.

What to do in case of a short copy into a non-uptodate page is
up to the filesystem.  Saying "sod it, I'm not taking any of
that, just repeat the entire thing" is always fine.  We might
do better than that (see above for one such example), but
the caller will be OK if we don't.  It's a rare case: you need
something like a race with munmap() of part of the source
buffer from another thread, or severe memory pressure, for it
to trigger in the first place.
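
To make the third-block example concrete, a ->write_end() applying that
policy for a hypothetical filesystem with blocks smaller than a page might
look something like the sketch below.  This is purely illustrative - the
function name and details are invented for the example, not taken from any
in-tree filesystem:

	static int example_write_end(struct file *file,
				     struct address_space *mapping,
				     loff_t pos, unsigned int len,
				     unsigned int copied,
				     struct page *page, void *fsdata)
	{
		struct inode *inode = mapping->host;
		unsigned int bsize = 1 << inode->i_blkbits;
		unsigned int from = offset_in_page(pos);

		if (!PageUptodate(page) && copied < len) {
			/*
			 * The copy stopped partway through a block that
			 * ->write_begin() didn't bother reading.  Claim
			 * progress only up to the last block boundary
			 * reached (conservatively retrying everything if
			 * we didn't even complete the first block);
			 * generic_perform_write() resumes from there.
			 */
			unsigned int end = round_down(from + copied, bsize);

			copied = end > from ? end - from : 0;
		}

		if (copied > 0) {
			if (from == 0 && copied == PAGE_SIZE)
				SetPageUptodate(page);
			set_page_dirty(page);
		}

		/* i_size update and per-block bookkeeping elided. */
		unlock_page(page);
		put_page(page);
		return copied;
	}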

Patch

diff --git a/fs/afs/write.c b/fs/afs/write.c
index f68274c39f47..4b3f16ca2810 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -121,6 +121,16 @@  int afs_write_end(struct file *file, struct address_space *mapping,
 	_enter("{%llx:%llu},{%lx}",
 	       vnode->fid.vid, vnode->fid.vnode, page->index);
 
+	len = min_t(size_t, len, thp_size(page) - from);
+	if (!PageUptodate(page)) {
+		if (copied < len) {
+			copied = 0;
+			goto out;
+		}
+
+		SetPageUptodate(page);
+	}
+
 	if (copied == 0)
 		goto out;
 
@@ -135,8 +145,6 @@  int afs_write_end(struct file *file, struct address_space *mapping,
 		write_sequnlock(&vnode->cb_lock);
 	}
 
-	ASSERT(PageUptodate(page));
-
 	if (PagePrivate(page)) {
 		priv = page_private(page);
 		f = afs_page_dirty_from(page, priv);