
[v3] mm: mark async iocb read as NOWAIT once some data has been copied

Message ID c6e8331e-9d64-0f9a-1a83-5cff13cbc4cb@kernel.dk (mailing list archive)
State New, archived
Series [v3] mm: mark async iocb read as NOWAIT once some data has been copied

Commit Message

Jens Axboe Oct. 17, 2020, 8:07 p.m. UTC
Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
we should no longer attempt to async lock a new page. Instead make sure
we return the copied amount, and let the caller retry, instead of
returning -EIOCBQUEUED for a new page.

This should only be possible with read-ahead disabled on the underlying
device, and multiple threads racing on the same file. Haven't been able
to reproduce it on anything else.

Cc: stable@vger.kernel.org # v5.9
Fixes: 1a0a7853b901 ("mm: support async buffered reads in generic_file_buffered_read()")
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---

V3
- Place the 'written' check up top so we catch cases where 'written' is
  already passed in as non-zero (Kent)

 mm/filemap.c | 8 ++++++++
 1 file changed, 8 insertions(+)
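
For illustration, here is a minimal userspace model of the failure mode (not
kernel code; mock_iocb, mock_buffered_read, and the one-uptodate-page layout
are invented for this sketch): once some bytes have been copied for an
IOCB_WAITQ read, returning -EIOCBQUEUED would throw those bytes away, whereas
demoting the iocb to IOCB_NOWAIT turns the next page's miss into a short read
that the caller can retry.

/*
 * Userspace model only, not kernel code: mock_iocb and
 * mock_buffered_read are invented for this sketch.
 */
#include <stdio.h>

#define IOCB_WAITQ   (1 << 0)
#define IOCB_NOWAIT  (1 << 1)
#define EIOCBQUEUED  529        /* matches the kernel's numeric value */
#define PAGE         4096L

struct mock_iocb {
        int ki_flags;
};

/* First page is uptodate in cache; the second would need I/O. */
static long mock_buffered_read(struct mock_iocb *iocb, long want)
{
        long written = 0;

        while (written < want) {
                int uptodate = (written == 0);

                /* The fix: after copying data, demote WAITQ to NOWAIT. */
                if (written && (iocb->ki_flags & IOCB_WAITQ))
                        iocb->ki_flags |= IOCB_NOWAIT;

                if (!uptodate) {
                        if (iocb->ki_flags & IOCB_NOWAIT)
                                break;                  /* short read; caller retries */
                        if (iocb->ki_flags & IOCB_WAITQ)
                                return -EIOCBQUEUED;    /* would discard 'written' */
                }
                written += PAGE;
        }
        return written;
}

int main(void)
{
        struct mock_iocb iocb = { .ki_flags = IOCB_WAITQ };

        /* Ask for two pages; only the first can be copied without I/O. */
        printf("read returned %ld\n", mock_buffered_read(&iocb, 2 * PAGE));
        return 0;
}

Compiled and run, this prints "read returned 4096"; dropping the two-line
guard makes it return -529 (-EIOCBQUEUED) instead, losing the first page's
worth of copied data.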

Comments

Johannes Weiner Oct. 20, 2020, 7:03 p.m. UTC | #1
On Sat, Oct 17, 2020 at 02:07:26PM -0600, Jens Axboe wrote:
> Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
> we should no longer attempt to async lock a new page.

It could be useful to elaborate on the (user-visible) failure scenario
here a bit, as I don't think it's obvious.

> Instead make sure we return the copied amount, and let the caller
> retry, instead of returning -EIOCBQUEUED for a new page.

We *wouldn't* return -EIOCBQUEUED, though, would we? We'd do the async
path, put the caller on a waitqueue, but then return `written' instead
of letting it know.

> @@ -2199,6 +2199,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  	last_index = (*ppos + iter->count + PAGE_SIZE-1) >> PAGE_SHIFT;
>  	offset = *ppos & ~PAGE_MASK;
>  
> +	/*
> +	 * If we've already successfully copied some data, then we
> +	 * can no longer safely return -EIOCBQUEUED. Hence mark
> +	 * an async read NOWAIT at that point.
> +	 */
> +	if (written && (iocb->ki_flags & IOCB_WAITQ))
> +		iocb->ki_flags |= IOCB_NOWAIT;

That looks correct to me, FWIW. It took a second to verify with all the
spaghetti in this function :-) But the ra/!uptodate path already has
its own guard, and this is needed for the readpage fallback.
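
For reference, the existing guard in the !uptodate path that Johannes refers
to is roughly of this shape (approximate, paraphrased rather than quoted from
mm/filemap.c); the readpage fallback further down had no equivalent check
before this patch:

        if (iocb->ki_flags & IOCB_WAITQ) {
                if (written) {
                        /* Already copied something: report the partial
                         * read instead of async-locking another page. */
                        put_page(page);
                        goto out;
                }
                /* ... otherwise async lock / wait on the page ... */
        }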

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 1a6beaf69f49..e4101b5bfa82 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2199,6 +2199,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	last_index = (*ppos + iter->count + PAGE_SIZE-1) >> PAGE_SHIFT;
 	offset = *ppos & ~PAGE_MASK;
 
+	/*
+	 * If we've already successfully copied some data, then we
+	 * can no longer safely return -EIOCBQUEUED. Hence mark
+	 * an async read NOWAIT at that point.
+	 */
+	if (written && (iocb->ki_flags & IOCB_WAITQ))
+		iocb->ki_flags |= IOCB_NOWAIT;
+
 	for (;;) {
 		struct page *page;
 		pgoff_t end_index;