
mm: mark async iocb read as NOWAIT once some data has been copied

Message ID 59eae433-9b30-f094-534d-eb5cf0a40a80@kernel.dk (mailing list archive)
State New, archived
Series mm: mark async iocb read as NOWAIT once some data has been copied

Commit Message

Jens Axboe Oct. 17, 2020, 3:30 p.m. UTC
Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
we should no longer attempt to async lock a new page. Instead make sure
we return the copied amount, and let the caller retry, instead of
returning -EIOCBQUEUED for a new page.

This should only be possible with read-ahead disabled on the below
device, and multiple threads racing on the same file. Haven't been able
to reproduce on anything else.

Cc: stable@vger.kernel.org # v5.9
Fixes: 1a0a7853b901 ("mm: support async buffered reads in generic_file_buffered_read()")
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 mm/filemap.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
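
For context, a minimal caller-side sketch of the IOCB_WAITQ contract this patch tightens. It is hypothetical (not the actual io_uring code); only generic_file_buffered_read() and its 5.9-era three-argument signature are taken from the patch below. The point is that -EIOCBQUEUED carries no byte count, so the callee must never both copy data and return it:

#include <linux/fs.h>
#include <linux/uio.h>

/*
 * Hypothetical sketch of the async retry contract, not kernel code.
 * If the callee copies some bytes and *then* returns -EIOCBQUEUED
 * (the bug), those bytes are unreported and the iov_iter has
 * silently advanced -- nothing here can account for them.
 */
static ssize_t drive_async_read(struct kiocb *iocb, struct iov_iter *iter)
{
	ssize_t ret = generic_file_buffered_read(iocb, iter, 0);

	if (ret == -EIOCBQUEUED)
		return ret;	/* parked; re-driven from the page waitqueue */
	if (ret > 0 && iov_iter_count(iter))
		return ret;	/* short read: caller retries the remainder */
	return ret;		/* fully done, or a real error */
}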

Comments

Matthew Wilcox (Oracle) Oct. 17, 2020, 4:28 p.m. UTC | #1
On Sat, Oct 17, 2020 at 09:30:59AM -0600, Jens Axboe wrote:
> Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
> we should no longer attempt to async lock a new page. Instead make sure
> we return the copied amount, and let the caller retry, instead of
> returning -EIOCBQUEUED for a new page.
> 
> This should only be possible with read-ahead disabled on the below
> device, and multiple threads racing on the same file. Haven't been able
> to reproduce on anything else.

Wouldn't this do the job just as well?

@@ -2211,6 +2211,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 
                put_page(page);
                written += ret;
+               if (iocb->ki_flags & IOCB_WAITQ)
+                       iocb->ki_flags |= IOCB_NOWAIT;
                if (!iov_iter_count(iter))
                        goto out;
                if (ret < nr) {
Jens Axboe Oct. 17, 2020, 5:30 p.m. UTC | #2
On 10/17/20 10:28 AM, Matthew Wilcox wrote:
> On Sat, Oct 17, 2020 at 09:30:59AM -0600, Jens Axboe wrote:
>> Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
>> we should no longer attempt to async lock a new page. Instead make sure
>> we return the copied amount, and let the caller retry, instead of
>> returning -EIOCBQUEUED for a new page.
>>
>> This should only be possible with read-ahead disabled on the below
>> device, and multiple threads racing on the same file. Haven't been able
>> to reproduce on anything else.
> 
> Wouldn't this do the job just as well?
> 
> @@ -2211,6 +2211,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>  
>                 put_page(page);
>                 written += ret;
> +               if (iocb->ki_flags & IOCB_WAITQ)
> +                       iocb->ki_flags |= IOCB_NOWAIT;
>                 if (!iov_iter_count(iter))
>                         goto out;
>                 if (ret < nr) {

Indeed, that's cleaner. Let me re-test with that just to be on the safe
side, and I'll post a new one.
Kent Overstreet Oct. 17, 2020, 7:13 p.m. UTC | #3
On Sat, Oct 17, 2020 at 11:30:47AM -0600, Jens Axboe wrote:
> On 10/17/20 10:28 AM, Matthew Wilcox wrote:
> > On Sat, Oct 17, 2020 at 09:30:59AM -0600, Jens Axboe wrote:
> >> Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
> >> we should no longer attempt to async lock a new page. Instead make sure
> >> we return the copied amount, and let the caller retry, instead of
> >> returning -EIOCBQUEUED for a new page.
> >>
> >> This should only be possible with read-ahead disabled on the below
> >> device, and multiple threads racing on the same file. Haven't been able
> >> to reproduce on anything else.
> > 
> > Wouldn't this do the job just as well?
> > 
> > @@ -2211,6 +2211,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
> >  
> >                 put_page(page);
> >                 written += ret;
> > +               if (iocb->ki_flags & IOCB_WAITQ)
> > +                       iocb->ki_flags |= IOCB_NOWAIT;
> >                 if (!iov_iter_count(iter))
> >                         goto out;
> >                 if (ret < nr) {
> 
> Indeed, that's cleaner. Let me re-test with that just to be on the safe
> side, and I'll post a new one.

generic_file_buffered_read() has written passed in, which can be nonzero (DIO
fallback) - we probably want the check to be at the top of the loop.
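
For reference, the entry path Kent is pointing at looks roughly like this: after a short direct read, generic_file_read_iter() falls back to the buffered path and passes the bytes already transferred as the third argument, so written can be nonzero before the loop has copied anything. A condensed sketch, with the direct read abbreviated to a made-up helper:

#include <linux/fs.h>
#include <linux/uio.h>

static ssize_t do_direct_read(struct kiocb *iocb, struct iov_iter *iter); /* made-up helper */

/*
 * Condensed sketch of the 5.9-era DIO-fallback entry path, not the
 * real function body. The buffered loop starts with "written" equal
 * to the direct-I/O byte count, so a "have we copied anything yet?"
 * test based on "written" must sit at the top of the loop or use a
 * separate counter, as the final patch below does.
 */
static ssize_t read_iter_sketch(struct kiocb *iocb, struct iov_iter *iter)
{
	ssize_t retval = 0;

	if (iocb->ki_flags & IOCB_DIRECT) {
		retval = do_direct_read(iocb, iter);
		if (retval < 0 || !iov_iter_count(iter))
			return retval;
	}

	/* buffered fallback: "written" is nonzero on entry here */
	return generic_file_buffered_read(iocb, iter, retval);
}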
Jens Axboe Oct. 17, 2020, 7:36 p.m. UTC | #4
On 10/17/20 1:13 PM, Kent Overstreet wrote:
> On Sat, Oct 17, 2020 at 11:30:47AM -0600, Jens Axboe wrote:
>> On 10/17/20 10:28 AM, Matthew Wilcox wrote:
>>> On Sat, Oct 17, 2020 at 09:30:59AM -0600, Jens Axboe wrote:
>>>> Once we've copied some data for an iocb that is marked with IOCB_WAITQ,
>>>> we should no longer attempt to async lock a new page. Instead make sure
>>>> we return the copied amount, and let the caller retry, instead of
>>>> returning -EIOCBQUEUED for a new page.
>>>>
>>>> This should only be possible with read-ahead disabled on the below
>>>> device, and multiple threads racing on the same file. Haven't been able
>>>> to reproduce on anything else.
>>>
>>> Wouldn't this do the job just as well?
>>>
>>> @@ -2211,6 +2211,8 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
>>>  
>>>                 put_page(page);
>>>                 written += ret;
>>> +               if (iocb->ki_flags & IOCB_WAITQ)
>>> +                       iocb->ki_flags |= IOCB_NOWAIT;
>>>                 if (!iov_iter_count(iter))
>>>                         goto out;
>>>                 if (ret < nr) {
>>
>> Indeed, that's cleaner. Let me re-test with that just to be on the safe
>> side, and I'll post a new one.
> 
> generic_file_buffered_read() has written passed in, which can be nonzero (DIO
> fallback) - we probably want the check to be at the top of the loop.

Yeah good point. That calling convention is... miserable. I'll move it up top.

Patch

diff --git a/mm/filemap.c b/mm/filemap.c
index 1a6beaf69f49..029787fda56b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2185,6 +2185,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 	pgoff_t index;
 	pgoff_t last_index;
 	pgoff_t prev_index;
+	ssize_t this_written = 0;
 	unsigned long offset;      /* offset into pagecache page */
 	unsigned int prev_offset;
 	int error = 0;
@@ -2207,6 +2208,14 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 
 		cond_resched();
 find_page:
+		/*
+		 * If we've already successfully copied some data, then we
+		 * can no longer safely return -EIOCBQUEUED. Hence mark
+		 * an async read NOWAIT at that point.
+		 */
+		if (this_written && (iocb->ki_flags & IOCB_WAITQ))
+			iocb->ki_flags |= IOCB_NOWAIT;
+
 		if (fatal_signal_pending(current)) {
 			error = -EINTR;
 			goto out;
@@ -2239,7 +2248,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 			 * serialisations and why it's safe.
 			 */
 			if (iocb->ki_flags & IOCB_WAITQ) {
-				if (written) {
+				if (this_written) {
 					put_page(page);
 					goto out;
 				}
@@ -2328,7 +2337,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		prev_offset = offset;
 
 		put_page(page);
-		written += ret;
+		this_written += ret;
 		if (!iov_iter_count(iter))
 			goto out;
 		if (ret < nr) {
@@ -2448,6 +2457,7 @@ ssize_t generic_file_buffered_read(struct kiocb *iocb,
 
 	*ppos = ((loff_t)index << PAGE_SHIFT) + offset;
 	file_accessed(filp);
+	written += this_written;
 	return written ? written : error;
 }
 EXPORT_SYMBOL_GPL(generic_file_buffered_read);
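
Tracing the final version through Kent's scenario may help: suppose the DIO fallback enters with written == 8192, and this buffered pass then copies 4096 bytes before hitting a page it would otherwise have to lock asynchronously. A condensed walk-through of the patch above, with illustrative values:

	written      = 8192;	/* passed in by the DIO fallback */
	this_written = 0;	/* fresh for this buffered pass */

	/* first page is uptodate: 4096 bytes are copied */
	this_written = 4096;

	/* next iteration, at find_page: this_written is nonzero and
	 * IOCB_WAITQ is set, so IOCB_NOWAIT is turned on */

	/* the next page is !PageUptodate: the IOCB_WAITQ branch sees
	 * this_written != 0, puts the page, and jumps to out */

	/* at out: */
	written += this_written;	/* 8192 + 4096 */
	return written;			/* 12288, not -EIOCBQUEUED */

The caller sees a short positive read and simply retries for the remainder; the parked-after-progress case can no longer happen.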