[05/27] block: ensure that async polled IO is marked REQ_NOWAIT

Message ID: 20181130165646.27341-6-axboe@kernel.dk
State: New, archived
Series: [01/27] aio: fix failure to put the file pointer

Commit Message

Jens Axboe Nov. 30, 2018, 4:56 p.m. UTC
We can't wait for polled events to complete, as they may require active
polling from whoever submitted them. If that is the same task that is
submitting new IO, we could deadlock waiting for IO that this task
itself is supposed to be completing.
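
To illustrate the hazard (a sketch with hypothetical helpers, not code
from this patch): without REQ_NOWAIT, submission may sleep waiting for
a free request, but for async polled IO the only task that reaps
completions -- and thereby frees requests -- is the submitter itself.

	static int polled_submit_sketch(struct bio *bio)
	{
		if (out_of_requests()) {	/* hypothetical helper */
			if (bio->bi_opf & REQ_NOWAIT)
				return -EAGAIN;	/* fail fast; caller retries */
			/*
			 * Hypothetical sleeping path: deadlocks for async
			 * polled IO, since only the submitter polls for the
			 * completion that would free a request.
			 */
			wait_for_free_request();
		}
		return 0;
	}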

Signed-off-by: Jens Axboe <axboe@kernel.dk>
---
 fs/block_dev.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

Comments

Bart Van Assche Nov. 30, 2018, 5:12 p.m. UTC | #1
On Fri, 2018-11-30 at 09:56 -0700, Jens Axboe wrote:
> We can't wait for polled events to complete, as they may require active
> polling from whoever submitted them. If that is the same task that is
> submitting new IO, we could deadlock waiting for IO that this task
> itself is supposed to be completing.
> 
> Signed-off-by: Jens Axboe <axboe@kernel.dk>
> ---
>  fs/block_dev.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/block_dev.c b/fs/block_dev.c
> index 6de8d35f6e41..ebc3d5a0f424 100644
> --- a/fs/block_dev.c
> +++ b/fs/block_dev.c
> @@ -402,8 +402,16 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>  
>  		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
>  		if (!nr_pages) {
> -			if (iocb->ki_flags & IOCB_HIPRI)
> +			if (iocb->ki_flags & IOCB_HIPRI) {
>  				bio->bi_opf |= REQ_HIPRI;
> +				/*
> +				 * For async polled IO, we can't wait for
> +				 * requests to complete, as they may also be
> +				 * polled and require active reaping.
> +				 */
> +				if (!is_sync)
> +					bio->bi_opf |= REQ_NOWAIT;
> +			}
>  
>  			qc = submit_bio(bio);
>  			WRITE_ONCE(iocb->ki_cookie, qc);

Setting REQ_NOWAIT from inside the block layer will make the code that
submits requests harder to review. Have you considered making this code
fail I/O if REQ_NOWAIT has not been set, and requiring that the context
that submits I/O set REQ_NOWAIT?
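
Something like the following, as a sketch (the check's placement and the
error code are illustrative only; IOCB_NOWAIT is the caller-side flag
that maps to REQ_NOWAIT):

	if ((iocb->ki_flags & IOCB_HIPRI) && !is_sync &&
	    !(iocb->ki_flags & IOCB_NOWAIT))
		return -EOPNOTSUPP;	/* submitter must opt in */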

Thanks,

Bart.
Jens Axboe Nov. 30, 2018, 5:17 p.m. UTC | #2
On 11/30/18 10:12 AM, Bart Van Assche wrote:
> On Fri, 2018-11-30 at 09:56 -0700, Jens Axboe wrote:
>> We can't wait for polled events to complete, as they may require active
>> polling from whoever submitted them. If that is the same task that is
>> submitting new IO, we could deadlock waiting for IO that this task
>> itself is supposed to be completing.
>>
>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>> ---
>>  fs/block_dev.c | 10 +++++++++-
>>  1 file changed, 9 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/block_dev.c b/fs/block_dev.c
>> index 6de8d35f6e41..ebc3d5a0f424 100644
>> --- a/fs/block_dev.c
>> +++ b/fs/block_dev.c
>> @@ -402,8 +402,16 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
>>  
>>  		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
>>  		if (!nr_pages) {
>> -			if (iocb->ki_flags & IOCB_HIPRI)
>> +			if (iocb->ki_flags & IOCB_HIPRI) {
>>  				bio->bi_opf |= REQ_HIPRI;
>> +				/*
>> +				 * For async polled IO, we can't wait for
>> +				 * requests to complete, as they may also be
>> +				 * polled and require active reaping.
>> +				 */
>> +				if (!is_sync)
>> +					bio->bi_opf |= REQ_NOWAIT;
>> +			}
>>  
>>  			qc = submit_bio(bio);
>>  			WRITE_ONCE(iocb->ki_cookie, qc);
> 
> Setting REQ_NOWAIT from inside the block layer will make the code that
> submits requests harder to review. Have you considered making this code
> fail I/O if REQ_NOWAIT has not been set, and requiring that the context
> that submits I/O set REQ_NOWAIT?

It's technically still feasible to do that for sync polled IO; it's only
the async case that makes it a potential deadlock.
Christoph Hellwig Dec. 4, 2018, 2:48 p.m. UTC | #3
On Fri, Nov 30, 2018 at 10:17:49AM -0700, Jens Axboe wrote:
> > Setting REQ_NOWAIT from inside the block layer will make the code that
> > submits requests harder to review. Have you considered making this code
> > fail I/O if REQ_NOWAIT has not been set, and requiring that the context
> > that submits I/O set REQ_NOWAIT?
> 
> It's technically still feasible to do that for sync polled IO; it's only
> the async case that makes it a potential deadlock.

I wonder if we want a REQ_ASYNC_POLL compound flag #define that sets
REQ_POLL and REQ_NOWAIT to make this blindingly obvious.
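
Sketched (REQ_ASYNC_POLL is hypothetical; REQ_HIPRI is the polling flag
this patch actually sets):

	/* hypothetical: mark a bio both polled and non-blocking */
	#define REQ_ASYNC_POLL	(REQ_HIPRI | REQ_NOWAIT)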
Jens Axboe Dec. 4, 2018, 6:13 p.m. UTC | #4
On 12/4/18 7:48 AM, Christoph Hellwig wrote:
> On Fri, Nov 30, 2018 at 10:17:49AM -0700, Jens Axboe wrote:
>>> Setting REQ_NOWAIT from inside the block layer will make the code that
>>> submits requests harder to review. Have you considered making this code
>>> fail I/O if REQ_NOWAIT has not been set, and requiring that the context
>>> that submits I/O set REQ_NOWAIT?
>>
>> It's technically still feasible to do that for sync polled IO; it's only
>> the async case that makes it a potential deadlock.
> 
> I wonder if we want a REQ_ASYNC_POLL compound flag #define that sets
> REQ_POLL and REQ_NOWAIT to make this blindingly obvious.

Yeah, that might make sense; all the async cases should certainly use it,
and sync can keep using REQ_POLL. I'll add that and fold it in where I can.
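
With such a flag, the hunk above might collapse to something like this
(a sketch, assuming the hypothetical REQ_ASYNC_POLL definition):

	if (iocb->ki_flags & IOCB_HIPRI) {
		/* sync polled IO may block; async polled IO must not */
		bio->bi_opf |= is_sync ? REQ_HIPRI : REQ_ASYNC_POLL;
	}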

Patch

diff --git a/fs/block_dev.c b/fs/block_dev.c
index 6de8d35f6e41..ebc3d5a0f424 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -402,8 +402,16 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 
 		nr_pages = iov_iter_npages(iter, BIO_MAX_PAGES);
 		if (!nr_pages) {
-			if (iocb->ki_flags & IOCB_HIPRI)
+			if (iocb->ki_flags & IOCB_HIPRI) {
 				bio->bi_opf |= REQ_HIPRI;
+				/*
+				 * For async polled IO, we can't wait for
+				 * requests to complete, as they may also be
+				 * polled and require active reaping.
+				 */
+				if (!is_sync)
+					bio->bi_opf |= REQ_NOWAIT;
+			}
 
 			qc = submit_bio(bio);
 			WRITE_ONCE(iocb->ki_cookie, qc);