
null_blk: fix poll request timeout handling

Message ID 20230815060443.660263-1-chengming.zhou@linux.dev (mailing list archive)
State New, archived
Series: null_blk: fix poll request timeout handling

Commit Message

Chengming Zhou Aug. 15, 2023, 6:04 a.m. UTC
From: Chengming Zhou <zhouchengming@bytedance.com>

When running an io_uring benchmark on /dev/nullb0, it's easy to crash
the kernel if poll request timeouts are triggered, as reported by
David. [1]

BUG: kernel NULL pointer dereference, address: 0000000000000008
Workqueue: kblockd blk_mq_timeout_work
RIP: 0010:null_timeout_rq+0x4e/0x91
Call Trace:
 ? __die_body+0x1a/0x5c
 ? page_fault_oops+0x6f/0x9c
 ? kernelmode_fixup_or_oops+0xc6/0xd6
 ? __bad_area_nosemaphore+0x44/0x1eb
 ? exc_page_fault+0xe2/0xf4
 ? asm_exc_page_fault+0x22/0x30
 ? null_timeout_rq+0x4e/0x91
 blk_mq_handle_expired+0x31/0x4b
 bt_iter+0x68/0x84
 ? bt_tags_iter+0x81/0x81
 __sbitmap_for_each_set.constprop.0+0xb0/0xf2
 ? __blk_mq_complete_request_remote+0xf/0xf
 bt_for_each+0x46/0x64
 ? __blk_mq_complete_request_remote+0xf/0xf
 ? percpu_ref_get_many+0xc/0x2a
 blk_mq_queue_tag_busy_iter+0x14d/0x18e
 blk_mq_timeout_work+0x95/0x127
 process_one_work+0x185/0x263
 worker_thread+0x1b5/0x227
 ? rescuer_thread+0x287/0x287
 kthread+0xfa/0x102
 ? kthread_complete_and_exit+0x1b/0x1b
 ret_from_fork+0x22/0x30

This is indeed a race problem between null_timeout_rq() and null_poll().

null_poll()				null_timeout_rq()
  spin_lock(&nq->poll_lock)
  list_splice_init(&nq->poll_list, &list)
  spin_unlock(&nq->poll_lock)

  while (!list_empty(&list))
    req = list_first_entry()
    list_del_init()
    ...
    blk_mq_add_to_batch()
    // req->rq_next = NULL
					spin_lock(&nq->poll_lock)

					// rq->queuelist->next == NULL
					list_del_init(&rq->queuelist)

					spin_unlock(&nq->poll_lock)
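
The faulting address 0000000000000008 is consistent with this: queuelist and
rq_next overlap in struct request (sketched below from memory; the exact
layout may differ across kernel versions), so blk_mq_add_to_batch() setting
rq->rq_next = NULL clears queuelist.next, and list_del_init() then writes to
next->prev, i.e. offset 8 of a NULL pointer.

```
/* Rough sketch of the overlapping fields in struct request
 * (include/linux/blk-mq.h, paraphrased from memory): */
union {
	struct list_head queuelist;	/* linkage used on nq->poll_list */
	struct request *rq_next;	/* linkage reused for io_comp_batch */
};
```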

What's worse is that we don't call blk_mq_complete_request_remote()
before blk_mq_add_to_batch(), so these completed requests are left in
the wrong state, rq->state == MQ_RQ_IN_FLIGHT. We can easily check
this using bpftrace:

```
bpftrace -e 'kretfunc:null_blk:null_poll {
  $iob=(struct io_comp_batch *)args->iob;
  @[$iob->req_list->state]=count();
}'

@[1]: 51708
```
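
For reference, the @[1] key above is the raw rq->state value, which maps to
enum mq_rq_state (reproduced below from memory; see include/linux/blk-mq.h
for the authoritative definition), so 1 is indeed MQ_RQ_IN_FLIGHT:

```
/* enum mq_rq_state, paraphrased from include/linux/blk-mq.h */
enum mq_rq_state {
	MQ_RQ_IDLE	= 0,	/* not started */
	MQ_RQ_IN_FLIGHT	= 1,	/* owned by the driver */
	MQ_RQ_COMPLETE	= 2,	/* completion has run */
};
```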

Fix these problems by setting the request state to MQ_RQ_COMPLETE under
nq->poll_lock protection, so that null_timeout_rq() can safely detect
this race and return early.

[1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/

Fixes: 0a593fbbc245 ("null_blk: poll queue support")
Reported-by: David Howells <dhowells@redhat.com>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
 drivers/block/null_blk/main.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

Comments

David Howells Aug. 15, 2023, 8:19 a.m. UTC | #1
chengming.zhou@linux.dev wrote:

> From: Chengming Zhou <zhouchengming@bytedance.com>
> 
> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
> kernel if poll requests timeout triggered, as reported by David. [1]
> 
> BUG: kernel NULL pointer dereference, address: 0000000000000008
> Workqueue: kblockd blk_mq_timeout_work
> RIP: 0010:null_timeout_rq+0x4e/0x91
> Call Trace:
>  ? __die_body+0x1a/0x5c
>  ? page_fault_oops+0x6f/0x9c
>  ? kernelmode_fixup_or_oops+0xc6/0xd6
>  ? __bad_area_nosemaphore+0x44/0x1eb
>  ? exc_page_fault+0xe2/0xf4
>  ? asm_exc_page_fault+0x22/0x30
>  ? null_timeout_rq+0x4e/0x91
>  blk_mq_handle_expired+0x31/0x4b
>  bt_iter+0x68/0x84
>  ? bt_tags_iter+0x81/0x81
>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
>  ? __blk_mq_complete_request_remote+0xf/0xf
>  bt_for_each+0x46/0x64
>  ? __blk_mq_complete_request_remote+0xf/0xf
>  ? percpu_ref_get_many+0xc/0x2a
>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
>  blk_mq_timeout_work+0x95/0x127
>  process_one_work+0x185/0x263
>  worker_thread+0x1b5/0x227
>  ? rescuer_thread+0x287/0x287
>  kthread+0xfa/0x102
>  ? kthread_complete_and_exit+0x1b/0x1b
>  ret_from_fork+0x22/0x30
> 
> This is indeed a race problem between null_timeout_rq() and null_poll().
> 
> null_poll()				null_timeout_rq()
>   spin_lock(&nq->poll_lock)
>   list_splice_init(&nq->poll_list, &list)
>   spin_unlock(&nq->poll_lock)
> 
>   while (!list_empty(&list))
>     req = list_first_entry()
>     list_del_init()
>     ...
>     blk_mq_add_to_batch()
>     // req->rq_next = NULL
> 					spin_lock(&nq->poll_lock)
> 
> 					// rq->queuelist->next == NULL
> 					list_del_init(&rq->queuelist)
> 
> 					spin_unlock(&nq->poll_lock)
> 
> What's worse is that we don't call blk_mq_complete_request_remote()
> before blk_mq_add_to_batch(), so these completed requests have wrong
> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
> 
> ```
> bpftrace -e 'kretfunc:null_blk:null_poll {
>   $iob=(struct io_comp_batch *)args->iob;
>   @[$iob->req_list->state]=count();
> }'
> 
> @[1]: 51708
> ```
> 
> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
> nq->poll_lock protection, in which null_timeout_rq() can safely detect
> this race and early return.
> 
> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
> 
> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
> Reported-by: David Howells <dhowells@redhat.com>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Okay, the oops no longer seems to happen, so on that basis:

Tested-by: David Howells <dhowells@redhat.com>


However, running:

	./fio/t/io_uring -n4 /dev/nullb0

and then interrupting it with ctrl-C after a while dumps a whole load of
messages into the dmesg log (excerpt attached).  It seems wrong that the user
should be able to generate a dump like this just by interrupting - but I guess
as it's null_blk it probably doesn't matter.

David
---
null_blk: rq 00000000bb2d3264 timed out
timeout error, dev nullb0, sector 328372624 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000abcc1075 timed out
timeout error, dev nullb0, sector 378610072 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000d4bdc71f timed out
timeout error, dev nullb0, sector 185005312 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000f4ffddee timed out
timeout error, dev nullb0, sector 206118608 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 000000001e68b709 timed out
timeout error, dev nullb0, sector 310381160 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000bfeafe97 timed out
timeout error, dev nullb0, sector 52036480 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000aa67d21c timed out
timeout error, dev nullb0, sector 22746448 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000faec1291 timed out
timeout error, dev nullb0, sector 391201440 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 00000000c634428c timed out
timeout error, dev nullb0, sector 237216136 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 0000000077f91a5d timed out
timeout error, dev nullb0, sector 453778912 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
null_blk: rq 000000003076467c timed out
null_blk: rq 000000009c172678 timed out
null_blk: rq 000000002df50b48 timed out
null_blk: rq 00000000e4c66900 timed out
null_blk: rq 0000000082606e31 timed out
null_blk: rq 00000000fe21ffdc timed out
null_blk: rq 000000005e5c5173 timed out
null_blk: rq 00000000b0a0d20c timed out
null_blk: rq 000000008c729e47 timed out
null_blk: rq 00000000970f75a0 timed out
null_blk: rq 000000002ad3c45a timed out
Ming Lei Aug. 15, 2023, 8:31 a.m. UTC | #2
On Tue, Aug 15, 2023 at 02:04:42PM +0800, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
> 
> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
> kernel if poll requests timeout triggered, as reported by David. [1]

Just out of curiosity, how is the timeout triggered when running
"./fio/t/io_uring -r20 /dev/nullb0"?

David mentioned that the issue is triggered in 6.5-rc1, so maybe a
regression?

> 
> BUG: kernel NULL pointer dereference, address: 0000000000000008
> Workqueue: kblockd blk_mq_timeout_work
> RIP: 0010:null_timeout_rq+0x4e/0x91
> Call Trace:
>  ? __die_body+0x1a/0x5c
>  ? page_fault_oops+0x6f/0x9c
>  ? kernelmode_fixup_or_oops+0xc6/0xd6
>  ? __bad_area_nosemaphore+0x44/0x1eb
>  ? exc_page_fault+0xe2/0xf4
>  ? asm_exc_page_fault+0x22/0x30
>  ? null_timeout_rq+0x4e/0x91
>  blk_mq_handle_expired+0x31/0x4b
>  bt_iter+0x68/0x84
>  ? bt_tags_iter+0x81/0x81
>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
>  ? __blk_mq_complete_request_remote+0xf/0xf
>  bt_for_each+0x46/0x64
>  ? __blk_mq_complete_request_remote+0xf/0xf
>  ? percpu_ref_get_many+0xc/0x2a
>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
>  blk_mq_timeout_work+0x95/0x127
>  process_one_work+0x185/0x263
>  worker_thread+0x1b5/0x227
>  ? rescuer_thread+0x287/0x287
>  kthread+0xfa/0x102
>  ? kthread_complete_and_exit+0x1b/0x1b
>  ret_from_fork+0x22/0x30
> 
> This is indeed a race problem between null_timeout_rq() and null_poll().
> 
> null_poll()				null_timeout_rq()
>   spin_lock(&nq->poll_lock)
>   list_splice_init(&nq->poll_list, &list)
>   spin_unlock(&nq->poll_lock)
> 
>   while (!list_empty(&list))
>     req = list_first_entry()
>     list_del_init()
>     ...
>     blk_mq_add_to_batch()
>     // req->rq_next = NULL
> 					spin_lock(&nq->poll_lock)
> 
> 					// rq->queuelist->next == NULL
> 					list_del_init(&rq->queuelist)
> 
> 					spin_unlock(&nq->poll_lock)
> 
> What's worse is that we don't call blk_mq_complete_request_remote()
> before blk_mq_add_to_batch(), so these completed requests have wrong
> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
> 
> ```
> bpftrace -e 'kretfunc:null_blk:null_poll {
>   $iob=(struct io_comp_batch *)args->iob;
>   @[$iob->req_list->state]=count();
> }'
> 
> @[1]: 51708
> ```
> 
> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
> nq->poll_lock protection, in which null_timeout_rq() can safely detect
> this race and early return.
> 
> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
> 
> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
> Reported-by: David Howells <dhowells@redhat.com>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> ---
>  drivers/block/null_blk/main.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index 864013019d6b..968090935eb2 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -1643,9 +1643,12 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
>  	struct nullb_queue *nq = hctx->driver_data;
>  	LIST_HEAD(list);
>  	int nr = 0;
> +	struct request *rq;
>  
>  	spin_lock(&nq->poll_lock);
>  	list_splice_init(&nq->poll_list, &list);
> +	list_for_each_entry(rq, &list, queuelist)
> +		blk_mq_set_request_complete(rq);
>  	spin_unlock(&nq->poll_lock);
>  
>  	while (!list_empty(&list)) {
> @@ -1671,16 +1674,21 @@ static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>  	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
>  
> -	pr_info("rq %p timed out\n", rq);
> -
>  	if (hctx->type == HCTX_TYPE_POLL) {
>  		struct nullb_queue *nq = hctx->driver_data;
>  
>  		spin_lock(&nq->poll_lock);
> +		/* The request may have completed meanwhile. */
> +		if (blk_mq_request_completed(rq)) {
> +			spin_unlock(&nq->poll_lock);
> +			return BLK_EH_DONE;
> +		}
>  		list_del_init(&rq->queuelist);
>  		spin_unlock(&nq->poll_lock);
>  	}

I think null_process_cmd() is needed for the un-completed request.


Thanks,
Ming
Chengming Zhou Aug. 15, 2023, 9:45 a.m. UTC | #3
On 2023/8/15 16:31, Ming Lei wrote:
> On Tue, Aug 15, 2023 at 02:04:42PM +0800, chengming.zhou@linux.dev wrote:
>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>
>> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
>> kernel if poll requests timeout triggered, as reported by David. [1]
> 
> Just be curious, how is the timeout triggered when running
> "./fio/t/io_uring -r20 /dev/nullb0"?

I tried "./fio/t/io_uring -r20 /dev/nullb0" multiple times; sometimes
the program exits OK, sometimes it dumps many timeout messages and hits
the kernel BUG.

I just used David's "./fio/t/io_uring -n4 /dev/nullb0", ran it for a
while and then hit ctrl-C; the program will always dump many timeout
messages.

It seems that the program exit is not clean, leaving some requests in
the driver that haven't been polled & completed? I don't know.

> 
> David mentioned that the issue is triggered in 6.5-rc1, maybe one
> regression?
> 

I just tested v6.4.9 and found the same timeout and kernel BUG
using "./fio/t/io_uring -n4 /dev/nullb0".

>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000008
>> Workqueue: kblockd blk_mq_timeout_work
>> RIP: 0010:null_timeout_rq+0x4e/0x91
>> Call Trace:
>>  ? __die_body+0x1a/0x5c
>>  ? page_fault_oops+0x6f/0x9c
>>  ? kernelmode_fixup_or_oops+0xc6/0xd6
>>  ? __bad_area_nosemaphore+0x44/0x1eb
>>  ? exc_page_fault+0xe2/0xf4
>>  ? asm_exc_page_fault+0x22/0x30
>>  ? null_timeout_rq+0x4e/0x91
>>  blk_mq_handle_expired+0x31/0x4b
>>  bt_iter+0x68/0x84
>>  ? bt_tags_iter+0x81/0x81
>>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
>>  ? __blk_mq_complete_request_remote+0xf/0xf
>>  bt_for_each+0x46/0x64
>>  ? __blk_mq_complete_request_remote+0xf/0xf
>>  ? percpu_ref_get_many+0xc/0x2a
>>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
>>  blk_mq_timeout_work+0x95/0x127
>>  process_one_work+0x185/0x263
>>  worker_thread+0x1b5/0x227
>>  ? rescuer_thread+0x287/0x287
>>  kthread+0xfa/0x102
>>  ? kthread_complete_and_exit+0x1b/0x1b
>>  ret_from_fork+0x22/0x30
>>
>> This is indeed a race problem between null_timeout_rq() and null_poll().
>>
>> null_poll()				null_timeout_rq()
>>   spin_lock(&nq->poll_lock)
>>   list_splice_init(&nq->poll_list, &list)
>>   spin_unlock(&nq->poll_lock)
>>
>>   while (!list_empty(&list))
>>     req = list_first_entry()
>>     list_del_init()
>>     ...
>>     blk_mq_add_to_batch()
>>     // req->rq_next = NULL
>> 					spin_lock(&nq->poll_lock)
>>
>> 					// rq->queuelist->next == NULL
>> 					list_del_init(&rq->queuelist)
>>
>> 					spin_unlock(&nq->poll_lock)
>>
>> What's worse is that we don't call blk_mq_complete_request_remote()
>> before blk_mq_add_to_batch(), so these completed requests have wrong
>> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
>>
>> ```
>> bpftrace -e 'kretfunc:null_blk:null_poll {
>>   $iob=(struct io_comp_batch *)args->iob;
>>   @[$iob->req_list->state]=count();
>> }'
>>
>> @[1]: 51708
>> ```
>>
>> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
>> nq->poll_lock protection, in which null_timeout_rq() can safely detect
>> this race and early return.
>>
>> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
>>
>> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
>> Reported-by: David Howells <dhowells@redhat.com>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>>  drivers/block/null_blk/main.c | 12 ++++++++++--
>>  1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
>> index 864013019d6b..968090935eb2 100644
>> --- a/drivers/block/null_blk/main.c
>> +++ b/drivers/block/null_blk/main.c
>> @@ -1643,9 +1643,12 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
>>  	struct nullb_queue *nq = hctx->driver_data;
>>  	LIST_HEAD(list);
>>  	int nr = 0;
>> +	struct request *rq;
>>  
>>  	spin_lock(&nq->poll_lock);
>>  	list_splice_init(&nq->poll_list, &list);
>> +	list_for_each_entry(rq, &list, queuelist)
>> +		blk_mq_set_request_complete(rq);
>>  	spin_unlock(&nq->poll_lock);
>>  
>>  	while (!list_empty(&list)) {
>> @@ -1671,16 +1674,21 @@ static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
>>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
>>  	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
>>  
>> -	pr_info("rq %p timed out\n", rq);
>> -
>>  	if (hctx->type == HCTX_TYPE_POLL) {
>>  		struct nullb_queue *nq = hctx->driver_data;
>>  
>>  		spin_lock(&nq->poll_lock);
>> +		/* The request may have completed meanwhile. */
>> +		if (blk_mq_request_completed(rq)) {
>> +			spin_unlock(&nq->poll_lock);
>> +			return BLK_EH_DONE;
>> +		}
>>  		list_del_init(&rq->queuelist);
>>  		spin_unlock(&nq->poll_lock);
>>  	}
> 
> I think null_process_cmd() is needed for un-completed request.
> 

The end of the function will set the BLK_STS_TIMEOUT error and complete
the request using blk_mq_complete_request(), so I'm not sure if
null_process_cmd() is needed in this error case?
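
For context, the tail of null_timeout_rq() (paraphrased here from the driver
around this kernel version; the exact code may differ) ends roughly like the
sketch below, which is why the timed-out request is completed with an error
rather than run through null_process_cmd():

```
	/* Rough paraphrase of the end of null_timeout_rq(); see
	 * drivers/block/null_blk/main.c for the authoritative code. */
	cmd->error = BLK_STS_TIMEOUT;
	if (cmd->fake_timeout || hctx->type == HCTX_TYPE_POLL)
		blk_mq_complete_request(rq);
	return BLK_EH_DONE;
```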

Thanks.
Ming Lei Aug. 15, 2023, 12:18 p.m. UTC | #4
On Tue, Aug 15, 2023 at 05:45:10PM +0800, Chengming Zhou wrote:
> On 2023/8/15 16:31, Ming Lei wrote:
> > On Tue, Aug 15, 2023 at 02:04:42PM +0800, chengming.zhou@linux.dev wrote:
> >> From: Chengming Zhou <zhouchengming@bytedance.com>
> >>
> >> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
> >> kernel if poll requests timeout triggered, as reported by David. [1]
> > 
> > Just be curious, how is the timeout triggered when running
> > "./fio/t/io_uring -r20 /dev/nullb0"?
> 
> I tried "./fio/t/io_uring -r20 /dev/nullb0" multiple times, sometimes
> program exit ok, sometimes it dump many timeout messages and kernel BUG.
> 
> I just used David's "./fio/t/io_uring -n4 /dev/nullb0", run a while
> then ctrl-C, the program will always dump many timeout messages.
> 
> It seems that program exit is not clean, leave some requests in driver
> haven't been polled & completed? I don't know.

Yeah, probably an io_uring exit issue.

> 
> > 
> > David mentioned that the issue is triggered in 6.5-rc1, maybe one
> > regression?
> > 
> 
> I just tested using v6.4.9, found the same timeout and kernel BUG
> using "./fio/t/io_uring -n4 /dev/nullb0".
> 
> >>
> >> BUG: kernel NULL pointer dereference, address: 0000000000000008
> >> Workqueue: kblockd blk_mq_timeout_work
> >> RIP: 0010:null_timeout_rq+0x4e/0x91
> >> Call Trace:
> >>  ? __die_body+0x1a/0x5c
> >>  ? page_fault_oops+0x6f/0x9c
> >>  ? kernelmode_fixup_or_oops+0xc6/0xd6
> >>  ? __bad_area_nosemaphore+0x44/0x1eb
> >>  ? exc_page_fault+0xe2/0xf4
> >>  ? asm_exc_page_fault+0x22/0x30
> >>  ? null_timeout_rq+0x4e/0x91
> >>  blk_mq_handle_expired+0x31/0x4b
> >>  bt_iter+0x68/0x84
> >>  ? bt_tags_iter+0x81/0x81
> >>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
> >>  ? __blk_mq_complete_request_remote+0xf/0xf
> >>  bt_for_each+0x46/0x64
> >>  ? __blk_mq_complete_request_remote+0xf/0xf
> >>  ? percpu_ref_get_many+0xc/0x2a
> >>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
> >>  blk_mq_timeout_work+0x95/0x127
> >>  process_one_work+0x185/0x263
> >>  worker_thread+0x1b5/0x227
> >>  ? rescuer_thread+0x287/0x287
> >>  kthread+0xfa/0x102
> >>  ? kthread_complete_and_exit+0x1b/0x1b
> >>  ret_from_fork+0x22/0x30
> >>
> >> This is indeed a race problem between null_timeout_rq() and null_poll().
> >>
> >> null_poll()				null_timeout_rq()
> >>   spin_lock(&nq->poll_lock)
> >>   list_splice_init(&nq->poll_list, &list)
> >>   spin_unlock(&nq->poll_lock)
> >>
> >>   while (!list_empty(&list))
> >>     req = list_first_entry()
> >>     list_del_init()
> >>     ...
> >>     blk_mq_add_to_batch()
> >>     // req->rq_next = NULL
> >> 					spin_lock(&nq->poll_lock)
> >>
> >> 					// rq->queuelist->next == NULL
> >> 					list_del_init(&rq->queuelist)
> >>
> >> 					spin_unlock(&nq->poll_lock)
> >>
> >> What's worse is that we don't call blk_mq_complete_request_remote()
> >> before blk_mq_add_to_batch(), so these completed requests have wrong
> >> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
> >>
> >> ```
> >> bpftrace -e 'kretfunc:null_blk:null_poll {
> >>   $iob=(struct io_comp_batch *)args->iob;
> >>   @[$iob->req_list->state]=count();
> >> }'
> >>
> >> @[1]: 51708
> >> ```
> >>
> >> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
> >> nq->poll_lock protection, in which null_timeout_rq() can safely detect
> >> this race and early return.
> >>
> >> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
> >>
> >> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
> >> Reported-by: David Howells <dhowells@redhat.com>
> >> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> >> ---
> >>  drivers/block/null_blk/main.c | 12 ++++++++++--
> >>  1 file changed, 10 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> >> index 864013019d6b..968090935eb2 100644
> >> --- a/drivers/block/null_blk/main.c
> >> +++ b/drivers/block/null_blk/main.c
> >> @@ -1643,9 +1643,12 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
> >>  	struct nullb_queue *nq = hctx->driver_data;
> >>  	LIST_HEAD(list);
> >>  	int nr = 0;
> >> +	struct request *rq;
> >>  
> >>  	spin_lock(&nq->poll_lock);
> >>  	list_splice_init(&nq->poll_list, &list);
> >> +	list_for_each_entry(rq, &list, queuelist)
> >> +		blk_mq_set_request_complete(rq);
> >>  	spin_unlock(&nq->poll_lock);
> >>  
> >>  	while (!list_empty(&list)) {
> >> @@ -1671,16 +1674,21 @@ static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
> >>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> >>  	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
> >>  
> >> -	pr_info("rq %p timed out\n", rq);
> >> -
> >>  	if (hctx->type == HCTX_TYPE_POLL) {
> >>  		struct nullb_queue *nq = hctx->driver_data;
> >>  
> >>  		spin_lock(&nq->poll_lock);
> >> +		/* The request may have completed meanwhile. */
> >> +		if (blk_mq_request_completed(rq)) {
> >> +			spin_unlock(&nq->poll_lock);
> >> +			return BLK_EH_DONE;
> >> +		}
> >>  		list_del_init(&rq->queuelist);
> >>  		spin_unlock(&nq->poll_lock);
> >>  	}
> > 
> > I think null_process_cmd() is needed for un-completed request.
> > 
> 
> The end of function will set BLK_STS_TIMEOUT error and complete request
> using blk_mq_complete_request(), not sure if null_process_cmd() is
> needed in this error case?

Indeed, then there's no need to process this cmd; sorry for the noise.


Thanks,
Ming
Chengming Zhou Aug. 16, 2023, 9:15 a.m. UTC | #5
On 2023/8/15 16:19, David Howells wrote:
> chengming.zhou@linux.dev wrote:
> 
>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>
>> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
>> kernel if poll requests timeout triggered, as reported by David. [1]
>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000008
>> Workqueue: kblockd blk_mq_timeout_work
>> RIP: 0010:null_timeout_rq+0x4e/0x91
>> Call Trace:
>>  ? __die_body+0x1a/0x5c
>>  ? page_fault_oops+0x6f/0x9c
>>  ? kernelmode_fixup_or_oops+0xc6/0xd6
>>  ? __bad_area_nosemaphore+0x44/0x1eb
>>  ? exc_page_fault+0xe2/0xf4
>>  ? asm_exc_page_fault+0x22/0x30
>>  ? null_timeout_rq+0x4e/0x91
>>  blk_mq_handle_expired+0x31/0x4b
>>  bt_iter+0x68/0x84
>>  ? bt_tags_iter+0x81/0x81
>>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
>>  ? __blk_mq_complete_request_remote+0xf/0xf
>>  bt_for_each+0x46/0x64
>>  ? __blk_mq_complete_request_remote+0xf/0xf
>>  ? percpu_ref_get_many+0xc/0x2a
>>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
>>  blk_mq_timeout_work+0x95/0x127
>>  process_one_work+0x185/0x263
>>  worker_thread+0x1b5/0x227
>>  ? rescuer_thread+0x287/0x287
>>  kthread+0xfa/0x102
>>  ? kthread_complete_and_exit+0x1b/0x1b
>>  ret_from_fork+0x22/0x30
>>
>> This is indeed a race problem between null_timeout_rq() and null_poll().
>>
>> null_poll()				null_timeout_rq()
>>   spin_lock(&nq->poll_lock)
>>   list_splice_init(&nq->poll_list, &list)
>>   spin_unlock(&nq->poll_lock)
>>
>>   while (!list_empty(&list))
>>     req = list_first_entry()
>>     list_del_init()
>>     ...
>>     blk_mq_add_to_batch()
>>     // req->rq_next = NULL
>> 					spin_lock(&nq->poll_lock)
>>
>> 					// rq->queuelist->next == NULL
>> 					list_del_init(&rq->queuelist)
>>
>> 					spin_unlock(&nq->poll_lock)
>>
>> What's worse is that we don't call blk_mq_complete_request_remote()
>> before blk_mq_add_to_batch(), so these completed requests have wrong
>> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
>>
>> ```
>> bpftrace -e 'kretfunc:null_blk:null_poll {
>>   $iob=(struct io_comp_batch *)args->iob;
>>   @[$iob->req_list->state]=count();
>> }'
>>
>> @[1]: 51708
>> ```
>>
>> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
>> nq->poll_lock protection, in which null_timeout_rq() can safely detect
>> this race and early return.
>>
>> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
>>
>> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
>> Reported-by: David Howells <dhowells@redhat.com>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> 
> Okay, the oops no longer seems to happen, so on that basis:
> 
> Tested-by: David Howells <dhowells@redhat.com>
> 

Yes, this patch just fixes the kernel oops when a request timeout happens.

> 
> However, running:
> 
> 	./fio/t/io_uring -n4 /dev/nullb0
> 
> and then interrupting it with ctrl-C after a while dumps a whole load of
> messages into the dmesg log (excerpt attached).  It seems wrong that the user
> should be able to generate a dump like this just by interrupting - but I guess
> as it's null_blk it probably doesn't matter.

I can reproduce it; maybe it's an issue in io_uring. Although io_uring will
reap all pending requests when the task exits, it seems that it blocks for
some seconds before doing the reap, so the timeouts happen. I'm not sure,
just a guess ;-)

Thanks.

> 
> David
> ---
> null_blk: rq 00000000bb2d3264 timed out
> timeout error, dev nullb0, sector 328372624 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000abcc1075 timed out
> timeout error, dev nullb0, sector 378610072 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000d4bdc71f timed out
> timeout error, dev nullb0, sector 185005312 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000f4ffddee timed out
> timeout error, dev nullb0, sector 206118608 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 000000001e68b709 timed out
> timeout error, dev nullb0, sector 310381160 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000bfeafe97 timed out
> timeout error, dev nullb0, sector 52036480 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000aa67d21c timed out
> timeout error, dev nullb0, sector 22746448 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000faec1291 timed out
> timeout error, dev nullb0, sector 391201440 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000c634428c timed out
> timeout error, dev nullb0, sector 237216136 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 0000000077f91a5d timed out
> timeout error, dev nullb0, sector 453778912 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 000000003076467c timed out
> null_blk: rq 000000009c172678 timed out
> null_blk: rq 000000002df50b48 timed out
> null_blk: rq 00000000e4c66900 timed out
> null_blk: rq 0000000082606e31 timed out
> null_blk: rq 00000000fe21ffdc timed out
> null_blk: rq 000000005e5c5173 timed out
> null_blk: rq 00000000b0a0d20c timed out
> null_blk: rq 000000008c729e47 timed out
> null_blk: rq 00000000970f75a0 timed out
> null_blk: rq 000000002ad3c45a timed out
>
Ming Lei Aug. 17, 2023, 9:02 a.m. UTC | #6
On Tue, Aug 15, 2023 at 05:45:10PM +0800, Chengming Zhou wrote:
> On 2023/8/15 16:31, Ming Lei wrote:
> > On Tue, Aug 15, 2023 at 02:04:42PM +0800, chengming.zhou@linux.dev wrote:
> >> From: Chengming Zhou <zhouchengming@bytedance.com>
> >>
> >> When doing io_uring benchmark on /dev/nullb0, it's easy to crash the
> >> kernel if poll requests timeout triggered, as reported by David. [1]
> > 
> > Just be curious, how is the timeout triggered when running
> > "./fio/t/io_uring -r20 /dev/nullb0"?
> 
> I tried "./fio/t/io_uring -r20 /dev/nullb0" multiple times, sometimes
> program exit ok, sometimes it dump many timeout messages and kernel BUG.
> 
> I just used David's "./fio/t/io_uring -n4 /dev/nullb0", run a while
> then ctrl-C, the program will always dump many timeout messages.
> 
> It seems that program exit is not clean, leave some requests in driver
> haven't been polled & completed? I don't know.

I just ran a trace on this issue, and it looks like it is caused by delayed
__fput():

1) it is pretty easy for __fput() of the io_uring file to happen too late
when passing -n4

2) then io_ring_exit_work() is scheduled (for polling not-yet-completed IOs)
too late, since it is started from io_uring_release()

3) null_blk sets the timeout to 5 sec, so the issue is easier to trigger

But I haven't seen the delayed __fput() triggered on nvme yet, so I'm still
a bit confused.
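
For what it's worth, a rough bpftrace sketch like the one below can be used
to eyeball that ordering (only a sketch: it assumes io_uring_release and
io_ring_exit_work are visible in kallsyms and kprobe-able on the running
kernel):

```
bpftrace -e '
kprobe:io_uring_release  { printf("%llu ms: io_uring_release (%s)\n", nsecs / 1000000, comm); }
kprobe:io_ring_exit_work { printf("%llu ms: io_ring_exit_work\n", nsecs / 1000000); }
'
```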

> > 
> > David mentioned that the issue is triggered in 6.5-rc1, maybe one
> > regression?
> > 
> 
> I just tested using v6.4.9, found the same timeout and kernel BUG
> using "./fio/t/io_uring -n4 /dev/nullb0".
> 
> >>
> >> BUG: kernel NULL pointer dereference, address: 0000000000000008
> >> Workqueue: kblockd blk_mq_timeout_work
> >> RIP: 0010:null_timeout_rq+0x4e/0x91
> >> Call Trace:
> >>  ? __die_body+0x1a/0x5c
> >>  ? page_fault_oops+0x6f/0x9c
> >>  ? kernelmode_fixup_or_oops+0xc6/0xd6
> >>  ? __bad_area_nosemaphore+0x44/0x1eb
> >>  ? exc_page_fault+0xe2/0xf4
> >>  ? asm_exc_page_fault+0x22/0x30
> >>  ? null_timeout_rq+0x4e/0x91
> >>  blk_mq_handle_expired+0x31/0x4b
> >>  bt_iter+0x68/0x84
> >>  ? bt_tags_iter+0x81/0x81
> >>  __sbitmap_for_each_set.constprop.0+0xb0/0xf2
> >>  ? __blk_mq_complete_request_remote+0xf/0xf
> >>  bt_for_each+0x46/0x64
> >>  ? __blk_mq_complete_request_remote+0xf/0xf
> >>  ? percpu_ref_get_many+0xc/0x2a
> >>  blk_mq_queue_tag_busy_iter+0x14d/0x18e
> >>  blk_mq_timeout_work+0x95/0x127
> >>  process_one_work+0x185/0x263
> >>  worker_thread+0x1b5/0x227
> >>  ? rescuer_thread+0x287/0x287
> >>  kthread+0xfa/0x102
> >>  ? kthread_complete_and_exit+0x1b/0x1b
> >>  ret_from_fork+0x22/0x30
> >>
> >> This is indeed a race problem between null_timeout_rq() and null_poll().
> >>
> >> null_poll()				null_timeout_rq()
> >>   spin_lock(&nq->poll_lock)
> >>   list_splice_init(&nq->poll_list, &list)
> >>   spin_unlock(&nq->poll_lock)
> >>
> >>   while (!list_empty(&list))
> >>     req = list_first_entry()
> >>     list_del_init()
> >>     ...
> >>     blk_mq_add_to_batch()
> >>     // req->rq_next = NULL
> >> 					spin_lock(&nq->poll_lock)
> >>
> >> 					// rq->queuelist->next == NULL
> >> 					list_del_init(&rq->queuelist)
> >>
> >> 					spin_unlock(&nq->poll_lock)
> >>
> >> What's worse is that we don't call blk_mq_complete_request_remote()
> >> before blk_mq_add_to_batch(), so these completed requests have wrong
> >> rq->state == MQ_RQ_IN_FLIGHT. We can easily check this using bpftrace:
> >>
> >> ```
> >> bpftrace -e 'kretfunc:null_blk:null_poll {
> >>   $iob=(struct io_comp_batch *)args->iob;
> >>   @[$iob->req_list->state]=count();
> >> }'
> >>
> >> @[1]: 51708
> >> ```
> >>
> >> Fix these problems by setting requests state to MQ_RQ_COMPLETE under
> >> nq->poll_lock protection, in which null_timeout_rq() can safely detect
> >> this race and early return.
> >>
> >> [1] https://lore.kernel.org/all/3893581.1691785261@warthog.procyon.org.uk/
> >>
> >> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
> >> Reported-by: David Howells <dhowells@redhat.com>
> >> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> >> ---
> >>  drivers/block/null_blk/main.c | 12 ++++++++++--
> >>  1 file changed, 10 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> >> index 864013019d6b..968090935eb2 100644
> >> --- a/drivers/block/null_blk/main.c
> >> +++ b/drivers/block/null_blk/main.c
> >> @@ -1643,9 +1643,12 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
> >>  	struct nullb_queue *nq = hctx->driver_data;
> >>  	LIST_HEAD(list);
> >>  	int nr = 0;
> >> +	struct request *rq;
> >>  
> >>  	spin_lock(&nq->poll_lock);
> >>  	list_splice_init(&nq->poll_list, &list);
> >> +	list_for_each_entry(rq, &list, queuelist)
> >> +		blk_mq_set_request_complete(rq);
> >>  	spin_unlock(&nq->poll_lock);
> >>  
> >>  	while (!list_empty(&list)) {
> >> @@ -1671,16 +1674,21 @@ static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
> >>  	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
> >>  	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
> >>  
> >> -	pr_info("rq %p timed out\n", rq);
> >> -
> >>  	if (hctx->type == HCTX_TYPE_POLL) {
> >>  		struct nullb_queue *nq = hctx->driver_data;
> >>  
> >>  		spin_lock(&nq->poll_lock);
> >> +		/* The request may have completed meanwhile. */
> >> +		if (blk_mq_request_completed(rq)) {
> >> +			spin_unlock(&nq->poll_lock);
> >> +			return BLK_EH_DONE;
> >> +		}
> >>  		list_del_init(&rq->queuelist);
> >>  		spin_unlock(&nq->poll_lock);
> >>  	}
> > 
> > I think null_process_cmd() is needed for un-completed request.
> > 
> 
> The end of function will set BLK_STS_TIMEOUT error and complete request
> using blk_mq_complete_request(), not sure if null_process_cmd() is
> needed in this error case?

The null_blk fix looks fine:

Reviewed-by: Ming Lei <ming.lei@redhat.com>


thanks,
Ming

Patch

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 864013019d6b..968090935eb2 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1643,9 +1643,12 @@  static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
 	struct nullb_queue *nq = hctx->driver_data;
 	LIST_HEAD(list);
 	int nr = 0;
+	struct request *rq;
 
 	spin_lock(&nq->poll_lock);
 	list_splice_init(&nq->poll_list, &list);
+	list_for_each_entry(rq, &list, queuelist)
+		blk_mq_set_request_complete(rq);
 	spin_unlock(&nq->poll_lock);
 
 	while (!list_empty(&list)) {
@@ -1671,16 +1674,21 @@  static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
 	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 	struct nullb_cmd *cmd = blk_mq_rq_to_pdu(rq);
 
-	pr_info("rq %p timed out\n", rq);
-
 	if (hctx->type == HCTX_TYPE_POLL) {
 		struct nullb_queue *nq = hctx->driver_data;
 
 		spin_lock(&nq->poll_lock);
+		/* The request may have completed meanwhile. */
+		if (blk_mq_request_completed(rq)) {
+			spin_unlock(&nq->poll_lock);
+			return BLK_EH_DONE;
+		}
 		list_del_init(&rq->queuelist);
 		spin_unlock(&nq->poll_lock);
 	}
 
+	pr_info("rq %p timed out\n", rq);
+
 	/*
 	 * If the device is marked as blocking (i.e. memory backed or zoned
 	 * device), the submission path may be blocked waiting for resources