
[-next,v2] blk-mq: fix panic during blk_mq_run_work_fn()

Message ID 20220520032542.3331610-1-yukuai3@huawei.com (mailing list archive)
State New, archived
Series [-next,v2] blk-mq: fix panic during blk_mq_run_work_fn()

Commit Message

Yu Kuai May 20, 2022, 3:25 a.m. UTC
Our test reported the following crash:

BUG: kernel NULL pointer dereference, address: 0000000000000018
PGD 0 P4D 0
Oops: 0000 [#1] SMP NOPTI
CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G           O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
Workqueue: kblockd blk_mq_run_work_fn
RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
Call Trace:
 __blk_mq_do_dispatch_sched+0x2a7/0x2c0
 ? newidle_balance+0x23e/0x2f0
 __blk_mq_sched_dispatch_requests+0x13f/0x190
 blk_mq_sched_dispatch_requests+0x30/0x60
 __blk_mq_run_hw_queue+0x47/0xd0
 process_one_work+0x1b0/0x350
 worker_thread+0x49/0x300
 ? rescuer_thread+0x3a0/0x3a0
 kthread+0xfe/0x140
 ? kthread_park+0x90/0x90
 ret_from_fork+0x22/0x30

After digging into the vmcore, I found that the queue is cleaned
up (blk_cleanup_queue() is done) and the tag set is
freed (blk_mq_free_tag_set() is done).

There are two problems here:

1) blk_mq_delay_run_hw_queues() will only be called from
__blk_mq_do_dispatch_sched() if e->type->ops.has_work() returns true.
This seems impossible because blk_cleanup_queue() is done, and there
should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
more accurate") fixed the problem in bfq, and currently other schedulers
don't have such a problem.

2) 'hctx->run_work' still exists after blk_cleanup_queue().
blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
all the 'run_work'. However, there is no guarantee that new 'run_work'
won't be queued after that (and before blk_mq_exit_queue() is done).

The first problem is not the root cause; it only increases the
probability of the second problem. This patch fixes the second problem by
checking 'QUEUE_FLAG_DEAD' before queuing 'hctx->run_work', and by
using 'queue_lock' to synchronize queuing new work with cancelling the
old work.
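
To illustrate, the race in problem 2 can be sketched as follows (the
interleaving assumes two hw queues and follows the analysis above):

blk_cleanup_queue
 blk_freeze_queue
 blk_mq_cancel_work_sync
  cancel_delayed_work_sync(hctx1)
				blk_mq_run_work_fn -> hctx2
				 __blk_mq_run_hw_queue
				  blk_mq_sched_dispatch_requests
				   __blk_mq_do_dispatch_sched
				    blk_mq_delay_run_hw_queues
				     -> queues hctx1->run_work again
  cancel_delayed_work_sync(hctx2)
-> hctx1->run_work survives blk_cleanup_queue()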

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-core.c |  3 +++
 block/blk-mq.c   | 10 ++++++++--
 2 files changed, 11 insertions(+), 2 deletions(-)

Comments

Ming Lei May 20, 2022, 3:44 a.m. UTC | #1
On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
> Our test report a following crash:
> 
> BUG: kernel NULL pointer dereference, address: 0000000000000018
> PGD 0 P4D 0
> Oops: 0000 [#1] SMP NOPTI
> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G           O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
> Workqueue: kblockd blk_mq_run_work_fn
> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
> Call Trace:
>  __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>  ? newidle_balance+0x23e/0x2f0
>  __blk_mq_sched_dispatch_requests+0x13f/0x190
>  blk_mq_sched_dispatch_requests+0x30/0x60
>  __blk_mq_run_hw_queue+0x47/0xd0
>  process_one_work+0x1b0/0x350
>  worker_thread+0x49/0x300
>  ? rescuer_thread+0x3a0/0x3a0
>  kthread+0xfe/0x140
>  ? kthread_park+0x90/0x90
>  ret_from_fork+0x22/0x30
> 
> After digging from vmcore, I found that the queue is cleaned
> up(blk_cleanup_queue() is done) and tag set is
> freed(blk_mq_free_tag_set() is done).
> 
> There are two problems here:
> 
> 1) blk_mq_delay_run_hw_queues() will only be called from
> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
> This seems impossible because blk_cleanup_queue() is done, and there
> should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
> more accurate") fix the problem in bfq. And currently ohter schedulers
> don't have such problem.
> 
> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
> all the 'run_work'. However, there is no guarantee that new 'run_work'
> won't be queued after that(and before blk_mq_exit_queue() is done).

It is blk_mq_run_hw_queue() caller's responsibility to grab
->q_usage_counter for avoiding queue cleaned up, so please fix the user
side.
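
For reference, the caller-side pattern described here might look like the
following sketch (illustrative only, not a hunk from this series; it only
uses existing kernel APIs):

	/*
	 * Hold a reference on q_usage_counter across the run so that
	 * blk_cleanup_queue() cannot complete while the queue is run.
	 */
	if (!percpu_ref_tryget(&q->q_usage_counter))
		return;		/* queue is being torn down */
	blk_mq_run_hw_queue(hctx, true);
	percpu_ref_put(&q->q_usage_counter);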


Thanks, 
Ming
Yu Kuai May 20, 2022, 6:23 a.m. UTC | #2
On 2022/05/20 11:44, Ming Lei wrote:
> On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
>> Our test report a following crash:
>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000018
>> PGD 0 P4D 0
>> Oops: 0000 [#1] SMP NOPTI
>> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G           O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
>> Workqueue: kblockd blk_mq_run_work_fn
>> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
>> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
>> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
>> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
>> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
>> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
>> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
>> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000) knlGS:0000000000000000
>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
>> Call Trace:
>>   __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>>   ? newidle_balance+0x23e/0x2f0
>>   __blk_mq_sched_dispatch_requests+0x13f/0x190
>>   blk_mq_sched_dispatch_requests+0x30/0x60
>>   __blk_mq_run_hw_queue+0x47/0xd0
>>   process_one_work+0x1b0/0x350
>>   worker_thread+0x49/0x300
>>   ? rescuer_thread+0x3a0/0x3a0
>>   kthread+0xfe/0x140
>>   ? kthread_park+0x90/0x90
>>   ret_from_fork+0x22/0x30
>>
>> After digging from vmcore, I found that the queue is cleaned
>> up(blk_cleanup_queue() is done) and tag set is
>> freed(blk_mq_free_tag_set() is done).
>>
>> There are two problems here:
>>
>> 1) blk_mq_delay_run_hw_queues() will only be called from
>> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
>> This seems impossible because blk_cleanup_queue() is done, and there
>> should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
>> more accurate") fix the problem in bfq. And currently ohter schedulers
>> don't have such problem.
>>
>> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
>> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
>> all the 'run_work'. However, there is no guarantee that new 'run_work'
>> won't be queued after that(and before blk_mq_exit_queue() is done).
> 
> It is blk_mq_run_hw_queue() caller's responsibility to grab
> ->q_usage_counter for avoiding queue cleaned up, so please fix the user
> side.
> 
Hi,

Thanks for your advice.

blk_mq_run_hw_queue() can be called async; in order to do that, what I
can think of is to grab 'q_usage_counter' before queuing 'run_work'
and release it after, which is very similar to this patch...

Kuai
> 
> Thanks,
> Ming
> 
> .
>
Yu Kuai May 20, 2022, 7:02 a.m. UTC | #3
On 2022/05/20 14:23, yukuai (C) wrote:
> On 2022/05/20 11:44, Ming Lei wrote:
>> On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
>>> Our test report a following crash:
>>>
>>> BUG: kernel NULL pointer dereference, address: 0000000000000018
>>> PGD 0 P4D 0
>>> Oops: 0000 [#1] SMP NOPTI
>>> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G           
>>> O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
>>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
>>> rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
>>> Workqueue: kblockd blk_mq_run_work_fn
>>> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
>>> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
>>> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
>>> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
>>> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
>>> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
>>> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
>>> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000) 
>>> knlGS:0000000000000000
>>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
>>> Call Trace:
>>>   __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>>>   ? newidle_balance+0x23e/0x2f0
>>>   __blk_mq_sched_dispatch_requests+0x13f/0x190
>>>   blk_mq_sched_dispatch_requests+0x30/0x60
>>>   __blk_mq_run_hw_queue+0x47/0xd0
>>>   process_one_work+0x1b0/0x350
>>>   worker_thread+0x49/0x300
>>>   ? rescuer_thread+0x3a0/0x3a0
>>>   kthread+0xfe/0x140
>>>   ? kthread_park+0x90/0x90
>>>   ret_from_fork+0x22/0x30
>>>
>>> After digging from vmcore, I found that the queue is cleaned
>>> up(blk_cleanup_queue() is done) and tag set is
>>> freed(blk_mq_free_tag_set() is done).
>>>
>>> There are two problems here:
>>>
>>> 1) blk_mq_delay_run_hw_queues() will only be called from
>>> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
>>> This seems impossible because blk_cleanup_queue() is done, and there
>>> should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
>>> more accurate") fix the problem in bfq. And currently ohter schedulers
>>> don't have such problem.
>>>
>>> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
>>> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
>>> all the 'run_work'. However, there is no guarantee that new 'run_work'
>>> won't be queued after that(and before blk_mq_exit_queue() is done).
>>
>> It is blk_mq_run_hw_queue() caller's responsibility to grab
>> ->q_usage_counter for avoiding queue cleaned up, so please fix the user
>> side.
>>
> Hi,
> 
> Thanks for your advice.
> 
> blk_mq_run_hw_queue() can be called async, in order to do that, what I
> can think of is that grab 'q_usage_counte' before queuing 'run->work'
> and release it after. Which is very similar to this patch...

Hi,

What do you think about the following change:

diff --git a/block/blk-mq.c b/block/blk-mq.c
index cedc355218db..7d5370b5b5e1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1627,8 +1627,16 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
                 put_cpu();
         }

+       /*
+        * No need to queue work if there is no io, and this can avoid race
+        * with blk_cleanup_queue().
+        */
+       if (!percpu_ref_tryget(&hctx->queue->q_usage_counter))
+               return;
+
         kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
                                     msecs_to_jiffies(msecs));
+       percpu_ref_put(&hctx->queue->q_usage_counter);
  }
Ming Lei May 20, 2022, 8:34 a.m. UTC | #4
On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
> On 2022/05/20 14:23, yukuai (C) wrote:
> > On 2022/05/20 11:44, Ming Lei wrote:
> > > On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
> > > > Our test report a following crash:
> > > > 
> > > > BUG: kernel NULL pointer dereference, address: 0000000000000018
> > > > PGD 0 P4D 0
> > > > Oops: 0000 [#1] SMP NOPTI
> > > > CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
> > > > O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
> > > > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> > > > rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
> > > > Workqueue: kblockd blk_mq_run_work_fn
> > > > RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
> > > > RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
> > > > RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
> > > > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
> > > > RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
> > > > R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
> > > > R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
> > > > FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
> > > > knlGS:0000000000000000
> > > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
> > > > Call Trace:
> > > >   __blk_mq_do_dispatch_sched+0x2a7/0x2c0
> > > >   ? newidle_balance+0x23e/0x2f0
> > > >   __blk_mq_sched_dispatch_requests+0x13f/0x190
> > > >   blk_mq_sched_dispatch_requests+0x30/0x60
> > > >   __blk_mq_run_hw_queue+0x47/0xd0
> > > >   process_one_work+0x1b0/0x350
> > > >   worker_thread+0x49/0x300
> > > >   ? rescuer_thread+0x3a0/0x3a0
> > > >   kthread+0xfe/0x140
> > > >   ? kthread_park+0x90/0x90
> > > >   ret_from_fork+0x22/0x30
> > > > 
> > > > After digging from vmcore, I found that the queue is cleaned
> > > > up(blk_cleanup_queue() is done) and tag set is
> > > > freed(blk_mq_free_tag_set() is done).
> > > > 
> > > > There are two problems here:
> > > > 
> > > > 1) blk_mq_delay_run_hw_queues() will only be called from
> > > > __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
> > > > This seems impossible because blk_cleanup_queue() is done, and there
> > > > should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
> > > > more accurate") fix the problem in bfq. And currently ohter schedulers
> > > > don't have such problem.
> > > > 
> > > > 2) 'hctx->run_work' still exists after blk_cleanup_queue().
> > > > blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
> > > > all the 'run_work'. However, there is no guarantee that new 'run_work'
> > > > won't be queued after that(and before blk_mq_exit_queue() is done).
> > > 
> > > It is blk_mq_run_hw_queue() caller's responsibility to grab
> > > ->q_usage_counter for avoiding queue cleaned up, so please fix the user
> > > side.
> > > 
> > Hi,
> > 
> > Thanks for your advice.
> > 
> > blk_mq_run_hw_queue() can be called async, in order to do that, what I
> > can think of is that grab 'q_usage_counte' before queuing 'run->work'
> > and release it after. Which is very similar to this patch...
> 
> Hi,
> 
> How do you think about following change:
> 

I think the issue is in blk_mq_map_queue_type(), which may touch the tagset.

So please try the following patch:


diff --git a/block/blk-mq.c b/block/blk-mq.c
index ed1869a305c4..5789e971ac83 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
  */
 static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 {
-	struct blk_mq_hw_ctx *hctx;
-
+	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
 	/*
 	 * If the IO scheduler does not respect hardware queues when
 	 * dispatching, we just don't bother with multiple HW queues and
@@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
 	 * just causes lock contention inside the scheduler and pointless cache
 	 * bouncing.
 	 */
-	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
-				     raw_smp_processor_id());
+	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
+
 	if (!blk_mq_hctx_stopped(hctx))
 		return hctx;
 	return NULL;
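
For context, a rough sketch of why the change matters (paraphrased from
the blk-mq headers of that era, so treat the exact expressions as an
approximation):

	/*
	 * blk_mq_map_queue_type() goes through the tagset, which the
	 * driver may already have freed after blk_cleanup_queue():
	 */
	hctx = q->queue_hw_ctx[q->tag_set->map[type].mq_map[cpu]];

	/*
	 * blk_mq_map_queue() only uses the per-queue software-context
	 * mapping, with no tagset dereference:
	 */
	hctx = ctx->hctxs[type];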


Thanks,
Ming
Yu Kuai May 20, 2022, 8:49 a.m. UTC | #5
On 2022/05/20 16:34, Ming Lei wrote:
> On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
>> On 2022/05/20 14:23, yukuai (C) wrote:
>>> On 2022/05/20 11:44, Ming Lei wrote:
>>>> On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
>>>>> Our test report a following crash:
>>>>>
>>>>> BUG: kernel NULL pointer dereference, address: 0000000000000018
>>>>> PGD 0 P4D 0
>>>>> Oops: 0000 [#1] SMP NOPTI
>>>>> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
>>>>> O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
>>>>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
>>>>> rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
>>>>> Workqueue: kblockd blk_mq_run_work_fn
>>>>> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
>>>>> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
>>>>> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
>>>>> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
>>>>> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
>>>>> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
>>>>> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
>>>>> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
>>>>> knlGS:0000000000000000
>>>>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
>>>>> Call Trace:
>>>>>    __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>>>>>    ? newidle_balance+0x23e/0x2f0
>>>>>    __blk_mq_sched_dispatch_requests+0x13f/0x190
>>>>>    blk_mq_sched_dispatch_requests+0x30/0x60
>>>>>    __blk_mq_run_hw_queue+0x47/0xd0
>>>>>    process_one_work+0x1b0/0x350
>>>>>    worker_thread+0x49/0x300
>>>>>    ? rescuer_thread+0x3a0/0x3a0
>>>>>    kthread+0xfe/0x140
>>>>>    ? kthread_park+0x90/0x90
>>>>>    ret_from_fork+0x22/0x30
>>>>>
>>>>> After digging from vmcore, I found that the queue is cleaned
>>>>> up(blk_cleanup_queue() is done) and tag set is
>>>>> freed(blk_mq_free_tag_set() is done).
>>>>>
>>>>> There are two problems here:
>>>>>
>>>>> 1) blk_mq_delay_run_hw_queues() will only be called from
>>>>> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
>>>>> This seems impossible because blk_cleanup_queue() is done, and there
>>>>> should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
>>>>> more accurate") fix the problem in bfq. And currently ohter schedulers
>>>>> don't have such problem.
>>>>>
>>>>> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
>>>>> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
>>>>> all the 'run_work'. However, there is no guarantee that new 'run_work'
>>>>> won't be queued after that(and before blk_mq_exit_queue() is done).
>>>>
>>>> It is blk_mq_run_hw_queue() caller's responsibility to grab
>>>> ->q_usage_counter for avoiding queue cleaned up, so please fix the user
>>>> side.
>>>>
>>> Hi,
>>>
>>> Thanks for your advice.
>>>
>>> blk_mq_run_hw_queue() can be called async, in order to do that, what I
>>> can think of is that grab 'q_usage_counte' before queuing 'run->work'
>>> and release it after. Which is very similar to this patch...
>>
>> Hi,
>>
>> How do you think about following change:
>>
> 
> I think the issue is in blk_mq_map_queue_type() which may touch tagset.
> 
> So please try the following patch:
> 
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index ed1869a305c4..5789e971ac83 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
>    */
>   static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>   {
> -	struct blk_mq_hw_ctx *hctx;
> -
> +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
>   	/*
>   	 * If the IO scheduler does not respect hardware queues when
>   	 * dispatching, we just don't bother with multiple HW queues and
> @@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>   	 * just causes lock contention inside the scheduler and pointless cache
>   	 * bouncing.
>   	 */
> -	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
> -				     raw_smp_processor_id());
> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
> +
>   	if (!blk_mq_hctx_stopped(hctx))
>   		return hctx;
>   	return NULL;

Hi, Ming

This patch does make sense; however, it doesn't fix the root cause, it
just bypasses the problem like commit ddc25c86b466 ("block, bfq: make
bfq_has_work() more accurate"), which prevents
blk_mq_delay_run_hw_queues() from being called in such a case.

I do think we need to make sure 'run_work' doesn't exist after
blk_cleanup_queue().

Thanks,
Kuai
Ming Lei May 20, 2022, 9:53 a.m. UTC | #6
On Fri, May 20, 2022 at 04:49:19PM +0800, yukuai (C) wrote:
> On 2022/05/20 16:34, Ming Lei wrote:
> > On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
> > > On 2022/05/20 14:23, yukuai (C) wrote:
> > > > On 2022/05/20 11:44, Ming Lei wrote:
> > > > > On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
> > > > > > Our test report a following crash:
> > > > > > 
> > > > > > BUG: kernel NULL pointer dereference, address: 0000000000000018
> > > > > > PGD 0 P4D 0
> > > > > > Oops: 0000 [#1] SMP NOPTI
> > > > > > CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
> > > > > > O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
> > > > > > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> > > > > > rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
> > > > > > Workqueue: kblockd blk_mq_run_work_fn
> > > > > > RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
> > > > > > RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
> > > > > > RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
> > > > > > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
> > > > > > RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
> > > > > > R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
> > > > > > R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
> > > > > > FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
> > > > > > knlGS:0000000000000000
> > > > > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > > > CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
> > > > > > Call Trace:
> > > > > >    __blk_mq_do_dispatch_sched+0x2a7/0x2c0
> > > > > >    ? newidle_balance+0x23e/0x2f0
> > > > > >    __blk_mq_sched_dispatch_requests+0x13f/0x190
> > > > > >    blk_mq_sched_dispatch_requests+0x30/0x60
> > > > > >    __blk_mq_run_hw_queue+0x47/0xd0
> > > > > >    process_one_work+0x1b0/0x350
> > > > > >    worker_thread+0x49/0x300
> > > > > >    ? rescuer_thread+0x3a0/0x3a0
> > > > > >    kthread+0xfe/0x140
> > > > > >    ? kthread_park+0x90/0x90
> > > > > >    ret_from_fork+0x22/0x30
> > > > > > 
> > > > > > After digging from vmcore, I found that the queue is cleaned
> > > > > > up(blk_cleanup_queue() is done) and tag set is
> > > > > > freed(blk_mq_free_tag_set() is done).
> > > > > > 
> > > > > > There are two problems here:
> > > > > > 
> > > > > > 1) blk_mq_delay_run_hw_queues() will only be called from
> > > > > > __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
> > > > > > This seems impossible because blk_cleanup_queue() is done, and there
> > > > > > should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
> > > > > > more accurate") fix the problem in bfq. And currently ohter schedulers
> > > > > > don't have such problem.
> > > > > > 
> > > > > > 2) 'hctx->run_work' still exists after blk_cleanup_queue().
> > > > > > blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
> > > > > > all the 'run_work'. However, there is no guarantee that new 'run_work'
> > > > > > won't be queued after that(and before blk_mq_exit_queue() is done).
> > > > > 
> > > > > It is blk_mq_run_hw_queue() caller's responsibility to grab
> > > > > ->q_usage_counter for avoiding queue cleaned up, so please fix the user
> > > > > side.
> > > > > 
> > > > Hi,
> > > > 
> > > > Thanks for your advice.
> > > > 
> > > > blk_mq_run_hw_queue() can be called async, in order to do that, what I
> > > > can think of is that grab 'q_usage_counte' before queuing 'run->work'
> > > > and release it after. Which is very similar to this patch...
> > > 
> > > Hi,
> > > 
> > > How do you think about following change:
> > > 
> > 
> > I think the issue is in blk_mq_map_queue_type() which may touch tagset.
> > 
> > So please try the following patch:
> > 
> > 
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index ed1869a305c4..5789e971ac83 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
> >    */
> >   static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
> >   {
> > -	struct blk_mq_hw_ctx *hctx;
> > -
> > +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> >   	/*
> >   	 * If the IO scheduler does not respect hardware queues when
> >   	 * dispatching, we just don't bother with multiple HW queues and
> > @@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
> >   	 * just causes lock contention inside the scheduler and pointless cache
> >   	 * bouncing.
> >   	 */
> > -	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
> > -				     raw_smp_processor_id());
> > +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
> > +
> >   	if (!blk_mq_hctx_stopped(hctx))
> >   		return hctx;
> >   	return NULL;
> 
> Hi, Ming
> 
> This patch do make sense, however, this doesn't fix the root cause, it

Isn't the root cause that the tagset is referenced after blk_cleanup_queue
returns?

> just bypass the problem like commit ddc25c86b466 ("block, bfq: make
> bfq_has_work() more accurate"), which will prevent
> blk_mq_delay_run_hw_queues() to be called in such case.

How can?

> 
> I do think we need to make sure 'run_work' doesn't exist after
> blk_cleanup_queue().

Both the hctx and the request queue are fine to reference after blk_cleanup_queue
returns; what can't be referenced is the tagset.


Thanks,
Ming
Yu Kuai May 20, 2022, 10:56 a.m. UTC | #7
On 2022/05/20 17:53, Ming Lei wrote:
> On Fri, May 20, 2022 at 04:49:19PM +0800, yukuai (C) wrote:
>> On 2022/05/20 16:34, Ming Lei wrote:
>>> On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
>>>> On 2022/05/20 14:23, yukuai (C) wrote:
>>>>> On 2022/05/20 11:44, Ming Lei wrote:
>>>>>> On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
>>>>>>> Our test report a following crash:
>>>>>>>
>>>>>>> BUG: kernel NULL pointer dereference, address: 0000000000000018
>>>>>>> PGD 0 P4D 0
>>>>>>> Oops: 0000 [#1] SMP NOPTI
>>>>>>> CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
>>>>>>> O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
>>>>>>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
>>>>>>> rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
>>>>>>> Workqueue: kblockd blk_mq_run_work_fn
>>>>>>> RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
>>>>>>> RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
>>>>>>> RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
>>>>>>> RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
>>>>>>> RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
>>>>>>> R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
>>>>>>> R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
>>>>>>> FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
>>>>>>> knlGS:0000000000000000
>>>>>>> CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
>>>>>>> CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
>>>>>>> Call Trace:
>>>>>>>     __blk_mq_do_dispatch_sched+0x2a7/0x2c0
>>>>>>>     ? newidle_balance+0x23e/0x2f0
>>>>>>>     __blk_mq_sched_dispatch_requests+0x13f/0x190
>>>>>>>     blk_mq_sched_dispatch_requests+0x30/0x60
>>>>>>>     __blk_mq_run_hw_queue+0x47/0xd0
>>>>>>>     process_one_work+0x1b0/0x350
>>>>>>>     worker_thread+0x49/0x300
>>>>>>>     ? rescuer_thread+0x3a0/0x3a0
>>>>>>>     kthread+0xfe/0x140
>>>>>>>     ? kthread_park+0x90/0x90
>>>>>>>     ret_from_fork+0x22/0x30
>>>>>>>
>>>>>>> After digging from vmcore, I found that the queue is cleaned
>>>>>>> up(blk_cleanup_queue() is done) and tag set is
>>>>>>> freed(blk_mq_free_tag_set() is done).
>>>>>>>
>>>>>>> There are two problems here:
>>>>>>>
>>>>>>> 1) blk_mq_delay_run_hw_queues() will only be called from
>>>>>>> __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
>>>>>>> This seems impossible because blk_cleanup_queue() is done, and there
>>>>>>> should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
>>>>>>> more accurate") fix the problem in bfq. And currently ohter schedulers
>>>>>>> don't have such problem.
>>>>>>>
>>>>>>> 2) 'hctx->run_work' still exists after blk_cleanup_queue().
>>>>>>> blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
>>>>>>> all the 'run_work'. However, there is no guarantee that new 'run_work'
>>>>>>> won't be queued after that(and before blk_mq_exit_queue() is done).
>>>>>>
>>>>>> It is blk_mq_run_hw_queue() caller's responsibility to grab
>>>>>> ->q_usage_counter for avoiding queue cleaned up, so please fix the user
>>>>>> side.
>>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for your advice.
>>>>>
>>>>> blk_mq_run_hw_queue() can be called async, in order to do that, what I
>>>>> can think of is that grab 'q_usage_counte' before queuing 'run->work'
>>>>> and release it after. Which is very similar to this patch...
>>>>
>>>> Hi,
>>>>
>>>> How do you think about following change:
>>>>
>>>
>>> I think the issue is in blk_mq_map_queue_type() which may touch tagset.
>>>
>>> So please try the following patch:
>>>
>>>
>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>> index ed1869a305c4..5789e971ac83 100644
>>> --- a/block/blk-mq.c
>>> +++ b/block/blk-mq.c
>>> @@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
>>>     */
>>>    static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>>>    {
>>> -	struct blk_mq_hw_ctx *hctx;
>>> -
>>> +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
>>>    	/*
>>>    	 * If the IO scheduler does not respect hardware queues when
>>>    	 * dispatching, we just don't bother with multiple HW queues and
>>> @@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
>>>    	 * just causes lock contention inside the scheduler and pointless cache
>>>    	 * bouncing.
>>>    	 */
>>> -	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
>>> -				     raw_smp_processor_id());
>>> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
>>> +
>>>    	if (!blk_mq_hctx_stopped(hctx))
>>>    		return hctx;
>>>    	return NULL;
>>
>> Hi, Ming
>>
>> This patch do make sense, however, this doesn't fix the root cause, it
> 
> Isn't the root cause that tagset is referred after blk_cleanup_queue
> returns?

No, it's not the root cause. If we can make sure 'hctx->run_work' won't
exist after blk_cleanup_queue(), such a problem won't be triggered.

Actually, blk_cleanup_queue() already calls blk_mq_cancel_work_sync() to
do that; however, new 'hctx->run_work' can be queued after that.
> 
>> just bypass the problem like commit ddc25c86b466 ("block, bfq: make
>> bfq_has_work() more accurate"), which will prevent
>> blk_mq_delay_run_hw_queues() to be called in such case.
> 
> How can?
See the call trace:

__blk_mq_do_dispatch_sched+0x2a7/0x2c0
? newidle_balance+0x23e/0x2f0
__blk_mq_sched_dispatch_requests+0x13f/0x190
blk_mq_sched_dispatch_requests+0x30/0x60
__blk_mq_run_hw_queue+0x47/0xd0
process_one_work+0x1b0/0x350 -> hctx->run_work

Details of how blk_mq_delay_run_hw_queues() is called:
__blk_mq_do_dispatch_sched
  if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
   break -> has_work has to return true.

  rq = e->type->ops.dispatch_request(hctx);
  if (!rq)
   run_queue = true
   break; -> dispatch has to fail

  if (run_queue)
   blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);

Thus if 'has_work' is accurate, blk_mq_delay_run_hw_queues() won't be
called if there is no IO.
> 
>>
>> I do think we need to make sure 'run_work' doesn't exist after
>> blk_cleanup_queue().
> 
> Both hctx and request queue are fine to be referred after blk_cleanup_queue
> returns, what can't be referred is tagset.

I agree with that, however, I think we still need to reach an agreement
about root cause of this problem...

Thanks,
Kuai
Ming Lei May 20, 2022, 11:39 a.m. UTC | #8
On Fri, May 20, 2022 at 06:56:22PM +0800, yukuai (C) wrote:
> On 2022/05/20 17:53, Ming Lei wrote:
> > On Fri, May 20, 2022 at 04:49:19PM +0800, yukuai (C) wrote:
> > > On 2022/05/20 16:34, Ming Lei wrote:
> > > > On Fri, May 20, 2022 at 03:02:13PM +0800, yukuai (C) wrote:
> > > > > On 2022/05/20 14:23, yukuai (C) wrote:
> > > > > > On 2022/05/20 11:44, Ming Lei wrote:
> > > > > > > On Fri, May 20, 2022 at 11:25:42AM +0800, Yu Kuai wrote:
> > > > > > > > Our test report a following crash:
> > > > > > > > 
> > > > > > > > BUG: kernel NULL pointer dereference, address: 0000000000000018
> > > > > > > > PGD 0 P4D 0
> > > > > > > > Oops: 0000 [#1] SMP NOPTI
> > > > > > > > CPU: 6 PID: 265 Comm: kworker/6:1H Kdump: loaded Tainted: G
> > > > > > > > O      5.10.0-60.17.0.h43.eulerosv2r11.x86_64 #1
> > > > > > > > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> > > > > > > > rel-1.12.1-0-ga5cab58-20220320_160524-szxrtosci10000 04/01/2014
> > > > > > > > Workqueue: kblockd blk_mq_run_work_fn
> > > > > > > > RIP: 0010:blk_mq_delay_run_hw_queues+0xb6/0xe0
> > > > > > > > RSP: 0018:ffffacc6803d3d88 EFLAGS: 00010246
> > > > > > > > RAX: 0000000000000006 RBX: ffff99e2c3d25008 RCX: 00000000ffffffff
> > > > > > > > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff99e2c911ae18
> > > > > > > > RBP: ffffacc6803d3dd8 R08: 0000000000000000 R09: ffff99e2c0901f6c
> > > > > > > > R10: 0000000000000018 R11: 0000000000000018 R12: ffff99e2c911ae18
> > > > > > > > R13: 0000000000000000 R14: 0000000000000003 R15: ffff99e2c911ae18
> > > > > > > > FS:  0000000000000000(0000) GS:ffff99e6bbf00000(0000)
> > > > > > > > knlGS:0000000000000000
> > > > > > > > CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > > > > > > > CR2: 0000000000000018 CR3: 000000007460a006 CR4: 00000000003706e0
> > > > > > > > Call Trace:
> > > > > > > >     __blk_mq_do_dispatch_sched+0x2a7/0x2c0
> > > > > > > >     ? newidle_balance+0x23e/0x2f0
> > > > > > > >     __blk_mq_sched_dispatch_requests+0x13f/0x190
> > > > > > > >     blk_mq_sched_dispatch_requests+0x30/0x60
> > > > > > > >     __blk_mq_run_hw_queue+0x47/0xd0
> > > > > > > >     process_one_work+0x1b0/0x350
> > > > > > > >     worker_thread+0x49/0x300
> > > > > > > >     ? rescuer_thread+0x3a0/0x3a0
> > > > > > > >     kthread+0xfe/0x140
> > > > > > > >     ? kthread_park+0x90/0x90
> > > > > > > >     ret_from_fork+0x22/0x30
> > > > > > > > 
> > > > > > > > After digging from vmcore, I found that the queue is cleaned
> > > > > > > > up(blk_cleanup_queue() is done) and tag set is
> > > > > > > > freed(blk_mq_free_tag_set() is done).
> > > > > > > > 
> > > > > > > > There are two problems here:
> > > > > > > > 
> > > > > > > > 1) blk_mq_delay_run_hw_queues() will only be called from
> > > > > > > > __blk_mq_do_dispatch_sched() if e->type->ops.has_work() return true.
> > > > > > > > This seems impossible because blk_cleanup_queue() is done, and there
> > > > > > > > should be no io. Commit ddc25c86b466 ("block, bfq: make bfq_has_work()
> > > > > > > > more accurate") fix the problem in bfq. And currently ohter schedulers
> > > > > > > > don't have such problem.
> > > > > > > > 
> > > > > > > > 2) 'hctx->run_work' still exists after blk_cleanup_queue().
> > > > > > > > blk_mq_cancel_work_sync() is called from blk_cleanup_queue() to cancel
> > > > > > > > all the 'run_work'. However, there is no guarantee that new 'run_work'
> > > > > > > > won't be queued after that(and before blk_mq_exit_queue() is done).
> > > > > > > 
> > > > > > > It is blk_mq_run_hw_queue() caller's responsibility to grab
> > > > > > > ->q_usage_counter for avoiding queue cleaned up, so please fix the user
> > > > > > > side.
> > > > > > > 
> > > > > > Hi,
> > > > > > 
> > > > > > Thanks for your advice.
> > > > > > 
> > > > > > blk_mq_run_hw_queue() can be called async, in order to do that, what I
> > > > > > can think of is that grab 'q_usage_counte' before queuing 'run->work'
> > > > > > and release it after. Which is very similar to this patch...
> > > > > 
> > > > > Hi,
> > > > > 
> > > > > How do you think about following change:
> > > > > 
> > > > 
> > > > I think the issue is in blk_mq_map_queue_type() which may touch tagset.
> > > > 
> > > > So please try the following patch:
> > > > 
> > > > 
> > > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > > index ed1869a305c4..5789e971ac83 100644
> > > > --- a/block/blk-mq.c
> > > > +++ b/block/blk-mq.c
> > > > @@ -2174,8 +2174,7 @@ static bool blk_mq_has_sqsched(struct request_queue *q)
> > > >     */
> > > >    static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
> > > >    {
> > > > -	struct blk_mq_hw_ctx *hctx;
> > > > -
> > > > +	struct blk_mq_ctx *ctx = blk_mq_get_ctx(q);
> > > >    	/*
> > > >    	 * If the IO scheduler does not respect hardware queues when
> > > >    	 * dispatching, we just don't bother with multiple HW queues and
> > > > @@ -2183,8 +2182,8 @@ static struct blk_mq_hw_ctx *blk_mq_get_sq_hctx(struct request_queue *q)
> > > >    	 * just causes lock contention inside the scheduler and pointless cache
> > > >    	 * bouncing.
> > > >    	 */
> > > > -	hctx = blk_mq_map_queue_type(q, HCTX_TYPE_DEFAULT,
> > > > -				     raw_smp_processor_id());
> > > > +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(q, 0, ctx);
> > > > +
> > > >    	if (!blk_mq_hctx_stopped(hctx))
> > > >    		return hctx;
> > > >    	return NULL;
> > > 
> > > Hi, Ming
> > > 
> > > This patch do make sense, however, this doesn't fix the root cause, it
> > 
> > Isn't the root cause that tagset is referred after blk_cleanup_queue
> > returns?
> 
> No, it's not the root cause. If we can make sure 'hctx->run_work' won't

Really, then how is the panic triggered?

> exist after blk_cleanup_queue(), such problem won't be triggered.

You can't drain the run queue simply without calling synchronize_rcu(), but
that is really what we want to avoid.

What if one inserted request is completed just before the queue is run?

Such as:

blk_mq_submit_bio
	blk_mq_sched_insert_request	//immediately done after inserted to queue
							//blk_cleanup_queue returns if .q_usage_counter
							//is in atomic mode
		blk_mq_run_hw_queue	//still run queue

> 
> Actually, blk_cleaup_queue() already call blk_mq_cancel_work_sync() to
> do that, however, new 'hctx->run_work' can be queued after that.

We know that running the hw queue can be in progress during blk_cleanup_queue();
that is fine, since we do not want to quiesce the queue, which would slow
down teardown a lot.

> > 
> > > just bypass the problem like commit ddc25c86b466 ("block, bfq: make
> > > bfq_has_work() more accurate"), which will prevent
> > > blk_mq_delay_run_hw_queues() to be called in such case.
> > 
> > How can?
> See the call trace:
> 
> __blk_mq_do_dispatch_sched+0x2a7/0x2c0
> ? newidle_balance+0x23e/0x2f0
> __blk_mq_sched_dispatch_requests+0x13f/0x190
> blk_mq_sched_dispatch_requests+0x30/0x60
> __blk_mq_run_hw_queue+0x47/0xd0
> process_one_work+0x1b0/0x350 -> hctx->run_work
> 
> details how blk_mq_delay_run_hw_queues() is called:
> __blk_mq_do_dispatch_sched
>  if (e->type->ops.has_work && !e->type->ops.has_work(hctx))
>   break -> has_work has to return true.
> 
>  rq = e->type->ops.dispatch_request(hctx);
>  if (!rq)
>   run_queue = true
>   break; -> dispatch has to failed
> 
>  if (run_queue)
>   blk_mq_delay_run_hw_queues(q, BLK_MQ_BUDGET_DELAY);
> 
> Thus if 'has_work' is accurate, blk_mq_delay_run_hw_queues() won't be
> called if there is no io.

After queue freezing is done, no request is left in the queue, but the dispatch/run
queue activity may not be done. That is a known fact.

> > 
> > > 
> > > I do think we need to make sure 'run_work' doesn't exist after
> > > blk_cleanup_queue().
> > 
> > Both hctx and request queue are fine to be referred after blk_cleanup_queue
> > returns, what can't be referred is tagset.
> 
> I agree with that, however, I think we still need to reach an agreement
> about root cause of this problem...

In short:

1) run queue can be in-progress during cleanup queue, or returns from
cleanup queue; we drain it in both blk_cleanup_queue() and
disk_release_mq(), see commit 2a19b28f7929 ("blk-mq: cancel blk-mq dispatch
work in both blk_cleanup_queue and disk_release()")

2) tagset can't be touched after blk_cleanup_queue returns because
tagset lifetime is covered by driver, which is often released after
blk_cleanup_queue() returns.


Thanks,
Ming
Yu Kuai May 20, 2022, 12:01 p.m. UTC | #9
On 2022/05/20 19:39, Ming Lei wrote:

> 
> In short:
> 
> 1) run queue can be in-progress during cleanup queue, or returns from
> cleanup queue; we drain it in both blk_cleanup_queue() and
> disk_release_mq(), see commit 2a19b28f7929 ("blk-mq: cancel blk-mq dispatch
> work in both blk_cleanup_queue and disk_release()")
I understand that; however, there is no guarantee that a new 'hctx->run_work'
won't be queued after the drain. For this crash, I think this is how
it was triggered:

assume that there is no IO, while some bfq_queue is still busy:

blk_cleanup_queue
  blk_freeze_queue
  blk_mq_cancel_work_sync
  cancel_delayed_work_sync(hctx1)
				blk_mq_run_work_fn -> hctx2
				 __blk_mq_run_hw_queue
				  blk_mq_sched_dispatch_requests
				   __blk_mq_do_dispatch_sched
				    blk_mq_delay_run_hw_queues
				     blk_mq_delay_run_hw_queue
				      -> add hctx1->run_work again
  cancel_delayed_work_sync(hctx2)
> 
> 2) tagset can't be touched after blk_cleanup_queue returns because
> tagset lifetime is covered by driver, which is often released after
> blk_cleanup_queue() returns.
> 
> 
> Thanks,
> Ming
> 
> .
>
Ming Lei May 20, 2022, 1:56 p.m. UTC | #10
On Fri, May 20, 2022 at 08:01:31PM +0800, yukuai (C) wrote:
> On 2022/05/20 19:39, Ming Lei wrote:
> 
> > 
> > In short:
> > 
> > 1) run queue can be in-progress during cleanup queue, or returns from
> > cleanup queue; we drain it in both blk_cleanup_queue() and
> > disk_release_mq(), see commit 2a19b28f7929 ("blk-mq: cancel blk-mq dispatch
> > work in both blk_cleanup_queue and disk_release()")
> I understand that, however, there is no garantee new 'hctx->run_work'
> won't be queued after 'drain it', for this crash, I think this is how

No, run queue activity will be shut down after both disk_release_mq()
and blk_cleanup_queue() are done.

disk_release_mq() is called after all FS IOs are done, so there isn't
any run-queue activity from the FS IO code path, either sync or async.

In blk_cleanup_queue(), we only focus on passthrough requests, and a
passthrough request is always explicitly allocated & freed by
its caller, so once the queue is frozen, all sync dispatch activity
for passthrough requests has been done; then it is enough to just cancel
the dispatch work to avoid any dispatch activity.

That is why both request queue and hctx can be released safely
after the two are done.
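
As a reference for these lifetime rules, a typical driver teardown order
looks roughly like the sketch below (illustrative; real drivers differ):

	/*
	 * The tagset is owned by the driver and outlives the queue:
	 * nothing may touch it once blk_mq_free_tag_set() has run.
	 */
	blk_cleanup_queue(q);		/* freeze queue, cancel hctx->run_work */
	blk_mq_free_tag_set(&set);	/* only safe after the above returns */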

> it triggered:
> 
> assum that there is no io, while some bfq_queue is still busy:
> 
> blk_cleanup_queue
>  blk_freeze_queue
>  blk_mq_cancel_work_sync
>  cancel_delayed_work_sync(hctx1)
> 				blk_mq_run_work_fn -> hctx2
> 				 __blk_mq_run_hw_queue
> 				  blk_mq_sched_dispatch_requests
> 				   __blk_mq_do_dispatch_sched
> 				    blk_mq_delay_run_hw_queues
> 				     blk_mq_delay_run_hw_queue
> 				      -> add hctx1->run_work again
>  cancel_delayed_work_sync(hctx2)

Yes, even blk_mq_delay_run_hw_queues() can be called after all
hctx->run_work are canceled, since __blk_mq_run_hw_queue() could be
running in the sync IO code path, not via ->run_work.

And my patch will fix the issue, won't it?


Thanks,
Ming
Yu Kuai May 21, 2022, 3:33 a.m. UTC | #11
On 2022/05/20 21:56, Ming Lei wrote:
> On Fri, May 20, 2022 at 08:01:31PM +0800, yukuai (C) wrote:
>> On 2022/05/20 19:39, Ming Lei wrote:
>>
>>>
>>> In short:
>>>
>>> 1) run queue can be in-progress during cleanup queue, or returns from
>>> cleanup queue; we drain it in both blk_cleanup_queue() and
>>> disk_release_mq(), see commit 2a19b28f7929 ("blk-mq: cancel blk-mq dispatch
>>> work in both blk_cleanup_queue and disk_release()")
>> I understand that, however, there is no garantee new 'hctx->run_work'
>> won't be queued after 'drain it', for this crash, I think this is how
> 
> No, run queue activity will be shutdown after both disk_release_mq()
> and blk_cleanup_queue() are done.
> 
> disk_release_mq() is called after all FS IOs are done, so there isn't
> any run queue from FS IO code path, either sync or async.
> 
> In blk_cleanup_queue(), we only focus on passthrough request, and
> passthrough request is always explicitly allocated & freed by
> its caller, so once queue is frozen, all sync dispatch activity
> for passthrough request has been done, then it is enough to just cancel
> dispatch work for avoiding any dispatch activity.
> 
Hi, Ming

Thanks for your explanation, it really helps me understand the code
better.

In our test kernel, elevator_exit() is not called from
disk_release_mq(); that is the reason I thought differently about the
root cause...

> That is why both request queue and hctx can be released safely
> after the two are done.
> 
>> it triggered:
>>
>> assum that there is no io, while some bfq_queue is still busy:
>>
>> blk_cleanup_queue
>>   blk_freeze_queue
>>   blk_mq_cancel_work_sync
>>   cancel_delayed_work_sync(hctx1)
>> 				blk_mq_run_work_fn -> hctx2
>> 				 __blk_mq_run_hw_queue
>> 				  blk_mq_sched_dispatch_requests
>> 				   __blk_mq_do_dispatch_sched
>> 				    blk_mq_delay_run_hw_queues
>> 				     blk_mq_delay_run_hw_queue
>> 				      -> add hctx1->run_work again
>>   cancel_delayed_work_sync(hctx2)
> 
> Yes, even blk_mq_delay_run_hw_queues() can be called after all
> hctx->run_work are canceled since __blk_mq_run_hw_queue() could be
> running in sync io code path, not via ->run_work.
> 
> And my patch will fix the issue, won't it?

Yes, like I said before, your patch does make sense. It seems like
commit 28ce942fa2d5 ("block: move blk_exit_queue into disk_release")
is the real fix for the crash in our test.

Thanks,
Kuai
> 
> 
> Thanks,
> Ming
> 
> .
>

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index 80fa73c419a9..f3e36d8143ec 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -314,7 +314,10 @@  void blk_cleanup_queue(struct request_queue *q)
 	 */
 	blk_freeze_queue(q);
 
+	/* New 'hctx->run_work' can't be queued after setting the dead flag */
+	spin_lock_irq(&q->queue_lock);
 	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
+	spin_unlock_irq(&q->queue_lock);
 
 	blk_sync_queue(q);
 	if (queue_is_mq(q)) {
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ed1869a305c4..fb35d335d554 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2093,6 +2093,8 @@  static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
 static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 					unsigned long msecs)
 {
+	unsigned long flags;
+
 	if (unlikely(blk_mq_hctx_stopped(hctx)))
 		return;
 
@@ -2107,8 +2109,12 @@  static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
 		put_cpu();
 	}
 
-	kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
-				    msecs_to_jiffies(msecs));
+	spin_lock_irqsave(&hctx->queue->queue_lock, flags);
+	if (!blk_queue_dead(hctx->queue))
+		kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx),
+					    &hctx->run_work,
+					    msecs_to_jiffies(msecs));
+	spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
 }
 
 /**