
[V2] block: Fix use-after-free issue while accessing ioscheduler lock

Message ID 1600161062-43793-1-git-send-email-ppvk@codeaurora.org (mailing list archive)
State New, archived
Series [V2] block: Fix use-after-free issue while accessing ioscheduler lock

Commit Message

Pradeep P V K Sept. 15, 2020, 9:11 a.m. UTC
Observed the below crash while accessing the (use-after-free) lock member
of bfq_data.

context#1			context#2
				process_one_work()
kthread()			blk_mq_run_work_fn()
worker_thread()			 ->__blk_mq_run_hw_queue()
process_one_work()		  ->blk_mq_sched_dispatch_requests()
__blk_release_queue()		    ->blk_mq_do_dispatch_sched()
->__elevator_exit()
  ->blk_mq_exit_sched()
    ->exit_sched()
      ->kfree()
      					->bfq_dispatch_request()
					  ->spin_unlock_irq(&bfqd->lock)

This happens because the kblockd delayed work might get scheduled
around blk_release_queue() and access the already-freed members of
bfq_data.

240.212359:   <2> Unable to handle kernel paging request at
virtual address ffffffee2e33ad70
...
240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
...
Call trace:
240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
240.212963:   <2>  process_one_work+0x280/0x460
240.212973:   <2>  worker_thread+0x27c/0x4dc
240.212986:   <2>  kthread+0x160/0x170

Fix this by cancelling the delayed work, if any, before the elevator exits.

Changes since V1:
- Moved the logic into blk_cleanup_queue() as per Ming's comments.

Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
---
 block/blk-mq.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Ming Lei Sept. 15, 2020, 10:09 a.m. UTC | #1
On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
> Observes below crash while accessing (use-after-free) lock member
> of bfq data.
> 
> context#1			context#2
> 				process_one_work()
> kthread()			blk_mq_run_work_fn()
> worker_thread()			 ->__blk_mq_run_hw_queue()
> process_one_work()		  ->blk_mq_sched_dispatch_requests()
> __blk_release_queue()		    ->blk_mq_do_dispatch_sched()

Just found that __blk_release_queue was killed in the v5.9 cycle.

> ->__elevator_exit()
>   ->blk_mq_exit_sched()
>     ->exit_sched()
>       ->kfree()
>       					->bfq_dispatch_request()
> 					  ->spin_unlock_irq(&bfqd->lock)

Actually I am not sure whether the above race is easy to trigger on recent
kernels, because we do call cancel_delayed_work_sync() in
blk_mq_hw_sysfs_release(), which is usually called before __elevator_exit()
from blk_exit_queue()/blk_release_queue().
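
For reference, that release path looks roughly like the following on recent
kernels (sketch only; exact details differ between versions):

static void blk_mq_hw_sysfs_release(struct kobject *kobj)
{
	struct blk_mq_hw_ctx *hctx = container_of(kobj, struct blk_mq_hw_ctx,
						   kobj);

	/* any pending run_work is cancelled before the hctx memory is freed */
	cancel_delayed_work_sync(&hctx->run_work);

	/* (srcu / flush queue / ctx_map teardown elided) */
	free_cpumask_var(hctx->cpumask);
	kfree(hctx->ctxs);
	kfree(hctx);
}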

So can you share your kernel version in which the issue is reproduced?
And can you reproduce this issue on v5.8 or v5.9-rc5?

> 
> This is because of the kblockd delayed work that might got scheduled
> around blk_release_queue() and accessed use-after-free member of
> bfq_data.
> 
> 240.212359:   <2> Unable to handle kernel paging request at
> virtual address ffffffee2e33ad70
> ...
> 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
> 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
> 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
> 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
> ...
> Call trace:
> 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
> 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
> 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
> 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
> 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
> 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
> 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
> 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
> 240.212963:   <2>  process_one_work+0x280/0x460
> 240.212973:   <2>  worker_thread+0x27c/0x4dc
> 240.212986:   <2>  kthread+0x160/0x170
> 
> Fix this by cancelling the delayed work if any before elevator exits.
> 
> Changes since V1:
> - Moved the logic into blk_cleanup_queue() as per Ming comments.
> 
> Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
> ---
>  block/blk-mq.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 4abb714..890fded 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
>  			break;
>  		blk_mq_debugfs_unregister_hctx(hctx);
>  		blk_mq_exit_hctx(q, set, hctx, i);
> +		cancel_delayed_work_sync(&hctx->run_work);
>  	}
>  }

It would be better to move cancel_delayed_work_sync() into
blk_mq_exit_hctx(), right before adding the hctx to the unused list.
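
Something like this (untested sketch; the surrounding context in
blk_mq_exit_hctx() may differ a bit between kernel versions):

 	blk_mq_remove_cpuhp(hctx);
 
+	/* make sure a stale run_work cannot run against the exited hctx */
+	cancel_delayed_work_sync(&hctx->run_work);
+
 	spin_lock(&q->unused_hctx_lock);
 	list_add(&hctx->hctx_list, &q->unused_hctx_list);
 	spin_unlock(&q->unused_hctx_lock);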


Thanks,
Ming
Pradeep P V K Sept. 15, 2020, 12:20 p.m. UTC | #2
On 2020-09-15 15:39, Ming Lei wrote:
> On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
>> Observes below crash while accessing (use-after-free) lock member
>> of bfq data.
>> 
>> context#1			context#2
>> 				process_one_work()
>> kthread()			blk_mq_run_work_fn()
>> worker_thread()			 ->__blk_mq_run_hw_queue()
>> process_one_work()		  ->blk_mq_sched_dispatch_requests()
>> __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
> 
> Just found __blk_release_queue killed in v5.9 cycle.
> 
Yes, on v5.9 blk_release_queue() is called directly via q->kobj when the
request_queue refcount drops to zero, whereas on older kernel versions
(< 5.9) blk_release_queue() schedules a work item to call
__blk_release_queue().
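
On v5.4, for example, the release path is roughly (sketch, not exact code):

static void blk_release_queue(struct kobject *kobj)
{
	struct request_queue *q =
		container_of(kobj, struct request_queue, kobj);

	/* the real teardown runs later from a work item on the system wq */
	INIT_WORK(&q->release_work, __blk_release_queue);
	schedule_work(&q->release_work);
}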

>> ->__elevator_exit()
>>   ->blk_mq_exit_sched()
>>     ->exit_sched()
>>       ->kfree()
>>       					->bfq_dispatch_request()
>> 					  ->spin_unlock_irq(&bfqd->lock)
> 
> Actually not sure if the above race is easy to trigger in recent 
> kernel,
> because we do call cancel_delayed_work_sync() in 
> blk_mq_hw_sysfs_release(),
> which is usually called before __elevator_exit() from
> blk_exit_queue()/blk_release_queue().
> 
blk_mq_hw_sysfs_release() is called from blk_mq_release(), i.e. via
kobject_put() on hctx->kobj, which happens after __elevator_exit().

__elevator_exit() is called from blk_exit_queue(), which runs prior to
blk_mq_release().
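
That ordering can be seen in the v5.4 queue release path, roughly (sketch,
details elided):

static void __blk_release_queue(struct work_struct *work)
{
	struct request_queue *q = container_of(work, typeof(*q), release_work);

	/* (poll stats / requeue_work teardown elided) */

	/* frees the elevator data (e.g. bfqd) via __elevator_exit() */
	blk_exit_queue(q);

	if (queue_is_mq(q))
		/* only now does kobject_put() on the hctx kobjects lead to
		 * blk_mq_hw_sysfs_release()
		 */
		blk_mq_release(q);

	/* (blktrace shutdown, final free via RCU, etc.) */
}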

> So can you share your kernel version in which the issue is reproduced?
> And can you reproduce this issue on v5.8 or v5.9-rc5?
> 
This issue is seen on v5.4 stable and it is very easy to reproduce on v5.4.
Sorry, I don't have a setup with v5.8 or the latest kernel, but I can help
get it tested on v5.4. From the issue perspective, both the v5.4 kernel and
the latest kernels call blk_mq_release() -> blk_mq_hw_sysfs_release() after
__elevator_exit(), so I think it won't matter much here.

>> 
>> This is because of the kblockd delayed work that might got scheduled
>> around blk_release_queue() and accessed use-after-free member of
>> bfq_data.
>> 
>> 240.212359:   <2> Unable to handle kernel paging request at
>> virtual address ffffffee2e33ad70
>> ...
>> 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
>> 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
>> 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
>> 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
>> ...
>> Call trace:
>> 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
>> 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
>> 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
>> 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
>> 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
>> 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
>> 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
>> 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
>> 240.212963:   <2>  process_one_work+0x280/0x460
>> 240.212973:   <2>  worker_thread+0x27c/0x4dc
>> 240.212986:   <2>  kthread+0x160/0x170
>> 
>> Fix this by cancelling the delayed work if any before elevator exits.
>> 
>> Changes since V1:
>> - Moved the logic into blk_cleanup_queue() as per Ming comments.
>> 
>> Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
>> ---
>>  block/blk-mq.c | 1 +
>>  1 file changed, 1 insertion(+)
>> 
>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>> index 4abb714..890fded 100644
>> --- a/block/blk-mq.c
>> +++ b/block/blk-mq.c
>> @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct 
>> request_queue *q,
>>  			break;
>>  		blk_mq_debugfs_unregister_hctx(hctx);
>>  		blk_mq_exit_hctx(q, set, hctx, i);
>> +		cancel_delayed_work_sync(&hctx->run_work);
>>  	}
>>  }
> 
> It should be better to move cancel_delayed_work_sync() into
> blk_mq_exit_hctx(), exactly before adding hctx into unused list.
> 
Sure. I will do it in my next patch series.
> 
> Thanks,
> Ming

Thanks and Regards,
Pradeep
Ming Lei Sept. 15, 2020, 12:41 p.m. UTC | #3
On Tue, Sep 15, 2020 at 05:50:33PM +0530, ppvk@codeaurora.org wrote:
> On 2020-09-15 15:39, Ming Lei wrote:
> > On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
> > > Observes below crash while accessing (use-after-free) lock member
> > > of bfq data.
> > > 
> > > context#1			context#2
> > > 				process_one_work()
> > > kthread()			blk_mq_run_work_fn()
> > > worker_thread()			 ->__blk_mq_run_hw_queue()
> > > process_one_work()		  ->blk_mq_sched_dispatch_requests()
> > > __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
> > 
> > Just found __blk_release_queue killed in v5.9 cycle.
> > 
> Yes on v5.9 blk_release_queue() will be called directly by q->kobj when
> request_queue ref. goes zero but
> where as on older kernel versions (< 5.9), blk_release_queue() will
> schedule a work to invoke/call "__blk_release_queue()".
> 
> > > ->__elevator_exit()
> > >   ->blk_mq_exit_sched()
> > >     ->exit_sched()
> > >       ->kfree()
> > >       					->bfq_dispatch_request()
> > > 					  ->spin_unlock_irq(&bfqd->lock)
> > 
> > Actually not sure if the above race is easy to trigger in recent kernel,
> > because we do call cancel_delayed_work_sync() in
> > blk_mq_hw_sysfs_release(),
> > which is usually called before __elevator_exit() from
> > blk_exit_queue()/blk_release_queue().
> > 
> blk_mq_hw_sysfs_release() will be called from blk_mq_release() i.e. with
> kobject_put(hctx->kobj), which is after __elevator_exit()
> 
> __elevator_exit() is called from blk_exit_queue() which is prior to
> blk_mq_release().
> 
> > So can you share your kernel version in which the issue is reproduced?
> > And can you reproduce this issue on v5.8 or v5.9-rc5?
> > 
> This issue is seen on v5.4 stable and it is very easy to reproduce on v5.4.
> sorry, i don't have a resource with v5.8 or with latest kernel. I can help
> you
> to get tested on v5.4. From the issue prospective, both v5.4 kernel and
> latest kernels calls blk_mq_release() -> blk_mq_hw_sysfs_release() after
> __elevator_exit(). So, i think it wont matter much here.
> 
> > > 
> > > This is because of the kblockd delayed work that might got scheduled
> > > around blk_release_queue() and accessed use-after-free member of
> > > bfq_data.
> > > 
> > > 240.212359:   <2> Unable to handle kernel paging request at
> > > virtual address ffffffee2e33ad70
> > > ...
> > > 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
> > > 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
> > > 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
> > > 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
> > > ...
> > > Call trace:
> > > 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
> > > 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
> > > 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
> > > 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
> > > 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
> > > 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
> > > 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
> > > 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
> > > 240.212963:   <2>  process_one_work+0x280/0x460
> > > 240.212973:   <2>  worker_thread+0x27c/0x4dc
> > > 240.212986:   <2>  kthread+0x160/0x170
> > > 
> > > Fix this by cancelling the delayed work if any before elevator exits.
> > > 
> > > Changes since V1:
> > > - Moved the logic into blk_cleanup_queue() as per Ming comments.
> > > 
> > > Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
> > > ---
> > >  block/blk-mq.c | 1 +
> > >  1 file changed, 1 insertion(+)
> > > 
> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > > index 4abb714..890fded 100644
> > > --- a/block/blk-mq.c
> > > +++ b/block/blk-mq.c
> > > @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct
> > > request_queue *q,
> > >  			break;
> > >  		blk_mq_debugfs_unregister_hctx(hctx);
> > >  		blk_mq_exit_hctx(q, set, hctx, i);
> > > +		cancel_delayed_work_sync(&hctx->run_work);
> > >  	}
> > >  }
> > 
> > It should be better to move cancel_delayed_work_sync() into
> > blk_mq_exit_hctx(), exactly before adding hctx into unused list.
> > 
> Sure. i will do it in my next patch series.

Thinking about it further, it looks like your 1st post is right, because the
hw queue won't be run once the refcount of the queue drops to zero.

Sorry for the noise.

thanks,
Ming
Pradeep P V K Sept. 15, 2020, 1:59 p.m. UTC | #4
On 2020-09-15 18:11, Ming Lei wrote:
> On Tue, Sep 15, 2020 at 05:50:33PM +0530, ppvk@codeaurora.org wrote:
>> On 2020-09-15 15:39, Ming Lei wrote:
>> > On Tue, Sep 15, 2020 at 02:41:02PM +0530, Pradeep P V K wrote:
>> > > Observes below crash while accessing (use-after-free) lock member
>> > > of bfq data.
>> > >
>> > > context#1			context#2
>> > > 				process_one_work()
>> > > kthread()			blk_mq_run_work_fn()
>> > > worker_thread()			 ->__blk_mq_run_hw_queue()
>> > > process_one_work()		  ->blk_mq_sched_dispatch_requests()
>> > > __blk_release_queue()		    ->blk_mq_do_dispatch_sched()
>> >
>> > Just found __blk_release_queue killed in v5.9 cycle.
>> >
>> Yes on v5.9 blk_release_queue() will be called directly by q->kobj 
>> when
>> request_queue ref. goes zero but
>> where as on older kernel versions (< 5.9), blk_release_queue() will
>> schedule a work to invoke/call "__blk_release_queue()".
>> 
>> > > ->__elevator_exit()
>> > >   ->blk_mq_exit_sched()
>> > >     ->exit_sched()
>> > >       ->kfree()
>> > >       					->bfq_dispatch_request()
>> > > 					  ->spin_unlock_irq(&bfqd->lock)
>> >
>> > Actually not sure if the above race is easy to trigger in recent kernel,
>> > because we do call cancel_delayed_work_sync() in
>> > blk_mq_hw_sysfs_release(),
>> > which is usually called before __elevator_exit() from
>> > blk_exit_queue()/blk_release_queue().
>> >
>> blk_mq_hw_sysfs_release() will be called from blk_mq_release() i.e. 
>> with
>> kobject_put(hctx->kobj), which is after __elevator_exit()
>> 
>> __elevator_exit() is called from blk_exit_queue() which is prior to
>> blk_mq_release().
>> 
>> > So can you share your kernel version in which the issue is reproduced?
>> > And can you reproduce this issue on v5.8 or v5.9-rc5?
>> >
>> This issue is seen on v5.4 stable and it is very easy to reproduce on 
>> v5.4.
>> sorry, i don't have a resource with v5.8 or with latest kernel. I can 
>> help
>> you
>> to get tested on v5.4. From the issue prospective, both v5.4 kernel 
>> and
>> latest kernels calls blk_mq_release() -> blk_mq_hw_sysfs_release() 
>> after
>> __elevator_exit(). So, i think it wont matter much here.
>> 
>> > >
>> > > This is because of the kblockd delayed work that might got scheduled
>> > > around blk_release_queue() and accessed use-after-free member of
>> > > bfq_data.
>> > >
>> > > 240.212359:   <2> Unable to handle kernel paging request at
>> > > virtual address ffffffee2e33ad70
>> > > ...
>> > > 240.212637:   <2> Workqueue: kblockd blk_mq_run_work_fn
>> > > 240.212649:   <2> pstate: 00c00085 (nzcv daIf +PAN +UAO)
>> > > 240.212666:   <2> pc : queued_spin_lock_slowpath+0x10c/0x2e0
>> > > 240.212677:   <2> lr : queued_spin_lock_slowpath+0x84/0x2e0
>> > > ...
>> > > Call trace:
>> > > 240.212865:   <2>  queued_spin_lock_slowpath+0x10c/0x2e0
>> > > 240.212876:   <2>  do_raw_spin_lock+0xf0/0xf4
>> > > 240.212890:   <2>  _raw_spin_lock_irq+0x74/0x94
>> > > 240.212906:   <2>  bfq_dispatch_request+0x4c/0xd60
>> > > 240.212918:   <2>  blk_mq_do_dispatch_sched+0xe0/0x1f0
>> > > 240.212927:   <2>  blk_mq_sched_dispatch_requests+0x130/0x194
>> > > 240.212940:   <2>  __blk_mq_run_hw_queue+0x100/0x158
>> > > 240.212950:   <2>  blk_mq_run_work_fn+0x1c/0x28
>> > > 240.212963:   <2>  process_one_work+0x280/0x460
>> > > 240.212973:   <2>  worker_thread+0x27c/0x4dc
>> > > 240.212986:   <2>  kthread+0x160/0x170
>> > >
>> > > Fix this by cancelling the delayed work if any before elevator exits.
>> > >
>> > > Changes since V1:
>> > > - Moved the logic into blk_cleanup_queue() as per Ming comments.
>> > >
>> > > Signed-off-by: Pradeep P V K <ppvk@codeaurora.org>
>> > > ---
>> > >  block/blk-mq.c | 1 +
>> > >  1 file changed, 1 insertion(+)
>> > >
>> > > diff --git a/block/blk-mq.c b/block/blk-mq.c
>> > > index 4abb714..890fded 100644
>> > > --- a/block/blk-mq.c
>> > > +++ b/block/blk-mq.c
>> > > @@ -2598,6 +2598,7 @@ static void blk_mq_exit_hw_queues(struct
>> > > request_queue *q,
>> > >  			break;
>> > >  		blk_mq_debugfs_unregister_hctx(hctx);
>> > >  		blk_mq_exit_hctx(q, set, hctx, i);
>> > > +		cancel_delayed_work_sync(&hctx->run_work);
>> > >  	}
>> > >  }
>> >
>> > It should be better to move cancel_delayed_work_sync() into
>> > blk_mq_exit_hctx(), exactly before adding hctx into unused list.
>> >
>> Sure. i will do it in my next patch series.
> 
> Thinking of further, looks your 1st post is right, because hw queue
> won't be run when refcount of the queue drops to zero.
> 
OK. Can you please add your Reviewed-by tag to my V1 patch?
> Sorry for the noise.
> 
Never mind, it's OK.

> thanks,
> Ming

Thanks and Regards,
Pradeep

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4abb714..890fded 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2598,6 +2598,7 @@  static void blk_mq_exit_hw_queues(struct request_queue *q,
 			break;
 		blk_mq_debugfs_unregister_hctx(hctx);
 		blk_mq_exit_hctx(q, set, hctx, i);
+		cancel_delayed_work_sync(&hctx->run_work);
 	}
 }