
block: quiesce queue before freeing queue

Message ID 20210115064352.532534-1-yuyufen@huawei.com (mailing list archive)
State New, archived

Commit Message

Yufen Yu Jan. 15, 2021, 6:43 a.m. UTC
There is a race between blk_mq_run_hw_queue() and queue cleanup,
which can cause a use-after-free as follows:

cpu1                              cpu2
queue_state_write()
                                  blk_release_queue
                                    blk_exit_queue
  blk_mq_run_hw_queue
    blk_mq_hctx_has_pending
      e = q->elevator
                                    q->elevator = NULL
                                    free(q->elevator)
      e && e->type->ops.has_work //use-after-free

Fix this bug by quiescing the queue before freeing it. Then anyone
who tries to run a hw queue will be safe.

This is basically a revert of 662156641bc4 ("block: don't drain
in-progress dispatch in blk_cleanup_queue()")

Fixes: 662156641bc4 ("block: don't drain in-progress dispatch in blk_cleanup_queue()")
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
---
 block/blk-core.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
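The use-after-free in the diagram above comes from the dispatch path caching `q->elevator` while the cleanup path frees it. The shape of the race, and of the proposed fix (bail out of the dispatch path once the queue is quiesced), can be sketched in user space. This is a hypothetical, simplified model with made-up types; it is not the kernel code, and the `quiesced` flag only stands in for what blk_mq_quiesce_queue() actually guarantees:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the kernel structures in the report. */
struct elevator {
    bool (*has_work)(void);
};

struct request_queue {
    struct elevator *elevator;
    bool quiesced;              /* models blk_mq_quiesce_queue() having run */
};

static bool noop_has_work(void) { return false; }

/* Dispatch-side check, loosely modeled on blk_mq_hctx_has_pending().
 * Without the quiesced check, caching e and dereferencing it after
 * cleanup has freed q->elevator is the use-after-free in the report. */
static bool hctx_has_pending(struct request_queue *q)
{
    if (q->quiesced)            /* the fix: never touch a quiesced queue */
        return false;
    struct elevator *e = q->elevator;
    return e && e->has_work();
}

/* Cleanup side: quiesce first, then tear down the elevator. */
static void cleanup_queue(struct request_queue *q)
{
    q->quiesced = true;         /* must be visible before the free */
    free(q->elevator);
    q->elevator = NULL;
}
```

In the real patch the ordering between setting the flag and the free is enforced by blk_mq_quiesce_queue()'s grace period, not by a plain store as in this sketch.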

Comments

Ming Lei Jan. 15, 2021, 7:35 a.m. UTC | #1
On Fri, Jan 15, 2021 at 01:43:52AM -0500, Yufen Yu wrote:
> There is a race between blk_mq_run_hw_queue() and queue cleanup,
> which can cause a use-after-free as follows:
> 
> cpu1                              cpu2
> queue_state_write()
>                                   blk_release_queue
>                                     blk_exit_queue
>   blk_mq_run_hw_queue
>     blk_mq_hctx_has_pending
>       e = q->elevator
>                                     q->elevator = NULL
>                                     free(q->elevator)
>       e && e->type->ops.has_work //use-after-free
> 
> Fix this bug by quiescing the queue before freeing it. Then anyone
> who tries to run a hw queue will be safe.

It is the blk_mq_run_hw_queue() caller's responsibility to make sure
that the request queue won't disappear, so please fix the caller if
there is such an issue.
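The discipline Ming Lei describes, that the caller must pin the queue before running a hw queue, is essentially a reference-counting contract. A user-space sketch of that contract follows; the names and the `released` flag are made up for illustration (the kernel uses blk_get_queue()/blk_put_queue() and a percpu ref, not this):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Model of "caller holds a reference": run-hw-queue work may only be
 * done between a successful get and the matching put. */
struct request_queue {
    atomic_int refcount;
    bool released;              /* set when the last reference drops */
};

/* Refuse new references once the count has hit zero, loosely like
 * blk_get_queue() failing for a dying queue. */
static bool get_queue(struct request_queue *q)
{
    int old = atomic_load(&q->refcount);
    while (old > 0) {
        if (atomic_compare_exchange_weak(&q->refcount, &old, old + 1))
            return true;        /* on failure, old is reloaded for retry */
    }
    return false;
}

static void put_queue(struct request_queue *q)
{
    if (atomic_fetch_sub(&q->refcount, 1) == 1)
        q->released = true;     /* release path (blk_release_queue) runs */
}
```

Under this contract the queue cannot be freed while any dispatch-side caller still holds a reference, which is why the fix belongs in the caller rather than in blk_cleanup_queue().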

> 
> This is basically a revert of 662156641bc4 ("block: don't drain
> in-progress dispatch in blk_cleanup_queue()")
> 
> Fixes: 662156641bc4 ("block: don't drain in-progress dispatch in blk_cleanup_queue()")
> Signed-off-by: Yufen Yu <yuyufen@huawei.com>
> ---
>  block/blk-core.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 7663a9b94b80..f8a038d19c89 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -392,6 +392,18 @@ void blk_cleanup_queue(struct request_queue *q)
>  
>  	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
>  
> +	/*
> +	 * Make sure all in-progress dispatches are completed, because
> +	 * blk_freeze_queue() can only complete requests, and dispatch
> +	 * may still be in progress since we dispatch requests from
> +	 * more than one context.
> +	 *
> +	 * We rely on the driver to deal with the race in case queue
> +	 * initialization isn't done.
> +	 */
> +	if (queue_is_mq(q) && blk_queue_init_done(q))
> +		blk_mq_quiesce_queue(q);

No, please don't do that. We have had several slow-boot reports before
caused by synchronize_rcu() in blk_cleanup_queue(), since
blk_cleanup_queue() is called for non-existent queues, whose number can
be quite large.

Thanks,
Ming
Yufen Yu Jan. 15, 2021, 8:32 a.m. UTC | #2
On 2021/1/15 15:35, Ming Lei wrote:
> On Fri, Jan 15, 2021 at 01:43:52AM -0500, Yufen Yu wrote:
>> There is a race between blk_mq_run_hw_queue() and queue cleanup,
>> which can cause a use-after-free as follows:

>>
>> diff --git a/block/blk-core.c b/block/blk-core.c
>> index 7663a9b94b80..f8a038d19c89 100644
>> --- a/block/blk-core.c
>> +++ b/block/blk-core.c
>> @@ -392,6 +392,18 @@ void blk_cleanup_queue(struct request_queue *q)
>>   
>>   	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
>>   
>> +	/*
>> +	 * Make sure all in-progress dispatches are completed, because
>> +	 * blk_freeze_queue() can only complete requests, and dispatch
>> +	 * may still be in progress since we dispatch requests from
>> +	 * more than one context.
>> +	 *
>> +	 * We rely on the driver to deal with the race in case queue
>> +	 * initialization isn't done.
>> +	 */
>> +	if (queue_is_mq(q) && blk_queue_init_done(q))
>> +		blk_mq_quiesce_queue(q);
> 
> No, please don't do that. We have had several slow-boot reports before
> caused by synchronize_rcu() in blk_cleanup_queue(), since
> blk_cleanup_queue() is called for non-existent queues, whose number can
> be quite large.

Thanks for pointing this out. But won't the blk_queue_init_done() check
avoid calling synchronize_rcu() for non-existent queues?

Thanks,
Yufen

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index 7663a9b94b80..f8a038d19c89 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -392,6 +392,18 @@  void blk_cleanup_queue(struct request_queue *q)
 
 	blk_queue_flag_set(QUEUE_FLAG_DEAD, q);
 
+	/*
+	 * Make sure all in-progress dispatches are completed, because
+	 * blk_freeze_queue() can only complete requests, and dispatch
+	 * may still be in progress since we dispatch requests from
+	 * more than one context.
+	 *
+	 * We rely on the driver to deal with the race in case queue
+	 * initialization isn't done.
+	 */
+	if (queue_is_mq(q) && blk_queue_init_done(q))
+		blk_mq_quiesce_queue(q);
+
 	/* for synchronous bio-based driver finish in-flight integrity i/o */
 	blk_flush_integrity();
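The quiesce the patch relies on has two halves: new dispatchers observe the flag and bail out, and the quiesce side waits until every dispatcher already inside the critical section has left (in the kernel this wait is an RCU/SRCU grace period, which is exactly the synchronize_rcu() cost Ming Lei objects to). A single-threaded, hypothetical user-space model of those semantics, with the grace-period wait reduced to an assertion:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of blk_mq_quiesce_queue() semantics; names are made up. */
struct request_queue {
    bool quiesced;
    int in_flight_dispatch;     /* dispatchers inside the RCU read side */
};

/* A dispatcher must enter before touching queue state, and exits when
 * done; entry fails once the queue is quiesced. */
static bool dispatch_enter(struct request_queue *q)
{
    if (q->quiesced)
        return false;           /* new dispatch is blocked */
    q->in_flight_dispatch++;
    return true;
}

static void dispatch_exit(struct request_queue *q)
{
    q->in_flight_dispatch--;
}

static void quiesce_queue(struct request_queue *q)
{
    q->quiesced = true;
    /* The kernel waits here (synchronize_rcu()) for in-flight
     * dispatchers to drain; this sketch just requires that they
     * have already exited. */
    assert(q->in_flight_dispatch == 0);
}
```

After quiesce_queue() returns, no dispatcher can reach `q->elevator` again, which is why freeing it afterwards is safe; the per-queue cost of that wait is the trade-off debated in the thread above.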