
[v1,3/3] blk-mq: start to freeze queue just after setting dying

Message ID 20170317095711.5819-4-tom.leiming@gmail.com (mailing list archive)
State New, archived

Commit Message

Ming Lei March 17, 2017, 9:57 a.m. UTC
Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
from generic bypassing"), the dying flag was checked before
entering the queue. Tejun converted that check into a check of
.mq_freeze_depth, assuming the counter is increased just after
the dying flag is set. Unfortunately we don't do that in
blk_set_queue_dying().

This patch calls blk_mq_freeze_queue_start() for blk-mq in
blk_set_queue_dying(), so that new incoming I/O is blocked
once the queue is set as dying.

Given blk_set_queue_dying() is always called in the remove path
of a block device, and the queue will be cleaned up later, we
don't need to worry about undoing the counter.

Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
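
For context, new I/O enters the queue through a gate along these lines
(a simplified sketch of blk_queue_enter() as it looked around this time,
paraphrased from blk-core.c rather than quoted exactly). Once a freeze
has started, the percpu ref tryget fails, and a caller either waits for
the freeze to end or, if the queue is dying, fails with -ENODEV; this is
why starting the freeze right after setting the dying flag blocks new I/O.

int blk_queue_enter(struct request_queue *q, bool nowait)
{
	while (true) {
		int ret;

		/* fast path: succeeds as long as the queue is not frozen */
		if (percpu_ref_tryget_live(&q->q_usage_counter))
			return 0;

		if (nowait)
			return -EBUSY;

		/* wait for the freeze to end, or give up if dying */
		ret = wait_event_interruptible(q->mq_freeze_wq,
				!atomic_read(&q->mq_freeze_depth) ||
				blk_queue_dying(q));
		if (blk_queue_dying(q))
			return -ENODEV;
		if (ret)
			return ret;
	}
}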

Comments

Bart Van Assche March 17, 2017, 11:26 p.m. UTC | #1
On Fri, 2017-03-17 at 17:57 +0800, Ming Lei wrote:
> Given blk_set_queue_dying() is always called in the remove path
> of a block device, and the queue will be cleaned up later, we
> don't need to worry about undoing the counter.
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index d772c221cc17..62d4967c369f 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
>  	queue_flag_set(QUEUE_FLAG_DYING, q);
>  	spin_unlock_irq(q->queue_lock);
>  
> -	if (q->mq_ops)
> +	if (q->mq_ops) {
>  		blk_mq_wake_waiters(q);
> -	else {
> +
> +		/* block new I/O coming */
> +		blk_mq_freeze_queue_start(q);
> +	} else {
>  		struct request_list *rl;
>  
>  		spin_lock_irq(q->queue_lock);

Hello Ming,

The blk_freeze_queue() call in blk_cleanup_queue() waits until q_usage_counter
drops to zero. Since the above blk_mq_freeze_queue_start() call increases that
counter by one, how is blk_freeze_queue() ever expected to finish?

Bart.
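
For reference, the pair of calls in question looks roughly like this
(paraphrased from blk-mq.c of this period, not quoted exactly):
blk_freeze_queue() starts the freeze, then waits for q_usage_counter
to drain to zero.

void blk_mq_freeze_queue_wait(struct request_queue *q)
{
	/* sleeps until every in-flight reference has been dropped */
	wait_event(q->mq_freeze_wq,
		   percpu_ref_is_zero(&q->q_usage_counter));
}

void blk_freeze_queue(struct request_queue *q)
{
	blk_mq_freeze_queue_start(q);
	blk_mq_freeze_queue_wait(q);
}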
Ming Lei March 18, 2017, 1:01 a.m. UTC | #2
On Fri, Mar 17, 2017 at 11:26:26PM +0000, Bart Van Assche wrote:
> On Fri, 2017-03-17 at 17:57 +0800, Ming Lei wrote:
> > Given blk_set_queue_dying() is always called in the remove path
> > of a block device, and the queue will be cleaned up later, we
> > don't need to worry about undoing the counter.
> > 
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index d772c221cc17..62d4967c369f 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
> >  	queue_flag_set(QUEUE_FLAG_DYING, q);
> >  	spin_unlock_irq(q->queue_lock);
> >  
> > -	if (q->mq_ops)
> > +	if (q->mq_ops) {
> >  		blk_mq_wake_waiters(q);
> > -	else {
> > +
> > +		/* block new I/O coming */
> > +		blk_mq_freeze_queue_start(q);
> > +	} else {
> >  		struct request_list *rl;
> >  
> >  		spin_lock_irq(q->queue_lock);
> 
> Hello Ming,
> 
> The blk_freeze_queue() call in blk_cleanup_queue() waits until q_usage_counter
> drops to zero. Since the above blk_mq_freeze_queue_start() call increases that
> counter by one, how is blk_freeze_queue() ever expected to finish?

It is q->mq_freeze_depth that is increased by blk_mq_freeze_queue_start(), not
q->q_usage_counter; otherwise blk_freeze_queue() would indeed never return. :-)

Thanks,
Ming
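
A sketch of blk_mq_freeze_queue_start() as it looked at the time
(paraphrased from blk-mq.c, not quoted exactly) makes the distinction
concrete: the call increments q->mq_freeze_depth and kills the percpu
ref, so new entries fail while q_usage_counter still drains to zero as
in-flight I/O completes.

void blk_mq_freeze_queue_start(struct request_queue *q)
{
	int freeze_depth;

	/* mq_freeze_depth, not q_usage_counter, is what gets bumped */
	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		/*
		 * Kill the percpu ref so new entries fail; in-flight
		 * references are still dropped normally, letting the
		 * counter reach zero.
		 */
		percpu_ref_kill(&q->q_usage_counter);
		blk_mq_run_hw_queues(q, false);
	}
}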
Hannes Reinecke March 18, 2017, 11:27 a.m. UTC | #3
On 03/17/2017 10:57 AM, Ming Lei wrote:
> Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
> from generic bypassing"), the dying flag was checked before
> entering the queue. Tejun converted that check into a check of
> .mq_freeze_depth, assuming the counter is increased just after
> the dying flag is set. Unfortunately we don't do that in
> blk_set_queue_dying().
> 
> This patch calls blk_mq_freeze_queue_start() for blk-mq in
> blk_set_queue_dying(), so that new incoming I/O is blocked
> once the queue is set as dying.
> 
> Given blk_set_queue_dying() is always called in the remove path
> of a block device, and the queue will be cleaned up later, we
> don't need to worry about undoing the counter.
> 
> Cc: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Tejun Heo <tj@kernel.org>
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---
>  block/blk-core.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,

Hannes

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index d772c221cc17..62d4967c369f 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
-	if (q->mq_ops)
+	if (q->mq_ops) {
 		blk_mq_wake_waiters(q);
-	else {
+
+		/* block new I/O coming */
+		blk_mq_freeze_queue_start(q);
+	} else {
 		struct request_list *rl;
 
 		spin_lock_irq(q->queue_lock);