
[v3,4/4] block: block new I/O just after queue is set as dying

Message ID 20170327120658.29864-5-tom.leiming@gmail.com (mailing list archive)
State New, archived

Commit Message

Ming Lei March 27, 2017, 12:06 p.m. UTC
Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
from generic bypassing"), the dying flag was checked before
entering the queue. Tejun converted that check into a check of
.mq_freeze_depth, on the assumption that the counter is increased
just after the dying flag is set. Unfortunately we don't do that
in blk_set_queue_dying().

This patch calls blk_freeze_queue_start() in blk_set_queue_dying(),
so that new I/O is blocked once the queue is set as dying.

Given blk_set_queue_dying() is always called in the remove path
of a block device, and the queue will be cleaned up later, we
don't need to worry about undoing the counter.

Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Tejun Heo <tj@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-core.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)
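
For context, the window this patch closes is the gap between setting
the DYING flag and starting the queue freeze. A hypothetical
interleaving without the patch (illustration only; helper names as in
the v4.11-era block layer):

    CPU0 (remove path)               CPU1 (submitter)
    ------------------               ----------------
    blk_set_queue_dying()
      set QUEUE_FLAG_DYING
                                     blk_queue_enter()
                                       percpu_ref_tryget_live()
                                       succeeds, so a new request
                                       enters the dying queue
    blk_cleanup_queue()
      blk_freeze_queue()
        percpu_ref_kill()            (too late, I/O already entered)

With blk_freeze_queue_start() called from blk_set_queue_dying(), the
percpu ref is killed at the same point the flag is set, so the tryget
on CPU1 fails and the submitter is routed into the slow path instead.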

Comments

Johannes Thumshirn March 27, 2017, 12:16 p.m. UTC | #1
On Mon, Mar 27, 2017 at 08:06:58PM +0800, Ming Lei wrote:
> Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
> from generic bypassing"), the dying flag was checked before
> entering the queue. Tejun converted that check into a check of
> .mq_freeze_depth, on the assumption that the counter is increased
> just after the dying flag is set. Unfortunately we don't do that
> in blk_set_queue_dying().
> 
> This patch calls blk_freeze_queue_start() in blk_set_queue_dying(),
> so that new I/O is blocked once the queue is set as dying.
> 
> Given blk_set_queue_dying() is always called in the remove path
> of a block device, and the queue will be cleaned up later, we
> don't need to worry about undoing the counter.
> 
> Cc: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Tejun Heo <tj@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.com>
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

Bart Van Assche March 27, 2017, 3:49 p.m. UTC | #2
On Mon, 2017-03-27 at 20:06 +0800, Ming Lei wrote:
> Before commit 780db2071a ("blk-mq: decouble blk-mq freezing
> from generic bypassing"), the dying flag was checked before
> entering the queue. Tejun converted that check into a check of
> .mq_freeze_depth, on the assumption that the counter is increased
> just after the dying flag is set. Unfortunately we don't do that
> in blk_set_queue_dying().
> 
> This patch calls blk_freeze_queue_start() in blk_set_queue_dying(),
> so that new I/O is blocked once the queue is set as dying.
> 
> Given blk_set_queue_dying() is always called in the remove path
> of a block device, and the queue will be cleaned up later, we
> don't need to worry about undoing the counter.
> 
> Cc: Bart Van Assche <bart.vanassche@sandisk.com>
> Cc: Tejun Heo <tj@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.com>
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---
>  block/blk-core.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 60f364e1d36b..e22c4ea002ec 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
>  	queue_flag_set(QUEUE_FLAG_DYING, q);
>  	spin_unlock_irq(q->queue_lock);
>  
> +	/*
> +	 * When queue DYING flag is set, we need to block new req
> +	 * entering queue, so we call blk_freeze_queue_start() to
> +	 * prevent I/O from crossing blk_queue_enter().
> +	 */
> +	blk_freeze_queue_start(q);
> +
>  	if (q->mq_ops)
>  		blk_mq_wake_waiters(q);
>  	else {
> @@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
>  		/*
>  		 * read pair of barrier in blk_freeze_queue_start(),
>  		 * we need to order reading __PERCPU_REF_DEAD flag of
> -		 * .q_usage_counter and reading .mq_freeze_depth,
> -		 * otherwise the following wait may never return if the
> -		 * two reads are reordered.
> +		 * .q_usage_counter and reading .mq_freeze_depth or
> +		 * queue dying flag, otherwise the following wait may
> +		 * never return if the two reads are reordered.
>  		 */
>  		smp_rmb();
>  

An explanation of why that crossing can happen is still missing above the
blk_freeze_queue_start() call. Additionally, I'm still wondering whether
or not we need "Cc: stable" tags for the patches in this series. But since
the code looks fine:

Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
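
For reference, the write side that the smp_rmb() in blk_queue_enter()
is meant to pair with looks roughly like this once the freeze helper
has been renamed earlier in the series (a simplified sketch of the
v4.11-era code, not a verbatim copy):

void blk_freeze_queue_start(struct request_queue *q)
{
	int freeze_depth;

	/* Bump the freeze counter first... */
	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		/*
		 * ...then mark the percpu ref dead. A reader that
		 * observes __PERCPU_REF_DEAD should therefore also be
		 * able to observe the incremented .mq_freeze_depth,
		 * provided its own two reads are ordered (hence the
		 * smp_rmb() on the reader side).
		 */
		percpu_ref_kill(&q->q_usage_counter);
		if (q->mq_ops)
			blk_mq_run_hw_queues(q, false);
	}
}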

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index 60f364e1d36b..e22c4ea002ec 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,6 +500,13 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
+	/*
+	 * When queue DYING flag is set, we need to block new req
+	 * entering queue, so we call blk_freeze_queue_start() to
+	 * prevent I/O from crossing blk_queue_enter().
+	 */
+	blk_freeze_queue_start(q);
+
 	if (q->mq_ops)
 		blk_mq_wake_waiters(q);
 	else {
@@ -672,9 +679,9 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		/*
 		 * read pair of barrier in blk_freeze_queue_start(),
 		 * we need to order reading __PERCPU_REF_DEAD flag of
-		 * .q_usage_counter and reading .mq_freeze_depth,
-		 * otherwise the following wait may never return if the
-		 * two reads are reordered.
+		 * .q_usage_counter and reading .mq_freeze_depth or
+		 * queue dying flag, otherwise the following wait may
+		 * never return if the two reads are reordered.
 		 */
 		smp_rmb();
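
For completeness, the reader side those barriers protect ends up
looking roughly like this with the patch applied (simplified from the
v4.11-era blk_queue_enter(); not a verbatim copy):

int blk_queue_enter(struct request_queue *q, bool nowait)
{
	while (true) {
		int ret;

		/* Fast path: ref still live, queue neither frozen nor dying. */
		if (percpu_ref_tryget_live(&q->q_usage_counter))
			return 0;

		if (nowait)
			return -EBUSY;

		/*
		 * Order the read of __PERCPU_REF_DEAD above against the
		 * reads of .mq_freeze_depth and the dying flag below;
		 * pairs with the barrier in blk_freeze_queue_start().
		 */
		smp_rmb();

		ret = wait_event_interruptible(q->mq_freeze_wq,
				!atomic_read(&q->mq_freeze_depth) ||
				blk_queue_dying(q));
		if (blk_queue_dying(q))
			return -ENODEV;
		if (ret)
			return ret;
	}
}

With blk_freeze_queue_start() now called from blk_set_queue_dying(),
a submitter that loses the race falls into the slow path, sees the
dying flag, and gets -ENODEV instead of entering a dead queue.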