[v3,2/4] block: add a read barrier in blk_queue_enter()

Message ID 20170327120658.29864-3-tom.leiming@gmail.com
State New

Commit Message

Ming Lei March 27, 2017, 12:06 p.m. UTC
Without the barrier, the read of the DEAD flag of .q_usage_counter
and the read of .mq_freeze_depth may be reordered, in which case the
following wait_event_interruptible() may never return.

Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-core.c | 9 +++++++++
 1 file changed, 9 insertions(+)

Comments

Johannes Thumshirn March 27, 2017, 12:14 p.m. UTC | #1
On Mon, Mar 27, 2017 at 08:06:56PM +0800, Ming Lei wrote:
> Without the barrier, reading DEAD flag of .q_usage_counter
> and reading .mq_freeze_depth may be reordered, then the
> following wait_event_interruptible() may never return.
> 
> Reviewed-by: Hannes Reinecke <hare@suse.com>
> Signed-off-by: Ming Lei <tom.leiming@gmail.com>
> ---

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Bart Van Assche March 27, 2017, 3:44 p.m. UTC | #2
On Mon, 2017-03-27 at 20:06 +0800, Ming Lei wrote:
> diff --git a/block/blk-core.c b/block/blk-core.c
> index ad388d5e309a..5e8963bc98d9 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -669,6 +669,15 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
>  		if (nowait)
>  			return -EBUSY;
>  
> +		/*
> +		 * This is the read pair of the barrier in
> +		 * blk_mq_freeze_queue_start(): order reading the
> +		 * __PERCPU_REF_DEAD flag of .q_usage_counter against
> +		 * reading .mq_freeze_depth, otherwise the wait below
> +		 * may never return if the two reads are reordered.
> +		 */
> +		smp_rmb();
> +
>  		ret = wait_event_interruptible(q->mq_freeze_wq,
>  				!atomic_read(&q->mq_freeze_depth) ||
>  				blk_queue_dying(q));

Since patch 4/4 modifies the comment introduced by this patch, I would have
preferred that patches 2/4 and 4/4 would have been combined into a single
patch. Anyway:

Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index ad388d5e309a..5e8963bc98d9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -669,6 +669,15 @@ int blk_queue_enter(struct request_queue *q, bool nowait)
 		if (nowait)
 			return -EBUSY;
 
+		/*
+		 * This is the read pair of the barrier in
+		 * blk_mq_freeze_queue_start(): order reading the
+		 * __PERCPU_REF_DEAD flag of .q_usage_counter against
+		 * reading .mq_freeze_depth, otherwise the wait below
+		 * may never return if the two reads are reordered.
+		 */
+		smp_rmb();
+
 		ret = wait_event_interruptible(q->mq_freeze_wq,
 				!atomic_read(&q->mq_freeze_depth) ||
 				blk_queue_dying(q));