| Field | Value |
|---|---|
| Message ID | 1489064578-17305-4-git-send-email-tom.leiming@gmail.com (mailing list archive) |
| State | New, archived |
```diff
diff --git a/block/blk-core.c b/block/blk-core.c
index 0eeb99ef654f..559487e58296 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -500,9 +500,12 @@ void blk_set_queue_dying(struct request_queue *q)
 	queue_flag_set(QUEUE_FLAG_DYING, q);
 	spin_unlock_irq(q->queue_lock);
 
-	if (q->mq_ops)
+	if (q->mq_ops) {
 		blk_mq_wake_waiters(q);
-	else {
+
+		/* block new I/O coming */
+		blk_mq_freeze_queue_start(q);
+	} else {
 		struct request_list *rl;
 
 		spin_lock_irq(q->queue_lock);
```
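For context on what the newly added call does: in this era of the kernel, blk_mq_freeze_queue_start() bumps q->mq_freeze_depth and, for the first freezer, kills the queue's percpu usage counter so new submitters can no longer enter. A simplified sketch, reproduced from memory of the v4.10-era block/blk-mq.c (exact details may differ across versions):

```c
/*
 * Simplified sketch of the v4.10-era blk_mq_freeze_queue_start(),
 * for illustration only; see block/blk-mq.c for the authoritative
 * version.
 */
void blk_mq_freeze_queue_start(struct request_queue *q)
{
	int freeze_depth;

	/* The first freezer switches the queue into draining mode. */
	freeze_depth = atomic_inc_return(&q->mq_freeze_depth);
	if (freeze_depth == 1) {
		/*
		 * Kill q_usage_counter so percpu_ref_tryget_live() in
		 * the submission path starts failing; new callers then
		 * wait (or bail out) instead of entering the queue.
		 */
		percpu_ref_kill(&q->q_usage_counter);
		blk_mq_run_hw_queues(q, false);
	}
}
```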
Before commit 780db2071a ("blk-mq: decouble blk-mq freezing from generic bypassing"), the dying flag was checked before entering the queue. Tejun converted that check into one on .mq_freeze_depth, assuming the counter is increased just after the dying flag is set. Unfortunately we don't do that in blk_set_queue_dying(). This patch calls blk_mq_freeze_queue_start() for blk-mq in blk_set_queue_dying(), so that new I/O is blocked once the queue is set as dying. Given blk_set_queue_dying() is always called in the remove path of a block device, and the queue will be cleaned up later, we don't need to worry about undoing the counter.

Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
---
 block/blk-core.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)
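The entry-path check the message refers to looks roughly like this in v4.10 (a sketch from memory, not an authoritative copy of block/blk-core.c):

```c
/*
 * Simplified sketch of the v4.10-era blk_queue_enter().  Once
 * blk_mq_freeze_queue_start() has killed q_usage_counter, the
 * tryget below fails and new submitters either wait or, if the
 * queue is dying, get -ENODEV; this is how the extra freeze in
 * blk_set_queue_dying() blocks new I/O from entering.
 */
int blk_queue_enter(struct request_queue *q, bool nowait)
{
	while (true) {
		int ret;

		/* Fast path: queue is live, take a usage reference. */
		if (percpu_ref_tryget_live(&q->q_usage_counter))
			return 0;

		if (nowait)
			return -EBUSY;

		/* Wait for unfreeze, or give up if the queue is dying. */
		ret = wait_event_interruptible(q->mq_freeze_wq,
				!atomic_read(&q->mq_freeze_depth) ||
				blk_queue_dying(q));
		if (blk_queue_dying(q))
			return -ENODEV;
		if (ret)
			return ret;
	}
}
```

Because the remove path never unfreezes, the dying-queue branch above is what every late submitter hits, which is why the patch does not need a matching blk_mq_unfreeze_queue().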