Message ID | 20201116030459.13963-2-bvanassche@acm.org
---|---
State | Superseded
Series | Rework runtime suspend and SCSI domain validation
On Sun, Nov 15, 2020 at 07:04:51PM -0800, Bart Van Assche wrote:
> With the current implementation the following race can happen:
> * blk_pre_runtime_suspend() calls blk_freeze_queue_start() and
>   blk_mq_unfreeze_queue().
> * blk_queue_enter() calls blk_queue_pm_only() and that function returns
>   true.
> * blk_queue_enter() calls blk_pm_request_resume() and that function does
>   not call pm_request_resume() because the queue runtime status is
>   RPM_ACTIVE.
> * blk_pre_runtime_suspend() changes the queue status into RPM_SUSPENDING.
>
> Fix this race by changing the queue runtime status into RPM_SUSPENDING
> before switching q_usage_counter to atomic mode.

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>
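For context: the check that fails to schedule a wakeup in the race above is blk_pm_request_resume(), which blk_queue_enter() evaluates for non-PM requests before sleeping on a pm_only queue. A sketch of that helper as it reads around v5.10 (paraphrased from block/blk-pm.h; consult the tree for the authoritative version):

```c
/*
 * Paraphrase of the helper in block/blk-pm.h around v5.10.
 * blk_queue_enter() calls this for a non-PM request before sleeping on a
 * queue that has pm_only set, so that a suspended or suspending device is
 * woken back up.
 */
static inline void blk_pm_request_resume(struct request_queue *q)
{
	if (q->dev && (q->rpm_status == RPM_SUSPENDED ||
		       q->rpm_status == RPM_SUSPENDING))
		pm_request_resume(q->dev);
}
```

With the pre-patch ordering, rpm_status was still RPM_ACTIVE at the point where a non-PM caller saw pm_only set, so this check skipped pm_request_resume() and the caller could block with no resume ever scheduled. Publishing RPM_SUSPENDING before the freeze closes that window.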
```diff
diff --git a/block/blk-pm.c b/block/blk-pm.c
index b85234d758f7..17bd020268d4 100644
--- a/block/blk-pm.c
+++ b/block/blk-pm.c
@@ -67,6 +67,10 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 
 	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);
 
+	spin_lock_irq(&q->queue_lock);
+	q->rpm_status = RPM_SUSPENDING;
+	spin_unlock_irq(&q->queue_lock);
+
 	/*
 	 * Increase the pm_only counter before checking whether any
 	 * non-PM blk_queue_enter() calls are in progress to avoid that any
@@ -89,15 +93,14 @@ int blk_pre_runtime_suspend(struct request_queue *q)
 	/* Switch q_usage_counter back to per-cpu mode. */
 	blk_mq_unfreeze_queue(q);
 
-	spin_lock_irq(&q->queue_lock);
-	if (ret < 0)
+	if (ret < 0) {
+		spin_lock_irq(&q->queue_lock);
+		q->rpm_status = RPM_ACTIVE;
 		pm_runtime_mark_last_busy(q->dev);
-	else
-		q->rpm_status = RPM_SUSPENDING;
-	spin_unlock_irq(&q->queue_lock);
+		spin_unlock_irq(&q->queue_lock);
 
-	if (ret)
 		blk_clear_pm_only(q);
+	}
 
 	return ret;
 }
```
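For reference, a condensed sketch of blk_pre_runtime_suspend() with this patch applied (assembled from the hunks above plus the unchanged surrounding code; the freeze/unfreeze middle section is elided):

```c
int blk_pre_runtime_suspend(struct request_queue *q)
{
	int ret = 0;

	if (!q->dev)
		return ret;

	WARN_ON_ONCE(q->rpm_status != RPM_ACTIVE);

	/*
	 * Publish RPM_SUSPENDING before q_usage_counter goes atomic, so a
	 * non-PM blk_queue_enter() that sees pm_only also sees a status
	 * that makes blk_pm_request_resume() schedule a resume.
	 */
	spin_lock_irq(&q->queue_lock);
	q->rpm_status = RPM_SUSPENDING;
	spin_unlock_irq(&q->queue_lock);

	blk_set_pm_only(q);
	/* ... ret = -EBUSY; freeze the queue, set ret = 0 if
	 * q_usage_counter reached zero, unfreeze (unchanged) ... */

	if (ret < 0) {
		/*
		 * Roll back: restore RPM_ACTIVE before blk_clear_pm_only()
		 * lets non-PM requests through again.
		 */
		spin_lock_irq(&q->queue_lock);
		q->rpm_status = RPM_ACTIVE;
		pm_runtime_mark_last_busy(q->dev);
		spin_unlock_irq(&q->queue_lock);

		blk_clear_pm_only(q);
	}

	return ret;
}
```

Note the symmetry the patch introduces: the status transition now brackets the whole pm_only/freeze sequence, and the error path undoes it under the same queue_lock before non-PM requests are readmitted.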