Message ID | 20190412033032.10418-2-ming.lei@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Series | blk-mq: fix races related with freeing queue |
Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
On 4/12/19 5:30 AM, Ming Lei wrote:
> Just like aio/io_uring, we need to grab 2 refcounts for queuing one
> request: one for submission, another for completion.
>
> If the request isn't queued from the plug code path, the refcount grabbed
> in generic_make_request() serves for submission. In theory, this
> refcount should have been released after the submission (async run queue)
> is done. blk_freeze_queue() works together with blk_sync_queue() to
> avoid races between queue cleanup and IO submission; since async run
> queue activities are canceled, and hctx->run_work is scheduled with the
> refcount held, it is fine to not hold the refcount while running the
> run queue work function to dispatch IO.
>
> However, if a request is staged onto the plug list and finally queued
> from the plug code path, the submission-side refcount is actually missed,
> and we may start to run the queue after the queue has been removed,
> because the queue's kobject refcount isn't guaranteed to be held in the
> plug-flushing context. A kernel oops is then triggered; see the
> following race:
>
> blk_mq_flush_plug_list():
>         blk_mq_sched_insert_requests()
>                 insert requests to sw queue or scheduler queue
>         blk_mq_run_hw_queue
>
> Because of the concurrent run queue, all requests inserted above may be
> completed before blk_mq_run_hw_queue is called, so the queue can be
> freed during the above blk_mq_run_hw_queue().
>
> Fix the issue by grabbing .q_usage_counter before calling
> blk_mq_sched_insert_requests() in blk_mq_flush_plug_list(). This is
> safe because the queue is definitely alive before inserting the request.
>
> Cc: Dongli Zhang <dongli.zhang@oracle.com>
> Cc: James Smart <james.smart@broadcom.com>
> Cc: Bart Van Assche <bart.vanassche@wdc.com>
> Cc: linux-scsi@vger.kernel.org
> Cc: Martin K. Petersen <martin.petersen@oracle.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: James E.J. Bottomley <jejb@linux.vnet.ibm.com>
> Cc: jianchao wang <jianchao.w.wang@oracle.com>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  block/blk-mq.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 3ff3d7b49969..5b586affee09 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1728,9 +1728,12 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>  		if (rq->mq_hctx != this_hctx || rq->mq_ctx != this_ctx) {
>  			if (this_hctx) {
>  				trace_block_unplug(this_q, depth, !from_schedule);
> +
> +				percpu_ref_get(&this_q->q_usage_counter);
>  				blk_mq_sched_insert_requests(this_hctx, this_ctx,
>  							     &rq_list,
>  							     from_schedule);
> +				percpu_ref_put(&this_q->q_usage_counter);
>  			}
>
>  			this_q = rq->q;
> @@ -1749,8 +1752,11 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
>  	 */
>  	if (this_hctx) {
>  		trace_block_unplug(this_q, depth, !from_schedule);
> +
> +		percpu_ref_get(&this_q->q_usage_counter);
>  		blk_mq_sched_insert_requests(this_hctx, this_ctx, &rq_list,
>  					     from_schedule);
> +		percpu_ref_put(&this_q->q_usage_counter);
>  	}
>  }

Reviewed-by: Hannes Reinecke <hare@suse.com>

Cheers,
Hannes