Message ID | 20201203012638.543321-3-ming.lei@redhat.com (mailing list archive)
---|---
State | New, archived
Series | blk-mq/nvme-loop: use nvme-loop's lock class for addressing lockdep false positive warning
On 12/3/20 2:26 AM, Ming Lei wrote:
> Set nvme-loop's lock class via blk_mq_hctx_set_fq_lock_class to avoid
> lockdep's possible-recursive-locking warning. Then we can remove the
> dynamically allocated lock class for each flush queue, and finally
> avoid the horrible SCSI probe delay.
>
> This approach does not cover the case where one nvme-loop device is
> backed by another nvme-loop device. In practice, however, people seldom
> use such a setup for testing, and even if someone does, the result is
> only a recursive-locking false positive, not a real deadlock.
>
> Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
> Reported-by: Qian Cai <cai@redhat.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Cc: Sumit Saxena <sumit.saxena@broadcom.com>
> Cc: John Garry <john.garry@huawei.com>
> Cc: Kashyap Desai <kashyap.desai@broadcom.com>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Hannes Reinecke <hare@suse.de>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/nvme/target/loop.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
> index f6d81239be21..07806016c09d 100644
> --- a/drivers/nvme/target/loop.c
> +++ b/drivers/nvme/target/loop.c
> @@ -211,6 +211,8 @@ static int nvme_loop_init_request(struct blk_mq_tag_set *set,
>  			(set == &ctrl->tag_set) ? hctx_idx + 1 : 0);
>  }
>
> +static struct lock_class_key loop_hctx_fq_lock_key;
> +
>  static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
>  		unsigned int hctx_idx)
>  {
> @@ -219,6 +221,14 @@ static int nvme_loop_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
>
>  	BUG_ON(hctx_idx >= ctrl->ctrl.queue_count);
>
> +	/*
> +	 * flush_end_io() can be called recursively for us, so use our own
> +	 * lock class key to avoid lockdep's possible-recursive-locking
> +	 * warning. This lets us remove the dynamically allocated lock
> +	 * class for each flush queue, which may cause horrible boot delay.
> +	 */
> +	blk_mq_hctx_set_fq_lock_class(hctx, &loop_hctx_fq_lock_key);
> +
>  	hctx->driver_data = queue;
>  	return 0;
>  }

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes