
[1/2] blk-mq: add callback of .cleanup_rq

Message ID 20190718032519.28306-2-ming.lei@redhat.com (mailing list archive)
State Superseded
Series block/scsi/dm-rq: fix leak of request private data in dm-mpath

Commit Message

Ming Lei July 18, 2019, 3:25 a.m. UTC
dm-rq needs to free a request that has been dispatched but not
completed by the underlying queue. However, the underlying queue may
have allocated private data for this request in .queue_rq(), so dm-rq
will leak that request-private data.

Add a new callback, .cleanup_rq(), to fix the memory leak.

Another use case is freeing requests when the hctx becomes dead
during CPU hotplug.

Cc: Ewan D. Milne <emilne@redhat.com>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Mike Snitzer <snitzer@redhat.com>
Cc: dm-devel@redhat.com
Cc: <stable@vger.kernel.org>
Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/md/dm-rq.c     |  1 +
 include/linux/blk-mq.h | 13 +++++++++++++
 2 files changed, 14 insertions(+)

Comments

Mike Snitzer July 18, 2019, 2:52 p.m. UTC | #1
On Wed, Jul 17 2019 at 11:25pm -0400,
Ming Lei <ming.lei@redhat.com> wrote:

> dm-rq needs to free a request that has been dispatched but not
> completed by the underlying queue. However, the underlying queue may
> have allocated private data for this request in .queue_rq(), so dm-rq
> will leak that request-private data.

No, SCSI (and blk-mq) will leak.  DM doesn't know anything about the
internal memory SCSI uses.  That memory is a SCSI implementation detail.

Please fix header to properly reflect which layer is doing the leaking.

> Add a new callback, .cleanup_rq(), to fix the memory leak.
> 
> Another use case is freeing requests when the hctx becomes dead
> during CPU hotplug.
> 
> Cc: Ewan D. Milne <emilne@redhat.com>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Hannes Reinecke <hare@suse.com>
> Cc: Christoph Hellwig <hch@lst.de>
> Cc: Mike Snitzer <snitzer@redhat.com>
> Cc: dm-devel@redhat.com
> Cc: <stable@vger.kernel.org>
> Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
>  drivers/md/dm-rq.c     |  1 +
>  include/linux/blk-mq.h | 13 +++++++++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
> index c9e44ac1f9a6..21d5c1784d0c 100644
> --- a/drivers/md/dm-rq.c
> +++ b/drivers/md/dm-rq.c
> @@ -408,6 +408,7 @@ static int map_request(struct dm_rq_target_io *tio)
>  		ret = dm_dispatch_clone_request(clone, rq);
>  		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
>  			blk_rq_unprep_clone(clone);
> +			blk_mq_cleanup_rq(clone);
>  			tio->ti->type->release_clone_rq(clone, &tio->info);
>  			tio->clone = NULL;
>  			return DM_MAPIO_REQUEUE;

Requiring upper layer driver (dm-rq) to explicitly call blk_mq_cleanup_rq() 
seems wrong.  In this instance tio->ti->type->release_clone_rq()
(dm-mpath's multipath_release_clone) calls blk_put_request().  Why can't
blk_put_request(), or blk_mq_free_request(), call blk_mq_cleanup_rq()?

Not looked at the cpu hotplug case you mention, but my naive thought is
it'd be pretty weird to also sprinkle a call to blk_mq_cleanup_rq() from
that specific "dead hctx" code path.

Mike
Ming Lei July 19, 2019, 1:35 a.m. UTC | #2
On Thu, Jul 18, 2019 at 10:52:01AM -0400, Mike Snitzer wrote:
> On Wed, Jul 17 2019 at 11:25pm -0400,
> Ming Lei <ming.lei@redhat.com> wrote:
> 
> > dm-rq needs to free a request that has been dispatched but not
> > completed by the underlying queue. However, the underlying queue may
> > have allocated private data for this request in .queue_rq(), so dm-rq
> > will leak that request-private data.
> 
> No, SCSI (and blk-mq) will leak.  DM doesn't know anything about the
> internal memory SCSI uses.  That memory is a SCSI implementation detail.

It does have to do with dm-rq, which frees the request after
BLK_STS_*RESOURCE is returned from blk_insert_cloned_request(); in
this case dm-rq, as the user, has to release the request's private
data.

> 
> Please fix header to properly reflect which layer is doing the leaking.

Fine.

> 
> > Add a new callback, .cleanup_rq(), to fix the memory leak.
> > 
> > Another use case is freeing requests when the hctx becomes dead
> > during CPU hotplug.
> > 
> > Cc: Ewan D. Milne <emilne@redhat.com>
> > Cc: Bart Van Assche <bvanassche@acm.org>
> > Cc: Hannes Reinecke <hare@suse.com>
> > Cc: Christoph Hellwig <hch@lst.de>
> > Cc: Mike Snitzer <snitzer@redhat.com>
> > Cc: dm-devel@redhat.com
> > Cc: <stable@vger.kernel.org>
> > Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  drivers/md/dm-rq.c     |  1 +
> >  include/linux/blk-mq.h | 13 +++++++++++++
> >  2 files changed, 14 insertions(+)
> > 
> > diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
> > index c9e44ac1f9a6..21d5c1784d0c 100644
> > --- a/drivers/md/dm-rq.c
> > +++ b/drivers/md/dm-rq.c
> > @@ -408,6 +408,7 @@ static int map_request(struct dm_rq_target_io *tio)
> >  		ret = dm_dispatch_clone_request(clone, rq);
> >  		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
> >  			blk_rq_unprep_clone(clone);
> > +			blk_mq_cleanup_rq(clone);
> >  			tio->ti->type->release_clone_rq(clone, &tio->info);
> >  			tio->clone = NULL;
> >  			return DM_MAPIO_REQUEUE;
> 
> Requiring upper layer driver (dm-rq) to explicitly call blk_mq_cleanup_rq() 
> seems wrong.  In this instance tio->ti->type->release_clone_rq()
> (dm-mpath's multipath_release_clone) calls blk_put_request().  Why can't
> blk_put_request(), or blk_mq_free_request(), call blk_mq_cleanup_rq()?

I did think about doing it in blk_put_request(), but I wanted to
avoid the small cost in the generic fast path, given that freeing a
request after dispatch is very unusual; so far only nvme multipath
and dm-rq do it that way.

However, if no one objects to moving blk_mq_cleanup_rq() into
blk_put_request() or blk_mq_free_request(), I am fine with doing
that in V2.

> 
> Not looked at the cpu hotplug case you mention, but my naive thought is
> it'd be pretty weird to also sprinkle a call to blk_mq_cleanup_rq() from
> that specific "dead hctx" code path.

It isn't weird; it is exactly what NVMe multipath does, see
nvme_failover_req(). It is just that NVMe doesn't allocate request
private data.

Wrt. blk-mq CPU hotplug handling: after one hctx is dead, we can't
dispatch requests to that hctx any more. However, a request is bound
to its hctx at allocation, and that association can't be changed (or
only with great difficulty). Do you have any better idea for dealing
with this issue?


Thanks,
Ming
Mike Snitzer July 19, 2019, 12:26 p.m. UTC | #3
On Thu, Jul 18 2019 at  9:35pm -0400,
Ming Lei <ming.lei@redhat.com> wrote:

> On Thu, Jul 18, 2019 at 10:52:01AM -0400, Mike Snitzer wrote:
> > On Wed, Jul 17 2019 at 11:25pm -0400,
> > Ming Lei <ming.lei@redhat.com> wrote:
> > 
> > > dm-rq needs to free a request that has been dispatched but not
> > > completed by the underlying queue. However, the underlying queue may
> > > have allocated private data for this request in .queue_rq(), so dm-rq
> > > will leak that request-private data.
> > 
> > No, SCSI (and blk-mq) will leak.  DM doesn't know anything about the
> > internal memory SCSI uses.  That memory is a SCSI implementation detail.
> 
> It does have to do with dm-rq, which frees the request after
> BLK_STS_*RESOURCE is returned from blk_insert_cloned_request(); in
> this case dm-rq, as the user, has to release the request's private
> data.
> 
> > 
> > Please fix header to properly reflect which layer is doing the leaking.
> 
> Fine.
> 
> > 
> > > Add a new callback, .cleanup_rq(), to fix the memory leak.
> > > 
> > > Another use case is freeing requests when the hctx becomes dead
> > > during CPU hotplug.
> > > 
> > > Cc: Ewan D. Milne <emilne@redhat.com>
> > > Cc: Bart Van Assche <bvanassche@acm.org>
> > > Cc: Hannes Reinecke <hare@suse.com>
> > > Cc: Christoph Hellwig <hch@lst.de>
> > > Cc: Mike Snitzer <snitzer@redhat.com>
> > > Cc: dm-devel@redhat.com
> > > Cc: <stable@vger.kernel.org>
> > > Fixes: 396eaf21ee17 ("blk-mq: improve DM's blk-mq IO merging via blk_insert_cloned_request feedback")
> > > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > > ---
> > >  drivers/md/dm-rq.c     |  1 +
> > >  include/linux/blk-mq.h | 13 +++++++++++++
> > >  2 files changed, 14 insertions(+)
> > > 
> > > diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
> > > index c9e44ac1f9a6..21d5c1784d0c 100644
> > > --- a/drivers/md/dm-rq.c
> > > +++ b/drivers/md/dm-rq.c
> > > @@ -408,6 +408,7 @@ static int map_request(struct dm_rq_target_io *tio)
> > >  		ret = dm_dispatch_clone_request(clone, rq);
> > >  		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
> > >  			blk_rq_unprep_clone(clone);
> > > +			blk_mq_cleanup_rq(clone);
> > >  			tio->ti->type->release_clone_rq(clone, &tio->info);
> > >  			tio->clone = NULL;
> > >  			return DM_MAPIO_REQUEUE;
> > 
> > Requiring upper layer driver (dm-rq) to explicitly call blk_mq_cleanup_rq() 
> > seems wrong.  In this instance tio->ti->type->release_clone_rq()
> > (dm-mpath's multipath_release_clone) calls blk_put_request().  Why can't
> > blk_put_request(), or blk_mq_free_request(), call blk_mq_cleanup_rq()?
> 
> I did think about doing it in blk_put_request(), but I wanted to
> avoid the small cost in the generic fast path, given that freeing a
> request after dispatch is very unusual; so far only nvme multipath
> and dm-rq do it that way.
> 
> However, if no one objects to moving blk_mq_cleanup_rq() into
> blk_put_request() or blk_mq_free_request(), I am fine with doing
> that in V2.

I think it'd be a less fragile/nuanced way to extend the blk-mq
interface. Otherwise there is potential for other future drivers to
hit the same leak.

> > Not looked at the cpu hotplug case you mention, but my naive thought is
> > it'd be pretty weird to also sprinkle a call to blk_mq_cleanup_rq() from
> > that specific "dead hctx" code path.
> 
> It isn't weird; it is exactly what NVMe multipath does, see
> nvme_failover_req(). It is just that NVMe doesn't allocate request
> private data.
> 
> Wrt. blk-mq CPU hotplug handling: after one hctx is dead, we can't
> dispatch requests to that hctx any more. However, a request is bound
> to its hctx at allocation, and that association can't be changed (or
> only with great difficulty). Do you have any better idea for dealing
> with this issue?

No, as I prefaced before "Not looked at the cpu hotplug case you
mention".  As such I should've stayed silent ;)

But my point was we should hook off current interfaces rather than rely
on a new primary function call.

Mike

Patch

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index c9e44ac1f9a6..21d5c1784d0c 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -408,6 +408,7 @@ static int map_request(struct dm_rq_target_io *tio)
 		ret = dm_dispatch_clone_request(clone, rq);
 		if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE) {
 			blk_rq_unprep_clone(clone);
+			blk_mq_cleanup_rq(clone);
 			tio->ti->type->release_clone_rq(clone, &tio->info);
 			tio->clone = NULL;
 			return DM_MAPIO_REQUEUE;
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index 3fa1fa59f9b2..8a7808be5d0b 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -140,6 +140,7 @@ typedef int (poll_fn)(struct blk_mq_hw_ctx *);
 typedef int (map_queues_fn)(struct blk_mq_tag_set *set);
 typedef bool (busy_fn)(struct request_queue *);
 typedef void (complete_fn)(struct request *);
+typedef void (cleanup_rq_fn)(struct request *);
 
 
 struct blk_mq_ops {
@@ -200,6 +201,12 @@ struct blk_mq_ops {
 	/* Called from inside blk_get_request() */
 	void (*initialize_rq_fn)(struct request *rq);
 
+	/*
+	 * Called before freeing one request which isn't completed yet,
+	 * and usually for freeing the driver private part
+	 */
+	cleanup_rq_fn		*cleanup_rq;
+
 	/*
 	 * If set, returns whether or not this queue currently is busy
 	 */
@@ -366,4 +373,10 @@ static inline blk_qc_t request_to_qc_t(struct blk_mq_hw_ctx *hctx,
 			BLK_QC_T_INTERNAL;
 }
 
+static inline void blk_mq_cleanup_rq(struct request *rq)
+{
+	if (rq->q->mq_ops->cleanup_rq)
+		rq->q->mq_ops->cleanup_rq(rq);
+}
+
 #endif