dm mpath: switch IO scheduler of underlying paths to "none" [was: Re: BFQ + dm-mpath]

Message ID 20170908164129.GA48286@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Mike Snitzer

Commit Message

Mike Snitzer Sept. 8, 2017, 4:41 p.m. UTC
On Fri, Sep 08 2017 at  5:13P -0400,
Paolo Valente <paolo.valente@linaro.org> wrote:

> 
> > Il giorno 07 set 2017, alle ore 17:52, Mike Snitzer <snitzer@redhat.com> ha scritto:
> > 
> > On Tue, Sep 05 2017 at 10:15am -0400,
> > Bart Van Assche <Bart.VanAssche@wdc.com> wrote:
> > 
> >> On Tue, 2017-09-05 at 09:56 +0200, Paolo Valente wrote:
> >>> Ok, my suspects seem confirmed: the path dm_mq_queue_rq -> map_request
> >>> -> setup_clone -> blk_rq_prep_clone creates a cloned request without
> >>> invoking e->type->ops.mq.prepare_request for the target elevator e.
> >>> The cloned request is therefore not initialized for the scheduler, but
> >>> it is however inserted into the scheduler by
> >>> blk_mq_sched_insert_request.  This seems an error for any scheduler
> >>> that needs to initialize fields in the incoming request, or in general
> >>> to take some preliminary action.
> >>> 
> >>> Am I missing something here?
> >> 
> >> (+Mike Snitzer)
> >> 
> >> Mike, do you perhaps have the time to look into this memory leak?
> > 
> > It isn't a memory leak, it is missing initialization in the case of
> > cloned requests (if I'm understanding Paolo correctly).
> > 
> 
> Exactly!
> 
> > But cloned requests shouldn't be going through the scheduler.  Only the
> > original requests should.
> > 
> > Commit bd166ef18 ("blk-mq-sched: add framework for MQ capable IO
> > schedulers") switched from blk_mq_insert_request() to
> > blk_mq_sched_insert_request() and in doing so it opened dm-mpath up to
> > this bug.
> > 
> > Could be we need to take steps to ensure the block layer still
> > supports bypassing the elevator by using direct insertion?
> > 
> > Or blk_mq_sched_insert_request() needs updating to check if
> > e->type->ops.mq.prepare_request were actually performed and to fallback
> > to the !elevator case if not..
> > 
> > Not sure on the fix, but I can look closer if others (like Jens or
> > Paolo) don't have quicker suggestions.
> > 
> 
> No quick suggestion from me :(
> 
> Thanks for analyzing this bug,

Please see the following untested patch.  All
testing/review/comments/acks appreciated.

I elected to use elevator_change() rather than fiddle with adding a new
blk-mq elevator hook (e.g. ->request_prepared) to verify that each
blk-mq elevator enabled request did in fact get prepared.

Bart, please test this patch and reply with your review/feedback.

Jens, if you're OK with this solution please reply with your Ack and
I'll send it to Linus along with the rest of the handful of DM changes I
have for 4.14.

Thanks,
Mike

From: Mike Snitzer <snitzer@redhat.com>
Date: Fri, 8 Sep 2017 11:45:13 -0400
Subject: [PATCH] dm mpath: switch IO scheduler of underlying paths to "none"

A NULL pointer crash was reported for the case of having the BFQ IO
scheduler attached to the underlying paths of a DM multipath device.
The crash occurs in blk_mq_sched_insert_request()'s call to
e->type->ops.mq.insert_requests().

Paolo Valente correctly summarized why the crash occurs:
"the call chain (dm_mq_queue_rq -> map_request -> setup_clone ->
blk_rq_prep_clone) creates a cloned request without invoking
e->type->ops.mq.prepare_request for the target elevator e.  The cloned
request is therefore not initialized for the scheduler, but it is
however inserted into the scheduler by blk_mq_sched_insert_request."

All said, there is no reason for IO scheduling in the underlying paths
because the top-level DM multipath request_queue handles all IO
scheduling of the original requests issued to the multipath device.  The
multipath device's clones of the original requests are then just inserted
directly into the underlying path's dispatch queue(s).

Commit bd166ef18 ("blk-mq-sched: add framework for MQ capable IO
schedulers") switched blk_insert_cloned_request() from using
blk_mq_insert_request() to blk_mq_sched_insert_request(), which
incorrectly added elevator machinery into a call chain that isn't
supposed to have any.

To fix this, DM multipath now explicitly removes the IO scheduler from
all underlying paths during multipath device initialization.  To do so,
elevator_change() is reinstated by reverting commit c033269490 ("block:
Remove elevator_change()").

Fixes: bd166ef18 ("blk-mq-sched: add framework for MQ capable IO schedulers")
Reported-by: Bart Van Assche <Bart.VanAssche@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
---
 block/elevator.c         | 13 +++++++++++++
 drivers/md/dm-mpath.c    | 14 ++++++++++++--
 include/linux/elevator.h |  1 +
 3 files changed, 26 insertions(+), 2 deletions(-)

Comments

Jens Axboe Sept. 8, 2017, 4:48 p.m. UTC | #1
On 09/08/2017 10:41 AM, Mike Snitzer wrote:
> [...]
> 
> Please see the following untested patch.  All
> testing/review/comments/acks appreciated.
> 
> I elected to use elevator_change() rather than fiddle with adding a new
> blk-mq elevator hook (e.g. ->request_prepared) to verify that each
> blk-mq elevator enabled request did in fact get prepared.
> 
> Bart, please test this patch and reply with your review/feedback.
> 
> Jens, if you're OK with this solution please reply with your Ack and
> I'll send it to Linus along with the rest of the handful of DM changes I
> have for 4.14.

I am not - we used to have this elevator change functionality from
inside the kernel, and finally got rid of it when certain drivers killed
it. I don't want to be bringing it back.

Sounds like we have two issues here. One is that we run into issues with
stacking IO schedulers, and the other is that we'd rather not have
multiple schedulers in play for a stacked setup.

Maybe it'd be cleaner to have the dm-mq side of things not insert
through the scheduler, but rather just FIFO on the target end?
Mike Snitzer Sept. 8, 2017, 5:07 p.m. UTC | #2
On Fri, Sep 08 2017 at 12:48pm -0400,
Jens Axboe <axboe@kernel.dk> wrote:

> On 09/08/2017 10:41 AM, Mike Snitzer wrote:
> > [...]
> > 
> > Please see the following untested patch.  All
> > testing/review/comments/acks appreciated.
> > 
> > I elected to use elevator_change() rather than fiddle with adding a new
> > blk-mq elevator hook (e.g. ->request_prepared) to verify that each
> > blk-mq elevator enabled request did in fact get prepared.
> > 
> > Bart, please test this patch and reply with your review/feedback.
> > 
> > Jens, if you're OK with this solution please reply with your Ack and
> > I'll send it to Linus along with the rest of the handful of DM changes I
> > have for 4.14.
> 
> I am not - we used to have this elevator change functionality from
> inside the kernel, and finally got rid of it when certain drivers killed
> it. I don't want to be bringing it back.

Fine.

> Sounds like we have two issues here. One is that we run into issues with
> stacking IO schedulers, and the other is that we'd rather not have
> multiple schedulers in play for a stacked setup.
> 
> Maybe it'd be cleaner to have the dm-mq side of things not insert
> through the scheduler, but rather just FIFO on the target end?

That was how blk_insert_cloned_request() was before.  From the patch
header (you may have missed it):

"Commit bd166ef18 ("blk-mq-sched: add framework for MQ capable IO
schedulers") switched blk_insert_cloned_request() from using
blk_mq_insert_request() to blk_mq_sched_insert_request(), which
incorrectly added elevator machinery into a call chain that isn't
supposed to have any."

So shouldn't blk_insert_cloned_request() be made to _not_ use
blk_mq_sched_insert_request()?  We'd need a new block interface
established, or equivalent open-coded in blk_insert_cloned_request(),
to handle direct dispatch to an mq request_queue's queue(s).

Mike

Mike Snitzer Sept. 8, 2017, 7:58 p.m. UTC | #3
On Fri, Sep 08 2017 at  1:07pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:

> On Fri, Sep 08 2017 at 12:48pm -0400,
> Jens Axboe <axboe@kernel.dk> wrote:
> 
> > > [...]
> > 
> > I am not - we used to have this elevator change functionality from
> > inside the kernel, and finally got rid of it when certain drivers killed
> > it. I don't want to be bringing it back.
> 
> Fine.

BTW, while I conceded "Fine": I think your justification for not
reintroducing elevator_change() lacks substance.  What is inherently
problematic about elevator_change()?

Having an elevator attached to a DM multipath device's underlying path's
request_queue just asks for trouble (especially given the blk-mq
elevator interface).

Please own this issue as a regression and help me arrive at a timely way
forward.

Thanks,
Mike

Jens Axboe Sept. 8, 2017, 8:28 p.m. UTC | #4
On 09/08/2017 01:58 PM, Mike Snitzer wrote:
> On Fri, Sep 08 2017 at  1:07pm -0400,
> Mike Snitzer <snitzer@redhat.com> wrote:
> 
>> On Fri, Sep 08 2017 at 12:48pm -0400,
>> Jens Axboe <axboe@kernel.dk> wrote:
>>
>>>> [...]
>>>
>>> I am not - we used to have this elevator change functionality from
>>> inside the kernel, and finally got rid of it when certain drivers killed
>>> it. I don't want to be bringing it back.
>>
>> Fine.
> 
> BTW, while I conceded "Fine": I think your justification for not
> reintroducing elevator_change() lacks substance.  What is inherently
> problematic about elevator_change()?

Because no in-kernel users should be mucking with the IO scheduler. Adding
this back is just an excuse for drivers to start doing it again, which
generally happens because whatever vendor's driver team tests some synthetic
benchmark and decides that X is better than the default of Y. So we're not
going back to that.

> Having an elevator attached to a DM multipath device's underlying path's
> request_queue just asks for trouble (especially given the blk-mq
> elevator interface).
> 
> Please own this issue as a regression and help me arrive at a timely way
> forward.

I'm trying - I made suggestions on how we can proceed: we can have a way
to insert into hctx->dispatch without bothering the IO scheduler. I'm
open to other suggestions as well, just not open to exporting an
interface to change IO schedulers from inside the kernel.
Jens Axboe Sept. 8, 2017, 9:50 p.m. UTC | #5
On 09/08/2017 03:42 PM, Mike Snitzer wrote:
> diff --git a/block/blk-core.c b/block/blk-core.c
> index d709c0e..7a06b2b 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -2342,7 +2342,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
>  	if (q->mq_ops) {
>  		if (blk_queue_io_stat(q))
>  			blk_account_io_start(rq, true);
> -		blk_mq_sched_insert_request(rq, false, true, false, false);
> +		blk_mq_insert_request(rq);
>  		return BLK_STS_OK;
>  	}

I think this is fine, since only dm uses this function. Would be nice to
have some check though, to ensure it doesn't get misused in the future.

> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 3f18cff..5c5bb3f 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1401,6 +1401,24 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
>  	blk_mq_hctx_mark_pending(hctx, ctx);
>  }
>  
> +static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
> +				   struct blk_mq_ctx *ctx,
> +				   struct request *rq)
> +{
> +	spin_lock(&ctx->lock);
> +	__blk_mq_insert_request(hctx, rq, false);
> +	spin_unlock(&ctx->lock);
> +}

Any particular reason it isn't just added to the dispatch queue?

> +void blk_mq_insert_request(struct request *rq)
> +{
> +	struct blk_mq_ctx *ctx = rq->mq_ctx;
> +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
> +
> +	blk_mq_queue_io(hctx, ctx, rq);
> +	blk_mq_run_hw_queue(hctx, false);
> +}

Would probably be cleaner as blk_mq_insert_and_run_request() or
something, to make sure it's understood that it also runs the queue.
Mike Snitzer Sept. 8, 2017, 10:03 p.m. UTC | #6
On Fri, Sep 08 2017 at  5:50pm -0400,
Jens Axboe <axboe@kernel.dk> wrote:

> On 09/08/2017 03:42 PM, Mike Snitzer wrote:
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index d709c0e..7a06b2b 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -2342,7 +2342,7 @@ blk_status_t blk_insert_cloned_request(struct request_queue *q, struct request *
> >  	if (q->mq_ops) {
> >  		if (blk_queue_io_stat(q))
> >  			blk_account_io_start(rq, true);
> > -		blk_mq_sched_insert_request(rq, false, true, false, false);
> > +		blk_mq_insert_request(rq);
> >  		return BLK_STS_OK;
> >  	}
> 
> I think this is fine, since only dm uses this function. Would be nice to
> have some check though, to ensure it doesn't get misused in the future.

Not sure what kind of check you're thinking of.

> 
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index 3f18cff..5c5bb3f 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -1401,6 +1401,24 @@ void __blk_mq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq,
> >  	blk_mq_hctx_mark_pending(hctx, ctx);
> >  }
> >  
> > +static inline void blk_mq_queue_io(struct blk_mq_hw_ctx *hctx,
> > +				   struct blk_mq_ctx *ctx,
> > +				   struct request *rq)
> > +{
> > +	spin_lock(&ctx->lock);
> > +	__blk_mq_insert_request(hctx, rq, false);
> > +	spin_unlock(&ctx->lock);
> > +}
> 
> Any particular reason it isn't just added to the dispatch queue?

I just started from the blk_mq_insert_request() that was removed as part
of commit bd166ef18 and then simplified it by reusing blk_mq_queue_io()
rather than open-coding it again.  So I moved blk_mq_queue_io() higher
in the file and re-used it.

Is there more efficiency associated with adding directly to the dispatch
queue?  If so then I suppose it is worth it!  But doing it myself would
take a bit more effort (to review code and expand my horizons, but that
is fine if that is what you'd prefer to see).
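
For comparison, and purely as a sketch: the helper quoted above queues the
clone on the per-CPU software queue (ctx->rq_list, under ctx->lock) via
__blk_mq_insert_request(), while Jens' suggestion targets the per-hctx
dispatch list, which blk-mq drains ahead of whatever the attached elevator
would hand out.  Taking that suggestion, the insertion step would become
roughly:

	/* illustrative only: queue on the hctx dispatch list instead of the ctx */
	spin_lock(&hctx->lock);
	list_add_tail(&rq->queuelist, &hctx->dispatch);
	spin_unlock(&hctx->lock);
	blk_mq_run_hw_queue(hctx, false);

hctx, rq and the locking come from the quoted patch and existing blk-mq code;
the exact form here is an assumption, not something posted in the thread.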

> > +void blk_mq_insert_request(struct request *rq)
> > +{
> > +	struct blk_mq_ctx *ctx = rq->mq_ctx;
> > +	struct blk_mq_hw_ctx *hctx = blk_mq_map_queue(rq->q, ctx->cpu);
> > +
> > +	blk_mq_queue_io(hctx, ctx, rq);
> > +	blk_mq_run_hw_queue(hctx, false);
> > +}
> 
> Would probably be cleaner as blk_mq_insert_and_run_request() or
> something, to make sure it's understood that it also runs the queue.

That's fine.  Feel free to tweak this patch however you like!  But if
you'd like me to completely own driving this patch forward, I'll fold
this into v2.

Mike

Patch

diff --git a/block/elevator.c b/block/elevator.c
index 4bb2f0c..a5d9639 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -1084,6 +1084,19 @@  static int __elevator_change(struct request_queue *q, const char *name)
 	return elevator_switch(q, e);
 }
 
+int elevator_change(struct request_queue *q, const char *name)
+{
+	int ret;
+
+	/* Protect q->elevator from elevator_init() */
+	mutex_lock(&q->sysfs_lock);
+	ret = __elevator_change(q, name);
+	mutex_unlock(&q->sysfs_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(elevator_change);
+
 static inline bool elv_support_iosched(struct request_queue *q)
 {
 	if (q->mq_ops && q->tag_set && (q->tag_set->flags &
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index bf280a9..de046b0 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -25,6 +25,7 @@ 
 #include <scsi/scsi_dh.h>
 #include <linux/atomic.h>
 #include <linux/blk-mq.h>
+#include <linux/elevator.h>
 
 #define DM_MSG_PREFIX "multipath"
 #define DM_PG_INIT_DELAY_MSECS 2000
@@ -757,8 +758,17 @@  static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
 		goto bad;
 	}
 
-	if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags) || m->hw_handler_name)
-		q = bdev_get_queue(p->path.dev->bdev);
+	q = bdev_get_queue(p->path.dev->bdev);
+
+	/*
+	 * The underlying path's IO scheduler is _not_ used because all
+	 * scheduling is done by the top-level multipath request_queue.
+	 */
+	if (elevator_change(q, "none")) {
+		ti->error = "error switching underlying path's IO scheduler to 'none'";
+		dm_put_device(ti, p->path.dev);
+		goto bad;
+	}
 
 	if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags)) {
 retain:
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 5bc8f86..fe24004 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -220,6 +220,7 @@  extern ssize_t elv_iosched_store(struct request_queue *, const char *, size_t);
 
 extern int elevator_init(struct request_queue *, char *);
 extern void elevator_exit(struct request_queue *, struct elevator_queue *);
+extern int elevator_change(struct request_queue *, const char *);
 extern bool elv_bio_merge_ok(struct request *, struct bio *);
 extern struct elevator_queue *elevator_alloc(struct request_queue *,
 					struct elevator_type *);