
[RFC,v2] dm mpath: add a queue_if_no_path timeout

Message ID 20131018225350.GB7553@redhat.com (mailing list archive)
State Rejected, archived
Delegated to: Mike Snitzer

Commit Message

Mike Snitzer Oct. 18, 2013, 10:53 p.m. UTC
On Fri, Oct 18 2013 at  4:51pm -0400,
Frank Mayhar <fmayhar@google.com> wrote:

> On Thu, 2013-10-17 at 17:13 -0400, Mike Snitzer wrote:
> > Cannot say that argument wins me over, but I will say that if you
> > intend to take the approach of having the kernel enforce a timeout,
> > please pursue the approach Hannes offered:
> > 
> > https://patchwork.kernel.org/patch/2953231/
> > 
> > It is much cleaner and if it works for your needs we can see about
> > getting a tested version upstream.
> 
> Unfortunately his patch doesn't work as-is; it turns out that it tries
> to set the timeout only if the target is request-based, but at the time
> it tries to set it, the table type hasn't yet been set.
> 
> I'm looking into fixing it.

Ouch, yeah, we can't access the DM device's queue from .ctr().
There were other issues with Hannes' RFC patch; it wouldn't compile.

Anyway, looks like we need a new target_type hook (e.g. .init_queue)
that is called from dm_init_request_based_queue().

Request-based DM only allows a single DM target per device so we don't
need the usual multi DM-target iterators.

But, unfortunately, at the time we call dm_init_request_based_queue()
the mapped_device isn't yet connected to the inactive table that is
being loaded (but the table is connected to the mapped_device).

In dm-ioctl.c:table_load(), the inactive table could be passed directly
into dm_setup_md_queue().

Please give the following revised patch a try; if it works we can clean
it up further (I think multipath_status needs updating, and we may also
want to constrain .init_queue to being called only when the target is a
singleton, which dm-mpath should be but isn't yet flagged as such).

It compiles, but I haven't tested it...

---
 drivers/md/dm-ioctl.c         |    2 +-
 drivers/md/dm-mpath.c         |   77 +++++++++++++++++++++++++++++++++++++++++
 drivers/md/dm.c               |   12 +++++--
 drivers/md/dm.h               |    2 +-
 include/linux/device-mapper.h |    4 ++
 5 files changed, 92 insertions(+), 5 deletions(-)



Comments

Mike Snitzer Oct. 30, 2013, 1:02 a.m. UTC | #1
On Fri, Oct 18 2013 at  6:53pm -0400,
Mike Snitzer <snitzer@redhat.com> wrote:

> [...]
> 
> Please give the following revised patch a try; if it works we can clean
> it up further.
> 
> It compiles, but I haven't tested it...

Frank,

Any interest in this or should I just table it for >= v3.14?

Frank Mayhar Oct. 30, 2013, 3:08 p.m. UTC | #2
On Tue, 2013-10-29 at 21:02 -0400, Mike Snitzer wrote:
> Any interest in this or should I just table it for >= v3.14?

Sorry, I've been busy putting out another fire.  Yes, there's definitely
still interest.  I grabbed your revised patch and tested with it.
Unfortunately the timeout doesn't actually fire when requests are queued
due to queue_if_no_path; IIRC the block request queue timeout logic
wasn't triggering.  I planned to look into it more deeply to figure out
why, but I had to spend all of last week fixing a nasty race and haven't
gotten back to it yet.
Mike Snitzer Oct. 30, 2013, 3:43 p.m. UTC | #3
On Wed, Oct 30 2013 at 11:08am -0400,
Frank Mayhar <fmayhar@google.com> wrote:

> [...] I grabbed your revised patch and tested with it.
> Unfortunately the timeout doesn't actually fire when requests are queued
> due to queue_if_no_path; IIRC the block request queue timeout logic
> wasn't triggering. [...]

OK, Hannes, any idea why this might be happening?  The patch in question
is here: https://patchwork.kernel.org/patch/3070391/

Frank Mayhar Oct. 30, 2013, 6:09 p.m. UTC | #4
On Wed, 2013-10-30 at 11:43 -0400, Mike Snitzer wrote:
> [...]
> 
> OK, Hannes, any idea why this might be happening?  The patch in question
> is here: https://patchwork.kernel.org/patch/3070391/

I got to this today, and so far the most interesting thing I see is that
the cloned request queued in multipath has no request queue associated
with it at the time it's queued; a printk reveals:

[  517.610042] map_io: queueing rq ffff8801150e0070 q           (null)

When it's eventually dequeued, it gets a queue from the destination
device (in the pgpath) via bdev_get_queue().

Because of this, and from just looking at the code, blk_start_request()
(and therefore blk_add_timer()) is never called for those requests, so
the timeout never has a chance to fire.

Does this make sense?  Or am I totally off-base?
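
For reference, the 3.x block layer arms the per-request timeout only from
blk_start_request(); a simplified sketch of that path (paraphrased, not
verbatim kernel source) shows why clones parked on multipath's internal
list never get a timer:

static void blk_start_request(struct request *req)
{
        /* Remove the request from the request queue... */
        blk_dequeue_request(req);

        /*
         * ...and arm q->rq_timeout for it.  A clone sitting on
         * multipath's internal queued_ios list never passes through
         * here, so no timer is ever armed for it.
         */
        blk_add_timer(req);
}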
Junichi Nomura Oct. 31, 2013, 9:36 a.m. UTC | #5
On 10/31/13 03:09, Frank Mayhar wrote:
> [...]
> 
> Because of this, and from just looking at the code, blk_start_request()
> (and therefore blk_add_timer()) is never called for those requests, so
> the timeout never has a chance to fire.
> 
> Does this make sense?  Or am I totally off-base?

Hi,

I haven't checked the above patch in detail, but there is a problem:
abort_if_no_path() treats "rq" as a clone request, which it isn't.
"rq" is the original request.

It isn't the correct fix, but just for testing purposes you can try
changing:
  info = dm_get_rq_mapinfo(rq);
to
  info = dm_get_rq_mapinfo(rq->special);
and see what happens.
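
For context: in request-based dm of this era, dm's prep_rq_fn stores the
clone in the original request's ->special, so the suggested change makes
abort_if_no_path() read the map_info from the clone, where it actually
lives.  A sketch, assuming that layout:

        /* rq is the *original* request; request-based dm stashes its
         * clone (once one has been made) in rq->special. */
        struct request *clone = rq->special;
        union map_info *info = clone ? dm_get_rq_mapinfo(clone) : NULL;

        if (!info || !info->ptr)
                return BLK_EH_NOT_HANDLED;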
Frank Mayhar Oct. 31, 2013, 2:16 p.m. UTC | #6
On Thu, 2013-10-31 at 09:36 +0000, Junichi Nomura wrote:
> [...]
> 
> It isn't the correct fix, but just for testing purposes you can try
> changing:
>   info = dm_get_rq_mapinfo(rq);
> to
>   info = dm_get_rq_mapinfo(rq->special);
> and see what happens.

Well, at the moment this is kind of moot since abort_if_no_path() isn't
being called.  But, regardless, don't we want to time out the clone
request?  That is, after all, what is being queued in map_io().
Unfortunately the clones don't appear to be associated with a request
queue; they're just put on multipath's internal queue.
Alasdair G Kergon Oct. 31, 2013, 2:31 p.m. UTC | #7
On Thu, Oct 31, 2013 at 07:16:51AM -0700, Frank Mayhar wrote:
> Unfortunately the clones don't appear to be associated with a request
> queue; they're just put on multipath's internal queue.

(And also remember to test table swap/push back.)

Alasdair

Hannes Reinecke Oct. 31, 2013, 2:59 p.m. UTC | #8
On 10/30/2013 04:43 PM, Mike Snitzer wrote:
> [...]
> 
> OK, Hannes, any idea why this might be happening?  The patch in question
> is here: https://patchwork.kernel.org/patch/3070391/
Not yet; I'm currently on vacation, but I'll be looking into it on Monday.

Cheers,

Hannes
Frank Mayhar Oct. 31, 2013, 5:17 p.m. UTC | #9
On Thu, 2013-10-31 at 14:31 +0000, Alasdair G Kergon wrote:
> On Thu, Oct 31, 2013 at 07:16:51AM -0700, Frank Mayhar wrote:
> > Unfortunately the clones don't appear to be associated with a request
> > queue; they're just put on multipath's internal queue.
> 
> (And also remember to test table swap/push back.)

That brings up something I wanted to ask.  I've dug through the code and
this particular point isn't clear to me: how are queued I/Os handled when
switching tables?  I see nothing in the table_load() path that deals with
this.  I'm guessing that the requests are pushed back to the block layer
and are later resubmitted and requeued on the new multipath queue, but I
don't see how that works.

Code references would be very welcome.
Junichi Nomura Nov. 1, 2013, 1:23 a.m. UTC | #10
On 11/01/13 02:17, Frank Mayhar wrote:
> On Thu, 2013-10-31 at 14:31 +0000, Alasdair G Kergon wrote:
>> (And also remember to test table swap/push back.)
> 
> [...] how are queued I/Os handled when switching tables?  I see nothing
> in the table_load() path that deals with this.
> 
> Code references would be very welcome.

The relevant pieces of code are:
  - multipath_presuspend() temporarily disables "queue_if_no_path"
  - during the suspend process, __must_push_back() catches (otherwise
    failing) requests and requeues them back to the block layer queue
  - upon resume, dm starts processing requests in the block layer queue
    as usual

Hope this helps.
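
Roughly, the 3.x-era dm-mpath.c pieces being referred to (paraphrased
from memory, not verbatim):

/*
 * Before a table swap, presuspend disables queue_if_no_path, saving
 * the old value so it can be restored on resume.
 */
static void multipath_presuspend(struct dm_target *ti)
{
        struct multipath *m = ti->private;

        queue_if_no_path(m, 0, 1);
}

/*
 * During a noflush suspend, requests that would otherwise fail are
 * pushed back to the block layer queue instead, to be resubmitted
 * against the new table after resume.
 */
static int __must_push_back(struct multipath *m)
{
        return (m->queue_if_no_path != m->saved_queue_if_no_path &&
                dm_noflush_suspending(m->ti));
}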
Junichi Nomura Nov. 1, 2013, 1:58 a.m. UTC | #11
On 10/31/13 23:16, Frank Mayhar wrote:
> [...]
> 
> Well, at the moment this is kind of moot since abort_if_no_path() isn't
> being called.  But, regardless, don't we want to time out the clone
> request?  That is, after all, what is being queued in map_io().
> Unfortunately the clones don't appear to be associated with a request
> queue; they're just put on multipath's internal queue.

Hmm, "isn't being called" is strange.
If the clone is on multipath's internal queue, the original request
should already have been "started" from the request queue's point of
view, and the timeout should fire.

As for the "clone or original" question: if you are to use the block
timer, you have to use it on the original request (and then perhaps let
the handler find its clone and kill it).  That's because, as you've
already seen, a clone is not associated with any request queue while it
is queued on multipath's internal queue, and once it is associated, it
belongs to the lower device's queue.
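
A minimal sketch of a handler along those lines (hypothetical and
untested), assuming the timeout is registered on dm's own queue so that
rq is the original request and rq->special is its clone:

static enum blk_eh_timer_return abort_if_no_path(struct request *rq)
{
        struct request *clone = rq->special;    /* set by dm's prep_rq_fn */
        union map_info *info = clone ? dm_get_rq_mapinfo(clone) : NULL;
        struct dm_mpath_io *mpio;

        if (!info || !info->ptr)
                return BLK_EH_RESET_TIMER;      /* not mapped yet; keep waiting */

        mpio = info->ptr;
        /*
         * ...then proceed as in the patch above, except that under
         * m->lock it is the *clone* (not rq) that must be deleted from
         * m->queued_ios before freeing the mpio and killing the
         * original with dm_kill_unmapped_request(rq, -ETIMEDOUT).
         */
        return BLK_EH_NOT_HANDLED;
}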

Patch

diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index afe0814..74d1ab4 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1289,7 +1289,7 @@  static int table_load(struct dm_ioctl *param, size_t param_size)
 	}
 
 	/* setup md->queue to reflect md's type (may block) */
-	r = dm_setup_md_queue(md);
+	r = dm_setup_md_queue(md, t);
 	if (r) {
 		DMWARN("unable to set up device queue for new table.");
 		goto err_unlock_md_type;
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index de570a5..2c3e427 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -105,6 +105,8 @@  struct multipath {
 	mempool_t *mpio_pool;
 
 	struct mutex work_mutex;
+
+	unsigned no_path_timeout;
 };
 
 /*
@@ -444,6 +446,61 @@  static int queue_if_no_path(struct multipath *m, unsigned queue_if_no_path,
 	return 0;
 }
 
+/*
+ * Block timeout callback, called from the block layer
+ *
+ * request_queue lock is held on entry.
+ *
+ * Return values:
+ * BLK_EH_RESET_TIMER if the request should be left running
+ * BLK_EH_NOT_HANDLED if the request is handled or terminated
+ *                    by the driver.
+ */
+enum blk_eh_timer_return abort_if_no_path(struct request *rq)
+{
+	union map_info *info;
+	struct dm_mpath_io *mpio;
+	struct multipath *m;
+	unsigned long flags;
+	int rc = BLK_EH_RESET_TIMER;
+	int flush_ios = 0;
+
+	info = dm_get_rq_mapinfo(rq);
+	if (!info || !info->ptr)
+		return BLK_EH_NOT_HANDLED;
+
+	mpio = info->ptr;
+	m = mpio->pgpath->pg->m;
+	/*
+	 * Only abort request if:
+	 * - queued_ios is not empty
+	 *   (protect against races with process_queued_ios)
+	 * - queue_io is not set
+	 * - no valid paths are found
+	 */
+	spin_lock_irqsave(&m->lock, flags);
+	if (!list_empty(&m->queued_ios) &&
+	    !m->queue_io &&
+	    !m->nr_valid_paths) {
+		list_del_init(&rq->queuelist);
+		m->queue_size--;
+		m->queue_if_no_path = 0;
+		if (m->queue_size)
+			flush_ios = 1;
+		rc = BLK_EH_NOT_HANDLED;
+	}
+	spin_unlock_irqrestore(&m->lock, flags);
+
+	if (rc == BLK_EH_NOT_HANDLED) {
+		mempool_free(mpio, m->mpio_pool);
+		dm_kill_unmapped_request(rq, -ETIMEDOUT);
+	}
+	if (flush_ios)
+		queue_work(kmultipathd, &m->process_queued_ios);
+
+	return rc;
+}
+
 /*-----------------------------------------------------------------
  * The multipath daemon is responsible for resubmitting queued ios.
  *---------------------------------------------------------------*/
@@ -790,6 +847,7 @@  static int parse_features(struct dm_arg_set *as, struct multipath *m)
 		{0, 6, "invalid number of feature args"},
 		{1, 50, "pg_init_retries must be between 1 and 50"},
 		{0, 60000, "pg_init_delay_msecs must be between 0 and 60000"},
+		{0, 65535, "no_path_timeout must be between 0 and 65535"},
 	};
 
 	r = dm_read_arg_group(_args, as, &argc, &ti->error);
@@ -827,6 +885,13 @@  static int parse_features(struct dm_arg_set *as, struct multipath *m)
 			continue;
 		}
 
+		if (!strcasecmp(arg_name, "no_path_timeout") &&
+		    (argc >= 1)) {
+			r = dm_read_arg(_args + 3, as, &m->no_path_timeout, &ti->error);
+			argc--;
+			continue;
+		}
+
 		ti->error = "Unrecognised multipath feature request";
 		r = -EINVAL;
 	} while (argc && !r);
@@ -1709,6 +1774,17 @@  out:
 	return busy;
 }
 
+static void multipath_init_queue(struct dm_target *ti,
+				 struct request_queue *q)
+{
+	struct multipath *m = ti->private;
+
+	if (m->no_path_timeout) {
+		blk_queue_rq_timed_out(q, abort_if_no_path);
+		blk_queue_rq_timeout(q, m->no_path_timeout * HZ);
+	}
+}
+
 /*-----------------------------------------------------------------
  * Module setup
  *---------------------------------------------------------------*/
@@ -1728,6 +1804,7 @@  static struct target_type multipath_target = {
 	.ioctl  = multipath_ioctl,
 	.iterate_devices = multipath_iterate_devices,
 	.busy = multipath_busy,
+	.init_queue = multipath_init_queue,
 };
 
 static int __init dm_multipath_init(void)
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index b3e26c7..ce87b8a 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2336,8 +2336,10 @@  EXPORT_SYMBOL_GPL(dm_get_queue_limits);
 /*
  * Fully initialize a request-based queue (->elevator, ->request_fn, etc).
  */
-static int dm_init_request_based_queue(struct mapped_device *md)
+static int dm_init_request_based_queue(struct mapped_device *md,
+				       struct dm_table *table)
 {
+	struct dm_target *ti = NULL;
 	struct request_queue *q = NULL;
 
 	if (md->queue->elevator)
@@ -2356,16 +2358,20 @@  static int dm_init_request_based_queue(struct mapped_device *md)
 
 	elv_register_queue(md->queue);
 
+	ti = dm_table_get_target(table, 0);
+	if (ti->type->init_queue)
+		ti->type->init_queue(ti, md->queue);
+
 	return 1;
 }
 
 /*
  * Setup the DM device's queue based on md's type
  */
-int dm_setup_md_queue(struct mapped_device *md)
+int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 {
 	if ((dm_get_md_type(md) == DM_TYPE_REQUEST_BASED) &&
-	    !dm_init_request_based_queue(md)) {
+	    !dm_init_request_based_queue(md, t)) {
 		DMWARN("Cannot initialize queue for request-based mapped device");
 		return -EINVAL;
 	}
diff --git a/drivers/md/dm.h b/drivers/md/dm.h
index 1d1ad7b..55cb207 100644
--- a/drivers/md/dm.h
+++ b/drivers/md/dm.h
@@ -83,7 +83,7 @@  void dm_set_md_type(struct mapped_device *md, unsigned type);
 unsigned dm_get_md_type(struct mapped_device *md);
 struct target_type *dm_get_immutable_target_type(struct mapped_device *md);
 
-int dm_setup_md_queue(struct mapped_device *md);
+int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t);
 
 /*
  * To check the return value from dm_table_find_target().
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index ed419c6..650c575 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -107,6 +107,9 @@  typedef int (*dm_iterate_devices_fn) (struct dm_target *ti,
 typedef void (*dm_io_hints_fn) (struct dm_target *ti,
 				struct queue_limits *limits);
 
+typedef void (*dm_init_queue_fn) (struct dm_target *ti,
+				  struct request_queue *q);
+
 /*
  * Returns:
  *    0: The target can handle the next I/O immediately.
@@ -162,6 +165,7 @@  struct target_type {
 	dm_busy_fn busy;
 	dm_iterate_devices_fn iterate_devices;
 	dm_io_hints_fn io_hints;
+	dm_init_queue_fn init_queue;
 
 	/* For internal device-mapper use. */
 	struct list_head list;