diff mbox

[RFC] blk-mq: fixup RESTART when queue becomes idle

Message ID 20180118024124.8079-1-ming.lei@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Mike Snitzer
Headers show

Commit Message

Ming Lei Jan. 18, 2018, 2:41 a.m. UTC
BLK_STS_RESOURCE can be returned from driver when any resource
is running out of. And the resource may not be related with tags,
such as kmalloc(GFP_ATOMIC), when queue is idle under this kind of
BLK_STS_RESOURCE, restart can't work any more, then IO hang may
be caused.

Most of drivers may call kmalloc(GFP_ATOMIC) in IO path, and almost
all returns BLK_STS_RESOURCE under this situation. But for dm-mpath,
it may be triggered a bit easier since the request pool of underlying
queue may be consumed up much easier. But in reality, it is still not
easy to trigger it. I run all kinds of test on dm-mpath/scsi-debug
with all kinds of scsi_debug parameters, can't trigger this issue
at all. But finally it is triggered in Bart's SRP test, which seems
made by genius, :-)

This patch deals with this situation by running the queue again when
queue is found idle in timeout handler.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
---

Another approach is to do the check after BLK_STS_RESOURCE is returned
from .queue_rq() and BLK_MQ_S_SCHED_RESTART is set, that way may introduce
a bit cost in hot path, and it was V1 of this patch actually, please see
that in the following link:

	https://github.com/ming1/linux/commit/68a66900f3647ea6751aab2848b1e5eef508feaa

Or other better ways?

 block/blk-mq.c | 83 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 1 deletion(-)

Comments

Bart Van Assche Jan. 18, 2018, 4:50 p.m. UTC | #1
On 01/17/18 18:41, Ming Lei wrote:
> BLK_STS_RESOURCE can be returned from driver when any resource
> is running out of. And the resource may not be related with tags,
> such as kmalloc(GFP_ATOMIC), when queue is idle under this kind of
> BLK_STS_RESOURCE, restart can't work any more, then IO hang may
> be caused.
> 
> Most of drivers may call kmalloc(GFP_ATOMIC) in IO path, and almost
> all returns BLK_STS_RESOURCE under this situation. But for dm-mpath,
> it may be triggered a bit easier since the request pool of underlying
> queue may be consumed up much easier. But in reality, it is still not
> easy to trigger it. I run all kinds of test on dm-mpath/scsi-debug
> with all kinds of scsi_debug parameters, can't trigger this issue
> at all. But finally it is triggered in Bart's SRP test, which seems
> made by genius, :-)
> 
> [ ... ]
 >
>   static void blk_mq_timeout_work(struct work_struct *work)
>   {
>   	struct request_queue *q =
> @@ -966,8 +1045,10 @@ static void blk_mq_timeout_work(struct work_struct *work)
>   		 */
>   		queue_for_each_hw_ctx(q, hctx, i) {
>   			/* the hctx may be unmapped, so check it here */
> -			if (blk_mq_hw_queue_mapped(hctx))
> +			if (blk_mq_hw_queue_mapped(hctx)) {
>   				blk_mq_tag_idle(hctx);
> +				blk_mq_fixup_restart(hctx);
> +			}
>   		}
>   	}
>   	blk_queue_exit(q);

Hello Ming,

My comments about the above are as follows:
- It can take up to q->rq_timeout jiffies after a .queue_rq()
   implementation returned BLK_STS_RESOURCE before blk_mq_timeout_work()
   gets called. However, it can happen that only a few milliseconds after
   .queue_rq() returned BLK_STS_RESOURCE that the condition that caused
   it to return BLK_STS_RESOURCE gets cleared. So the above approach can
   result in long delays during which it will seem like the queue got
   stuck. Additionally, I think that the block driver should decide how
   long it takes before a queue is rerun and not the block layer core.
- The lockup that I reported only occurs with the dm driver but not any
   other block driver. So why to modify the block layer core since this
   can be fixed by modifying the dm driver?
- A much simpler fix and a fix that is known to work exists, namely
   inserting a blk_mq_delay_run_hw_queue() call in the dm driver.

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Mike Snitzer Jan. 18, 2018, 5:03 p.m. UTC | #2
On Thu, Jan 18 2018 at 11:50am -0500,
Bart Van Assche <bart.vanassche@wdc.com> wrote:

> On 01/17/18 18:41, Ming Lei wrote:
> >BLK_STS_RESOURCE can be returned from driver when any resource
> >is running out of. And the resource may not be related with tags,
> >such as kmalloc(GFP_ATOMIC), when queue is idle under this kind of
> >BLK_STS_RESOURCE, restart can't work any more, then IO hang may
> >be caused.
> >
> >Most of drivers may call kmalloc(GFP_ATOMIC) in IO path, and almost
> >all returns BLK_STS_RESOURCE under this situation. But for dm-mpath,
> >it may be triggered a bit easier since the request pool of underlying
> >queue may be consumed up much easier. But in reality, it is still not
> >easy to trigger it. I run all kinds of test on dm-mpath/scsi-debug
> >with all kinds of scsi_debug parameters, can't trigger this issue
> >at all. But finally it is triggered in Bart's SRP test, which seems
> >made by genius, :-)
> >
> >[ ... ]
> >
> >  static void blk_mq_timeout_work(struct work_struct *work)
> >  {
> >  	struct request_queue *q =
> >@@ -966,8 +1045,10 @@ static void blk_mq_timeout_work(struct work_struct *work)
> >  		 */
> >  		queue_for_each_hw_ctx(q, hctx, i) {
> >  			/* the hctx may be unmapped, so check it here */
> >-			if (blk_mq_hw_queue_mapped(hctx))
> >+			if (blk_mq_hw_queue_mapped(hctx)) {
> >  				blk_mq_tag_idle(hctx);
> >+				blk_mq_fixup_restart(hctx);
> >+			}
> >  		}
> >  	}
> >  	blk_queue_exit(q);
> 
> Hello Ming,
> 
> My comments about the above are as follows:
> - It can take up to q->rq_timeout jiffies after a .queue_rq()
>   implementation returned BLK_STS_RESOURCE before blk_mq_timeout_work()
>   gets called. However, it can happen that only a few milliseconds after
>   .queue_rq() returned BLK_STS_RESOURCE that the condition that caused
>   it to return BLK_STS_RESOURCE gets cleared. So the above approach can
>   result in long delays during which it will seem like the queue got
>   stuck. Additionally, I think that the block driver should decide how
>   long it takes before a queue is rerun and not the block layer core.

So configure q->rq_timeout to be shorter?  Which is configurable though
blk_mq_tag_set's 'timeout' member.  It apparently defaults to 30 * HZ.

That is the problem with timeouts, there is generally no one size fits
all.

> - The lockup that I reported only occurs with the dm driver but not any
>   other block driver. So why to modify the block layer core since this
>   can be fixed by modifying the dm driver?

Hard to know it is only DM's blk-mq that is impacted.  That is the only
blk-mq driver that you're testing like this (that is also able to handle
faults, etc).

> - A much simpler fix and a fix that is known to work exists, namely
>   inserting a blk_mq_delay_run_hw_queue() call in the dm driver.

Because your "much simpler" fix actively hurts performance, as is
detailed in this header:
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=ec3eaf9a673106f66606896aed6ddd20180b02ec

I'm not going to take your bandaid fix given it very much seems to be
papering over a real blk-mq issue.

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Bart Van Assche Jan. 18, 2018, 5:20 p.m. UTC | #3
On Thu, 2018-01-18 at 12:03 -0500, Mike Snitzer wrote:
> On Thu, Jan 18 2018 at 11:50am -0500,
> Bart Van Assche <bart.vanassche@wdc.com> wrote:
> > My comments about the above are as follows:
> > - It can take up to q->rq_timeout jiffies after a .queue_rq()
> >   implementation returned BLK_STS_RESOURCE before blk_mq_timeout_work()
> >   gets called. However, it can happen that only a few milliseconds after
> >   .queue_rq() returned BLK_STS_RESOURCE that the condition that caused
> >   it to return BLK_STS_RESOURCE gets cleared. So the above approach can
> >   result in long delays during which it will seem like the queue got
> >   stuck. Additionally, I think that the block driver should decide how
> >   long it takes before a queue is rerun and not the block layer core.
> 
> So configure q->rq_timeout to be shorter?  Which is configurable though
> blk_mq_tag_set's 'timeout' member.  It apparently defaults to 30 * HZ.
> 
> That is the problem with timeouts, there is generally no one size fits
> all.

Sorry but I think that would be wrong. The delay after which a queue is rerun
should not be coupled to the request timeout. These two should be independent.

> > - The lockup that I reported only occurs with the dm driver but not any
> >   other block driver. So why to modify the block layer core since this
> >   can be fixed by modifying the dm driver?
> 
> Hard to know it is only DM's blk-mq that is impacted.  That is the only
> blk-mq driver that you're testing like this (that is also able to handle
> faults, etc).

That's not correct. I'm also testing the SCSI core, which is one of the most
complicated block drivers.

> > - A much simpler fix and a fix that is known to work exists, namely
> >   inserting a blk_mq_delay_run_hw_queue() call in the dm driver.
> 
> Because your "much simpler" fix actively hurts performance, as is
> detailed in this header:
> https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=ec3eaf9a673106f66606896aed6ddd20180b02ec

We are close to the start of the merge window so I think it's better to fall
back to an old approach that is known to work than to keep a new approach
that is known not to work. Additionally, the performance issue you referred
to only affects IOPS and bandwidth more than 1% with the lpfc driver and that
is because the queue depth it supports is much lower than for other SCSI HBAs,
namely 3 instead of 64.

Thanks,

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Mike Snitzer Jan. 18, 2018, 6:30 p.m. UTC | #4
On Thu, Jan 18 2018 at 12:20pm -0500,
Bart Van Assche <Bart.VanAssche@wdc.com> wrote:

> On Thu, 2018-01-18 at 12:03 -0500, Mike Snitzer wrote:
> > On Thu, Jan 18 2018 at 11:50am -0500,
> > Bart Van Assche <bart.vanassche@wdc.com> wrote:
> > > My comments about the above are as follows:
> > > - It can take up to q->rq_timeout jiffies after a .queue_rq()
> > >   implementation returned BLK_STS_RESOURCE before blk_mq_timeout_work()
> > >   gets called. However, it can happen that only a few milliseconds after
> > >   .queue_rq() returned BLK_STS_RESOURCE that the condition that caused
> > >   it to return BLK_STS_RESOURCE gets cleared. So the above approach can
> > >   result in long delays during which it will seem like the queue got
> > >   stuck. Additionally, I think that the block driver should decide how
> > >   long it takes before a queue is rerun and not the block layer core.
> > 
> > So configure q->rq_timeout to be shorter?  Which is configurable though
> > blk_mq_tag_set's 'timeout' member.  It apparently defaults to 30 * HZ.
> > 
> > That is the problem with timeouts, there is generally no one size fits
> > all.
> 
> Sorry but I think that would be wrong. The delay after which a queue is rerun
> should not be coupled to the request timeout. These two should be independent.

That's fair.  Not saying I think that is a fix anyway.

> > > - The lockup that I reported only occurs with the dm driver but not any
> > >   other block driver. So why to modify the block layer core since this
> > >   can be fixed by modifying the dm driver?
> > 
> > Hard to know it is only DM's blk-mq that is impacted.  That is the only
> > blk-mq driver that you're testing like this (that is also able to handle
> > faults, etc).
> 
> That's not correct. I'm also testing the SCSI core, which is one of the most
> complicated block drivers.

OK, but SCSI mq is part of the problem here.  It is a snowflake that
has more exotic reasons for returning BLK_STS_RESOURCE.

> > > - A much simpler fix and a fix that is known to work exists, namely
> > >   inserting a blk_mq_delay_run_hw_queue() call in the dm driver.
> > 
> > Because your "much simpler" fix actively hurts performance, as is
> > detailed in this header:
> > https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=ec3eaf9a673106f66606896aed6ddd20180b02ec
> 
> We are close to the start of the merge window so I think it's better to fall
> back to an old approach that is known to work than to keep a new approach
> that is known not to work. Additionally, the performance issue you referred
> to only affects IOPS and bandwidth more than 1% with the lpfc driver and that
> is because the queue depth it supports is much lower than for other SCSI HBAs,
> namely 3 instead of 64.

1%!?  Where are you getting that number?  Ming has detailed more
significant performance gains than 1%.. and not just on lpfc (though you
keep seizing on lpfc because of the low queue_depth of 3).

This is all very tiresome.  I'm _really_ not interested in this debate
any more.  The specific case that causes the stall need to be identified
and a real fix needs to be developed.  Ming is doing a lot of that hard
work.  Please contribute or at least stop pleading for your hack to be
reintroduced.

If at the end of the 4.16 release we still don't have a handle on the
stall you're seeing I'll revisit this and likely revert to blindly
kicking the queue after an arbitrary delay.  But I'm willing to let this
issue get more time without papering over it.

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Bart Van Assche Jan. 18, 2018, 6:47 p.m. UTC | #5
On Thu, 2018-01-18 at 13:30 -0500, Mike Snitzer wrote:
> 1%!?  Where are you getting that number?  Ming has detailed more
> significant performance gains than 1%.. and not just on lpfc (though you
> keep seizing on lpfc because of the low queue_depth of 3).

That's what I derived from the numbers you posted for null_blk. If Ming has
posted performance results for other drivers than lpfc, please let me know
where I can find these. I have not yet seen these numbers.

> This is all very tiresome.

Yes, this is tiresome. It is very annoying to me that others keep introducing
so many regressions in such important parts of the kernel. It is also annoying
to me that I get blamed if I report a regression instead of seeing that the
regression gets fixed.

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Jens Axboe Jan. 18, 2018, 8:11 p.m. UTC | #6
On 1/18/18 11:47 AM, Bart Van Assche wrote:
>> This is all very tiresome.
> 
> Yes, this is tiresome. It is very annoying to me that others keep
> introducing so many regressions in such important parts of the kernel.
> It is also annoying to me that I get blamed if I report a regression
> instead of seeing that the regression gets fixed.

I agree, it sucks that any change there introduces the regression. I'm
fine with doing the delay insert again until a new patch is proven to be
better.

>From the original topic of this email, we have conditions that can cause
the driver to not be able to submit an IO. A set of those conditions can
only happen if IO is in flight, and those cases we have covered just
fine. Another set can potentially trigger without IO being in flight.
These are cases where a non-device resource is unavailable at the time
of submission. This might be iommu running out of space, for instance,
or it might be a memory allocation of some sort. For these cases, we
don't get any notification when the shortage clears. All we can do is
ensure that we restart operations at some point in the future. We're SOL
at that point, but we have to ensure that we make forward progress.

That last set of conditions better not be a a common occurence, since
performance is down the toilet at that point. I don't want to introduce
hot path code to rectify it. Have the driver return if that happens in a
way that is DIFFERENT from needing a normal restart. The driver knows if
this is a resource that will become available when IO completes on this
device or not. If we get that return, we have a generic run-again delay.

This basically becomes the same as doing the delay queue thing from DM,
but just in a generic fashion.
Ming Lei Jan. 19, 2018, 2:32 a.m. UTC | #7
On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
> On 1/18/18 11:47 AM, Bart Van Assche wrote:
> >> This is all very tiresome.
> > 
> > Yes, this is tiresome. It is very annoying to me that others keep
> > introducing so many regressions in such important parts of the kernel.
> > It is also annoying to me that I get blamed if I report a regression
> > instead of seeing that the regression gets fixed.
> 
> I agree, it sucks that any change there introduces the regression. I'm
> fine with doing the delay insert again until a new patch is proven to be
> better.

That way is still buggy as I explained, since rerun queue before adding
request to hctx->dispatch_list isn't correct. Who can make sure the request
is visible when __blk_mq_run_hw_queue() is called?

Not mention this way will cause performance regression again.

> 
> From the original topic of this email, we have conditions that can cause
> the driver to not be able to submit an IO. A set of those conditions can
> only happen if IO is in flight, and those cases we have covered just
> fine. Another set can potentially trigger without IO being in flight.
> These are cases where a non-device resource is unavailable at the time
> of submission. This might be iommu running out of space, for instance,
> or it might be a memory allocation of some sort. For these cases, we
> don't get any notification when the shortage clears. All we can do is
> ensure that we restart operations at some point in the future. We're SOL
> at that point, but we have to ensure that we make forward progress.

Right, it is a generic issue, not DM-specific one, almost all drivers
call kmalloc(GFP_ATOMIC) in IO path.

IMO, there is enough time for figuring out a generic solution before
4.16 release.

> 
> That last set of conditions better not be a a common occurence, since
> performance is down the toilet at that point. I don't want to introduce
> hot path code to rectify it. Have the driver return if that happens in a
> way that is DIFFERENT from needing a normal restart. The driver knows if
> this is a resource that will become available when IO completes on this
> device or not. If we get that return, we have a generic run-again delay.

Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
it should be DM-only which returns STS_RESOURCE so often.

> 
> This basically becomes the same as doing the delay queue thing from DM,
> but just in a generic fashion.

Yeah, it is right.
Jens Axboe Jan. 19, 2018, 4:02 a.m. UTC | #8
On 1/18/18 7:32 PM, Ming Lei wrote:
> On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
>> On 1/18/18 11:47 AM, Bart Van Assche wrote:
>>>> This is all very tiresome.
>>>
>>> Yes, this is tiresome. It is very annoying to me that others keep
>>> introducing so many regressions in such important parts of the kernel.
>>> It is also annoying to me that I get blamed if I report a regression
>>> instead of seeing that the regression gets fixed.
>>
>> I agree, it sucks that any change there introduces the regression. I'm
>> fine with doing the delay insert again until a new patch is proven to be
>> better.
> 
> That way is still buggy as I explained, since rerun queue before adding
> request to hctx->dispatch_list isn't correct. Who can make sure the request
> is visible when __blk_mq_run_hw_queue() is called?

That race basically doesn't exist for a 10ms gap.

> Not mention this way will cause performance regression again.

How so? It's _exactly_ the same as what you are proposing, except mine
will potentially run the queue when it need not do so. But given that
these are random 10ms queue kicks because we are screwed, it should not
matter. The key point is that it only should be if we have NO better
options. If it's a frequently occurring event that we have to return
BLK_STS_RESOURCE, then we need to get a way to register an event for
when that condition clears. That event will then kick the necessary
queue(s).

>> From the original topic of this email, we have conditions that can cause
>> the driver to not be able to submit an IO. A set of those conditions can
>> only happen if IO is in flight, and those cases we have covered just
>> fine. Another set can potentially trigger without IO being in flight.
>> These are cases where a non-device resource is unavailable at the time
>> of submission. This might be iommu running out of space, for instance,
>> or it might be a memory allocation of some sort. For these cases, we
>> don't get any notification when the shortage clears. All we can do is
>> ensure that we restart operations at some point in the future. We're SOL
>> at that point, but we have to ensure that we make forward progress.
> 
> Right, it is a generic issue, not DM-specific one, almost all drivers
> call kmalloc(GFP_ATOMIC) in IO path.

GFP_ATOMIC basically never fails, unless we are out of memory. The
exception is higher order allocations. If a driver has a higher order
atomic allocation in its IO path, the device driver writer needs to be
taken out behind the barn and shot. Simple as that. It will NEVER work
well in a production environment. Witness the disaster that so many NIC
driver writers have learned.

This is NOT the case we care about here. It's resources that are more
readily depleted because other devices are using them. If it's a high
frequency or generally occurring event, then we simply must have a
callback to restart the queue from that. The condition then becomes
identical to device private starvation, the only difference being from
where we restart the queue.

> IMO, there is enough time for figuring out a generic solution before
> 4.16 release.

I would hope so, but the proposed solutions have not filled me with
a lot of confidence in the end result so far.

>> That last set of conditions better not be a a common occurence, since
>> performance is down the toilet at that point. I don't want to introduce
>> hot path code to rectify it. Have the driver return if that happens in a
>> way that is DIFFERENT from needing a normal restart. The driver knows if
>> this is a resource that will become available when IO completes on this
>> device or not. If we get that return, we have a generic run-again delay.
> 
> Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> it should be DM-only which returns STS_RESOURCE so often.

Where does the dm STS_RESOURCE error usually come from - what's exact
resource are we running out of?
Bart Van Assche Jan. 19, 2018, 5:09 a.m. UTC | #9
On Fri, 2018-01-19 at 10:32 +0800, Ming Lei wrote:
> Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> it should be DM-only which returns STS_RESOURCE so often.

That's wrong at least for SCSI. See also https://marc.info/?l=linux-block&m=151578329417076.

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Ming Lei Jan. 19, 2018, 7:26 a.m. UTC | #10
On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
> On 1/18/18 7:32 PM, Ming Lei wrote:
> > On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
> >> On 1/18/18 11:47 AM, Bart Van Assche wrote:
> >>>> This is all very tiresome.
> >>>
> >>> Yes, this is tiresome. It is very annoying to me that others keep
> >>> introducing so many regressions in such important parts of the kernel.
> >>> It is also annoying to me that I get blamed if I report a regression
> >>> instead of seeing that the regression gets fixed.
> >>
> >> I agree, it sucks that any change there introduces the regression. I'm
> >> fine with doing the delay insert again until a new patch is proven to be
> >> better.
> > 
> > That way is still buggy as I explained, since rerun queue before adding
> > request to hctx->dispatch_list isn't correct. Who can make sure the request
> > is visible when __blk_mq_run_hw_queue() is called?
> 
> That race basically doesn't exist for a 10ms gap.
> 
> > Not mention this way will cause performance regression again.
> 
> How so? It's _exactly_ the same as what you are proposing, except mine
> will potentially run the queue when it need not do so. But given that
> these are random 10ms queue kicks because we are screwed, it should not
> matter. The key point is that it only should be if we have NO better
> options. If it's a frequently occurring event that we have to return
> BLK_STS_RESOURCE, then we need to get a way to register an event for
> when that condition clears. That event will then kick the necessary
> queue(s).

Please see queue_delayed_work_on(), hctx->run_work is shared by all
scheduling, once blk_mq_delay_run_hw_queue(100ms) returns, no new
scheduling can make progress during the 100ms.

> 
> >> From the original topic of this email, we have conditions that can cause
> >> the driver to not be able to submit an IO. A set of those conditions can
> >> only happen if IO is in flight, and those cases we have covered just
> >> fine. Another set can potentially trigger without IO being in flight.
> >> These are cases where a non-device resource is unavailable at the time
> >> of submission. This might be iommu running out of space, for instance,
> >> or it might be a memory allocation of some sort. For these cases, we
> >> don't get any notification when the shortage clears. All we can do is
> >> ensure that we restart operations at some point in the future. We're SOL
> >> at that point, but we have to ensure that we make forward progress.
> > 
> > Right, it is a generic issue, not DM-specific one, almost all drivers
> > call kmalloc(GFP_ATOMIC) in IO path.
> 
> GFP_ATOMIC basically never fails, unless we are out of memory. The

I guess GFP_KERNEL may never fail, but GFP_ATOMIC failure might be
possible, and it is mentioned[1] there is such code in mm allocation
path, also OOM can happen too.

  if (some randomly generated condition) && (request is atomic)
      return NULL;

[1] https://lwn.net/Articles/276731/

> exception is higher order allocations. If a driver has a higher order
> atomic allocation in its IO path, the device driver writer needs to be
> taken out behind the barn and shot. Simple as that. It will NEVER work
> well in a production environment. Witness the disaster that so many NIC
> driver writers have learned.
> 
> This is NOT the case we care about here. It's resources that are more
> readily depleted because other devices are using them. If it's a high
> frequency or generally occurring event, then we simply must have a
> callback to restart the queue from that. The condition then becomes
> identical to device private starvation, the only difference being from
> where we restart the queue.
> 
> > IMO, there is enough time for figuring out a generic solution before
> > 4.16 release.
> 
> I would hope so, but the proposed solutions have not filled me with
> a lot of confidence in the end result so far.
> 
> >> That last set of conditions better not be a a common occurence, since
> >> performance is down the toilet at that point. I don't want to introduce
> >> hot path code to rectify it. Have the driver return if that happens in a
> >> way that is DIFFERENT from needing a normal restart. The driver knows if
> >> this is a resource that will become available when IO completes on this
> >> device or not. If we get that return, we have a generic run-again delay.
> > 
> > Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> > it should be DM-only which returns STS_RESOURCE so often.
> 
> Where does the dm STS_RESOURCE error usually come from - what's exact
> resource are we running out of?

It is from blk_get_request(underlying queue), see multipath_clone_and_map().

Thanks,
Ming

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Ming Lei Jan. 19, 2018, 7:34 a.m. UTC | #11
On Fri, Jan 19, 2018 at 05:09:46AM +0000, Bart Van Assche wrote:
> On Fri, 2018-01-19 at 10:32 +0800, Ming Lei wrote:
> > Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> > it should be DM-only which returns STS_RESOURCE so often.
> 
> That's wrong at least for SCSI. See also https://marc.info/?l=linux-block&m=151578329417076.
> 

> For other scenario's, e.g. if a SCSI initiator submits a
> SCSI request over a fabric and the SCSI target replies with "BUSY" then the

Could you explain a bit when SCSI target replies with BUSY very often?

Inside initiator, we have limited the max per-LUN requests and per-host
requests already before calling .queue_rq().

> SCSI core will end the I/O request with status BLK_STS_RESOURCE after the
> maximum number of retries has been reached (see also scsi_io_completion()).
> In that last case, if a SCSI target sends a "BUSY" reply over the wire back
> to the initiator, there is no other approach for the SCSI initiator to
> figure out whether it can queue another request than to resubmit the
> request. The worst possible strategy is to resubmit a request immediately
> because that will cause a significant fraction of the fabric bandwidth to
> be used just for replying "BUSY" to requests that can't be processed
> immediately.
Jens Axboe Jan. 19, 2018, 3:24 p.m. UTC | #12
On 1/19/18 12:26 AM, Ming Lei wrote:
> On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
>> On 1/18/18 7:32 PM, Ming Lei wrote:
>>> On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
>>>> On 1/18/18 11:47 AM, Bart Van Assche wrote:
>>>>>> This is all very tiresome.
>>>>>
>>>>> Yes, this is tiresome. It is very annoying to me that others keep
>>>>> introducing so many regressions in such important parts of the kernel.
>>>>> It is also annoying to me that I get blamed if I report a regression
>>>>> instead of seeing that the regression gets fixed.
>>>>
>>>> I agree, it sucks that any change there introduces the regression. I'm
>>>> fine with doing the delay insert again until a new patch is proven to be
>>>> better.
>>>
>>> That way is still buggy as I explained, since rerun queue before adding
>>> request to hctx->dispatch_list isn't correct. Who can make sure the request
>>> is visible when __blk_mq_run_hw_queue() is called?
>>
>> That race basically doesn't exist for a 10ms gap.
>>
>>> Not mention this way will cause performance regression again.
>>
>> How so? It's _exactly_ the same as what you are proposing, except mine
>> will potentially run the queue when it need not do so. But given that
>> these are random 10ms queue kicks because we are screwed, it should not
>> matter. The key point is that it only should be if we have NO better
>> options. If it's a frequently occurring event that we have to return
>> BLK_STS_RESOURCE, then we need to get a way to register an event for
>> when that condition clears. That event will then kick the necessary
>> queue(s).
> 
> Please see queue_delayed_work_on(), hctx->run_work is shared by all
> scheduling, once blk_mq_delay_run_hw_queue(100ms) returns, no new
> scheduling can make progress during the 100ms.

That's a bug, plain and simple. If someone does "run this queue in
100ms" and someone else comes in and says "run this queue now", the
correct outcome is running this queue now.

>>>> From the original topic of this email, we have conditions that can cause
>>>> the driver to not be able to submit an IO. A set of those conditions can
>>>> only happen if IO is in flight, and those cases we have covered just
>>>> fine. Another set can potentially trigger without IO being in flight.
>>>> These are cases where a non-device resource is unavailable at the time
>>>> of submission. This might be iommu running out of space, for instance,
>>>> or it might be a memory allocation of some sort. For these cases, we
>>>> don't get any notification when the shortage clears. All we can do is
>>>> ensure that we restart operations at some point in the future. We're SOL
>>>> at that point, but we have to ensure that we make forward progress.
>>>
>>> Right, it is a generic issue, not DM-specific one, almost all drivers
>>> call kmalloc(GFP_ATOMIC) in IO path.
>>
>> GFP_ATOMIC basically never fails, unless we are out of memory. The
> 
> I guess GFP_KERNEL may never fail, but GFP_ATOMIC failure might be
> possible, and it is mentioned[1] there is such code in mm allocation
> path, also OOM can happen too.
> 
>   if (some randomly generated condition) && (request is atomic)
>       return NULL;
> 
> [1] https://lwn.net/Articles/276731/

That article is 10 years old. Once you run large scale production, you
see what the real failures are. Fact is, for zero order allocation, if
the atomic alloc fails the shit has really hit the fan. In that case, a
delay of 10ms is not your main issue. It's a total red herring when you
compare to the frequency of what Bart is seeing. It's noise, and
irrelevant here. For an atomic zero order allocation failure, doing a
short random sleep is perfectly fine.

>> exception is higher order allocations. If a driver has a higher order
>> atomic allocation in its IO path, the device driver writer needs to be
>> taken out behind the barn and shot. Simple as that. It will NEVER work
>> well in a production environment. Witness the disaster that so many NIC
>> driver writers have learned.
>>
>> This is NOT the case we care about here. It's resources that are more
>> readily depleted because other devices are using them. If it's a high
>> frequency or generally occurring event, then we simply must have a
>> callback to restart the queue from that. The condition then becomes
>> identical to device private starvation, the only difference being from
>> where we restart the queue.
>>
>>> IMO, there is enough time for figuring out a generic solution before
>>> 4.16 release.
>>
>> I would hope so, but the proposed solutions have not filled me with
>> a lot of confidence in the end result so far.
>>
>>>> That last set of conditions better not be a a common occurence, since
>>>> performance is down the toilet at that point. I don't want to introduce
>>>> hot path code to rectify it. Have the driver return if that happens in a
>>>> way that is DIFFERENT from needing a normal restart. The driver knows if
>>>> this is a resource that will become available when IO completes on this
>>>> device or not. If we get that return, we have a generic run-again delay.
>>>
>>> Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
>>> it should be DM-only which returns STS_RESOURCE so often.
>>
>> Where does the dm STS_RESOURCE error usually come from - what's exact
>> resource are we running out of?
> 
> It is from blk_get_request(underlying queue), see
> multipath_clone_and_map().

That's what I thought. So for a low queue depth underlying queue, it's
quite possible that this situation can happen. Two potential solutions
I see:

1) As described earlier in this thread, having a mechanism for being
   notified when the scarce resource becomes available. It would not
   be hard to tap into the existing sbitmap wait queue for that.

2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
   allocation. I haven't read the dm code to know if this is a
   possibility or not.

I'd probably prefer #1. It's a classic case of trying to get the
request, and if it fails, add ourselves to the sbitmap tag wait
queue head, retry, and bail if that also fails. Connecting the
scarce resource and the consumer is the only way to really fix
this, without bogus arbitrary delays.
Ming Lei Jan. 19, 2018, 3:40 p.m. UTC | #13
On Fri, Jan 19, 2018 at 08:24:06AM -0700, Jens Axboe wrote:
> On 1/19/18 12:26 AM, Ming Lei wrote:
> > On Thu, Jan 18, 2018 at 09:02:45PM -0700, Jens Axboe wrote:
> >> On 1/18/18 7:32 PM, Ming Lei wrote:
> >>> On Thu, Jan 18, 2018 at 01:11:01PM -0700, Jens Axboe wrote:
> >>>> On 1/18/18 11:47 AM, Bart Van Assche wrote:
> >>>>>> This is all very tiresome.
> >>>>>
> >>>>> Yes, this is tiresome. It is very annoying to me that others keep
> >>>>> introducing so many regressions in such important parts of the kernel.
> >>>>> It is also annoying to me that I get blamed if I report a regression
> >>>>> instead of seeing that the regression gets fixed.
> >>>>
> >>>> I agree, it sucks that any change there introduces the regression. I'm
> >>>> fine with doing the delay insert again until a new patch is proven to be
> >>>> better.
> >>>
> >>> That way is still buggy as I explained, since rerun queue before adding
> >>> request to hctx->dispatch_list isn't correct. Who can make sure the request
> >>> is visible when __blk_mq_run_hw_queue() is called?
> >>
> >> That race basically doesn't exist for a 10ms gap.
> >>
> >>> Not mention this way will cause performance regression again.
> >>
> >> How so? It's _exactly_ the same as what you are proposing, except mine
> >> will potentially run the queue when it need not do so. But given that
> >> these are random 10ms queue kicks because we are screwed, it should not
> >> matter. The key point is that it only should be if we have NO better
> >> options. If it's a frequently occurring event that we have to return
> >> BLK_STS_RESOURCE, then we need to get a way to register an event for
> >> when that condition clears. That event will then kick the necessary
> >> queue(s).
> > 
> > Please see queue_delayed_work_on(), hctx->run_work is shared by all
> > scheduling, once blk_mq_delay_run_hw_queue(100ms) returns, no new
> > scheduling can make progress during the 100ms.
> 
> That's a bug, plain and simple. If someone does "run this queue in
> 100ms" and someone else comes in and says "run this queue now", the
> correct outcome is running this queue now.
> 
> >>>> From the original topic of this email, we have conditions that can cause
> >>>> the driver to not be able to submit an IO. A set of those conditions can
> >>>> only happen if IO is in flight, and those cases we have covered just
> >>>> fine. Another set can potentially trigger without IO being in flight.
> >>>> These are cases where a non-device resource is unavailable at the time
> >>>> of submission. This might be iommu running out of space, for instance,
> >>>> or it might be a memory allocation of some sort. For these cases, we
> >>>> don't get any notification when the shortage clears. All we can do is
> >>>> ensure that we restart operations at some point in the future. We're SOL
> >>>> at that point, but we have to ensure that we make forward progress.
> >>>
> >>> Right, it is a generic issue, not DM-specific one, almost all drivers
> >>> call kmalloc(GFP_ATOMIC) in IO path.
> >>
> >> GFP_ATOMIC basically never fails, unless we are out of memory. The
> > 
> > I guess GFP_KERNEL may never fail, but GFP_ATOMIC failure might be
> > possible, and it is mentioned[1] there is such code in mm allocation
> > path, also OOM can happen too.
> > 
> >   if (some randomly generated condition) && (request is atomic)
> >       return NULL;
> > 
> > [1] https://lwn.net/Articles/276731/
> 
> That article is 10 years old. Once you run large scale production, you
> see what the real failures are. Fact is, for zero order allocation, if
> the atomic alloc fails the shit has really hit the fan. In that case, a
> delay of 10ms is not your main issue. It's a total red herring when you
> compare to the frequency of what Bart is seeing. It's noise, and
> irrelevant here. For an atomic zero order allocation failure, doing a
> short random sleep is perfectly fine.
> 
> >> exception is higher order allocations. If a driver has a higher order
> >> atomic allocation in its IO path, the device driver writer needs to be
> >> taken out behind the barn and shot. Simple as that. It will NEVER work
> >> well in a production environment. Witness the disaster that so many NIC
> >> driver writers have learned.
> >>
> >> This is NOT the case we care about here. It's resources that are more
> >> readily depleted because other devices are using them. If it's a high
> >> frequency or generally occurring event, then we simply must have a
> >> callback to restart the queue from that. The condition then becomes
> >> identical to device private starvation, the only difference being from
> >> where we restart the queue.
> >>
> >>> IMO, there is enough time for figuring out a generic solution before
> >>> 4.16 release.
> >>
> >> I would hope so, but the proposed solutions have not filled me with
> >> a lot of confidence in the end result so far.
> >>
> >>>> That last set of conditions better not be a a common occurence, since
> >>>> performance is down the toilet at that point. I don't want to introduce
> >>>> hot path code to rectify it. Have the driver return if that happens in a
> >>>> way that is DIFFERENT from needing a normal restart. The driver knows if
> >>>> this is a resource that will become available when IO completes on this
> >>>> device or not. If we get that return, we have a generic run-again delay.
> >>>
> >>> Now most of times both NVMe and SCSI won't return BLK_STS_RESOURCE, and
> >>> it should be DM-only which returns STS_RESOURCE so often.
> >>
> >> Where does the dm STS_RESOURCE error usually come from - what's exact
> >> resource are we running out of?
> > 
> > It is from blk_get_request(underlying queue), see
> > multipath_clone_and_map().
> 
> That's what I thought. So for a low queue depth underlying queue, it's
> quite possible that this situation can happen. Two potential solutions
> I see:
> 
> 1) As described earlier in this thread, having a mechanism for being
>    notified when the scarce resource becomes available. It would not
>    be hard to tap into the existing sbitmap wait queue for that.
> 
> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>    allocation. I haven't read the dm code to know if this is a
>    possibility or not.
> 
> I'd probably prefer #1. It's a classic case of trying to get the
> request, and if it fails, add ourselves to the sbitmap tag wait
> queue head, retry, and bail if that also fails. Connecting the
> scarce resource and the consumer is the only way to really fix
> this, without bogus arbitrary delays.

Right, as I have replied to Bart, using mod_delayed_work_on() with
returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
resource should fix this issue.
Jens Axboe Jan. 19, 2018, 3:48 p.m. UTC | #14
On 1/19/18 8:40 AM, Ming Lei wrote:
>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>> resource are we running out of?
>>>
>>> It is from blk_get_request(underlying queue), see
>>> multipath_clone_and_map().
>>
>> That's what I thought. So for a low queue depth underlying queue, it's
>> quite possible that this situation can happen. Two potential solutions
>> I see:
>>
>> 1) As described earlier in this thread, having a mechanism for being
>>    notified when the scarce resource becomes available. It would not
>>    be hard to tap into the existing sbitmap wait queue for that.
>>
>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>    allocation. I haven't read the dm code to know if this is a
>>    possibility or not.
>>
>> I'd probably prefer #1. It's a classic case of trying to get the
>> request, and if it fails, add ourselves to the sbitmap tag wait
>> queue head, retry, and bail if that also fails. Connecting the
>> scarce resource and the consumer is the only way to really fix
>> this, without bogus arbitrary delays.
> 
> Right, as I have replied to Bart, using mod_delayed_work_on() with
> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> resource should fix this issue.

It'll fix the forever stall, but it won't really fix it, as we'll slow
down the dm device by some random amount.

A simple test case would be to have a null_blk device with a queue depth
of one, and dm on top of that. Start a fio job that runs two jobs: one
that does IO to the underlying device, and one that does IO to the dm
device. If the job on the dm device runs substantially slower than the
one to the underlying device, then the problem isn't really fixed.

That said, I'm fine with ensuring that we make forward progress always
first, and then we can come up with a proper solution to the issue. The
forward progress guarantee will be needed for the more rare failure
cases, like allocation failures. nvme needs that too, for instance, for
the discard range struct allocation.
Ming Lei Jan. 19, 2018, 4:05 p.m. UTC | #15
On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>> resource are we running out of?
> >>>
> >>> It is from blk_get_request(underlying queue), see
> >>> multipath_clone_and_map().
> >>
> >> That's what I thought. So for a low queue depth underlying queue, it's
> >> quite possible that this situation can happen. Two potential solutions
> >> I see:
> >>
> >> 1) As described earlier in this thread, having a mechanism for being
> >>    notified when the scarce resource becomes available. It would not
> >>    be hard to tap into the existing sbitmap wait queue for that.
> >>
> >> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>    allocation. I haven't read the dm code to know if this is a
> >>    possibility or not.
> >>
> >> I'd probably prefer #1. It's a classic case of trying to get the
> >> request, and if it fails, add ourselves to the sbitmap tag wait
> >> queue head, retry, and bail if that also fails. Connecting the
> >> scarce resource and the consumer is the only way to really fix
> >> this, without bogus arbitrary delays.
> > 
> > Right, as I have replied to Bart, using mod_delayed_work_on() with
> > returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> > resource should fix this issue.
> 
> It'll fix the forever stall, but it won't really fix it, as we'll slow
> down the dm device by some random amount.
> 
> A simple test case would be to have a null_blk device with a queue depth
> of one, and dm on top of that. Start a fio job that runs two jobs: one
> that does IO to the underlying device, and one that does IO to the dm
> device. If the job on the dm device runs substantially slower than the
> one to the underlying device, then the problem isn't really fixed.

I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
seems not observed this issue, could you explain a bit why IO over dm-mpath
may be slower? Because both two IO contexts call same get_request(), and
in theory dm-mpath should be a bit quicker since it uses direct issue for
underlying queue, without io scheduler involved.
Mike Snitzer Jan. 19, 2018, 4:13 p.m. UTC | #16
On Fri, Jan 19 2018 at 10:48am -0500,
Jens Axboe <axboe@kernel.dk> wrote:

> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>> resource are we running out of?
> >>>
> >>> It is from blk_get_request(underlying queue), see
> >>> multipath_clone_and_map().
> >>
> >> That's what I thought. So for a low queue depth underlying queue, it's
> >> quite possible that this situation can happen. Two potential solutions
> >> I see:
> >>
> >> 1) As described earlier in this thread, having a mechanism for being
> >>    notified when the scarce resource becomes available. It would not
> >>    be hard to tap into the existing sbitmap wait queue for that.
> >>
> >> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>    allocation. I haven't read the dm code to know if this is a
> >>    possibility or not.

Right, #2 is _not_ the way forward.  Historically request-based DM used
its own mempool for requests, this was to be able to have some measure
of control and resiliency in the face of low memory conditions that
might be affecting the broader system.

Then Christoph switched over to adding per-request-data; which ushered
in the use of blk_get_request using ATOMIC allocations.  I like the
result of that line of development.  But taking the next step of setting
BLK_MQ_F_BLOCKING is highly unfortunate (especially in that this
dm-mpath.c code is common to old .request_fn and blk-mq, at least the
call to blk_get_request is).  Ultimately dm-mpath like to avoid blocking
for a request because for this dm-mpath device we have multiple queues
to allocate from if need be (provided we have an active-active storage
network topology).

> >> I'd probably prefer #1. It's a classic case of trying to get the
> >> request, and if it fails, add ourselves to the sbitmap tag wait
> >> queue head, retry, and bail if that also fails. Connecting the
> >> scarce resource and the consumer is the only way to really fix
> >> this, without bogus arbitrary delays.
> > 
> > Right, as I have replied to Bart, using mod_delayed_work_on() with
> > returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> > resource should fix this issue.
> 
> It'll fix the forever stall, but it won't really fix it, as we'll slow
> down the dm device by some random amount.

Agreed.

> A simple test case would be to have a null_blk device with a queue depth
> of one, and dm on top of that. Start a fio job that runs two jobs: one
> that does IO to the underlying device, and one that does IO to the dm
> device. If the job on the dm device runs substantially slower than the
> one to the underlying device, then the problem isn't really fixed.

Not sure DM will allow the underlying device to be opened (due to
master/slave ownership that is part of loading a DM table)?

> That said, I'm fine with ensuring that we make forward progress always
> first, and then we can come up with a proper solution to the issue. The
> forward progress guarantee will be needed for the more rare failure
> cases, like allocation failures. nvme needs that too, for instance, for
> the discard range struct allocation.

Yeap, I'd be OK with that too.  We'd be better for revisted this and
then have some time to develop the ultimate robust fix (#1, callback
from above).

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Jens Axboe Jan. 19, 2018, 4:19 p.m. UTC | #17
On 1/19/18 9:05 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>> resource are we running out of?
>>>>>
>>>>> It is from blk_get_request(underlying queue), see
>>>>> multipath_clone_and_map().
>>>>
>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>> quite possible that this situation can happen. Two potential solutions
>>>> I see:
>>>>
>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>    notified when the scarce resource becomes available. It would not
>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>
>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>    allocation. I haven't read the dm code to know if this is a
>>>>    possibility or not.
>>>>
>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>> queue head, retry, and bail if that also fails. Connecting the
>>>> scarce resource and the consumer is the only way to really fix
>>>> this, without bogus arbitrary delays.
>>>
>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
>>> resource should fix this issue.
>>
>> It'll fix the forever stall, but it won't really fix it, as we'll slow
>> down the dm device by some random amount.
>>
>> A simple test case would be to have a null_blk device with a queue depth
>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>> that does IO to the underlying device, and one that does IO to the dm
>> device. If the job on the dm device runs substantially slower than the
>> one to the underlying device, then the problem isn't really fixed.
> 
> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
> seems not observed this issue, could you explain a bit why IO over dm-mpath
> may be slower? Because both two IO contexts call same get_request(), and
> in theory dm-mpath should be a bit quicker since it uses direct issue for
> underlying queue, without io scheduler involved.

Because if you lose the race for getting the request, you'll have some
arbitrary delay before trying again, potentially. Compared to the direct
user of the underlying device, who will simply sleep on the resource and
get woken the instant it's available.
Jens Axboe Jan. 19, 2018, 4:23 p.m. UTC | #18
On 1/19/18 9:13 AM, Mike Snitzer wrote:
> On Fri, Jan 19 2018 at 10:48am -0500,
> Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>> resource are we running out of?
>>>>>
>>>>> It is from blk_get_request(underlying queue), see
>>>>> multipath_clone_and_map().
>>>>
>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>> quite possible that this situation can happen. Two potential solutions
>>>> I see:
>>>>
>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>    notified when the scarce resource becomes available. It would not
>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>
>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>    allocation. I haven't read the dm code to know if this is a
>>>>    possibility or not.
> 
> Right, #2 is _not_ the way forward.  Historically request-based DM used
> its own mempool for requests, this was to be able to have some measure
> of control and resiliency in the face of low memory conditions that
> might be affecting the broader system.
> 
> Then Christoph switched over to adding per-request-data; which ushered
> in the use of blk_get_request using ATOMIC allocations.  I like the
> result of that line of development.  But taking the next step of setting
> BLK_MQ_F_BLOCKING is highly unfortunate (especially in that this
> dm-mpath.c code is common to old .request_fn and blk-mq, at least the
> call to blk_get_request is).  Ultimately dm-mpath like to avoid blocking
> for a request because for this dm-mpath device we have multiple queues
> to allocate from if need be (provided we have an active-active storage
> network topology).

If you can go to multiple devices, obviously it should not block on a
single device. That's only true for the case where you can only go to
one device, blocking at that point would probably be fine. Or if all
your paths are busy, then blocking would also be OK.

But it's a much larger change, and would entail changing more than just
the actual call to blk_get_request().

>> A simple test case would be to have a null_blk device with a queue depth
>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>> that does IO to the underlying device, and one that does IO to the dm
>> device. If the job on the dm device runs substantially slower than the
>> one to the underlying device, then the problem isn't really fixed.
> 
> Not sure DM will allow the underlying device to be opened (due to
> master/slave ownership that is part of loading a DM table)?

There are many ways it could be setup - just partition the underlying
device then, and have one partition be part of the dm setup and the
other used directly.

>> That said, I'm fine with ensuring that we make forward progress always
>> first, and then we can come up with a proper solution to the issue. The
>> forward progress guarantee will be needed for the more rare failure
>> cases, like allocation failures. nvme needs that too, for instance, for
>> the discard range struct allocation.
> 
> Yeap, I'd be OK with that too.  We'd be better for revisted this and
> then have some time to develop the ultimate robust fix (#1, callback
> from above).

Yeah, we need the quick and dirty sooner, which just brings us back to
what we had before, essentially.
Ming Lei Jan. 19, 2018, 4:26 p.m. UTC | #19
On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> On 1/19/18 9:05 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>>>> resource are we running out of?
> >>>>>
> >>>>> It is from blk_get_request(underlying queue), see
> >>>>> multipath_clone_and_map().
> >>>>
> >>>> That's what I thought. So for a low queue depth underlying queue, it's
> >>>> quite possible that this situation can happen. Two potential solutions
> >>>> I see:
> >>>>
> >>>> 1) As described earlier in this thread, having a mechanism for being
> >>>>    notified when the scarce resource becomes available. It would not
> >>>>    be hard to tap into the existing sbitmap wait queue for that.
> >>>>
> >>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>    allocation. I haven't read the dm code to know if this is a
> >>>>    possibility or not.
> >>>>
> >>>> I'd probably prefer #1. It's a classic case of trying to get the
> >>>> request, and if it fails, add ourselves to the sbitmap tag wait
> >>>> queue head, retry, and bail if that also fails. Connecting the
> >>>> scarce resource and the consumer is the only way to really fix
> >>>> this, without bogus arbitrary delays.
> >>>
> >>> Right, as I have replied to Bart, using mod_delayed_work_on() with
> >>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> >>> resource should fix this issue.
> >>
> >> It'll fix the forever stall, but it won't really fix it, as we'll slow
> >> down the dm device by some random amount.
> >>
> >> A simple test case would be to have a null_blk device with a queue depth
> >> of one, and dm on top of that. Start a fio job that runs two jobs: one
> >> that does IO to the underlying device, and one that does IO to the dm
> >> device. If the job on the dm device runs substantially slower than the
> >> one to the underlying device, then the problem isn't really fixed.
> > 
> > I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
> > seems not observed this issue, could you explain a bit why IO over dm-mpath
> > may be slower? Because both two IO contexts call same get_request(), and
> > in theory dm-mpath should be a bit quicker since it uses direct issue for
> > underlying queue, without io scheduler involved.
> 
> Because if you lose the race for getting the request, you'll have some
> arbitrary delay before trying again, potentially. Compared to the direct

But the restart still works, one request is completed, then the queue
is return immediately because we use mod_delayed_work_on(0), so looks
no such issue.
Jens Axboe Jan. 19, 2018, 4:27 p.m. UTC | #20
On 1/19/18 9:26 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>> On 1/19/18 9:05 AM, Ming Lei wrote:
>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>>>> resource are we running out of?
>>>>>>>
>>>>>>> It is from blk_get_request(underlying queue), see
>>>>>>> multipath_clone_and_map().
>>>>>>
>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>>>> quite possible that this situation can happen. Two potential solutions
>>>>>> I see:
>>>>>>
>>>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>>>    notified when the scarce resource becomes available. It would not
>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>>>
>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>>>    allocation. I haven't read the dm code to know if this is a
>>>>>>    possibility or not.
>>>>>>
>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>>>> queue head, retry, and bail if that also fails. Connecting the
>>>>>> scarce resource and the consumer is the only way to really fix
>>>>>> this, without bogus arbitrary delays.
>>>>>
>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
>>>>> resource should fix this issue.
>>>>
>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
>>>> down the dm device by some random amount.
>>>>
>>>> A simple test case would be to have a null_blk device with a queue depth
>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>>>> that does IO to the underlying device, and one that does IO to the dm
>>>> device. If the job on the dm device runs substantially slower than the
>>>> one to the underlying device, then the problem isn't really fixed.
>>>
>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
>>> may be slower? Because both two IO contexts call same get_request(), and
>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
>>> underlying queue, without io scheduler involved.
>>
>> Because if you lose the race for getting the request, you'll have some
>> arbitrary delay before trying again, potentially. Compared to the direct
> 
> But the restart still works, one request is completed, then the queue
> is return immediately because we use mod_delayed_work_on(0), so looks
> no such issue.

There are no pending requests for this case, nothing to restart the
queue. When you fail that blk_get_request(), you are idle, nothing
is pending.
Ming Lei Jan. 19, 2018, 4:37 p.m. UTC | #21
On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> On 1/19/18 9:26 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>>>>>> resource are we running out of?
> >>>>>>>
> >>>>>>> It is from blk_get_request(underlying queue), see
> >>>>>>> multipath_clone_and_map().
> >>>>>>
> >>>>>> That's what I thought. So for a low queue depth underlying queue, it's
> >>>>>> quite possible that this situation can happen. Two potential solutions
> >>>>>> I see:
> >>>>>>
> >>>>>> 1) As described earlier in this thread, having a mechanism for being
> >>>>>>    notified when the scarce resource becomes available. It would not
> >>>>>>    be hard to tap into the existing sbitmap wait queue for that.
> >>>>>>
> >>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>>>    allocation. I haven't read the dm code to know if this is a
> >>>>>>    possibility or not.
> >>>>>>
> >>>>>> I'd probably prefer #1. It's a classic case of trying to get the
> >>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
> >>>>>> queue head, retry, and bail if that also fails. Connecting the
> >>>>>> scarce resource and the consumer is the only way to really fix
> >>>>>> this, without bogus arbitrary delays.
> >>>>>
> >>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
> >>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> >>>>> resource should fix this issue.
> >>>>
> >>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
> >>>> down the dm device by some random amount.
> >>>>
> >>>> A simple test case would be to have a null_blk device with a queue depth
> >>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
> >>>> that does IO to the underlying device, and one that does IO to the dm
> >>>> device. If the job on the dm device runs substantially slower than the
> >>>> one to the underlying device, then the problem isn't really fixed.
> >>>
> >>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
> >>> seems not observed this issue, could you explain a bit why IO over dm-mpath
> >>> may be slower? Because both two IO contexts call same get_request(), and
> >>> in theory dm-mpath should be a bit quicker since it uses direct issue for
> >>> underlying queue, without io scheduler involved.
> >>
> >> Because if you lose the race for getting the request, you'll have some
> >> arbitrary delay before trying again, potentially. Compared to the direct
> > 
> > But the restart still works, one request is completed, then the queue
> > is return immediately because we use mod_delayed_work_on(0), so looks
> > no such issue.
> 
> There are no pending requests for this case, nothing to restart the
> queue. When you fail that blk_get_request(), you are idle, nothing
> is pending.

I think we needn't worry about that, once a device is attached to
dm-rq, it can't be mounted any more, and usually user don't use the device
directly and by dm-mpath at the same time.
Jens Axboe Jan. 19, 2018, 4:41 p.m. UTC | #22
On 1/19/18 9:37 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
>> On 1/19/18 9:26 AM, Ming Lei wrote:
>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>>>> On 1/19/18 9:05 AM, Ming Lei wrote:
>>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>>>>>> resource are we running out of?
>>>>>>>>>
>>>>>>>>> It is from blk_get_request(underlying queue), see
>>>>>>>>> multipath_clone_and_map().
>>>>>>>>
>>>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>>>>>> quite possible that this situation can happen. Two potential solutions
>>>>>>>> I see:
>>>>>>>>
>>>>>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>>>>>    notified when the scarce resource becomes available. It would not
>>>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>>>>>
>>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>>>>>    allocation. I haven't read the dm code to know if this is a
>>>>>>>>    possibility or not.
>>>>>>>>
>>>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>>>>>> queue head, retry, and bail if that also fails. Connecting the
>>>>>>>> scarce resource and the consumer is the only way to really fix
>>>>>>>> this, without bogus arbitrary delays.
>>>>>>>
>>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
>>>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
>>>>>>> resource should fix this issue.
>>>>>>
>>>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
>>>>>> down the dm device by some random amount.
>>>>>>
>>>>>> A simple test case would be to have a null_blk device with a queue depth
>>>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>>>>>> that does IO to the underlying device, and one that does IO to the dm
>>>>>> device. If the job on the dm device runs substantially slower than the
>>>>>> one to the underlying device, then the problem isn't really fixed.
>>>>>
>>>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
>>>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
>>>>> may be slower? Because both two IO contexts call same get_request(), and
>>>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
>>>>> underlying queue, without io scheduler involved.
>>>>
>>>> Because if you lose the race for getting the request, you'll have some
>>>> arbitrary delay before trying again, potentially. Compared to the direct
>>>
>>> But the restart still works, one request is completed, then the queue
>>> is return immediately because we use mod_delayed_work_on(0), so looks
>>> no such issue.
>>
>> There are no pending requests for this case, nothing to restart the
>> queue. When you fail that blk_get_request(), you are idle, nothing
>> is pending.
> 
> I think we needn't worry about that, once a device is attached to
> dm-rq, it can't be mounted any more, and usually user don't use the device
> directly and by dm-mpath at the same time.

Even if it doesn't happen for a normal dm setup, it is a case that
needs to be handled. The request allocation is just one example of
a wider scope resource that can be unavailable. If the driver returns
NO_DEV_RESOURCE (or whatever name), it will be a possibility that
the device itself is currently idle.
Mike Snitzer Jan. 19, 2018, 4:47 p.m. UTC | #23
On Fri, Jan 19 2018 at 11:41am -0500,
Jens Axboe <axboe@kernel.dk> wrote:

> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>
> >> There are no pending requests for this case, nothing to restart the
> >> queue. When you fail that blk_get_request(), you are idle, nothing
> >> is pending.
> > 
> > I think we needn't worry about that, once a device is attached to
> > dm-rq, it can't be mounted any more, and usually user don't use the device
> > directly and by dm-mpath at the same time.
> 
> Even if it doesn't happen for a normal dm setup, it is a case that
> needs to be handled. The request allocation is just one example of
> a wider scope resource that can be unavailable. If the driver returns
> NO_DEV_RESOURCE (or whatever name), it will be a possibility that
> the device itself is currently idle.

How would a driver's resources be exhausted yet the device is idle (so
as not to be able to benefit from RESTART)?

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Jens Axboe Jan. 19, 2018, 4:52 p.m. UTC | #24
On 1/19/18 9:47 AM, Mike Snitzer wrote:
> On Fri, Jan 19 2018 at 11:41am -0500,
> Jens Axboe <axboe@kernel.dk> wrote:
> 
>> On 1/19/18 9:37 AM, Ming Lei wrote:
>>> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
>>>> On 1/19/18 9:26 AM, Ming Lei wrote:
>>>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>>>>
>>>> There are no pending requests for this case, nothing to restart the
>>>> queue. When you fail that blk_get_request(), you are idle, nothing
>>>> is pending.
>>>
>>> I think we needn't worry about that, once a device is attached to
>>> dm-rq, it can't be mounted any more, and usually user don't use the device
>>> directly and by dm-mpath at the same time.
>>
>> Even if it doesn't happen for a normal dm setup, it is a case that
>> needs to be handled. The request allocation is just one example of
>> a wider scope resource that can be unavailable. If the driver returns
>> NO_DEV_RESOURCE (or whatever name), it will be a possibility that
>> the device itself is currently idle.
> 
> How would a driver's resources be exhausted yet the device is idle (so
> as not to be able to benefit from RESTART)?

I've outlined a number of these examples already. Another case might be:

1) Device is idle
2) Device gets request
3) Device attempts to DMA map
4) DMA map fails because the IOMMU is out of space (nic is using it all)
5) Device returns STS_RESOURCE
6) Queue is marked as needing a restart

All's well, except there is no IO on this device that will notice the
restart bit and retry the operation.

Replace IOMMU failure with any other resource that the driver might need
for an IO, which isn't tied to a device specific resource.
blk_get_request() on dm is an example, as is any allocation failure
occurring in the queue IO path for the driver.
Ming Lei Jan. 19, 2018, 5:05 p.m. UTC | #25
On Fri, Jan 19, 2018 at 09:52:32AM -0700, Jens Axboe wrote:
> On 1/19/18 9:47 AM, Mike Snitzer wrote:
> > On Fri, Jan 19 2018 at 11:41am -0500,
> > Jens Axboe <axboe@kernel.dk> wrote:
> > 
> >> On 1/19/18 9:37 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>>
> >>>> There are no pending requests for this case, nothing to restart the
> >>>> queue. When you fail that blk_get_request(), you are idle, nothing
> >>>> is pending.
> >>>
> >>> I think we needn't worry about that, once a device is attached to
> >>> dm-rq, it can't be mounted any more, and usually user don't use the device
> >>> directly and by dm-mpath at the same time.
> >>
> >> Even if it doesn't happen for a normal dm setup, it is a case that
> >> needs to be handled. The request allocation is just one example of
> >> a wider scope resource that can be unavailable. If the driver returns
> >> NO_DEV_RESOURCE (or whatever name), it will be a possibility that
> >> the device itself is currently idle.
> > 
> > How would a driver's resources be exhausted yet the device is idle (so
> > as not to be able to benefit from RESTART)?
> 
> I've outlined a number of these examples already. Another case might be:
> 
> 1) Device is idle
> 2) Device gets request
> 3) Device attempts to DMA map
> 4) DMA map fails because the IOMMU is out of space (nic is using it all)
> 5) Device returns STS_RESOURCE
> 6) Queue is marked as needing a restart
> 
> All's well, except there is no IO on this device that will notice the
> restart bit and retry the operation.
> 
> Replace IOMMU failure with any other resource that the driver might need
> for an IO, which isn't tied to a device specific resource.
> blk_get_request() on dm is an example, as is any allocation failure
> occurring in the queue IO path for the driver.

Yeah, if we decide to take the approach of introducing NO_DEV_RESOURCE, all
the current STS_RESOURCE for non-device resource allocation(kmalloc, dma
map, get_request, ...) should be converted to NO_DEV_RESOURCE.

And it is a generic issue, which need generic solution.

Seems running queue after arbitrary in this case is the only way we
thought of, or other solutions?

If the decision is made, let's make/review the patch, :-)
Jens Axboe Jan. 19, 2018, 5:09 p.m. UTC | #26
On 1/19/18 10:05 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 09:52:32AM -0700, Jens Axboe wrote:
>> On 1/19/18 9:47 AM, Mike Snitzer wrote:
>>> On Fri, Jan 19 2018 at 11:41am -0500,
>>> Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>>> On 1/19/18 9:37 AM, Ming Lei wrote:
>>>>> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
>>>>>> On 1/19/18 9:26 AM, Ming Lei wrote:
>>>>>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>>>>>>
>>>>>> There are no pending requests for this case, nothing to restart the
>>>>>> queue. When you fail that blk_get_request(), you are idle, nothing
>>>>>> is pending.
>>>>>
>>>>> I think we needn't worry about that, once a device is attached to
>>>>> dm-rq, it can't be mounted any more, and usually user don't use the device
>>>>> directly and by dm-mpath at the same time.
>>>>
>>>> Even if it doesn't happen for a normal dm setup, it is a case that
>>>> needs to be handled. The request allocation is just one example of
>>>> a wider scope resource that can be unavailable. If the driver returns
>>>> NO_DEV_RESOURCE (or whatever name), it will be a possibility that
>>>> the device itself is currently idle.
>>>
>>> How would a driver's resources be exhausted yet the device is idle (so
>>> as not to be able to benefit from RESTART)?
>>
>> I've outlined a number of these examples already. Another case might be:
>>
>> 1) Device is idle
>> 2) Device gets request
>> 3) Device attempts to DMA map
>> 4) DMA map fails because the IOMMU is out of space (nic is using it all)
>> 5) Device returns STS_RESOURCE
>> 6) Queue is marked as needing a restart
>>
>> All's well, except there is no IO on this device that will notice the
>> restart bit and retry the operation.
>>
>> Replace IOMMU failure with any other resource that the driver might need
>> for an IO, which isn't tied to a device specific resource.
>> blk_get_request() on dm is an example, as is any allocation failure
>> occurring in the queue IO path for the driver.
> 
> Yeah, if we decide to take the approach of introducing NO_DEV_RESOURCE, all
> the current STS_RESOURCE for non-device resource allocation(kmalloc, dma
> map, get_request, ...) should be converted to NO_DEV_RESOURCE.
> 
> And it is a generic issue, which need generic solution.

Precisely.

> Seems running queue after arbitrary in this case is the only way we
> thought of, or other solutions?

I think that is the only solution. If it's a frequent enough occurence
to cause performance issues, then it's likely down to a specific
resource shortage, and we can tackle that independently (we need to,
since each of those will need a specialized solution).

> If the decision is made, let's make/review the patch, :-)

Let 'er rip.
Ming Lei Jan. 19, 2018, 5:20 p.m. UTC | #27
On Fri, Jan 19, 2018 at 10:09:11AM -0700, Jens Axboe wrote:
> On 1/19/18 10:05 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:52:32AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:47 AM, Mike Snitzer wrote:
> >>> On Fri, Jan 19 2018 at 11:41am -0500,
> >>> Jens Axboe <axboe@kernel.dk> wrote:
> >>>
> >>>> On 1/19/18 9:37 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >>>>>> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>>>>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>>>>
> >>>>>> There are no pending requests for this case, nothing to restart the
> >>>>>> queue. When you fail that blk_get_request(), you are idle, nothing
> >>>>>> is pending.
> >>>>>
> >>>>> I think we needn't worry about that, once a device is attached to
> >>>>> dm-rq, it can't be mounted any more, and usually user don't use the device
> >>>>> directly and by dm-mpath at the same time.
> >>>>
> >>>> Even if it doesn't happen for a normal dm setup, it is a case that
> >>>> needs to be handled. The request allocation is just one example of
> >>>> a wider scope resource that can be unavailable. If the driver returns
> >>>> NO_DEV_RESOURCE (or whatever name), it will be a possibility that
> >>>> the device itself is currently idle.
> >>>
> >>> How would a driver's resources be exhausted yet the device is idle (so
> >>> as not to be able to benefit from RESTART)?
> >>
> >> I've outlined a number of these examples already. Another case might be:
> >>
> >> 1) Device is idle
> >> 2) Device gets request
> >> 3) Device attempts to DMA map
> >> 4) DMA map fails because the IOMMU is out of space (nic is using it all)
> >> 5) Device returns STS_RESOURCE
> >> 6) Queue is marked as needing a restart
> >>
> >> All's well, except there is no IO on this device that will notice the
> >> restart bit and retry the operation.
> >>
> >> Replace IOMMU failure with any other resource that the driver might need
> >> for an IO, which isn't tied to a device specific resource.
> >> blk_get_request() on dm is an example, as is any allocation failure
> >> occurring in the queue IO path for the driver.
> > 
> > Yeah, if we decide to take the approach of introducing NO_DEV_RESOURCE, all
> > the current STS_RESOURCE for non-device resource allocation(kmalloc, dma
> > map, get_request, ...) should be converted to NO_DEV_RESOURCE.

Or simply introduce BLK_STS_DEV_RESOURCE and convert out of real device
resource into it, this way may be much easy to do.
Jens Axboe Jan. 19, 2018, 5:38 p.m. UTC | #28
On 1/19/18 9:37 AM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
>> On 1/19/18 9:26 AM, Ming Lei wrote:
>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>>>> On 1/19/18 9:05 AM, Ming Lei wrote:
>>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>>>>>> resource are we running out of?
>>>>>>>>>
>>>>>>>>> It is from blk_get_request(underlying queue), see
>>>>>>>>> multipath_clone_and_map().
>>>>>>>>
>>>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>>>>>> quite possible that this situation can happen. Two potential solutions
>>>>>>>> I see:
>>>>>>>>
>>>>>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>>>>>    notified when the scarce resource becomes available. It would not
>>>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>>>>>
>>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>>>>>    allocation. I haven't read the dm code to know if this is a
>>>>>>>>    possibility or not.
>>>>>>>>
>>>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>>>>>> queue head, retry, and bail if that also fails. Connecting the
>>>>>>>> scarce resource and the consumer is the only way to really fix
>>>>>>>> this, without bogus arbitrary delays.
>>>>>>>
>>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
>>>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
>>>>>>> resource should fix this issue.
>>>>>>
>>>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
>>>>>> down the dm device by some random amount.
>>>>>>
>>>>>> A simple test case would be to have a null_blk device with a queue depth
>>>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>>>>>> that does IO to the underlying device, and one that does IO to the dm
>>>>>> device. If the job on the dm device runs substantially slower than the
>>>>>> one to the underlying device, then the problem isn't really fixed.
>>>>>
>>>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
>>>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
>>>>> may be slower? Because both two IO contexts call same get_request(), and
>>>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
>>>>> underlying queue, without io scheduler involved.
>>>>
>>>> Because if you lose the race for getting the request, you'll have some
>>>> arbitrary delay before trying again, potentially. Compared to the direct
>>>
>>> But the restart still works, one request is completed, then the queue
>>> is return immediately because we use mod_delayed_work_on(0), so looks
>>> no such issue.
>>
>> There are no pending requests for this case, nothing to restart the
>> queue. When you fail that blk_get_request(), you are idle, nothing
>> is pending.
> 
> I think we needn't worry about that, once a device is attached to
> dm-rq, it can't be mounted any more, and usually user don't use the device
> directly and by dm-mpath at the same time.

Here's an example of that, using my current block tree (merged into
master).  The setup is dm-mpath on top of null_blk, the latter having
just a single request. Both are mq devices.

Fio direct 4k random reads on dm_mq: ~250K iops

Start dd on underlying device (or partition on same device), just doing
sequential reads.

Fio direct 4k random reads on dm_mq with dd running: 9 iops

No schedulers involved.

https://i.imgur.com/WTDnnwE.gif
Ming Lei Jan. 19, 2018, 6:24 p.m. UTC | #29
On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>>>>>>>> resource are we running out of?
> >>>>>>>>>
> >>>>>>>>> It is from blk_get_request(underlying queue), see
> >>>>>>>>> multipath_clone_and_map().
> >>>>>>>>
> >>>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
> >>>>>>>> quite possible that this situation can happen. Two potential solutions
> >>>>>>>> I see:
> >>>>>>>>
> >>>>>>>> 1) As described earlier in this thread, having a mechanism for being
> >>>>>>>>    notified when the scarce resource becomes available. It would not
> >>>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
> >>>>>>>>
> >>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>>>>>    allocation. I haven't read the dm code to know if this is a
> >>>>>>>>    possibility or not.
> >>>>>>>>
> >>>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
> >>>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
> >>>>>>>> queue head, retry, and bail if that also fails. Connecting the
> >>>>>>>> scarce resource and the consumer is the only way to really fix
> >>>>>>>> this, without bogus arbitrary delays.
> >>>>>>>
> >>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
> >>>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> >>>>>>> resource should fix this issue.
> >>>>>>
> >>>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
> >>>>>> down the dm device by some random amount.
> >>>>>>
> >>>>>> A simple test case would be to have a null_blk device with a queue depth
> >>>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
> >>>>>> that does IO to the underlying device, and one that does IO to the dm
> >>>>>> device. If the job on the dm device runs substantially slower than the
> >>>>>> one to the underlying device, then the problem isn't really fixed.
> >>>>>
> >>>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
> >>>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
> >>>>> may be slower? Because both two IO contexts call same get_request(), and
> >>>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
> >>>>> underlying queue, without io scheduler involved.
> >>>>
> >>>> Because if you lose the race for getting the request, you'll have some
> >>>> arbitrary delay before trying again, potentially. Compared to the direct
> >>>
> >>> But the restart still works, one request is completed, then the queue
> >>> is return immediately because we use mod_delayed_work_on(0), so looks
> >>> no such issue.
> >>
> >> There are no pending requests for this case, nothing to restart the
> >> queue. When you fail that blk_get_request(), you are idle, nothing
> >> is pending.
> > 
> > I think we needn't worry about that, once a device is attached to
> > dm-rq, it can't be mounted any more, and usually user don't use the device
> > directly and by dm-mpath at the same time.
> 
> Here's an example of that, using my current block tree (merged into
> master).  The setup is dm-mpath on top of null_blk, the latter having
> just a single request. Both are mq devices.
> 
> Fio direct 4k random reads on dm_mq: ~250K iops
> 
> Start dd on underlying device (or partition on same device), just doing
> sequential reads.
> 
> Fio direct 4k random reads on dm_mq with dd running: 9 iops
> 
> No schedulers involved.
> 
> https://i.imgur.com/WTDnnwE.gif

This DM specific issue might be addressed by applying notifier_chain
(or similar mechanism)between the two queues, will think about the
details tomorrow.
Mike Snitzer Jan. 19, 2018, 6:33 p.m. UTC | #30
On Fri, Jan 19 2018 at 12:38pm -0500,
Jens Axboe <axboe@kernel.dk> wrote:

> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >>
> >> There are no pending requests for this case, nothing to restart the
> >> queue. When you fail that blk_get_request(), you are idle, nothing
> >> is pending.
> > 
> > I think we needn't worry about that, once a device is attached to
> > dm-rq, it can't be mounted any more, and usually user don't use the device
> > directly and by dm-mpath at the same time.
> 
> Here's an example of that, using my current block tree (merged into
> master).  The setup is dm-mpath on top of null_blk, the latter having
> just a single request. Both are mq devices.
> 
> Fio direct 4k random reads on dm_mq: ~250K iops
> 
> Start dd on underlying device (or partition on same device), just doing
> sequential reads.
> 
> Fio direct 4k random reads on dm_mq with dd running: 9 iops
> 
> No schedulers involved.
> 
> https://i.imgur.com/WTDnnwE.gif

FYI, your tree doesn't have these dm-4.16 changes (which are needed to
make Ming's blk-mq chnages that are in your tree, commit 396eaf21e et
al, have Ming's desired effect on DM):

https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=050af08ffb1b62af69196d61c22a0755f9a3cdbd
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=459b54019cfeb7330ed4863ad40f78489e0ff23d
https://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=dm-4.16&id=ec3eaf9a673106f66606896aed6ddd20180b02ec

Fact that you're seeing such shit results without dm-4.16 commit
ec3eaf9a67 (that reverts older commit 6077c2d706) means: yeap, this
is really awful, let's fix it!  But it is a different flavor of awful
because the dm-rq.c:map_request() will handle the DM_MAPIO_DELAY_REQUEUE
result from the null_blk's blk_get_request() failure using
dm_mq_delay_requeue_request() against the dm-mq mpath device:

        blk_mq_requeue_request(rq, false);
        __dm_mq_kick_requeue_list(rq->q, msecs);

So begs the question: why are we stalling _exactly_?  (you may have it
all figured out, as your gif implies.. but I'm not there yet).

Might be interesting to see how your same test behaves without all of
the churn we've staged for 4.16, e.g. against v4.15-rc8

Mike

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Bart Van Assche Jan. 19, 2018, 7:47 p.m. UTC | #31
On Fri, 2018-01-19 at 15:34 +0800, Ming Lei wrote:
> Could you explain a bit when SCSI target replies with BUSY very often?
> 
> Inside initiator, we have limited the max per-LUN requests and per-host
> requests already before calling .queue_rq().

That's correct. However, when a SCSI initiator and target system are
communicating with each other there is no guarantee that initiator and target
queue depth have been tuned properly. If the initiator queue depth and the
number of requests that can be in flight according to the network protocol
are both larger than the target queue depth and if the target system uses
relatively slow storage (e.g. harddisks) then it can happen that the target
replies with BUSY often.

The Linux iSCSI initiator limits MaxOutstandingR2T (the number of requests
an initiator may sent without having received an answer from the target) to
one so I don't think this can happen when using iSCSI/TCP.

With the SRP initiator however the maximum requests that can be in flight
between initiator and target depends on the number of credits that were
negotiated during login between initiator and target. Some time ago I modified
the SRP initiator such that it limits the initiator queue depth to the number
of SRP credits minus one (for task management). That resulted in a performance
improvement due to fewer BUSY conditions at the initiator side (see also commit
7ade400aba9a ("IB/srp: Reduce number of BUSY conditions")). Another patch for
the SCST SRP target driver limited the number of SRP credits to the queue depth
of the block device at the target side. I'm referring to the following code:
ch->rq_size = min(MAX_SRPT_RQ_SIZE, scst_get_max_lun_commands(NULL, 0)) (I have
not yet had the time to port this change to LIO).

Without such tuning across initiator and target it can happen often that the
target system sends the reply "BUSY" back to the initiator. I think that's why
there is code in the SCSI core to automatically adjust the initiator queue
depth if the "BUSY" condition is encountered frequently. See also
scsi_track_queue_full().

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
Ming Lei Jan. 19, 2018, 11:52 p.m. UTC | #32
On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
> On 1/19/18 9:37 AM, Ming Lei wrote:
> > On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
> >> On 1/19/18 9:26 AM, Ming Lei wrote:
> >>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
> >>>> On 1/19/18 9:05 AM, Ming Lei wrote:
> >>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
> >>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>>>>>>>> resource are we running out of?
> >>>>>>>>>
> >>>>>>>>> It is from blk_get_request(underlying queue), see
> >>>>>>>>> multipath_clone_and_map().
> >>>>>>>>
> >>>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
> >>>>>>>> quite possible that this situation can happen. Two potential solutions
> >>>>>>>> I see:
> >>>>>>>>
> >>>>>>>> 1) As described earlier in this thread, having a mechanism for being
> >>>>>>>>    notified when the scarce resource becomes available. It would not
> >>>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
> >>>>>>>>
> >>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>>>>>    allocation. I haven't read the dm code to know if this is a
> >>>>>>>>    possibility or not.
> >>>>>>>>
> >>>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
> >>>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
> >>>>>>>> queue head, retry, and bail if that also fails. Connecting the
> >>>>>>>> scarce resource and the consumer is the only way to really fix
> >>>>>>>> this, without bogus arbitrary delays.
> >>>>>>>
> >>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
> >>>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
> >>>>>>> resource should fix this issue.
> >>>>>>
> >>>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
> >>>>>> down the dm device by some random amount.
> >>>>>>
> >>>>>> A simple test case would be to have a null_blk device with a queue depth
> >>>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
> >>>>>> that does IO to the underlying device, and one that does IO to the dm
> >>>>>> device. If the job on the dm device runs substantially slower than the
> >>>>>> one to the underlying device, then the problem isn't really fixed.
> >>>>>
> >>>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
> >>>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
> >>>>> may be slower? Because both two IO contexts call same get_request(), and
> >>>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
> >>>>> underlying queue, without io scheduler involved.
> >>>>
> >>>> Because if you lose the race for getting the request, you'll have some
> >>>> arbitrary delay before trying again, potentially. Compared to the direct
> >>>
> >>> But the restart still works, one request is completed, then the queue
> >>> is return immediately because we use mod_delayed_work_on(0), so looks
> >>> no such issue.
> >>
> >> There are no pending requests for this case, nothing to restart the
> >> queue. When you fail that blk_get_request(), you are idle, nothing
> >> is pending.
> > 
> > I think we needn't worry about that, once a device is attached to
> > dm-rq, it can't be mounted any more, and usually user don't use the device
> > directly and by dm-mpath at the same time.
> 
> Here's an example of that, using my current block tree (merged into
> master).  The setup is dm-mpath on top of null_blk, the latter having
> just a single request. Both are mq devices.
> 
> Fio direct 4k random reads on dm_mq: ~250K iops
> 
> Start dd on underlying device (or partition on same device), just doing
> sequential reads.
> 
> Fio direct 4k random reads on dm_mq with dd running: 9 iops
> 
> No schedulers involved.
> 
> https://i.imgur.com/WTDnnwE.gif

If null_blk's timer mode is used with a bit delay introduced, I guess
the effect from direct access to underlying queue shouldn't be so
serious. But it still won't be good as direct access.

Another way may be to introduce a variants blk_get_request(), such as
blk_get_request_with_notify(), then pass the current dm-rq's hctx to
it, and use the tag's waitqueue to handle that. But the change can be
a bit big.
Ming Lei Jan. 19, 2018, 11:57 p.m. UTC | #33
On Fri, Jan 19, 2018 at 09:23:35AM -0700, Jens Axboe wrote:
> On 1/19/18 9:13 AM, Mike Snitzer wrote:
> > On Fri, Jan 19 2018 at 10:48am -0500,
> > Jens Axboe <axboe@kernel.dk> wrote:
> > 
> >> On 1/19/18 8:40 AM, Ming Lei wrote:
> >>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
> >>>>>> resource are we running out of?
> >>>>>
> >>>>> It is from blk_get_request(underlying queue), see
> >>>>> multipath_clone_and_map().
> >>>>
> >>>> That's what I thought. So for a low queue depth underlying queue, it's
> >>>> quite possible that this situation can happen. Two potential solutions
> >>>> I see:
> >>>>
> >>>> 1) As described earlier in this thread, having a mechanism for being
> >>>>    notified when the scarce resource becomes available. It would not
> >>>>    be hard to tap into the existing sbitmap wait queue for that.
> >>>>
> >>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
> >>>>    allocation. I haven't read the dm code to know if this is a
> >>>>    possibility or not.
> > 
> > Right, #2 is _not_ the way forward.  Historically request-based DM used
> > its own mempool for requests, this was to be able to have some measure
> > of control and resiliency in the face of low memory conditions that
> > might be affecting the broader system.
> > 
> > Then Christoph switched over to adding per-request-data; which ushered
> > in the use of blk_get_request using ATOMIC allocations.  I like the
> > result of that line of development.  But taking the next step of setting
> > BLK_MQ_F_BLOCKING is highly unfortunate (especially in that this
> > dm-mpath.c code is common to old .request_fn and blk-mq, at least the
> > call to blk_get_request is).  Ultimately dm-mpath like to avoid blocking
> > for a request because for this dm-mpath device we have multiple queues
> > to allocate from if need be (provided we have an active-active storage
> > network topology).
> 
> If you can go to multiple devices, obviously it should not block on a
> single device. That's only true for the case where you can only go to
> one device, blocking at that point would probably be fine. Or if all
> your paths are busy, then blocking would also be OK.

Introducing one extra block point will hurt AIO performance, in which
there is usually much less jobs/processes to submit IO.
Jens Axboe Jan. 20, 2018, 4:27 a.m. UTC | #34
On 1/19/18 4:52 PM, Ming Lei wrote:
> On Fri, Jan 19, 2018 at 10:38:41AM -0700, Jens Axboe wrote:
>> On 1/19/18 9:37 AM, Ming Lei wrote:
>>> On Fri, Jan 19, 2018 at 09:27:46AM -0700, Jens Axboe wrote:
>>>> On 1/19/18 9:26 AM, Ming Lei wrote:
>>>>> On Fri, Jan 19, 2018 at 09:19:24AM -0700, Jens Axboe wrote:
>>>>>> On 1/19/18 9:05 AM, Ming Lei wrote:
>>>>>>> On Fri, Jan 19, 2018 at 08:48:55AM -0700, Jens Axboe wrote:
>>>>>>>> On 1/19/18 8:40 AM, Ming Lei wrote:
>>>>>>>>>>>> Where does the dm STS_RESOURCE error usually come from - what's exact
>>>>>>>>>>>> resource are we running out of?
>>>>>>>>>>>
>>>>>>>>>>> It is from blk_get_request(underlying queue), see
>>>>>>>>>>> multipath_clone_and_map().
>>>>>>>>>>
>>>>>>>>>> That's what I thought. So for a low queue depth underlying queue, it's
>>>>>>>>>> quite possible that this situation can happen. Two potential solutions
>>>>>>>>>> I see:
>>>>>>>>>>
>>>>>>>>>> 1) As described earlier in this thread, having a mechanism for being
>>>>>>>>>>    notified when the scarce resource becomes available. It would not
>>>>>>>>>>    be hard to tap into the existing sbitmap wait queue for that.
>>>>>>>>>>
>>>>>>>>>> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>>>>>>>>>>    allocation. I haven't read the dm code to know if this is a
>>>>>>>>>>    possibility or not.
>>>>>>>>>>
>>>>>>>>>> I'd probably prefer #1. It's a classic case of trying to get the
>>>>>>>>>> request, and if it fails, add ourselves to the sbitmap tag wait
>>>>>>>>>> queue head, retry, and bail if that also fails. Connecting the
>>>>>>>>>> scarce resource and the consumer is the only way to really fix
>>>>>>>>>> this, without bogus arbitrary delays.
>>>>>>>>>
>>>>>>>>> Right, as I have replied to Bart, using mod_delayed_work_on() with
>>>>>>>>> returning BLK_STS_NO_DEV_RESOURCE(or sort of name) for the scarce
>>>>>>>>> resource should fix this issue.
>>>>>>>>
>>>>>>>> It'll fix the forever stall, but it won't really fix it, as we'll slow
>>>>>>>> down the dm device by some random amount.
>>>>>>>>
>>>>>>>> A simple test case would be to have a null_blk device with a queue depth
>>>>>>>> of one, and dm on top of that. Start a fio job that runs two jobs: one
>>>>>>>> that does IO to the underlying device, and one that does IO to the dm
>>>>>>>> device. If the job on the dm device runs substantially slower than the
>>>>>>>> one to the underlying device, then the problem isn't really fixed.
>>>>>>>
>>>>>>> I remembered that I tried this test on scsi-debug & dm-mpath over scsi-debug,
>>>>>>> seems not observed this issue, could you explain a bit why IO over dm-mpath
>>>>>>> may be slower? Because both two IO contexts call same get_request(), and
>>>>>>> in theory dm-mpath should be a bit quicker since it uses direct issue for
>>>>>>> underlying queue, without io scheduler involved.
>>>>>>
>>>>>> Because if you lose the race for getting the request, you'll have some
>>>>>> arbitrary delay before trying again, potentially. Compared to the direct
>>>>>
>>>>> But the restart still works, one request is completed, then the queue
>>>>> is return immediately because we use mod_delayed_work_on(0), so looks
>>>>> no such issue.
>>>>
>>>> There are no pending requests for this case, nothing to restart the
>>>> queue. When you fail that blk_get_request(), you are idle, nothing
>>>> is pending.
>>>
>>> I think we needn't worry about that, once a device is attached to
>>> dm-rq, it can't be mounted any more, and usually user don't use the device
>>> directly and by dm-mpath at the same time.
>>
>> Here's an example of that, using my current block tree (merged into
>> master).  The setup is dm-mpath on top of null_blk, the latter having
>> just a single request. Both are mq devices.
>>
>> Fio direct 4k random reads on dm_mq: ~250K iops
>>
>> Start dd on underlying device (or partition on same device), just doing
>> sequential reads.
>>
>> Fio direct 4k random reads on dm_mq with dd running: 9 iops
>>
>> No schedulers involved.
>>
>> https://i.imgur.com/WTDnnwE.gif
> 
> If null_blk's timer mode is used with a bit delay introduced, I guess
> the effect from direct access to underlying queue shouldn't be so
> serious. But it still won't be good as direct access.

Doesn't matter if it's used at the default of 10usec completion
latency, or inline complete. Same result, I ran both.

> Another way may be to introduce a variants blk_get_request(), such as
> blk_get_request_with_notify(), then pass the current dm-rq's hctx to
> it, and use the tag's waitqueue to handle that. But the change can be
> a bit big.

Yes, that's exactly the solution I suggested both yesterday and today.
Bart Van Assche Jan. 29, 2018, 10:37 p.m. UTC | #35
On 01/19/18 07:24, Jens Axboe wrote:
> That's what I thought. So for a low queue depth underlying queue, it's
> quite possible that this situation can happen. Two potential solutions
> I see:
> 
> 1) As described earlier in this thread, having a mechanism for being
>     notified when the scarce resource becomes available. It would not
>     be hard to tap into the existing sbitmap wait queue for that.
> 
> 2) Have dm set BLK_MQ_F_BLOCKING and just sleep on the resource
>     allocation. I haven't read the dm code to know if this is a
>     possibility or not.
> 
> I'd probably prefer #1. It's a classic case of trying to get the
> request, and if it fails, add ourselves to the sbitmap tag wait
> queue head, retry, and bail if that also fails. Connecting the
> scarce resource and the consumer is the only way to really fix
> this, without bogus arbitrary delays.

(replying to an e-mail from ten days ago)

Implementing a notification mechanism for all cases in which 
blk_insert_cloned_request() returns BLK_STS_RESOURCE today would require 
a lot of work. If e.g. a SCSI LLD returns one of the SCSI_MLQUEUE_*_BUSY 
return codes from its .queuecommand() implementation then the SCSI core 
will translate that return code into BLK_STS_RESOURCE. From scsi_queue_rq():

	reason = scsi_dispatch_cmd(cmd);
	if (reason) {
		scsi_set_blocked(cmd, reason);
		ret = BLK_STS_RESOURCE;
		goto out_dec_host_busy;
	}

In other words, implementing a notification mechanism for all cases in 
which blk_insert_cloned_request() can return BLK_STS_RESOURCE would 
require to modify all SCSI LLDs.

Bart.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
diff mbox

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 6e3f77829dcc..4d4af8d712da 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -896,6 +896,85 @@  static void blk_mq_terminate_expired(struct blk_mq_hw_ctx *hctx,
 		blk_mq_rq_timed_out(rq, reserved);
 }
 
+struct hctx_busy_data {
+	struct blk_mq_hw_ctx *hctx;
+	bool reserved;
+	bool busy;
+};
+
+static bool check_busy_hctx(struct sbitmap *sb, unsigned int bitnr, void *data)
+{
+	struct hctx_busy_data *busy_data = data;
+	struct blk_mq_hw_ctx *hctx = busy_data->hctx;
+	struct request *rq;
+
+	if (busy_data->reserved)
+		bitnr += hctx->tags->nr_reserved_tags;
+
+	rq = hctx->tags->static_rqs[bitnr];
+	if (blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) {
+		busy_data->busy = true;
+		return false;
+	}
+	return true;
+}
+
+/* Check if there is any in-flight request */
+static bool blk_mq_hctx_is_busy(struct blk_mq_hw_ctx *hctx)
+{
+	struct hctx_busy_data data = {
+		.hctx = hctx,
+		.busy = false,
+		.reserved = true,
+	};
+
+	sbitmap_for_each_set(&hctx->tags->breserved_tags.sb,
+			check_busy_hctx, &data);
+	if (data.busy)
+		return true;
+
+	data.reserved = false;
+	sbitmap_for_each_set(&hctx->tags->bitmap_tags.sb,
+			check_busy_hctx, &data);
+	if (data.busy)
+		return true;
+
+	return false;
+}
+
+static void blk_mq_fixup_restart(struct blk_mq_hw_ctx *hctx)
+{
+	if (test_bit(BLK_MQ_S_SCHED_RESTART, &hctx->state)) {
+		bool busy;
+
+		/*
+		 * If this hctx is still marked as RESTART, and there
+		 * isn't any in-flight requests, we have to run queue
+		 * here to prevent IO from hanging.
+		 *
+		 * BLK_STS_RESOURCE can be returned from driver when any
+		 * resource is running out of. And the resource may not
+		 * be related with tags, such as kmalloc(GFP_ATOMIC), when
+		 * queue is idle under this kind of BLK_STS_RESOURCE, restart
+		 * can't work any more, then IO hang may be caused.
+		 *
+		 * The counter-pair of the following barrier is the one
+		 * in blk_mq_put_driver_tag() after returning BLK_STS_RESOURCE
+		 * from ->queue_rq().
+		 */
+		smp_mb();
+
+		busy = blk_mq_hctx_is_busy(hctx);
+		if (!busy) {
+			printk(KERN_WARNING "blk-mq: fixup RESTART\n");
+			printk(KERN_WARNING "\t If this message is shown"
+			       " a bit often, please report the issue to"
+			       " linux-block@vger.kernel.org\n");
+			blk_mq_run_hw_queue(hctx, true);
+		}
+	}
+}
+
 static void blk_mq_timeout_work(struct work_struct *work)
 {
 	struct request_queue *q =
@@ -966,8 +1045,10 @@  static void blk_mq_timeout_work(struct work_struct *work)
 		 */
 		queue_for_each_hw_ctx(q, hctx, i) {
 			/* the hctx may be unmapped, so check it here */
-			if (blk_mq_hw_queue_mapped(hctx))
+			if (blk_mq_hw_queue_mapped(hctx)) {
 				blk_mq_tag_idle(hctx);
+				blk_mq_fixup_restart(hctx);
+			}
 		}
 	}
 	blk_queue_exit(q);