Message ID: 20200330074856.2.I28278ef8ea27afc0ec7e597752a6d4e58c16176f@changeid
State: New, archived
Series: blk-mq: Fix two causes of IO stalls found in reboot testing
On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> It is possible for two threads to be running
> blk_mq_do_dispatch_sched() at the same time with the same "hctx".
> This is because there can be more than one caller to
> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't
> prevent more than one thread from entering.
>
> If more than one thread is running blk_mq_do_dispatch_sched() at the
> same time with the same "hctx", they may have contention acquiring
> budget. The blk_mq_get_dispatch_budget() can eventually translate
> into scsi_mq_get_budget(). If the device's "queue_depth" is 1 (not
> uncommon) then only one of the two threads will be the one to
> increment "device_busy" to 1 and get the budget.
>
> The losing thread will break out of blk_mq_do_dispatch_sched() and
> will stop dispatching requests. The assumption is that when more
> budget is available later (when existing transactions finish) the
> queue will be kicked again, perhaps in scsi_end_request().
>
> The winning thread now has budget and can go on to call
> dispatch_request(). If dispatch_request() returns NULL here then we
> have a potential problem. Specifically we'll now call

I guess this problem should be BFQ specific. There are definitely
requests in the BFQ queue wrt this hctx. However, it looks like this
request is only available from another loser thread, and it won't be
retrieved in the winning thread via e->type->ops.dispatch_request().

Just wondering why BFQ is implemented in this way?

> blk_mq_put_dispatch_budget() which translates into
> scsi_mq_put_budget(). That will mark the device as no longer busy but
> doesn't do anything to kick the queue. This violates the assumption
> that the queue would be kicked when more budget was available.
>
> Pictorially:
>
> Thread A                          Thread B
> ================================= ==================================
> blk_mq_get_dispatch_budget() => 1
> dispatch_request() => NULL
>                                   blk_mq_get_dispatch_budget() => 0
>                                   // because Thread A marked
>                                   // "device_busy" in scsi_device
> blk_mq_put_dispatch_budget()
>
> The above case was observed in reboot tests and caused a task to hang
> forever waiting for IO to complete. Traces showed that in fact two
> tasks were running blk_mq_do_dispatch_sched() at the same time with
> the same "hctx". The task that got the budget did in fact see
> dispatch_request() return NULL. Both tasks returned and the system
> went on for several minutes (until the hung task delay kicked in)
> without the given "hctx" showing up again in traces.
>
> Let's attempt to fix this problem by detecting budget contention. If
> we're in the SCSI code's put_budget() function and we saw that someone
> else might have wanted the budget we got then we'll kick the queue.
>
> The mechanism of kicking due to budget contention has the potential to
> overcompensate and kick the queue more than strictly necessary, but it
> shouldn't hurt.
>
> Signed-off-by: Douglas Anderson <dianders@chromium.org>
> ---
>
>  drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
>  drivers/scsi/scsi_scan.c   |  1 +
>  include/scsi/scsi_device.h |  2 ++
>  3 files changed, 27 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 610ee41fa54c..0530da909995 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
>  	rcu_read_unlock();
>  }
>
> +static void scsi_device_dec_busy(struct scsi_device *sdev)
> +{
> +	bool was_contention;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&sdev->budget_lock, flags);
> +	atomic_dec(&sdev->device_busy);
> +	was_contention = sdev->budget_contention;
> +	sdev->budget_contention = false;
> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);
> +
> +	if (was_contention)
> +		blk_mq_run_hw_queues(sdev->request_queue, true);
> +}
> +
>  void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>  {
>  	struct Scsi_Host *shost = sdev->host;
> @@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
>  	if (starget->can_queue > 0)
>  		atomic_dec(&starget->target_busy);
>
> -	atomic_dec(&sdev->device_busy);
> +	scsi_device_dec_busy(sdev);
>  }
>
>  static void scsi_kick_queue(struct request_queue *q)
> @@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
>  	struct request_queue *q = hctx->queue;
>  	struct scsi_device *sdev = q->queuedata;
>
> -	atomic_dec(&sdev->device_busy);
> +	scsi_device_dec_busy(sdev);
>  }
>
>  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
>  {
>  	struct request_queue *q = hctx->queue;
>  	struct scsi_device *sdev = q->queuedata;
> +	unsigned long flags;
>
> -	if (scsi_dev_queue_ready(q, sdev))
> +	spin_lock_irqsave(&sdev->budget_lock, flags);
> +	if (scsi_dev_queue_ready(q, sdev)) {
> +		spin_unlock_irqrestore(&sdev->budget_lock, flags);
>  		return true;
> +	}
> +	sdev->budget_contention = true;
> +	spin_unlock_irqrestore(&sdev->budget_lock, flags);

No, it really hurts performance by adding one per-sdev spinlock in the
fast path, and we actually tried to kill the atomic variable of
'sdev->device_busy' for high-performance HBAs.

Thanks,
Ming
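For reference while reading the thread: the loop under discussion lives
in block/blk-mq-sched.c. Below is a simplified sketch of
blk_mq_do_dispatch_sched() as it looked around v5.6 (restart handling
and unrelated paths trimmed), annotated with where the two threads from
the pictorial land. Treat it as an illustration, not the exact source.

	/*
	 * Simplified sketch of blk_mq_do_dispatch_sched()
	 * (block/blk-mq-sched.c, circa v5.6), trimmed to the path
	 * discussed in this thread.
	 */
	static void blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
	{
		struct request_queue *q = hctx->queue;
		struct elevator_queue *e = q->elevator;
		LIST_HEAD(rq_list);

		do {
			struct request *rq;

			if (e->type->ops.has_work &&
			    !e->type->ops.has_work(hctx))
				break;

			/* Thread B loses here while Thread A holds the only budget. */
			if (!blk_mq_get_dispatch_budget(hctx))
				break;

			rq = e->type->ops.dispatch_request(hctx);
			if (!rq) {
				/*
				 * Thread A lands here with budget in hand but
				 * no request: the budget is returned, yet
				 * nothing kicks the queue on behalf of the
				 * thread that lost the budget race above.
				 */
				blk_mq_put_dispatch_budget(hctx);
				break;
			}

			/* The request now owns the budget until queued to the LLD. */
			list_add(&rq->queuelist, &rq_list);
		} while (blk_mq_dispatch_rq_list(q, &rq_list, true));
	}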
Hi,

On Mon, Mar 30, 2020 at 6:41 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > It is possible for two threads to be running
> > blk_mq_do_dispatch_sched() at the same time with the same "hctx".

[...]

> > The winning thread now has budget and can go on to call
> > dispatch_request(). If dispatch_request() returns NULL here then we
> > have a potential problem. Specifically we'll now call
>
> I guess this problem should be BFQ specific. There are definitely
> requests in the BFQ queue wrt this hctx. However, it looks like this
> request is only available from another loser thread, and it won't be
> retrieved in the winning thread via e->type->ops.dispatch_request().
>
> Just wondering why BFQ is implemented in this way?

Paolo can maybe comment why.

...but even if BFQ wanted to try to change this, I think it's
impossible to fully close the race. There is no locking between the
call to has_work() and dispatch_request() and there can be two (or
more) threads running the code at the same time. Without some type of
locking I think it will always be possible for dispatch_request() to
return NULL. Are we OK with code that works most of the time but
still has a race? ...or did I misunderstand how this all works?

[...]

> > +	sdev->budget_contention = true;
> > +	spin_unlock_irqrestore(&sdev->budget_lock, flags);
>
> No, it really hurts performance by adding one per-sdev spinlock in the
> fast path, and we actually tried to kill the atomic variable of
> 'sdev->device_busy' for high-performance HBAs.

It might be slow, but correctness trumps speed, right? I tried to do
this with a 2nd atomic and without the spinlock but I kept having a
hole one way or the other. I ended up just trying to keep the
spinlock section as small as possible.

If you know of a way to get rid of the spinlock that still makes the
code correct, I'd be super interested! :-) I certainly won't claim
that it's impossible to do, only that I didn't manage to come up with
a way.

-Doug
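Doug's locking point is visible in the scheduler hook pair itself:
nothing in the interface ties a has_work() answer to the
dispatch_request() call that follows it. An abbreviated, annotated
excerpt is shown below; the real hooks live in struct elevator_mq_ops
in include/linux/elevator.h, and the struct name here is shortened for
illustration.

	/* Abbreviated from struct elevator_mq_ops, include/linux/elevator.h. */
	struct elevator_mq_ops_excerpt {
		/*
		 * Called locklessly as a hint; "true" here is not a
		 * promise that a later dispatch_request() will produce
		 * a request.
		 */
		bool (*has_work)(struct blk_mq_hw_ctx *);

		/*
		 * May return NULL even right after has_work() said true,
		 * e.g. when BFQ is plugging dispatch for its in-service
		 * queue, or another thread raced in between the two calls.
		 */
		struct request *(*dispatch_request)(struct blk_mq_hw_ctx *);
	};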
On Mon, Mar 30, 2020 at 07:15:54PM -0700, Doug Anderson wrote:
> Hi,
>
> On Mon, Mar 30, 2020 at 6:41 PM Ming Lei <ming.lei@redhat.com> wrote:
> >
> > On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
> > > It is possible for two threads to be running
> > > blk_mq_do_dispatch_sched() at the same time with the same "hctx".

[...]

> > Just wondering why BFQ is implemented in this way?
>
> Paolo can maybe comment why.
>
> ...but even if BFQ wanted to try to change this, I think it's
> impossible to fully close the race. There is no locking between the
> call to has_work() and dispatch_request() and there can be two (or
> more) threads running the code at the same time. Without some type of
> locking I think it will always be possible for dispatch_request() to
> return NULL. Are we OK with code that works most of the time but
> still has a race? ...or did I misunderstand how this all works?

Wrt. dispatching requests from hctx->dispatch, there really is one
race, given that scsi's run queue from scsi_end_request() may not see
that request. It looks like that is what patch 1 is addressing.

However, for this issue there isn't a race, given that when we get the
budget, the request isn't dequeued from BFQ yet. If budget is assigned
successfully, then either the request is dispatched to the LLD
successfully, or STS_RESOURCE is triggered, or we run out of driver
tags; in the latter two cases the run queue is guaranteed to be started
for handling another dispatch path which ran out of budget.

That is why I raise the question of why BFQ dispatches requests in this
way.

[...]

> > No, it really hurts performance by adding one per-sdev spinlock in the
> > fast path, and we actually tried to kill the atomic variable of
> > 'sdev->device_busy' for high-performance HBAs.
>
> It might be slow, but correctness trumps speed, right? I tried to do

Correctness doesn't have to cause performance regression, does it?

> this with a 2nd atomic and without the spinlock but I kept having a
> hole one way or the other. I ended up just trying to keep the
> spinlock section as small as possible.
>
> If you know of a way to get rid of the spinlock that still makes the
> code correct, I'd be super interested! :-) I certainly won't claim
> that it's impossible to do, only that I didn't manage to come up with
> a way.

As I mentioned, if BFQ didn't dispatch requests in this special way,
there wouldn't be such a race.

Thanks,
Ming
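The completion-path re-run that Ming's argument relies on is the one at
the tail of scsi_end_request(). Paraphrased from
drivers/scsi/scsi_lib.c circa v5.6 (a fragment, not the full function):

	/*
	 * Paraphrased tail of scsi_end_request(): once a command
	 * completes and its budget has been released, the queue is
	 * re-run so that any thread that earlier failed to get budget
	 * can dispatch.  Note there is no equivalent kick in
	 * scsi_mq_put_budget(), which is the gap Doug's patch targets.
	 */
	if (scsi_target(sdev)->single_lun ||
	    !list_empty(&sdev->host->starved_list))
		kblockd_schedule_work(&sdev->requeue_work);
	else
		blk_mq_run_hw_queues(q, true);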
Hi,

On Mon, Mar 30, 2020 at 7:58 PM Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Mar 30, 2020 at 07:15:54PM -0700, Doug Anderson wrote:

[...]

> Wrt. dispatching requests from hctx->dispatch, there really is one
> race, given that scsi's run queue from scsi_end_request() may not see
> that request. It looks like that is what patch 1 is addressing.

OK, at least I got something right. ;-)

> However, for this issue there isn't a race, given that when we get the
> budget, the request isn't dequeued from BFQ yet. If budget is assigned
> successfully, then either the request is dispatched to the LLD
> successfully, or STS_RESOURCE is triggered, or we run out of driver
> tags; in the latter two cases the run queue is guaranteed to be
> started for handling another dispatch path which ran out of budget.
>
> That is why I raise the question of why BFQ dispatches requests in
> this way.

Ah, I _think_ I see what you mean. So there should be no race because
the "has_work" is just a hint? It's assumed that whichever task gets
the budget will be able to dispatch all the work that's there. Is
that right?

[...]

> > It might be slow, but correctness trumps speed, right? I tried to do
>
> Correctness doesn't have to cause performance regression, does it?

I guess what I'm saying is that if there is a choice between the two
we have to choose correctness. If there is a bug and we don't know of
any way to fix it other than with a fix that regresses performance
then we have to regress performance. I wasn't able to find a way to
fix the bug (as I understood it) without regressing performance, but
I'd be happy if someone else could come up with a way.

> > this with a 2nd atomic and without the spinlock but I kept having a
> > hole one way or the other. I ended up just trying to keep the
> > spinlock section as small as possible.
> >
> > If you know of a way to get rid of the spinlock that still makes the
> > code correct, I'd be super interested! :-) I certainly won't claim
> > that it's impossible to do, only that I didn't manage to come up
> > with a way.
>
> As I mentioned, if BFQ didn't dispatch requests in this special way,
> there wouldn't be such a race.

OK, so I guess this puts it in Paolo's court then. I'm about done
for the evening, but maybe he can comment on it or come up with a fix?

-Doug
> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
>
> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>> It is possible for two threads to be running
>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".

[...]

>> The winning thread now has budget and can go on to call
>> dispatch_request(). If dispatch_request() returns NULL here then we
>> have a potential problem. Specifically we'll now call
>
> I guess this problem should be BFQ specific. There are definitely
> requests in the BFQ queue wrt this hctx. However, it looks like this
> request is only available from another loser thread, and it won't be
> retrieved in the winning thread via e->type->ops.dispatch_request().
>
> Just wondering why BFQ is implemented in this way?
>

BFQ inherited this powerful non-working scheme from CFQ, some time ago.

In more detail: if BFQ has at least one non-empty internal queue, then
it says, of course, that there is work to do. But if the currently
in-service queue is empty, and is expected to receive new I/O, then BFQ
plugs I/O dispatch to enforce service guarantees for the in-service
queue, i.e., BFQ responds NULL to a dispatch request.

It would be very easy to change bfq_has_work so that it returns false
in case the in-service queue is empty, even if there is I/O
backlogged. My only concern is: since everything has worked with the
current scheme for probably 15 years, are we sure that everything is
still ok after we change this scheme?

I'm confident it would be ok, because a timer fires if the in-service
queue does not receive any I/O for too long, and the handler of the
timer invokes blk_mq_run_hw_queues().

Looking forward to your feedback before proposing a change,
Paolo

[...]
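The timer Paolo mentions is BFQ's idle-slice timer: when it expires,
BFQ gives up idling on the in-service queue and asks blk-mq to run the
hardware queues again, which re-enters has_work()/dispatch_request().
The re-kick is roughly this helper, paraphrased from
block/bfq-iosched.c:

	/*
	 * Paraphrased from block/bfq-iosched.c: called from, among
	 * others, the idle-slice timer path to restart dispatching
	 * when BFQ still has queued work.  As Doug's trace below
	 * shows, this re-run can itself fail to get budget and fall
	 * back into the same race.
	 */
	void bfq_schedule_dispatch(struct bfq_data *bfqd)
	{
		if (bfqd->queued != 0) {
			bfq_log(bfqd, "schedule dispatch");
			blk_mq_run_hw_queues(bfqd->queue, true);
		}
	}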
On 3/31/20 12:07 PM, Paolo Valente wrote:
>> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
>>
>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>> It is possible for two threads to be running
>>> blk_mq_do_dispatch_sched() at the same time with the same "hctx".

[...]

>> Just wondering why BFQ is implemented in this way?
>>
>
> BFQ inherited this powerful non-working scheme from CFQ, some time
> ago.
>
> In more detail: if BFQ has at least one non-empty internal queue, then
> it says, of course, that there is work to do. But if the currently
> in-service queue is empty, and is expected to receive new I/O, then
> BFQ plugs I/O dispatch to enforce service guarantees for the
> in-service queue, i.e., BFQ responds NULL to a dispatch request.

What BFQ is doing is fine, IFF it always ensures that the queue is run
at some later time, if it returns "yep I have work" yet returns NULL
when attempting to retrieve that work. Generally this should happen
from subsequent IO completion, or whatever else condition will resolve
the issue that is currently preventing dispatch of that request. Last
resort would be a timer, but that can happen if you're slicing your
scheduling somehow.

> It would be very easy to change bfq_has_work so that it returns false
> in case the in-service queue is empty, even if there is I/O
> backlogged. My only concern is: since everything has worked with the
> current scheme for probably 15 years, are we sure that everything is
> still ok after we change this scheme?

You're comparing apples to oranges, CFQ never worked within the blk-mq
scheduling framework.

That said, I don't think such a change is needed. If we currently have
a hang due to this discrepancy between has_work and gets_work, then it
sounds like we're not always re-running the queue as we should. From
the original patch, the budget putting is not something the scheduler
is involved with. Do we just need to ensure that if we put budget
without having dispatched a request, we need to kick off dispatching
again?
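To make Jens's suggestion concrete: the first function below is roughly
bfq_has_work() as it stood at the time; the second is a hypothetical
stricter variant in the "would I actually dispatch now" spirit.
bfq_dispatch_plugged() is an invented stand-in for "the in-service
queue is empty and BFQ is idling on it"; this is a sketch, not a
proposed patch.

	/* Roughly bfq_has_work() as of v5.6 (block/bfq-iosched.c). */
	static bool bfq_has_work(struct blk_mq_hw_ctx *hctx)
	{
		struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;

		return !list_empty_careful(&bfqd->dispatch) ||
		       bfq_tot_busy_queues(bfqd) > 0;
	}

	/*
	 * Hypothetical stricter variant: only claim work if a dispatch
	 * would actually return a request now.  bfq_dispatch_plugged()
	 * is an invented helper standing in for "dispatch is currently
	 * plugged waiting for I/O on the in-service queue".
	 */
	static bool bfq_has_work_strict(struct blk_mq_hw_ctx *hctx)
	{
		struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;

		if (!list_empty_careful(&bfqd->dispatch))
			return true;

		return bfq_tot_busy_queues(bfqd) > 0 &&
		       !bfq_dispatch_plugged(bfqd);
	}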
Hi,

On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
>
> On 3/31/20 12:07 PM, Paolo Valente wrote:

[...]

> > BFQ inherited this powerful non-working scheme from CFQ, some time
> > ago.
> >
> > In more detail: if BFQ has at least one non-empty internal queue,
> > then it says, of course, that there is work to do. But if the
> > currently in-service queue is empty, and is expected to receive new
> > I/O, then BFQ plugs I/O dispatch to enforce service guarantees for
> > the in-service queue, i.e., BFQ responds NULL to a dispatch request.
>
> What BFQ is doing is fine, IFF it always ensures that the queue is run
> at some later time, if it returns "yep I have work" yet returns NULL
> when attempting to retrieve that work. Generally this should happen
> from subsequent IO completion, or whatever else condition will resolve
> the issue that is currently preventing dispatch of that request. Last
> resort would be a timer, but that can happen if you're slicing your
> scheduling somehow.

I've been poking more at this today trying to understand why the idle
timer that Paolo says is in BFQ isn't doing what it should be doing.
I've been continuing to put most of my stream-of-consciousness at
<https://crbug.com/1061950> to avoid so much spamming of this thread.
In the trace I looked at most recently it looks like BFQ does try to
ensure that the queue is run at a later time, but at least in this
trace the later time is not late enough. Specifically the quick
summary of my recent trace:

28977309us - PID 2167 got the budget.
28977518us - BFQ told PID 2167 that there was nothing to dispatch.
28977702us - BFQ idle timer fires.
28977725us - We start to try to dispatch as a result of BFQ's idle timer.
28977732us - Dispatching that was a result of BFQ's idle timer can't get
             budget and thus does nothing.
28977780us - PID 2167 put the budget and exits since there was nothing
             to dispatch.

This is only one particular trace, but in this case BFQ did attempt to
rerun the queue after it returned NULL, but that ran almost
immediately after it returned NULL and thus ran into the race. :(

> > It would be very easy to change bfq_has_work so that it returns
> > false in case the in-service queue is empty, even if there is I/O
> > backlogged. My only concern is: since everything has worked with the
> > current scheme for probably 15 years, are we sure that everything is
> > still ok after we change this scheme?
>
> You're comparing apples to oranges, CFQ never worked within the blk-mq
> scheduling framework.
>
> That said, I don't think such a change is needed. If we currently have
> a hang due to this discrepancy between has_work and gets_work, then it
> sounds like we're not always re-running the queue as we should. From
> the original patch, the budget putting is not something the scheduler
> is involved with. Do we just need to ensure that if we put budget
> without having dispatched a request, we need to kick off dispatching
> again?

By this you mean a change like this in blk_mq_do_dispatch_sched()?

	if (!rq) {
		blk_mq_put_dispatch_budget(hctx);
+		ret = true;
		break;
	}

I'm pretty sure that would fix the problems and I'd be happy to test,
but it feels like a heavy hammer. IIUC we're essentially going to go
into a polling loop and keep calling has_work() and dispatch_request()
over and over again until has_work() returns false or we manage to
dispatch something...

-Doug
On 3/31/20 5:51 PM, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:

[...]

> I've been poking more at this today trying to understand why the idle
> timer that Paolo says is in BFQ isn't doing what it should be doing.
> I've been continuing to put most of my stream-of-consciousness at
> <https://crbug.com/1061950> to avoid so much spamming of this thread.
> In the trace I looked at most recently it looks like BFQ does try to
> ensure that the queue is run at a later time, but at least in this
> trace the later time is not late enough. Specifically the quick
> summary of my recent trace:
>
> 28977309us - PID 2167 got the budget.
> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> 28977702us - BFQ idle timer fires.
> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>              budget and thus does nothing.
> 28977780us - PID 2167 put the budget and exits since there was nothing
>              to dispatch.
>
> This is only one particular trace, but in this case BFQ did attempt to
> rerun the queue after it returned NULL, but that ran almost
> immediately after it returned NULL and thus ran into the race. :(

OK, and then it doesn't trigger again? It's key that it keeps doing
this timeout and re-dispatch if it fails, not just once.

But BFQ really should be smarter here. It's the same caller etc that
asks whether it has work and whether it can dispatch, yet the answer is
different. That's just kind of silly, and it'd make more sense if BFQ
actually implemented the ->has_work() as a "would I actually dispatch
for this guy, now".

[...]

> By this you mean a change like this in blk_mq_do_dispatch_sched()?
>
> 	if (!rq) {
> 		blk_mq_put_dispatch_budget(hctx);
> +		ret = true;
> 		break;
> 	}
>
> I'm pretty sure that would fix the problems and I'd be happy to test,
> but it feels like a heavy hammer. IIUC we're essentially going to go
> into a polling loop and keep calling has_work() and dispatch_request()
> over and over again until has_work() returns false or we manage to
> dispatch something...

We obviously have to be careful not to introduce a busy-loop, where we
just keep scheduling dispatch, only to fail.
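One way to satisfy both constraints, sketched against the dispatch loop
shown earlier in the thread: when budget is put back without a
dispatch, re-run the queue after a short delay instead of immediately,
so the retry cannot degenerate into a busy-loop.
blk_mq_delay_run_hw_queue() already existed at this point; the 3 ms
value below is purely illustrative and not from this thread.

		rq = e->type->ops.dispatch_request(hctx);
		if (!rq) {
			blk_mq_put_dispatch_budget(hctx);
			/*
			 * We may have raced with another thread over the
			 * last unit of budget; instead of polling
			 * has_work()/dispatch_request() in a tight loop,
			 * retry dispatch after a short, arbitrary delay.
			 */
			blk_mq_delay_run_hw_queue(hctx, 3 /* ms */);
			break;
		}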
On Tue, Mar 31, 2020 at 04:51:00PM -0700, Doug Anderson wrote:
> Hi,
>
> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:

[...]

> I've been poking more at this today trying to understand why the idle
> timer that Paolo says is in BFQ isn't doing what it should be doing.
> I've been continuing to put most of my stream-of-consciousness at
> <https://crbug.com/1061950> to avoid so much spamming of this thread.
> In the trace I looked at most recently it looks like BFQ does try to
> ensure that the queue is run at a later time, but at least in this
> trace the later time is not late enough. Specifically the quick
> summary of my recent trace:
>
> 28977309us - PID 2167 got the budget.
> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> 28977702us - BFQ idle timer fires.
> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>              budget and thus does nothing.

It looks like the BFQ idle timer could be retried, given that BFQ knows
there is work to do.

> 28977780us - PID 2167 put the budget and exits since there was nothing
>              to dispatch.
>
> This is only one particular trace, but in this case BFQ did attempt to
> rerun the queue after it returned NULL, but that ran almost
> immediately after it returned NULL and thus ran into the race. :(

[...]

> By this you mean a change like this in blk_mq_do_dispatch_sched()?
>
> 	if (!rq) {
> 		blk_mq_put_dispatch_budget(hctx);
> +		ret = true;
> 		break;
> 	}

In Jens's tree, blk_mq_do_dispatch_sched() returns nothing.

Which tree are you talking against?

Thanks,
Ming
Hi, On Tue, Mar 31, 2020 at 7:04 PM Ming Lei <ming.lei@redhat.com> wrote: > > On Tue, Mar 31, 2020 at 04:51:00PM -0700, Doug Anderson wrote: > > Hi, > > > > On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote: > > > > > > On 3/31/20 12:07 PM, Paolo Valente wrote: > > > >> Il giorno 31 mar 2020, alle ore 03:41, Ming Lei <ming.lei@redhat.com> ha scritto: > > > >> > > > >> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote: > > > >>> It is possible for two threads to be running > > > >>> blk_mq_do_dispatch_sched() at the same time with the same "hctx". > > > >>> This is because there can be more than one caller to > > > >>> __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't > > > >>> prevent more than one thread from entering. > > > >>> > > > >>> If more than one thread is running blk_mq_do_dispatch_sched() at the > > > >>> same time with the same "hctx", they may have contention acquiring > > > >>> budget. The blk_mq_get_dispatch_budget() can eventually translate > > > >>> into scsi_mq_get_budget(). If the device's "queue_depth" is 1 (not > > > >>> uncommon) then only one of the two threads will be the one to > > > >>> increment "device_busy" to 1 and get the budget. > > > >>> > > > >>> The losing thread will break out of blk_mq_do_dispatch_sched() and > > > >>> will stop dispatching requests. The assumption is that when more > > > >>> budget is available later (when existing transactions finish) the > > > >>> queue will be kicked again, perhaps in scsi_end_request(). > > > >>> > > > >>> The winning thread now has budget and can go on to call > > > >>> dispatch_request(). If dispatch_request() returns NULL here then we > > > >>> have a potential problem. Specifically we'll now call > > > >> > > > >> I guess this problem should be BFQ specific. Now there is definitely > > > >> requests in BFQ queue wrt. this hctx. However, looks this request is > > > >> only available from another loser thread, and it won't be retrieved in > > > >> the winning thread via e->type->ops.dispatch_request(). > > > >> > > > >> Just wondering why BFQ is implemented in this way? > > > >> > > > > > > > > BFQ inherited this powerful non-working scheme from CFQ, some age ago. > > > > > > > > In more detail: if BFQ has at least one non-empty internal queue, then > > > > is says of course that there is work to do. But if the currently > > > > in-service queue is empty, and is expected to receive new I/O, then > > > > BFQ plugs I/O dispatch to enforce service guarantees for the > > > > in-service queue, i.e., BFQ responds NULL to a dispatch request. > > > > > > What BFQ is doing is fine, IFF it always ensures that the queue is run > > > at some later time, if it returns "yep I have work" yet returns NULL > > > when attempting to retrieve that work. Generally this should happen from > > > subsequent IO completion, or whatever else condition will resolve the > > > issue that is currently preventing dispatch of that request. Last resort > > > would be a timer, but that can happen if you're slicing your scheduling > > > somehow. > > > > I've been poking more at this today trying to understand why the idle > > timer that Paolo says is in BFQ isn't doing what it should be doing. > > I've been continuing to put most of my stream-of-consciousness at > > <https://crbug.com/1061950> to avoid so much spamming of this thread. 
> > In the trace I looked at most recently it looks like BFQ does try to ensure that the queue is run at a later time, but at least in this trace the later time is not late enough. Specifically the quick summary of my recent trace:
> >
> > 28977309us - PID 2167 got the budget.
> > 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> > 28977702us - BFQ idle timer fires.
> > 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> > 28977732us - Dispatching that was a result of BFQ's idle timer can't get
> >              budget and thus does nothing.
>
> It looks like the BFQ idle timer could be re-tried here, given that it knows there is still work to do.

Yeah, it does seem like perhaps a BFQ fix like this would be ideal.

> > 28977780us - PID 2167 put the budget and exits since there was nothing
> >              to dispatch.
> >
> > This is only one particular trace, but in this case BFQ did attempt to rerun the queue after it returned NULL, but that ran almost immediately after it returned NULL and thus ran into the race. :(
> >
> > > > It would be very easy to change bfq_has_work so that it returns false in case the in-service queue is empty, even if there is I/O backlogged. My only concern is: since everything has worked with the current scheme for probably 15 years, are we sure that everything is still ok after we change this scheme?
> > >
> > > You're comparing apples to oranges, CFQ never worked within the blk-mq scheduling framework.
> > >
> > > That said, I don't think such a change is needed. If we currently have a hang due to this discrepancy between has_work and gets_work, then it sounds like we're not always re-running the queue as we should. From the original patch, the budget putting is not something the scheduler is involved with. Do we just need to ensure that if we put budget without having dispatched a request, we need to kick off dispatching again?
> >
> > By this you mean a change like this in blk_mq_do_dispatch_sched()?
> >
> >         if (!rq) {
> >                 blk_mq_put_dispatch_budget(hctx);
> > +               ret = true;
> >                 break;
> >         }
>
> In Jens's tree, blk_mq_do_dispatch_sched() returns void. Which tree is this change against?

Ah, right. Sorry. As per the cover letter, I've tested against the Chrome OS 5.4 tree and also against mainline Linux, and both showed the same behavior. It's slightly more convenient for me to test against the Chrome OS 5.4 tree, so I've been focusing most of my effort there. As mentioned in the cover letter, in the Chrome OS 5.4 branch we have an extra FROMLIST patch from Salman, specifically:

http://lore.kernel.org/r/20200207190416.99928-1-sqazi@google.com

...that's what makes the return value "bool" for me.

-Doug
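For reference, the one-shot kick seen in Doug's trace comes from BFQ's idle-slice timer, whose expiry ends up in a helper shaped roughly like the sketch below (trimmed from the era's block/bfq-iosched.c). Note that it runs the queue exactly once; if that single run loses the budget race described above, nothing re-arms the kick, which is the window the trace shows:

/*
 * Called (indirectly) when BFQ's idle-slice timer expires, among
 * other places.  One run of the hardware queues, with no retry.
 */
void bfq_schedule_dispatch(struct bfq_data *bfqd)
{
	if (bfqd->queued != 0) {
		bfq_log(bfqd, "schedule dispatch");
		blk_mq_run_hw_queues(bfqd->queue, true);
	}
}

In the trace, the run triggered here happened at 28977725us, before PID 2167 put the budget at 28977780us, so the re-run found no budget and the stall stood.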
> On 1 Apr 2020, at 03:21, Jens Axboe <axboe@kernel.dk> wrote:
>
> On 3/31/20 5:51 PM, Doug Anderson wrote:
>> Hi,
>>
>> On Tue, Mar 31, 2020 at 11:26 AM Jens Axboe <axboe@kernel.dk> wrote:
>>>
>>> On 3/31/20 12:07 PM, Paolo Valente wrote:
>>>>> On 31 Mar 2020, at 03:41, Ming Lei <ming.lei@redhat.com> wrote:
>>>>>
>>>>> On Mon, Mar 30, 2020 at 07:49:06AM -0700, Douglas Anderson wrote:
>>>>>> It is possible for two threads to be running blk_mq_do_dispatch_sched() at the same time with the same "hctx". This is because there can be more than one caller to __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't prevent more than one thread from entering.
>>>>>>
>>>>>> If more than one thread is running blk_mq_do_dispatch_sched() at the same time with the same "hctx", they may have contention acquiring budget. The blk_mq_get_dispatch_budget() can eventually translate into scsi_mq_get_budget(). If the device's "queue_depth" is 1 (not uncommon) then only one of the two threads will be the one to increment "device_busy" to 1 and get the budget.
>>>>>>
>>>>>> The losing thread will break out of blk_mq_do_dispatch_sched() and will stop dispatching requests. The assumption is that when more budget is available later (when existing transactions finish) the queue will be kicked again, perhaps in scsi_end_request().
>>>>>>
>>>>>> The winning thread now has budget and can go on to call dispatch_request(). If dispatch_request() returns NULL here then we have a potential problem. Specifically we'll now call
>>>>>
>>>>> I guess this problem is BFQ specific. Now there are definitely requests in the BFQ queue wrt. this hctx. However, it looks like this request is only available to another, losing thread, and it won't be retrieved in the winning thread via e->type->ops.dispatch_request().
>>>>>
>>>>> Just wondering why BFQ is implemented in this way?
>>>>>
>>>>
>>>> BFQ inherited this powerful non-working scheme from CFQ, some time ago.
>>>>
>>>> In more detail: if BFQ has at least one non-empty internal queue, then it says, of course, that there is work to do. But if the currently in-service queue is empty, and is expected to receive new I/O, then BFQ plugs I/O dispatch to enforce service guarantees for the in-service queue, i.e., BFQ responds NULL to a dispatch request.
>>>
>>> What BFQ is doing is fine, IFF it always ensures that the queue is run at some later time, if it returns "yep I have work" yet returns NULL when attempting to retrieve that work. Generally this should happen from subsequent IO completion, or whatever other condition will resolve the issue that is currently preventing dispatch of that request. Last resort would be a timer, but that can happen if you're slicing your scheduling somehow.
>>
>> I've been poking more at this today trying to understand why the idle timer that Paolo says is in BFQ isn't doing what it should be doing. I've been continuing to put most of my stream-of-consciousness at <https://crbug.com/1061950> to avoid so much spamming of this thread.
>> In the trace I looked at most recently it looks like BFQ does try to ensure that the queue is run at a later time, but at least in this trace the later time is not late enough. Specifically the quick summary of my recent trace:
>>
>> 28977309us - PID 2167 got the budget.
>> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
>> 28977702us - BFQ idle timer fires.
>> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
>> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
>>              budget and thus does nothing.
>> 28977780us - PID 2167 put the budget and exits since there was nothing
>>              to dispatch.
>>
>> This is only one particular trace, but in this case BFQ did attempt to rerun the queue after it returned NULL, but that ran almost immediately after it returned NULL and thus ran into the race. :(
>
> OK, and then it doesn't trigger again? It's key that it keeps doing this timeout and re-dispatch if it fails, not just once.
>

The goal of BFQ's timer is to make BFQ switch from non-work-conserving to work-conserving mode, precisely because not doing so would cause a stall. In contrast, it sounds a little weird for an I/O scheduler to systematically kick I/O periodically (how can BFQ know when no more kicking is needed?). IOW, it doesn't seem very robust for blk-mq to need a series of periodic kicks to finally restart, like a flooded engine.

Compared with this solution, I'd still prefer one where BFQ doesn't trigger this blk-mq stall at all.

Paolo

> But BFQ really should be smarter here. It's the same caller etc that asks whether it has work and whether it can dispatch, yet the answer is different. That's just kind of silly, and it'd make more sense if BFQ actually implemented the ->has_work() as a "would I actually dispatch for this guy, now".
>
>>>> It would be very easy to change bfq_has_work so that it returns false in case the in-service queue is empty, even if there is I/O backlogged. My only concern is: since everything has worked with the current scheme for probably 15 years, are we sure that everything is still ok after we change this scheme?
>>>
>>> You're comparing apples to oranges, CFQ never worked within the blk-mq scheduling framework.
>>>
>>> That said, I don't think such a change is needed. If we currently have a hang due to this discrepancy between has_work and gets_work, then it sounds like we're not always re-running the queue as we should. From the original patch, the budget putting is not something the scheduler is involved with. Do we just need to ensure that if we put budget without having dispatched a request, we need to kick off dispatching again?
>>
>> By this you mean a change like this in blk_mq_do_dispatch_sched()?
>>
>>         if (!rq) {
>>                 blk_mq_put_dispatch_budget(hctx);
>> +               ret = true;
>>                 break;
>>         }
>>
>> I'm pretty sure that would fix the problems and I'd be happy to test, but it feels like a heavy hammer. IIUC we're essentially going to go into a polling loop and keep calling has_work() and dispatch_request() over and over again until has_work() returns false or we manage to dispatch something...
>
> We obviously have to be careful not to introduce a busy-loop, where we just keep scheduling dispatch, only to fail.
>
> --
> Jens Axboe
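Jens's suggestion amounts to a stricter ->has_work(), along the lines of the sketch below. This is illustrative only, not actual BFQ code: bfqd->queued, bfqd->in_service_queue and the bfq_queue sort_list rbtree are real fields in block/bfq-iosched.h, but the "would I dispatch now" test here is a gross simplification of everything bfq_dispatch_request() actually weighs (idling state, service guarantees, budgets):

static bool bfq_has_work_strict(struct blk_mq_hw_ctx *hctx)
{
	struct bfq_data *bfqd = hctx->queue->elevator->elevator_data;

	/* No I/O queued anywhere: definitely no work. */
	if (bfqd->queued == 0)
		return false;

	/*
	 * No in-service queue: dispatch would pick a new queue and
	 * return a request, so report work.
	 */
	if (!bfqd->in_service_queue)
		return true;

	/*
	 * An in-service queue exists.  If it is empty, BFQ is plugging
	 * dispatch while it waits for that queue's next request, so a
	 * dispatch right now would return NULL: report no work and rely
	 * on the idle timer or the next completion to re-run the queue.
	 */
	return !RB_EMPTY_ROOT(&bfqd->in_service_queue->sort_list);
}

The cost Doug mentions below comes from answering this accurately on every has_work() call, which sits on the dispatch fast path.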
Hi,

On Wed, Apr 1, 2020 at 12:48 AM Paolo Valente <paolo.valente@linaro.org> wrote:
>
> >> 28977309us - PID 2167 got the budget.
> >> 28977518us - BFQ told PID 2167 that there was nothing to dispatch.
> >> 28977702us - BFQ idle timer fires.
> >> 28977725us - We start to try to dispatch as a result of BFQ's idle timer.
> >> 28977732us - Dispatching that was a result of BFQ's idle timer can't get
> >>              budget and thus does nothing.
> >> 28977780us - PID 2167 put the budget and exits since there was nothing
> >>              to dispatch.
> >>
> >> This is only one particular trace, but in this case BFQ did attempt to rerun the queue after it returned NULL, but that ran almost immediately after it returned NULL and thus ran into the race. :(
> >
> > OK, and then it doesn't trigger again? It's key that it keeps doing this timeout and re-dispatch if it fails, not just once.
>
> The goal of BFQ's timer is to make BFQ switch from non-work-conserving to work-conserving mode, precisely because not doing so would cause a stall. In contrast, it sounds a little weird for an I/O scheduler to systematically kick I/O periodically (how can BFQ know when no more kicking is needed?). IOW, it doesn't seem very robust for blk-mq to need a series of periodic kicks to finally restart, like a flooded engine.
>
> Compared with this solution, I'd still prefer one where BFQ doesn't trigger this blk-mq stall at all.

I spent more time thinking about this and testing things. Probably the best summary of my thoughts can be found at <https://crbug.com/1061950#c79>. The quick summary is that I believe the problem is that BFQ has faith that when it calls blk_mq_run_hw_queues() it will eventually cause BFQ to be called back at least once to dispatch. That doesn't always happen due to the race we're trying to solve here. If we fix the race and make blk_mq_run_hw_queues() reliable, then I don't think there's a need for BFQ to implement some type of timeout/retry mechanism.

> > But BFQ really should be smarter here. It's the same caller etc that asks whether it has work and whether it can dispatch, yet the answer is different. That's just kind of silly, and it'd make more sense if BFQ actually implemented the ->has_work() as a "would I actually dispatch for this guy, now".

I prototyped this and I think it would solve the problem (though I haven't had time to do extensive testing yet). It certainly makes BFQ's has_work() more expensive in some cases, but it might be worth it. Someone set up to do benchmarking would need to say for sure.

However, I think I've figured out an inexpensive / lightweight solution that means we can let has_work() be inexact. It's mostly the same as this patch but implemented at the blk-mq layer (not the SCSI layer) and doesn't add a spinlock. I'll post a v2 and you can see whether you hate it or whether it looks OK. You can find it at:

https://lore.kernel.org/r/20200402085050.v2.2.I28278ef8ea27afc0ec7e597752a6d4e58c16176f@changeid

-Doug
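The v2 patch linked above is the authoritative version of this idea. As a rough illustration of the kind of lockless, blk-mq-layer scheme Doug describes -- the budget_contention field and its handling below are hypothetical, not the actual v2 code -- contention can be flagged with a per-hctx atomic around the driver's budget callbacks:

/* Hypothetical wrappers around the driver's budget callbacks; consult
 * the real v2 patch for the actual mechanism. */
static bool blk_mq_get_dispatch_budget(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;

	if (!q->mq_ops->get_budget || q->mq_ops->get_budget(hctx))
		return true;

	/* Lost the race: remember that someone wanted budget. */
	atomic_set(&hctx->budget_contention, 1);	/* hypothetical field */
	return false;
}

static void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;

	if (q->mq_ops->put_budget)
		q->mq_ops->put_budget(hctx);

	/*
	 * If someone failed to get budget while we held it, our put
	 * frees exactly the budget they wanted, and no completion will
	 * run the queue for them -- so kick it here.  (A contender that
	 * sets the flag just after this check is a window the real
	 * patch has to close; it is left open in this sketch.)
	 */
	if (atomic_cmpxchg(&hctx->budget_contention, 1, 0) == 1)
		blk_mq_run_hw_queues(q, true);
}

Because the fast path only touches one atomic, this avoids the per-I/O spinlock that Ming objected to in the SCSI-layer version below.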
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 610ee41fa54c..0530da909995 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -344,6 +344,21 @@ static void scsi_dec_host_busy(struct Scsi_Host *shost, struct scsi_cmnd *cmd)
 	rcu_read_unlock();
 }
 
+static void scsi_device_dec_busy(struct scsi_device *sdev)
+{
+	bool was_contention;
+	unsigned long flags;
+
+	spin_lock_irqsave(&sdev->budget_lock, flags);
+	atomic_dec(&sdev->device_busy);
+	was_contention = sdev->budget_contention;
+	sdev->budget_contention = false;
+	spin_unlock_irqrestore(&sdev->budget_lock, flags);
+
+	if (was_contention)
+		blk_mq_run_hw_queues(sdev->request_queue, true);
+}
+
 void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
 {
 	struct Scsi_Host *shost = sdev->host;
@@ -354,7 +369,7 @@ void scsi_device_unbusy(struct scsi_device *sdev, struct scsi_cmnd *cmd)
 	if (starget->can_queue > 0)
 		atomic_dec(&starget->target_busy);
 
-	atomic_dec(&sdev->device_busy);
+	scsi_device_dec_busy(sdev);
 }
 
 static void scsi_kick_queue(struct request_queue *q)
@@ -1624,16 +1639,22 @@ static void scsi_mq_put_budget(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
 
-	atomic_dec(&sdev->device_busy);
+	scsi_device_dec_busy(sdev);
 }
 
 static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 {
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
+	unsigned long flags;
 
-	if (scsi_dev_queue_ready(q, sdev))
+	spin_lock_irqsave(&sdev->budget_lock, flags);
+	if (scsi_dev_queue_ready(q, sdev)) {
+		spin_unlock_irqrestore(&sdev->budget_lock, flags);
 		return true;
+	}
+	sdev->budget_contention = true;
+	spin_unlock_irqrestore(&sdev->budget_lock, flags);
 
 	if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev))
 		blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 058079f915f1..72f7b6faed9b 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -240,6 +240,7 @@ static struct scsi_device *scsi_alloc_sdev(struct scsi_target *starget,
 	INIT_LIST_HEAD(&sdev->starved_entry);
 	INIT_LIST_HEAD(&sdev->event_list);
 	spin_lock_init(&sdev->list_lock);
+	spin_lock_init(&sdev->budget_lock);
 	mutex_init(&sdev->inquiry_mutex);
 	INIT_WORK(&sdev->event_work, scsi_evt_thread);
 	INIT_WORK(&sdev->requeue_work, scsi_requeue_run_queue);
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index f8312a3e5b42..3c5e0f0c8a91 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -106,6 +106,8 @@ struct scsi_device {
 	struct list_head    siblings;   /* list of all devices on this host */
 	struct list_head    same_target_siblings; /* just the devices sharing same target id */
 
+	spinlock_t budget_lock;		/* For device_busy and budget_contention */
+	bool budget_contention;		/* Someone couldn't get budget */
 	atomic_t device_busy;		/* commands actually active on LLDD */
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
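For context on the diff above: the reason exactly one of two racing threads gets the budget is the atomic increment inside scsi_dev_queue_ready(). Below is a trimmed sketch of that function (simplified from drivers/scsi/scsi_lib.c; the device_blocked/unblock handling of the real function is elided):

static inline int scsi_dev_queue_ready(struct request_queue *q,
				       struct scsi_device *sdev)
{
	unsigned int busy;

	/*
	 * With queue_depth == 1, exactly one of two racing callers sees
	 * busy == 0 here and wins the budget; the other takes out_dec.
	 */
	busy = atomic_inc_return(&sdev->device_busy) - 1;
	if (atomic_read(&sdev->device_blocked)) {
		/* Unblock handling elided for brevity. */
		goto out_dec;
	}

	if (busy >= sdev->queue_depth)
		goto out_dec;

	return 1;
out_dec:
	atomic_dec(&sdev->device_busy);
	return 0;
}

The patch wraps this win/lose decision and the matching decrement in budget_lock so that a loser can record budget_contention before the winner's put_budget() checks it.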
It is possible for two threads to be running blk_mq_do_dispatch_sched() at the same time with the same "hctx". This is because there can be more than one caller to __blk_mq_run_hw_queue() with the same "hctx" and hctx_lock() doesn't prevent more than one thread from entering.

If more than one thread is running blk_mq_do_dispatch_sched() at the same time with the same "hctx", they may have contention acquiring budget. The blk_mq_get_dispatch_budget() can eventually translate into scsi_mq_get_budget(). If the device's "queue_depth" is 1 (not uncommon) then only one of the two threads will be the one to increment "device_busy" to 1 and get the budget.

The losing thread will break out of blk_mq_do_dispatch_sched() and will stop dispatching requests. The assumption is that when more budget is available later (when existing transactions finish) the queue will be kicked again, perhaps in scsi_end_request().

The winning thread now has budget and can go on to call dispatch_request(). If dispatch_request() returns NULL here then we have a potential problem. Specifically we'll now call blk_mq_put_dispatch_budget() which translates into scsi_mq_put_budget(). That will mark the device as no longer busy but doesn't do anything to kick the queue. This violates the assumption that the queue would be kicked when more budget was available.

Pictorially:

Thread A                           Thread B
=================================  ==================================
blk_mq_get_dispatch_budget() => 1
dispatch_request() => NULL
                                   blk_mq_get_dispatch_budget() => 0
                                   // because Thread A marked
                                   // "device_busy" in scsi_device
blk_mq_put_dispatch_budget()

The above case was observed in reboot tests and caused a task to hang forever waiting for IO to complete. Traces showed that in fact two tasks were running blk_mq_do_dispatch_sched() at the same time with the same "hctx". The task that got the budget did in fact see dispatch_request() return NULL. Both tasks returned and the system went on for several minutes (until the hung task delay kicked in) without the given "hctx" showing up again in traces.

Let's attempt to fix this problem by detecting budget contention. If we're in the SCSI code's put_budget() function and we saw that someone else might have wanted the budget we got then we'll kick the queue.

The mechanism of kicking due to budget contention has the potential to overcompensate and kick the queue more than strictly necessary, but it shouldn't hurt.

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---
 drivers/scsi/scsi_lib.c    | 27 ++++++++++++++++++++++++---
 drivers/scsi/scsi_scan.c   |  1 +
 include/scsi/scsi_device.h |  2 ++
 3 files changed, 27 insertions(+), 3 deletions(-)