[V2] scsi: core: only re-run queue in scsi_end_request() if device queue is busy

Message ID 20191118100640.3673-1-ming.lei@redhat.com (mailing list archive)
State New, archived

Commit Message

Ming Lei Nov. 18, 2019, 10:06 a.m. UTC
Currently the request queue is run unconditionally in scsi_end_request() if
both the target queue and the host queue are ready. However, the queue only
needs to be re-run after this device queue has been busy, in order to restart
this LUN.

Recently Long Li reported that the cost of running the queue can be very
high at high queue depth. Improve the situation by running the request
queue only when this LUN has been busy.

Cc: Jens Axboe <axboe@kernel.dk>
Cc: Ewan D. Milne <emilne@redhat.com>
Cc: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Bart Van Assche <bvanassche@acm.org>
Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Long Li <longli@microsoft.com>
Cc: linux-block@vger.kernel.org
Reported-by: Long Li <longli@microsoft.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V2:
	- commit log changes only, no code change
	- add Reported-by tag


 drivers/scsi/scsi_lib.c    | 29 +++++++++++++++++++++++++++--
 include/scsi/scsi_device.h |  1 +
 2 files changed, 28 insertions(+), 2 deletions(-)

Comments

Bart Van Assche Nov. 18, 2019, 11:40 p.m. UTC | #1
On 11/18/19 2:06 AM, Ming Lei wrote:
> Now the request queue is run in scsi_end_request() unconditionally if both
> target queue and host queue is ready. We should have re-run request queue
> only after this device queue becomes busy for restarting this LUN only.
> 
> Recently Long Li reported that cost of run queue may be very heavy in
> case of high queue depth. So improve this situation by only running
> the request queue when this LUN is busy.
> 
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Ewan D. Milne <emilne@redhat.com>
> Cc: Kashyap Desai <kashyap.desai@broadcom.com>
> Cc: Hannes Reinecke <hare@suse.de>
> Cc: Bart Van Assche <bvanassche@acm.org>
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Long Li <longli@microsoft.com>
> Cc: linux-block@vger.kernel.org
> Reported-by: Long Li <longli@microsoft.com>
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> V2:
> 	- commit log change, no any code change
> 	- add reported-by tag
> 
> 
>   drivers/scsi/scsi_lib.c    | 29 +++++++++++++++++++++++++++--
>   include/scsi/scsi_device.h |  1 +
>   2 files changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> index 379533ce8661..62a86a82c38d 100644
> --- a/drivers/scsi/scsi_lib.c
> +++ b/drivers/scsi/scsi_lib.c
> @@ -612,7 +612,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
>   	if (scsi_target(sdev)->single_lun ||
>   	    !list_empty(&sdev->host->starved_list))
>   		kblockd_schedule_work(&sdev->requeue_work);
> -	else
> +	else if (READ_ONCE(sdev->restart))
>   		blk_mq_run_hw_queues(q, true);
>   
>   	percpu_ref_put(&q->q_usage_counter);
> @@ -1632,8 +1632,33 @@ static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
>   	struct request_queue *q = hctx->queue;
>   	struct scsi_device *sdev = q->queuedata;
>   
> -	if (scsi_dev_queue_ready(q, sdev))
> +	if (scsi_dev_queue_ready(q, sdev)) {
> +		WRITE_ONCE(sdev->restart, 0);
>   		return true;
> +	}
> +
> +	/*
> +	 * If all in-flight requests originated from this LUN are completed
> +	 * before setting .restart, sdev->device_busy will be observed as
> +	 * zero, then blk_mq_delay_run_hw_queue() will dispatch this request
> +	 * soon. Otherwise, completion of one of these request will observe
> +	 * the .restart flag, and the request queue will be run for handling
> +	 * this request, see scsi_end_request().
> +	 *
> +	 * However, the .restart flag may be cleared from other dispatch code
> +	 * path after one inflight request is completed, then:
> +	 *
> +	 * 1) if this request is dispatched from scheduler queue or sw queue one
> +	 * by one, this request will be handled in that dispatch path too given
> +	 * the request still stays at scheduler/sw queue when calling .get_budget()
> +	 * callback.
> +	 *
> +	 * 2) if this request is dispatched from hctx->dispatch or
> +	 * blk_mq_flush_busy_ctxs(), this request will be put into hctx->dispatch
> +	 * list soon, and blk-mq will be responsible for covering it, see
> +	 * blk_mq_dispatch_rq_list().
> +	 */
> +	WRITE_ONCE(sdev->restart, 1);

Hi Ming,

Are any memory barriers needed?

Should WRITE_ONCE(sdev->restart, 1) perhaps be moved above the 
scsi_dev_queue_ready()? Consider e.g. the following scenario:

sdev->restart == 0

scsi_mq_get_budget() calls scsi_dev_queue_ready() and that last function 
returns false.

scsi_end_request() calls __blk_mq_end_request()
scsi_end_request() skips the blk_mq_run_hw_queues() call

scsi_mq_get_budget() changes sdev->restart into 1.

Can this race happen with the above patch applied? Will this scenario 
result in a queue stall?

Thanks,

Bart.
Ming Lei Nov. 19, 2019, 2:20 a.m. UTC | #2
On Mon, Nov 18, 2019 at 03:40:06PM -0800, Bart Van Assche wrote:
> On 11/18/19 2:06 AM, Ming Lei wrote:
> > Now the request queue is run in scsi_end_request() unconditionally if both
> > target queue and host queue is ready. We should have re-run request queue
> > only after this device queue becomes busy for restarting this LUN only.
> > 
> > Recently Long Li reported that cost of run queue may be very heavy in
> > case of high queue depth. So improve this situation by only running
> > the request queue when this LUN is busy.
> > 
> > Cc: Jens Axboe <axboe@kernel.dk>
> > Cc: Ewan D. Milne <emilne@redhat.com>
> > Cc: Kashyap Desai <kashyap.desai@broadcom.com>
> > Cc: Hannes Reinecke <hare@suse.de>
> > Cc: Bart Van Assche <bvanassche@acm.org>
> > Cc: Damien Le Moal <damien.lemoal@wdc.com>
> > Cc: Long Li <longli@microsoft.com>
> > Cc: linux-block@vger.kernel.org
> > Reported-by: Long Li <longli@microsoft.com>
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> > V2:
> > 	- commit log change, no any code change
> > 	- add reported-by tag
> > 
> > 
> >   drivers/scsi/scsi_lib.c    | 29 +++++++++++++++++++++++++++--
> >   include/scsi/scsi_device.h |  1 +
> >   2 files changed, 28 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
> > index 379533ce8661..62a86a82c38d 100644
> > --- a/drivers/scsi/scsi_lib.c
> > +++ b/drivers/scsi/scsi_lib.c
> > @@ -612,7 +612,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
> >   	if (scsi_target(sdev)->single_lun ||
> >   	    !list_empty(&sdev->host->starved_list))
> >   		kblockd_schedule_work(&sdev->requeue_work);
> > -	else
> > +	else if (READ_ONCE(sdev->restart))
> >   		blk_mq_run_hw_queues(q, true);
> >   	percpu_ref_put(&q->q_usage_counter);
> > @@ -1632,8 +1632,33 @@ static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
> >   	struct request_queue *q = hctx->queue;
> >   	struct scsi_device *sdev = q->queuedata;
> > -	if (scsi_dev_queue_ready(q, sdev))
> > +	if (scsi_dev_queue_ready(q, sdev)) {
> > +		WRITE_ONCE(sdev->restart, 0);
> >   		return true;
> > +	}
> > +
> > +	/*
> > +	 * If all in-flight requests originated from this LUN are completed
> > +	 * before setting .restart, sdev->device_busy will be observed as
> > +	 * zero, then blk_mq_delay_run_hw_queue() will dispatch this request
> > +	 * soon. Otherwise, completion of one of these request will observe
> > +	 * the .restart flag, and the request queue will be run for handling
> > +	 * this request, see scsi_end_request().
> > +	 *
> > +	 * However, the .restart flag may be cleared from other dispatch code
> > +	 * path after one inflight request is completed, then:
> > +	 *
> > +	 * 1) if this request is dispatched from scheduler queue or sw queue one
> > +	 * by one, this request will be handled in that dispatch path too given
> > +	 * the request still stays at scheduler/sw queue when calling .get_budget()
> > +	 * callback.
> > +	 *
> > +	 * 2) if this request is dispatched from hctx->dispatch or
> > +	 * blk_mq_flush_busy_ctxs(), this request will be put into hctx->dispatch
> > +	 * list soon, and blk-mq will be responsible for covering it, see
> > +	 * blk_mq_dispatch_rq_list().
> > +	 */
> > +	WRITE_ONCE(sdev->restart, 1);
> 
> Hi Ming,
> 
> Are any memory barriers needed?
> 
> Should WRITE_ONCE(sdev->restart, 1) perhaps be moved above the
> scsi_dev_queue_ready()? Consider e.g. the following scenario:
> 
> sdev->restart == 0
> 
> scsi_mq_get_budget() calls scsi_dev_queue_ready() and that last function
> returns false.
> 
> scsi_end_request() calls __blk_mq_end_request()
> scsi_end_request() skips the blk_mq_run_hw_queues() call

Suppose sdev->restart isn't set to 1 here, or the store isn't yet
visible to scsi_end_request().

> 
> scsi_mq_get_budget() changes sdev->restart into 1.

As the comment mentions, if there are no in-flight requests originating
from this LUN, blk_mq_delay_run_hw_queue() in scsi_mq_get_budget() will
run the hw queue. If there are any in-flight requests from this LUN,
scsi_end_request() for one of those requests will handle it.

So it looks like one barrier is required between 'WRITE_ONCE(sdev->restart, 1)'
and 'atomic_read(&sdev->device_busy) == 0'.

Its pair is the barrier between scsi_device_unbusy() and
READ_ONCE(sdev->restart), which could be implied by __blk_mq_end_request(),
via either __blk_mq_free_request() or rq->end_io.

> 
> Can this race happen with the above patch applied? Will this scenario result
> in a queue stall?

If a barrier is added between 'WRITE_ONCE(sdev->restart, 1)' and
'atomic_read(&sdev->device_busy) == 0', the race should be avoided.

Will do that in V3.

Thanks,
Ming

Patch

diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 379533ce8661..62a86a82c38d 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -612,7 +612,7 @@  static bool scsi_end_request(struct request *req, blk_status_t error,
 	if (scsi_target(sdev)->single_lun ||
 	    !list_empty(&sdev->host->starved_list))
 		kblockd_schedule_work(&sdev->requeue_work);
-	else
+	else if (READ_ONCE(sdev->restart))
 		blk_mq_run_hw_queues(q, true);
 
 	percpu_ref_put(&q->q_usage_counter);
@@ -1632,8 +1632,33 @@  static bool scsi_mq_get_budget(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct scsi_device *sdev = q->queuedata;
 
-	if (scsi_dev_queue_ready(q, sdev))
+	if (scsi_dev_queue_ready(q, sdev)) {
+		WRITE_ONCE(sdev->restart, 0);
 		return true;
+	}
+
+	/*
+	 * If all in-flight requests originating from this LUN complete
+	 * before .restart is set, sdev->device_busy will be observed as
+	 * zero, and blk_mq_delay_run_hw_queue() will dispatch this request
+	 * soon. Otherwise, the completion of one of these requests will
+	 * observe the .restart flag, and the request queue will be run to
+	 * handle this request, see scsi_end_request().
+	 *
+	 * However, the .restart flag may be cleared from another dispatch
+	 * code path after one in-flight request completes, then:
+	 *
+	 * 1) if this request is dispatched from scheduler queue or sw queue one
+	 * by one, this request will be handled in that dispatch path too given
+	 * the request still stays at scheduler/sw queue when calling .get_budget()
+	 * callback.
+	 *
+	 * 2) if this request is dispatched from hctx->dispatch or
+	 * blk_mq_flush_busy_ctxs(), this request will be put into hctx->dispatch
+	 * list soon, and blk-mq will be responsible for covering it, see
+	 * blk_mq_dispatch_rq_list().
+	 */
+	WRITE_ONCE(sdev->restart, 1);
 
 	if (atomic_read(&sdev->device_busy) == 0 && !scsi_device_blocked(sdev))
 		blk_mq_delay_run_hw_queue(hctx, SCSI_QUEUE_DELAY);
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 202f4d6a4342..9d8ca662ae86 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -109,6 +109,7 @@  struct scsi_device {
 	atomic_t device_busy;		/* commands actually active on LLDD */
 	atomic_t device_blocked;	/* Device returned QUEUE_FULL. */
 
+	unsigned int restart;
 	spinlock_t list_lock;
 	struct list_head cmd_list;	/* queue of in use SCSI Command structures */
 	struct list_head starved_entry;