
[V2] blk-mq: always allow reserved allocation in hctx_may_queue

Message ID 20200911104114.163691-1-ming.lei@redhat.com (mailing list archive)
State New, archived
Series [V2] blk-mq: always allow reserved allocation in hctx_may_queue

Commit Message

Ming Lei Sept. 11, 2020, 10:41 a.m. UTC
NVMe shares a tagset between the fabrics queue and the admin queue, as
well as between connect_q and the namespace queues, so hctx_may_queue()
can be called when allocating requests for any of these queues.

Tags can be reserved in these tagsets. Before error recovery there are
often many in-flight requests that can't be completed, and a new
reserved request may be needed on the error recovery path. However,
hctx_may_queue() can keep returning false because of the in-flight
requests that can't complete during error handling, so nothing can make
progress.

Fix this issue by always allowing reserved tag allocation in
hctx_may_queue(). This is reasonable because reserved tags are supposed
to be available at any time.
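
For illustration only (not part of this patch), a minimal sketch of how
a driver typically uses the reserved pool during error recovery. The
blk_mq_alloc_request()/BLK_MQ_REQ_RESERVED API and the reserved_tags
tag-set field are the in-tree blk-mq interfaces; the helper name and
queue here are hypothetical:

#include <linux/blk-mq.h>
#include <linux/blkdev.h>

/*
 * Hypothetical helper: allocate a request from the reserved pool for
 * an error-recovery command. The driver must have set
 * set->reserved_tags (carved out of set->queue_depth) when registering
 * its tag set. With this patch, the allocation bypasses the
 * hctx_may_queue() fair-share check, so it can succeed even while all
 * normal tags are held by stuck requests.
 */
static struct request *example_alloc_recovery_rq(struct request_queue *q)
{
	return blk_mq_alloc_request(q, REQ_OP_DRV_OUT, BLK_MQ_REQ_RESERVED);
}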

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Cc: David Milburn <dmilburn@redhat.com>
Cc: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
V2:
	- remove 'reserved' local variable, as suggested by Christoph 

 block/blk-mq-tag.c | 3 ++-
 block/blk-mq.c     | 5 +++--
 2 files changed, 5 insertions(+), 3 deletions(-)

Comments

Jens Axboe Sept. 11, 2020, 11:27 a.m. UTC | #1
On 9/11/20 4:41 AM, Ming Lei wrote:
> NVMe shares a tagset between the fabrics queue and the admin queue, as
> well as between connect_q and the namespace queues, so hctx_may_queue()
> can be called when allocating requests for any of these queues.
>
> Tags can be reserved in these tagsets. Before error recovery there are
> often many in-flight requests that can't be completed, and a new
> reserved request may be needed on the error recovery path. However,
> hctx_may_queue() can keep returning false because of the in-flight
> requests that can't complete during error handling, so nothing can make
> progress.
>
> Fix this issue by always allowing reserved tag allocation in
> hctx_may_queue(). This is reasonable because reserved tags are supposed
> to be available at any time.

Applied, thanks.

Patch

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index c31c4a0478a5..aacf10decdbd 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -76,7 +76,8 @@ void __blk_mq_tag_idle(struct blk_mq_hw_ctx *hctx)
 static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
 			    struct sbitmap_queue *bt)
 {
-	if (!data->q->elevator && !hctx_may_queue(data->hctx, bt))
+	if (!data->q->elevator && !(data->flags & BLK_MQ_REQ_RESERVED) &&
+			!hctx_may_queue(data->hctx, bt))
 		return BLK_MQ_NO_TAG;
 
 	if (data->shallow_depth)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ccb500e38008..fb609fc38cf5 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1153,10 +1153,11 @@ static bool __blk_mq_get_driver_tag(struct request *rq)
 	if (blk_mq_tag_is_reserved(rq->mq_hctx->sched_tags, rq->internal_tag)) {
 		bt = rq->mq_hctx->tags->breserved_tags;
 		tag_offset = 0;
+	} else {
+		if (!hctx_may_queue(rq->mq_hctx, bt))
+			return false;
 	}
 
-	if (!hctx_may_queue(rq->mq_hctx, bt))
-		return false;
 	tag = __sbitmap_queue_get(bt);
 	if (tag == BLK_MQ_NO_TAG)
 		return false;
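
For reference, a condensed sketch (not verbatim kernel code) of the
control flow in __blk_mq_get_driver_tag() after this change; the setup
of bt and tag_offset is taken from the surrounding function, and
scheduler and shared-tag details are omitted:

/*
 * Condensed sketch of the post-patch control flow: reserved requests
 * take tags from breserved_tags with no fair-share check, while normal
 * requests remain subject to hctx_may_queue().
 */
static bool example_get_driver_tag(struct request *rq)
{
	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
	struct sbitmap_queue *bt = hctx->tags->bitmap_tags;
	unsigned int tag_offset = hctx->tags->nr_reserved_tags;
	int tag;

	if (blk_mq_tag_is_reserved(hctx->sched_tags, rq->internal_tag)) {
		bt = hctx->tags->breserved_tags;
		tag_offset = 0;		/* reserved tags start at 0 */
	} else if (!hctx_may_queue(hctx, bt)) {
		return false;		/* fair-share limit reached */
	}

	tag = __sbitmap_queue_get(bt);
	if (tag == BLK_MQ_NO_TAG)
		return false;

	rq->tag = tag + tag_offset;
	return true;
}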