
[1/2] blk-mq: get rid of manual run of queue with __blk_mq_run_hw_queue()

Message ID: 20160922185603.GA32468@infradead.org (mailing list archive)
State: New, archived

Commit Message

Christoph Hellwig Sept. 22, 2016, 6:56 p.m. UTC
Ok, I looked into this a bit more, and while I'm still fine with the
patch I think it's only half of what we should do here.  There really
is no point in doing the first non-blocking pass in blk_mq_map_request
either, as bt_get itself already does a non-blocking pass and also
runs the queue when it has to schedule in its retry loop.  So to get
towards what I had in my tree we also need the patch below.
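For reference, bt_get in block/blk-mq-tag.c currently has roughly this
shape (heavily condensed paraphrase, not the verbatim source; the
wait-queue and ctx bookkeeping is left out and elided arguments are
marked with ...).  Note the non-blocking attempt up front and the
queue run before every sleep:

	static int bt_get(struct blk_mq_alloc_data *data, ...)
	{
		DEFINE_WAIT(wait);
		int tag;

		/* non-blocking fast path */
		tag = __bt_get(...);
		if (tag != -1)
			return tag;

		/* the caller asked us not to sleep */
		if (data->flags & BLK_MQ_REQ_NOWAIT)
			return -1;

		do {
			prepare_to_wait(...);

			/* retry before committing to sleep */
			tag = __bt_get(...);
			if (tag != -1)
				break;

			/*
			 * Out of tags: kick the hardware queue so that
			 * in-flight requests complete and free tags,
			 * then try once more before scheduling.
			 */
			blk_mq_run_hw_queue(data->hctx, false);

			tag = __bt_get(...);
			if (tag != -1)
				break;

			io_schedule();
		} while (1);

		finish_wait(...);
		return tag;
	}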

---
From c69bf02929d9c37d193b004a4c3c85c1142fa996 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Thu, 22 Sep 2016 11:38:23 -0700
Subject: blk-mq: remove non-blocking pass in blk_mq_map_request

bt_get already does a non-blocking pass and runs the queue when it has
to schedule internally; no need to duplicate that in blk_mq_map_request.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

Comments

Jens Axboe Sept. 22, 2016, 8:29 p.m. UTC | #1
On 09/22/2016 12:56 PM, Christoph Hellwig wrote:
> Ok, I looked into this a bit more, and while I'm still fine with the
> patch I think it's only half of what we should do here.  There really
> is no point in doing the first non-blocking pass in blk_mq_map_request
> either, as bt_get itself already does a non-blocking pass and also
> runs the queue when it has to schedule in its retry loop.  So to get
> towards what I had in my tree we also need the patch below.

Good point, I'll apply this one as well.
Sagi Grimberg Sept. 23, 2016, 9:59 p.m. UTC | #2
> ---
> From c69bf02929d9c37d193b004a4c3c85c1142fa996 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@lst.de>
> Date: Thu, 22 Sep 2016 11:38:23 -0700
> Subject: blk-mq: remove non-blocking pass in blk_mq_map_request
>
> bt_get already does a non-blocking pass and runs the queue when it has
> to schedule internally; no need to duplicate that in blk_mq_map_request.

Looks good too,

Reviewed-by: Sagi Grimberg <sagi@grimberg.me>

Question (while we're on the subject):
Do consumers have a way to keep blk-mq from blocking when it
runs out of tags? I'm thinking in the context of nvme-target,
which can do more useful things than waiting for a tag...
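For what it's worth, direct request allocation can already opt out of
blocking with BLK_MQ_REQ_NOWAIT, along these lines:

	rq = blk_mq_alloc_request(q, rw, BLK_MQ_REQ_NOWAIT);
	if (IS_ERR(rq)) {
		/* -EWOULDBLOCK, no tag available; do something
		 * more useful than waiting for one */
		return PTR_ERR(rq);
	}

but I don't see an equivalent knob for bio-based submission.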

Patch

diff --git a/block/blk-mq.c b/block/blk-mq.c
index eae2f12..e9ebe98 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1210,20 +1210,8 @@ static struct request *blk_mq_map_request(struct request_queue *q,
 		op_flags |= REQ_SYNC;
 
 	trace_block_getrq(q, bio, op);
-	blk_mq_set_alloc_data(&alloc_data, q, BLK_MQ_REQ_NOWAIT, ctx, hctx);
+	blk_mq_set_alloc_data(&alloc_data, q, 0, ctx, hctx);
 	rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
-	if (unlikely(!rq)) {
-		blk_mq_run_hw_queue(hctx, false);
-		blk_mq_put_ctx(ctx);
-		trace_block_sleeprq(q, bio, op);
-
-		ctx = blk_mq_get_ctx(q);
-		hctx = q->mq_ops->map_queue(q, ctx->cpu);
-		blk_mq_set_alloc_data(&alloc_data, q, 0, ctx, hctx);
-		rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);
-		ctx = alloc_data.ctx;
-		hctx = alloc_data.hctx;
-	}
 
 	hctx->queued++;
 	data->hctx = hctx;
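
With the hunk applied, the allocation path in blk_mq_map_request()
reduces to the following (reconstructed from the context lines above):

	trace_block_getrq(q, bio, op);
	blk_mq_set_alloc_data(&alloc_data, q, 0, ctx, hctx);
	rq = __blk_mq_alloc_request(&alloc_data, op, op_flags);

	hctx->queued++;
	data->hctx = hctx;

Any blocking for a tag now happens in one place, inside the tag
allocator itself.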