Message ID | 20180628031918.17694-2-ming.lei@redhat.com (mailing list archive) |
---|---|
State | New, archived |
Hi Ming,

On Thu, Jun 28, 2018 at 11:19:16AM +0800, Ming Lei wrote:
> +	list_for_each_entry(rq, list, queuelist) {
>  		BUG_ON(rq->mq_ctx != ctx);
> -		list_del_init(&rq->queuelist);
> -		__blk_mq_insert_req_list(hctx, rq, false);
> +		trace_block_rq_insert(hctx->queue, rq);
>  	}

I wonder if we really need the above loop unconditionally. It does
some BUG_ON() sanity checking (which I hate but it was already there
so not your problem) and tracing of the request insertion.

So can we maybe only run this loop if tracing is enabled? Not sure if
this is possible though. Maybe Steven (Cced) can help here.

Byte,
	Johannes
On 6/28/18 1:27 AM, Johannes Thumshirn wrote:
> Hi Ming,
>
> On Thu, Jun 28, 2018 at 11:19:16AM +0800, Ming Lei wrote:
>> +	list_for_each_entry(rq, list, queuelist) {
>>  		BUG_ON(rq->mq_ctx != ctx);
>> -		list_del_init(&rq->queuelist);
>> -		__blk_mq_insert_req_list(hctx, rq, false);
>> +		trace_block_rq_insert(hctx->queue, rq);
>>  	}
>
> I wonder if we really need the above loop unconditionally. It does
> some BUG_ON() sanity checking (which I hate but it was already there
> so not your problem) and tracing of the request insertion.

We can probably kill that now, but jfyi, this was introduced because
of a specific issue.

> So can we maybe only run this loop if tracing is enabled? Not sure if
> this is possible though. Maybe Steven (Cced) can help here.

Don't think it is, since we can be tracing through a variety of
different ways. I think it's best to leave this as-is. If this were to
be changed, it'd have to be a separate patch anyway, it should not be
mixed up with this change.

It does need the list_splice_tail_init() as others found out, or
you'll leave the source list corrupted.
On Thu, 28 Jun 2018 09:27:08 +0200
Johannes Thumshirn <jthumshirn@suse.de> wrote:

> Hi Ming,
>
> On Thu, Jun 28, 2018 at 11:19:16AM +0800, Ming Lei wrote:
> > +	list_for_each_entry(rq, list, queuelist) {
> >  		BUG_ON(rq->mq_ctx != ctx);
> > -		list_del_init(&rq->queuelist);
> > -		__blk_mq_insert_req_list(hctx, rq, false);
> > +		trace_block_rq_insert(hctx->queue, rq);
> >  	}
>
> I wonder if we really need the above loop unconditionally. It does
> some BUG_ON() sanity checking (which I hate but it was already there
> so not your problem) and tracing of the request insertion.
>
> So can we maybe only run this loop if tracing is enabled? Not sure if
> this is possible though. Maybe Steven (Cced) can help here.

Yes:

	if (trace_block_rq_insert_enabled()) {
		list_for_each_entry(rq, list, queuelist) {
			BUG_ON(rq->mq_ctx != ctx);
			list_del_init(&rq->queuelist);
			__blk_mq_insert_req_list(hctx, rq, false);
			trace_block_rq_insert(hctx->queue, rq);
		}
	}

This will only call the loop if the trace event "block_rq_insert" has
been activated. It also uses the jump label infrastructure, so that if
statement is a non-conditional branch (static_key_false()).

-- Steve
On 6/28/18 1:52 PM, Steven Rostedt wrote:
> On Thu, 28 Jun 2018 09:27:08 +0200
> Johannes Thumshirn <jthumshirn@suse.de> wrote:
>
>> Hi Ming,
>>
>> On Thu, Jun 28, 2018 at 11:19:16AM +0800, Ming Lei wrote:
>>> +	list_for_each_entry(rq, list, queuelist) {
>>>  		BUG_ON(rq->mq_ctx != ctx);
>>> -		list_del_init(&rq->queuelist);
>>> -		__blk_mq_insert_req_list(hctx, rq, false);
>>> +		trace_block_rq_insert(hctx->queue, rq);
>>>  	}
>>
>> I wonder if we really need the above loop unconditionally. It does
>> some BUG_ON() sanity checking (which I hate but it was already there
>> so not your problem) and tracing of the request insertion.
>>
>> So can we maybe only run this loop if tracing is enabled? Not sure if
>> this is possible though. Maybe Steven (Cced) can help here.
>
> Yes:
>
> 	if (trace_block_rq_insert_enabled()) {
> 		list_for_each_entry(rq, list, queuelist) {
> 			BUG_ON(rq->mq_ctx != ctx);
> 			list_del_init(&rq->queuelist);
> 			__blk_mq_insert_req_list(hctx, rq, false);
> 			trace_block_rq_insert(hctx->queue, rq);
> 		}
> 	}
>
> This will only call the loop if the trace event "block_rq_insert" has
> been activated. It also uses the jump label infrastructure, so that if
> statement is a non-conditional branch (static_key_false()).

This works for both ftrace and blktrace?
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 70c65bb6c013..20b0519cb3b8 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1533,19 +1533,19 @@ void blk_mq_insert_requests(struct blk_mq_hw_ctx *hctx, struct blk_mq_ctx *ctx,
 			    struct list_head *list)
 
 {
+	struct request *rq;
+
 	/*
 	 * preemption doesn't flush plug list, so it's possible ctx->cpu is
 	 * offline now
 	 */
-	spin_lock(&ctx->lock);
-	while (!list_empty(list)) {
-		struct request *rq;
-
-		rq = list_first_entry(list, struct request, queuelist);
+	list_for_each_entry(rq, list, queuelist) {
 		BUG_ON(rq->mq_ctx != ctx);
-		list_del_init(&rq->queuelist);
-		__blk_mq_insert_req_list(hctx, rq, false);
+		trace_block_rq_insert(hctx->queue, rq);
 	}
+
+	spin_lock(&ctx->lock);
+	list_splice_tail(list, &ctx->rq_list);
 	blk_mq_hctx_mark_pending(hctx, ctx);
 	spin_unlock(&ctx->lock);
 }
list_splice_tail() is much faster than inserting each request one by
one, given that all requests in 'list' belong to the same sw queue and
ctx->lock is required to insert requests.

Cc: Kashyap Desai <kashyap.desai@broadcom.com>
Cc: Laurence Oberman <loberman@redhat.com>
Cc: Omar Sandoval <osandov@fb.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Bart Van Assche <bart.vanassche@wdc.com>
Reported-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 block/blk-mq.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)