Message ID | b9ae58ae4bc8b16a53fabd35ce163897286d856a.1597637287.git.baolin.wang@linux.alibaba.com (mailing list archive)
---|---
State | New, archived
Series | Some clean-ups for bio merge
On Mon, Aug 17, 2020 at 12:09:18PM +0800, Baolin Wang wrote:
>  		unsigned int nr_segs)
>  {
> @@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
>  	    !list_empty_careful(&ctx->rq_lists[type])) {
>  		/* default per sw-queue merge */
>  		spin_lock(&ctx->lock);
> -		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
> +		/*
> +		 * Reverse check our software queue for entries that we could
> +		 * potentially merge with. Currently includes a hand-wavy stop
> +		 * count of 8, to not spend too much time checking for merges.
> +		 */
> +		if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
> +			ctx->rq_merged++;
> +			ret = true;
> +		}
> +
>  		spin_unlock(&ctx->lock);

This adds an overly long line.  That being said the whole thing could
be nicely simplified to:

	...

	if (e && e->type->ops.bio_merge)
		return e->type->ops.bio_merge(hctx, bio, nr_segs);

	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
	    list_empty_careful(&ctx->rq_lists[hctx->type]))
		return false;

	/*
	 * Reverse check our software queue for entries that we could
	 * potentially merge with. Currently includes a hand-wavy stop count of
	 * 8, to not spend too much time checking for merges.
	 */
	spin_lock(&ctx->lock);
	ret = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs);
	if (ret)
		ctx->rq_merged++;
	spin_unlock(&ctx->lock);

Also I think it would make sense to move the locking into
blk_mq_bio_list_merge.
On Mon, Aug 17, 2020 at 08:31:53AM +0200, Christoph Hellwig wrote:
> On Mon, Aug 17, 2020 at 12:09:18PM +0800, Baolin Wang wrote:
> >  		unsigned int nr_segs)
> >  {
> > @@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
> >  	    !list_empty_careful(&ctx->rq_lists[type])) {
> >  		/* default per sw-queue merge */
> >  		spin_lock(&ctx->lock);
> > -		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
> > +		/*
> > +		 * Reverse check our software queue for entries that we could
> > +		 * potentially merge with. Currently includes a hand-wavy stop
> > +		 * count of 8, to not spend too much time checking for merges.
> > +		 */
> > +		if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
> > +			ctx->rq_merged++;
> > +			ret = true;
> > +		}
> > +
> >  		spin_unlock(&ctx->lock);
>
> This adds an overly long line.  That being said the whole thing could
> be nicely simplified to:
>
> 	...
>
> 	if (e && e->type->ops.bio_merge)
> 		return e->type->ops.bio_merge(hctx, bio, nr_segs);
>
> 	if (!(hctx->flags & BLK_MQ_F_SHOULD_MERGE) ||
> 	    list_empty_careful(&ctx->rq_lists[hctx->type]))
> 		return false;
>
> 	/*
> 	 * Reverse check our software queue for entries that we could
> 	 * potentially merge with. Currently includes a hand-wavy stop count of
> 	 * 8, to not spend too much time checking for merges.
> 	 */
> 	spin_lock(&ctx->lock);
> 	ret = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs);
> 	if (ret)
> 		ctx->rq_merged++;
> 	spin_unlock(&ctx->lock);
>
> Also I think it would make sense to move the locking into
> blk_mq_bio_list_merge.

Sure, will do in next version.
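For reference only, a minimal sketch of how that follow-up could look once the caller-side locking is dropped. This is not code from this series: the helper name blk_mq_ctx_bio_merge, its signature, and its placement are assumptions, built only from the calls quoted above.

	/*
	 * Sketch only, not from this series: fold the ctx->lock handling into
	 * one helper so __blk_mq_sched_bio_merge() no longer locks by hand.
	 * The helper name and its exact signature are assumptions.
	 */
	static bool blk_mq_ctx_bio_merge(struct request_queue *q,
					 struct blk_mq_ctx *ctx,
					 enum hctx_type type, struct bio *bio,
					 unsigned int nr_segs)
	{
		bool merged;

		/* Serialize against other updates to this software queue. */
		spin_lock(&ctx->lock);
		merged = blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio,
					       nr_segs);
		if (merged)
			ctx->rq_merged++;
		spin_unlock(&ctx->lock);

		return merged;
	}

With a helper along these lines, the per-sw-queue branch in __blk_mq_sched_bio_merge() would collapse to a single call, matching the simplification sketched in the review above.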
diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
index 8e9bafe..1cc7919 100644
--- a/block/blk-mq-sched.c
+++ b/block/blk-mq-sched.c
@@ -408,28 +408,6 @@ bool blk_mq_bio_list_merge(struct request_queue *q, struct list_head *list,
 }
 EXPORT_SYMBOL_GPL(blk_mq_bio_list_merge);
 
-/*
- * Reverse check our software queue for entries that we could potentially
- * merge with. Currently includes a hand-wavy stop count of 8, to not spend
- * too much time checking for merges.
- */
-static bool blk_mq_attempt_merge(struct request_queue *q,
-				 struct blk_mq_hw_ctx *hctx,
-				 struct blk_mq_ctx *ctx, struct bio *bio,
-				 unsigned int nr_segs)
-{
-	enum hctx_type type = hctx->type;
-
-	lockdep_assert_held(&ctx->lock);
-
-	if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
-		ctx->rq_merged++;
-		return true;
-	}
-
-	return false;
-}
-
 bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 		unsigned int nr_segs)
 {
@@ -447,7 +425,16 @@ bool __blk_mq_sched_bio_merge(struct request_queue *q, struct bio *bio,
 	    !list_empty_careful(&ctx->rq_lists[type])) {
 		/* default per sw-queue merge */
 		spin_lock(&ctx->lock);
-		ret = blk_mq_attempt_merge(q, hctx, ctx, bio, nr_segs);
+		/*
+		 * Reverse check our software queue for entries that we could
+		 * potentially merge with. Currently includes a hand-wavy stop
+		 * count of 8, to not spend too much time checking for merges.
+		 */
+		if (blk_mq_bio_list_merge(q, &ctx->rq_lists[type], bio, nr_segs)) {
+			ctx->rq_merged++;
+			ret = true;
+		}
+
 		spin_unlock(&ctx->lock);
 	}
The small blk_mq_attempt_merge() function is only called by
__blk_mq_sched_bio_merge(), so just open code it.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 block/blk-mq-sched.c | 33 ++++++++++-----------------------
 1 file changed, 10 insertions(+), 23 deletions(-)