Message ID | 1542185131-15029-2-git-send-email-jianchao.w.wang@oracle.com (mailing list archive)
---|---
State | New, archived
Series | blk-mq: refactor and fix on issue request directly
On Wed, Nov 14, 2018 at 04:45:28PM +0800, Jianchao Wang wrote:
> Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
> into one interface to unify the interfaces to issue requests
> directly. The merged interface takes over the requests totally,
> it could insert, end or do nothing based on the return value of
> .queue_rq and 'bypass' parameter. Then caller needn't any other
> handling any more.
>
> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
> ---
>  block/blk-mq.c | 93 ++++++++++++++++++++++++++++------------------------
>  1 file changed, 45 insertions(+), 48 deletions(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 411be60..14b4d06 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1766,78 +1766,75 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
>  	return ret;
>  }
>
> -static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
> +static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
> 					struct request *rq,
> 					blk_qc_t *cookie,
> -					bool bypass_insert)
> +					bool bypass)
>  {
>  	struct request_queue *q = rq->q;
>  	bool run_queue = true;
> +	blk_status_t ret = BLK_STS_RESOURCE;
> +	int srcu_idx;
>
> +	hctx_lock(hctx, &srcu_idx);
>  	/*
> -	 * RCU or SRCU read lock is needed before checking quiesced flag.
> +	 * hctx_lock is needed before checking quiesced flag.
>  	 *
> -	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
> -	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
> -	 * and avoid driver to try to dispatch again.
> +	 * When queue is stopped or quiesced, ignore 'bypass', insert
> +	 * and return BLK_STS_OK to caller, and avoid driver to try to
> +	 * dispatch again.
>  	 */
> -	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
> +	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
>  		run_queue = false;
> -		bypass_insert = false;
> -		goto insert;
> +		bypass = false;
> +		goto out_unlock;
>  	}
>
> -	if (q->elevator && !bypass_insert)
> -		goto insert;
> +	/*
> +	 * Bypass the potential scheduler on the bottom device.
> +	 */
> +	if (unlikely(q->elevator && !bypass))
> +		goto out_unlock;
>
> -	if (!blk_mq_get_dispatch_budget(hctx))
> -		goto insert;
> +	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
> +		goto out_unlock;

The unlikely annotation is a bit misleading, since out-of-budget can
happen frequently in case of low queue depth, and there are lots of
such examples.

>
> -	if (!blk_mq_get_driver_tag(rq)) {
> +	if (unlikely(!blk_mq_get_driver_tag(rq))) {

Same with above.

Thanks,
Ming
Hi Ming

On 11/14/18 5:11 PM, Ming Lei wrote:
>>
>> -	if (!blk_mq_get_dispatch_budget(hctx))
>> -		goto insert;
>> +	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
>> +		goto out_unlock;
> The unlikely annotation is a bit misleading, since out-of-budget can
> happen frequently in case of low queue depth, and there are lots of
> such examples.
>

This could be good for the case for no .get_budget and getting budget success.
In case of out-of-budget, we insert the request which is slow path.
It should be OK.

Maybe some comment should be added for this.

Thanks
Jianchao
On Wed, Nov 14, 2018 at 05:23:48PM +0800, jianchao.wang wrote:
> Hi Ming
>
> On 11/14/18 5:11 PM, Ming Lei wrote:
> >>
> >> -	if (!blk_mq_get_dispatch_budget(hctx))
> >> -		goto insert;
> >> +	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
> >> +		goto out_unlock;
> > The unlikely annotation is a bit misleading, since out-of-budget can
> > happen frequently in case of low queue depth, and there are lots of
> > such examples.
> >
>
> This could be good for the case for no .get_budget and getting budget success.
> In case of out-of-budget, we insert the request which is slow path.

In case of low queue depth, it is hard to say that 'insert request' is
done in slow path, cause it happens quite frequently.

I suggest to remove these two unlikely() since modern CPU's branch prediction
should work well enough.

Especially the annotation of unlikely() often means that this branch is
missed in most of times for all settings, and it is obviously not true
in this case.

thanks,
Ming
On 11/14/18 2:43 AM, Ming Lei wrote:
> On Wed, Nov 14, 2018 at 05:23:48PM +0800, jianchao.wang wrote:
>> Hi Ming
>>
>> On 11/14/18 5:11 PM, Ming Lei wrote:
>>>>
>>>> -	if (!blk_mq_get_dispatch_budget(hctx))
>>>> -		goto insert;
>>>> +	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
>>>> +		goto out_unlock;
>>> The unlikely annotation is a bit misleading, since out-of-budget can
>>> happen frequently in case of low queue depth, and there are lots of
>>> such examples.
>>>
>>
>> This could be good for the case for no .get_budget and getting budget success.
>> In case of out-of-budget, we insert the request which is slow path.
>
> In case of low queue depth, it is hard to say that 'insert request' is
> done in slow path, cause it happens quite frequently.
>
> I suggest to remove these two unlikely() since modern CPU's branch prediction
> should work well enough.
>
> Especially the annotation of unlikely() often means that this branch is
> missed in most of times for all settings, and it is obviously not true
> in this case.

Agree, unlikely() should only be used for the error handling case or
similar that does indeed almost never trigger. It should not be used
for cases that don't trigger a lot in "most" circumstances.
On 11/14/18 11:22 PM, Jens Axboe wrote:
> On 11/14/18 2:43 AM, Ming Lei wrote:
>> On Wed, Nov 14, 2018 at 05:23:48PM +0800, jianchao.wang wrote:
>>> Hi Ming
>>>
>>> On 11/14/18 5:11 PM, Ming Lei wrote:
>>>>>
>>>>> -	if (!blk_mq_get_dispatch_budget(hctx))
>>>>> -		goto insert;
>>>>> +	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
>>>>> +		goto out_unlock;
>>>> The unlikely annotation is a bit misleading, since out-of-budget can
>>>> happen frequently in case of low queue depth, and there are lots of
>>>> such examples.
>>>>
>>>
>>> This could be good for the case for no .get_budget and getting budget success.
>>> In case of out-of-budget, we insert the request which is slow path.
>>
>> In case of low queue depth, it is hard to say that 'insert request' is
>> done in slow path, cause it happens quite frequently.
>>
>> I suggest to remove these two unlikely() since modern CPU's branch prediction
>> should work well enough.
>>
>> Especially the annotation of unlikely() often means that this branch is
>> missed in most of times for all settings, and it is obviously not true
>> in this case.
>
> Agree, unlikely() should only be used for the error handling case or
> similar that does indeed almost never trigger. It should not be used
> for cases that don't trigger a lot in "most" circumstances.
>

That's really appreciated for all of your kindly response.
Fair enough with 'unlikely'. I will remove these two wrong 'unlikely'
in next version.

Thanks
Jianchao
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 411be60..14b4d06 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1766,78 +1766,75 @@ static blk_status_t __blk_mq_issue_directly(struct blk_mq_hw_ctx *hctx,
 	return ret;
 }
 
-static blk_status_t __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+static blk_status_t blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
 					struct request *rq,
 					blk_qc_t *cookie,
-					bool bypass_insert)
+					bool bypass)
 {
 	struct request_queue *q = rq->q;
 	bool run_queue = true;
+	blk_status_t ret = BLK_STS_RESOURCE;
+	int srcu_idx;
 
+	hctx_lock(hctx, &srcu_idx);
 	/*
-	 * RCU or SRCU read lock is needed before checking quiesced flag.
+	 * hctx_lock is needed before checking quiesced flag.
 	 *
-	 * When queue is stopped or quiesced, ignore 'bypass_insert' from
-	 * blk_mq_request_issue_directly(), and return BLK_STS_OK to caller,
-	 * and avoid driver to try to dispatch again.
+	 * When queue is stopped or quiesced, ignore 'bypass', insert
+	 * and return BLK_STS_OK to caller, and avoid driver to try to
+	 * dispatch again.
 	 */
-	if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q)) {
+	if (unlikely(blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(q))) {
 		run_queue = false;
-		bypass_insert = false;
-		goto insert;
+		bypass = false;
+		goto out_unlock;
 	}
 
-	if (q->elevator && !bypass_insert)
-		goto insert;
+	/*
+	 * Bypass the potential scheduler on the bottom device.
+	 */
+	if (unlikely(q->elevator && !bypass))
+		goto out_unlock;
 
-	if (!blk_mq_get_dispatch_budget(hctx))
-		goto insert;
+	if (unlikely(!blk_mq_get_dispatch_budget(hctx)))
+		goto out_unlock;
 
-	if (!blk_mq_get_driver_tag(rq)) {
+	if (unlikely(!blk_mq_get_driver_tag(rq))) {
 		blk_mq_put_dispatch_budget(hctx);
-		goto insert;
+		goto out_unlock;
 	}
 
-	return __blk_mq_issue_directly(hctx, rq, cookie);
-insert:
-	if (bypass_insert)
-		return BLK_STS_RESOURCE;
-
-	blk_mq_sched_insert_request(rq, false, run_queue, false);
-	return BLK_STS_OK;
-}
-
-static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
-		struct request *rq, blk_qc_t *cookie)
-{
-	blk_status_t ret;
-	int srcu_idx;
-
-	might_sleep_if(hctx->flags & BLK_MQ_F_BLOCKING);
+	ret = __blk_mq_issue_directly(hctx, rq, cookie);
 
-	hctx_lock(hctx, &srcu_idx);
+out_unlock:
+	hctx_unlock(hctx, srcu_idx);
 
-	ret = __blk_mq_try_issue_directly(hctx, rq, cookie, false);
-	if (ret == BLK_STS_RESOURCE || ret == BLK_STS_DEV_RESOURCE)
-		blk_mq_sched_insert_request(rq, false, true, false);
-	else if (ret != BLK_STS_OK)
-		blk_mq_end_request(rq, ret);
+	switch (ret) {
+	case BLK_STS_OK:
+		break;
+	case BLK_STS_DEV_RESOURCE:
+	case BLK_STS_RESOURCE:
+		if (!bypass) {
+			blk_mq_sched_insert_request(rq, false, run_queue, false);
+			ret = BLK_STS_OK;
+		}
+		break;
+	default:
+		if (!bypass) {
+			blk_mq_end_request(rq, ret);
+			ret = BLK_STS_OK;
+		}
+		break;
+	}
 
-	hctx_unlock(hctx, srcu_idx);
+	return ret;
 }
 
 blk_status_t blk_mq_request_issue_directly(struct request *rq)
 {
-	blk_status_t ret;
-	int srcu_idx;
 	blk_qc_t unused_cookie;
-	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;
 
-	hctx_lock(hctx, &srcu_idx);
-	ret = __blk_mq_try_issue_directly(hctx, rq, &unused_cookie, true);
-	hctx_unlock(hctx, srcu_idx);
-
-	return ret;
+	return blk_mq_try_issue_directly(rq->mq_hctx, rq, &unused_cookie, true);
 }
 
 void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
@@ -1958,13 +1955,13 @@ static blk_qc_t blk_mq_make_request(struct request_queue *q, struct bio *bio)
 		if (same_queue_rq) {
 			data.hctx = same_queue_rq->mq_hctx;
 			blk_mq_try_issue_directly(data.hctx, same_queue_rq,
-					&cookie);
+					&cookie, false);
 		}
 	} else if ((q->nr_hw_queues > 1 && is_sync) ||
 			(!q->elevator && !data.hctx->dispatch_busy)) {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
-		blk_mq_try_issue_directly(data.hctx, rq, &cookie);
+		blk_mq_try_issue_directly(data.hctx, rq, &cookie, false);
 	} else {
 		blk_mq_put_ctx(data.ctx);
 		blk_mq_bio_to_request(rq, bio);
Merge blk_mq_try_issue_directly and __blk_mq_try_issue_directly
into one interface to unify the interfaces to issue requests
directly. The merged interface takes over the requests totally,
it could insert, end or do nothing based on the return value of
.queue_rq and 'bypass' parameter. Then caller needn't any other
handling any more.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
---
 block/blk-mq.c | 93 ++++++++++++++++++++++++++++------------------------
 1 file changed, 45 insertions(+), 48 deletions(-)