Message ID: 20220408073916.1428590-3-yukuai3@huawei.com (mailing list archive)
State: New, archived
Series: improve tag allocation under heavy load
On 4/8/22 00:39, Yu Kuai wrote:
> bt_wait_ptr() will increase 'wait_index'; however, if blk_mq_get_tag()
> gets a tag successfully after bt_wait_ptr() is called and before
> sbitmap_prepare_to_wait() is called, then the 'ws' is skipped. This
> behavior might cause the 8 wait queues to become unbalanced.
>
> Moving bt_wait_ptr() later should reduce the problem when the disk is
> under high I/O pressure.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  block/blk-mq-tag.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 68ac23d0b640..228a0001694f 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -155,7 +155,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
>  	if (data->flags & BLK_MQ_REQ_NOWAIT)
>  		return BLK_MQ_NO_TAG;
>
> -	ws = bt_wait_ptr(bt, data->hctx);
>  	do {
>  		struct sbitmap_queue *bt_prev;
>
> @@ -174,6 +173,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
>  		if (tag != BLK_MQ_NO_TAG)
>  			break;
>
> +		ws = bt_wait_ptr(bt, data->hctx);
>  		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
>
>  		tag = __blk_mq_get_tag(data, bt);
> @@ -201,8 +201,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
>  		 */
>  		if (bt != bt_prev)
>  			sbitmap_queue_wake_up(bt_prev);
> -
> -		ws = bt_wait_ptr(bt, data->hctx);
>  	} while (1);

Is it necessary to call bt_wait_ptr() during every loop iteration, or only
if bt != bt_prev? Would calling bt_wait_ptr() only if bt != bt_prev help to
reduce unfairness further?

Thanks,

Bart.
On 2022/04/08 22:20, Bart Van Assche wrote:
> On 4/8/22 00:39, Yu Kuai wrote:
>> bt_wait_ptr() will increase 'wait_index'; however, if blk_mq_get_tag()
>> gets a tag successfully after bt_wait_ptr() is called and before
>> sbitmap_prepare_to_wait() is called, then the 'ws' is skipped. This
>> behavior might cause the 8 wait queues to become unbalanced.
>>
>> Moving bt_wait_ptr() later should reduce the problem when the disk is
>> under high I/O pressure.
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>  block/blk-mq-tag.c | 4 +---
>>  1 file changed, 1 insertion(+), 3 deletions(-)
>>
>> [...]
>
> Is it necessary to call bt_wait_ptr() during every loop iteration, or only
> if bt != bt_prev? Would calling bt_wait_ptr() only if bt != bt_prev help
> to reduce unfairness further?

Hi,

You are right, that sounds reasonable.

Thanks,
Kuai
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 68ac23d0b640..228a0001694f 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -155,7 +155,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 	if (data->flags & BLK_MQ_REQ_NOWAIT)
 		return BLK_MQ_NO_TAG;
 
-	ws = bt_wait_ptr(bt, data->hctx);
 	do {
 		struct sbitmap_queue *bt_prev;
 
@@ -174,6 +173,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		if (tag != BLK_MQ_NO_TAG)
 			break;
 
+		ws = bt_wait_ptr(bt, data->hctx);
 		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
 
 		tag = __blk_mq_get_tag(data, bt);
@@ -201,8 +201,6 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 		 */
 		if (bt != bt_prev)
 			sbitmap_queue_wake_up(bt_prev);
-
-		ws = bt_wait_ptr(bt, data->hctx);
 	} while (1);
 
 	sbitmap_finish_wait(bt, ws, &wait);
bt_wait_ptr() will increase 'wait_index'; however, if blk_mq_get_tag()
gets a tag successfully after bt_wait_ptr() is called and before
sbitmap_prepare_to_wait() is called, then the 'ws' is skipped. This
behavior might cause the 8 wait queues to become unbalanced.

Moving bt_wait_ptr() later should reduce the problem when the disk is
under high I/O pressure.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-mq-tag.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)