| Message ID | 20221018111240.22612-3-shikemeng@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | A few cleanup patches for blk-iolatency.c |
Friendly ping.

on 10/18/2022 7:12 PM, Kemeng Shi wrote:
> Default queue depth of iolatency_grp is unlimited, so we scale down
> quickly (once by half) in scale_cookie_change. Remove the "subtract
> 1/16th" part, which is not accurate, and describe how we actually
> scale down.
>
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
> ---
>  block/blk-iolatency.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
> index b24d7b788ba3..2c574f98c8d1 100644
> --- a/block/blk-iolatency.c
> +++ b/block/blk-iolatency.c
> @@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
>  }
>
>  /*
> - * Change the queue depth of the iolatency_grp. We add/subtract 1/16th of the
> + * Change the queue depth of the iolatency_grp. We add 1/16th of the
>   * queue depth at a time so we don't get wild swings and hopefully dial in to
> - * fairer distribution of the overall queue depth.
> + * fairer distribution of the overall queue depth. We halve the queue depth
> + * at a time so we can scale down queue depth quickly from default unlimited
> + * to target.
>   */
>  static void scale_change(struct iolatency_grp *iolat, bool up)
>  {
>
On Tue, Oct 18, 2022 at 07:12:39PM +0800, Kemeng Shi wrote:
> Default queue depth of iolatency_grp is unlimited, so we scale down
> quickly (once by half) in scale_cookie_change. Remove the "subtract
> 1/16th" part, which is not accurate, and describe how we actually
> scale down.
>
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>

This is perfect, thanks Kemeng

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index b24d7b788ba3..2c574f98c8d1 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
 }
 
 /*
- * Change the queue depth of the iolatency_grp. We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp. We add 1/16th of the
  * queue depth at a time so we don't get wild swings and hopefully dial in to
- * fairer distribution of the overall queue depth.
+ * fairer distribution of the overall queue depth. We halve the queue depth
+ * at a time so we can scale down queue depth quickly from default unlimited
+ * to target.
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {
Default queue depth of iolatency_grp is unlimited, so we scale down
quickly (once by half) in scale_cookie_change. Remove the "subtract
1/16th" part, which is not accurate, and describe how we actually
scale down.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/blk-iolatency.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
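For context, below is a minimal userspace sketch of the scaling policy the corrected comment describes: scale up in 1/16th-of-queue-depth increments (at least 1), scale down by halving so the default unlimited depth converges quickly toward the target. The function name, the demo harness, and the constants are assumptions for illustration only, not the kernel's scale_change() implementation.

#include <limits.h>
#include <stdio.h>

/*
 * Illustrative sketch of the scaling policy described in the patch comment:
 * scale up by 1/16th of the overall queue depth per step (at least 1),
 * scale down by halving so a default "unlimited" depth converges quickly.
 * Names and this harness are assumptions, not the kernel's scale_change().
 */
static unsigned long scale_step(unsigned long depth, unsigned long qd, int up)
{
	/* A depth above the overall queue depth behaves like "unlimited". */
	if (depth > qd)
		depth = qd;

	if (up) {
		unsigned long scale = qd / 16;

		if (scale == 0)
			scale = 1;
		depth += scale;
		if (depth > qd)
			depth = qd;
	} else {
		depth >>= 1;		/* halve on every scale-down step */
		if (depth == 0)
			depth = 1;	/* never throttle below a depth of 1 */
	}
	return depth;
}

int main(void)
{
	unsigned long qd = 128;			/* overall queue depth */
	unsigned long depth = ULONG_MAX;	/* default: unlimited */
	int i;

	for (i = 0; i < 3; i++) {		/* halving converges fast */
		depth = scale_step(depth, qd, 0);
		printf("scale down: depth=%lu\n", depth);
	}
	for (i = 0; i < 2; i++) {		/* scaling up moves gently */
		depth = scale_step(depth, qd, 1);
		printf("scale up:   depth=%lu\n", depth);
	}
	return 0;
}

With qd = 128, three scale-down steps print 64, 32 and 16, while the two scale-up steps move in increments of 8 (128 / 16) to 24 and then 32, which is the contrast between the two directions that the updated comment spells out.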