[v2,2/3] block: Correct comment for scale_cookie_change

Message ID 20221018111240.22612-3-shikemeng@huawei.com (mailing list archive)
State New, archived
Series A few cleanup patches for blk-iolatency.c

Commit Message

Kemeng Shi Oct. 18, 2022, 11:12 a.m. UTC
The default queue depth of an iolatency_grp is unlimited, so we scale
down quickly (halving it each time) in scale_cookie_change. Remove the
"subtract 1/16th" part, which is not accurate, and describe how we
actually scale down.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
---
 block/blk-iolatency.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
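
For reference, a minimal user-space sketch of the scaling behaviour the
updated comment describes: scaling up adds roughly 1/16th of the overall
queue depth, while scaling down halves the current depth so the default
unlimited depth converges to the target quickly. The qd value and the
scale_step() helper below are illustrative assumptions, loosely modelled
on scale_amount()/scale_change() in blk-iolatency.c, not the kernel code
itself.

#include <stdio.h>

#define UNLIMITED_DEPTH (~0U)	/* stand-in for the "unlimited" default depth */

/*
 * One scaling step: when scaling up, add roughly 1/16th of the overall
 * queue depth (at least 1); when scaling down, halve the current max
 * depth (clamped to at least 1).
 */
static unsigned int scale_step(unsigned int qd, unsigned int max_depth, int up)
{
	unsigned int scale = qd > 16 ? qd / 16 : 1;
	unsigned int old = max_depth > qd ? qd : max_depth;

	if (up)
		return old + scale > qd ? qd : old + scale;

	old >>= 1;	/* scale down quickly by halving */
	return old ? old : 1;
}

int main(void)
{
	unsigned int qd = 128;			/* overall queue depth */
	unsigned int depth = UNLIMITED_DEPTH;	/* starts unlimited */
	int i;

	/* A few down steps: unlimited collapses to qd, then keeps halving. */
	for (i = 0; i < 4; i++) {
		depth = scale_step(qd, depth, 0);
		printf("down -> %u\n", depth);
	}
	/* One up step adds ~1/16th of qd. */
	depth = scale_step(qd, depth, 1);
	printf("up   -> %u\n", depth);
	return 0;
}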

Comments

Kemeng Shi Nov. 1, 2022, 9:38 a.m. UTC | #1
Friendly ping.

on 10/18/2022 7:12 PM, Kemeng Shi wrote:
> The default queue depth of an iolatency_grp is unlimited, so we scale
> down quickly (halving it each time) in scale_cookie_change. Remove the
> "subtract 1/16th" part, which is not accurate, and describe how we
> actually scale down.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
> ---
>  block/blk-iolatency.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
> index b24d7b788ba3..2c574f98c8d1 100644
> --- a/block/blk-iolatency.c
> +++ b/block/blk-iolatency.c
> @@ -364,9 +364,11 @@ static void scale_cookie_change(struct blk_iolatency *blkiolat,
>  }
>  
>  /*
> - * Change the queue depth of the iolatency_grp.  We add/subtract 1/16th of the
> + * Change the queue depth of the iolatency_grp.  We add 1/16th of the
>   * queue depth at a time so we don't get wild swings and hopefully dial in to
> - * fairer distribution of the overall queue depth.
> + * fairer distribution of the overall queue depth.  We halve the queue depth
> + * at a time so we can scale down queue depth quickly from default unlimited
> + * to target.
>   */
>  static void scale_change(struct iolatency_grp *iolat, bool up)
>  {
>
Josef Bacik Nov. 2, 2022, 2:11 p.m. UTC | #2
On Tue, Oct 18, 2022 at 07:12:39PM +0800, Kemeng Shi wrote:
> The default queue depth of an iolatency_grp is unlimited, so we scale
> down quickly (halving it each time) in scale_cookie_change. Remove the
> "subtract 1/16th" part, which is not accurate, and describe how we
> actually scale down.
> 
> Signed-off-by: Kemeng Shi <shikemeng@huawei.com>

This is perfect, thanks Kemeng

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef

Patch

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index b24d7b788ba3..2c574f98c8d1 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -364,9 +364,11 @@  static void scale_cookie_change(struct blk_iolatency *blkiolat,
 }
 
 /*
- * Change the queue depth of the iolatency_grp.  We add/subtract 1/16th of the
+ * Change the queue depth of the iolatency_grp.  We add 1/16th of the
  * queue depth at a time so we don't get wild swings and hopefully dial in to
- * fairer distribution of the overall queue depth.
+ * fairer distribution of the overall queue depth.  We halve the queue depth
+ * at a time so we can scale down queue depth quickly from default unlimited
+ * to target.
  */
 static void scale_change(struct iolatency_grp *iolat, bool up)
 {