
cfq-iosched: fix the delay of cfq_group's vdisktime under iops mode

Message ID 1488334064-34883-1-git-send-email-houtao1@huawei.com (mailing list archive)
State New, archived

Commit Message

Hou Tao March 1, 2017, 2:07 a.m. UTC
When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
as the delay of cfq_group's vdisktime if there have been other cfq_groups
already.

When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
from jiffies to nanoseconds") could result in a large iops delay and
lead to an abnormal io schedule delay for the added cfq_group. To fix
it, we just need to revert to the old CFQ_IDLE_DELAY value: HZ / 5
when iops mode is enabled.

Cc: <stable@vger.kernel.org> # 4.8+
Signed-off-by: Hou Tao <houtao1@huawei.com>
---
 block/cfq-iosched.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Jan Kara March 2, 2017, 10:29 a.m. UTC | #1
On Wed 01-03-17 10:07:44, Hou Tao wrote:
> When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
> as the delay of cfq_group's vdisktime if there have been other cfq_groups
> already.
> 
> When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
> from jiffies to nanoseconds") could result in a large iops delay and
> lead to an abnormal io schedule delay for the added cfq_group. To fix
> it, we just need to revert to the old CFQ_IDLE_DELAY value: HZ / 5
> when iops mode is enabled.
> 
> Cc: <stable@vger.kernel.org> # 4.8+
> Signed-off-by: Hou Tao <houtao1@huawei.com>

OK, I agree my commit broke the logic in this case. Thanks for the fix.
Please also add the tag:

Fixes: 9a7f38c42c2b92391d9dabaf9f51df7cfe5608e4

I somewhat disagree with the fix though. See below:

> +static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
> +{
> +	if (!iops_mode(cfqd))
> +		return CFQ_IDLE_DELAY;
> +	else
> +		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
> +}
> +

So using nsecs_to_jiffies64(CFQ_IDLE_DELAY) when in iops mode just does not
make any sense. AFAIU the code in cfq_group_notify_queue_add(), we just want
to add the cfqg as the last one in the tree, so returning 1 from
cfq_get_cfqg_vdisktime_delay() in iops mode should be fine as well.

Frankly, vdisktime is in fixed-point precision shifted by
CFQ_SERVICE_SHIFT, so using CFQ_IDLE_DELAY does not make much sense in any
case, and just adding 1 to the maximum vdisktime should be fine in all
cases. But that would require more testing to check whether I missed
anything subtle.

								Honza

>  static void
>  cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>  {
> @@ -1380,7 +1388,8 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>  	n = rb_last(&st->rb);
>  	if (n) {
>  		__cfqg = rb_entry_cfqg(n);
> -		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
> +		cfqg->vdisktime = __cfqg->vdisktime +
> +			cfq_get_cfqg_vdisktime_delay(cfqd);
>  	} else
>  		cfqg->vdisktime = st->min_vdisktime;
>  	cfq_group_service_tree_add(st, cfqg);
> -- 
> 2.5.0
>
Hou Tao March 3, 2017, 1:20 p.m. UTC | #2
On 2017/3/2 18:29, Jan Kara wrote:
> On Wed 01-03-17 10:07:44, Hou Tao wrote:
>> When adding a cfq_group into the cfq service tree, we use CFQ_IDLE_DELAY
>> as the delay of cfq_group's vdisktime if there have been other cfq_groups
>> already.
>>
>> When cfq is under iops mode, commit 9a7f38c42c2b ("cfq-iosched: Convert
>> from jiffies to nanoseconds") could result in a large iops delay and
>> lead to an abnormal io schedule delay for the added cfq_group. To fix
>> it, we just need to revert to the old CFQ_IDLE_DELAY value: HZ / 5
>> when iops mode is enabled.
>>
>> Cc: <stable@vger.kernel.org> # 4.8+
>> Signed-off-by: Hou Tao <houtao1@huawei.com>
> 
> OK, I agree my commit broke the logic in this case. Thanks for the fix.
> Please add also tag:
> 
> Fixes: 9a7f38c42c2b92391d9dabaf9f51df7cfe5608e4
> 
> I somewhat disagree with the fix though. See below:
> 
>> +static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
>> +{
>> +	if (!iops_mode(cfqd))
>> +		return CFQ_IDLE_DELAY;
>> +	else
>> +		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
>> +}
>> +
> 
> So using nsecs_to_jiffies64(CFQ_IDLE_DELAY) when in iops mode just does not
> make any sense. AFAIU the code in cfq_group_notify_queue_add() we just want
> to add the cfqg as the last one in the tree. So returning 1 from
> cfq_get_cfqg_vdisktime_delay() in iops mode should be fine as well.
Yes, nsecs_to_jiffies64(CFQ_IDLE_DELAY) is odd here; the better way would be
to define a new macro with a value of 1 or 200 and use it directly. I still
prefer 200, to be consistent with the no-hrtimer configuration.

> Frankly, vdisktime is in fixed-point precision shifted by
> CFQ_SERVICE_SHIFT so using CFQ_IDLE_DELAY does not make much sense in any
> case and just adding 1 to maximum vdisktime should be fine in all the
> cases. But that would require more testing whether I did not miss anything
> subtle.
Although the current implementation does this, I don't think we should
add the cfq_group as the last one in the service tree. In some test cases,
I found that the delayed vdisktime of a cfq_group was smaller than its last
vdisktime when the cfq_group had been removed from the service tree, and I
think that hurts fairness. Maybe we can learn from CFS and calculate the
delay dynamically, but that would be the topic of another thread.

Regards,

Tao

> 
>>  static void
>>  cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>>  {
>> @@ -1380,7 +1388,8 @@ cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
>>  	n = rb_last(&st->rb);
>>  	if (n) {
>>  		__cfqg = rb_entry_cfqg(n);
>> -		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
>> +		cfqg->vdisktime = __cfqg->vdisktime +
>> +			cfq_get_cfqg_vdisktime_delay(cfqd);
>>  	} else
>>  		cfqg->vdisktime = st->min_vdisktime;
>>  	cfq_group_service_tree_add(st, cfqg);
>> -- 
>> 2.5.0
>>
Vivek Goyal March 3, 2017, 7:53 p.m. UTC | #3
On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:

[..]
> > Frankly, vdisktime is in fixed-point precision shifted by
> > CFQ_SERVICE_SHIFT so using CFQ_IDLE_DELAY does not make much sense in any
> > case and just adding 1 to maximum vdisktime should be fine in all the
> > cases. But that would require more testing whether I did not miss anything
> > subtle.

I think even 1 will work. But in the beginning, IIRC, I took the idea
from the CPU scheduler. Adding a value bigger than 1 allows you to add
some other group later before this group (if you want to give that group
higher priority).

Thanks
Vivek
Hou Tao March 6, 2017, 8:55 a.m. UTC | #4
Hi Vivek,

On 2017/3/4 3:53, Vivek Goyal wrote:
> On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:
> 
> [..]
>>> Frankly, vdisktime is in fixed-point precision shifted by
>>> CFQ_SERVICE_SHIFT so using CFQ_IDLE_DELAY does not make much sense in any
>>> case and just adding 1 to maximum vdisktime should be fine in all the
>>> cases. But that would require more testing whether I did not miss anything
>>> subtle.
> 
> I think even 1 will work. But in the beginning IIRC I took the idea
> from cpu scheduler. Adding a value bigger than 1 will allow you to add
> some other group later before this group. (If you want to give that group
> higher priority).
I still don't understand why using a value bigger than 1 would allow a
later-added group to have a vdisktime less than that of the first-added
group. Could you explain it in more detail?

Regards,

Tao

> Thanks
> Vivek
> 
> .
>
Vivek Goyal March 6, 2017, 1:45 p.m. UTC | #5
On Mon, Mar 06, 2017 at 04:55:25PM +0800, Hou Tao wrote:
> Hi Vivek,
> 
> On 2017/3/4 3:53, Vivek Goyal wrote:
> > On Fri, Mar 03, 2017 at 09:20:44PM +0800, Hou Tao wrote:
> > 
> > [..]
> >>> Frankly, vdisktime is in fixed-point precision shifted by
> >>> CFQ_SERVICE_SHIFT so using CFQ_IDLE_DELAY does not make much sense in any
> >>> case and just adding 1 to maximum vdisktime should be fine in all the
> >>> cases. But that would require more testing whether I did not miss anything
> >>> subtle.
> > 
> > I think even 1 will work. But in the beginning IIRC I took the idea
> > from cpu scheduler. Adding a value bigger than 1 will allow you to add
> > some other group later before this group. (If you want to give that group
> > higher priority).
> I still don't understand why using a value bigger than 1 would allow a
> later-added group to have a vdisktime less than that of the first-added
> group. Could you explain it in more detail?

The way I thought about this was as follows.

Assume the idle delay value is 5.

Say group A is the last group in the tree and has vdisktime=100, and a new
group B gets IO and gets added to the tree, say with value 105 (100 + 5). Now
another group C gets IO and gets added to the tree. Assume we want to give
C a little higher priority than group B (but not higher than A). So we
could assign it a value between 100 and 105 and it will work. But if we had
always added 1, then group A would have vdisktime 100, B would have 101, and
now C could not be put between A and B.

But this is such a corner case, I doubt it is going to matter. So changing
it to 1 might not show any effect at all.

We had the issue that groups which were not continuously backlogged would
lose their share. So I had tried implementing something that, when adding a
group, gives it a smaller vdisktime (scaled based on its weight). But that
did not help much. That's why a comment was left in there.

Vivek

Patch

diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 1379447..fdeb70b 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1361,6 +1361,14 @@  cfq_group_service_tree_add(struct cfq_rb_root *st, struct cfq_group *cfqg)
 	cfqg->vfraction = max_t(unsigned, vfr, 1);
 }
 
+static inline u64 cfq_get_cfqg_vdisktime_delay(struct cfq_data *cfqd)
+{
+	if (!iops_mode(cfqd))
+		return CFQ_IDLE_DELAY;
+	else
+		return nsecs_to_jiffies64(CFQ_IDLE_DELAY);
+}
+
 static void
 cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
 {
@@ -1380,7 +1388,8 @@  cfq_group_notify_queue_add(struct cfq_data *cfqd, struct cfq_group *cfqg)
 	n = rb_last(&st->rb);
 	if (n) {
 		__cfqg = rb_entry_cfqg(n);
-		cfqg->vdisktime = __cfqg->vdisktime + CFQ_IDLE_DELAY;
+		cfqg->vdisktime = __cfqg->vdisktime +
+			cfq_get_cfqg_vdisktime_delay(cfqd);
 	} else
 		cfqg->vdisktime = st->min_vdisktime;
 	cfq_group_service_tree_add(st, cfqg);