
[v2,1/2] cpufreq: Make iowait boost a policy option

Message ID 20170519062344.27692-2-joelaf@google.com (mailing list archive)
State Changes Requested, archived

Commit Message

Joel Fernandes May 19, 2017, 6:23 a.m. UTC
Make iowait boost a cpufreq policy option and enable it for intel_pstate
cpufreq driver. Governors like schedutil can use it to determine if
boosting for tasks that wake up with p->in_iowait set is needed.

Cc: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Rafael J. Wysocki <rjw@rjwysocki.net>
Cc: Viresh Kumar <viresh.kumar@linaro.org>
Cc: Ingo Molnar <mingo@redhat.com> 
Cc: Peter Zijlstra <peterz@infradead.org> 
Signed-off-by: Joel Fernandes <joelaf@google.com>
---
 drivers/cpufreq/intel_pstate.c | 1 +
 include/linux/cpufreq.h        | 3 +++
 2 files changed, 4 insertions(+)

Comments

Peter Zijlstra May 19, 2017, 9:42 a.m. UTC | #1
On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
> Make iowait boost a cpufreq policy option and enable it for intel_pstate
> cpufreq driver. Governors like schedutil can use it to determine if
> boosting for tasks that wake up with p->in_iowait set is needed.

Rather than just flat out disabling the option, is there something
better we can do on ARM?

The reason for the IO-wait boost is to ensure we feed our external
devices data ASAP; this reduces wait times, increases throughput and
decreases the duration the devices have to operate.

I realize max freq/volt might not be the best option for you, but is
there another spot that would make sense? I can imagine you want to
return your MMC to low power state ASAP as well.


So rather than a disable flag, I would really rather see an IO-wait OPP
state selector or something.
Peter Zijlstra May 19, 2017, 10:21 a.m. UTC | #2
On Fri, May 19, 2017 at 11:42:45AM +0200, Peter Zijlstra wrote:
> On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
> > Make iowait boost a cpufreq policy option and enable it for intel_pstate
> > cpufreq driver. Governors like schedutil can use it to determine if
> > boosting for tasks that wake up with p->in_iowait set is needed.
> 
> Rather than just flat out disabling the option, is there something
> better we can do on ARM?
> 
> The reason for the IO-wait boost is to ensure we feed our external
> devices data ASAP; this reduces wait times, increases throughput and
> decreases the duration the devices have to operate.
> 
> I realize max freq/volt might not be the best option for you, but is
> there another spot that would make sense? I can imagine you want to
> return your MMC to low power state ASAP as well.
> 
> 
> So rather than a disable flag, I would really rather see an IO-wait OPP
> state selector or something.

It would be even better if we can determine that point from the power
model data.
Joel Fernandes May 19, 2017, 5:04 p.m. UTC | #3
Hi Peter,

On Fri, May 19, 2017 at 2:42 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
>> Make iowait boost a cpufreq policy option and enable it for intel_pstate
>> cpufreq driver. Governors like schedutil can use it to determine if
>> boosting for tasks that wake up with p->in_iowait set is needed.
>
> Rather than just flat out disabling the option, is there something
> better we can do on ARM?
>
> The reason for the IO-wait boost is to ensure we feed our external
> devices data ASAP; this reduces wait times, increases throughput and
> decreases the duration the devices have to operate.

Can you help me understand how CPU frequency can affect I/O? The ASAP
makes me think of it as more of a latency thing than a throughput
thing, in which case shouldn't there be a scheduling priority increase
instead? Also, to me it sounds more like memory frequency, rather than
CPU frequency, should be boosted so that DMA transfers happen quicker
to feed devices data faster.

Are you trying to boost the CPU frequency so that a process waiting on
I/O does its next set of processing quickly enough after iowaiting on
the previous I/O transaction, and is ready to feed I/O the next time
sooner?

The case I'm seeing a lot is that a background thread does an I/O
request, blocks for a short period, and wakes up. All this happens
while the CPU frequency is low, but the wake-up causes a spike in
frequency. So over a period of time you see these spikes that don't
really help anything.

>
> I realize max freq/volt might not be the best option for you, but is
> there another spot that would make sense? I can imagine you want to
> return your MMC to low power state ASAP as well.
>
>
> So rather than a disable flag, I would really rather see an IO-wait OPP
> state selector or something.

We never had this in older kernels and I don't think we ever had an
issue where I/O was slow because of CPU frequency. If a task is busy a
lot, then its load tracking signal should be high and take care of
keeping CPU frequency high, right? If PELT is decaying the load
tracking of iowaiting tasks too much, then I think that it should be
fixed there (probably decay an iowaiting task's signal less?).
Considering that it makes power worse on newer kernels, it'd probably
be best, in my opinion, to disable it for those who don't need it.

thanks,

-Joel
Peter Zijlstra May 22, 2017, 8:21 a.m. UTC | #4
On Fri, May 19, 2017 at 10:04:28AM -0700, Joel Fernandes wrote:
> Hi Peter,
> 
> On Fri, May 19, 2017 at 2:42 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> > On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
> >> Make iowait boost a cpufreq policy option and enable it for intel_pstate
> >> cpufreq driver. Governors like schedutil can use it to determine if
> >> boosting for tasks that wake up with p->in_iowait set is needed.
> >
> > Rather than just flat out disabling the option, is there something
> > better we can do on ARM?
> >
> > The reason for the IO-wait boost is to ensure we feed our external
> > devices data ASAP; this reduces wait times, increases throughput and
> > decreases the duration the devices have to operate.
> 
> Can you help me understand how CPU frequency can affect I/O? The ASAP
> makes me think of it as more of a latency thing than a throughput
> thing, in which case shouldn't there be a scheduling priority increase
> instead? Also, to me it sounds more like memory frequency, rather than
> CPU frequency, should be boosted so that DMA transfers happen quicker
> to feed devices data faster.

Suppose your (I/O) device has the task waiting for a completion for 1ms
for each request. Further suppose that feeding it the next request takes
.1ms at full speed (1 GHz).

Then we get, without contending tasks, a cycle of:


 R----------R----------					(1 GHz)


Which comes at 1/11-th utilization, which would then select something
like 100 MHz as being sufficient. But then the R part becomes 10x longer
and we end up with:


 RRRRRRRRRR----------RRRRRRRRRR----------		(100 MHz)


And since there's still plenty idle time, and the effective utilization
is still the same 1/11th, we'll not ramp up at all and continue in this
cycle.

Note however that the total time of the cycle increased from 1.1ms
to 2ms: an ~80% increase in cycle time, i.e. a ~45% drop in throughput.
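
To make the arithmetic concrete, here is a back-of-the-envelope sketch
(plain C; the 1ms completion time and the 0.1ms of work at 1 GHz are the
assumptions from the example above, not measured values):

#include <stdio.h>

/*
 * Model of the cycle above: 0.1ms of CPU work at full speed (1 GHz)
 * per request, followed by a 1ms wait for the device.
 */
int main(void)
{
	const double wait_ms = 1.0;		/* device completion time */
	const double work_full_ms = 0.1;	/* CPU work per request at fmax */
	const double fmax_mhz = 1000.0;
	const double freqs_mhz[] = { 1000.0, 100.0 };

	for (int i = 0; i < 2; i++) {
		double work_ms = work_full_ms * fmax_mhz / freqs_mhz[i];
		double cycle_ms = work_ms + wait_ms;
		/* frequency-invariant utilization: busy time scaled by f/fmax */
		double util = work_full_ms / cycle_ms;

		printf("%4.0f MHz: cycle=%.1fms util=%.2f tput=%.2f req/ms\n",
		       freqs_mhz[i], cycle_ms, util, 1.0 / cycle_ms);
	}
	return 0;
}

Either way the invariant utilization stays far too low to ever select a
higher OPP (0.09 at 1 GHz, 0.05 at 100 MHz), while throughput drops from
~0.91 to 0.50 requests/ms.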

> Are you trying to boost the CPU frequency so that a process waiting on
> I/O does its next set of processing quickly enough after iowaiting on
> the previous I/O transaction, and is ready to feed I/O the next time
> sooner?

This. So we break the above pattern by boosting the task that wakes from
IO-wait. Its utilization will never be enough to cause a significant
bump in frequency on its own, as it's constantly blocked on the IO
device.

> The case I'm seeing a lot is that a background thread does an I/O
> request, blocks for a short period, and wakes up. All this happens
> while the CPU frequency is low, but the wake-up causes a spike in
> frequency. So over a period of time you see these spikes that don't
> really help anything.

So the background thread is doing some spurious IO but nothing
consistent?

> > I realize max freq/volt might not be the best option for you, but is
> > there another spot that would make sense? I can imagine you want to
> > return your MMC to low power state ASAP as well.
> >
> >
> > So rather than a disable flag, I would really rather see an IO-wait OPP
> > state selector or something.
> 
> We never had this in older kernels and I don't think we ever had an
> issue where I/O was slow because of CPU frequency. If a task is busy a
> lot, then its load tracking signal should be high and take care of
> keeping CPU frequency high right?

As per the above, no. If the device completion takes long enough to
inject enough idle time, the utilization signal will never be high
enough to break out of that pattern.

> If PELT is decaying the load
> tracking of iowaiting tasks too much, then I think that it should be
> fixed there (probably decay an iowaiting task's signal less?).

For the above to work, we'd have to completely discard IO-wait time on
the utilization signal. But that would then give the task u=1, which
would be incorrect for placement decisions and wreck EAS.

> Considering
> that it makes power worse on newer kernels, it'd probably be best to
> disable it in my opinion for those who don't need it.

You have yet to convince me you don't need it. Sure Android might not
have many IO-heavy workloads, but that's not to say nothing on ARM ever
does.

Also note that if you set the boost OPP to the lowest OPP you
effectively do disable it.

Looking at the code, it appears we already have this in
iowait_boost_max.
Joel Fernandes May 24, 2017, 8:17 p.m. UTC | #5
Hi Peter,

On Mon, May 22, 2017 at 1:21 AM, Peter Zijlstra <peterz@infradead.org> wrote:
[..]
>> On Fri, May 19, 2017 at 2:42 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>> > On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
>> >> Make iowait boost a cpufreq policy option and enable it for intel_pstate
>> >> cpufreq driver. Governors like schedutil can use it to determine if
>> >> boosting for tasks that wake up with p->in_iowait set is needed.
>> >
>> > Rather than just flat out disabling the option, is there something
>> > better we can do on ARM?
>> >
>> > The reason for the IO-wait boost is to ensure we feed our external
>> > devices data ASAP; this reduces wait times, increases throughput and
>> > decreases the duration the devices have to operate.
>>
>> Can you help me understand how CPU frequency can affect I/O? The ASAP
>> makes me think of it as more of a latency thing than a throughput
>> thing, in which case shouldn't there be a scheduling priority increase
>> instead? Also, to me it sounds more like memory frequency, rather than
>> CPU frequency, should be boosted so that DMA transfers happen quicker
>> to feed devices data faster.
>
> Suppose your (I/O) device has the task waiting for a completion for 1ms
> for each request. Further suppose that feeding it the next request takes
> .1ms at full speed (1 GHz).
>
> Then we get, without contending tasks, a cycle of:
>
>
>  R----------R----------                                 (1 GHz)
>
>
> Which comes at 1/11-th utilization, which would then select something
> like 100 MHz as being sufficient. But then the R part becomes 10x longer
> and we end up with:
>
>
>  RRRRRRRRRR----------RRRRRRRRRR----------               (100 MHz)
>
>
> And since there's still plenty idle time, and the effective utilization
> is still the same 1/11th, we'll not ramp up at all and continue in this
> cycle.
>
> Note however that the total time of the cycle increased from 1.1ms
> to 2ms: an ~80% increase in cycle time, i.e. a ~45% drop in throughput.

Got it, thanks for the explanation.

>> Are you trying to boost the CPU frequency so that a process waiting on
>> I/O does its next set of processing quickly enough after iowaiting on
>> the previous I/O transaction, and is ready to feed I/O the next time
>> sooner?
>
> This. So we break the above pattern by boosting the task that wakes from
> IO-wait. Its utilization will never be enough to cause a significant
> bump in frequency on its own, as it's constantly blocked on the IO
> device.

It sounds like this problem can happen with any other use-case where
one task blocks on another, not just IO. Take a case where two tasks
running on different CPUs block on a mutex: either task can end up
waiting on the other, causing their utilization to be low, right?

>> The case I'm seeing a lot is that a background thread does an I/O
>> request, blocks for a short period, and wakes up. All this happens
>> while the CPU frequency is low, but the wake-up causes a spike in
>> frequency. So over a period of time you see these spikes that don't
>> really help anything.
>
> So the background thread is doing some spurious IO but nothing
> consistent?

Yes, it's not a consistent pattern. It's actually a 'kworker' that woke
up to read/write something related to the video being played by the
YouTube app, and is asynchronous to the app itself. It could be writing
logs or other information. But this is definitely not a consistent
pattern as in the use case you described; it's just intermittent spikes.
The frequency boosts don't help the actual activity of playing the
video; they only increase power.

>> > I realize max freq/volt might not be the best option for you, but is
>> > there another spot that would make sense? I can imagine you want to
>> > return your MMC to low power state ASAP as well.
>> >
>> >
>> > So rather than a disable flag, I would really rather see an IO-wait OPP
>> > state selector or something.
>>
>> We never had this in older kernels and I don't think we ever had an
>> issue where I/O was slow because of CPU frequency. If a task is busy a
>> lot, then its load tracking signal should be high and take care of
>> keeping CPU frequency high right?
>
> As per the above, no. If the device completion takes long enough to
> inject enough idle time, the utilization signal will never be high
> enough to break out of that pattern.
>
>> If PELT is decaying the load
>> tracking of iowaiting tasks too much, then I think that it should be
>> fixed there (probably decay an iowaiting task's signal less?).
>
> For the above to work, we'd have to completely discard IO-wait time on
> the utilization signal. But that would then give the task u=1, which
> would be incorrect for placement decisions and wreck EAS.

Not completely discard but cap the decay of the signal during IO wait.

>
>> Considering
>> that it makes power worse on newer kernels, it'd probably be best to
>> disable it in my opinion for those who don't need it.
>
> You have yet to convince me you don't need it. Sure Android might not
> have many IO-heavy workloads, but that's not to say nothing on ARM ever
> does.
>
> Also note that if you set the boost OPP to the lowest OPP you
> effectively do disable it.
>
> Looking at the code, it appears we already have this in
> iowait_boost_max.

Currently it is set to:
 sg_cpu->iowait_boost_max = policy->cpuinfo.max_freq

Are you proposing to make this a sysfs tunable so we can override what
the iowait_boost_max value is?

thanks,

-Joel
Joel Fernandes June 10, 2017, 8:08 a.m. UTC | #6
Adding Juri and Patrick as well to share any thoughts. I replied to
Peter at the end of this email.

On Wed, May 24, 2017 at 1:17 PM, Joel Fernandes <joelaf@google.com> wrote:
> Hi Peter,
>
> On Mon, May 22, 2017 at 1:21 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> [..]
>>> On Fri, May 19, 2017 at 2:42 AM, Peter Zijlstra <peterz@infradead.org> wrote:
>>> > On Thu, May 18, 2017 at 11:23:43PM -0700, Joel Fernandes wrote:
>>> >> Make iowait boost a cpufreq policy option and enable it for intel_pstate
>>> >> cpufreq driver. Governors like schedutil can use it to determine if
>>> >> boosting for tasks that wake up with p->in_iowait set is needed.
>>> >
>>> > Rather than just flat out disabling the option, is there something
>>> > better we can do on ARM?
>>> >
>>> > The reason for the IO-wait boost is to ensure we feed our external
>>> > devices data ASAP; this reduces wait times, increases throughput and
>>> > decreases the duration the devices have to operate.
>>>
>>> Can you help me understand how CPU frequency can affect I/O? The ASAP
>>> makes me think of it as more of a latency thing than a throughput
>>> thing, in which case shouldn't there be a scheduling priority increase
>>> instead? Also, to me it sounds more like memory frequency, rather than
>>> CPU frequency, should be boosted so that DMA transfers happen quicker
>>> to feed devices data faster.
>>
>> Suppose your (I/O) device has the task waiting for a completion for 1ms
>> for each request. Further suppose that feeding it the next request takes
>> .1ms at full speed (1 GHz).
>>
>> Then we get, without contending tasks, a cycle of:
>>
>>
>>  R----------R----------                                 (1 GHz)
>>
>>
>> Which comes at 1/11-th utilization, which would then select something
>> like 100 MHz as being sufficient. But then the R part becomes 10x longer
>> and we end up with:
>>
>>
>>  RRRRRRRRRR----------RRRRRRRRRR----------               (100 MHz)
>>
>>
>> And since there's still plenty idle time, and the effective utilization
>> is still the same 1/11th, we'll not ramp up at all and continue in this
>> cycle.
>>
>> Note however that the total time of the cycle increased from 1.1ms
>> to 2ms: an ~80% increase in cycle time, i.e. a ~45% drop in throughput.
>
> Got it, thanks for the explanation.
>
>>> Are you trying to boost the CPU frequency so that a process waiting on
>>> I/O does its next set of processing quickly enough after iowaiting on
>>> the previous I/O transaction, and is ready to feed I/O the next time
>>> sooner?
>>
>> This. So we break the above pattern by boosting the task that wakes from
>> IO-wait. Its utilization will never be enough to cause a significant
>> bump in frequency on its own, as it's constantly blocked on the IO
>> device.
>
> It sounds like this problem can happen with any other use-case where
> one task blocks on another, not just IO. Take a case where two tasks
> running on different CPUs block on a mutex: either task can end up
> waiting on the other, causing their utilization to be low, right?
>
>>> The case I'm seeing a lot is that a background thread does an I/O
>>> request, blocks for a short period, and wakes up. All this happens
>>> while the CPU frequency is low, but the wake-up causes a spike in
>>> frequency. So over a period of time you see these spikes that don't
>>> really help anything.
>>
>> So the background thread is doing some spurious IO but nothing
>> consistent?
>
> Yes, it's not a consistent pattern. It's actually a 'kworker' that woke
> up to read/write something related to the video being played by the
> YouTube app, and is asynchronous to the app itself. It could be writing
> logs or other information. But this is definitely not a consistent
> pattern as in the use case you described; it's just intermittent spikes.
> The frequency boosts don't help the actual activity of playing the
> video; they only increase power.
>
>>> > I realize max freq/volt might not be the best option for you, but is
>>> > there another spot that would make sense? I can imagine you want to
>>> > return your MMC to low power state ASAP as well.
>>> >
>>> >
>>> > So rather than a disable flag, I would really rather see an IO-wait OPP
>>> > state selector or something.
>>>
>>> We never had this in older kernels and I don't think we ever had an
>>> issue where I/O was slow because of CPU frequency. If a task is busy a
>>> lot, then its load tracking signal should be high and take care of
>>> keeping CPU frequency high right?
>>
>> As per the above, no. If the device completion takes long enough to
>> inject enough idle time, the utilization signal will never be high
>> enough to break out of that pattern.
>>
>>> If PELT is decaying the load
>>> tracking of iowaiting tasks too much, then I think that it should be
>>> fixed there (probably decay an iowaiting task's signal less?).
>>
>> For the above to work, we'd have to completely discard IO-wait time on
>> the utilization signal. But that would then give the task u=1, which
>> would be incorrect for placement decisions and wreck EAS.
>
> Not completely discard but cap the decay of the signal during IO wait.
>
>>
>>> Considering
>>> that it makes power worse on newer kernels, it'd probably be best to
>>> disable it in my opinion for those who don't need it.
>>
>> You have yet to convince me you don't need it. Sure Android might not
>> have many IO-heavy workloads, but that's not to say nothing on ARM ever
>> does.
>>
>> Also note that if you set the boost OPP to the lowest OPP you
>> effectively do disable it.
>>
>> Looking at the code, it appears we already have this in
>> iowait_boost_max.
>
> Currently it is set to:
>  sg_cpu->iowait_boost_max = policy->cpuinfo.max_freq
>
> Are you proposing to make this a sysfs tunable so we can override what
> the iowait_boost_max value is?
>

Peter, I didn't hear back from you. Maybe my comment here did not make
much sense to you? That could be because I was confused about what you
meant by setting iowait_boost_max to 0. Currently, AFAIK, there isn't
an upstream way of doing this. Were you suggesting making
iowait_boost_max a tunable and setting it to 0?

Or did you mean to have us carry an out-of-tree patch that sets it to
0? One of the reasons I am pushing this patch is to not have to carry
an out-of-tree patch that disables it. Looking forward to your reply.

Thanks a lot,
Joel
Peter Zijlstra June 10, 2017, 1:56 p.m. UTC | #7
On Sat, Jun 10, 2017 at 01:08:18AM -0700, Joel Fernandes wrote:

> Adding Juri and Patrick as well to share any thoughts. I replied to
> Peter at the end of this email.

Oh sorry, I completely missed your earlier reply :-(

> On Wed, May 24, 2017 at 1:17 PM, Joel Fernandes <joelaf@google.com> wrote:
> > On Mon, May 22, 2017 at 1:21 AM, Peter Zijlstra <peterz@infradead.org> wrote:

> >> Suppose your (I/O) device has the task waiting for a completion for 1ms
> >> for each request. Further suppose that feeding it the next request takes
> >> .1ms at full speed (1 GHz).
> >>
> >> Then we get, without contending tasks, a cycle of:
> >>
> >>
> >>  R----------R----------                                 (1 GHz)
> >>
> >>
> >> Which comes at 1/11-th utilization, which would then select something
> >> like 100 MHz as being sufficient. But then the R part becomes 10x longer
> >> and we end up with:
> >>
> >>
> >>  RRRRRRRRRR----------RRRRRRRRRR----------               (100 MHz)
> >>
> >>
> >> And since there's still plenty idle time, and the effective utilization
> >> is still the same 1/11th, we'll not ramp up at all and continue in this
> >> cycle.
> >>
> >> Note however that the total time of the cycle increased from 1.1ms
> >> to 2ms: an ~80% increase in cycle time, i.e. a ~45% drop in throughput.
> >
> > Got it, thanks for the explanation.
> >
> >>> Are you trying to boost the CPU frequency so that a process waiting on
> >>> I/O does its next set of processing quickly enough after iowaiting on
> >>> the previous I/O transaction, and is ready to feed I/O the next time
> >>> sooner?
> >>
> >> This. So we break the above pattern by boosting the task that wakes from
> >> IO-wait. Its utilization will never be enough to cause a significant
> >> bump in frequency on its own, as it's constantly blocked on the IO
> >> device.
> >
> > It sounds like this problem can happen with any other use-case where
> > one task blocks on another, not just IO. Take a case where two tasks
> > running on different CPUs block on a mutex: either task can end up
> > waiting on the other, causing their utilization to be low, right?

No, with two tasks bouncing on a mutex this does not happen, because
both tasks are visible and consume time on the CPU. So if, for example,
task A blocks on task B, then B will still be running, and cpufreq
will still see B and provide it sufficient resources to keep running.
That is, if B is CPU bound, and we recognise it as such, it will get
the full CPU.

The difference with the IO is that the IO device is completely
invisible. This makes sense in that cpufreq cannot affect the device's
performance, but it does lead to the above issue.

> >>> The case I'm seeing a lot is that a background thread does an I/O
> >>> request, blocks for a short period, and wakes up. All this happens
> >>> while the CPU frequency is low, but the wake-up causes a spike in
> >>> frequency. So over a period of time you see these spikes that don't
> >>> really help anything.
> >>
> >> So the background thread is doing some spurious IO but nothing
> >> consistent?
> >
> > Yes, it's not a consistent pattern. It's actually a 'kworker' that woke
> > up to read/write something related to the video being played by the
> > YouTube app, and is asynchronous to the app itself. It could be writing
> > logs or other information. But this is definitely not a consistent
> > pattern as in the use case you described; it's just intermittent spikes.
> > The frequency boosts don't help the actual activity of playing the
> > video; they only increase power.

Right; so one thing we can try is to ramp up the boost, because
currently it's a bit asymmetric in that we'll instantly boost
to max and then slowly back off again.

If instead we need to 'earn' full boost by repeatedly blocking on IO,
this might sufficiently damp your spikes.
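
A minimal sketch of what such a ramp-up might look like in schedutil
(the doubling policy and the helper below are illustrative assumptions,
not an actual patch; the fields mirror struct sugov_cpu):

/*
 * Instead of jumping straight to iowait_boost_max, 'earn' the boost:
 * double it on every consecutive IO-wait wakeup, and halve it when a
 * wakeup arrives without SCHED_CPUFREQ_IOWAIT set.
 */
static void sugov_iowait_boost_ramp(struct sugov_cpu *sg_cpu,
				    unsigned int flags)
{
	unsigned long min_freq = sg_cpu->sg_policy->policy->min;

	if (flags & SCHED_CPUFREQ_IOWAIT) {
		if (sg_cpu->iowait_boost)
			sg_cpu->iowait_boost = min(sg_cpu->iowait_boost << 1,
						   sg_cpu->iowait_boost_max);
		else
			sg_cpu->iowait_boost = min_freq;
	} else if (sg_cpu->iowait_boost) {
		sg_cpu->iowait_boost >>= 1;
		if (sg_cpu->iowait_boost < min_freq)
			sg_cpu->iowait_boost = 0;
	}
}

That keeps a one-off wakeup at (or near) the minimum OPP, while a task
that genuinely cycles through IO-wait climbs to full boost within a few
iterations.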

> >> Also note that if you set the boost OPP to the lowest OPP you
> >> effectively do disable it.
> >>
> >> Looking at the code, it appears we already have this in
> >> iowait_boost_max.
> >
> > Currently it is set to:
> >  sg_cpu->iowait_boost_max = policy->cpuinfo.max_freq
> >
> > Are you proposing to make this a sysfs tunable so we can override what
> > the iowait_boost_max value is?

Not sysfs, but maybe cpufreq driver / platform. For example have it be
the OPP that provides the max Instructions per Watt.
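
As a sketch, that could be a per-policy cap the driver fills in at init
time (the iowait_boost_freq field and the 600 MHz value are hypothetical;
schedutil currently hardcodes iowait_boost_max = policy->cpuinfo.max_freq):

static int example_cpu_init(struct cpufreq_policy *policy)
{
	/*
	 * Hypothetical: cap the IO-wait boost at the OPP with the best
	 * instructions-per-watt on this platform instead of max_freq.
	 */
	policy->iowait_boost_freq = 600000;	/* kHz, made-up value */
	return 0;
}

with schedutil then using policy->iowait_boost_freq, when set, as its
iowait_boost_max.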

> Peter, I didn't hear back from you. Maybe my comment here did not make
> much sense to you?

Again sorry; I completely missed it :/

> That could be because I was confused about what you meant by setting
> iowait_boost_max to 0. Currently, AFAIK, there isn't an upstream way
> of doing this. Were you suggesting making iowait_boost_max a tunable
> and setting it to 0?

Tunable as in exposed to the driver, not userspace.

But I'm hoping an efficient OPP and the ramp-up together would be enough
for your case and also still work for our desktop/server loads.
Joel Fernandes June 11, 2017, 6:59 a.m. UTC | #8
Hi Peter,

On Sat, Jun 10, 2017 at 6:56 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Sat, Jun 10, 2017 at 01:08:18AM -0700, Joel Fernandes wrote:
>
>> Adding Juri and Patrick as well to share any thoughts. Replied to
>> Peter in the end of this email.
>
> Oh sorry, I completely missed your earlier reply :-(

No problem. I appreciate you taking time to reply, thanks.

>> >>> Are you trying to boost the CPU frequency so that a process waiting on
>> >>> I/O does its next set of processing quickly enough after iowaiting on
>> >>> the previous I/O transaction, and is ready to feed I/O the next time
>> >>> sooner?
>> >>
>> >> This. So we break the above pattern by boosting the task that wakes from
>> >> IO-wait. Its utilization will never be enough to cause a significant
>> >> bump in frequency on its own, as it's constantly blocked on the IO
>> >> device.
>> >
>> > It sounds like this problem can happen with any other use-case where
>> > one task blocks on another, not just IO. Take a case where two tasks
>> > running on different CPUs block on a mutex: either task can end up
>> > waiting on the other, causing their utilization to be low, right?
>
> No, with two tasks bouncing on a mutex this does not happen, because
> both tasks are visible and consume time on the CPU. So if, for example,
> task A blocks on task B, then B will still be running, and cpufreq
> will still see B and provide it sufficient resources to keep running.
> That is, if B is CPU bound, and we recognise it as such, it will get
> the full CPU.
>
> The difference with the IO is that the IO device is completely
> invisible. This makes sense in that cpufreq cannot affect the device's
> performance, but it does lead to the above issue.

But if tasks A and B are on different CPUs due to CPU affinity, these
CPUs are in different frequency domains, and the tasks are bouncing on
a mutex, then you would run into the same problem, right?

>> >>> The case I'm seeing a lot is that a background thread does an I/O
>> >>> request, blocks for a short period, and wakes up. All this happens
>> >>> while the CPU frequency is low, but the wake-up causes a spike in
>> >>> frequency. So over a period of time you see these spikes that don't
>> >>> really help anything.
>> >>
>> >> So the background thread is doing some spurious IO but nothing
>> >> consistent?
>> >
>> > Yes, it's not a consistent pattern. It's actually a 'kworker' that woke
>> > up to read/write something related to the video being played by the
>> > YouTube app, and is asynchronous to the app itself. It could be writing
>> > logs or other information. But this is definitely not a consistent
>> > pattern as in the use case you described; it's just intermittent spikes.
>> > The frequency boosts don't help the actual activity of playing the
>> > video; they only increase power.
>
> Right; so one thing we can try is to ramp up the boost, because
> currently it's a bit asymmetric in that we'll instantly boost
> to max and then slowly back off again.
>
> If instead we need to 'earn' full boost by repeatedly blocking on IO,
> this might sufficiently damp your spikes.

Cool, that sounds like a great idea.

>> >> Also note that if you set the boost OPP to the lowest OPP you
>> >> effectively do disable it.
>> >>
>> >> Looking at the code, it appears we already have this in
>> >> iowait_boost_max.
>> >
>> > Currently it is set to:
>> >  sg_cpu->iowait_boost_max = policy->cpuinfo.max_freq
>> >
>> > Are you proposing to make this a sysfs tunable so we can override what
>> > the iowait_boost_max value is?
>
> Not sysfs, but maybe cpufreq driver / platform. For example have it be
> the OPP that provides the max Instructions per Watt.
>
>> Peter, I didn't hear back from you. Maybe my comment here did not make
>> much sense to you?
>
> Again sorry; I completely missed it :/

No problem, thank you for replying. :)

>> That could be because I was confused about what you meant by setting
>> iowait_boost_max to 0. Currently, AFAIK, there isn't an upstream way
>> of doing this. Were you suggesting making iowait_boost_max a tunable
>> and setting it to 0?
>
> Tunable as in exposed to the driver, not userspace.

Got it.

> But I'm hoping an efficient OPP and the ramp-up together would be enough
> for your case and also still work for our desktop/server loads.

Ok. I am trying to repro this with a synthetic test and measure
throughput so that I have a predictable usecase.

I was also thinking of another approach: when a p->in_iowait task
wakes up, we don't decay its util_avg. Then we calculate the total
time it was blocked due to I/O and use that to correct the error
in the rq's util_avg (since the task's contribution to the rq util_avg
could have decayed while it was iowaiting). This will in a sense boost
the util_avg. Do you think that's a workable approach? That way, if
the task was waiting only briefly, the error to correct would be small
and we wouldn't just end up ramping to max frequency.

The other way to do it could be to not decay the rq's util_avg
while a task is waiting on I/O (maybe by checking rq->nr_iowait?).
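
Roughly, the first idea would amount to something like the sketch below
(the helper and the saved pre-block utilization are hypothetical; the
only real constant used is PELT's ~32ms half-life):

#define PELT_HALFLIFE_NS	(32 * NSEC_PER_MSEC)

/*
 * Utilization a task lost to PELT decay while blocked in iowait for
 * delta_ns; this is the error we would add back into the rq's
 * util_avg. (Whole half-life periods only; fractional decay ignored.)
 */
static unsigned long iowait_decay_error(unsigned long util_before_block,
					u64 delta_ns)
{
	unsigned long util_now = util_before_block;

	for (; delta_ns >= PELT_HALFLIFE_NS; delta_ns -= PELT_HALFLIFE_NS)
		util_now >>= 1;

	return util_before_block - util_now;
}

A brief wait then yields a small correction, and only a long wait
approaches boosting by the task's full pre-block utilization.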

Thanks,
Joel

Patch

diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index b7de5bd76a31..5dddc21da4f6 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -2239,6 +2239,7 @@  static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy)
 
 	policy->cpuinfo.transition_latency = INTEL_CPUFREQ_TRANSITION_LATENCY;
 	policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY;
+	policy->iowait_boost_enable = true;
 	/* This reflects the intel_pstate_get_cpu_pstates() setting. */
 	policy->cur = policy->cpuinfo.min_freq;
 
diff --git a/include/linux/cpufreq.h b/include/linux/cpufreq.h
index a5ce0bbeadb5..0783d8b52ec8 100644
--- a/include/linux/cpufreq.h
+++ b/include/linux/cpufreq.h
@@ -127,6 +127,9 @@  struct cpufreq_policy {
 	 */
 	unsigned int		transition_delay_us;
 
+	/* Boost switch for tasks with p->in_iowait set */
+	bool iowait_boost_enable;
+
 	 /* Cached frequency lookup from cpufreq_driver_resolve_freq. */
 	unsigned int cached_target_freq;
 	int cached_resolved_idx;
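
For context, patch 2/2 of the series (not shown here) is what would
consult this flag from schedutil. A sketch of such a check, modeled on
the 4.12-era sugov_set_iowait_boost() with a hypothetical opt-out test
added, might look like:

static void sugov_set_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
				   unsigned int flags)
{
	/* Hypothetical: skip boosting when the driver opted out. */
	if (!sg_cpu->sg_policy->policy->iowait_boost_enable)
		return;

	if (flags & SCHED_CPUFREQ_IOWAIT) {
		sg_cpu->iowait_boost = sg_cpu->iowait_boost_max;
	} else if (sg_cpu->iowait_boost) {
		s64 delta_ns = time - sg_cpu->last_update;

		/* Clear the boost if the CPU appears to have been idle. */
		if (delta_ns > TICK_NSEC)
			sg_cpu->iowait_boost = 0;
	}
}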