
[RFC] cpufreq: governor: Set MIN_LATENCY_MULTIPLIER to 20

Message ID 472e2bac1fc4a2589651beddbfaf6da53500d12e.1361871582.git.viresh.kumar@linaro.org (mailing list archive)
State RFC, archived

Commit Message

Viresh Kumar Feb. 26, 2013, 9:43 a.m. UTC
Currently MIN_LATENCY_MULTIPLIER is defined as 100, so on a system with a
transition latency of 1 ms the minimum sampling time comes out to around
100 ms. That is quite large if you want better performance from your system.

Redefine MIN_LATENCY_MULTIPLIER to 20 so that we can support a 20 ms sampling
rate for such platforms.
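
For reference, here is the arithmetic as a small standalone C sketch. It only
mirrors how the governor core multiplies the driver-reported transition
latency by MIN_LATENCY_MULTIPLIER; it is not the kernel code itself:

#include <stdio.h>

#define OLD_MIN_LATENCY_MULTIPLIER     100
#define NEW_MIN_LATENCY_MULTIPLIER      20

/* Minimum sampling rate (us) derived from the transition latency (ns). */
static unsigned int min_sampling_rate_us(unsigned int transition_latency_ns,
                                         unsigned int multiplier)
{
        unsigned int latency_us = transition_latency_ns / 1000;

        if (latency_us == 0)
                latency_us = 1;

        return multiplier * latency_us;
}

int main(void)
{
        unsigned int tc2_latency_ns = 1000 * 1000;      /* ~1 ms on ARM TC2 */

        printf("multiplier 100 -> min sampling rate %u us\n",
               min_sampling_rate_us(tc2_latency_ns, OLD_MIN_LATENCY_MULTIPLIER));
        printf("multiplier  20 -> min sampling rate %u us\n",
               min_sampling_rate_us(tc2_latency_ns, NEW_MIN_LATENCY_MULTIPLIER));
        return 0;
}

With a 1 ms transition latency this prints 100000 us (100 ms) for the current
constant and 20000 us (20 ms) for the proposed one.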

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---

Hi Guys,

I really don't know how this figure (100) was chosen initially, but we really
need 20 ms support for my platform: ARM TC2.

Pushed here:

http://git.linaro.org/gitweb?p=people/vireshk/linux.git;a=shortlog;h=refs/heads/cpufreq-fixes

 drivers/cpufreq/cpufreq_governor.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Thomas Renninger Feb. 26, 2013, 10:44 a.m. UTC | #1
On Tuesday, February 26, 2013 03:13:32 PM Viresh Kumar wrote:
> Currently MIN_LATENCY_MULTIPLIER is defined as 100, so on a system with a
> transition latency of 1 ms the minimum sampling time comes out to around
> 100 ms. That is quite large if you want better performance from your
> system.
> 
> Redefine MIN_LATENCY_MULTIPLIER to 20 so that we can support a 20 ms
> sampling rate for such platforms.

Redefining MIN_LATENCY_MULTIPLIER shouldn't hurt that much, but this looks
like a workaround.
It only modifies the minimum sampling rate that userspace can set.
You would still need to set something from userspace to get the perfect 
sampling rate for this platform.
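
In other words (a simplified sketch modelled on the ondemand sysfs store path
of this series -- illustrative, not the exact kernel code), whatever userspace
writes is clamped up to min_sampling_rate, so lowering MIN_LATENCY_MULTIPLIER
only lowers that floor; something still has to write the desired value:

/* Sketch: sysfs store handler in the governor; dbs_data, max() and
 * update_sampling_rate() come from the surrounding governor code. */
static ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
                                   size_t count)
{
        unsigned int input;

        if (sscanf(buf, "%u", &input) != 1)
                return -EINVAL;

        /* values below the computed floor are silently raised to it */
        update_sampling_rate(dbs_data, max(input, dbs_data->min_sampling_rate));
        return count;
}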

I wonder where the cpufreq driver gets the 1 ms latency from?
Is this value valid?
The driver should return the correct latency; then there is no need for
workarounds like this.

      Thomas
Viresh Kumar Feb. 26, 2013, 10:50 a.m. UTC | #2
On 26 February 2013 16:14, Thomas Renninger <trenn@suse.de> wrote:
> Redefining MIN_LATENCY_MULTIPLIER shouldn't hurt that much, but this looks
> like a workaround.
> It only modifies the minimum sampling rate that userspace can set.

Yes.

> You would still need to set something from userspace to get the perfect
> sampling rate for this platform.

Yes. We still need to set the sampling rate from userspace.

> I wonder where the cpufreq driver gets the 1 ms latency from?
> Is this value valid?
> The driver should return the correct latency; then there is no need for
> workarounds like this.

I am talking about the ARM Vexpress TC2 (Test Chip) big.LITTLE SoC here. It's
not a production SoC and frequency changes are a bit slow on it. It's really
around 1 ms :)

But real systems may not have latency this big.
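
For context, the figure comes from the driver itself: a hypothetical driver
init (not the actual TC2/vexpress code) reports the latency through
cpuinfo.transition_latency, and the governor core scales its sampling limits
from that:

static int tc2_like_cpufreq_init(struct cpufreq_policy *policy)
{
        /* frequency switches on this test chip take on the order of 1 ms */
        policy->cpuinfo.transition_latency = 1000000;   /* ns */

        /* frequency table setup, clock handles, etc. omitted */
        return 0;
}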

Anyway, how did you arrive at the value of 100 in your initial patch? What
motivated you to fix it there?

--
viresh
Thomas Renninger Feb. 26, 2013, 12:24 p.m. UTC | #3
On Tuesday, February 26, 2013 04:20:07 PM Viresh Kumar wrote:
> On 26 February 2013 16:14, Thomas Renninger <trenn@suse.de> wrote:
> > Redefining MIN_LATENCY_MULTIPLIER shouldn't hurt that much, but this looks
> > like a workaround.
> > It only modifies the minimum sampling rate that userspace can set.
> 
> Yes.
> 
> > You would still need to set something from userspace to get the perfect
> > sampling rate for this platform.
> 
> Yes. We still need to set the sampling rate from userspace.
> 
> > I wonder where the cpufreq driver gets the 1 ms latency from?
> > Is this value valid?
> > The driver should return the correct latency; then there is no need for
> > workarounds like this.
> 
> I am talking about the ARM Vexpress TC2 (Test Chip) big.LITTLE SoC here.
> It's not a production SoC and frequency changes are a bit slow on it. It's
> really around 1 ms :)
> 
> But real systems may not have latency this big.
> 
> Anyway, how did you arrive at the value of 100 in your initial patch? What
> motivated you to fix it there?
IIRC there were two things:
   - max latency does not make any sense at all, and it got reverted
   - min latency makes some sense -> it prevents a system from becoming
     unresponsive due to a too-small sampling rate forced by userspace.
     I reduced the min_sampling_rate calculation to be better able to find
     good sampling rate values by trying them out.

But I agree that it should not hurt to lower the MIN_LATENCY_MULTIPLIER.
In fact it doesn't change anything as long as userspace does not override
the sample factor, and the system should still not stall (this is what
min_sampling_rate tries to prevent).

So from what you describe above:
This patch makes sense, especially for testing and debugging early HW where
latency values might be big (or bogus). The developer can then still enforce
lower polling rates, but they still cannot be so low that userspace is able
to stall the system.

No objections from my side if this patch helps you to get further.

   Thomas

Patch

diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index d2ac911..adb8e30 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -34,7 +34,7 @@ 
  */
 #define MIN_SAMPLING_RATE_RATIO			(2)
 #define LATENCY_MULTIPLIER			(1000)
-#define MIN_LATENCY_MULTIPLIER			(100)
+#define MIN_LATENCY_MULTIPLIER			(20)
 #define TRANSITION_LATENCY_LIMIT		(10 * 1000 * 1000)
 
 /* Ondemand Sampling types */