[v2] cpufreq/amd-pstate: Refactor max frequency calculation

Message ID 20241219192144.2744863-1-naresh.solanki@9elements.com (mailing list archive)
State Superseded, archived
Series [v2] cpufreq/amd-pstate: Refactor max frequency calculation

Commit Message

Naresh Solanki Dec. 19, 2024, 7:21 p.m. UTC
The previous approach introduced roundoff errors during division when
calculating the boost ratio. This, in turn, affected the maximum
frequency calculation, often resulting in reporting lower frequency
values.

For example, on the Glinda SoC based board with the following
parameters:

max_perf = 208
nominal_perf = 100
nominal_freq = 2600 MHz

The Linux kernel previously calculated the frequency as:
freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
freq = 5405 MHz  // Integer arithmetic.

With the updated formula:
freq = (max_perf * nominal_freq) / nominal_perf
freq = 5408 MHz

This change ensures more accurate frequency calculations by eliminating
unnecessary shifts and divisions, thereby improving precision.

Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>

Changes in V2:
1. Rebase on superm1.git/linux-next branch
---
 drivers/cpufreq/amd-pstate.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
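
For reference, a minimal standalone sketch (plain user-space C, not part of the
patch) that reproduces the rounding difference with the example values above:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t max_perf = 208, nominal_perf = 100, nominal_freq = 2600;

	/* Old scheme: scale up by 1024, divide, then scale back down. */
	uint64_t boost_ratio = (max_perf << 10) / nominal_perf;		/* 2129 */
	uint64_t old_freq = (nominal_freq * boost_ratio) >> 10;	/* 5405 MHz */

	/* New scheme: multiply first, divide once at the end. */
	uint64_t new_freq = (max_perf * nominal_freq) / nominal_perf;	/* 5408 MHz */

	printf("old=%llu MHz, new=%llu MHz\n",
	       (unsigned long long)old_freq, (unsigned long long)new_freq);
	return 0;
}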

Comments

Mario Limonciello Dec. 19, 2024, 7:32 p.m. UTC | #1
On 12/19/2024 13:21, Naresh Solanki wrote:
> The previous approach introduced roundoff errors during division when
> calculating the boost ratio. This, in turn, affected the maximum
> frequency calculation, often resulting in reporting lower frequency
> values.
> 
> For example, on the Glinda SoC based board with the following
> parameters:
> 
> max_perf = 208
> nominal_perf = 100
> nominal_freq = 2600 MHz
> 
> The Linux kernel previously calculated the frequency as:
> freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
> freq = 5405 MHz  // Integer arithmetic.
> 
> With the updated formula:
> freq = (max_perf * nominal_freq) / nominal_perf
> freq = 5408 MHz
> 
> This change ensures more accurate frequency calculations by eliminating
> unnecessary shifts and divisions, thereby improving precision.
> 
> Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>

Thanks, this makes sense to me.

But looking at it, we should have the same problem with lowest nonlinear 
freq as it goes through the same conversion process.  Would you mind 
fixing that one too?
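
For reference, an illustrative sketch of what the analogous lowest_nonlinear_freq
change could look like, mirroring the max_freq computation in this patch (this is
an assumption, not the posted follow-up):

	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
	lowest_nonlinear_freq = div_u64((u64)lowest_nonlinear_perf * nominal_freq,
					nominal_perf);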

Gautham, Perry,

Is there something non-obvious I'm missing about why it was done this 
way?  It looks like it's been there since the start.

> 
> Changes in V2:
> 1. Rebase on superm1.git/linux-next branch
> ---
>   drivers/cpufreq/amd-pstate.c | 9 ++++-----
>   1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index d7b1de97727a..02a851f93fd6 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>   {
>   	int ret;
>   	u32 min_freq, max_freq;
> -	u32 nominal_perf, nominal_freq;
> +	u32 highest_perf, nominal_perf, nominal_freq;
>   	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
> -	u32 boost_ratio, lowest_nonlinear_ratio;
> +	u32 lowest_nonlinear_ratio;
>   	struct cppc_perf_caps cppc_perf;
>   
>   	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>   	else
>   		nominal_freq = cppc_perf.nominal_freq;
>   
> +	highest_perf = READ_ONCE(cpudata->highest_perf);
>   	nominal_perf = READ_ONCE(cpudata->nominal_perf);
> -
> -	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
> -	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
> +	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
>   
>   	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
>   	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
Naresh Solanki Dec. 19, 2024, 8:10 p.m. UTC | #2
Hi Mario,

On Fri, 20 Dec 2024 at 01:02, Mario Limonciello
<mario.limonciello@amd.com> wrote:
>
> On 12/19/2024 13:21, Naresh Solanki wrote:
> > The previous approach introduced roundoff errors during division when
> > calculating the boost ratio. This, in turn, affected the maximum
> > frequency calculation, often resulting in reporting lower frequency
> > values.
> >
> > For example, on the Glinda SoC based board with the following
> > parameters:
> >
> > max_perf = 208
> > nominal_perf = 100
> > nominal_freq = 2600 MHz
> >
> > The Linux kernel previously calculated the frequency as:
> > freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
> > freq = 5405 MHz  // Integer arithmetic.
> >
> > With the updated formula:
> > freq = (max_perf * nominal_freq) / nominal_perf
> > freq = 5408 MHz
> >
> > This change ensures more accurate frequency calculations by eliminating
> > unnecessary shifts and divisions, thereby improving precision.
> >
> > Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
>
> Thanks, this makes sense to me.
>
> But looking at it, we should have the same problem with lowest nonlinear
> freq as it goes through the same conversion process.  Would you mind
> fixing that one too?
Sure. Somehow my eyes missed that.
Also observed that the current calculation yields zero for lowest_nonlinear_freq.
After fixing that, it reported frequencies of 2002 and 1404 depending on the core.

On a side note, I'm also observing that highest_perf is set to 196, which
should not be the case, as I do have cores with a value of 208.
It seems amd_get_boost_ratio_numerator needs some attention for that.

Regards,
Naresh
>
> Gautham, Perry,
>
> Is there something non-obvious I'm missing about why it was done this
> way?  It looks like it's been there since the start.
>
> >
> > Changes in V2:
> > 1. Rebase on superm1.git/linux-next branch
> > ---
> >   drivers/cpufreq/amd-pstate.c | 9 ++++-----
> >   1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> > index d7b1de97727a..02a851f93fd6 100644
> > --- a/drivers/cpufreq/amd-pstate.c
> > +++ b/drivers/cpufreq/amd-pstate.c
> > @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> >   {
> >       int ret;
> >       u32 min_freq, max_freq;
> > -     u32 nominal_perf, nominal_freq;
> > +     u32 highest_perf, nominal_perf, nominal_freq;
> >       u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
> > -     u32 boost_ratio, lowest_nonlinear_ratio;
> > +     u32 lowest_nonlinear_ratio;
> >       struct cppc_perf_caps cppc_perf;
> >
> >       ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> > @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> >       else
> >               nominal_freq = cppc_perf.nominal_freq;
> >
> > +     highest_perf = READ_ONCE(cpudata->highest_perf);
> >       nominal_perf = READ_ONCE(cpudata->nominal_perf);
> > -
> > -     boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
> > -     max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
> > +     max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
> >
> >       lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
> >       lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
>
Naresh Solanki Dec. 19, 2024, 8:15 p.m. UTC | #3
Hi Mario,

On Fri, 20 Dec 2024 at 01:40, Naresh Solanki
<naresh.solanki@9elements.com> wrote:
>
> Hi Mario,
>
> On Fri, 20 Dec 2024 at 01:02, Mario Limonciello
> <mario.limonciello@amd.com> wrote:
> >
> > On 12/19/2024 13:21, Naresh Solanki wrote:
> > > The previous approach introduced roundoff errors during division when
> > > calculating the boost ratio. This, in turn, affected the maximum
> > > frequency calculation, often resulting in reporting lower frequency
> > > values.
> > >
> > > For example, on the Glinda SoC based board with the following
> > > parameters:
> > >
> > > max_perf = 208
> > > nominal_perf = 100
> > > nominal_freq = 2600 MHz
> > >
> > > The Linux kernel previously calculated the frequency as:
> > > freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
> > > freq = 5405 MHz  // Integer arithmetic.
> > >
> > > With the updated formula:
> > > freq = (max_perf * nominal_freq) / nominal_perf
> > > freq = 5408 MHz
> > >
> > > This change ensures more accurate frequency calculations by eliminating
> > > unnecessary shifts and divisions, thereby improving precision.
> > >
> > > Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
> >
> > Thanks, this makes sense to me.
> >
> > But looking at it, we should have the same problem with lowest nonlinear
> > freq as it goes through the same conversion process.  Would you mind
> > fixing that one too?
> Sure. Somehow my eyes missed that.
> Also observed that the current calculation yields zero for lowest_nonlinear_freq.
Sorry, I was wrong. It's not zero; it's a rounded-off value.

> After fixing that, it reported frequencies of 2002 and 1404 depending on the core.
>
> On a side note, I'm also observing that highest_perf is set to 196, which
> should not be the case, as I do have cores with a value of 208.
> It seems amd_get_boost_ratio_numerator needs some attention for that.
>
> Regards,
> Naresh
> >
> > Gautham, Perry,
> >
> > Is there something non-obvious I'm missing about why it was done this
> > way?  It looks like it's been there since the start.
> >
> > >
> > > Changes in V2:
> > > 1. Rebase on superm1.git/linux-next branch
> > > ---
> > >   drivers/cpufreq/amd-pstate.c | 9 ++++-----
> > >   1 file changed, 4 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> > > index d7b1de97727a..02a851f93fd6 100644
> > > --- a/drivers/cpufreq/amd-pstate.c
> > > +++ b/drivers/cpufreq/amd-pstate.c
> > > @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> > >   {
> > >       int ret;
> > >       u32 min_freq, max_freq;
> > > -     u32 nominal_perf, nominal_freq;
> > > +     u32 highest_perf, nominal_perf, nominal_freq;
> > >       u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
> > > -     u32 boost_ratio, lowest_nonlinear_ratio;
> > > +     u32 lowest_nonlinear_ratio;
> > >       struct cppc_perf_caps cppc_perf;
> > >
> > >       ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> > > @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
> > >       else
> > >               nominal_freq = cppc_perf.nominal_freq;
> > >
> > > +     highest_perf = READ_ONCE(cpudata->highest_perf);
> > >       nominal_perf = READ_ONCE(cpudata->nominal_perf);
> > > -
> > > -     boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
> > > -     max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
> > > +     max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
> > >
> > >       lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
> > >       lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
> >
Mario Limonciello Dec. 19, 2024, 9:08 p.m. UTC | #4
On 12/19/2024 14:15, Naresh Solanki wrote:
> Hi Mario,
> 
> On Fri, 20 Dec 2024 at 01:40, Naresh Solanki
> <naresh.solanki@9elements.com> wrote:
>>
>> Hi Mario,
>>
>> On Fri, 20 Dec 2024 at 01:02, Mario Limonciello
>> <mario.limonciello@amd.com> wrote:
>>>
>>> On 12/19/2024 13:21, Naresh Solanki wrote:
>>>> The previous approach introduced roundoff errors during division when
>>>> calculating the boost ratio. This, in turn, affected the maximum
>>>> frequency calculation, often resulting in reporting lower frequency
>>>> values.
>>>>
>>>> For example, on the Glinda SoC based board with the following
>>>> parameters:
>>>>
>>>> max_perf = 208
>>>> nominal_perf = 100
>>>> nominal_freq = 2600 MHz
>>>>
>>>> The Linux kernel previously calculated the frequency as:
>>>> freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
>>>> freq = 5405 MHz  // Integer arithmetic.
>>>>
>>>> With the updated formula:
>>>> freq = (max_perf * nominal_freq) / nominal_perf
>>>> freq = 5408 MHz
>>>>
>>>> This change ensures more accurate frequency calculations by eliminating
>>>> unnecessary shifts and divisions, thereby improving precision.
>>>>
>>>> Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
>>>
>>> Thanks, this makes sense to me.
>>>
>>> But looking at it, we should have the same problem with lowest nonlinear
>>> freq as it goes through the same conversion process.  Would you mind
>>> fixing that one too?
>> Sure. Somehow my eyes missed that.
>> Also observed that the current calculation yields zero for lowest_nonlinear_freq.
> Sorry, I was wrong. It's not zero; it's a rounded-off value.
> 
>> After fixing that, it reported frequencies of 2002 and 1404 depending on the core.

Mmm, I wouldn't expect that.  Is your APU heterogeneous?  Or is this a 
BIOS bug?

Both with and without your v3 patch in place, can you send me the report 
generated from:
https://gitlab.freedesktop.org/drm/amd/-/blob/master/scripts/amd-pstate-triage.py

>>
>> On a side note, I'm also observing that highest_perf is set to 196, which
>> should not be the case, as I do have cores with a value of 208.
>> It seems amd_get_boost_ratio_numerator needs some attention for that.

Ah, this is something that is quite confusing about how AMD client CPUs 
behave.  There is a feature called "Preferred cores" that uses the 
highest performance value to indicate the ranking of each core.  This means 
that you can't use the value in this register to calculate the 
frequency; you have to use the pre-defined scaling factor.

The scaling factor is listed in arch/x86/kernel/acpi/cppc.c and the 
correct one is fetched for this calculation.

>>
>> Regards,
>> Naresh
>>>
>>> Gautham, Perry,
>>>
>>> Is there something non-obvious I'm missing about why it was done this
>>> way?  It looks like it's been there since the start.
>>>
>>>>
>>>> Changes in V2:
>>>> 1. Rebase on superm1.git/linux-next branch
>>>> ---
>>>>    drivers/cpufreq/amd-pstate.c | 9 ++++-----
>>>>    1 file changed, 4 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
>>>> index d7b1de97727a..02a851f93fd6 100644
>>>> --- a/drivers/cpufreq/amd-pstate.c
>>>> +++ b/drivers/cpufreq/amd-pstate.c
>>>> @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>>>>    {
>>>>        int ret;
>>>>        u32 min_freq, max_freq;
>>>> -     u32 nominal_perf, nominal_freq;
>>>> +     u32 highest_perf, nominal_perf, nominal_freq;
>>>>        u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
>>>> -     u32 boost_ratio, lowest_nonlinear_ratio;
>>>> +     u32 lowest_nonlinear_ratio;
>>>>        struct cppc_perf_caps cppc_perf;
>>>>
>>>>        ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
>>>> @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>>>>        else
>>>>                nominal_freq = cppc_perf.nominal_freq;
>>>>
>>>> +     highest_perf = READ_ONCE(cpudata->highest_perf);
>>>>        nominal_perf = READ_ONCE(cpudata->nominal_perf);
>>>> -
>>>> -     boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
>>>> -     max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
>>>> +     max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
>>>>
>>>>        lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
>>>>        lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
>>>
Gautham R. Shenoy Dec. 20, 2024, 6:16 a.m. UTC | #5
On Fri, Dec 20, 2024 at 12:51:43AM +0530, Naresh Solanki wrote:
> The previous approach introduced roundoff errors during division when
> calculating the boost ratio. This, in turn, affected the maximum
> frequency calculation, often resulting in reporting lower frequency
> values.
> 
> For example, on the Glinda SoC based board with the following
> parameters:
> 
> max_perf = 208
> nominal_perf = 100
> nominal_freq = 2600 MHz
> 
> The Linux kernel previously calculated the frequency as:
> freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
> freq = 5405 MHz  // Integer arithmetic.
> 
> With the updated formula:
> freq = (max_perf * nominal_freq) / nominal_perf
> freq = 5408 MHz
> 
> This change ensures more accurate frequency calculations by eliminating
> unnecessary shifts and divisions, thereby improving precision.
> 
> Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
> 
> Changes in V2:
> 1. Rebase on superm1.git/linux-next branch
> ---
>  drivers/cpufreq/amd-pstate.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index d7b1de97727a..02a851f93fd6 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>  {
>  	int ret;
>  	u32 min_freq, max_freq;
> -	u32 nominal_perf, nominal_freq;
> +	u32 highest_perf, nominal_perf, nominal_freq;
>  	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
> -	u32 boost_ratio, lowest_nonlinear_ratio;
> +	u32 lowest_nonlinear_ratio;
>  	struct cppc_perf_caps cppc_perf;
>  
>  	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>  	else
>  		nominal_freq = cppc_perf.nominal_freq;
>  
> +	highest_perf = READ_ONCE(cpudata->highest_perf);
>  	nominal_perf = READ_ONCE(cpudata->nominal_perf);
> -
> -	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
> -	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);


The patch looks obviously correct to me. And the suggested method
would work because nominal_freq is larger than nominal_perf, and
thus no extra fixed-point scaling is really necessary.

Besides, before this patch, there was another obvious issue: we
were computing the boost_ratio when we should have been computing the
ratio of nominal_freq to nominal_perf and then multiplying that by
max_perf, which avoids losing precision.

This is just one instance, but it can be generalized so that any 
freq --> perf and perf --> freq conversion can be computed without loss of precision.

We need two things:

1. The mult_factor should be computed as a ratio of nominal_freq and
nominal_perf (and vice versa) as they are always known.

2. Use DIV64_U64_ROUND_UP, which rounds up, instead of div_u64(), which rounds down.

So if we have the shifts defined as follows:

#define PERF_SHIFT   12UL //shift used for freq --> perf conversion
#define FREQ_SHIFT   10UL //shift used for perf --> freq conversion.

And in amd_pstate_init_freq() code, we initialize the two global variables:

u64 freq_mult_factor = DIV64_U64_ROUND_UP(nominal_freq  << FREQ_SHIFT, nominal_perf);
u64 perf_mult_factor = DIV64_U64_ROUND_UP(nominal_perf  << PERF_SHIFT, nominal_freq);

.. and have a couple of helper functions:

/* perf to freq conversion */
static inline unsigned int perf_to_freq(u32 perf)
{
	return (perf * freq_mult_factor) >> FREQ_SHIFT;
}


/* freq to perf conversion */
static inline unsigned int freq_to_perf(u32 freq)
{
	return (freq * perf_mult_factor) >> PERF_SHIFT;
}


> +	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);

Then,
        max_freq = perf_to_freq(highest_perf);
	min_freq = perf_to_freq(lowest_non_linear_perf);


and so on.

This should just work.
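
To sanity-check this proposal with the example values from the commit message,
here is a standalone user-space sketch (not driver code; div_round_up() stands
in for DIV64_U64_ROUND_UP, and the shifts are the ones proposed above):

#include <stdint.h>
#include <stdio.h>

#define FREQ_SHIFT 10UL		/* perf -> freq conversion */
#define PERF_SHIFT 12UL		/* freq -> perf conversion */

/* Round-up division, standing in for the kernel's DIV64_U64_ROUND_UP(). */
static uint64_t div_round_up(uint64_t n, uint64_t d)
{
	return (n + d - 1) / d;
}

int main(void)
{
	uint64_t nominal_freq = 2600, nominal_perf = 100, highest_perf = 208;

	uint64_t freq_mult_factor = div_round_up(nominal_freq << FREQ_SHIFT, nominal_perf);
	uint64_t perf_mult_factor = div_round_up(nominal_perf << PERF_SHIFT, nominal_freq);

	uint64_t max_freq = (highest_perf * freq_mult_factor) >> FREQ_SHIFT;
	uint64_t back_to_perf = (max_freq * perf_mult_factor) >> PERF_SHIFT;

	printf("max_freq=%llu MHz, back_to_perf=%llu\n",
	       (unsigned long long)max_freq, (unsigned long long)back_to_perf);
	return 0;
}

With these inputs it prints max_freq=5408 MHz and back_to_perf=208, i.e. the
perf -> freq -> perf round trip preserves the original perf value.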


>  
>  	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
>  	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
> -- 

--
Thanks and Regards
gautham.
Naresh Solanki Dec. 20, 2024, 10:09 a.m. UTC | #6
Hi Mario

Based on linux-next:
2024-12-20 11:02:53,078 INFO: 
Dhananjay Ugwekar Dec. 27, 2024, 5:49 a.m. UTC | #7
On 12/20/2024 11:46 AM, Gautham R. Shenoy wrote:
> On Fri, Dec 20, 2024 at 12:51:43AM +0530, Naresh Solanki wrote:
>> The previous approach introduced roundoff errors during division when
>> calculating the boost ratio. This, in turn, affected the maximum
>> frequency calculation, often resulting in reporting lower frequency
>> values.
>>
>> For example, on the Glinda SoC based board with the following
>> parameters:
>>
>> max_perf = 208
>> nominal_perf = 100
>> nominal_freq = 2600 MHz
>>
>> The Linux kernel previously calculated the frequency as:
>> freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
>> freq = 5405 MHz  // Integer arithmetic.
>>
>> With the updated formula:
>> freq = (max_perf * nominal_freq) / nominal_perf
>> freq = 5408 MHz
>>
>> This change ensures more accurate frequency calculations by eliminating
>> unnecessary shifts and divisions, thereby improving precision.
>>
>> Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
>>
>> Changes in V2:
>> 1. Rebase on superm1.git/linux-next branch
>> ---
>>  drivers/cpufreq/amd-pstate.c | 9 ++++-----
>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
>> index d7b1de97727a..02a851f93fd6 100644
>> --- a/drivers/cpufreq/amd-pstate.c
>> +++ b/drivers/cpufreq/amd-pstate.c
>> @@ -908,9 +908,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>>  {
>>  	int ret;
>>  	u32 min_freq, max_freq;
>> -	u32 nominal_perf, nominal_freq;
>> +	u32 highest_perf, nominal_perf, nominal_freq;
>>  	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
>> -	u32 boost_ratio, lowest_nonlinear_ratio;
>> +	u32 lowest_nonlinear_ratio;
>>  	struct cppc_perf_caps cppc_perf;
>>  
>>  	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
>> @@ -927,10 +927,9 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>>  	else
>>  		nominal_freq = cppc_perf.nominal_freq;
>>  
>> +	highest_perf = READ_ONCE(cpudata->highest_perf);
>>  	nominal_perf = READ_ONCE(cpudata->nominal_perf);
>> -
>> -	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
>> -	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
> 
> 
> The patch looks obviously correct to me. And the suggested method
> would work because nominal_freq is larger than the nominal_perf and
> thus scaling is really necessary.
> 
> Besides, before this patch, there was another obvious issue that we
> were computing the boost_ratio when we should have been computing the
> ratio of nominal_freq and nominal_perf and then multiplied this with
> max_perf without losing precision.
> 
> This is just one instance, but it can be generalized so that any 
> freq --> perf and perf --> freq can be computed without loss of precision.
> 
> We need two things:
> 
> 1. The mult_factor should be computed as a ratio of nominal_freq and
> nominal_perf (and vice versa) as they are always known.
> 
> 2. Use DIV64_U64_ROUND_UP instead of div64() which rounds up instead of rounding down.
> 
> So if we have the shifts defined as follows:
> 
> #define PERF_SHIFT   12UL //shift used for freq --> perf conversion
> #define FREQ_SHIFT   10UL //shift used for perf --> freq conversion.
> 
> And in amd_pstate_init_freq() code, we initialize the two global variables:
> 
> u64 freq_mult_factor = DIV64_U64_ROUND_UP(nominal_freq  << FREQ_SHIFT, nominal_perf);
> u64 perf_mult_factor = DIV64_U64_ROUND_UP(nominal_perf  << PERF_SHIFT, nominal_freq);

I like this approach, but can we assume the nominal freq/perf values to be the same for 
all CPUs? Otherwise we would need to make these factors per-CPU or per-domain (where 
all CPUs within a "domain" have the same nominal_freq/perf), at which point the benefit 
of caching these ratios might diminish.

Thoughts, Gautham, Mario?

Thanks,
Dhananjay
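
One possible shape of the per-CPU variant being discussed, as a rough sketch
(the struct and helper names below are hypothetical, not existing driver code):

/*
 * Hypothetical sketch: cache the two conversion factors per CPU (or per
 * domain) so that CPUs with different nominal_freq/nominal_perf values
 * still convert without precision loss.  Names are invented for
 * illustration only.
 */
#define FREQ_SHIFT	10UL	/* perf -> freq conversion */
#define PERF_SHIFT	12UL	/* freq -> perf conversion */

struct perf_freq_factors {
	u64 freq_mult_factor;	/* DIV64_U64_ROUND_UP(nominal_freq << FREQ_SHIFT, nominal_perf) */
	u64 perf_mult_factor;	/* DIV64_U64_ROUND_UP(nominal_perf << PERF_SHIFT, nominal_freq) */
};

static inline u32 perf_to_freq(const struct perf_freq_factors *f, u32 perf)
{
	return (u32)((perf * f->freq_mult_factor) >> FREQ_SHIFT);
}

static inline u32 freq_to_perf(const struct perf_freq_factors *f, u32 freq)
{
	return (u32)((freq * f->perf_mult_factor) >> PERF_SHIFT);
}

Each CPU (or domain) would compute its own factors once in
amd_pstate_init_freq(), trading a little per-CPU storage for keeping the
conversions cheap even when nominal values differ.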

> 
> .. and have a couple of helper functions:
> 
> /* perf to freq conversion */
> static inline unsigned int perf_to_freq(u32 perf)
> {
> 	return (perf * freq_mult_factor) >> FREQ_SHIFT;
> }
> 
> 
> /* freq to perf conversion */
> static inline unsigned int freq_to_perf(u32 freq)
> {
> 	return (freq * perf_mult_factor) >> PERF_SHIFT;
> }
> 
> 
>> +	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
> 
> Then,
>         max_freq = perf_to_freq(highest_perf);
> 	min_freq = perf_to_freq(lowest_non_linear_perf);
> 
> 
> and so on.
> 
> This should just work.
> 
> 
>>  
>>  	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
>>  	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
>> -- 
> 
> --
> Thanks and Regards
> gautham.

Patch

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index d7b1de97727a..02a851f93fd6 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -908,9 +908,9 @@  static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 {
 	int ret;
 	u32 min_freq, max_freq;
-	u32 nominal_perf, nominal_freq;
+	u32 highest_perf, nominal_perf, nominal_freq;
 	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
-	u32 boost_ratio, lowest_nonlinear_ratio;
+	u32 lowest_nonlinear_ratio;
 	struct cppc_perf_caps cppc_perf;
 
 	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
@@ -927,10 +927,9 @@  static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 	else
 		nominal_freq = cppc_perf.nominal_freq;
 
+	highest_perf = READ_ONCE(cpudata->highest_perf);
 	nominal_perf = READ_ONCE(cpudata->nominal_perf);
-
-	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
-	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
+	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
 
 	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
 	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,