
[v4,0/3] Introduce Bandwidth OPPs for interconnects

Message ID 20190726231558.175130-1-saravanak@google.com (mailing list archive)

Message

Saravana Kannan July 26, 2019, 11:15 p.m. UTC
Interconnects and interconnect paths quantify their performance levels in
terms of bandwidth and not in terms of frequency. So similar to how we have
frequency based OPP tables in DT and in the OPP framework, we need
bandwidth OPP table support in DT and in the OPP framework.

So with the DT bindings added in this patch series, the DT for a GPU
that does bandwidth voting from GPU to Cache and GPU to DDR would look
something like this:

gpu_cache_opp_table: gpu_cache_opp_table {
	compatible = "operating-points-v2";

	gpu_cache_3000: opp-3000 {
		opp-peak-KBps = <3000000>;
		opp-avg-KBps = <1000000>;
	};
	gpu_cache_6000: opp-6000 {
		opp-peak-KBps = <6000000>;
		opp-avg-KBps = <2000000>;
	};
	gpu_cache_9000: opp-9000 {
		opp-peak-KBps = <9000000>;
		opp-avg-KBps = <9000000>;
	};
};

gpu_ddr_opp_table: gpu_ddr_opp_table {
	compatible = "operating-points-v2";

	gpu_ddr_1525: opp-1525 {
		opp-peak-KBps = <1525000>;
		opp-avg-KBps = <452000>;
	};
	gpu_ddr_3051: opp-3051 {
		opp-peak-KBps = <3051000>;
		opp-avg-KBps = <915000>;
	};
	gpu_ddr_7500: opp-7500 {
		opp-peak-KBps = <7500000>;
		opp-avg-KBps = <3000000>;
	};
};

gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";
	opp-shared;

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
	};
};

gpu@7864000 {
	...
	operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
	...
};
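Later in the thread, required-opps comes up as the way to tie a device's frequency OPPs to these bandwidth OPPs. As an illustrative sketch only (the specific frequency-to-bandwidth pairings here are made up, and the exact linkage was still under discussion in this thread), the GPU frequency table could reference the bandwidth tables like this:

```dts
gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";
	opp-shared;

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
		/* Hypothetical pairing: lowest GPU freq needs the
		 * lowest cache and DDR bandwidth levels. */
		required-opps = <&gpu_cache_3000>, <&gpu_ddr_1525>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
		required-opps = <&gpu_cache_6000>, <&gpu_ddr_3051>;
	};
};
```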

v1 -> v3:
- Lots of patch additions that were later dropped
v3 -> v4:
- Fixed typo bugs pointed out by Sibi.
- Fixed bug that incorrectly reset rate to 0 all the time
- Added units documentation
- Dropped interconnect-opp-table property and related changes

Cheers,
Saravana

Saravana Kannan (3):
  dt-bindings: opp: Introduce opp-peak-KBps and opp-avg-KBps bindings
  OPP: Add support for bandwidth OPP tables
  OPP: Add helper function for bandwidth OPP tables

 Documentation/devicetree/bindings/opp/opp.txt | 15 ++++--
 .../devicetree/bindings/property-units.txt    |  4 ++
 drivers/opp/core.c                            | 51 +++++++++++++++++++
 drivers/opp/of.c                              | 41 +++++++++++----
 drivers/opp/opp.h                             |  4 +-
 include/linux/pm_opp.h                        | 19 +++++++
 6 files changed, 121 insertions(+), 13 deletions(-)

Comments

Viresh Kumar July 29, 2019, 9:35 a.m. UTC | #1
On 26-07-19, 16:15, Saravana Kannan wrote:
> Interconnects and interconnect paths quantify their performance levels in
> terms of bandwidth and not in terms of frequency. So similar to how we have
> frequency based OPP tables in DT and in the OPP framework, we need
> bandwidth OPP table support in DT and in the OPP framework.
> 
> So with the DT bindings added in this patch series, the DT for a GPU
> that does bandwidth voting from GPU to Cache and GPU to DDR would look
> something like this:
> 
> gpu_cache_opp_table: gpu_cache_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_cache_3000: opp-3000 {
> 		opp-peak-KBps = <3000000>;
> 		opp-avg-KBps = <1000000>;
> 	};
> 	gpu_cache_6000: opp-6000 {
> 		opp-peak-KBps = <6000000>;
> 		opp-avg-KBps = <2000000>;
> 	};
> 	gpu_cache_9000: opp-9000 {
> 		opp-peak-KBps = <9000000>;
> 		opp-avg-KBps = <9000000>;
> 	};
> };
> 
> gpu_ddr_opp_table: gpu_ddr_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_ddr_1525: opp-1525 {
> 		opp-peak-KBps = <1525000>;
> 		opp-avg-KBps = <452000>;
> 	};
> 	gpu_ddr_3051: opp-3051 {
> 		opp-peak-KBps = <3051000>;
> 		opp-avg-KBps = <915000>;
> 	};
> 	gpu_ddr_7500: opp-7500 {
> 		opp-peak-KBps = <7500000>;
> 		opp-avg-KBps = <3000000>;
> 	};
> };
> 
> gpu_opp_table: gpu_opp_table {
> 	compatible = "operating-points-v2";
> 	opp-shared;
> 
> 	opp-200000000 {
> 		opp-hz = /bits/ 64 <200000000>;
> 	};
> 	opp-400000000 {
> 		opp-hz = /bits/ 64 <400000000>;
> 	};
> };
> 
> gpu@7864000 {
> 	...
> 	operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
> 	...
> };

One bit of feedback I missed giving earlier. Will it be possible to get some
user code merged along with this? I want to make sure anything we add
ends up getting used.

That also helps in understanding the problems you are facing, i.e. with
real examples.
Saravana Kannan July 29, 2019, 8:16 p.m. UTC | #2
On Mon, Jul 29, 2019 at 2:35 AM Viresh Kumar <viresh.kumar@linaro.org> wrote:
>
> On 26-07-19, 16:15, Saravana Kannan wrote:
> > Interconnects and interconnect paths quantify their performance levels in
> > terms of bandwidth and not in terms of frequency. So similar to how we have
> > frequency based OPP tables in DT and in the OPP framework, we need
> > bandwidth OPP table support in DT and in the OPP framework.
> >
> > So with the DT bindings added in this patch series, the DT for a GPU
> > that does bandwidth voting from GPU to Cache and GPU to DDR would look
> > something like this:
> >
> > gpu_cache_opp_table: gpu_cache_opp_table {
> >       compatible = "operating-points-v2";
> >
> >       gpu_cache_3000: opp-3000 {
> >               opp-peak-KBps = <3000000>;
> >               opp-avg-KBps = <1000000>;
> >       };
> >       gpu_cache_6000: opp-6000 {
> >               opp-peak-KBps = <6000000>;
> >               opp-avg-KBps = <2000000>;
> >       };
> >       gpu_cache_9000: opp-9000 {
> >               opp-peak-KBps = <9000000>;
> >               opp-avg-KBps = <9000000>;
> >       };
> > };
> >
> > gpu_ddr_opp_table: gpu_ddr_opp_table {
> >       compatible = "operating-points-v2";
> >
> >       gpu_ddr_1525: opp-1525 {
> >               opp-peak-KBps = <1525000>;
> >               opp-avg-KBps = <452000>;
> >       };
> >       gpu_ddr_3051: opp-3051 {
> >               opp-peak-KBps = <3051000>;
> >               opp-avg-KBps = <915000>;
> >       };
> >       gpu_ddr_7500: opp-7500 {
> >               opp-peak-KBps = <7500000>;
> >               opp-avg-KBps = <3000000>;
> >       };
> > };
> >
> > gpu_opp_table: gpu_opp_table {
> >       compatible = "operating-points-v2";
> >       opp-shared;
> >
> >       opp-200000000 {
> >               opp-hz = /bits/ 64 <200000000>;
> >       };
> >       opp-400000000 {
> >               opp-hz = /bits/ 64 <400000000>;
> >       };
> > };
> >
> > gpu@7864000 {
> >       ...
> >       operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
> >       ...
> > };
>
> One bit of feedback I missed giving earlier. Will it be possible to get some
> user code merged along with this? I want to make sure anything we add
> ends up getting used.

Sibi might be working on doing that for the SDM845 CPUfreq driver.
Georgi could also change his GPU driver use case to use this BW OPP
table and required-opps.

The problem is that people don't want to start using this until we
decide on the DT representation. So it's like a chicken and egg
situation.

-Saravana
Viresh Kumar July 30, 2019, 2:46 a.m. UTC | #3
On 29-07-19, 13:16, Saravana Kannan wrote:
> Sibi might be working on doing that for the SDM845 CPUfreq driver.
> Georgi could also change his GPU driver use case to use this BW OPP
> table and required-opps.
> 
> The problem is that people don't want to start using this until we
> decide on the DT representation. So it's like a chicken and egg
> situation.

Yeah, I agree to that.

@Georgi and @Sibi: This is your chance to speak up about the proposal
from Saravana and if you find anything wrong with them. And specially
that it is mostly about interconnects here, I would like to have an
explicit Ack from Georgi on this.

And if you guys are all okay about this then please at least commit
that you will convert your stuff based on this in coming days.
Sibi Sankar July 30, 2019, 5:28 a.m. UTC | #4
Hey Viresh,

On 7/30/19 8:16 AM, Viresh Kumar wrote:
> On 29-07-19, 13:16, Saravana Kannan wrote:
>> Sibi might be working on doing that for the SDM845 CPUfreq driver.
>> Georgi could also change his GPU driver use case to use this BW OPP
>> table and required-opps.
>>
>> The problem is that people don't want to start using this until we
>> decide on the DT representation. So it's like a chicken and egg
>> situation.
> 
> Yeah, I agree to that.
> 
> @Georgi and @Sibi: This is your chance to speak up about the proposal
> from Saravana and if you find anything wrong with them. And specially
> that it is mostly about interconnects here, I would like to have an
> explicit Ack from Georgi on this.
> 
> And if you guys are all okay about this then please at least commit
> that you will convert your stuff based on this in coming days.

I've been using both Saravana's and Georgi's series for a while
now to scale DDR and L3 on SDM845. There is currently no consensus
as to where the votes are to be actuated from, hence couldn't post
anything out.

DCVS based on Saravana's series + passive governor:
https://github.com/QuinAsura/linux/tree/lnext-072619-SK-series

DCVS based on Georgi's series: (I had already posted this out)
https://github.com/QuinAsura/linux/tree/lnext-072619-GJ-series
Saravana Kannan July 30, 2019, 5:53 a.m. UTC | #5
On Mon, Jul 29, 2019 at 10:28 PM Sibi Sankar <sibis@codeaurora.org> wrote:
>
> Hey Viresh,
>
> On 7/30/19 8:16 AM, Viresh Kumar wrote:
> > On 29-07-19, 13:16, Saravana Kannan wrote:
> >> Sibi might be working on doing that for the SDM845 CPUfreq driver.
> >> Georgi could also change his GPU driver use case to use this BW OPP
> >> table and required-opps.
> >>
> >> The problem is that people don't want to start using this until we
> >> decide on the DT representation. So it's like a chicken and egg
> >> situation.
> >
> > Yeah, I agree to that.
> >
> > @Georgi and @Sibi: This is your chance to speak up about the proposal
> > from Saravana and if you find anything wrong with them. And specially
> > that it is mostly about interconnects here, I would like to have an
> > explicit Ack from Georgi on this.
> >
> > And if you guys are all okay about this then please at least commit
> > that you will convert your stuff based on this in coming days.
>
> I've been using both Saravana's and Georgi's series for a while
> now to scale DDR and L3 on SDM845. There is currently no consensus
> as to where the votes are to be actuated from, hence couldn't post
> anything out.
>
> DCVS based on Saravana's series + passive governor:
> https://github.com/QuinAsura/linux/tree/lnext-072619-SK-series

Thanks Sibi! You might want to convert your patches so that until the
passive governor is ready, you just look up the required opps and vote
for BW directly from the cpufreq driver. Once devfreq governor is
ready, you can switch to it.

-Saravana

>
> DCVS based on Georgi's series: (I had already posted this out)
> https://github.com/QuinAsura/linux/tree/lnext-072619-GJ-series
>
> --
> Qualcomm Innovation Center, Inc.
> Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project
Sibi Sankar July 30, 2019, 4:43 p.m. UTC | #6
On 7/30/19 11:23 AM, Saravana Kannan wrote:
> On Mon, Jul 29, 2019 at 10:28 PM Sibi Sankar <sibis@codeaurora.org> wrote:
>>
>> Hey Viresh,
>>
>> On 7/30/19 8:16 AM, Viresh Kumar wrote:
>>> On 29-07-19, 13:16, Saravana Kannan wrote:
>>>> Sibi might be working on doing that for the SDM845 CPUfreq driver.
>>>> Georgi could also change his GPU driver use case to use this BW OPP
>>>> table and required-opps.
>>>>
>>>> The problem is that people don't want to start using this until we
>>>> decide on the DT representation. So it's like a chicken and egg
>>>> situation.
>>>
>>> Yeah, I agree to that.
>>>
>>> @Georgi and @Sibi: This is your chance to speak up about the proposal
>>> from Saravana and if you find anything wrong with them. And specially
>>> that it is mostly about interconnects here, I would like to have an
>>> explicit Ack from Georgi on this.
>>>
>>> And if you guys are all okay about this then please at least commit
>>> that you will convert your stuff based on this in coming days.
>>
>> I've been using both Saravana's and Georgi's series for a while
>> now to scale DDR and L3 on SDM845. There is currently no consensus
>> as to where the votes are to be actuated from, hence couldn't post
>> anything out.
>>
>> DCVS based on Saravana's series + passive governor:
>> https://github.com/QuinAsura/linux/tree/lnext-072619-SK-series
> 
> Thanks Sibi! You might want to convert your patches so that until the
> passive governor is ready, you just look up the required opps and vote
> for BW directly from the cpufreq driver. Once devfreq governor is
> ready, you can switch to it.

Sure I'll do that.

> 
> -Saravana
> 
>>
>> DCVS based on Georgi's series: (I had already posted this out)
>> https://github.com/QuinAsura/linux/tree/lnext-072619-GJ-series
>>
>> --
>> Qualcomm Innovation Center, Inc.
>> Qualcomm Innovation Center, Inc, is a member of Code Aurora Forum,
>> a Linux Foundation Collaborative Project
Georgi Djakov Aug. 6, 2019, 3:27 p.m. UTC | #7
On 7/30/19 05:46, Viresh Kumar wrote:
> On 29-07-19, 13:16, Saravana Kannan wrote:
>> Sibi might be working on doing that for the SDM845 CPUfreq driver.
>> Georgi could also change his GPU driver use case to use this BW OPP
>> table and required-opps.
>>
>> The problem is that people don't want to start using this until we
>> decide on the DT representation. So it's like a chicken and egg
>> situation.
> 
> Yeah, I agree to that.
> 
> @Georgi and @Sibi: This is your chance to speak up about the proposal
> from Saravana and if you find anything wrong with them. And specially
> that it is mostly about interconnects here, I would like to have an
> explicit Ack from Georgi on this.
> 
> And if you guys are all okay about this then please at least commit
> that you will convert your stuff based on this in coming days.

Looks fine to me. I am already doing some testing with this patchset.
However, as Stephen already pointed out, we should s/KBps/kBps/.

Thanks,
Georgi