[v6,0/3] Introduce Bandwidth OPPs for interconnects

Message ID 20191207002424.201796-1-saravanak@google.com

Message

Saravana Kannan Dec. 7, 2019, 12:24 a.m. UTC
Viresh/Stephen,

I don't think all the additional code/diff in this v6 series is worth it
to avoid using the rate field to store peak bandwidth. However, since folks
weren't too happy about it, here it is. I prefer the v5 series, but I'm not
too strongly tied to it. Let me know what you think, Viresh/Stephen.

Btw, I wasn't sure if opp-hz = 0 or opp-level = 0 were allowed. Also,
it's not clear why the duplicate check isn't done for opp-level when
_opp_add() is called. Based on that, we could add opp-level comparison
to opp_compare_key(). That's why you'll see a few spurious
opp_key.level = 0 lines. Let me know how you want to go with that.

I could also add an opp.key_type enum field to store what key type the
OPP entry is. But it looks like I can get away without adding an
unnecessary variable. So, I've skipped that for now.
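
Purely as an illustration of that idea (this is not the patch code; the
struct layout and field names here are hypothetical), a key that carries
rate, peak bandwidth and level, plus a comparison helper covering all
three, could look something like this:

struct opp_key {
	unsigned long rate;	/* opp-hz */
	unsigned int peak_bw;	/* opp-peak-kBps */
	unsigned int level;	/* opp-level */
};

/* Returns <0, 0 or >0 so callers can both sort OPPs and detect duplicates. */
static int opp_compare_key(struct opp_key *key1, struct opp_key *key2)
{
	if (key1->rate != key2->rate)
		return key1->rate < key2->rate ? -1 : 1;
	if (key1->peak_bw != key2->peak_bw)
		return key1->peak_bw < key2->peak_bw ? -1 : 1;
	if (key1->level != key2->level)
		return key1->level < key2->level ? -1 : 1;
	return 0;
}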

------

Interconnects and interconnect paths quantify their performance levels in
terms of bandwidth, not frequency. So, similar to how we have
frequency-based OPP tables in DT and in the OPP framework, we need
bandwidth OPP table support in DT and in the OPP framework.

So with the DT bindings added in this patch series, the DT for a GPU
that does bandwidth voting from GPU to Cache and GPU to DDR would look
something like this:

gpu_cache_opp_table: gpu_cache_opp_table {
	compatible = "operating-points-v2";

	gpu_cache_3000: opp-3000 {
		opp-peak-kBps = <3000000>;
		opp-avg-kBps = <1000000>;
	};
	gpu_cache_6000: opp-6000 {
		opp-peak-kBps = <6000000>;
		opp-avg-kBps = <2000000>;
	};
	gpu_cache_9000: opp-9000 {
		opp-peak-kBps = <9000000>;
		opp-avg-kBps = <9000000>;
	};
};

gpu_ddr_opp_table: gpu_ddr_opp_table {
	compatible = "operating-points-v2";

	gpu_ddr_1525: opp-1525 {
		opp-peak-kBps = <1525000>;
		opp-avg-kBps = <452000>;
	};
	gpu_ddr_3051: opp-3051 {
		opp-peak-kBps = <3051000>;
		opp-avg-kBps = <915000>;
	};
	gpu_ddr_7500: opp-7500 {
		opp-peak-kBps = <7500000>;
		opp-avg-kBps = <3000000>;
	};
};

gpu_opp_table: gpu_opp_table {
	compatible = "operating-points-v2";
	opp-shared;

	opp-200000000 {
		opp-hz = /bits/ 64 <200000000>;
	};
	opp-400000000 {
		opp-hz = /bits/ 64 <400000000>;
	};
};

gpu@7864000 {
	...
	operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;
	...
};
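
For context, a consumer (e.g. the GPU driver above) would pick an entry
from one of these bandwidth tables and vote it on the corresponding
interconnect path. Below is a minimal sketch of that flow; it assumes a
lookup helper along the lines of what patch 3 adds (dev_pm_opp_find_bw_ceil()
is the name used in later mainline kernels, the helper in this series may
differ), and gpu_vote_ddr_bw() is just a made-up wrapper with minimal error
handling:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>
#include <linux/pm_opp.h>

/*
 * Find the lowest bandwidth OPP at or above the requested peak (in kBps)
 * and vote it on the GPU->DDR interconnect path.
 */
static int gpu_vote_ddr_bw(struct device *dev, struct icc_path *ddr_path,
			   unsigned int peak_kbps)
{
	struct dev_pm_opp *opp;

	opp = dev_pm_opp_find_bw_ceil(dev, &peak_kbps, 0);
	if (IS_ERR(opp))
		return PTR_ERR(opp);
	dev_pm_opp_put(opp);

	/* peak_kbps was updated to the matched OPP's opp-peak-kBps value. */
	return icc_set_bw(ddr_path, 0, peak_kbps);
}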

v1 -> v3:
- Lots of patch additions that were later dropped
v3 -> v4:
- Fixed typo bugs pointed out by Sibi.
- Fixed bug that incorrectly reset rate to 0 all the time
- Added units documentation
- Dropped interconnect-opp-table property and related changes
v4 -> v5:
- Replaced KBps with kBps
- Minor documentation fix
v5 -> v6:
- Added Rob's reviewed-by for the DT patch
- Rewrote OPP patches to use a separate field for peak_bw instead of
  reusing the rate field.
- Pulled in opp-level parsing into _read_opp_key
- Addressed minor code style and typo comments

Cheers,
Saravana

Saravana Kannan (3):
  dt-bindings: opp: Introduce opp-peak-kBps and opp-avg-kBps bindings
  OPP: Add support for bandwidth OPP tables
  OPP: Add helper function for bandwidth OPP tables

 Documentation/devicetree/bindings/opp/opp.txt |  15 +-
 .../devicetree/bindings/property-units.txt    |   4 +
 drivers/opp/core.c                            | 316 +++++++++++++++---
 drivers/opp/of.c                              |  63 ++--
 drivers/opp/opp.h                             |   5 +
 include/linux/pm_opp.h                        |  43 +++
 6 files changed, 383 insertions(+), 63 deletions(-)

Comments

Viresh Kumar Jan. 8, 2020, 11:25 a.m. UTC | #1
On 06-12-19, 16:24, Saravana Kannan wrote:
> Viresh/Stephen,
> 
> I don't think all the additional code/diff in this v6 series is worth it
> to avoid using the rate field to store peak bandwidth. However, since folks
> weren't too happy about it, here it is. I prefer the v5 series, but I'm not
> too strongly tied to it. Let me know what you think, Viresh/Stephen.
> 
> Btw, I wasn't sure if opp-hz = 0

I am not sure either ;)

> or opp-level = 0 were allowed. Also,

I think this is allowed.

> it's not clear why the duplicate check isn't done for opp-level when
> _opp_add() is called. Based on that, we could add opp-level comparison

This should be done. Please do that in the first patch as I suggested
in the code as well.

> to opp_compare_key(). That's why you'll see a few spurious
> opp_key.level = 0 lines. Let me know how you want to go with that.
> 
> I could also add an opp.key_type enum field to store what key type the
> OPP entry is. But it looks like I can get away without adding an
> unnecessary variable. So, I've skipped that for now.

Not in the OPP struct, but such an enum can be used for helper
routines as I commented.
Viresh Kumar Jan. 14, 2020, 10:34 a.m. UTC | #2
On 06-12-19, 16:24, Saravana Kannan wrote:
> gpu_cache_opp_table: gpu_cache_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_cache_3000: opp-3000 {
> 		opp-peak-kBps = <3000000>;
> 		opp-avg-kBps = <1000000>;
> 	};
> 	gpu_cache_6000: opp-6000 {
> 		opp-peak-kBps = <6000000>;
> 		opp-avg-kBps = <2000000>;
> 	};
> 	gpu_cache_9000: opp-9000 {
> 		opp-peak-kBps = <9000000>;
> 		opp-avg-kBps = <9000000>;
> 	};
> };
> 
> gpu_ddr_opp_table: gpu_ddr_opp_table {
> 	compatible = "operating-points-v2";
> 
> 	gpu_ddr_1525: opp-1525 {
> 		opp-peak-kBps = <1525000>;
> 		opp-avg-kBps = <452000>;
> 	};
> 	gpu_ddr_3051: opp-3051 {
> 		opp-peak-kBps = <3051000>;
> 		opp-avg-kBps = <915000>;
> 	};
> 	gpu_ddr_7500: opp-7500 {
> 		opp-peak-kBps = <7500000>;
> 		opp-avg-kBps = <3000000>;
> 	};
> };
> 
> gpu_opp_table: gpu_opp_table {
> 	compatible = "operating-points-v2";
> 	opp-shared;
> 
> 	opp-200000000 {
> 		opp-hz = /bits/ 64 <200000000>;
> 	};
> 	opp-400000000 {
> 		opp-hz = /bits/ 64 <400000000>;
> 	};
> };
> 
> gpu@7864000 {
> 	...
> 	operating-points-v2 = <&gpu_opp_table>, <&gpu_cache_opp_table>, <&gpu_ddr_opp_table>;

Okay, I got confused a bit again after some interaction with Sibi
today. The multiple phandle thing in the operating-points-v2 property
is there specifically for nodes that can provide multiple devices,
like PM domain providers that may end up providing multiple domains.

But I am not sure what you are going to do with the list of phandles
you have set for the GPU here.

We cannot add multiple OPP tables for a single device right now.