
[v8,00/10] Introduce OPP bandwidth bindings

Message ID 20200512125327.1868-1-georgi.djakov@linaro.org

Message

Georgi Djakov May 12, 2020, 12:53 p.m. UTC
Here is a proposal to extend the OPP bindings with bandwidth properties,
based on a few previous discussions [1] and patchsets from me [2][3]
and from Saravana [4][5][6][7][8][9].

Changes in v8:
* Addressed review comments from Matthias, Sibi and Viresh.
* Picked up Reviewed-by tags.
* Pulled Sibi's interconnect-tag patches into this patchset.

Changes in v7: https://lore.kernel.org/r/20200424155404.10746-1-georgi.djakov@linaro.org
* This version is a combination of both patchsets (Saravana's and mine),
based on [3] and [9].
* The latest version of DT bindings from Saravana is used here, with a
minor change of using arrays instead of single integers for opp-peak-kBps
and opp-avg-kBps. This is needed to support multiple interconnect paths.
* The concept of having multiple OPP tables per device has been dropped,
as it was NAKed by Viresh.
* Various review comments have been addressed and some patches have
been split; there are also some new patches. Thanks to Viresh, Sibi
and others for providing feedback!

With this version of the patchset, the CPU/GPU to DDR bandwidth scaling
will look like this in DT:

One interconnect path (no change from Saravana's v6 patches):

cpu@0 {
	operating-points-v2 = <&cpu_opp_table>;
	interconnects = <&noc1 MASTER1 &noc2 SLAVE1>;
};

cpu_opp_table: cpu_opp_table {
	compatible = "operating-points-v2";

	opp-800000000 {
		opp-hz = /bits/ 64 <800000000>;
		opp-peak-kBps = <1525000>;
		opp-avg-kBps = <457000>;
	};

	opp-998400000 {
		opp-hz = /bits/ 64 <998400000>;
		opp-peak-kBps = <7614000>;
		opp-avg-kBps = <2284000>;
	};
};

Two interconnect paths:

cpu@0 {
	operating-points-v2 = <&cpu_opp_table>;
	interconnects = <&noc1 MASTER1 &noc2 SLAVE1>,
			<&noc3 MASTER2 &noc4 SLAVE2>;
};

cpu_opp_table: cpu_opp_table {
	compatible = "operating-points-v2";

	opp-800000000 {
		opp-hz = /bits/ 64 <800000000>;
		opp-peak-kBps = <1525000 2000>;
		opp-avg-kBps = <457000 1000>;
	};

	opp-998400000 {
		opp-hz = /bits/ 64 <998400000>;
		opp-peak-kBps = <7614000 4000>;
		opp-avg-kBps = <2284000 2000>;
	};
};

------

Every functional block on a SoC can contribute to the system power
efficiency by expressing its own bandwidth needs (to memory or other SoC
modules). This will allow the system to save power when high throughput
is not required (and also provide maximum throughput when needed).

There are at least three ways for a device to determine its bandwidth
needs:
	1. The device can dynamically calculate the needed bandwidth
based on some known variable. For example: UART (baud rate), I2C (fast
mode, high-speed mode, etc.), USB (specification version, data transfer
type), SDHC (SD standard, clock rate, bus width), video encoder/decoder
(video format, resolution, frame rate). A sketch of this case follows
the list below.

	2. The device has a hardware-specific value. For example: a
constant dictated by the hardware itself (e.g. for a PRNG), or a
hard-coded, use-case-specific value.

	3. Predefined SoC/board-specific bandwidth values. For example:
CPU or GPU bandwidth is related to the current core frequency, and both
bandwidth and frequency are scaled together.
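
A minimal sketch of case 1 in C, using the existing interconnect
consumer API (icc_set_bw() and the kBps_to_icc() helper); the function
name and the bus-width math are illustrative assumptions, not part of
this series:

#include <linux/interconnect.h>

/*
 * Hypothetical example: derive the bandwidth request from known
 * operating parameters (an SDHC-style clock rate and bus width) and
 * hand it to the interconnect framework. The path would have been
 * acquired earlier, e.g. with of_icc_get(dev, "sdhc-mem").
 */
static int example_update_icc_bw(struct icc_path *path,
				 unsigned long clk_rate_hz,
				 unsigned int bus_width_bits)
{
	/* Peak throughput in kBps: Hz -> kHz, then bits -> bytes */
	u32 peak_kBps = clk_rate_hz / 1000 * bus_width_bits / 8;

	/* Request an average of half the peak, purely as an example */
	return icc_set_bw(path, kBps_to_icc(peak_kBps / 2),
			  kBps_to_icc(peak_kBps));
}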

This patchset addresses point 3 above by extending the OPP bindings to
support predefined SoC/board bandwidth values, and adds support in
cpufreq-dt for scaling the interconnect between the CPU and the DDR
together with frequency and voltage.
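
On the consumer side, the net effect (a sketch, not code taken from the
patches) is that once the OPP table carries opp-peak-kBps/opp-avg-kBps
and the device node has a matching interconnects property, a single
dev_pm_opp_set_rate() call scales clock, voltage and interconnect
bandwidth together:

#include <linux/pm_opp.h>

/*
 * Sketch only; the function name is illustrative. With bandwidth
 * values in the OPP table and an "interconnects" property on the
 * device node, the OPP core (as extended by this series) updates the
 * interconnect path(s) from within dev_pm_opp_set_rate(), so the
 * caller needs no extra bandwidth handling.
 */
static int example_scale_cpu(struct device *cpu_dev, unsigned long target_hz)
{
	return dev_pm_opp_set_rate(cpu_dev, target_hz);
}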

[1] https://patchwork.kernel.org/patch/10577315/
[2] https://lore.kernel.org/r/20190313090010.20534-1-georgi.djakov@linaro.org/
[3] https://lore.kernel.org/r/20190423132823.7915-1-georgi.djakov@linaro.org/
[4] https://lore.kernel.org/r/20190608044339.115026-1-saravanak@google.com
[5] https://lore.kernel.org/r/20190614041733.120807-1-saravanak@google.com
[6] https://lore.kernel.org/r/20190703011020.151615-1-saravanak@google.com
[7] https://lore.kernel.org/r/20190726231558.175130-1-saravanak@google.com
[8] https://lore.kernel.org/r/20190807223111.230846-1-saravanak@google.com
[9] https://lore.kernel.org/r/20191207002424.201796-1-saravanak@google.com

Georgi Djakov (6):
  interconnect: Add of_icc_get_by_index() helper function
  OPP: Add support for parsing interconnect bandwidth
  OPP: Add sanity checks in _read_opp_key()
  OPP: Update the bandwidth on OPP frequency changes
  cpufreq: dt: Add support for interconnect bandwidth scaling
  cpufreq: dt: Validate all interconnect paths

Saravana Kannan (2):
  dt-bindings: opp: Introduce opp-peak-kBps and opp-avg-kBps bindings
  OPP: Add helpers for reading the binding properties

Sibi Sankar (2):
  dt-bindings: interconnect: Add interconnect-tags bindings
  OPP: Add support for setting interconnect-tags

 .../bindings/interconnect/interconnect.txt    |   5 +
 Documentation/devicetree/bindings/opp/opp.txt |  17 +-
 .../devicetree/bindings/property-units.txt    |   4 +
 drivers/cpufreq/Kconfig                       |   1 +
 drivers/cpufreq/cpufreq-dt.c                  |  54 +++++
 drivers/interconnect/core.c                   |  72 +++++--
 drivers/opp/Kconfig                           |   1 +
 drivers/opp/core.c                            |  55 ++++-
 drivers/opp/of.c                              | 189 ++++++++++++++++--
 drivers/opp/opp.h                             |  10 +
 include/linux/interconnect.h                  |   6 +
 include/linux/pm_opp.h                        |  12 ++
 12 files changed, 380 insertions(+), 46 deletions(-)

Comments

Viresh Kumar May 13, 2020, 6:55 a.m. UTC | #1
On 12-05-20, 15:53, Georgi Djakov wrote:
> Here is a proposal to extend the OPP bindings with bandwidth properties,
> based on a few previous discussions [1] and patchsets from me [2][3]
> and from Saravana [4][5][6][7][8][9].
> 
> Changes in v8:
> * Addressed review comments from Matthias, Sibi and Viresh.
> * Picked up Reviewed-by tags.
> * Pulled Sibi's interconnect-tag patches into this patchset.

I have applied the series with the modifications I replied with
separately.

Please lemme know if any more tags (reviewed/acked) etc need to be
applied or any more changes are required before I send the pull
request to Rafael.

Please give my branch a try as soon as you can.

Thanks.
Georgi Djakov May 13, 2020, 10:10 a.m. UTC | #2
Hi Viresh,

On 5/13/20 09:55, Viresh Kumar wrote:
> On 12-05-20, 15:53, Georgi Djakov wrote:
>> Here is a proposal to extend the OPP bindings with bandwidth properties,
>> based on a few previous discussions [1] and patchsets from me [2][3]
>> and from Saravana [4][5][6][7][8][9].
>>
>> Changes in v8:
>> * Addressed review comments from Matthias, Sibi and Viresh.
>> * Picked up Reviewed-by tags.
>> * Pulled Sibi's interconnect-tag patches into this patchset.
> 
> I have applied the series with the modifications I replied with
> separately.

Thanks a lot!

> Please lemme know if any more tags (reviewed/acked) etc need to be
> applied or any more changes are required before I send the pull
> request to Rafael.
> 
> Please give my branch a try as soon as you can.

On top of your branch I tested scaling three different test paths (also
tagged with different tags) and it looks good:

 node                                  tag          avg         peak
--------------------------------------------------------------------
slv_ebi_ch0                                      458824      1525000
  cpu0                                   3          922          911
  cpu0                                   2          902          901
  cpu0                                   1       457000      1525000

Apart from that, I ran memory throughput tests and they also confirm
that it's working as expected.

There will be a minor conflict with my branch when this is merged upstream,
so maybe we will need to report it or use an immutable tag/branch.

Thanks,
Georgi
Viresh Kumar May 13, 2020, 10:18 a.m. UTC | #3
On 13-05-20, 13:10, Georgi Djakov wrote:
> There will be a minor conflict with my branch when this is merged upstream,
> so maybe we will need to report it or use an immutable tag/branch.

Okay, give me your branch and I will rebase over it then. Or if you
can do that over my branch, that would be better, as mine is just based
off rc2.
of rc2.