
[v10,0/8] Introduce on-chip interconnect API

Message ID 20181127180349.29997-1-georgi.djakov@linaro.org (mailing list archive)

Message

Georgi Djakov Nov. 27, 2018, 6:03 p.m. UTC
Modern SoCs have multiple processors and various dedicated cores (video, GPU,
graphics, modem). These cores talk to each other and can generate a lot of
data flowing through the on-chip interconnects. These interconnect buses can
form different topologies such as crossbars, point-to-point buses,
hierarchical buses, or networks-on-chip.

These buses are usually sized to handle use cases with high data throughput,
but such throughput is not needed all the time, and running the buses at full
capacity consumes a lot of power. Furthermore, the priority between masters
can vary depending on the running use case, such as video playback or
CPU-intensive tasks.

Having an API to express the system's bandwidth and QoS requirements lets us
adapt the interconnect configuration to match them by scaling frequencies,
setting link priorities and tuning QoS parameters. This configuration can be
a static, one-time operation done at boot on some platforms, or a dynamic set
of operations that happen at run-time.

This patchset introduces a new API to gather the requirements and configure
the interconnect buses across the entire chipset to fit the current demand.
The API is NOT for changing the performance of the endpoint devices, but only
of the interconnect path between them.

The API uses a consumer/provider-based model, where the providers are the
interconnect buses and the consumers are various drivers. A consumer requests
an interconnect resource (a path) to an endpoint and sets the desired
constraints on this data flow path. The provider(s) receive requests from
consumers and aggregate these requests for all master-slave pairs on that
path. The providers then configure each node participating in the topology
according to the requested data flow path, physical links and constraints.
The topology can be complicated and multi-tiered and is SoC-specific.
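
For illustration, here is a rough consumer-side sketch. The call names follow
the consumer header added by this series (include/linux/interconnect.h), but
the exact signatures, the "dma-mem" path name and the bandwidth units are
assumptions made for the example, not a reference:

/*
 * Hypothetical consumer: request a path to memory and vote for bandwidth
 * before starting a transfer. Bandwidth values are assumed to be in kBps.
 */
#include <linux/interconnect.h>

static int foo_start_dma(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the path by the "dma-mem" name from the consumer DT node. */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Request 100 MB/s average and 200 MB/s peak bandwidth on the path. */
	ret = icc_set(path, 100000, 200000);
	if (ret) {
		icc_put(path);
		return ret;
	}

	/* ... perform the transfer ... */

	/* Drop the bandwidth request and release the path when done. */
	icc_set(path, 0, 0);
	icc_put(path);

	return 0;
}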

Below is a simplified diagram of a real-world SoC topology. The interconnect
providers are the NoCs.

+----------------+    +----------------+
| HW Accelerator |--->|      M NoC     |<---------------+
+----------------+    +----------------+                |
                        |      |                    +------------+
 +-----+  +-------------+      V       +------+     |            |
 | DDR |  |                +--------+  | PCIe |     |            |
 +-----+  |                | Slaves |  +------+     |            |
   ^ ^    |                +--------+     |         |   C NoC    |
   | |    V                               V         |            |
+------------------+   +------------------------+   |            |   +-----+
|                  |-->|                        |-->|            |-->| CPU |
|                  |-->|                        |<--|            |   +-----+
|     Mem NoC      |   |         S NoC          |   +------------+
|                  |<--|                        |---------+    |
|                  |<--|                        |<------+ |    |   +--------+
+------------------+   +------------------------+       | |    +-->| Slaves |
  ^  ^    ^    ^          ^                             | |        +--------+
  |  |    |    |          |                             | V
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
| CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
+------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
          |
      +-------+
      | Modem |
      +-------+
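
On the provider side, each NoC driver registers its nodes and links with the
framework and implements a set() callback that applies the aggregated
constraints to the hardware. A minimal sketch, assuming the helpers from
include/linux/interconnect-provider.h in this series; the node IDs, the
provider fields and the callback signature are illustrative assumptions, and
error handling is trimmed:

/*
 * Hypothetical provider: two nodes and one directed link between them.
 */
#include <linux/interconnect-provider.h>
#include <linux/platform_device.h>

#define MASTER_FOO	1
#define SLAVE_BAR	2

static int foo_icc_set(struct icc_node *src, struct icc_node *dst)
{
	/* Aggregate the requests on the nodes and program the NoC here. */
	return 0;
}

static int foo_icc_probe(struct platform_device *pdev)
{
	struct icc_provider *provider;
	struct icc_node *master, *slave;
	int ret;

	provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
	if (!provider)
		return -ENOMEM;

	provider->dev = &pdev->dev;
	provider->set = foo_icc_set;

	ret = icc_provider_add(provider);
	if (ret)
		return ret;

	master = icc_node_create(MASTER_FOO);
	slave = icc_node_create(SLAVE_BAR);

	icc_node_add(master, provider);
	icc_node_add(slave, provider);

	/* Directed link: MASTER_FOO -> SLAVE_BAR. */
	icc_link_create(master, SLAVE_BAR);

	return 0;
}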

TODO:
* Create icc_set_extended() to handle parameters such as latency and other
  QoS values. Nvidia and Qcom guys are interested in this.
* Cache the path between the nodes instead of walking the graph on each get().
* Sync interconnect requests with the idle state of the device.

Changes since patchset v9 (https://lkml.org/lkml/2018/8/31/444)
* Converted from using global node identifiers to local per provider ids.
* Dropped msm8916 platform driver until we figure out DT bindings.
* Included sdm845 platform driver instead.
* Added macros for converting mbps, gbps, etc. to icc units (see the sketch
  after this list).
* Added comments about aggregation, other minor changes.
* Fixed uninitialized variable. (Gustavo A. R. Silva)
* Removed set but not used variable. (YueHaibing)
* Fixed build error without DEBUGFS. (Arnd Bergmann)
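
As a rough illustration of what such unit-conversion helpers can look like,
assuming the framework accounts bandwidth in kBps (a sketch, not the exact
definitions from the patch):

#define kBps_to_icc(x)	(x)
#define MBps_to_icc(x)	((x) * 1000)
#define GBps_to_icc(x)	((x) * 1000 * 1000)
#define Mbps_to_icc(x)	((x) * 1000 / 8)
#define Gbps_to_icc(x)	((x) * 1000 * 1000 / 8)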

Changes since patchset v8 (https://lkml.org/lkml/2018/8/10/387)
* Fixed the names of the files when built as modules.
* Corrected some typos in comments.

Changes since patchset v7 (https://lkml.org/lkml/2018/7/31/647)
* Addressed comments on kernel-doc and grammar. (Randy)
* Picked Reviewed-by: Evan
* Squashed consumer and provider DT bindings into single patch. (Rob)
* Cleaned up msm8916 DT bindings docs by removing unused port IDs.
* Updated documentation for the cases when NULL is returned. (Saravana)
* New patch to add myself as maintainer.

Changes since patchset v6 (https://lkml.org/lkml/2018/7/9/698)
* [patches 1,6]: Move the aggregation within the provider from the framework to
  the platform driver's set() callback, as the aggregation point could be SoC
  specific.
* [patch 1]: Include missing header, reset state only of the traversed nodes,
  move more code into path_init(), add more asserts, move misplaced mutex,
  simplify icc_link_destroy() (Evan)
* [patch 1]: Fix the order of requests to go from source to destination. (Alex)
* [patch 7]: Use better wording in the documentation. (Evan)
* [patch 6]: Reorder struct members, sort nodes alphabetically, improve naming
  of variables, add missing clk_disable_unprepare() in error paths. (Matthias)
* [patch 6]: Remove redundant NULL pointer check in msm8916 driver. (Alex)
* [patch 6]: Add missing depend on QCOM_SMD_RPM in Kconfig. (Evan)
* [patch 3]: Don't check for errors on debugfs calls, remove debugfs directory
  when module is unloaded (Greg)

Changes since patchset v5 (https://lkml.org/lkml/2018/6/20/453)
* Fix the modular build, make rpm-smd driver a module.
* Optimize locking and move to higher level. (Evan)
* Code cleanups. Fix typos. (Evan, Matthias)
* Add the source node to the path. (Evan)
* Rename path_allocate() to path_init() with minor refactoring. (Evan)
* Rename *_remove() functions to *_destroy().
* Return fixed errors in icc_link_destroy(). (Evan)
* Fix krealloc() usage in icc_link_destroy(). (Evan)
* Add missing kfree() in icc_node_create(). (Matthias)
* Make icc_node_add() return void. (Matthias)
* Change mutex_init to mutex_lock in icc_provider_add(). (Matthias)
* Add new icc_node_del() function to delete nodes from provider.
* Fix the header guard to reflect the path in smd-rpm.h. (Evan)
* Check for errors returned by qcom_icc_rpm_smd_send(). (Evan)
* Propagate the error of icc_provider_del(). (Evan)

Changes since patchset v4 (https://lkml.org/lkml/2018/3/9/856)
* Simplified locking by using a single global mutex. (Evan)
* Changed the aggregation function interface.
* Implemented functions for node, link, provider removal. (Evan)
* Naming changes on variables and functions, removed redundant code. (Evan)
* Fixes and clarifications in the docs. (Matthias, Evan, Amit, Alexandre)
* Removed mandatory reg DT property, made interconnect-names optional. (Bjorn)
* Made the #interconnect-cells property required to align with other bindings. (Neil)
* Moved msm8916 specific bindings into a separate file and patch. (Bjorn)
* Use the names, instead of the hardcoded ids for topology. (Matthias)
* Init the node before creating the links. (Evan)
* Added icc_units_to_bps macro. (Amit)

Changes since patchset v3 (https://lkml.org/lkml/2017/9/8/544)
* Refactored the constraints aggregation.
* Use the IDR API.
* Split the provider and consumer bindings into separate patches and propose
  new bindings for consumers, which allow specifying the local source port.
* Adopted the icc_ prefix for API functions.
* Introduced separate API functions for creating interconnect nodes and links.
* Added DT lookup support in addition to platform data.
* Dropped the event tracing patch for now.
* Added a patch to provide summary via debugfs.
* Use macro for the list of topology definitions in the platform driver.
* Various minor changes.

Changes since patchset v2 (https://lkml.org/lkml/2017/7/20/825)
* Split the aggregation into per node and per provider. Cache the
  aggregated values.
* Various small refactorings and cleanups in the framework.
* Added a patch introducing basic tracepoint support for monitoring
  the time required to update the interconnect nodes.

Changes since patchset v1 (https://lkml.org/lkml/2017/6/27/890)
* Updates in the documentation.
* Changes in request aggregation, locking.
* Dropped the aggregate() callback and used the default, as it is currently
  sufficient for the single vendor driver. Will add it later when needed.
* Dropped the dt-bindings draft patch for now.

Changes since RFC v2 (https://lkml.org/lkml/2017/6/12/316)
* Converted documentation to rst format.
* Fixed an incorrect call to mutex_lock. Renamed max_bw to peak_bw.

Changes since RFC v1 (https://lkml.org/lkml/2017/5/15/605)
* Refactored code into shorter functions.
* Added a new aggregate() API function.
* Rearranged some structs to reduce padding bytes.

Changes since RFC v0 (https://lkml.org/lkml/2017/3/1/599)
* Removed DT support and added optional Patch 3 with a new bindings proposal.
* Converted the topology into internal driver data.
* Made the framework modular.
* interconnect_get() now takes src and dst ports as arguments.
* Removed public declarations of some structs.
* Now passing prev/next nodes to the vendor driver.
* Properly remove requests on _put().
* Added refcounting.
* Updated documentation.
* Changed struct interconnect_path to use array instead of linked list.

David Dai (2):
  interconnect: qcom: Add sdm845 interconnect provider driver
  arm64: dts: sdm845: Add interconnect provider DT nodes

Georgi Djakov (5):
  interconnect: Add generic on-chip interconnect API
  dt-bindings: Introduce interconnect binding
  interconnect: Allow endpoints translation via DT
  interconnect: Add debugfs support
  MAINTAINERS: add a maintainer for the interconnect API

 .../bindings/interconnect/interconnect.txt    |  60 ++
 .../bindings/interconnect/qcom,sdm845.txt     |  24 +
 Documentation/interconnect/interconnect.rst   |  94 ++
 MAINTAINERS                                   |  10 +
 arch/arm64/boot/dts/qcom/sdm845.dtsi          |   5 +
 drivers/Kconfig                               |   2 +
 drivers/Makefile                              |   1 +
 drivers/interconnect/Kconfig                  |  15 +
 drivers/interconnect/Makefile                 |   6 +
 drivers/interconnect/core.c                   | 796 +++++++++++++++++
 drivers/interconnect/qcom/Kconfig             |  13 +
 drivers/interconnect/qcom/Makefile            |   5 +
 drivers/interconnect/qcom/sdm845.c            | 836 ++++++++++++++++++
 .../dt-bindings/interconnect/qcom,sdm845.h    | 143 +++
 include/linux/interconnect-provider.h         | 142 +++
 include/linux/interconnect.h                  |  58 ++
 16 files changed, 2210 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
 create mode 100644 Documentation/devicetree/bindings/interconnect/qcom,sdm845.txt
 create mode 100644 Documentation/interconnect/interconnect.rst
 create mode 100644 drivers/interconnect/Kconfig
 create mode 100644 drivers/interconnect/Makefile
 create mode 100644 drivers/interconnect/core.c
 create mode 100644 drivers/interconnect/qcom/Kconfig
 create mode 100644 drivers/interconnect/qcom/Makefile
 create mode 100644 drivers/interconnect/qcom/sdm845.c
 create mode 100644 include/dt-bindings/interconnect/qcom,sdm845.h
 create mode 100644 include/linux/interconnect-provider.h
 create mode 100644 include/linux/interconnect.h

Comments

Evan Green Dec. 5, 2018, 8:41 p.m. UTC | #1
On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> graphics, modem). These cores are talking to each other and can generate a
> lot of data flowing through the on-chip interconnects. These interconnect
> buses could form different topologies such as crossbar, point to point buses,
> hierarchical buses or use the network-on-chip concept.
>
> These buses have been sized usually to handle use cases with high data
> throughput but it is not necessary all the time and consume a lot of power.
> Furthermore, the priority between masters can vary depending on the running
> use case like video playback or CPU intensive tasks.
>
> Having an API to control the requirement of the system in terms of bandwidth
> and QoS, so we can adapt the interconnect configuration to match those by
> scaling the frequencies, setting link priority and tuning QoS parameters.
> This configuration can be a static, one-time operation done at boot for some
> platforms or a dynamic set of operations that happen at run-time.
>
> This patchset introduce a new API to get the requirement and configure the
> interconnect buses across the entire chipset to fit with the current demand.
> The API is NOT for changing the performance of the endpoint devices, but only
> the interconnect path in between them.

For what it's worth, we are ready to land this in Chrome OS. I think
this series has been very well discussed and reviewed, hasn't changed
much in the last few spins, and is in good enough shape to use as a
base for future patches. Georgi's also done a great job reaching out
to other SoC vendors, and there appears to be enough consensus that
this framework will be usable by more than just Qualcomm. There are
also several drivers out on the list trying to add patches to use this
framework, with more to come, so it made sense (to us) to get this
base framework nailed down. In my experiments this is an important
piece of the overall power management story, especially on systems
that are mostly idle.

I'll continue to track changes to this series and we will ultimately
reconcile with whatever happens upstream, but I thought it was worth
sending this note to express our "thumbs up" towards this framework.

-Evan
Greg KH Dec. 6, 2018, 2:55 p.m. UTC | #2
On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> > graphics, modem). These cores are talking to each other and can generate a
> > lot of data flowing through the on-chip interconnects. These interconnect
> > buses could form different topologies such as crossbar, point to point buses,
> > hierarchical buses or use the network-on-chip concept.
> >
> > These buses have been sized usually to handle use cases with high data
> > throughput but it is not necessary all the time and consume a lot of power.
> > Furthermore, the priority between masters can vary depending on the running
> > use case like video playback or CPU intensive tasks.
> >
> > Having an API to control the requirement of the system in terms of bandwidth
> > and QoS, so we can adapt the interconnect configuration to match those by
> > scaling the frequencies, setting link priority and tuning QoS parameters.
> > This configuration can be a static, one-time operation done at boot for some
> > platforms or a dynamic set of operations that happen at run-time.
> >
> > This patchset introduce a new API to get the requirement and configure the
> > interconnect buses across the entire chipset to fit with the current demand.
> > The API is NOT for changing the performance of the endpoint devices, but only
> > the interconnect path in between them.
> 
> For what it's worth, we are ready to land this in Chrome OS. I think
> this series has been very well discussed and reviewed, hasn't changed
> much in the last few spins, and is in good enough shape to use as a
> base for future patches. Georgi's also done a great job reaching out
> to other SoC vendors, and there appears to be enough consensus that
> this framework will be usable by more than just Qualcomm. There are
> also several drivers out on the list trying to add patches to use this
> framework, with more to come, so it made sense (to us) to get this
> base framework nailed down. In my experiments this is an important
> piece of the overall power management story, especially on systems
> that are mostly idle.
> 
> I'll continue to track changes to this series and we will ultimately
> reconcile with whatever happens upstream, but I thought it was worth
> sending this note to express our "thumbs up" towards this framework.

Looks like a v11 will be forthcoming, so I'll wait for that one to apply
it to the tree if all looks good.

thanks,

greg k-h
Georgi Djakov Dec. 7, 2018, 10:06 a.m. UTC | #3
Hi Greg and Evan,

On 12/6/18 16:55, Greg KH wrote:
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>> graphics, modem). These cores are talking to each other and can generate a
>>> lot of data flowing through the on-chip interconnects. These interconnect
>>> buses could form different topologies such as crossbar, point to point buses,
>>> hierarchical buses or use the network-on-chip concept.
>>>
>>> These buses have been sized usually to handle use cases with high data
>>> throughput but it is not necessary all the time and consume a lot of power.
>>> Furthermore, the priority between masters can vary depending on the running
>>> use case like video playback or CPU intensive tasks.
>>>
>>> Having an API to control the requirement of the system in terms of bandwidth
>>> and QoS, so we can adapt the interconnect configuration to match those by
>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>> This configuration can be a static, one-time operation done at boot for some
>>> platforms or a dynamic set of operations that happen at run-time.
>>>
>>> This patchset introduce a new API to get the requirement and configure the
>>> interconnect buses across the entire chipset to fit with the current demand.
>>> The API is NOT for changing the performance of the endpoint devices, but only
>>> the interconnect path in between them.
>>
>> For what it's worth, we are ready to land this in Chrome OS. I think
>> this series has been very well discussed and reviewed, hasn't changed
>> much in the last few spins, and is in good enough shape to use as a
>> base for future patches. Georgi's also done a great job reaching out
>> to other SoC vendors, and there appears to be enough consensus that
>> this framework will be usable by more than just Qualcomm. There are
>> also several drivers out on the list trying to add patches to use this
>> framework, with more to come, so it made sense (to us) to get this
>> base framework nailed down. In my experiments this is an important
>> piece of the overall power management story, especially on systems
>> that are mostly idle.
>>
>> I'll continue to track changes to this series and we will ultimately
>> reconcile with whatever happens upstream, but I thought it was worth
>> sending this note to express our "thumbs up" towards this framework.
> 
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.
> 

Yes, it's coming. I will also include an additional fixup patch, as the
sdm845 provider driver will fail to build in linux-next, due to a recent
change in the cmd_db API.

Thanks,
Georgi
Rafael J. Wysocki Dec. 10, 2018, 9:04 a.m. UTC | #4
On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> > On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> > >
> > > Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> > > graphics, modem). These cores are talking to each other and can generate a
> > > lot of data flowing through the on-chip interconnects. These interconnect
> > > buses could form different topologies such as crossbar, point to point buses,
> > > hierarchical buses or use the network-on-chip concept.
> > >
> > > These buses have been sized usually to handle use cases with high data
> > > throughput but it is not necessary all the time and consume a lot of power.
> > > Furthermore, the priority between masters can vary depending on the running
> > > use case like video playback or CPU intensive tasks.
> > >
> > > Having an API to control the requirement of the system in terms of bandwidth
> > > and QoS, so we can adapt the interconnect configuration to match those by
> > > scaling the frequencies, setting link priority and tuning QoS parameters.
> > > This configuration can be a static, one-time operation done at boot for some
> > > platforms or a dynamic set of operations that happen at run-time.
> > >
> > > This patchset introduce a new API to get the requirement and configure the
> > > interconnect buses across the entire chipset to fit with the current demand.
> > > The API is NOT for changing the performance of the endpoint devices, but only
> > > the interconnect path in between them.
> >
> > For what it's worth, we are ready to land this in Chrome OS. I think
> > this series has been very well discussed and reviewed, hasn't changed
> > much in the last few spins, and is in good enough shape to use as a
> > base for future patches. Georgi's also done a great job reaching out
> > to other SoC vendors, and there appears to be enough consensus that
> > this framework will be usable by more than just Qualcomm. There are
> > also several drivers out on the list trying to add patches to use this
> > framework, with more to come, so it made sense (to us) to get this
> > base framework nailed down. In my experiments this is an important
> > piece of the overall power management story, especially on systems
> > that are mostly idle.
> >
> > I'll continue to track changes to this series and we will ultimately
> > reconcile with whatever happens upstream, but I thought it was worth
> > sending this note to express our "thumbs up" towards this framework.
>
> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> it to the tree if all looks good.

I'm honestly not sure if it is ready yet.

New versions are coming on and on, which may make such an impression,
but we had some discussion on it at the LPC and some serious questions
were asked during it, for instance regarding the DT binding introduced
here.  I'm not sure how this particular issue has been addressed here,
for example.

Thanks,
Rafael
Georgi Djakov Dec. 10, 2018, 10:18 a.m. UTC | #5
Hi Rafael,

On 12/10/18 11:04, Rafael J. Wysocki wrote:
> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>
>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>
>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>> graphics, modem). These cores are talking to each other and can generate a
>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>> buses could form different topologies such as crossbar, point to point buses,
>>>> hierarchical buses or use the network-on-chip concept.
>>>>
>>>> These buses have been sized usually to handle use cases with high data
>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>> Furthermore, the priority between masters can vary depending on the running
>>>> use case like video playback or CPU intensive tasks.
>>>>
>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>> This configuration can be a static, one-time operation done at boot for some
>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>
>>>> This patchset introduce a new API to get the requirement and configure the
>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>> the interconnect path in between them.
>>>
>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>> this series has been very well discussed and reviewed, hasn't changed
>>> much in the last few spins, and is in good enough shape to use as a
>>> base for future patches. Georgi's also done a great job reaching out
>>> to other SoC vendors, and there appears to be enough consensus that
>>> this framework will be usable by more than just Qualcomm. There are
>>> also several drivers out on the list trying to add patches to use this
>>> framework, with more to come, so it made sense (to us) to get this
>>> base framework nailed down. In my experiments this is an important
>>> piece of the overall power management story, especially on systems
>>> that are mostly idle.
>>>
>>> I'll continue to track changes to this series and we will ultimately
>>> reconcile with whatever happens upstream, but I thought it was worth
>>> sending this note to express our "thumbs up" towards this framework.
>>
>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>> it to the tree if all looks good.
> 
> I'm honestly not sure if it is ready yet.
> 
> New versions are coming on and on, which may make such an impression,
> but we had some discussion on it at the LPC and some serious questions
> were asked during it, for instance regarding the DT binding introduced
> here.  I'm not sure how this particular issue has been addressed here,
> for example.

There have been no changes in bindings since v4 (other than squashing
consumer and provider bindings into a single patch and fixing typos).

The last DT comment was on v9 [1] where Rob wanted confirmation from
other SoC vendors that this works for them too. And now we have that
confirmation and there are patches posted on the list [2].

The second thing (also discussed at LPC) was about possible cases where
some consumer drivers can't calculate how much bandwidth they actually
need and how to address that. The proposal was to extend the OPP
bindings with one more property, but this is not part of this patchset.
It is a future step that needs more discussion on the mailing list. If a
driver really needs some bandwidth data now, it should be put into the
driver and not in DT. After we have enough consumers, we can discuss
again if it makes sense to extract something into DT or not.

Thanks,
Georgi

[1] https://lkml.org/lkml/2018/9/25/939
[2] https://lkml.org/lkml/2018/11/28/12
Rafael J. Wysocki Dec. 10, 2018, 11 a.m. UTC | #6
On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Rafael,
>
> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> > On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>
> >> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>
> >>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>> graphics, modem). These cores are talking to each other and can generate a
> >>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>> buses could form different topologies such as crossbar, point to point buses,
> >>>> hierarchical buses or use the network-on-chip concept.
> >>>>
> >>>> These buses have been sized usually to handle use cases with high data
> >>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>> Furthermore, the priority between masters can vary depending on the running
> >>>> use case like video playback or CPU intensive tasks.
> >>>>
> >>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>> This configuration can be a static, one-time operation done at boot for some
> >>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>
> >>>> This patchset introduce a new API to get the requirement and configure the
> >>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>> the interconnect path in between them.
> >>>
> >>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>> this series has been very well discussed and reviewed, hasn't changed
> >>> much in the last few spins, and is in good enough shape to use as a
> >>> base for future patches. Georgi's also done a great job reaching out
> >>> to other SoC vendors, and there appears to be enough consensus that
> >>> this framework will be usable by more than just Qualcomm. There are
> >>> also several drivers out on the list trying to add patches to use this
> >>> framework, with more to come, so it made sense (to us) to get this
> >>> base framework nailed down. In my experiments this is an important
> >>> piece of the overall power management story, especially on systems
> >>> that are mostly idle.
> >>>
> >>> I'll continue to track changes to this series and we will ultimately
> >>> reconcile with whatever happens upstream, but I thought it was worth
> >>> sending this note to express our "thumbs up" towards this framework.
> >>
> >> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >> it to the tree if all looks good.
> >
> > I'm honestly not sure if it is ready yet.
> >
> > New versions are coming on and on, which may make such an impression,
> > but we had some discussion on it at the LPC and some serious questions
> > were asked during it, for instance regarding the DT binding introduced
> > here.  I'm not sure how this particular issue has been addressed here,
> > for example.
>
> There have been no changes in bindings since v4 (other than squashing
> consumer and provider bindings into a single patch and fixing typos).
>
> The last DT comment was on v9 [1] where Rob wanted confirmation from
> other SoC vendors that this works for them too. And now we have that
> confirmation and there are patches posted on the list [2].

OK

> The second thing (also discussed at LPC) was about possible cases where
> some consumer drivers can't calculate how much bandwidth they actually
> need and how to address that. The proposal was to extend the OPP
> bindings with one more property, but this is not part of this patchset.
> It is a future step that needs more discussion on the mailing list. If a
> driver really needs some bandwidth data now, it should be put into the
> driver and not in DT. After we have enough consumers, we can discuss
> again if it makes sense to extract something into DT or not.

That's fine by me.

Admittedly, I have some reservations regarding the extent to which
this approach will turn out to be useful in practice, but I guess as
long as there is enough traction, the best way to find out is to try
and see. :-)

From now on I will assume that this series is going to be applied by Greg.

Thanks,
Rafael
Georgi Djakov Dec. 10, 2018, 2:50 p.m. UTC | #7
On 12/10/18 13:00, Rafael J. Wysocki wrote:
> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> Hi Rafael,
>>
>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>
>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>
>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>
>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>
>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>
>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>> the interconnect path in between them.
>>>>>
>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>> also several drivers out on the list trying to add patches to use this
>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>> base framework nailed down. In my experiments this is an important
>>>>> piece of the overall power management story, especially on systems
>>>>> that are mostly idle.
>>>>>
>>>>> I'll continue to track changes to this series and we will ultimately
>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>
>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>> it to the tree if all looks good.
>>>
>>> I'm honestly not sure if it is ready yet.
>>>
>>> New versions are coming on and on, which may make such an impression,
>>> but we had some discussion on it at the LPC and some serious questions
>>> were asked during it, for instance regarding the DT binding introduced
>>> here.  I'm not sure how this particular issue has been addressed here,
>>> for example.
>>
>> There have been no changes in bindings since v4 (other than squashing
>> consumer and provider bindings into a single patch and fixing typos).
>>
>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>> other SoC vendors that this works for them too. And now we have that
>> confirmation and there are patches posted on the list [2].
> 
> OK
> 
>> The second thing (also discussed at LPC) was about possible cases where
>> some consumer drivers can't calculate how much bandwidth they actually
>> need and how to address that. The proposal was to extend the OPP
>> bindings with one more property, but this is not part of this patchset.
>> It is a future step that needs more discussion on the mailing list. If a
>> driver really needs some bandwidth data now, it should be put into the
>> driver and not in DT. After we have enough consumers, we can discuss
>> again if it makes sense to extract something into DT or not.
> 
> That's fine by me.
> 
> Admittedly, I have some reservations regarding the extent to which
> this approach will turn out to be useful in practice, but I guess as
> long as there is enough traction, the best way to find out it to try
> and see. :-)
> 
> From now on I will assume that this series is going to be applied by Greg.

That was the initial idea, but the problem is that there is a recent
change in the cmd_db API (needed by the sdm845 provider driver), which
is going through arm-soc/qcom/drivers. So either Greg pulls also the
qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
and Arnd. Maybe there are other options. I don't have any preference and
don't want to put extra burden on any maintainers, so I am OK with what
they prefer.

Thanks,
Georgi
Greg KH Dec. 11, 2018, 6:58 a.m. UTC | #8
On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
> On 12/10/18 13:00, Rafael J. Wysocki wrote:
> > On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>
> >> Hi Rafael,
> >>
> >> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> >>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>
> >>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>>
> >>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>>>> graphics, modem). These cores are talking to each other and can generate a
> >>>>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>>>> buses could form different topologies such as crossbar, point to point buses,
> >>>>>> hierarchical buses or use the network-on-chip concept.
> >>>>>>
> >>>>>> These buses have been sized usually to handle use cases with high data
> >>>>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>>>> Furthermore, the priority between masters can vary depending on the running
> >>>>>> use case like video playback or CPU intensive tasks.
> >>>>>>
> >>>>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>>>> This configuration can be a static, one-time operation done at boot for some
> >>>>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>>>
> >>>>>> This patchset introduce a new API to get the requirement and configure the
> >>>>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>>>> the interconnect path in between them.
> >>>>>
> >>>>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>>>> this series has been very well discussed and reviewed, hasn't changed
> >>>>> much in the last few spins, and is in good enough shape to use as a
> >>>>> base for future patches. Georgi's also done a great job reaching out
> >>>>> to other SoC vendors, and there appears to be enough consensus that
> >>>>> this framework will be usable by more than just Qualcomm. There are
> >>>>> also several drivers out on the list trying to add patches to use this
> >>>>> framework, with more to come, so it made sense (to us) to get this
> >>>>> base framework nailed down. In my experiments this is an important
> >>>>> piece of the overall power management story, especially on systems
> >>>>> that are mostly idle.
> >>>>>
> >>>>> I'll continue to track changes to this series and we will ultimately
> >>>>> reconcile with whatever happens upstream, but I thought it was worth
> >>>>> sending this note to express our "thumbs up" towards this framework.
> >>>>
> >>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >>>> it to the tree if all looks good.
> >>>
> >>> I'm honestly not sure if it is ready yet.
> >>>
> >>> New versions are coming on and on, which may make such an impression,
> >>> but we had some discussion on it at the LPC and some serious questions
> >>> were asked during it, for instance regarding the DT binding introduced
> >>> here.  I'm not sure how this particular issue has been addressed here,
> >>> for example.
> >>
> >> There have been no changes in bindings since v4 (other than squashing
> >> consumer and provider bindings into a single patch and fixing typos).
> >>
> >> The last DT comment was on v9 [1] where Rob wanted confirmation from
> >> other SoC vendors that this works for them too. And now we have that
> >> confirmation and there are patches posted on the list [2].
> > 
> > OK
> > 
> >> The second thing (also discussed at LPC) was about possible cases where
> >> some consumer drivers can't calculate how much bandwidth they actually
> >> need and how to address that. The proposal was to extend the OPP
> >> bindings with one more property, but this is not part of this patchset.
> >> It is a future step that needs more discussion on the mailing list. If a
> >> driver really needs some bandwidth data now, it should be put into the
> >> driver and not in DT. After we have enough consumers, we can discuss
> >> again if it makes sense to extract something into DT or not.
> > 
> > That's fine by me.
> > 
> > Admittedly, I have some reservations regarding the extent to which
> > this approach will turn out to be useful in practice, but I guess as
> > long as there is enough traction, the best way to find out it to try
> > and see. :-)
> > 
> > From now on I will assume that this series is going to be applied by Greg.
> 
> That was the initial idea, but the problem is that there is a recent
> change in the cmd_db API (needed by the sdm845 provider driver), which
> is going through arm-soc/qcom/drivers. So either Greg pulls also the
> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
> and Arnd. Maybe there are other options. I don't have any preference and
> don't want to put extra burden on any maintainers, so i am ok with what
> they prefer.

Let me take the time later this week to review the code, which I haven't
done in a while...

thanks,

greg k-h
Georgi Djakov Dec. 17, 2018, 11:17 a.m. UTC | #9
Hi Greg,

On 12/11/18 08:58, Greg Kroah-Hartman wrote:
> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>
>>>> Hi Rafael,
>>>>
>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>
>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>
>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>
>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>
>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>
>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>> the interconnect path in between them.
>>>>>>>
>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>> piece of the overall power management story, especially on systems
>>>>>>> that are mostly idle.
>>>>>>>
>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>
>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>> it to the tree if all looks good.
>>>>>
>>>>> I'm honestly not sure if it is ready yet.
>>>>>
>>>>> New versions are coming on and on, which may make such an impression,
>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>> for example.
>>>>
>>>> There have been no changes in bindings since v4 (other than squashing
>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>
>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>> other SoC vendors that this works for them too. And now we have that
>>>> confirmation and there are patches posted on the list [2].
>>>
>>> OK
>>>
>>>> The second thing (also discussed at LPC) was about possible cases where
>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>> need and how to address that. The proposal was to extend the OPP
>>>> bindings with one more property, but this is not part of this patchset.
>>>> It is a future step that needs more discussion on the mailing list. If a
>>>> driver really needs some bandwidth data now, it should be put into the
>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>> again if it makes sense to extract something into DT or not.
>>>
>>> That's fine by me.
>>>
>>> Admittedly, I have some reservations regarding the extent to which
>>> this approach will turn out to be useful in practice, but I guess as
>>> long as there is enough traction, the best way to find out it to try
>>> and see. :-)
>>>
>>> From now on I will assume that this series is going to be applied by Greg.
>>
>> That was the initial idea, but the problem is that there is a recent
>> change in the cmd_db API (needed by the sdm845 provider driver), which
>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>> and Arnd. Maybe there are other options. I don't have any preference and
>> don't want to put extra burden on any maintainers, so i am ok with what
>> they prefer.
> 
> Let me take the time later this week to review the code, which I haven't
> done in a while...
> 

When you get a chance to review, please keep in mind that the latest
version is v12 (from 08.Dec). The same is also available in linux-next
with no reported issues.

Thanks,
Georgi
Georgi Djakov Jan. 10, 2019, 2:19 p.m. UTC | #10
Hi Greg,

On 12/17/18 13:17, Georgi Djakov wrote:
> Hi Greg,
> 
> On 12/11/18 08:58, Greg Kroah-Hartman wrote:
>> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>
>>>>> Hi Rafael,
>>>>>
>>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>
>>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>>
>>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>>
>>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>>
>>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>>
>>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>>> the interconnect path in between them.
>>>>>>>>
>>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>>> piece of the overall power management story, especially on systems
>>>>>>>> that are mostly idle.
>>>>>>>>
>>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>>
>>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>>> it to the tree if all looks good.
>>>>>>
>>>>>> I'm honestly not sure if it is ready yet.
>>>>>>
>>>>>> New versions are coming on and on, which may make such an impression,
>>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>>> for example.
>>>>>
>>>>> There have been no changes in bindings since v4 (other than squashing
>>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>>
>>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>>> other SoC vendors that this works for them too. And now we have that
>>>>> confirmation and there are patches posted on the list [2].
>>>>
>>>> OK
>>>>
>>>>> The second thing (also discussed at LPC) was about possible cases where
>>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>>> need and how to address that. The proposal was to extend the OPP
>>>>> bindings with one more property, but this is not part of this patchset.
>>>>> It is a future step that needs more discussion on the mailing list. If a
>>>>> driver really needs some bandwidth data now, it should be put into the
>>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>>> again if it makes sense to extract something into DT or not.
>>>>
>>>> That's fine by me.
>>>>
>>>> Admittedly, I have some reservations regarding the extent to which
>>>> this approach will turn out to be useful in practice, but I guess as
>>>> long as there is enough traction, the best way to find out it to try
>>>> and see. :-)
>>>>
>>>> From now on I will assume that this series is going to be applied by Greg.
>>>
>>> That was the initial idea, but the problem is that there is a recent
>>> change in the cmd_db API (needed by the sdm845 provider driver), which
>>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>>> and Arnd. Maybe there are other options. I don't have any preference and
>>> don't want to put extra burden on any maintainers, so i am ok with what
>>> they prefer.
>>
>> Let me take the time later this week to review the code, which I haven't
>> done in a while...
>>
> 
> When you get a chance to review, please keep in mind that the latest
> version is v12 (from 08.Dec). The same is also available in linux-next
> with no reported issues.

The dependencies for this patchset have been already merged in v5.0-rc1,
so I was wondering if this can still go into -rc2? Various patches that
use this API are already posted and having it sooner will make dealing
with dependencies and merge paths a bit easier during the next merge
window. Or I can just rebase and resend everything targeting v5.1.

Thanks,
Georgi
Greg KH Jan. 10, 2019, 4:29 p.m. UTC | #11
On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
> Hi Greg,
> 
> On 12/17/18 13:17, Georgi Djakov wrote:
> > Hi Greg,
> > 
> > On 12/11/18 08:58, Greg Kroah-Hartman wrote:
> >> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
> >>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
> >>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>
> >>>>> Hi Rafael,
> >>>>>
> >>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
> >>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >>>>>>>
> >>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
> >>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>>>>>>>
> >>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
> >>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
> >>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
> >>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
> >>>>>>>>> hierarchical buses or use the network-on-chip concept.
> >>>>>>>>>
> >>>>>>>>> These buses have been sized usually to handle use cases with high data
> >>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
> >>>>>>>>> Furthermore, the priority between masters can vary depending on the running
> >>>>>>>>> use case like video playback or CPU intensive tasks.
> >>>>>>>>>
> >>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
> >>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
> >>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
> >>>>>>>>> This configuration can be a static, one-time operation done at boot for some
> >>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
> >>>>>>>>>
> >>>>>>>>> This patchset introduce a new API to get the requirement and configure the
> >>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
> >>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
> >>>>>>>>> the interconnect path in between them.
> >>>>>>>>
> >>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
> >>>>>>>> this series has been very well discussed and reviewed, hasn't changed
> >>>>>>>> much in the last few spins, and is in good enough shape to use as a
> >>>>>>>> base for future patches. Georgi's also done a great job reaching out
> >>>>>>>> to other SoC vendors, and there appears to be enough consensus that
> >>>>>>>> this framework will be usable by more than just Qualcomm. There are
> >>>>>>>> also several drivers out on the list trying to add patches to use this
> >>>>>>>> framework, with more to come, so it made sense (to us) to get this
> >>>>>>>> base framework nailed down. In my experiments this is an important
> >>>>>>>> piece of the overall power management story, especially on systems
> >>>>>>>> that are mostly idle.
> >>>>>>>>
> >>>>>>>> I'll continue to track changes to this series and we will ultimately
> >>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
> >>>>>>>> sending this note to express our "thumbs up" towards this framework.
> >>>>>>>
> >>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
> >>>>>>> it to the tree if all looks good.
> >>>>>>
> >>>>>> I'm honestly not sure if it is ready yet.
> >>>>>>
> >>>>>> New versions are coming on and on, which may make such an impression,
> >>>>>> but we had some discussion on it at the LPC and some serious questions
> >>>>>> were asked during it, for instance regarding the DT binding introduced
> >>>>>> here.  I'm not sure how this particular issue has been addressed here,
> >>>>>> for example.
> >>>>>
> >>>>> There have been no changes in bindings since v4 (other than squashing
> >>>>> consumer and provider bindings into a single patch and fixing typos).
> >>>>>
> >>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
> >>>>> other SoC vendors that this works for them too. And now we have that
> >>>>> confirmation and there are patches posted on the list [2].
> >>>>
> >>>> OK
> >>>>
> >>>>> The second thing (also discussed at LPC) was about possible cases where
> >>>>> some consumer drivers can't calculate how much bandwidth they actually
> >>>>> need and how to address that. The proposal was to extend the OPP
> >>>>> bindings with one more property, but this is not part of this patchset.
> >>>>> It is a future step that needs more discussion on the mailing list. If a
> >>>>> driver really needs some bandwidth data now, it should be put into the
> >>>>> driver and not in DT. After we have enough consumers, we can discuss
> >>>>> again if it makes sense to extract something into DT or not.
> >>>>
> >>>> That's fine by me.
> >>>>
> >>>> Admittedly, I have some reservations regarding the extent to which
> >>>> this approach will turn out to be useful in practice, but I guess as
> >>>> long as there is enough traction, the best way to find out it to try
> >>>> and see. :-)
> >>>>
> >>>> From now on I will assume that this series is going to be applied by Greg.
> >>>
> >>> That was the initial idea, but the problem is that there is a recent
> >>> change in the cmd_db API (needed by the sdm845 provider driver), which
> >>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
> >>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
> >>> and Arnd. Maybe there are other options. I don't have any preference and
> >>> don't want to put extra burden on any maintainers, so i am ok with what
> >>> they prefer.
> >>
> >> Let me take the time later this week to review the code, which I haven't
> >> done in a while...
> >>
> > 
> > When you get a chance to review, please keep in mind that the latest
> > version is v12 (from 08.Dec). The same is also available in linux-next
> > with no reported issues.
> 
> The dependencies for this patchset have already been merged in v5.0-rc1,
> so I was wondering if this can still go into -rc2? Various patches that
> use this API have already been posted, and having it in sooner would make
> dealing with dependencies and merge paths a bit easier during the next
> merge window. Or I can just rebase and resend everything targeting v5.1.

We can't add new features after -rc1, sorry.

Please rebase and resend to target 5.1

thanks,

greg k-h
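
To make the suggestion in the quoted discussion concrete (a consumer driver
that cannot derive its bandwidth needs can simply keep the numbers in the
driver rather than in DT), here is a minimal consumer-side sketch against the
API posted in this series. The device, the "dma-mem" path name and the
bandwidth values are hypothetical; the calls assumed are of_icc_get(),
icc_set_bw() and icc_put() as proposed in <linux/interconnect.h>.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/*
 * Hypothetical consumer: the driver knows its own bandwidth needs and
 * keeps them as constants instead of describing them in devicetree.
 * Values are in the framework's kBps units and are made up.
 */
#define EXAMPLE_AVG_BW_KBPS	1000000	/* ~1 GB/s average */
#define EXAMPLE_PEAK_BW_KBPS	2000000	/* ~2 GB/s peak */

static int example_request_bandwidth(struct device *dev)
{
	struct icc_path *path;
	int ret;

	/* Look up the (hypothetical) "dma-mem" path from the consumer binding. */
	path = of_icc_get(dev, "dma-mem");
	if (IS_ERR(path))
		return PTR_ERR(path);

	/* Ask the framework to guarantee average/peak bandwidth on the path. */
	ret = icc_set_bw(path, EXAMPLE_AVG_BW_KBPS, EXAMPLE_PEAK_BW_KBPS);
	if (ret)
		goto out;

	/* ... do the bandwidth-hungry work ... */

	/* Drop the request again before releasing the path. */
	ret = icc_set_bw(path, 0, 0);
out:
	icc_put(path);
	return ret;
}

Whether the two bandwidth numbers live in the driver, in platform data or
eventually in an extended OPP binding does not change this call sequence;
only the source of the values does.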
Georgi Djakov Jan. 10, 2019, 4:34 p.m. UTC | #12
On 1/10/19 18:29, Greg Kroah-Hartman wrote:
> On Thu, Jan 10, 2019 at 04:19:14PM +0200, Georgi Djakov wrote:
>> Hi Greg,
>>
>> On 12/17/18 13:17, Georgi Djakov wrote:
>>> Hi Greg,
>>>
>>> On 12/11/18 08:58, Greg Kroah-Hartman wrote:
>>>> On Mon, Dec 10, 2018 at 04:50:00PM +0200, Georgi Djakov wrote:
>>>>> On 12/10/18 13:00, Rafael J. Wysocki wrote:
>>>>>> On Mon, Dec 10, 2018 at 11:18 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>
>>>>>>> Hi Rafael,
>>>>>>>
>>>>>>> On 12/10/18 11:04, Rafael J. Wysocki wrote:
>>>>>>>> On Thu, Dec 6, 2018 at 3:55 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>>>>>>>>
>>>>>>>>> On Wed, Dec 05, 2018 at 12:41:35PM -0800, Evan Green wrote:
>>>>>>>>>> On Tue, Nov 27, 2018 at 10:03 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Modern SoCs have multiple processors and various dedicated cores (video, gpu,
>>>>>>>>>>> graphics, modem). These cores are talking to each other and can generate a
>>>>>>>>>>> lot of data flowing through the on-chip interconnects. These interconnect
>>>>>>>>>>> buses could form different topologies such as crossbar, point to point buses,
>>>>>>>>>>> hierarchical buses or use the network-on-chip concept.
>>>>>>>>>>>
>>>>>>>>>>> These buses have been sized usually to handle use cases with high data
>>>>>>>>>>> throughput but it is not necessary all the time and consume a lot of power.
>>>>>>>>>>> Furthermore, the priority between masters can vary depending on the running
>>>>>>>>>>> use case like video playback or CPU intensive tasks.
>>>>>>>>>>>
>>>>>>>>>>> Having an API to control the requirement of the system in terms of bandwidth
>>>>>>>>>>> and QoS, so we can adapt the interconnect configuration to match those by
>>>>>>>>>>> scaling the frequencies, setting link priority and tuning QoS parameters.
>>>>>>>>>>> This configuration can be a static, one-time operation done at boot for some
>>>>>>>>>>> platforms or a dynamic set of operations that happen at run-time.
>>>>>>>>>>>
>>>>>>>>>>> This patchset introduce a new API to get the requirement and configure the
>>>>>>>>>>> interconnect buses across the entire chipset to fit with the current demand.
>>>>>>>>>>> The API is NOT for changing the performance of the endpoint devices, but only
>>>>>>>>>>> the interconnect path in between them.
>>>>>>>>>>
>>>>>>>>>> For what it's worth, we are ready to land this in Chrome OS. I think
>>>>>>>>>> this series has been very well discussed and reviewed, hasn't changed
>>>>>>>>>> much in the last few spins, and is in good enough shape to use as a
>>>>>>>>>> base for future patches. Georgi's also done a great job reaching out
>>>>>>>>>> to other SoC vendors, and there appears to be enough consensus that
>>>>>>>>>> this framework will be usable by more than just Qualcomm. There are
>>>>>>>>>> also several drivers out on the list trying to add patches to use this
>>>>>>>>>> framework, with more to come, so it made sense (to us) to get this
>>>>>>>>>> base framework nailed down. In my experiments this is an important
>>>>>>>>>> piece of the overall power management story, especially on systems
>>>>>>>>>> that are mostly idle.
>>>>>>>>>>
>>>>>>>>>> I'll continue to track changes to this series and we will ultimately
>>>>>>>>>> reconcile with whatever happens upstream, but I thought it was worth
>>>>>>>>>> sending this note to express our "thumbs up" towards this framework.
>>>>>>>>>
>>>>>>>>> Looks like a v11 will be forthcoming, so I'll wait for that one to apply
>>>>>>>>> it to the tree if all looks good.
>>>>>>>>
>>>>>>>> I'm honestly not sure if it is ready yet.
>>>>>>>>
>>>>>>>> New versions are coming on and on, which may make such an impression,
>>>>>>>> but we had some discussion on it at the LPC and some serious questions
>>>>>>>> were asked during it, for instance regarding the DT binding introduced
>>>>>>>> here.  I'm not sure how this particular issue has been addressed here,
>>>>>>>> for example.
>>>>>>>
>>>>>>> There have been no changes in bindings since v4 (other than squashing
>>>>>>> consumer and provider bindings into a single patch and fixing typos).
>>>>>>>
>>>>>>> The last DT comment was on v9 [1] where Rob wanted confirmation from
>>>>>>> other SoC vendors that this works for them too. And now we have that
>>>>>>> confirmation and there are patches posted on the list [2].
>>>>>>
>>>>>> OK
>>>>>>
>>>>>>> The second thing (also discussed at LPC) was about possible cases where
>>>>>>> some consumer drivers can't calculate how much bandwidth they actually
>>>>>>> need and how to address that. The proposal was to extend the OPP
>>>>>>> bindings with one more property, but this is not part of this patchset.
>>>>>>> It is a future step that needs more discussion on the mailing list. If a
>>>>>>> driver really needs some bandwidth data now, it should be put into the
>>>>>>> driver and not in DT. After we have enough consumers, we can discuss
>>>>>>> again if it makes sense to extract something into DT or not.
>>>>>>
>>>>>> That's fine by me.
>>>>>>
>>>>>> Admittedly, I have some reservations regarding the extent to which
>>>>>> this approach will turn out to be useful in practice, but I guess as
>>>>>> long as there is enough traction, the best way to find out it to try
>>>>>> and see. :-)
>>>>>>
>>>>>> From now on I will assume that this series is going to be applied by Greg.
>>>>>
>>>>> That was the initial idea, but the problem is that there is a recent
>>>>> change in the cmd_db API (needed by the sdm845 provider driver), which
>>>>> is going through arm-soc/qcom/drivers. So either Greg pulls also the
>>>>> qcom-drivers-for-4.21 tag from Andy or the whole series goes via Olof
>>>>> and Arnd. Maybe there are other options. I don't have any preference and
>>>>> don't want to put extra burden on any maintainers, so i am ok with what
>>>>> they prefer.
>>>>
>>>> Let me take the time later this week to review the code, which I haven't
>>>> done in a while...
>>>>
>>>
>>> When you get a chance to review, please keep in mind that the latest
>>> version is v12 (from 08.Dec). The same is also available in linux-next
>>> with no reported issues.
>>
>> The dependencies for this patchset have already been merged in v5.0-rc1,
>> so I was wondering if this can still go into -rc2? Various patches that
>> use this API have already been posted, and having it in sooner would make
>> dealing with dependencies and merge paths a bit easier during the next
>> merge window. Or I can just rebase and resend everything targeting v5.1.
> 
> We can't add new features after -rc1, sorry.
> 
> Please rebase and resend to target 5.1

OK, I was expecting that. Thanks for confirming!

BR,
Georgi
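
For completeness, the provider side of the same framework (mentioned above
in connection with the sdm845 provider driver) can be sketched as follows,
using the provider-facing helpers as posted in the series: icc_provider_add(),
icc_node_create(), icc_node_add() and icc_link_create(). The node IDs, the
two-node topology and the callbacks are invented for illustration, the
aggregation shown (sum of averages, maximum of peaks) is only an assumption,
and error unwinding is abbreviated.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/interconnect-provider.h>
#include <linux/platform_device.h>

/* Hypothetical node IDs for a trivial master -> slave topology. */
#define EXAMPLE_MASTER	1
#define EXAMPLE_SLAVE	2

/* Apply the aggregated constraints to the hardware (stubbed out here). */
static int example_set(struct icc_node *src, struct icc_node *dst)
{
	/* Program clocks/QoS registers based on dst->avg_bw and dst->peak_bw. */
	return 0;
}

/* Combine the requests of all consumers that share a node. */
static int example_aggregate(struct icc_node *node, u32 avg_bw, u32 peak_bw,
			     u32 *agg_avg, u32 *agg_peak)
{
	*agg_avg += avg_bw;			/* sum of the average bandwidths */
	*agg_peak = max(*agg_peak, peak_bw);	/* highest peak request wins */
	return 0;
}

static int example_probe(struct platform_device *pdev)
{
	struct icc_provider *provider;
	struct icc_node *node;
	int ret;

	provider = devm_kzalloc(&pdev->dev, sizeof(*provider), GFP_KERNEL);
	if (!provider)
		return -ENOMEM;

	provider->dev = &pdev->dev;
	provider->set = example_set;
	provider->aggregate = example_aggregate;

	ret = icc_provider_add(provider);
	if (ret)
		return ret;

	/* Register the two nodes and the single link between them. */
	node = icc_node_create(EXAMPLE_MASTER);
	if (IS_ERR(node))
		return PTR_ERR(node);
	icc_node_add(node, provider);
	icc_link_create(node, EXAMPLE_SLAVE);

	node = icc_node_create(EXAMPLE_SLAVE);
	if (IS_ERR(node))
		return PTR_ERR(node);
	icc_node_add(node, provider);

	return 0;
}

A DT-based provider would additionally fill in provider->xlate (for example
the of_icc_xlate_onecell() helper from the series) and provider->data so that
consumer phandle arguments can be mapped onto these nodes.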