diff mbox series

[v9,2/8] dt-bindings: Introduce interconnect binding

Message ID 20180831140151.13972-3-georgi.djakov@linaro.org (mailing list archive)
State Not Applicable, archived
Delegated to: Andy Gross
Headers show
Series Introduce on-chip interconnect API | expand

Commit Message

Georgi Djakov Aug. 31, 2018, 2:01 p.m. UTC
This binding is intended to represent the relations between the interconnect
controllers (providers) and consumer device nodes. It will allow creating links
between consumers and interconnect paths (exposed by interconnect providers).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
 1 file changed, 60 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt

Comments

Rob Herring Sept. 25, 2018, 6:02 p.m. UTC | #1
On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> This binding is intended to represent the relations between the interconnect
> controllers (providers) and consumer device nodes. It will allow creating links
> between consumers and interconnect paths (exposed by interconnect providers).

As I mentioned in person, I want to see other SoC families using this 
before accepting. They don't have to be ready for upstream, but WIP 
patches or even just a "yes, this works for us and we're going to use 
this binding on X".

Also, I think the QCom GPU use of this should be fully sorted out. Or, 
more generically, how this fits into the OPP binding, which seems to be 
endlessly extended...

> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
>  1 file changed, 60 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
> 
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> new file mode 100644
> index 000000000000..5cb7d3c8d44d
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -0,0 +1,60 @@
> +Interconnect Provider Device Tree Bindings
> +=========================================
> +
> +The purpose of this document is to define a common set of generic interconnect
> +providers/consumers properties.
> +
> +
> += interconnect providers =
> +
> +The interconnect provider binding is intended to represent the interconnect
> +controllers in the system. Each provider registers a set of interconnect
> +nodes, which expose the interconnect related capabilities of the interconnect
> +to consumer drivers. These capabilities can be throughput, latency, priority
> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> +depending on the use case. Interconnect providers can also be interconnect
> +consumers, such as in the case where two network-on-chip fabrics interface
> +directly

missing '.'

> +
> +Required properties:
> +- compatible : contains the interconnect provider compatible string
> +- #interconnect-cells : number of cells in a interconnect specifier needed to
> +			encode the interconnect node id
> +
> +Example:
> +
> +		snoc: snoc@580000 {
> +			compatible = "qcom,msm8916-snoc";
> +			#interconnect-cells = <1>;
> +			reg = <0x580000 0x14000>;
> +			clock-names = "bus_clk", "bus_a_clk";
> +			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> +				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
> +		};
> +
> +
> += interconnect consumers =
> +
> +The interconnect consumers are device nodes which dynamically express their
> +bandwidth requirements along interconnect paths they are connected to. There
> +can be multiple interconnect providers on a SoC and the consumer may consume
> +multiple paths from different providers depending on use case and the
> +components it has to interact with.
> +
> +Required properties:
> +interconnects : Pairs of phandles and interconnect provider specifier to denote
> +	        the edge source and destination ports of the interconnect path.
> +
> +Optional properties:
> +interconnect-names : List of interconnect path name strings sorted in the same
> +		     order as the interconnects property. Consumers drivers will use
> +		     interconnect-names to match interconnect paths with interconnect
> +		     specifiers.

specifier pairs.

> +
> +Example:
> +
> +	sdhci@7864000 {
> +		...
> +		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> +		interconnect-names = "ddr";
> +	};
Jordan Crouse Sept. 26, 2018, 2:34 p.m. UTC | #2
On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> > This binding is intended to represent the relations between the interconnect
> > controllers (providers) and consumer device nodes. It will allow creating links
> > between consumers and interconnect paths (exposed by interconnect providers).
> 
> As I mentioned in person, I want to see other SoC families using this 
> before accepting. They don't have to be ready for upstream, but WIP 
> patches or even just a "yes, this works for us and we're going to use 
> this binding on X".
> 
> Also, I think the QCom GPU use of this should be fully sorted out. Or 
> more generically how this fits into OPP binding which seems to be never 
> ending extended...

This is a discussion I wouldn't mind having now.  To jog memories, this is what
I posted a few weeks ago:

https://patchwork.freedesktop.org/patch/246117/

This seems like the easiest way to me to tie the frequency and the bandwidth
quota together for GPU devfreq scaling, but I'm not married to the format and
I'll happily go a few rounds on the bikeshed if we can get something we can
be happy with.
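
To make the shape of that concrete, the idea is to attach a bandwidth
value to each GPU frequency OPP so that a devfreq transition can vote
frequency and interconnect bandwidth together. A rough sketch only; the
property name below is made up for illustration and is not the exact
format from the linked patch:

	/* Illustrative sketch: one bandwidth value per GPU frequency OPP,
	 * so devfreq can vote frequency and bus bandwidth in one step.
	 * "opp-interconnect-bw-MBps" is a hypothetical property name. */
	gpu_opp_table: gpu-opp-table {
		compatible = "operating-points-v2";

		opp-710000000 {
			opp-hz = /bits/ 64 <710000000>;
			opp-interconnect-bw-MBps = <7216>;
		};

		opp-342000000 {
			opp-hz = /bits/ 64 <342000000>;
			opp-interconnect-bw-MBps = <3072>;
		};
	};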

Jordan

> > Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> > ---
> >  .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
> >  1 file changed, 60 insertions(+)
> >  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
> > 
> > diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > new file mode 100644
> > index 000000000000..5cb7d3c8d44d
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> > @@ -0,0 +1,60 @@
> > +Interconnect Provider Device Tree Bindings
> > +=========================================
> > +
> > +The purpose of this document is to define a common set of generic interconnect
> > +providers/consumers properties.
> > +
> > +
> > += interconnect providers =
> > +
> > +The interconnect provider binding is intended to represent the interconnect
> > +controllers in the system. Each provider registers a set of interconnect
> > +nodes, which expose the interconnect related capabilities of the interconnect
> > +to consumer drivers. These capabilities can be throughput, latency, priority
> > +etc. The consumer drivers set constraints on interconnect path (or endpoints)
> > +depending on the use case. Interconnect providers can also be interconnect
> > +consumers, such as in the case where two network-on-chip fabrics interface
> > +directly
> 
> missing '.'
> 
> > +
> > +Required properties:
> > +- compatible : contains the interconnect provider compatible string
> > +- #interconnect-cells : number of cells in a interconnect specifier needed to
> > +			encode the interconnect node id
> > +
> > +Example:
> > +
> > +		snoc: snoc@580000 {
> > +			compatible = "qcom,msm8916-snoc";
> > +			#interconnect-cells = <1>;
> > +			reg = <0x580000 0x14000>;
> > +			clock-names = "bus_clk", "bus_a_clk";
> > +			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> > +				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
> > +		};
> > +
> > +
> > += interconnect consumers =
> > +
> > +The interconnect consumers are device nodes which dynamically express their
> > +bandwidth requirements along interconnect paths they are connected to. There
> > +can be multiple interconnect providers on a SoC and the consumer may consume
> > +multiple paths from different providers depending on use case and the
> > +components it has to interact with.
> > +
> > +Required properties:
> > +interconnects : Pairs of phandles and interconnect provider specifier to denote
> > +	        the edge source and destination ports of the interconnect path.
> > +
> > +Optional properties:
> > +interconnect-names : List of interconnect path name strings sorted in the same
> > +		     order as the interconnects property. Consumers drivers will use
> > +		     interconnect-names to match interconnect paths with interconnect
> > +		     specifiers.
> 
> specifier pairs.
> 
> > +
> > +Example:
> > +
> > +	sdhci@7864000 {
> > +		...
> > +		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
> > +		interconnect-names = "ddr";
> > +	};
Georgi Djakov Sept. 26, 2018, 2:42 p.m. UTC | #3
Hi Rob,

Thanks for the comments!

On 09/25/2018 09:02 PM, Rob Herring wrote:
> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>> This binding is intended to represent the relations between the interconnect
>> controllers (providers) and consumer device nodes. It will allow creating links
>> between consumers and interconnect paths (exposed by interconnect providers).
> 
> As I mentioned in person, I want to see other SoC families using this 
> before accepting. They don't have to be ready for upstream, but WIP 
> patches or even just a "yes, this works for us and we're going to use 
> this binding on X".

Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
currently using this binding, there is ongoing work from at least two
other vendors that would be using this same binding. I will check on
what their progress is so far.

> Also, I think the QCom GPU use of this should be fully sorted out. Or 
> more generically how this fits into OPP binding which seems to be never 
> ending extended...

I see this as a further step. It could be an OPP binding which includes
bandwidth values, or some separate DT property. Jordan has already
proposed something; do you have any initial comments on that?

BR,
Georgi

>> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
>> ---
>>  .../bindings/interconnect/interconnect.txt    | 60 +++++++++++++++++++
>>  1 file changed, 60 insertions(+)
>>  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>>
>> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>> new file mode 100644
>> index 000000000000..5cb7d3c8d44d
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
>> @@ -0,0 +1,60 @@
>> +Interconnect Provider Device Tree Bindings
>> +=========================================
>> +
>> +The purpose of this document is to define a common set of generic interconnect
>> +providers/consumers properties.
>> +
>> +
>> += interconnect providers =
>> +
>> +The interconnect provider binding is intended to represent the interconnect
>> +controllers in the system. Each provider registers a set of interconnect
>> +nodes, which expose the interconnect related capabilities of the interconnect
>> +to consumer drivers. These capabilities can be throughput, latency, priority
>> +etc. The consumer drivers set constraints on interconnect path (or endpoints)
>> +depending on the use case. Interconnect providers can also be interconnect
>> +consumers, such as in the case where two network-on-chip fabrics interface
>> +directly
> 
> missing '.'
> 
>> +
>> +Required properties:
>> +- compatible : contains the interconnect provider compatible string
>> +- #interconnect-cells : number of cells in a interconnect specifier needed to
>> +			encode the interconnect node id
>> +
>> +Example:
>> +
>> +		snoc: snoc@580000 {
>> +			compatible = "qcom,msm8916-snoc";
>> +			#interconnect-cells = <1>;
>> +			reg = <0x580000 0x14000>;
>> +			clock-names = "bus_clk", "bus_a_clk";
>> +			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
>> +				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
>> +		};
>> +
>> +
>> += interconnect consumers =
>> +
>> +The interconnect consumers are device nodes which dynamically express their
>> +bandwidth requirements along interconnect paths they are connected to. There
>> +can be multiple interconnect providers on a SoC and the consumer may consume
>> +multiple paths from different providers depending on use case and the
>> +components it has to interact with.
>> +
>> +Required properties:
>> +interconnects : Pairs of phandles and interconnect provider specifier to denote
>> +	        the edge source and destination ports of the interconnect path.
>> +
>> +Optional properties:
>> +interconnect-names : List of interconnect path name strings sorted in the same
>> +		     order as the interconnects property. Consumers drivers will use
>> +		     interconnect-names to match interconnect paths with interconnect
>> +		     specifiers.
> 
> specifier pairs.
> 
>> +
>> +Example:
>> +
>> +	sdhci@7864000 {
>> +		...
>> +		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
>> +		interconnect-names = "ddr";
>> +	};
Sudeep Holla Sept. 26, 2018, 2:48 p.m. UTC | #4
On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
> Hi Rob,
> 
> Thanks for the comments!
> 
> On 09/25/2018 09:02 PM, Rob Herring wrote:
> > On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> >> This binding is intended to represent the relations between the interconnect
> >> controllers (providers) and consumer device nodes. It will allow creating links
> >> between consumers and interconnect paths (exposed by interconnect providers).
> > 
> > As I mentioned in person, I want to see other SoC families using this 
> > before accepting. They don't have to be ready for upstream, but WIP 
> > patches or even just a "yes, this works for us and we're going to use 
> > this binding on X".
> 
> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
> currently using this binding, there is ongoing work from at least two
> other vendors that would be using this same binding. I will check on
> what is their progress so far.
> 
> > Also, I think the QCom GPU use of this should be fully sorted out. Or 
> > more generically how this fits into OPP binding which seems to be never 
> > ending extended...
> 
> I see this as a further step. It could be OPP binding which include
> bandwidth values or some separate DT property. Jordan has already
> proposed something, do you have any initial comments on that?

I am curious as to how this fits into new systems which have firmware-driven
CPUFreq and other DVFS. I would like to avoid using this in such systems
and leave it up to the firmware to scale the bus/interconnect based on the
other components that are connected to it and active.

--
Regards,
Sudeep
Georgi Djakov Sept. 26, 2018, 3:03 p.m. UTC | #5
Hi Sudeep,

On 09/26/2018 05:48 PM, Sudeep Holla wrote:
> On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
>> Hi Rob,
>>
>> Thanks for the comments!
>>
>> On 09/25/2018 09:02 PM, Rob Herring wrote:
>>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>>> This binding is intended to represent the relations between the interconnect
>>>> controllers (providers) and consumer device nodes. It will allow creating links
>>>> between consumers and interconnect paths (exposed by interconnect providers).
>>>
>>> As I mentioned in person, I want to see other SoC families using this 
>>> before accepting. They don't have to be ready for upstream, but WIP 
>>> patches or even just a "yes, this works for us and we're going to use 
>>> this binding on X".
>>
>> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
>> currently using this binding, there is ongoing work from at least two
>> other vendors that would be using this same binding. I will check on
>> what is their progress so far.
>>
>>> Also, I think the QCom GPU use of this should be fully sorted out. Or 
>>> more generically how this fits into OPP binding which seems to be never 
>>> ending extended...
>>
>> I see this as a further step. It could be OPP binding which include
>> bandwidth values or some separate DT property. Jordan has already
>> proposed something, do you have any initial comments on that?
> 
> I am curious as how this fits into new systems which have firmware driven
> CPUFreq and other DVFS. I would like to avoid using this in such systems
> and leave it upto the firmware to scale the bus/interconnect based on the
> other components that are connected to it and active.

This can be used with firmware-driven systems too. In this case the
interconnect platform driver will not manage the interconnects directly,
but will collect data and provide hints to the firmware. The firmware
can then take these hints into account and do the scaling.

Thanks,
Georgi
Saravana Kannan Oct. 1, 2018, 8:56 p.m. UTC | #6
On 09/26/2018 07:34 AM, Jordan Crouse wrote:
> On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>> This binding is intended to represent the relations between the interconnect
>>> controllers (providers) and consumer device nodes. It will allow creating links
>>> between consumers and interconnect paths (exposed by interconnect providers).
>> As I mentioned in person, I want to see other SoC families using this
>> before accepting. They don't have to be ready for upstream, but WIP
>> patches or even just a "yes, this works for us and we're going to use
>> this binding on X".
>>
>> Also, I think the QCom GPU use of this should be fully sorted out. Or
>> more generically how this fits into OPP binding which seems to be never
>> ending extended...
> This is a discussion I wouldn't mind having now.  To jog memories, this is what
> I posted a few weeks ago:
>
> https://patchwork.freedesktop.org/patch/246117/
>
> This seems like the easiest way to me to tie the frequency and the bandwidth
> quota together for GPU devfreq scaling but I'm not married to the format and
> I'll happily go a few rounds on the bikeshed if we can get something we can
> be happy with.
>
> Jordan

Been meaning to send this out for a while, but caught up with other stuff.

That GPU BW patch is very specific to device-to-device mapping and 
doesn't work well for other use cases (e.g. those that can calculate 
bandwidth based on the use case, etc.).

Interconnect paths have different BW (bandwidth) operating points that 
they can support. For example: 1 GB/s, 1.7 GB/s, 5 GB/s, etc. Having a 
mapping from GPU or CPU to those is fine/necessary, but we still need a 
separate BW OPP table for interconnect paths to list what they can 
actually support.

Two different ways we could represent BW OPP tables for interconnect paths:
1. Represent interconnect paths (CPU to DDR, GPU to DDR, etc.) as 
devices and have OPPs for those devices.

2. We can have an "interconnect-opp-tables" DT binding similar to 
"interconnects" and "interconnect-names". So if a device (GPU, video 
decoder, I2C device, etc.) needs to vote on an interconnect path, it can 
also list the OPP tables that those paths can support.

I know Rob doesn't like (1). But I'm hoping at least (2) is acceptable. 
I'm open to other suggestions too.
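
A minimal sketch of what (2) could look like, assuming a new
"interconnect-opp-tables" property and a made-up "opp-bw-MBps" entry;
neither is an accepted binding and all node names here are hypothetical:

	/* Hypothetical BW OPP table for one interconnect path. */
	gpu_ddr_bw_opp: gpu-ddr-bw-opp-table {
		compatible = "operating-points-v2";

		opp-1000 { opp-bw-MBps = <1000>; };	/* 1 GB/s */
		opp-1700 { opp-bw-MBps = <1700>; };	/* 1.7 GB/s */
		opp-5000 { opp-bw-MBps = <5000>; };	/* 5 GB/s */
	};

	gpu@5000000 {
		...
		interconnects = <&gnoc MASTER_GFX3D &bimc SLAVE_EBI_CH0>;
		interconnect-names = "gfx-mem";
		/* Proposed property, not an accepted binding. */
		interconnect-opp-tables = <&gpu_ddr_bw_opp>;
	};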

Both (1) and (2) need BW OPP tables similar to frequency OPP tables. 
That should be easy to add and Viresh is open to that. I'm open to other 
options too, but the fundamental missing part is how to tie a list of BW 
OPPs to interconnect paths in DT.

Once we have one of the above two options, we can use the required-opps 
field (already present in the kernel) for the mapping from a GPU OPP to a 
particular BW need (suggested by Viresh during an in-person conversation).

Thanks,
Saravana
Jordan Crouse Oct. 1, 2018, 9:26 p.m. UTC | #7
On Mon, Oct 01, 2018 at 01:56:32PM -0700, Saravana Kannan wrote:
> 
> 
> On 09/26/2018 07:34 AM, Jordan Crouse wrote:
> >On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
> >>On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> >>>This binding is intended to represent the relations between the interconnect
> >>>controllers (providers) and consumer device nodes. It will allow creating links
> >>>between consumers and interconnect paths (exposed by interconnect providers).
> >>As I mentioned in person, I want to see other SoC families using this
> >>before accepting. They don't have to be ready for upstream, but WIP
> >>patches or even just a "yes, this works for us and we're going to use
> >>this binding on X".
> >>
> >>Also, I think the QCom GPU use of this should be fully sorted out. Or
> >>more generically how this fits into OPP binding which seems to be never
> >>ending extended...
> >This is a discussion I wouldn't mind having now.  To jog memories, this is what
> >I posted a few weeks ago:
> >
> >https://patchwork.freedesktop.org/patch/246117/
> >
> >This seems like the easiest way to me to tie the frequency and the bandwidth
> >quota together for GPU devfreq scaling but I'm not married to the format and
> >I'll happily go a few rounds on the bikeshed if we can get something we can
> >be happy with.
> >
> >Jordan
> 
> Been meaning to send this out for a while, but caught up with other stuff.
> 
> That GPU BW patch is very specific to device to device mapping and
> doesn't work well for different use cases (Eg: those that  can
> calculate based on use case, etc).
> 
> Interconnect paths have different BW (bandwidth) operating points
> that they can support. For example: 1 GB/s, 1.7 GB/s, 5GB/s, etc.
> Having a mapping from GPU or CPU to those are fine/necessary, but we
> still need a separate BW OPP table for interconnect paths to list
> what they can actually support.
> 
> Two different ways we could represent BW OPP tables for interconnect paths:
> 1.  Represent interconnect paths (CPU to DDR, GPU to DDR, etc) as
> devices and have OPPs for those devices.
> 
> 2. We can have a "interconnect-opp-tables" DT binding similar to
> "interconnects" and "interconnect-names". So if a device GPU or
> Video decoder or I2C device needs to vote on an interconnect path,
> they can also list the OPP tables that those paths can support.
> 
> I know Rob doesn't like (1). But I'm hoping at least (2) is
> acceptable. I'm open to other suggestions too.
> 
> Both (1) and (2) need BW OPP tables similar to frequency OPP tables.
> That should be easy to add and Viresh is open to that. I'm open to
> other options too, but the fundamental missing part is how to tie a
> list of BW OPPs to interconnect paths in DT.
> 
> Once we have one of the above two options, we can use the
> required-opps field (already present in kernel) for the mapping
> between GPU to a particular BW need (suggested by Viresh during an
> in person conversation).

Assuming we are willing to maintain the bandwidth OPP tables and the
names / phandles needed to describe a 1:1 GPU -> bandwidth mapping,
I'm okay with required-opps, but for the sake of argument, how would
required-opps work for a device that needs to vote on multiple paths
for a given OPP?

Jordan
Saravana Kannan Oct. 1, 2018, 9:51 p.m. UTC | #8
On 10/01/2018 02:26 PM, Jordan Crouse wrote:
> On Mon, Oct 01, 2018 at 01:56:32PM -0700, Saravana Kannan wrote:
>>
>> On 09/26/2018 07:34 AM, Jordan Crouse wrote:
>>> On Tue, Sep 25, 2018 at 01:02:15PM -0500, Rob Herring wrote:
>>>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>>>> This binding is intended to represent the relations between the interconnect
>>>>> controllers (providers) and consumer device nodes. It will allow creating links
>>>>> between consumers and interconnect paths (exposed by interconnect providers).
>>>> As I mentioned in person, I want to see other SoC families using this
>>>> before accepting. They don't have to be ready for upstream, but WIP
>>>> patches or even just a "yes, this works for us and we're going to use
>>>> this binding on X".
>>>>
>>>> Also, I think the QCom GPU use of this should be fully sorted out. Or
>>>> more generically how this fits into OPP binding which seems to be never
>>>> ending extended...
>>> This is a discussion I wouldn't mind having now.  To jog memories, this is what
>>> I posted a few weeks ago:
>>>
>>> https://patchwork.freedesktop.org/patch/246117/
>>>
>>> This seems like the easiest way to me to tie the frequency and the bandwidth
>>> quota together for GPU devfreq scaling but I'm not married to the format and
>>> I'll happily go a few rounds on the bikeshed if we can get something we can
>>> be happy with.
>>>
>>> Jordan
>> Been meaning to send this out for a while, but caught up with other stuff.
>>
>> That GPU BW patch is very specific to device to device mapping and
>> doesn't work well for different use cases (Eg: those that  can
>> calculate based on use case, etc).
>>
>> Interconnect paths have different BW (bandwidth) operating points
>> that they can support. For example: 1 GB/s, 1.7 GB/s, 5GB/s, etc.
>> Having a mapping from GPU or CPU to those are fine/necessary, but we
>> still need a separate BW OPP table for interconnect paths to list
>> what they can actually support.
>>
>> Two different ways we could represent BW OPP tables for interconnect paths:
>> 1.  Represent interconnect paths (CPU to DDR, GPU to DDR, etc) as
>> devices and have OPPs for those devices.
>>
>> 2. We can have a "interconnect-opp-tables" DT binding similar to
>> "interconnects" and "interconnect-names". So if a device GPU or
>> Video decoder or I2C device needs to vote on an interconnect path,
>> they can also list the OPP tables that those paths can support.
>>
>> I know Rob doesn't like (1). But I'm hoping at least (2) is
>> acceptable. I'm open to other suggestions too.
>>
>> Both (1) and (2) need BW OPP tables similar to frequency OPP tables.
>> That should be easy to add and Viresh is open to that. I'm open to
>> other options too, but the fundamental missing part is how to tie a
>> list of BW OPPs to interconnect paths in DT.
>>
>> Once we have one of the above two options, we can use the
>> required-opps field (already present in kernel) for the mapping
>> between GPU to a particular BW need (suggested by Viresh during an
>> in person conversation).
> Assuming we are willing to maintain the bandwidth OPP tables and the
> names / phandles needed to describe a 1:1 GPU -> bandwidth mapping
> I'm okay with required-opps but for the sake of argument how would
> required-opps work for a device that needs to vote multiple paths
> for a given OPP?

You can list multiple required-opps per device OPP. It's an array of 
phandles to OPP entries in other tables.
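
As a rough sketch, assuming BW OPP tables already exist for the two
paths (all node and label names below are hypothetical):

	/* Hypothetical example: one GPU frequency OPP pulling in the
	 * required bandwidth OPP entries for two separate paths,
	 * e.g. GPU->DDR and GPU->LLCC. */
	gpu-opp-table {
		compatible = "operating-points-v2";

		opp-710000000 {
			opp-hz = /bits/ 64 <710000000>;
			required-opps = <&gpu_ddr_bw_opp_5000>,
					<&gpu_llcc_bw_opp_2000>;
		};
	};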

-Saravana
Saravana Kannan Oct. 1, 2018, 11:49 p.m. UTC | #9
On 09/26/2018 07:48 AM, Sudeep Holla wrote:
> On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
>> Hi Rob,
>>
>> Thanks for the comments!
>>
>> On 09/25/2018 09:02 PM, Rob Herring wrote:
>>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>>> This binding is intended to represent the relations between the interconnect
>>>> controllers (providers) and consumer device nodes. It will allow creating links
>>>> between consumers and interconnect paths (exposed by interconnect providers).
>>> As I mentioned in person, I want to see other SoC families using this
>>> before accepting. They don't have to be ready for upstream, but WIP
>>> patches or even just a "yes, this works for us and we're going to use
>>> this binding on X".
>> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
>> currently using this binding, there is ongoing work from at least two
>> other vendors that would be using this same binding. I will check on
>> what is their progress so far.
>>
>>> Also, I think the QCom GPU use of this should be fully sorted out. Or
>>> more generically how this fits into OPP binding which seems to be never
>>> ending extended...
>> I see this as a further step. It could be OPP binding which include
>> bandwidth values or some separate DT property. Jordan has already
>> proposed something, do you have any initial comments on that?
> I am curious as how this fits into new systems which have firmware driven
> CPUFreq and other DVFS. I would like to avoid using this in such systems
> and leave it upto the firmware to scale the bus/interconnect based on the
> other components that are connected to it and active.
>

You've made the same point multiple times across different patch sets. 
Not all FW can do arbitrary functions. A lot of them are very limited in 
their capabilities. So, as much as you and I would like to let the FW do 
the work, it's not always possible. So, in those cases, we do need to 
have support for the kernel scaling the interconnects correctly. 
Hopefully this clears up your questions about FW capabilities.

Thanks,
Saravana
Sudeep Holla Oct. 2, 2018, 11:17 a.m. UTC | #10
On Mon, Oct 01, 2018 at 04:49:32PM -0700, Saravana Kannan wrote:
> On 09/26/2018 07:48 AM, Sudeep Holla wrote:
> > On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
> > > Hi Rob,
> > > 
> > > Thanks for the comments!
> > > 
> > > On 09/25/2018 09:02 PM, Rob Herring wrote:
> > > > On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
> > > > > This binding is intended to represent the relations between the interconnect
> > > > > controllers (providers) and consumer device nodes. It will allow creating links
> > > > > between consumers and interconnect paths (exposed by interconnect providers).
> > > > As I mentioned in person, I want to see other SoC families using this
> > > > before accepting. They don't have to be ready for upstream, but WIP
> > > > patches or even just a "yes, this works for us and we're going to use
> > > > this binding on X".
> > > Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
> > > currently using this binding, there is ongoing work from at least two
> > > other vendors that would be using this same binding. I will check on
> > > what is their progress so far.
> > > 
> > > > Also, I think the QCom GPU use of this should be fully sorted out. Or
> > > > more generically how this fits into OPP binding which seems to be never
> > > > ending extended...
> > > I see this as a further step. It could be OPP binding which include
> > > bandwidth values or some separate DT property. Jordan has already
> > > proposed something, do you have any initial comments on that?
> > I am curious as how this fits into new systems which have firmware driven
> > CPUFreq and other DVFS. I would like to avoid using this in such systems
> > and leave it upto the firmware to scale the bus/interconnect based on the
> > other components that are connected to it and active.
> > 
> 
> You've made the same point multiple times across different patch sets. Not
> all FW can do arbitrary functions. A lot of them are very limited in their
> capabilities. So, as much as you and I would like to let the FW do the work,
> it's not always possible. So, in those cases, we do need to have support for
> the kernel scaling the interconnects correctly. Hopefully this clears up
> your questions about FW capabilities.

Yes, I do understand I have made the same point multiple times and it's
intentional. We need to get the fragmented f/w support story fixed.
Different ARM vendors are doing different things in f/w and ARM sees the
same fragmentation story as before. We have come up with a new specification
and my annoying multiple emails are just a constant reminder of that.

I do understand we have existing implementations to consider, but fixing
the functionality in an arbitrary way is not a good design and it is better
to get this fixed for the future.

--
Regards,
Sudeep
Saravana Kannan Oct. 2, 2018, 6:56 p.m. UTC | #11
On 10/02/2018 04:17 AM, Sudeep Holla wrote:
> On Mon, Oct 01, 2018 at 04:49:32PM -0700, Saravana Kannan wrote:
>> On 09/26/2018 07:48 AM, Sudeep Holla wrote:
>>> On Wed, Sep 26, 2018 at 05:42:15PM +0300, Georgi Djakov wrote:
>>>> Hi Rob,
>>>>
>>>> Thanks for the comments!
>>>>
>>>> On 09/25/2018 09:02 PM, Rob Herring wrote:
>>>>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>>>>> This binding is intended to represent the relations between the interconnect
>>>>>> controllers (providers) and consumer device nodes. It will allow creating links
>>>>>> between consumers and interconnect paths (exposed by interconnect providers).
>>>>> As I mentioned in person, I want to see other SoC families using this
>>>>> before accepting. They don't have to be ready for upstream, but WIP
>>>>> patches or even just a "yes, this works for us and we're going to use
>>>>> this binding on X".
>>>> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
>>>> currently using this binding, there is ongoing work from at least two
>>>> other vendors that would be using this same binding. I will check on
>>>> what is their progress so far.
>>>>
>>>>> Also, I think the QCom GPU use of this should be fully sorted out. Or
>>>>> more generically how this fits into OPP binding which seems to be never
>>>>> ending extended...
>>>> I see this as a further step. It could be OPP binding which include
>>>> bandwidth values or some separate DT property. Jordan has already
>>>> proposed something, do you have any initial comments on that?
>>> I am curious as how this fits into new systems which have firmware driven
>>> CPUFreq and other DVFS. I would like to avoid using this in such systems
>>> and leave it upto the firmware to scale the bus/interconnect based on the
>>> other components that are connected to it and active.
>>>
>> You've made the same point multiple times across different patch sets. Not
>> all FW can do arbitrary functions. A lot of them are very limited in their
>> capabilities. So, as much as you and I would like to let the FW do the work,
>> it's not always possible. So, in those cases, we do need to have support for
>> the kernel scaling the interconnects correctly. Hopefully this clears up
>> your questions about FW capabilities.
> Yes, I do understand I have made the same point multiple time and it's
> intentional. We need to get the fragmented f/w support story fixed.
> Different ARM vendors are doing different things in f/w and ARM sees the
> same fragmentation story as before. We have come up with new specification
> and my annoying multiple emails are just to constantly remind the same.
>
> I do understand we have existing implementations to consider, but fixing
> the functionality in arbitrary way is not a good design and it better
> to get them fixed for future.

I believe the fragmentation you are referring to is in the 
interface/communication protocol. I see the benefit of standardizing 
that as long as the standard actually turns out to be good. But that's 
completely separate from what the FW can/can't do. Asking to standardize 
what the FW can/can't do doesn't seem realistic, as each chip vendor will 
have different priorities - power, performance, cost, chip area, etc. 
It's the conflation of these separate topics that doesn't help IMHO.

-Saravana
Sudeep Holla Oct. 3, 2018, 9:33 a.m. UTC | #12
On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
> 
> On 10/02/2018 04:17 AM, Sudeep Holla wrote:

[...]

> > Yes, I do understand I have made the same point multiple time and it's
> > intentional. We need to get the fragmented f/w support story fixed.
> > Different ARM vendors are doing different things in f/w and ARM sees the
> > same fragmentation story as before. We have come up with new specification
> > and my annoying multiple emails are just to constantly remind the same.
> > 
> > I do understand we have existing implementations to consider, but fixing
> > the functionality in arbitrary way is not a good design and it better
> > to get them fixed for future.
> 
> I believe the fragmentation you are referring to is  in the
> interface/communication protocol. I see the benefit of standardizing that as
> long as the standard actually turns out to be good. But that's completely
> separate from what the FW can/can't do. Asking to standardize what the FW
> can/can't do doesn't seem realistic as each chip vendor will have different
> priorities - power, performance, cost, chip area, etc. It's the conflation
> of these separate topics that doesn't help IMHO.

I agree on interface/communication protocol fragmentation, and firmware
can implement whatever the vendor wishes. What I was also referring to was
the mix-n-match approach, which should be avoided.

e.g. Device A and B's PM is managed completely by firmware using OSPM hints.
Suppose Device X's PM is dependent on Device A and B, in which case it's
simpler and cleaner to leave Device X PM to firmware. Reading the state
of A and B and using that as a hint for X is just overhead which firmware
can manage better. That was my main concern here: A=CPU and B=some other
device and X is the interconnect to which A and B are connected.

If CPU OPPs are obtained from f/w and this interconnect from DT, the mapping
is then a mess and that's what I was concerned about. I am sorry if that's
not the scenario here; I may have been mistaken.

--
Regards,
Sudeep
Saravana Kannan Oct. 3, 2018, 6:06 p.m. UTC | #13
On 10/03/2018 02:33 AM, Sudeep Holla wrote:
> On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
>> On 10/02/2018 04:17 AM, Sudeep Holla wrote:
> [...]
>
>>> Yes, I do understand I have made the same point multiple time and it's
>>> intentional. We need to get the fragmented f/w support story fixed.
>>> Different ARM vendors are doing different things in f/w and ARM sees the
>>> same fragmentation story as before. We have come up with new specification
>>> and my annoying multiple emails are just to constantly remind the same.
>>>
>>> I do understand we have existing implementations to consider, but fixing
>>> the functionality in arbitrary way is not a good design and it better
>>> to get them fixed for future.
>> I believe the fragmentation you are referring to is  in the
>> interface/communication protocol. I see the benefit of standardizing that as
>> long as the standard actually turns out to be good. But that's completely
>> separate from what the FW can/can't do. Asking to standardize what the FW
>> can/can't do doesn't seem realistic as each chip vendor will have different
>> priorities - power, performance, cost, chip area, etc. It's the conflation
>> of these separate topics that doesn't help IMHO.
> I agree on interface/communication protocol fragmentation and firmware
> can implement whatever the vendor wish. What I was also referring was
> the mix-n-match approach which should be avoided.
>
> e.g. Device A and B's PM is managed completely by firmware using OSPM hints
> Suppose Device X's PM is dependent on Device A and B, in which case it's
> simpler and cleaner to leave Device X PM to firmware. Reading the state
> of A and B and using that as hint for X is just overhead which firmware
> can manage better. That was my main concern here: A=CPU and B=some other
> device and X is inter-connect to which A and B are connected.
>
> If CPU OPPs are obtained from f/w and this inter-connect from DT, mapping
> then is a mess and that's what I was concerned. I am sorry if that's not
> the scenario here, I may have mistaken then.
>
What you are asking would be an ideal case, but this is not an ideal 
world. There are tons of constraints for each chip vendor. Saying you 
can't mix and match makes perfect the enemy of the good. Adding FW 
support for A and B might make them optimal. But adding support for X 
might not be possible due to multiple real-world constraints (chip area, 
cost, time to market, etc.). Saying "either do it all or do nothing" is 
going to hold back a lot of progress that can come in increments. Heck, we 
do the same thing in the kernel. We'll add basic, simple features first 
and then improve on them. Why is it suddenly frowned upon if a FW/HW 
follows the same approach? I'll just have to agree to disagree with you 
on this viewpoint.

-Saravana
Sudeep Holla Oct. 10, 2018, 3:02 p.m. UTC | #14
On Wed, Oct 03, 2018 at 11:06:45AM -0700, Saravana Kannan wrote:
> 
> 
> On 10/03/2018 02:33 AM, Sudeep Holla wrote:
> > On Tue, Oct 02, 2018 at 11:56:56AM -0700, Saravana Kannan wrote:
> > > On 10/02/2018 04:17 AM, Sudeep Holla wrote:
> > [...]
> > 
> > > > Yes, I do understand I have made the same point multiple time and it's
> > > > intentional. We need to get the fragmented f/w support story fixed.
> > > > Different ARM vendors are doing different things in f/w and ARM sees the
> > > > same fragmentation story as before. We have come up with new specification
> > > > and my annoying multiple emails are just to constantly remind the same.
> > > > 
> > > > I do understand we have existing implementations to consider, but fixing
> > > > the functionality in arbitrary way is not a good design and it better
> > > > to get them fixed for future.
> > > I believe the fragmentation you are referring to is  in the
> > > interface/communication protocol. I see the benefit of standardizing that as
> > > long as the standard actually turns out to be good. But that's completely
> > > separate from what the FW can/can't do. Asking to standardize what the FW
> > > can/can't do doesn't seem realistic as each chip vendor will have different
> > > priorities - power, performance, cost, chip area, etc. It's the conflation
> > > of these separate topics that doesn't help IMHO.
> > I agree on interface/communication protocol fragmentation and firmware
> > can implement whatever the vendor wish. What I was also referring was
> > the mix-n-match approach which should be avoided.
> > 
> > e.g. Device A and B's PM is managed completely by firmware using OSPM hints
> > Suppose Device X's PM is dependent on Device A and B, in which case it's
> > simpler and cleaner to leave Device X PM to firmware. Reading the state
> > of A and B and using that as hint for X is just overhead which firmware
> > can manage better. That was my main concern here: A=CPU and B=some other
> > device and X is inter-connect to which A and B are connected.
> > 
> > If CPU OPPs are obtained from f/w and this inter-connect from DT, mapping
> > then is a mess and that's what I was concerned. I am sorry if that's not
> > the scenario here, I may have mistaken then.
> > 
> What you are asking would be an ideal case, but this is not an ideal world.

Agreed.

> There are tons of constraints for each chip vendor. Saying you can't mix and
> match makes perfect the enemy of the good.

We can have an endless debate on that.

> Adding FW support for A and B might make them optimal.

OK...

> But adding support for X might not be possible for
> multiple real world constraints (chip area, cost, time to market, etc).

but it is not a good design though. If f/w blindly changes DVFS for X based
on an OS request, then there's still the possibility of CLKSCREW-style
exploits, even though moving A/B to f/w was meant to avoid them. The chances
are low but not zero.

> Saying "either do it all or do nothing" is going to hold back a lot progress
> that can come in increments. Heck, we do the same thing in the kernel. We'll
> add basic simple features first and then improve on them. Why is it suddenly
> frowned up if a FW/HW follows the same approach? I'll just have to agree to
> disagree with you on this view point.
>

I agree with the policy of adding the basics first and then improving on
them. But it's not fair to compare this mix-'n'-match approach to that.
Sorry, but I disagree with the comparison here.

--
Regards,
Sudeep
Georgi Djakov Nov. 27, 2018, 6:05 p.m. UTC | #15
Hi Rob,

On 9/26/18 17:42, Georgi Djakov wrote:
> Hi Rob,
> 
> Thanks for the comments!
> 
> On 09/25/2018 09:02 PM, Rob Herring wrote:
>> On Fri, Aug 31, 2018 at 05:01:45PM +0300, Georgi Djakov wrote:
>>> This binding is intended to represent the relations between the interconnect
>>> controllers (providers) and consumer device nodes. It will allow creating links
>>> between consumers and interconnect paths (exposed by interconnect providers).
>>
>> As I mentioned in person, I want to see other SoC families using this
>> before accepting. They don't have to be ready for upstream, but WIP
>> patches or even just a "yes, this works for us and we're going to use
>> this binding on X".

Patches for the iMX7ULP platform are already available [1]. Thanks 
Alexandre Bailon!

The interconnect API also seems to be a good fit for Nvidia SoCs. There 
is an ongoing discussion about implementing an interconnect provider 
driver for Tegra [2]. Thanks, Thierry and Krishna!

In addition to the above, I also checked privately with a few other SoC 
maintainers and made them aware of these patches. Some are not ready for 
upstream yet, but the feedback was positive and I expect more SoCs to 
make use of this in the future.

BR,
Georgi

[1] https://lkml.org/lkml/2018/11/17/368
[2] https://marc.info/?t=154056181900001&r=1&w=2


> Other than the 3 Qualcomm SoCs (msm8916, msm8996, sdm845) that are
> currently using this binding, there is ongoing work from at least two
> other vendors that would be using this same binding. I will check on
> what is their progress so far.
> 
>> Also, I think the QCom GPU use of this should be fully sorted out. Or
>> more generically how this fits into OPP binding which seems to be never
>> ending extended...
> 
> I see this as a further step. It could be OPP binding which include
> bandwidth values or some separate DT property. Jordan has already
> proposed something, do you have any initial comments on that?
> 
> BR,
> Georgi
>

Patch

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..5cb7d3c8d44d
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,60 @@ 
+Interconnect Provider Device Tree Bindings
+=========================================
+
+The purpose of this document is to define a common set of generic interconnect
+providers/consumers properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority
+etc. The consumer drivers set constraints on interconnect path (or endpoints)
+depending on the use case. Interconnect providers can also be interconnect
+consumers, such as in the case where two network-on-chip fabrics interface
+directly
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in a interconnect specifier needed to
+			encode the interconnect node id
+
+Example:
+
+		snoc: snoc@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};
+
+
+= interconnect consumers =
+
+The interconnect consumers are device nodes which dynamically express their
+bandwidth requirements along interconnect paths they are connected to. There
+can be multiple interconnect providers on a SoC and the consumer may consume
+multiple paths from different providers depending on use case and the
+components it has to interact with.
+
+Required properties:
+interconnects : Pairs of phandles and interconnect provider specifier to denote
+	        the edge source and destination ports of the interconnect path.
+
+Optional properties:
+interconnect-names : List of interconnect path name strings sorted in the same
+		     order as the interconnects property. Consumers drivers will use
+		     interconnect-names to match interconnect paths with interconnect
+		     specifiers.
+
+Example:
+
+	sdhci@7864000 {
+		...
+		interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
+		interconnect-names = "ddr";
+	};
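
For reference, a consumer that votes on more than one path lists multiple
specifier pairs, matched positionally by interconnect-names. The sketch
below is illustrative only and not part of the patch; the endpoint macros
and node name are placeholders:

	/* Illustrative multi-path consumer; endpoint macros are placeholders. */
	video-codec@1d00000 {
		...
		interconnects = <&mnoc MASTER_VIDEO_P0 &bimc SLAVE_EBI_CH0>,
				<&mnoc MASTER_VIDEO_P0 &snoc SLAVE_OCIMEM>;
		interconnect-names = "video-mem", "video-ocimem";
	};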