
[v7,2/8] dt-bindings: Introduce interconnect provider bindings

Message ID: 20180731161340.13000-3-georgi.djakov@linaro.org (mailing list archive)
State: Not Applicable, archived
Delegated to: Andy Gross
Series: Introduce on-chip interconnect API

Commit Message

Georgi Djakov July 31, 2018, 4:13 p.m. UTC
This binding is intended to represent the interconnect hardware present
in some of the modern SoCs. Currently it consists only of a binding for
the interconnect hardware devices (provider).

Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
---
 .../bindings/interconnect/interconnect.txt    | 33 +++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt


Comments

Rob Herring (Arm) Aug. 2, 2018, 9:02 p.m. UTC | #1
On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> This binding is intended to represent the interconnect hardware present
> in some of the modern SoCs. Currently it consists only of a binding for
> the interconnect hardware devices (provider).

If you want the bindings reviewed, then you need to send them to the
DT list. CC'ing me is pointless, I get CC'ed too many things to read.

The consumer and producer binding should be a single patch. One is not
useful without the other.

There is also a patch series from Maxime Ripard that's addressing the
same general area. See "dt-bindings: Add a dma-parent property". We
don't need multiple ways of describing the device-to-memory paths, so
you all had better work out a common solution.

Rob

>
> Signed-off-by: Georgi Djakov <georgi.djakov@linaro.org>
> ---
>  .../bindings/interconnect/interconnect.txt    | 33 +++++++++++++++++++
>  1 file changed, 33 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/interconnect/interconnect.txt
>
> diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> new file mode 100644
> index 000000000000..6e2b2971b094
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
> @@ -0,0 +1,33 @@
> +Interconnect Provider Device Tree Bindings
> +==========================================
> +
> +The purpose of this document is to define a common set of generic interconnect
> +provider/consumer properties.
> +
> +
> += interconnect providers =
> +
> +The interconnect provider binding is intended to represent the interconnect
> +controllers in the system. Each provider registers a set of interconnect
> +nodes, which expose the interconnect-related capabilities of the interconnect
> +to consumer drivers. These capabilities can be throughput, latency, priority,
> +etc. The consumer drivers set constraints on an interconnect path (or
> +endpoints) depending on the use case. Interconnect providers can also be
> +interconnect consumers, such as in the case where two network-on-chip fabrics
> +interface directly.
> +
> +Required properties:
> +- compatible : contains the interconnect provider compatible string
> +- #interconnect-cells : number of cells in an interconnect specifier needed to
> +                       encode the interconnect node id
> +
> +Example:
> +
> +               snoc: snoc@580000 {
> +                       compatible = "qcom,msm8916-snoc";
> +                       #interconnect-cells = <1>;
> +                       reg = <0x580000 0x14000>;
> +                       clock-names = "bus_clk", "bus_a_clk";
> +                       clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
> +                                <&rpmcc RPM_SMD_SNOC_A_CLK>;
> +               };
Georgi Djakov Aug. 7, 2018, 2:54 p.m. UTC | #2
Hi Rob,

On 08/03/2018 12:02 AM, Rob Herring wrote:
> On Tue, Jul 31, 2018 at 10:13 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>
>> This binding is intended to represent the interconnect hardware present
>> in some of the modern SoCs. Currently it consists only of a binding for
>> the interconnect hardware devices (provider).
> 
> If you want the bindings reviewed, then you need to send them to the
> DT list. CC'ing me is pointless, I get CC'ed too many things to read.

Oops, ok!

> The consumer and producer binding should be a single patch. One is not
> useful without the other.

The reason for splitting them is that they can be reviewed separately.
Also, we can rely on platform data instead of using DT and the consumer
binding. However, I will do as you suggest.
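
For reference, a consumer node using such a binding would look roughly like
the sketch below. This assumes the consumer properties end up being named
"interconnects" and "interconnect-names"; the sdhci node, provider phandles
and port macros are only placeholders for illustration:

		sdhci@7864000 {
			compatible = "qcom,sdhci-msm-v4";
			...
			/* placeholder path from the SDHC master port to main memory */
			interconnects = <&pnoc MASTER_SDCC_1 &bimc SLAVE_EBI_CH0>;
			interconnect-names = "sdhc-mem";
		};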

> There is also a patch series from Maxime Ripard that's addressing the
> same general area. See "dt-bindings: Add a dma-parent property". We
> don't need multiple ways to address describing the device to memory
> paths, so you all had better work out a common solution.

Looks like this fits exactly into the interconnect API concept. I see
MBUS as an interconnect provider and display/camera as consumers that
report their bandwidth needs. I am also planning to add support for
priority.

Thanks,
Georgi
Maxime Ripard Aug. 20, 2018, 3:32 p.m. UTC | #3
Hi Georgi,

On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > There is also a patch series from Maxime Ripard that's addressing the
> > same general area. See "dt-bindings: Add a dma-parent property". We
> > don't need multiple ways to address describing the device to memory
> > paths, so you all had better work out a common solution.
> 
> Looks like this fits exactly into the interconnect API concept. I see
> MBUS as interconnect provider and display/camera as consumers, that
> report their bandwidth needs. I am also planning to add support for
> priority.

Thanks for working on this. After looking at your series, the one thing
I'm a bit uncertain about (and the most important one to us) is how we
would be able to tell through which interconnect the DMAs are done.

This is important to us since our topology is actually quite simple as
you've seen, but the RAM is not mapped at the same address on that bus
and on the CPU side, so we need to apply an offset to each buffer being DMA'd.

Maxime
Georgi Djakov Aug. 24, 2018, 2:51 p.m. UTC | #4
Hi Maxime,

On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> Hi Georgi,
> 
> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>> There is also a patch series from Maxime Ripard that's addressing the
>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>> don't need multiple ways to address describing the device to memory
>>> paths, so you all had better work out a common solution.
>>
>> Looks like this fits exactly into the interconnect API concept. I see
>> MBUS as interconnect provider and display/camera as consumers, that
>> report their bandwidth needs. I am also planning to add support for
>> priority.
> 
> Thanks for working on this. After looking at your serie, the one thing
> I'm a bit uncertain about (and the most important one to us) is how we
> would be able to tell through which interconnect the DMA are done.
> 
> This is important to us since our topology is actually quite simple as
> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> so we need to apply an offset to each buffer being DMA'd.

Ok, I see - your problem is not about bandwidth scaling but about the
driver using different memory ranges to access the same location. So
this is not really the same problem. Also, the interconnect bindings are
describing a path and endpoints. However, I am open to any ideas.

Thanks,
Georgi
Rob Herring (Arm) Aug. 24, 2018, 3:35 p.m. UTC | #5
On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>
> Hi Maxime,
>
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> >
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways to address describing the device to memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> MBUS as interconnect provider and display/camera as consumers, that
> >> report their bandwidth needs. I am also planning to add support for
> >> priority.
> >
> > Thanks for working on this. After looking at your serie, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA are done.
> >
> > This is important to us since our topology is actually quite simple as
> > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > so we need to apply an offset to each buffer being DMA'd.
>
> Ok, i see - your problem is not about bandwidth scaling but about using
> different memory ranges by the driver to access the same location. So
> this is not really the same and your problem is different. Also the
> interconnect bindings are describing a path and endpoints. However i am
> open to any ideas.

You may each need different things, but both are related to the path
between a bus master and memory. We can't have each 'problem'
described in a different way. Well, we could, as long as each platform
has different problems, but that's unlikely.

It could turn out that the only commonality is the property naming
convention, but that's still better than two independent solutions.

I know you each want to just fix your issues, but the fact that DT
doesn't model the DMA side of the bus structure has been an issue at
least since the start of DT on ARM. Either we should address this in a
flexible way or we can just continue to manage without. So I'm not
inclined to take something that only addresses one SoC family.

Rob
Maxime Ripard Aug. 27, 2018, 3:08 p.m. UTC | #6
Hi!

On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
> Hi Maxime,
> 
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> > 
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing the
> >>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>> don't need multiple ways to address describing the device to memory
> >>> paths, so you all had better work out a common solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I see
> >> MBUS as interconnect provider and display/camera as consumers, that
> >> report their bandwidth needs. I am also planning to add support for
> >> priority.
> > 
> > Thanks for working on this. After looking at your serie, the one thing
> > I'm a bit uncertain about (and the most important one to us) is how we
> > would be able to tell through which interconnect the DMA are done.
> > 
> > This is important to us since our topology is actually quite simple as
> > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > so we need to apply an offset to each buffer being DMA'd.
> 
> Ok, i see - your problem is not about bandwidth scaling but about using
> different memory ranges by the driver to access the same location.

Well, it turns out that the problem we are bitten by at the moment is
the memory range one, but the controller it goes through also provides
bandwidth scaling, priorities and so on, so it's not too far off.

> So this is not really the same and your problem is different. Also
> the interconnect bindings are describing a path and
> endpoints. However i am open to any ideas.

It's describing a path and endpoints, but it can describe multiple of
them for the same device, right? If so, we'd need to provide
additional information to distinguish which path is used for DMA.

Maxime
Maxime Ripard Aug. 27, 2018, 3:11 p.m. UTC | #7
On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >
> > Hi Maxime,
> >
> > On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > > Hi Georgi,
> > >
> > > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> > >>> There is also a patch series from Maxime Ripard that's addressing the
> > >>> same general area. See "dt-bindings: Add a dma-parent property". We
> > >>> don't need multiple ways to address describing the device to memory
> > >>> paths, so you all had better work out a common solution.
> > >>
> > >> Looks like this fits exactly into the interconnect API concept. I see
> > >> MBUS as interconnect provider and display/camera as consumers, that
> > >> report their bandwidth needs. I am also planning to add support for
> > >> priority.
> > >
> > > Thanks for working on this. After looking at your serie, the one thing
> > > I'm a bit uncertain about (and the most important one to us) is how we
> > > would be able to tell through which interconnect the DMA are done.
> > >
> > > This is important to us since our topology is actually quite simple as
> > > you've seen, but the RAM is not mapped on that bus and on the CPU's,
> > > so we need to apply an offset to each buffer being DMA'd.
> >
> > Ok, i see - your problem is not about bandwidth scaling but about using
> > different memory ranges by the driver to access the same location. So
> > this is not really the same and your problem is different. Also the
> > interconnect bindings are describing a path and endpoints. However i am
> > open to any ideas.
> 
> It may be different things you need, but both are related to the path
> between a bus master and memory. We can't have each 'problem'
> described in a different way. Well, we could as long as each platform
> has different problems, but that's unlikely.
> 
> It could turn out that the only commonality is property naming
> convention, but that's still better than 2 independent solutions.

Yeah, I really don't think the two issues are unrelated. Can we maybe
have a particular interconnect-names value to mark the interconnect
being used to perform DMA?

> I know you each want to just fix your issues, but the fact that DT
> doesn't model the DMA side of the bus structure has been an issue at
> least since the start of DT on ARM. Either we should address this in a
> flexible way or we can just continue to manage without. So I'm not
> inclined to take something that only addresses one SoC family.

I'd really like to have it addressed. We're getting bit by this, and
the hacks we have don't work well anymore.

Maxime
Georgi Djakov Aug. 29, 2018, 12:31 p.m. UTC | #8
Hi Maxime,

On 08/27/2018 06:08 PM, Maxime Ripard wrote:
> Hi!
> 
> On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
>> Hi Maxime,
>>
>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>> Hi Georgi,
>>>
>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>> don't need multiple ways to address describing the device to memory
>>>>> paths, so you all had better work out a common solution.
>>>>
>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>> MBUS as interconnect provider and display/camera as consumers, that
>>>> report their bandwidth needs. I am also planning to add support for
>>>> priority.
>>>
>>> Thanks for working on this. After looking at your serie, the one thing
>>> I'm a bit uncertain about (and the most important one to us) is how we
>>> would be able to tell through which interconnect the DMA are done.
>>>
>>> This is important to us since our topology is actually quite simple as
>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
>>> so we need to apply an offset to each buffer being DMA'd.
>>
>> Ok, i see - your problem is not about bandwidth scaling but about using
>> different memory ranges by the driver to access the same location.
> 
> Well, it turns out that the problem we are bitten by at the moment is
> the memory range one, but the controller it goes through also provides
> bandwidth scaling, priorities and so on, so it's not too far off.

Thanks for the clarification. Alright, so this will fit nicely into the
model as a provider. I agree that we should try to use the same binding
to describe a path from a master to memory in DT.

>> So this is not really the same and your problem is different. Also
>> the interconnect bindings are describing a path and
>> endpoints. However i am open to any ideas.
> 
> It's describing a path and endpoints, but it can describe multiple of
> them for the same device, right? If so, we'd need to provide
> additional information to distinguish which path is used for DMA.

Sure, multiple paths are supported.

BR,
Georgi
Georgi Djakov Aug. 29, 2018, 12:33 p.m. UTC | #9
Hi Rob and Maxime,

On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
>> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
>>>
>>> Hi Maxime,
>>>
>>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
>>>> Hi Georgi,
>>>>
>>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
>>>>>> There is also a patch series from Maxime Ripard that's addressing the
>>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
>>>>>> don't need multiple ways to address describing the device to memory
>>>>>> paths, so you all had better work out a common solution.
>>>>>
>>>>> Looks like this fits exactly into the interconnect API concept. I see
>>>>> MBUS as interconnect provider and display/camera as consumers, that
>>>>> report their bandwidth needs. I am also planning to add support for
>>>>> priority.
>>>>
>>>> Thanks for working on this. After looking at your serie, the one thing
>>>> I'm a bit uncertain about (and the most important one to us) is how we
>>>> would be able to tell through which interconnect the DMA are done.
>>>>
>>>> This is important to us since our topology is actually quite simple as
>>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
>>>> so we need to apply an offset to each buffer being DMA'd.
>>>
>>> Ok, i see - your problem is not about bandwidth scaling but about using
>>> different memory ranges by the driver to access the same location. So
>>> this is not really the same and your problem is different. Also the
>>> interconnect bindings are describing a path and endpoints. However i am
>>> open to any ideas.
>>
>> It may be different things you need, but both are related to the path
>> between a bus master and memory. We can't have each 'problem'
>> described in a different way. Well, we could as long as each platform
>> has different problems, but that's unlikely.
>>
>> It could turn out that the only commonality is property naming
>> convention, but that's still better than 2 independent solutions.
> 
> Yeah, I really don't think the two issues are unrelated. Can we maybe
> have a particular interconnect-names value to mark the interconnect
> being used to perform DMA?

We can call one of the paths "dma" and use it to perform DMA for the
current device. I don't see a problem with this. The name of the path is
descriptive and makes sense, and by doing so we avoid adding more DT
properties, which would be another option.

This also makes me think that it might be a good idea to have a standard
name for the path to memory, as I expect some people will call it "mem",
others "ddr", etc.

Thanks,
Georgi

>> I know you each want to just fix your issues, but the fact that DT
>> doesn't model the DMA side of the bus structure has been an issue at
>> least since the start of DT on ARM. Either we should address this in a
>> flexible way or we can just continue to manage without. So I'm not
>> inclined to take something that only addresses one SoC family.
> 
> I'd really like to have it addressed. We're getting bit by this, and
> the hacks we have don't work well anymore.
Maxime Ripard Aug. 30, 2018, 7:47 a.m. UTC | #10
Hi,

On Wed, Aug 29, 2018 at 03:33:29PM +0300, Georgi Djakov wrote:
> On 08/27/2018 06:11 PM, Maxime Ripard wrote:
> > On Fri, Aug 24, 2018 at 10:35:23AM -0500, Rob Herring wrote:
> >> On Fri, Aug 24, 2018 at 9:51 AM Georgi Djakov <georgi.djakov@linaro.org> wrote:
> >>>
> >>> Hi Maxime,
> >>>
> >>> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> >>>> Hi Georgi,
> >>>>
> >>>> On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>>>>> There is also a patch series from Maxime Ripard that's addressing the
> >>>>>> same general area. See "dt-bindings: Add a dma-parent property". We
> >>>>>> don't need multiple ways to address describing the device to memory
> >>>>>> paths, so you all had better work out a common solution.
> >>>>>
> >>>>> Looks like this fits exactly into the interconnect API concept. I see
> >>>>> MBUS as interconnect provider and display/camera as consumers, that
> >>>>> report their bandwidth needs. I am also planning to add support for
> >>>>> priority.
> >>>>
> >>>> Thanks for working on this. After looking at your serie, the one thing
> >>>> I'm a bit uncertain about (and the most important one to us) is how we
> >>>> would be able to tell through which interconnect the DMA are done.
> >>>>
> >>>> This is important to us since our topology is actually quite simple as
> >>>> you've seen, but the RAM is not mapped on that bus and on the CPU's,
> >>>> so we need to apply an offset to each buffer being DMA'd.
> >>>
> >>> Ok, i see - your problem is not about bandwidth scaling but about using
> >>> different memory ranges by the driver to access the same location. So
> >>> this is not really the same and your problem is different. Also the
> >>> interconnect bindings are describing a path and endpoints. However i am
> >>> open to any ideas.
> >>
> >> It may be different things you need, but both are related to the path
> >> between a bus master and memory. We can't have each 'problem'
> >> described in a different way. Well, we could as long as each platform
> >> has different problems, but that's unlikely.
> >>
> >> It could turn out that the only commonality is property naming
> >> convention, but that's still better than 2 independent solutions.
> > 
> > Yeah, I really don't think the two issues are unrelated. Can we maybe
> > have a particular interconnect-names value to mark the interconnect
> > being used to perform DMA?
> 
> We can call one of the paths "dma" and use it to perform DMA for the
> current device. I don't see a problem with this. The name of the path is
> descriptive and makes sense. And by doing we avoid adding more DT
> properties, which would be an other option.

That works for me. If Rob is fine with it too, I'll send an updated
version of my series based on yours.

Thanks!
Maxime

Patch

diff --git a/Documentation/devicetree/bindings/interconnect/interconnect.txt b/Documentation/devicetree/bindings/interconnect/interconnect.txt
new file mode 100644
index 000000000000..6e2b2971b094
--- /dev/null
+++ b/Documentation/devicetree/bindings/interconnect/interconnect.txt
@@ -0,0 +1,33 @@ 
+Interconnect Provider Device Tree Bindings
+==========================================
+
+The purpose of this document is to define a common set of generic interconnect
+provider/consumer properties.
+
+
+= interconnect providers =
+
+The interconnect provider binding is intended to represent the interconnect
+controllers in the system. Each provider registers a set of interconnect
+nodes, which expose the interconnect-related capabilities of the interconnect
+to consumer drivers. These capabilities can be throughput, latency, priority,
+etc. The consumer drivers set constraints on an interconnect path (or
+endpoints) depending on the use case. Interconnect providers can also be
+interconnect consumers, such as in the case where two network-on-chip fabrics
+interface directly.
+
+Required properties:
+- compatible : contains the interconnect provider compatible string
+- #interconnect-cells : number of cells in an interconnect specifier needed to
+			encode the interconnect node id
+
+Example:
+
+		snoc: snoc@580000 {
+			compatible = "qcom,msm8916-snoc";
+			#interconnect-cells = <1>;
+			reg = <0x580000 0x14000>;
+			clock-names = "bus_clk", "bus_a_clk";
+			clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+				 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+		};