[v6,00/12] ARM/MIPS: DTS: add child nodes describing the PVRSGX GPU present in some OMAP SoC and JZ4780 (and many more)

Message ID: cover.1586939718.git.hns@goldelico.com

Message

H. Nikolaus Schaller April 15, 2020, 8:35 a.m. UTC
* rebased to v5.7-rc1
* added DTS for a31, a31s, a83t - by Philipp Rossak <embed3d@gmail.com>
* added DTS for "samsung,s5pv210-sgx540-120" - by Jonathan Bakker <xc-racer2@live.ca>
* bindings.yaml fixes:
  - added a31, a31s
  - fixes for omap4470
  - jz4780 contains an sgx540-130 and not -120
  - a83t contains an sgx544-115 and not -116
  - removed "additionalProperties: false" because some SoC may need additional properties

PATCH V5 2020-03-29 19:38:32:
* reworked YAML bindings to pass dt_binding_check and be better grouped
* rename all nodes to "gpu: gpu@<address>"
* removed "img,sgx5" from example - suggested by Rob Herring <robh+dt@kernel.org>

PATCH V4 2019-12-17 19:02:11:
* MIPS: DTS: jz4780: removed "img,sgx5" from bindings
* YAML bindings: updated according to suggestions by Rob Herring
* MIPS: DTS: jz4780: insert-sorted gpu node by register address - suggested by Paul Cercueil

PATCH V3 2019-11-24 12:40:33:
* reworked YAML format with help by Rob Herring
* removed .txt binding document
* change compatible "ti,am335x-sgx" to "ti,am3352-sgx" - suggested by Tony Lindgren

PATCH V2 2019-11-07 12:06:17:
* tried to convert bindings to YAML format - suggested by Rob Herring
* added JZ4780 DTS node (proven to load the driver)
* removed timer and img,cores properties until we know we really need them - suggested by Rob Herring

PATCH V1 2019-10-18 20:46:35:

This patch series defines child nodes for the SGX5xx interface inside
different SoCs so that a driver can be found and probed via the
compatible strings and can retrieve information about the SGX revision
that is included in a specific SoC. It also defines the interrupt number
to be used by the SGX driver.
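
As an illustration, such a child node could look roughly like the sketch
below (the reg values and interrupt number are placeholders; the
authoritative compatible strings and values for each SoC are in the
individual patches and in img,pvrsgx.yaml):

  gpu: gpu@f3000000 {
          /* illustrative values only, see the per-SoC patches */
          compatible = "samsung,s5pv210-sgx540-120", "img,sgx540";
          reg = <0xf3000000 0x10000>;        /* placeholder */
          interrupts = <150>;                /* placeholder */
  };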

There is currently no mainline driver for these GPUs, but a project [1]
is ongoing with the goal to get the open-source part as provided by TI/IMG
and others into drivers/gpu/drm/pvrsgx.

The kernel modules built from this project have successfully been demonstrated
to work with the DTS definitions from this patch set on AM335x BeagleBone
Black, DM3730 and OMAP5 Pyra and Droid 4. They partially work on OMAP3530 and
PandaBoard ES, but that is likely a problem in the kernel driver or the
(non-free) user-space libraries and binaries.

Work for JZ4780 (CI20 board) is in progress and there is potential to extend
this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo and CedarView
devices.

[1]: https://github.com/openpvrsgx-devgroup


H. Nikolaus Schaller (8):
  dt-bindings: add img,pvrsgx.yaml for Imagination GPUs
  ARM: DTS: am33xx: add sgx gpu child node
  ARM: DTS: am3517: add sgx gpu child node
  ARM: DTS: omap34xx: add sgx gpu child node
  ARM: DTS: omap36xx: add sgx gpu child node
  ARM: DTS: omap4: add sgx gpu child node
  ARM: DTS: omap5: add sgx gpu child node
  MIPS: DTS: jz4780: add sgx gpu node

Jonathan Bakker (1):
  arm: dts: s5pv210: Add G3D node

Philipp Rossak (3):
  ARM: dts: sun6i: a31: add sgx gpu child node
  ARM: dts: sun6i: a31s: add sgx gpu child node
  ARM: dts: sun8i: a83t: add sgx gpu child node

 .../devicetree/bindings/gpu/img,pvrsgx.yaml   | 122 ++++++++++++++++++
 arch/arm/boot/dts/am33xx.dtsi                 |  11 +-
 arch/arm/boot/dts/am3517.dtsi                 |   9 +-
 arch/arm/boot/dts/omap34xx.dtsi               |  11 +-
 arch/arm/boot/dts/omap36xx.dtsi               |   9 +-
 arch/arm/boot/dts/omap4.dtsi                  |  11 +-
 arch/arm/boot/dts/omap4470.dts                |  15 +++
 arch/arm/boot/dts/omap5.dtsi                  |  11 +-
 arch/arm/boot/dts/s5pv210.dtsi                |  15 +++
 arch/arm/boot/dts/sun6i-a31.dtsi              |  11 ++
 arch/arm/boot/dts/sun6i-a31s.dtsi             |  10 ++
 arch/arm/boot/dts/sun8i-a83t.dtsi             |  11 ++
 arch/mips/boot/dts/ingenic/jz4780.dtsi        |  11 ++
 13 files changed, 229 insertions(+), 28 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/gpu/img,pvrsgx.yaml
 create mode 100644 arch/arm/boot/dts/omap4470.dts

Comments

Maxime Ripard April 15, 2020, 10:10 a.m. UTC | #1
Hi,

On Wed, Apr 15, 2020 at 10:35:07AM +0200, H. Nikolaus Schaller wrote:
> * rebased to v5.7-rc1
> * added DTS for a31, a31s, a83t - by Philipp Rossak <embed3d@gmail.com>
> * added DTS for "samsung,s5pv210-sgx540-120" - by Jonathan Bakker <xc-racer2@live.ca>
> * bindings.yaml fixes:
>   - added a31, a31s
>   - fixes for omap4470
>   - jz4780 contains an sgx540-130 and not -120
>   - a83t contains an sgx544-115 and not -116
>   - removed "additionalProperties: false" because some SoCs may need additional properties
>
> PATCH V5 2020-03-29 19:38:32:
> * reworked YAML bindings to pass dt_binding_check and be better grouped
> * rename all nodes to "gpu: gpu@<address>"
> * removed "img,sgx5" from example - suggested by Rob Herring <robh+dt@kernel.org>
>
> PATCH V4 2019-12-17 19:02:11:
> * MIPS: DTS: jz4780: removed "img,sgx5" from bindings
> * YAML bindings: updated according to suggestions by Rob Herring
> * MIPS: DTS: jz4780: insert-sorted gpu node by register address - suggested by Paul Cercueil
>
> PATCH V3 2019-11-24 12:40:33:
> * reworked YAML format with help by Rob Herring
> * removed .txt binding document
> * change compatible "ti,am335x-sgx" to "ti,am3352-sgx" - suggested by Tony Lindgren
>
> PATCH V2 2019-11-07 12:06:17:
> * tried to convert bindings to YAML format - suggested by Rob Herring
> * added JZ4780 DTS node (proven to load the driver)
> * removed timer and img,cores properties until we know we really need them - suggested by Rob Herring
>
> PATCH V1 2019-10-18 20:46:35:
>
> This patch series defines child nodes for the SGX5xx interface inside
> different SoCs so that a driver can be found and probed via the
> compatible strings and can retrieve information about the SGX revision
> that is included in a specific SoC. It also defines the interrupt number
> to be used by the SGX driver.
>
> There is currently no mainline driver for these GPUs, but a project
> [1] is ongoing with the goal to get the open-source part as provided
> by TI/IMG and others into drivers/gpu/drm/pvrsgx.

Just a heads up, DRM requires an open-source user-space, so if your
plan is to move the open-source kernel driver while using the
closed-source library (as that page seems to suggest), that might
change a few things.

> The kernel modules built from this project have successfully been
> demonstrated to work with the DTS definitions from this patch set on
> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
> partially work on OMAP3530 and PandaBoard ES, but that is likely a
> problem in the kernel driver or the (non-free) user-space libraries
> and binaries.
>
> Work for JZ4780 (CI20 board) is in progress and there is potential
> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
> and CedarView devices.

If it's not been tested on any Allwinner board yet, I'll leave it
aside until it's been properly shown to work.

Maxime
H. Nikolaus Schaller April 15, 2020, 12:41 p.m. UTC | #2
Hi Maxime,

> On 15.04.2020 at 12:10, Maxime Ripard <maxime@cerno.tech> wrote:
> 
> Hi,
> 
> On Wed, Apr 15, 2020 at 10:35:07AM +0200, H. Nikolaus Schaller wrote:
>> * rebased to v5.7-rc1
>> * added DTS for a31, a31s, a83t - by Philipp Rossak <embed3d@gmail.com>
>> * added DTS for "samsung,s5pv210-sgx540-120" - by Jonathan Bakker <xc-racer2@live.ca>
>> * bindings.yaml fixes:
>>  - added a31, a31s
>>  - fixes for omap4470
>>  - jz4780 contains an sgx540-130 and not -120
>>  - a83t contains an sgx544-115 and not -116
>>  - removed "additionalProperties: false" because some SoCs may need additional properties
>> 
>> PATCH V5 2020-03-29 19:38:32:
>> * reworked YAML bindings to pass dt_binding_check and be better grouped
>> * rename all nodes to "gpu: gpu@<address>"
>> * removed "img,sgx5" from example - suggested by Rob Herring <robh+dt@kernel.org>
>> 
>> PATCH V4 2019-12-17 19:02:11:
>> * MIPS: DTS: jz4780: removed "img,sgx5" from bindings
>> * YAML bindings: updated according to suggestions by Rob Herring
>> * MIPS: DTS: jz4780: insert-sorted gpu node by register address - suggested by Paul Cercueil
>> 
>> PATCH V3 2019-11-24 12:40:33:
>> * reworked YAML format with help by Rob Herring
>> * removed .txt binding document
>> * change compatible "ti,am335x-sgx" to "ti,am3352-sgx" - suggested by Tony Lindgren
>> 
>> PATCH V2 2019-11-07 12:06:17:
>> * tried to convert bindings to YAML format - suggested by Rob Herring
>> * added JZ4780 DTS node (proven to load the driver)
>> * removed timer and img,cores properties until we know we really need them - suggested by Rob Herring
>> 
>> PATCH V1 2019-10-18 20:46:35:
>> 
>> This patch series defines child nodes for the SGX5xx interface inside
>> different SoCs so that a driver can be found and probed via the
>> compatible strings and can retrieve information about the SGX revision
>> that is included in a specific SoC. It also defines the interrupt number
>> to be used by the SGX driver.
>> 
>> There is currently no mainline driver for these GPUs, but a project
>> [1] is ongoing with the goal to get the open-source part as provided
>> by TI/IMG and others into drivers/gpu/drm/pvrsgx.
> 
> Just a heads up, DRM requires an open-source user-space, so if your
> plan is to move the open-source kernel driver while using the
> closed-source library (as that page seems to suggest), that might
> change a few things.

The far future goal is to arrive at a completely open implementation,
but nobody knows yet how to get there. Therefore we are taking smaller steps :)

step 1: get SoC integration right and stable (this is what this series is for)
step 2: make the open source kernel driver work with closed-source libs
step 3: write open-source replacements for user-space

> 
>> The kernel modules built from this project have successfully been
>> demonstrated to work with the DTS definitions from this patch set on
>> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
>> partially work on OMAP3530 and PandaBoard ES, but that is likely a
>> problem in the kernel driver or the (non-free) user-space libraries
>> and binaries.
>> 
>> Work for JZ4780 (CI20 board) is in progress and there is potential
>> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
>> and CedarView devices.
> 
> If it's not been tested on any Allwinner board yet, I'll leave it
> aside until it's been properly shown to work.

Philipp has tested something on the A83T.

BR and thanks,
Nikolaus
Daniel Vetter April 15, 2020, 12:46 p.m. UTC | #3
On Wed, Apr 15, 2020 at 02:41:52PM +0200, H. Nikolaus Schaller wrote:
> Hi Maxime,
> 
> > On 15.04.2020 at 12:10, Maxime Ripard <maxime@cerno.tech> wrote:
> > 
> > Hi,
> > 
> > On Wed, Apr 15, 2020 at 10:35:07AM +0200, H. Nikolaus Schaller wrote:
> >> * rebased to v5.7-rc1
> >> * added DTS for a31, a31s, a83t - by Philipp Rossak <embed3d@gmail.com>
> >> * added DTS for "samsung,s5pv210-sgx540-120" - by Jonathan Bakker <xc-racer2@live.ca>
> >> * bindings.yaml fixes:
> >>  - added a31, a31s
> >>  - fixes for omap4470
> >>  - jz4780 contains an sgx540-130 and not -120
> >>  - a83t contains an sgx544-115 and not -116
> >>  - removed "additionalProperties: false" because some SoCs may need additional properties
> >> 
> >> PATCH V5 2020-03-29 19:38:32:
> >> * reworked YAML bindings to pass dt_binding_check and be better grouped
> >> * rename all nodes to "gpu: gpu@<address>"
> >> * removed "img,sgx5" from example - suggested by Rob Herring <robh+dt@kernel.org>
> >> 
> >> PATCH V4 2019-12-17 19:02:11:
> >> * MIPS: DTS: jz4780: removed "img,sgx5" from bindings
> >> * YAML bindings: updated according to suggestions by Rob Herring
> >> * MIPS: DTS: jz4780: insert-sorted gpu node by register address - suggested by Paul Cercueil
> >> 
> >> PATCH V3 2019-11-24 12:40:33:
> >> * reworked YAML format with help by Rob Herring
> >> * removed .txt binding document
> >> * change compatible "ti,am335x-sgx" to "ti,am3352-sgx" - suggested by Tony Lindgren
> >> 
> >> PATCH V2 2019-11-07 12:06:17:
> >> * tried to convert bindings to YAML format - suggested by Rob Herring
> >> * added JZ4780 DTS node (proven to load the driver)
> >> * removed timer and img,cores properties until we know we really need them - suggested by Rob Herring
> >> 
> >> PATCH V1 2019-10-18 20:46:35:
> >> 
> >> This patch series defines child nodes for the SGX5xx interface inside
> >> different SoCs so that a driver can be found and probed via the
> >> compatible strings and can retrieve information about the SGX revision
> >> that is included in a specific SoC. It also defines the interrupt number
> >> to be used by the SGX driver.
> >> 
> >> There is currently no mainline driver for these GPUs, but a project
> >> [1] is ongoing with the goal to get the open-source part as provided
> >> by TI/IMG and others into drivers/gpu/drm/pvrsgx.
> > 
> > Just a heads up, DRM requires an open-source user-space, so if your
> > plan is to move the open-source kernel driver while using the
> > closed-source library (as that page seems to suggest), that might
> > change a few things.
> 
> The far future goal is to arrive at a completely open implementation,
> but nobody knows yet how to get there. Therefore we are taking smaller steps :)
> 
> step 1: get SoC integration right and stable (this is what this series is for)
> step 2: make the open source kernel driver work with closed-source libs
> step 3: write open-source replacements for user-space

step 4: clean up the kernel driver
step 5: get the mesa driver and kernel driver reviewed
step 6: get it all merged

It's a very long road, but awesome to hear that someone is taking care of
pvrsgx. And I'm totally fine with landing stuff like you propose in step
1. Just not the driver/uapi itself.

Good luck and have fun!

Cheers, Daniel

> 
> > 
> >> The kernel modules built from this project have successfully been
> >> demonstrated to work with the DTS definitions from this patch set on
> >> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
> >> partially work on OMAP3530 and PandaBoard ES, but that is likely a
> >> problem in the kernel driver or the (non-free) user-space libraries
> >> and binaries.
> >> 
> >> Work for JZ4780 (CI20 board) is in progress and there is potential
> >> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
> >> and CedarView devices.
> > 
> > If it's not been tested on any Allwinner board yet, I'll leave it
> > aside until it's been properly shown to work.
> 
> Philipp has tested something on the A83T.
> 
> BR and thanks,
> Nikolaus
Maxime Ripard April 15, 2020, 1:02 p.m. UTC | #4
On Wed, Apr 15, 2020 at 02:41:52PM +0200, H. Nikolaus Schaller wrote:
> >> The kernel modules built from this project have successfully been
> >> demonstrated to work with the DTS definitions from this patch set on
> >> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
> >> partially work on OMAP3530 and PandaBoard ES, but that is likely a
> >> problem in the kernel driver or the (non-free) user-space libraries
> >> and binaries.
> >>
> >> Work for JZ4780 (CI20 board) is in progress and there is potential
> >> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
> >> and CedarView devices.
> >
> > If it's not been tested on any Allwinner board yet, I'll leave it
> > aside until it's been properly shown to work.
>
> Philipp has tested something on the A83T.

I'm a bit skeptical on that one since it doesn't even list the
interrupts connected to the GPU that the binding mandates.

Maxime
H. Nikolaus Schaller April 15, 2020, 1:04 p.m. UTC | #5
> On 15.04.2020 at 15:02, Maxime Ripard <maxime@cerno.tech> wrote:
> 
> On Wed, Apr 15, 2020 at 02:41:52PM +0200, H. Nikolaus Schaller wrote:
>>>> The kernel modules built from this project have successfully been
>>>> demonstrated to work with the DTS definitions from this patch set on
>>>> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
>>>> partially work on OMAP3530 and PandaBoard ES, but that is likely a
>>>> problem in the kernel driver or the (non-free) user-space libraries
>>>> and binaries.
>>>> 
>>>> Work for JZ4780 (CI20 board) is in progress and there is potential
>>>> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
>>>> and CedarView devices.
>>> 
>>> If it's not been tested on any Allwinner board yet, I'll leave it
>>> aside until it's been properly shown to work.
>> 
>> Philipp has tested something on the A83T.
> 
> I'm a bit skeptical on that one since it doesn't even list the
> interrupts connected to the GPU that the binding mandates.

I think he left it out for a future update.
But best he comments himself.

BR and thanks,
Nikolaus
Philipp Rossak April 17, 2020, 12:09 p.m. UTC | #6
Hi all,

On 15.04.20 15:04, H. Nikolaus Schaller wrote:
> 
>> On 15.04.2020 at 15:02, Maxime Ripard <maxime@cerno.tech> wrote:
>>
>> On Wed, Apr 15, 2020 at 02:41:52PM +0200, H. Nikolaus Schaller wrote:
>>>>> The kernel modules built from this project have successfully been
>>>>> demonstrated to work with the DTS definitions from this patch set on
>>>>> AM335x BeagleBone Black, DM3730 and OMAP5 Pyra and Droid 4. They
>>>>> partially work on OMAP3530 and PandaBoard ES, but that is likely a
>>>>> problem in the kernel driver or the (non-free) user-space libraries
>>>>> and binaries.
>>>>>
>>>>> Work for JZ4780 (CI20 board) is in progress and there is potential
>>>>> to extend this work to e.g. BananaPi-M3 (A83T) and some Intel Poulsbo
>>>>> and CedarView devices.
>>>>
>>>> If it's not been tested on any Allwinner board yet, I'll leave it
>>>> aside until it's been properly shown to work.
>>>
>>> Philipp has tested something on the A83T.
>>
Yes, I'm currently working on the A83T demo. The kernel module is loading
correctly and the clocks, interrupts and resets seem to be working
correctly.

I'm currently working on getting the user-space driver working with the
kernel driver. This will hopefully be done soon.

>> I'm a bit skeptical on that one since it doesn't even list the
>> interrupts connected to the GPU that the binding mandates.
> 
> I think he left it out for a future update.
> But best he comments himself.

I'm currently working on those bindings. They are about 90% done, but
not finished yet. Currently some mainline support is missing to add the
full binding: the A83T and also the A31/A31s have a GPU Power Off Gating
Register in the R_PRCM module that is not supported in mainline right
now. The register needs to be written when the GPU is powered on and
off.

@Maxime: I totally agree with your point that a demo needs to be provided
before the related DTS patches are submitted. That's the reason why
I added the gpu placeholder patches.
Do you have an idea what a driver for the R_PRCM stuff could look like? I'm
not that experienced with the clock driver framework.

The big question right now is how to proceed with the A83T and A31s
patches. I see three options; which one do you prefer?

1. Provide placeholder patches now and send new patches once everything
is clear and the other things are mainlined
2. Provide patches now that are as complete as possible and provide later
patches to complete them when the R_PRCM things are mainlined
3. Leave them out until the related work is mainlined and the bindings
are final.


Since this GPU IP core is very flexible and the SoC manufacturers can
configure it to their needs, I think the binding will be extended in the
future. For example, the SGX544 GPU is available in different
configurations: there is an SGX544 core and an SGX544MPx core. The x stands
for the number of USSE (Universal Scalable Shader Engine) cores. For
example, the GPU in the A83T is an MP1 and the one in the A31/A31s an MP2.
In addition to that, some of the GPUs also have a 2D engine.
There might be even more differences between the GPUs that we don't know
about right now and that should be described in the devicetree, but that's
a different topic that we should keep in mind.

Cheers
Philipp
Maxime Ripard April 20, 2020, 7:38 a.m. UTC | #7
Hi,

On Fri, Apr 17, 2020 at 02:09:06PM +0200, Philipp Rossak wrote:
> > > I'm a bit skeptical on that one since it doesn't even list the
> > > interrupts connected to the GPU that the binding mandates.
> > 
> > I think he left it out for a future update.
> > But best he comments himself.
> 
> I'm currently working on those bindings. They are about 90% done, but
> not finished yet. Currently some mainline support is missing to add the
> full binding: the A83T and also the A31/A31s have a GPU Power Off Gating
> Register in the R_PRCM module that is not supported in mainline right
> now. The register needs to be written when the GPU is powered on and
> off.
> 
> @Maxime: I totally agree with your point that a demo needs to be provided
> before the related DTS patches are submitted. That's the reason why I
> added the gpu placeholder patches.
> Do you have an idea what a driver for the R_PRCM stuff could look like? I'm
> not that experienced with the clock driver framework.

It looks like a power-domain to me, so you'd rather plug that into the genpd
framework.
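
To illustrate the idea in DT terms (a hypothetical sketch only; the
compatible string, register values and cell count are invented here and
do not correspond to an existing binding):

  r_prcm: power-controller@1f01400 {
          compatible = "allwinner,sun8i-a83t-prcm";    /* hypothetical */
          reg = <0x01f01400 0x400>;                    /* placeholder */
          #power-domain-cells = <1>;
  };

  gpu: gpu@1c40000 {
          /* ... */
          power-domains = <&r_prcm 0>;    /* GPU power-off gating */
  };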

> The big question right now is how to proceed with the A83T and A31s patches.
> I see three options; which one do you prefer?
> 
> 1. Provide placeholder patches now and send new patches once everything
> is clear and the other things are mainlined
> 2. Provide patches now that are as complete as possible and provide later
> patches to complete them when the R_PRCM things are mainlined
> 3. Leave them out until the related work is mainlined and the bindings are
> final.

Like I said, the DT *has* to be backward-compatible, so for any DT patch that
you are asking to be merged, you should be prepared to support it indefinitely
and be able to run from it, and you won't be able to change the bindings later
on.

> Since this GPU IP core is very flexible and the SoC manufacturers can
> configure it to their needs, I think the binding will be extended in the
> future. For example, the SGX544 GPU is available in different configurations:
> there is an SGX544 core and an SGX544MPx core. The x stands for the number of
> USSE (Universal Scalable Shader Engine) cores. For example, the GPU in the
> A83T is an MP1 and the one in the A31/A31s an MP2.

Mali is in the same situation and it didn't cause much trouble.

> In addition to that, some of the GPUs also have a 2D engine.

In the same memory region, running from the same interrupts, or is it a
completely separate IP that happens to be sold by the same vendor?

> There might be even more differences between the GPUs that we don't know
> about right now and that should be described in the devicetree, but that's
> a different topic that we should keep in mind.

Like I said, it's not a completely different topic.

Maxime
Philipp Rossak April 21, 2020, 9:57 a.m. UTC | #8
Hi,

On 20.04.20 09:38, Maxime Ripard wrote:
> Hi,
> 
> On Fri, Apr 17, 2020 at 02:09:06PM +0200, Philipp Rossak wrote:
>>>> I'm a bit skeptical on that one since it doesn't even list the
>>>> interrupts connected to the GPU that the binding mandates.
>>>
>>> I think he left it out for a future update.
>>> But best he comments himself.
>>
>> I'm currently working on those bindings. They are about 90% done, but
>> not finished yet. Currently some mainline support is missing to add the
>> full binding: the A83T and also the A31/A31s have a GPU Power Off Gating
>> Register in the R_PRCM module that is not supported in mainline right
>> now. The register needs to be written when the GPU is powered on and
>> off.
>>
>> @Maxime: I totally agree with your point that a demo needs to be provided
>> before the related DTS patches are submitted. That's the reason why I
>> added the gpu placeholder patches.
>> Do you have an idea what a driver for the R_PRCM stuff could look like? I'm
>> not that experienced with the clock driver framework.
> 
> It looks like a power-domain to me, so you'd rather plug that into the genpd
> framework.

I had a look at genpd and I'm not really sure if that fits.

It is basically a single bit that controls whether the clocks should be
enabled or disabled. I think this is better placed somewhere in the clock
framework.
I see more similarities to the clock gating there.
Do you think it is suitable to implement it like the clock gating?


>> The big question right now is how to proceed with the A83T and A31s patches.
>> I see three options; which one do you prefer?
>>
>> 1. Provide placeholder patches now and send new patches once everything is
>> clear and the other things are mainlined
>> 2. Provide patches now that are as complete as possible and provide later
>> patches to complete them when the R_PRCM things are mainlined
>> 3. Leave them out until the related work is mainlined and the bindings are
>> final.
> 
> Like I said, the DT *has* to be backward-compatible, so for any DT patch that
> you are asking to be merged, you should be prepared to support it indefinitely
> and be able to run from it, and you won't be able to change the bindings later
> on.

I agree with your points. But does this also apply to drivers that are
currently out of tree and might be merged in one or two years?

>> Since this GPU IP core is very flexible and the SoC manufacturers can
>> configure it to their needs, I think the binding will be extended in the
>> future. For example, the SGX544 GPU is available in different configurations:
>> there is an SGX544 core and an SGX544MPx core. The x stands for the number of
>> USSE (Universal Scalable Shader Engine) cores. For example, the GPU in the
>> A83T is an MP1 and the one in the A31/A31s an MP2.
> 
> Mali is in the same situation and it didn't cause much trouble.
> 
Good to know.

>> In addition to that, some of the GPUs also have a 2D engine.
> 
> In the same memory region, running from the same interrupts, or is it a
> completely separate IP that happens to be sold by the same vendor?
> 
As far as I know, this is part of the PowerVR IP and not separate.
So it should use the same memory region, clocks and interrupts.

Cheers
Philipp
Maxime Ripard April 21, 2020, 11:21 a.m. UTC | #9
Hi,

On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
> On 20.04.20 09:38, Maxime Ripard wrote:
> > Hi,
> > 
> > On Fri, Apr 17, 2020 at 02:09:06PM +0200, Philipp Rossak wrote:
> > > > > I'm a bit skeptical on that one since it doesn't even list the
> > > > > interrupts connected to the GPU that the binding mandates.
> > > > 
> > > > I think he left it out for a future update.
> > > > But best he comments himself.
> > > 
> > > I'm currently working on those bindings. They are about 90% done, but
> > > not finished yet. Currently some mainline support is missing to add the
> > > full binding: the A83T and also the A31/A31s have a GPU Power Off Gating
> > > Register in the R_PRCM module that is not supported in mainline right
> > > now. The register needs to be written when the GPU is powered on and
> > > off.
> > > 
> > > @Maxime: I totally agree with your point that a demo needs to be provided
> > > before the related DTS patches are submitted. That's the reason why I
> > > added the gpu placeholder patches.
> > > Do you have an idea what a driver for the R_PRCM stuff could look like? I'm
> > > not that experienced with the clock driver framework.
> > 
> > It looks like a power-domain to me, so you'd rather plug that into the genpd
> > framework.
> 
> I had a look at genpd and I'm not really sure if that fits.
> 
> It is basically a single bit that controls whether the clocks should be
> enabled or disabled.

No, it can do much more than that. It's a framework to control the SoC's power
domains, so clocks might be a part of it, but most of the time it's going to be
about powering up a particular device.

> I think this is better placed somewhere in the clock framework.
> I see more similarities to the clock gating there.
> Do you think it is suitable to implement it like the clock gating?

I'm really not sure what makes you think that this should be modelled as a
clock?

> > > The big question right now is how to proceed with the A83T and A31s patches.
> > > I see three options; which one do you prefer?
> > > 
> > > 1. Provide placeholder patches now and send new patches once everything is
> > > clear and the other things are mainlined
> > > 2. Provide patches now that are as complete as possible and provide later
> > > patches to complete them when the R_PRCM things are mainlined
> > > 3. Leave them out until the related work is mainlined and the bindings are
> > > final.
> > 
> > Like I said, the DT *has* to be backward-compatible, so for any DT patch that
> > you are asking to be merged, you should be prepared to support it indefinitely
> > and be able to run from it, and you won't be able to change the bindings later
> > on.
> 
> I agree with your points. But does this also apply to drivers that are
> currently out of tree and might be merged in one or two years?

This is what we did for the Mali. The devicetree binding was first done for the
out-of-tree driver, and then lima/panfrost reused it.

The key thing here is to have enough confidence about how the hardware works so
that you can accurately describe it.

Maxime
Tony Lindgren April 21, 2020, 2:15 p.m. UTC | #10
* Maxime Ripard <maxime@cerno.tech> [200421 11:22]:
> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
> > I had a look at genpd and I'm not really sure if that fits.
> > 
> > It is basically a single bit that controls whether the clocks should be
> > enabled or disabled.
> 
> > No, it can do much more than that. It's a framework to control the SoC's power
> domains, so clocks might be a part of it, but most of the time it's going to be
> about powering up a particular device.

Note that on omaps there are actually SoC module specific registers.
And there can be multiple devices within a single target module on
omaps. So the extra dts node and device is justified there.

For other SoCs, the SGX clocks are probably best handled directly
in pvr-drv.c PM runtime functions unless a custom hardware wrapper
with SoC specific registers exists.

Regards,

Tony
Philipp Rossak April 21, 2020, 4:42 p.m. UTC | #11
Hi,

On 21.04.20 13:21, Maxime Ripard wrote:
> Hi,
> 
> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
>> On 20.04.20 09:38, Maxime Ripard wrote:
>>> Hi,
>>>
>>> On Fri, Apr 17, 2020 at 02:09:06PM +0200, Philipp Rossak wrote:
>>>>>> I'm a bit skeptical on that one since it doesn't even list the
>>>>>> interrupts connected to the GPU that the binding mandates.
>>>>>
>>>>> I think he left it out for a future update.
>>>>> But best he comments himself.
>>>>
>>>> I'm currently working on those bindings. They are about 90% done, but
>>>> not finished yet. Currently some mainline support is missing to add the
>>>> full binding: the A83T and also the A31/A31s have a GPU Power Off Gating
>>>> Register in the R_PRCM module that is not supported in mainline right
>>>> now. The register needs to be written when the GPU is powered on and
>>>> off.
>>>>
>>>> @Maxime: I totally agree with your point that a demo needs to be provided
>>>> before the related DTS patches are submitted. That's the reason why I
>>>> added the gpu placeholder patches.
>>>> Do you have an idea what a driver for the R_PRCM stuff could look like? I'm
>>>> not that experienced with the clock driver framework.
>>>
>>> It looks like a power-domain to me, so you'd rather plug that into the genpd
>>> framework.
>>
>> I had a look at genpd and I'm not really sure if that fits.
>>
>> It is basically a single bit that controls whether the clocks should be
>> enabled or disabled.
> 
> No, it can do much more than that. It's a framework to control the SoC's power
> domains, so clocks might be a part of it, but most of the time it's going to be
> about powering up a particular device.
> 
So I think I've now found the right piece of documentation and a driver
that implements something similar [1].

I will write a similar driver to the one linked above that only sets the
right bits for the A83T and A31/A31s.
Do you think this is the right approach?

>> I think this is better placed somewhere in the clock framework.
>> I see more similarities to the clock gating there.
>> Do you think it is suitable to implement it like the clock gating?
> 
> I'm really not sure what makes you think that this should be modelled as a
> clock?
> 

Looks like I looked in the wrong place and got some information that is
not suitable for this.

>>>> The big question right now is how to proceed with the A83T and A31s patches.
>>>> I see three options; which one do you prefer?
>>>>
>>>> 1. Provide placeholder patches now and send new patches once everything
>>>> is clear and the other things are mainlined
>>>> 2. Provide patches now that are as complete as possible and provide later
>>>> patches to complete them when the R_PRCM things are mainlined
>>>> 3. Leave them out until the related work is mainlined and the bindings
>>>> are final.
>>>
>>> Like I said, the DT *has* to be backward-compatible, so for any DT patch that
>>> you are asking to be merged, you should be prepared to support it indefinitely
>>> and be able to run from it, and you won't be able to change the bindings later
>>> on.
>>
>> I agree with your points. But does this also apply to drivers that are
>> currently out of tree and might be merged in one or two years?
> 
> This is what we did for the Mali. The devicetree binding was first done for the
> out-of-tree driver, and then lima/panfrost reused it.
> 
> The key thing here is to have enough confidence about how the hardware works so
> that you can accurately describe it.

OK, thanks! So I will resend my patches when the work has reached a more
mature state and we know enough about the hardware.

Cheers,
Philipp


[1]: https://elixir.bootlin.com/linux/latest/source/drivers/soc/amlogic/meson-gx-pwrc-vpu.c
H. Nikolaus Schaller April 21, 2020, 5:29 p.m. UTC | #12
> On 21.04.2020 at 16:15, Tony Lindgren <tony@atomide.com> wrote:
> 
> * Maxime Ripard <maxime@cerno.tech> [200421 11:22]:
>> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
>>> I had a look at genpd and I'm not really sure if that fits.
>>> 
>>> It is basically a single bit that controls whether the clocks should be
>>> enabled or disabled.
>> 
>> No, it can do much more than that. It's a framework to control the SoC's power
>> domains, so clocks might be a part of it, but most of the time it's going to be
>> about powering up a particular device.
> 
> Note that on omaps there are actually SoC module specific registers.

Ah, I see. The difference is of course that the TI glue logic has
its own registers in the same address range as the sgx, and this can't
be easily handled by a common sgx driver.

This indeed seems to be unique to omap.

> And there can be multiple devices within a single target module on
> omaps. So the extra dts node and device is justified there.
> 
> For other SoCs, the SGX clocks are probably best handled directly
> in pvr-drv.c PM runtime functions unless a custom hardware wrapper
> with SoC specific registers exists.

That is why we need to evaluate what the better strategy is.

So we have
a) omap which has a custom wrapper around the sgx
b) others without, i.e. an empty (or pass-through) wrapper

Which one do we make the "standard" and which one the "exception"?
What are good reasons for either one?


I am currently in strong favour of a) being standard because it
makes the pvr-drv.c simpler and really generic (independent of
wrapping into any SoC).

This will likely avoid problems if we find more SoCs with yet another
scheme for how the SGX clocks are wrapped.

It also allows handling different numbers of clocks (the A31 seems to
need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
or making big lists of conditionals. This variance would be handled
outside the sgx core bindings and driver.

So instead of an img+omap.yaml and an img+a81.yaml and an img+a31.yaml
etc. we have a single img,pvrsgx.yaml and individual wrappers (the omap
one already exists as bindings/bus/ti-sysc.txt).

The only drawback is that we need this "pass-through" wrapper in DTS
and driver code to handle clocks, power etc.
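
For illustration only, such a pass-through wrapper could be sketched with
the existing "simple-pm-bus" binding as a stand-in (all addresses,
phandles and compatibles below are made-up placeholders):

  gpu_module: gpu-module@1c40000 {
          compatible = "simple-pm-bus";        /* hypothetical pass-through wrapper */
          clocks = <&ccu 1>;                   /* placeholder clock phandle */
          #address-cells = <1>;
          #size-cells = <1>;
          ranges = <0 0x01c40000 0x10000>;     /* placeholder */

          gpu: gpu@0 {
                  compatible = "img,sgx544-115", "img,sgx544";    /* placeholder */
                  reg = <0 0x10000>;
          };
  };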


The second best solution in my view is to make b) the standard
and allow the clock(s) to be optional to cover the omap case.
And conditionals are added to properly describe the variance of
how the sgx is wrapped/integrated.


IMHO this is a decision which can not be easily revised later.
It is an architectural decision. So we should base it on strategic
goals.

> 
> 
> Regards,
> 
> Tony
> 

BR and thanks for clarification,
Nikolaus
Tony Lindgren April 21, 2020, 5:39 p.m. UTC | #13
* H. Nikolaus Schaller <hns@goldelico.com> [200421 17:31]:
> > On 21.04.2020 at 16:15, Tony Lindgren <tony@atomide.com> wrote:
> > Note that on omaps there are actually SoC module specific registers.
> 
> Ah, I see. The difference is of course that the TI glue logic has
> its own registers in the same address range as the sgx, and this can't
> be easily handled by a common sgx driver.
> 
> This indeed seems to be unique to omap.
> 
> > And there can be multiple devices within a single target module on
> > omaps. So the extra dts node and device is justified there.
> > 
> > For other SoCs, the SGX clocks are probably best handled directly
> > in pvr-drv.c PM runtime functions unless a custom hardware wrapper
> > with SoC specific registers exists.
> 
> That is why we need to evaluate what the better strategy is.
> 
> So we have
> a) omap which has a custom wrapper around the sgx
> b) others without, i.e. an empty (or pass-through) wrapper
> 
> Which one do we make the "standard" and which one the "exception"?
> What are good reasons for either one?

The wrapper is already handled by the ti-sysc binding; the sgx
binding should be standard with optional clocks.

See for example the standard 8250 uart for am335x with:

$ git grep -B20 -A10 uart0 arch/arm/boot/dts/am33xx-l4.dtsi

The 8250 device configuration is described in the standard 8250
dts binding, and the am335x module in the ti-sysc binding.
They are separate devices :)

So for the sgx binding, you can just leave out the TI-specific
module wrapper completely from the example.
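
For instance, the binding example could then just show the bare gpu node,
e.g. (a sketch; the compatible strings and numbers are placeholders, not
the final binding):

  gpu: gpu@56000000 {
          compatible = "ti,omap5-sgx544-116", "img,sgx544";    /* placeholder */
          reg = <0x56000000 0x2000000>;                        /* placeholder */
          interrupts = <21>;                                   /* placeholder */
  };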

> It also allows handling different numbers of clocks (the A31 seems to
> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
> or making big lists of conditionals. This variance would be handled
> outside the sgx core bindings and driver.

Well if other SoCs implement genpd domains etc, that's then
again part of a separate binding and not part of the sgx binding.

Regards,

Tony
Tony Lindgren April 21, 2020, 5:46 p.m. UTC | #14
* Tony Lindgren <tony@atomide.com> [200421 10:39]:
> See for example the standard 8250 uart for am335x with:
> 
> $ git grep -B20 -A10 uart0 arch/arm/boot/dts/am33xx-l4.dtsi
> 
> The 8250 device configuration is described in the standard 8250
> dts binding, and the am335x module in the ti-sysc binding.
> They are separate devices :)

Just to clarify why it's like that, see for example
arch/arm/boot/dts/am33xx.dtsi, and target-module@47400000
in that file for the musb controller.

There's a single ti-sysc interconnect target module, but it has
multiple devices. There are two musb controller instances, two phy
instances and a cppi41 dma instance within a single module.

With sgx, I believe there is only the sgx IP within the
ti-sysc interconnect target module. They are still separate
devices with their own control registers.

Regards,

Tony
Maxime Ripard April 22, 2020, 6:58 a.m. UTC | #15
On Tue, Apr 21, 2020 at 07:29:32PM +0200, H. Nikolaus Schaller wrote:
> 
> > On 21.04.2020 at 16:15, Tony Lindgren <tony@atomide.com> wrote:
> > 
> > * Maxime Ripard <maxime@cerno.tech> [200421 11:22]:
> >> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
> >>> I had a look at genpd and I'm not really sure if that fits.
> >>> 
> >>> It is basically a single bit that controls whether the clocks should be
> >>> enabled or disabled.
> >> 
> >> No, it can do much more than that. It's a framework to control the SoC's power
> >> domains, so clocks might be a part of it, but most of the time it's going to be
> >> about powering up a particular device.
> > 
> > Note that on omaps there are actually SoC module specific registers.
> 
> > Ah, I see. The difference is of course that the TI glue logic has
> > its own registers in the same address range as the sgx, and this can't
> > be easily handled by a common sgx driver.
> > 
> > This indeed seems to be unique to omap.
> 
> > And there can be multiple devices within a single target module on
> > omaps. So the extra dts node and device is justified there.
> > 
> > For other SoCs, the SGX clocks are probably best handled directly
> > in pvr-drv.c PM runtime functions unless a custom hardware wrapper
> > with SoC specific registers exists.
> 
> That is why we need to evaluate what the better strategy is.
> 
> So we have
> a) omap which has a custom wrapper around the sgx
> b) others without, i.e. an empty (or pass-through) wrapper
> 
> Which one do we make the "standard" and which one the "exception"?
> What are good reasons for either one?
> 
> 
> I am currently in strong favour of a) being standard because it
> makes the pvr-drv.c simpler and really generic (independent of
> wrapping into any SoC).
> 
> This will likely avoid problems if we find more SoCs with yet another
> scheme for how the SGX clocks are wrapped.
> 
> It also allows handling different numbers of clocks (the A31 seems to
> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
> or making big lists of conditionals. This variance would be handled
> outside the sgx core bindings and driver.

I disagree. Every other GPU binding and driver is handling that just fine, and
the SGX is not special in any case here.

Maxime
H. Nikolaus Schaller April 22, 2020, 7:10 a.m. UTC | #16
Hi Maxime,

> On 22.04.2020 at 08:58, Maxime Ripard <maxime@cerno.tech> wrote:
> 
> On Tue, Apr 21, 2020 at 07:29:32PM +0200, H. Nikolaus Schaller wrote:
>> 
>>> On 21.04.2020 at 16:15, Tony Lindgren <tony@atomide.com> wrote:
>>> 
>>> * Maxime Ripard <maxime@cerno.tech> [200421 11:22]:
>>>> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
>>>>> I had a look at genpd and I'm not really sure if that fits.
>>>>> 
>>>>> It is basically a single bit that controls whether the clocks should be
>>>>> enabled or disabled.
>>>> 
>>>> No, it can do much more than that. It's a framework to control the SoC's power
>>>> domains, so clocks might be a part of it, but most of the time it's going to be
>>>> about powering up a particular device.
>>> 
>>> Note that on omaps there are actually SoC module specific registers.
>> 
>> Ah, I see. The difference is of course that the TI glue logic has
>> its own registers in the same address range as the sgx, and this can't
>> be easily handled by a common sgx driver.
>> 
>> This indeed seems to be unique to omap.
>> 
>>> And there can be multiple devices within a single target module on
>>> omaps. So the extra dts node and device is justified there.
>>> 
>>> For other SoCs, the SGX clocks are probably best handled directly
>>> in pvr-drv.c PM runtime functions unless a custom hardware wrapper
>>> with SoC specific registers exists.
>> 
>> That is why we need to evaluate what the better strategy is.
>> 
>> So we have
>> a) omap which has a custom wrapper around the sgx
>> b) others without, i.e. an empty (or pass-through) wrapper
>> 
>> Which one do we make the "standard" and which one the "exception"?
>> What are good reasons for either one?
>> 
>> 
>> I am currently in strong favour of a) being standard because it
>> makes the pvr-drv.c simpler and really generic (independent of
>> wrapping into any SoC).
>> 
>> This will likely avoid problems if we find more SoCs with yet another
>> scheme for how the SGX clocks are wrapped.
>> 
>> It also allows handling different numbers of clocks (the A31 seems to
>> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
>> or making big lists of conditionals. This variance would be handled
>> outside the sgx core bindings and driver.
> 
> I disagree. Every other GPU binding and driver is handling that just fine, and
> the SGX is not special in any case here.

Can you please explain this in more detail? With an example, a description
or a proposal?

I simply do not have the experience with "every other GPU" that you have.
And I admit that I can't tell from your statement what we should do
to bring this topic forward.

So please make a proposal for how it should be, in your view.

BR and thanks,
Nikolaus
Maxime Ripard April 22, 2020, 3:13 p.m. UTC | #17
On Wed, Apr 22, 2020 at 09:10:57AM +0200, H. Nikolaus Schaller wrote:
> > Am 22.04.2020 um 08:58 schrieb Maxime Ripard <maxime@cerno.tech>:
> > 
> > On Tue, Apr 21, 2020 at 07:29:32PM +0200, H. Nikolaus Schaller wrote:
> >> 
> >>> On 21.04.2020 at 16:15, Tony Lindgren <tony@atomide.com> wrote:
> >>> 
> >>> * Maxime Ripard <maxime@cerno.tech> [200421 11:22]:
> >>>> On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
> >>>>> I had a look at genpd and I'm not really sure if that fits.
> >>>>> 
> >>>>> It is basically a single bit that controls whether the clocks should be
> >>>>> enabled or disabled.
> >>>> 
> >>>> No, it can do much more than that. It's a framework to control the SoC's power
> >>>> domains, so clocks might be a part of it, but most of the time it's going to be
> >>>> about powering up a particular device.
> >>> 
> >>> Note that on omaps there are actually SoC module specific registers.
> >> 
> >> Ah, I see. The difference is of course that the TI glue logic has
> >> its own registers in the same address range as the sgx, and this can't
> >> be easily handled by a common sgx driver.
> >> 
> >> This indeed seems to be unique to omap.
> >> 
> >>> And there can be multiple devices within a single target module on
> >>> omaps. So the extra dts node and device is justified there.
> >>> 
> >>> For other SoCs, the SGX clocks are probably best handled directly
> >>> in pvr-drv.c PM runtime functions unless a custom hardware wrapper
> >>> with SoC specific registers exists.
> >> 
> >> That is why we need to evaluate what the better strategy is.
> >> 
> >> So we have
> >> a) omap which has a custom wrapper around the sgx
> >> b) others without, i.e. an empty (or pass-through) wrapper
> >> 
> >> Which one do we make the "standard" and which one the "exception"?
> >> What are good reasons for either one?
> >> 
> >> 
> >> I am currently in strong favour of a) being standard because it
> >> makes the pvr-drv.c simpler and really generic (independent of
> >> wrapping into any SoC).
> >> 
> >> This will likely avoid problems if we find more SoCs with yet another
> >> scheme for how the SGX clocks are wrapped.
> >> 
> >> It also allows handling different numbers of clocks (the A31 seems to
> >> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
> >> or making big lists of conditionals. This variance would be handled
> >> outside the sgx core bindings and driver.
> > 
> > I disagree. Every other GPU binding and driver is handling that just fine, and
> > the SGX is not special in any case here.
> 
> Can you please explain this in more detail? With an example, a description
> or a proposal?

I can't, I don't have any knowledge about this GPU.

> I simply do not have the experience with "every other GPU" that you have.
> And I admit that I can't tell from your statement what we should do
> to bring this topic forward.
> 
> So please make a proposal for how it should be, in your view.

If you need some inspiration, I guess you could look at the mali and vivante
bindings once you have an idea of what the GPU needs across the SoCs it's
integrated in.

Maxime
H. Nikolaus Schaller April 22, 2020, 4:09 p.m. UTC | #18
Hi Maxime,

> On 22.04.2020 at 17:13, Maxime Ripard <maxime@cerno.tech> wrote:
> 
> On Wed, Apr 22, 2020 at 09:10:57AM +0200, H. Nikolaus Schaller wrote:
>>> On 22.04.2020 at 08:58, Maxime Ripard <maxime@cerno.tech> wrote:
>>>> 
>>>> It also allows handling different numbers of clocks (the A31 seems to
>>>> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
>>>> or making big lists of conditionals. This variance would be handled
>>>> outside the sgx core bindings and driver.
>>> 
>>> I disagree. Every other GPU binding and driver is handling that just fine, and
>>> the SGX is not special in any case here.
>> 
>>>> Can you please explain this in more detail? With an example, a description
>>>> or a proposal?
> 
> I can't, I don't have any knowledge about this GPU.

Hm. Now I am fully puzzled.
You have no knowledge about this GPU but disagree with our proposal?
Is it just gut feeling?

Anyways, we need to find a solution. Together.

> 
>> I simply do not have the experience with "every other GPU" that you have.
>> And I admit that I can't tell from your statement what we should do
>> to bring this topic forward.
>> 
>> So please make a proposal for how it should be, in your view.
> 
> If you need some inspiration, I guess you could look at the mali and vivante
> bindings once you have an idea of what the GPU needs across the SoCs it's
> integrated in.

Well, I do not need inspiration, we need to come to an agreement about
img,pvrsgx.yaml and we need some maintainer to finally pick it up.

I wonder how we can come to this stage.

If I look at vivante,gc.yaml or arm,mali-utgard.yaml I don't
see big differences to what we propose and those I see seem to come
from technical differences between sgx, vivante, mali etc. So there
is no single scheme that fits all different gpu types.

One thing we can learn is that "core" seems to be a de facto standard 
for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.

BR and thanks,
Nikolaus
H. Nikolaus Schaller April 22, 2020, 5:23 p.m. UTC | #19
Hi Paul,

> On 22.04.2020 at 18:55, Paul Cercueil <paul@crapouillou.net> wrote:
> 
> Hi Nikolaus,
> 
> 
> On Wed, 22 April 2020 at 18:09, H. Nikolaus Schaller <hns@goldelico.com> wrote:
>> Hi Maxime,
>>> On 22.04.2020 at 17:13, Maxime Ripard <maxime@cerno.tech> wrote:
>>> On Wed, Apr 22, 2020 at 09:10:57AM +0200, H. Nikolaus Schaller wrote:
>>>>> On 22.04.2020 at 08:58, Maxime Ripard <maxime@cerno.tech> wrote:
>>>>>> It also allows handling different numbers of clocks (the A31 seems to
>>>>>> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
>>>>>> or making big lists of conditionals. This variance would be handled
>>>>>> outside the sgx core bindings and driver.
>>>>> I disagree. Every other GPU binding and driver is handling that just fine, and
>>>>> the SGX is not special in any case here.
>>>> Can you please explain this in more detail? With an example, a description
>>>> or a proposal?
>>> I can't, I don't have any knowledge about this GPU.
>> Hm. Now I am fully puzzled.
>> You have no knowledge about this GPU but disagree with our proposal?
>> Is it just gut feeling?
>> Anyways, we need to find a solution. Together.
>>>> I simply do not have the experience with "every other GPU" that you have.
>>>> And I admit that I can't tell from your statement what we should do
>>>> to bring this topic forward.
>>>> So please make a proposal for how it should be, in your view.
>>> If you need some inspiration, I guess you could look at the mali and vivante
>>> bindings once you have an idea of what the GPU needs across the SoCs it's
>>> integrated in.
>> Well, I do not need inspiration, we need to come to an agreement about
>> img,pvrsgx.yaml and we need some maintainer to finally pick it up.
>> I wonder how we can come to this stage.
>> If I look at vivante,gc.yaml or arm,mali-utgard.yaml I don't
>> see big differences to what we propose and those I see seem to come
>> from technical differences between sgx, vivante, mali etc. So there
>> is no single scheme that fits all different gpu types.
>> One thing we can learn is that "core" seems to be a de facto standard
>> for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.
> 
> The Vivante GPU binding requires "bus", "core" and "shader" clocks. But if your SoC only has one clock for the GPU, there's nothing that prevents you from passing the very same clock as "bus", "core" and "shader". This is what we do on the Ingenic JZ4770 SoC.

Fine and good to know.

Well, for the SGX we so far only know a single "core" clock (with different
names). Only the A31 seems to be different.

Fortunately I finally found a little time to scan through the a31
user manual: http://dl.linux-sunxi.org/A31/A31%20User%20Manual%20V1.20.pdf

There are 3 clock dividers. And there is a single clock PLL8 dedicated to
the gpu. The clock dividers are called "gpu core", "gpu mem", "gpu hyd".

Then, there are dedicated clock gating registers. And idle/power status
registers.

Unfortunately, chapter "5.1. GPU" is almost empty and has no block diagram.
So I have no idea what "HYD" stands for. And if the memory and HYD clocks
are needed and how they should be initialized. If they are different ones
or can all be driven by PLL8 in parallel.

That scarce information makes it difficult to form a "proper" bindings
document out of it. Anything could fit or be wrong.

At least there is something common with all other SGX implementations I
am aware of: there is a "core" clock.

So I'd suggest the following to get things moving forward:
* we add a "core" entry to clock-names in the bindings
* this can't be wrong for the A31 since it is defined in the data sheet
* we make it optional since the omap chips have a clock wrapper
* "core" is a name I think all architectures/drivers can live with,
  and we can add "shader", "bus" etc. later if needed
* any changes for the A31 will be pure additions

If that sounds ok and nobody objects to it, I can submit a new patch
version for further review.
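
For illustration, a minimal sketch of what this would mean for a node
(the compatible strings and address are placeholders, and the
JZ4780_CLK_GPU specifier is assumed from the jz4780 CGU clock header):

  gpu: gpu@13040000 {
          compatible = "ingenic,jz4780-sgx540-130", "img,sgx540";    /* placeholder */
          reg = <0x13040000 0x4000>;                                 /* placeholder */
          clocks = <&cgu JZ4780_CLK_GPU>;
          clock-names = "core";
  };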

BR and thanks,
Nikolaus
Philipp Rossak April 22, 2020, 7 p.m. UTC | #20
Hi,

On 22.04.20 19:23, H. Nikolaus Schaller wrote:
> Hi Paul,
> 
>> On 22.04.2020 at 18:55, Paul Cercueil <paul@crapouillou.net> wrote:
>>
>> Hi Nikolaus,
>>
>>
>> On Wed, 22 April 2020 at 18:09, H. Nikolaus Schaller <hns@goldelico.com> wrote:
>>> Hi Maxime,
>>>> On 22.04.2020 at 17:13, Maxime Ripard <maxime@cerno.tech> wrote:
>>>> On Wed, Apr 22, 2020 at 09:10:57AM +0200, H. Nikolaus Schaller wrote:
>>>>>> On 22.04.2020 at 08:58, Maxime Ripard <maxime@cerno.tech> wrote:
>>>>>>> It also allows handling different numbers of clocks (the A31 seems to
>>>>>>> need 4; Samsung, A83 and JZ4780 one) without changing the sgx bindings
>>>>>>> or making big lists of conditionals. This variance would be handled
>>>>>>> outside the sgx core bindings and driver.
>>>>>> I disagree. Every other GPU binding and driver is handling that just fine, and
>>>>>> the SGX is not special in any case here.
>>>>> Can you please explain this in more detail? With an example, a description
>>>>> or a proposal?
>>>> I can't, I don't have any knowledge about this GPU.
>>> Hm. Now I am fully puzzled.
>>> You have no knowledge about this GPU but disagree with our proposal?
>>> Is it just gut feeling?
>>> Anyways, we need to find a solution. Together.
>>>>> I simply do not have the experience with "every other GPU" that you have.
>>>>> And I admit that I can't tell from your statement what we should do
>>>>> to bring this topic forward.
>>>>> So please make a proposal for how it should be, in your view.
>>>> If you need some inspiration, I guess you could look at the mali and vivante
>>>> bindings once you have an idea of what the GPU needs across the SoCs it's
>>>> integrated in.
>>> Well, I do not need inspiration, we need to come to an agreement about
>>> img,pvrsgx.yaml and we need some maintainer to finally pick it up.
>>> I wonder how we can come to this stage.
>>> If I look at vivante,gc.yaml or arm,mali-utgard.yaml I don't
>>> see big differences to what we propose and those I see seem to come
>>> from technical differences between sgx, vivante, mali etc. So there
>>> is no single scheme that fits all different gpu types.
>>> One thing we can learn is that "core" seems to be a de facto standard
>>> for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.
>>
>> The Vivante GPU binding requires "bus", "core" and "shader" clocks. But if your SoC only has one clock for the GPU, there's nothing that prevents you from passing the very same clock as "bus", "core" and "shader". This is what we do on the Ingenic JZ4770 SoC.
> 
> Fine and good to know.
> 
> Well, for the SGX we so far only know a single "core" clock (with different
> names). Only the A31 seems to be different.
> 
> Fortunately I finally found a little time to scan through the a31
> user manual: http://dl.linux-sunxi.org/A31/A31%20User%20Manual%20V1.20.pdf
> 
> There are 3 clock dividers. And there is a single clock PLL8 dedicated to
> the gpu. The clock dividers are called "gpu core", "gpu mem", "gpu hyd".
> 
> Then, there are dedicated clock gating registers. And idle/power status
> registers.
> 
> Unfortunately, chapter "5.1. GPU" is almost empty and has no block diagram.
> So I have no idea what "HYD" stands for. And if the memory and HYD clocks
> are needed and how they should be initialized, or whether they are separate
> clocks or can all be driven by PLL8 in parallel.
> 
> That scarce information makes it difficult to form a "proper" bindings
> document out of it. Any guess could fit or could be wrong.
> 
> At least there is something in common with all other SGX implementations I
> am aware of: there is a "core" clock.
> 
> So I'd suggest the following to get things moving forward:
> * we add a "core" entry to clock-names in the bindings
> * this can't be wrong for the A31 since it is defined in the data sheet
> * we make it optional since the omap chips have a clock wrapper
> * "core" is a name I think all architectures/drivers can live with;
>    "shader", "bus" etc. can be added later if needed
> * any further clocks needed for the A31 will be pure additions
> 
> If that sounds ok and nobody objects to it, I can submit a new patch
> version for further review.
> 
> BR and thanks,
> Nikolaus
> 
A few years back, I did extensive research on the PowerVR GPUs. Back then I 
found an interesting TI datasheet. I had forgotten about it until I saw 
the right buzzwords. Sorry for remembering it this late.

Back then I came to the conclusion that all PowerVR GPUs have in 
general three clocks:

a system clock, a memory clock and a core clock [1].

The hyd_clk on sunxi devices seems to be the system clock.

With this additional information it should be quite easy to arrive at a 
proper binding.

Cheers
Philipp

[1]: 
https://github.com/embed-3d/PVRSGX_hwdoc/blob/master/sources/pdfs/Spruh73c_chapter_SGX_Graphics_Accelerator.pdf
Tony Lindgren April 22, 2020, 7:33 p.m. UTC | #21
* Philipp Rossak <embed3d@gmail.com> [200422 19:05]:
> A few years back, I did extensive research on the PowerVR GPUs. Back then I
> found an interesting TI datasheet. I had forgotten about it until I saw the
> right buzzwords. Sorry for remembering it this late.
> 
> Back then I came to the conclusion that all PowerVR GPUs have in general
> three clocks:
> 
> a system clock, a memory clock and a core clock [1].

Hmm I'm not sure if those names are sgx or SoC specific.

Anyways, the sgx clocks for omap variants are already handled
by the ti-sysc module as "fck" and "ick" so nothing to do there.

> The hyd_clk on sunxi devices seems to be the system clock.
> 
> With this additional information it should be quite easy to arrive at a
> proper binding.

It would be best to find the clock name(s) used in the sgx docs
to avoid using SoC-specific naming :)

But yeah, "sysclk", "memclk" and "coreclk" seem just fine to
me for the optional clocks if that works for other SoCs.

Regards,

Tony

> [1]: https://github.com/embed-3d/PVRSGX_hwdoc/blob/master/sources/pdfs/Spruh73c_chapter_SGX_Graphics_Accelerator.pdf
H. Nikolaus Schaller April 22, 2020, 9:12 p.m. UTC | #22
Hi Tony and Philip,

> Am 22.04.2020 um 21:33 schrieb Tony Lindgren <tony@atomide.com>:
> 
> * Philipp Rossak <embed3d@gmail.com> [200422 19:05]:
>> A few years back, I did extensive research on the PowerVR GPUs. Back then I
>> found an interesting TI datasheet. I had forgotten about it until I saw the
>> right buzzwords. Sorry for remembering it this late.
>> 
>> Back then I came to the conclusion that all PowerVR GPUs have in general
>> three clocks:
>> 
>> a system clock, a memory clock and a core clock [1].

Great!  This is an excerpt of the am335x TRM.
I may have seen this information in the past but also forgot about it.

Indeed, it seems to change a lot of our thinking.

> 
> Hmm I'm not sure if those names are sgx or SoC specific.

It depends. Here is some quick research:

The am335x TRM lists:
  THALIAIRQ, SYSCLK & MEMCLK (connected in parallel), CORECLK

The omap3530 TRM has different information. It names them
  SGX_FCLK, SGX_ICLK, SGX_RST and SGX_IRQ,
  but this is likely TI nomenclature defined by the PRCM wrapper.

The DM3730 and OMAP4 TRMs tell the same story.

The OMAP5 TRM is interestingly different. It has:
  GPU_ICLK, GPU_FCLK1, GPU_FCLK2, GPU_RST and GPU_IRQ.
  Surprisingly, the PRCM outputs are called
    GPU_L3_GICLK, GPU_CORE_GCLK and GPU_HYD_GCLK.

  I.e. the same "HYD" as we have seen in the A31. It seems to
  be a feature of the sgx544 to have two functional clocks,
  one of them called "HYD".

Now I know why it hasn't played a role so far: the omap5
wrapper hides this detail from the sgx implementation.

Next I checked the AM572x TRM:
 it also has a hyd_clk, a core_clk, a sys_clk, some reset and a gpu_irq.

The DRA7xx TRM does the same as AM57xx.

So the "hyd" clock seems to be a second functional clock
with unknown function in some SGX variants. It seems to be
something different from the "memclock" of the am335x but may
be the same.

> 
> Anyways, the sgx clocks for omap variants are already handled
> by the ti-sysc module as "fck" and "ick" so nothing to do there.

Which brings back the question whether this complexity and the not
well defined clocks of the SGX core should really be part of the
bindings, and why we have to care about them at all...

What is the benefit of modeling at this level of pretend accuracy?

> 
>> The hyd_clk on sunxi devices seems to be the system clock.
>> 
>> With this additional information it should be quite easy to arrive at a
>> proper binding.
> 
> It would be best to find the clock name(s) used in the sgx docs
> to avoid using SoC-specific naming :)

If only there were specific SGX docs describing the VHDL signal names :)

> 
> But yeah, "sysclk", "memclk" and "coreclk" seem just fine to
> me for the optional clocks if that works for other SoCs.

Well, if the other SoCs would follow the PRCM/sysc approach
the omap uses, all these clocks would be part of the wrapper
and could be named and numbered as best fits the SoC
data sheet and clock control registers.

> 
> Regards,
> 
> Tony
> 
>> [1]: https://github.com/embed-3d/PVRSGX_hwdoc/blob/master/sources/pdfs/Spruh73c_chapter_SGX_Graphics_Accelerator.pdf

So a compromise could be to

* define

  clock-names:
    items:
      - const: core
      - const: mem
      - const: sys
      - const: hyd

* make clocks optional (for omap or others wanting to use a wrapper driver)
* DTs can request the same clock provider for core and hyd or mem if that fits best
* the driver must enable all 4 clocks if they exist (see the sketch below)
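
For illustration, a node following this compromise could look like the
sketch below (compatible string, register address, interrupt number and
clock phandles are all invented placeholders; note how the same provider
may back several names):

  gpu: gpu@1c40000 {
          compatible = "img,sgx544";      /* placeholder */
          reg = <0x01c40000 0x10000>;
          interrupts = <97>;
          /* core and hyd share a provider here, as a DT might do
           * on an SoC without a separate hyd clock
           */
          clocks = <&ccu 10>, <&ccu 11>, <&ccu 12>, <&ccu 10>;
          clock-names = "core", "mem", "sys", "hyd";
  };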

BR and thanks,
Nikolaus
Neil Armstrong April 23, 2020, 3 p.m. UTC | #23
On 22/04/2020 18:09, H. Nikolaus Schaller wrote:
> Hi Maxime,
> 
>> Am 22.04.2020 um 17:13 schrieb Maxime Ripard <maxime@cerno.tech>:
>>
>> On Wed, Apr 22, 2020 at 09:10:57AM +0200, H. Nikolaus Schaller wrote:
>>>> Am 22.04.2020 um 08:58 schrieb Maxime Ripard <maxime@cerno.tech>:
>>>>>
>>>>> It also allows handling different numbers of clocks (the A31 seems to
>>>>> need 4; Samsung, A83 and JZ4780 just one) without changing the sgx bindings
>>>>> or making big lists of conditionals. This variance would be handled
>>>>> outside the sgx core bindings and driver.
>>>>
>>>> I disagree. Every other GPU binding and driver is handling that just fine, and
>>>> the SGX is not special in any case here.
>>>
>>> Can you please explain this better? With an example, a description
>>> or a proposal?
>>
>> I can't, I don't have any knowledge about this GPU.
> 
> Hm. Now I am fully puzzled.
> You have no knowledge about this GPU but disagree with our proposal?
> Is it just gut feeling?
> 
> Anyways, we need to find a solution. Together.
> 
>>
>>> I simply do not have your experience with "every other GPU".
>>> And I admit that I can't tell from your statement what we should do
>>> to bring this topic forward.
>>>
>>> So please make a proposal for how it should be, in your view.
>>
>> If you need some inspiration, I guess you could look at the mali and vivante
>> bindings once you have an idea of what the GPU needs across the SoCs it's
>> integrated in.
> 
> Well, I do not need inspiration; we need to come to an agreement about
> img,pvrsgx.yaml and we need some maintainer to finally pick it up.
> 
> I wonder how we can get to that stage.
> 
> If I look at vivante,gc.yaml or arm,mali-utgard.yaml I don't
> see big differences from what we propose, and those I do see seem to come
> from technical differences between sgx, vivante, mali etc. So there
> is no single scheme that fits all the different gpu types.
> 
> One thing we can learn is that "core" seems to be a de facto standard 
> for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.

Usually IPs need a few clocks:
- pclk or apb or reg: the clock driving the "slave" bus that serves the registers
- axi or bus or ahb: the clock driving the "master" bus used to get data from system memory
- core: the actual clock feeding the GPU logic

Sometimes you have a single clock for slave and master bus.

But you can also have separate clocks for shader cores etc.; this depends on the IP and its architecture.
The IP can also have memories with separate clocks, etc.

But all these clocks can be sourced by a single clock on one SoC, yet by different
clocks on another SoC; this is why it's important to list them all, even as optional.

You'll certainly have at least a reset signal and a power domain; these should exist in the binding and be optional.
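
For illustration only (every compatible, address and phandle below is
invented), a consumer node listing such resources could look like:

  gpu: gpu@40000000 {
          compatible = "img,sgx544";              /* placeholder */
          reg = <0x40000000 0x10000>;
          /* slave bus, master bus and GPU logic clocks */
          clocks = <&clks 10>, <&clks 11>, <&clks 12>;
          clock-names = "reg", "bus", "core";
          resets = <&rst 5>;                      /* optional */
          power-domains = <&pd 3>;                /* optional */
  };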

Neil

> 
> BR and thanks,
> Nikolaus
H. Nikolaus Schaller April 23, 2020, 3:45 p.m. UTC | #24
Hi Neil,

> Am 23.04.2020 um 17:00 schrieb Neil Armstrong <narmstrong@baylibre.com>:
>> One thing we can learn is that "core" seems to be a de facto standard 
>> for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.
> 
> Usually IPs need a few clocks:
> - pclk or apb or reg: the clock driving the "slave" bus that serves the registers
> - axi or bus or ahb: the clock driving the "master" bus used to get data from system memory
> - core: the actual clock feeding the GPU logic

And the sgx544 seems to have two such clocks.

> Sometimes you have a single clock for slave and master bus.
> 
> But you can also have separate clocks for shader cores etc.; this depends on the IP and its architecture.
> The IP can also have memories with separate clocks, etc.

Indeed.

> But all these clocks can be sourced by a single clock on one SoC, yet by different
> clocks on another SoC; this is why it's important to list them all, even as optional.
> 
> You'll certainly have at least a reset signal and a power domain; these should exist in the binding and be optional.

Well, they exist only as hints in block diagrams of some SoC data sheets
(so we do not know if they represent the imagination definitions), and the
current driver code doesn't make use of them. Still, the gpu core works.

So I do not see any urgent need to add a complete list to the bindings now.

Unless some special SoC integration makes use of them, I see no need to add
them now. It is IMHO easier to extend the bindings with a follow-up patch
than to think now about all potential options and bloat the bindings with
things we (the open source community) don't and can't know.

My goal is to keep the bindings as minimalistic as possible. And reset lines
and power domains are (at least for the SoCs we have in the works) not needed
to make working systems.

Therefore, for clocks I would also start with a minimalistic approach,
a single optional GPU core clock, and leave out reset and power completely.

BR and thanks,
Nikolaus
Maxime Ripard April 23, 2020, 8:36 p.m. UTC | #25
On Thu, Apr 23, 2020 at 05:45:55PM +0200, H. Nikolaus Schaller wrote:
> > Am 23.04.2020 um 17:00 schrieb Neil Armstrong <narmstrong@baylibre.com>:
> >> One thing we can learn is that "core" seems to be a de facto standard 
> >> for the core clock-name. An alternative "gpu" is used by nvidia,gk20a.txt.
> > 
> > Usually IPs need a few clocks:
> > - pclk or apb or reg: the clock driving the "slave" bus that serves the registers
> > - axi or bus or ahb: the clock driving the "master" bus used to get data from system memory
> > - core: the actual clock feeding the GPU logic
> 
> And the sgx544 seems to have two such clocks.
> 
> > Sometimes you have a single clock for slave and master bus.
> > 
> > But you can also have separate clocks for shader cores etc.; this depends on the IP and its architecture.
> > The IP can also have memories with separate clocks, etc.
> 
> Indeed.
> 
> > But all these clocks can be sourced by a single clock on one SoC, yet by different
> > clocks on another SoC; this is why it's important to list them all, even as optional.
> > 
> > You'll certainly have at least a reset signal and a power domain; these should exist in the binding and be optional.
> 
> Well, they exist only as hints in block diagrams of some SoC data
> sheets (so we do not know if they represent the imagination
> definitions), and the current driver code doesn't make use of them.
> Still, the gpu core works.
> 
> So I do not see any urgent need to add a complete list to the bindings
> now.
> 
> Unless some special SoC integration makes use of them, I see no need to
> add them now. It is IMHO easier to extend the bindings with a follow-up
> patch than to think now about all potential options and bloat the
> bindings with things we (the open source community) don't and can't know.
> 
> My goal is to keep the bindings as minimalistic as possible. And reset
> lines and power domains are (at least for the SoCs we have in the works)
> not needed to make working systems.
> 
> Therefore, for clocks I would also start with a minimalistic approach,
> a single optional GPU core clock, and leave out reset and power
> completely.

Like I said above, the DT is considered an ABI and you'll have to
maintain backward compatibility (i.e., a newer kernel running with an
older DT). Therefore, you won't be able to require a new clock, reset or
power-domain later on, for example.

I guess the question I'm really asking is: since you don't really know
how the hardware is integrated at the moment, why should we have that
discussion *now*? It's really not surprising that you don't know yet, so
I'm not sure why we need to rush the bindings in.

Maxime
Maxime Ripard April 23, 2020, 8:37 p.m. UTC | #26
On Tue, Apr 21, 2020 at 06:42:17PM +0200, Philipp Rossak wrote:
> Hi,
> 
> On 21.04.20 13:21, Maxime Ripard wrote:
> > Hi,
> > 
> > On Tue, Apr 21, 2020 at 11:57:33AM +0200, Philipp Rossak wrote:
> > > On 20.04.20 09:38, Maxime Ripard wrote:
> > > > Hi,
> > > > 
> > > > On Fri, Apr 17, 2020 at 02:09:06PM +0200, Philipp Rossak wrote:
> > > > > > > I'm a bit skeptical on that one since it doesn't even list the
> > > > > > > interrupts connected to the GPU that the binding mandates.
> > > > > > 
> > > > > > I think he left it out for a future update.
> > > > > > But best he comments himself.
> > > > > 
> > > > > I'm currently working on those bindings. They are now 90% done, but not
> > > > > finished yet. Currently there is some mainline support missing to
> > > > > add the full binding. The A83T and also the A31/A31s have a GPU Power Off
> > > > > Gating Register in the R_PRCM module that is not supported right now in
> > > > > mainline. The register needs to be written when the GPU is powered on and
> > > > > off.
> > > > > 
> > > > > @Maxime: I totally agree with your point that a demo needs to be provided
> > > > > before the related DTS patches. That's the reason why I
> > > > > added the gpu placeholder patches.
> > > > > Do you have an idea what a driver for the R_PRCM stuff could look like? I'm not
> > > > > that experienced with the clock driver framework.
> > > > 
> > > > It looks like a power-domain to me, so you'd rather plug that into the genpd
> > > > framework.
> > > 
> > > I had a look at genpd and I'm not really sure it fits.
> > > 
> > > It is basically just a bit that indicates whether the clocks should be enabled or
> > > disabled.
> > 
> > No, it can do much more than that. It's a framework to control the SoC's power
> > domains, so clocks might be a part of it, but most of the time it's going to be
> > about powering up a particular device.
> > 
> So I think I've now found the right piece of documentation and a driver that
> implements something similar [1].
> 
> So I will write a driver similar to the one linked above that only sets the right
> bits for the A83T and A31/A31s.
> Do you think this is the right approach?

That sounds about right, yes.
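
Roughly speaking, the DT side of such a power domain could then look like
this sketch (all compatibles, addresses and values below are invented
placeholders, just to show the shape):

  /* hypothetical provider for the GPU power-off gating bit in R_PRCM */
  gpu_pd: power-controller@1f01400 {
          compatible = "allwinner,sun8i-a83t-gpu-power-domain";
          reg = <0x01f01400 0x4>;
          #power-domain-cells = <0>;
  };

  /* the GPU node would then just reference it */
  gpu: gpu@1c40000 {
          compatible = "img,sgx544";      /* placeholder */
          reg = <0x01c40000 0x10000>;
          power-domains = <&gpu_pd>;
  };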

Maxime
H. Nikolaus Schaller April 24, 2020, 9:51 a.m. UTC | #27
Hi,

> Am 23.04.2020 um 22:36 schrieb Maxime Ripard <maxime@cerno.tech>:
>> My goal is to keep the bindings as minimalistic as possible. And reset
>> lines and power domains are (at least for the SoCs we have in the works)
>> not needed to make working systems.
>> 
>> Therefore, for clocks I would also start with a minimalistic approach,
>> a single optional GPU core clock, and leave out reset and power
>> completely.
> 
> Like I said above, the DT is considered an ABI and you'll have to
> maintain backward compatibility (i.e., a newer kernel running with an
> older DT).

Generally I fully agree with this rule (although I have experienced
that exceptions happen more often than I would like).

But here, we don't have any older DT that defines anything about SGX.

We introduce SGX for the first time with bindings and DT in parallel.
So they are in sync.

Therefore, newer kernels with SGX support running an older DT will simply
skip SGX and not load any drivers. So we can't break older DTs, and older
DTs can't break SGX.

What we introduce is DT code that is well matured and tested (originating
in vendor kernels). It is cast into a bindings.yaml that not everyone
is happy with, for reasons outside the originally proposed DT.

For new SoCs not yet supported, I don't see a need to touch the
existing ones.

This is because I only propose to *add* properties to the bindings
for devices that have not been supported with SGX before and are
not sufficiently covered by what exists.

So backward compatibility is a non-problem.

> Therefore, you won't be able to require a new clock, reset or
> power-domain later on, for example.
> 
> I guess the question I'm really asking is: since you don't really know
> how the hardware is integrated at the moment,

Like I explained, we do not need to know and model all details of
the hardware integration. The register set of an SoC does not always
provide bits to control all the signals we may see in a block diagram
or think must exist.

We have a set of SoCs where it has been demonstrated to work without
needing more detailed knowledge about the specific hardware integration.

So we know everything of importance to make it work on this initial
set of SoCs.

> why should we have that
> discussion *now*? It's really not surprising that you don't know yet, so
> I'm not sure why we need to rush the bindings in.

Because:
* there are people who want to have upstream SGX support for an initial
  set of SoCs *now*
* the discussion has already lasted ca. 6 months since I posted v1;
  that should be enough and is not a rush
* it is not required to know more details to make a working system
* we will not gain more information by waiting for another year or two
* problems are not solved by postponing them
* there are DTS for some initial SoCs, tested to work
* it is no longer possible to submit DT without bindings.yaml (or is it?)
* we just need to define a bindings.yaml for them, not invent something
  completely new
* we can start with a minimal bindings.yaml for the analysed SoCs and
  *extend* it in the future if really needed
* we can discuss changes & extensions for the bindings when they are
  really proposed
* having this patch series upstream is a prerequisite for introducing
  the sgx kernel driver to staging

In other words: your suggestion to postpone everything would leave finished
work sitting in front of the door, rotting and blocking unfinished work...

And to be honest, we have already postponed SGX support for too long
and could be much further along with more and broader community cooperation.
So we should not block ourselves.

So if you can contribute new information or proposals to specifically
improve the proposed bindings.yaml, you are very welcome. But please do
it *now*.

BR and thanks,
Nikolaus