diff mbox

[2/2] arm64: dts: sdm845: Support GPU/GMU

Message ID 20180302215638.6145-3-jcrouse@codeaurora.org (mailing list archive)
State Not Applicable, archived

Commit Message

Jordan Crouse March 2, 2018, 9:56 p.m. UTC
Add the nodes and other bits to describe the Adreno GPU and GMU
devices.

Change-Id: Ibf4dc0ebb0ac03d8b6b8e65747e142c440e70b0a
Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 120 +++++++++++++++++++++++++++++++++++
 1 file changed, 120 insertions(+)

Comments

Viresh Kumar March 5, 2018, 4:42 a.m. UTC | #1
On 02-03-18, 14:56, Jordan Crouse wrote:
> Add the nodes and other bits to describe the Adreno GPU and GMU
> devices.
> 
> Change-Id: Ibf4dc0ebb0ac03d8b6b8e65747e142c440e70b0a

Remove it ?

> Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
> ---
>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 120 +++++++++++++++++++++++++++++++++++
>  1 file changed, 120 insertions(+)
> 
> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> index 7b5c16eb63b7..cc6d367ee55e 100644
> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> @@ -312,5 +312,125 @@
>  				status = "disabled";
>  			};
>  		};
> +
> +		adreno_smmu: arm,smmu-adreno@5040000 {
> +			compatible = "qcom,msm8996-smmu-v2";
> +			reg = <0x5040000 0x10000>;
> +			#iommu-cells = <1>;
> +			#global-interrupts = <2>;
> +			interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_HIGH>,
> +				     <GIC_SPI 231 IRQ_TYPE_LEVEL_HIGH>,
> +				     <GIC_SPI 364 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 365 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 366 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 367 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 368 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 369 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 370 IRQ_TYPE_EDGE_RISING>,
> +				     <GIC_SPI 371 IRQ_TYPE_EDGE_RISING>;
> +			clocks = <&clock_gcc GCC_GPU_MEMNOC_GFX_CLK>,
> +				 <&clock_gcc GCC_GPU_CFG_AHB_CLK>;
> +			clock-names = "bus", "iface";
> +
> +			power-domains = <&clock_gpucc GPU_CX_GDSC>;
> +		};
> +
> +		gpu_opp_table: adreno-opp-table {
> +			compatible = "operating-points-v2";
> +
> +			opp-710000000 {
> +				opp-hz = /bits/ 64 <710000000>;
> +				qcom,arc-level = <416>;

I am not sure if I saw where this is defined ?
Jordan Crouse March 5, 2018, 3:28 p.m. UTC | #2
On Mon, Mar 05, 2018 at 10:12:21AM +0530, Viresh Kumar wrote:
> On 02-03-18, 14:56, Jordan Crouse wrote:
> > Add the nodes and other bits to describe the Adreno GPU and GMU
> > devices.
> > 
> > Change-Id: Ibf4dc0ebb0ac03d8b6b8e65747e142c440e70b0a
> 
> Remove it ?

* Shakes fist at Gerrit *

> > Signed-off-by: Jordan Crouse <jcrouse@codeaurora.org>
> > ---
> >  arch/arm64/boot/dts/qcom/sdm845.dtsi | 120 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 120 insertions(+)
> > 
> > diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> > index 7b5c16eb63b7..cc6d367ee55e 100644
> > --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> > +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> > @@ -312,5 +312,125 @@
> >  				status = "disabled";
> >  			};
> >  		};
> > +
> > +		adreno_smmu: arm,smmu-adreno@5040000 {
> > +			compatible = "qcom,msm8996-smmu-v2";
> > +			reg = <0x5040000 0x10000>;
> > +			#iommu-cells = <1>;
> > +			#global-interrupts = <2>;
> > +			interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_HIGH>,
> > +				     <GIC_SPI 231 IRQ_TYPE_LEVEL_HIGH>,
> > +				     <GIC_SPI 364 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 365 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 366 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 367 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 368 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 369 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 370 IRQ_TYPE_EDGE_RISING>,
> > +				     <GIC_SPI 371 IRQ_TYPE_EDGE_RISING>;
> > +			clocks = <&clock_gcc GCC_GPU_MEMNOC_GFX_CLK>,
> > +				 <&clock_gcc GCC_GPU_CFG_AHB_CLK>;
> > +			clock-names = "bus", "iface";
> > +
> > +			power-domains = <&clock_gpucc GPU_CX_GDSC>;
> > +		};
> > +
> > +		gpu_opp_table: adreno-opp-table {
> > +			compatible = "operating-points-v2";
> > +
> > +			opp-710000000 {
> > +				opp-hz = /bits/ 64 <710000000>;
> > +				qcom,arc-level = <416>;
> 
> I am not sure if I saw where this is defined ?

I'm glad you brought this up - I was trying to find a place in the documentation
to put it, but since target-specific nodes would be a new trick for OPP, I didn't
quite know how to go about doing it. Do we just list them as Optional:, or
should we add a target-specific section to the documentation and list them out
there instead?

Jordan
Viresh Kumar March 6, 2018, 4:26 a.m. UTC | #3
On 05-03-18, 08:28, Jordan Crouse wrote:
> I'm glad you brought this up - I was trying to find a place in the documentation
> to put it, but since target specific nodes would be a new trick for OPP I didn't
> quite know how to go about doing it. Do we just list them as Optional: or
> should we add a target specific section to the documentation and list them out
> there instead?

Something like this, and you need to have your own compatible string as
well:

Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt

But first, what's the purpose of this field? Just to check if it can
be handled by current bindings.
Jordan Crouse March 6, 2018, 3:37 p.m. UTC | #4
On Tue, Mar 06, 2018 at 09:56:56AM +0530, Viresh Kumar wrote:
> On 05-03-18, 08:28, Jordan Crouse wrote:
> > I'm glad you brought this up - I was trying to find a place in the documentation
> > to put it, but since target specific nodes would be a new trick for OPP I didn't
> > quite know how to go about doing it. Do we just list them as Optional: or
> > should we add a target specific section to the documentation and list them out
> > there instead?
> 
> Something like this and you need to have your own compatible string as
> well:
> 
> Documentation/devicetree/bindings/opp/ti-omap5-opp-supply.txt
> 
> But first, what's the purpose of this field? Just to check if it can
> be handled by current bindings.

I'll try to explain but I might need Stephen or some of the other folks to jump
in and save me.

On sdm845 there are shared power resources controlled by the RPMh, which is
programmed by a series of commands from the regulator driver; but in the case
of the GPU, the votes are passed to the GMU (graphics management unit), which
communicates with the RPMh on behalf of the GPU.

In order to construct the RPMh vote we first need a voltage level that we
can look up in a database. qcom,arc-level is the voltage level associated with
a specific OPP.

I had previously been abusing opp-microvolt for this, but that was wrong for
obvious reasons, and then Stephen pointed out that this would be a better way.

Jordan
Viresh Kumar March 7, 2018, 5:06 a.m. UTC | #5
On 06-03-18, 08:37, Jordan Crouse wrote:
> I'll try to explain but I might need Stephen or some of the other folks to jump
> in and save me.

Maybe you should start using his kernel.org address then ? :)

> On sdm845 there are shared power resources controlled by the RPMh which is
> programed by a series of commands from the regulator driver; but in the case
> of the GPU the votes are passed to the GMU (graphics management unit) which
> communicates with the RPMh on behalf of the GPU.
> 
> In order to construct the RPMh vote we need first need a voltage level that we
> can look up in a database. qcom,arc-level is the voltage level associated with
> a specific OPP.
> 
> I had previously been abusing this with opp-microvolt but that was wrong for
> obvious reasons and then Stephen pointed out that this would be a better way.

Glad that you shared that :)

A solution is already in progress for that and is partially merged as
well.

Look for "performance_state" in genpd and OPP cores (already merged),
followed by this series here:

https://lkml.kernel.org/r/cover.1513926033.git.viresh.kumar@linaro.org
Jordan Crouse March 8, 2018, 8:14 p.m. UTC | #6
On Wed, Mar 07, 2018 at 10:36:24AM +0530, Viresh Kumar wrote:
> On 06-03-18, 08:37, Jordan Crouse wrote:
> > I'll try to explain but I might need Stephen or some of the other folks to jump
> > in and save me.
> 
> Maybe you should start using his kernel.org address then ? :)
> 
> > On sdm845 there are shared power resources controlled by the RPMh which is
> > programed by a series of commands from the regulator driver; but in the case
> > of the GPU the votes are passed to the GMU (graphics management unit) which
> > communicates with the RPMh on behalf of the GPU.
> > 
> > In order to construct the RPMh vote we need first need a voltage level that we
> > can look up in a database. qcom,arc-level is the voltage level associated with
> > a specific OPP.
> > 
> > I had previously been abusing this with opp-microvolt but that was wrong for
> > obvious reasons and then Stephen pointed out that this would be a better way.
> 
> Glad that you shared that :)
> 
> A solution is already in progress for that and is partially merged as
> well.
> 
> Look for "performance_state" in genpd and OPP cores (already merged),
> followed by this series here:
> 
> https://lkml.kernel.org/r/cover.1513926033.git.viresh.kumar@linaro.org

It seems to me that performance_state has a direct relationship with genpd,
which is good for CPU votes, but in this case we're just passing along raw data
to an independent microcontroller. The 'qcom,arc-level' is used to construct
the actual values that the GMU will program into the RPMh. Since these are
informational (from the CPU perspective) rather than functional, I feel that
using performance_state for this would be as hacky as using opp-microvolt
or something else.

Jordan
Viresh Kumar March 9, 2018, 3:43 a.m. UTC | #7
On 08-03-18, 13:14, Jordan Crouse wrote:
> It seems to me that performance_state has a direct relationship with genpd
> which is good for CPU votes but in this case, we're just passing along raw data
> to an independent microcontroller. The 'qcom,arc-level' is used to construct
> the actual values that the GMU will program into the RPMh. Since these are

The "genpd" here is created specially for this RPM. The performance-state thing
is designed to solve this very specific problem of Qualcomm SoCs, and there is no
way we are going to add another property for this now.

> informational (from the CPU perspective) rather than functional I feel like
> that using performance_state for this would be as hacky as using opp-microvolt
> or something else.

There is some WIP stuff here that Rajendra is testing currently.

ssh://git@git.linaro.org/people/viresh.kumar/mylinux.git opp/genpd/qcom

Please have a talk with Rajendra, who can help you understand how this can be
used for GPUs.
Rajendra Nayak March 9, 2018, 10:19 a.m. UTC | #8
Hey Jordan/Viresh,

On 03/09/2018 09:13 AM, Viresh Kumar wrote:
> On 08-03-18, 13:14, Jordan Crouse wrote:
>> It seems to me that performance_state has a direct relationship with genpd
>> which is good for CPU votes but in this case, we're just passing along raw data
>> to an independent microcontroller. The 'qcom,arc-level' is used to construct
>> the actual values that the GMU will program into the RPMh. Since these are
> 
> The "genpd" here is created specially for this RPM. The performance-state thing
> is designed to solve this very specific problem of qualcomm SoCs and there is no
> way we are going to add another property for this now.
> 
>> informational (from the CPU perspective) rather than functional I feel like
>> that using performance_state for this would be as hacky as using opp-microvolt
>> or something else.
> 
> There is some WIP stuff here Rajendra is testing currently.

What I am doing is a powerdomain driver to communicate magic values from
the CPU to rpmh; it looks like we need another one to communicate from the
CPU to the GMU now :)

A few things:
* Someone will need to map these larger magic values into something that
rpmh would understand (between 0 and 15 on the sdm845); that's done by
reading the command db level maps. Is this done by the GMU?

* Are these communicated just once for every OPP at init/boot, or for
every OPP switch?

* What's the communication mechanism we use between the CPU and the GMU?

regards
Rajendra

> 
> ssh://git@git.linaro.org/people/viresh.kumar/mylinux.git opp/genpd/qcom
> 
> Please have a talk with Rajendra who can help you understand on how this can be
> used for GPUs.
>
Jordan Crouse March 9, 2018, 3:47 p.m. UTC | #9
On Fri, Mar 09, 2018 at 03:49:00PM +0530, Rajendra Nayak wrote:
> Hey Jordan/Viresh,
> 
> On 03/09/2018 09:13 AM, Viresh Kumar wrote:
> > On 08-03-18, 13:14, Jordan Crouse wrote:
> >> It seems to me that performance_state has a direct relationship with genpd
> >> which is good for CPU votes but in this case, we're just passing along raw data
> >> to an independent microcontroller. The 'qcom,arc-level' is used to construct
> >> the actual values that the GMU will program into the RPMh. Since these are
> > 
> > The "genpd" here is created specially for this RPM. The performance-state thing
> > is designed to solve this very specific problem of qualcomm SoCs and there is no
> > way we are going to add another property for this now.
> > 
> >> informational (from the CPU perspective) rather than functional I feel like
> >> that using performance_state for this would be as hacky as using opp-microvolt
> >> or something else.
> > 
> > There is some WIP stuff here Rajendra is testing currently.
> 
> What I am doing is a powerdomain driver to communicate magic values from
> CPU to rpmh, looks like we need another one to communicate from CPU to
> GMU now :)
> 
> A few things,
> * someone will need to map these larger magic values into something that
> rpmh would understand (between 0 to 15 on the sdm845), thats done by
> reading the command db level maps. Is this done by GMU?

It is done by the GPU: we take the magic values, construct them into other
magic values, and send them to the GMU. As far as I know we are the only
in-kernel client of cmd-db other than the regulator(s).

> * are these communicated just once for every OPP at init/boot, or for
> every OPP switch?

Just once at init.

> * whats the communication mechanism we use between CPU and GMU?

Shared memory queues inspired by Venus HFI.

Jordan
Jordan Crouse March 9, 2018, 4:03 p.m. UTC | #10
On Fri, Mar 09, 2018 at 09:13:32AM +0530, Viresh Kumar wrote:
> On 08-03-18, 13:14, Jordan Crouse wrote:
> > It seems to me that performance_state has a direct relationship with genpd
> > which is good for CPU votes but in this case, we're just passing along raw data
> > to an independent microcontroller. The 'qcom,arc-level' is used to construct
> > the actual values that the GMU will program into the RPMh. Since these are
> 
> The "genpd" here is created specially for this RPM. The performance-state thing
> is designed to solve this very specific problem of qualcomm SoCs and there is no
> way we are going to add another property for this now.
> 
> > informational (from the CPU perspective) rather than functional I feel like
> > that using performance_state for this would be as hacky as using opp-microvolt
> > or something else.
> 
> There is some WIP stuff here Rajendra is testing currently.
> 
> ssh://git@git.linaro.org/people/viresh.kumar/mylinux.git opp/genpd/qcom
> 
> Please have a talk with Rajendra who can help you understand on how this can be
> used for GPUs.

I don't think we are understanding each other. The GMU is a separate
microcontroller. It is given a magic number (actually a combination of magic
numbers) that it then uses to directly interact with the other hardware to make
the vote. The only responsibility that the CPU has is to construct that magic
number (once, at init) and send it across when asked.

Looking at the sdhc code from the testing tree, it makes perfect sense
that you have a device that eventually needs to do an RPMh vote from the CPU,
so the 'required-opp' and performance state come together to do the right thing.
This is good code.

None of this is true for the GPU. The CPU never votes for the GPU, so there
isn't any need to connect it to the power domain drivers. Even if you did,
there isn't any current mechanism for the rpmpd driver to pass the right magic
number to the GPU driver, which is what it really needs.

I suppose that instead of using 'qcom,arc-level' we could use 'qcom-corner',
but then that's just a naming dispute. We still need a way for the GPU to query
the magic values.

Jordan
Stephen Boyd March 9, 2018, 5:18 p.m. UTC | #11
(I wrote an email that seems to have been lost)

Quoting Jordan Crouse (2018-03-09 08:03:55)
> On Fri, Mar 09, 2018 at 09:13:32AM +0530, Viresh Kumar wrote:
> > On 08-03-18, 13:14, Jordan Crouse wrote:
> > > It seems to me that performance_state has a direct relationship with genpd
> > > which is good for CPU votes but in this case, we're just passing along raw data
> > > to an independent microcontroller. The 'qcom,arc-level' is used to construct
> > > the actual values that the GMU will program into the RPMh. Since these are
> > 
> > The "genpd" here is created specially for this RPM. The performance-state thing
> > is designed to solve this very specific problem of qualcomm SoCs and there is no
> > way we are going to add another property for this now.
> > 
> > > informational (from the CPU perspective) rather than functional I feel like
> > > that using performance_state for this would be as hacky as using opp-microvolt
> > > or something else.
> > 
> > There is some WIP stuff here Rajendra is testing currently.
> > 
> > ssh://git@git.linaro.org/people/viresh.kumar/mylinux.git opp/genpd/qcom
> > 
> > Please have a talk with Rajendra who can help you understand on how this can be
> > used for GPUs.
> 
> I don't think we are understanding each other. The GMU is a separate
> microcontroller. It is given a magic number (actually a combination of magic
> numbers) that it then uses to directly interact with the other hardware to make
> the vote. The only responsibility that the CPU has is to construct that magic
> number (once, at init) and send it across when asked.
> 
> Looking at the sdhc code from the testing tree it makes perfect sense
> that you have a device that needs to eventually do a RPMh vote from the CPU and
> so the 'required-opp' and performance state come together to do the right thing.
> This is good code.
> 
> None of this is true for the GPU. The CPU never votes for the GPU so there 
> isn't any need to connect it to the power domain drivers. Even if you did
> there isn't any current mechanism for the rpmpd driver to pass the right magic
> number to the GPU driver which is what it really needs.
> 
> I suppose that instead of using 'qcom,arc-level' we could use 'qcom-corner' but
> then thats just a naming dispute. We still need a way for the GPU to query the
> magic values.
> 

Agreed. Forcing genpd and set_performance_state APIs on GMU doesn't make
any sense if the GMU is doing it all itself.

The binding should be the same between sdhc and GMU if they're actually
talking about the same thing though. I think they are, because I suspect
GMU needs to translate the human-reasonable value of qcom,corner into
the hardware qcom,arc-level value to write into its hardware so the GMU
can auto-transition performance states. If I'm wrong, then
qcom,arc-level needs to be created alongside qcom,corner because
they're different number spaces.

BTW, it's qcom,corner and not qcom-corner, right?
Jordan Crouse March 9, 2018, 5:42 p.m. UTC | #12
On Fri, Mar 09, 2018 at 09:18:41AM -0800, Stephen Boyd wrote:
> (I wrote an email that seems to have been lost)
> 
> Quoting Jordan Crouse (2018-03-09 08:03:55)
> > On Fri, Mar 09, 2018 at 09:13:32AM +0530, Viresh Kumar wrote:
> > > On 08-03-18, 13:14, Jordan Crouse wrote:
> > > > It seems to me that performance_state has a direct relationship with genpd
> > > > which is good for CPU votes but in this case, we're just passing along raw data
> > > > to an independent microcontroller. The 'qcom,arc-level' is used to construct
> > > > the actual values that the GMU will program into the RPMh. Since these are
> > > 
> > > The "genpd" here is created specially for this RPM. The performance-state thing
> > > is designed to solve this very specific problem of qualcomm SoCs and there is no
> > > way we are going to add another property for this now.
> > > 
> > > > informational (from the CPU perspective) rather than functional I feel like
> > > > that using performance_state for this would be as hacky as using opp-microvolt
> > > > or something else.
> > > 
> > > There is some WIP stuff here Rajendra is testing currently.
> > > 
> > > ssh://git@git.linaro.org/people/viresh.kumar/mylinux.git opp/genpd/qcom
> > > 
> > > Please have a talk with Rajendra who can help you understand on how this can be
> > > used for GPUs.
> > 
> > I don't think we are understanding each other. The GMU is a separate
> > microcontroller. It is given a magic number (actually a combination of magic
> > numbers) that it then uses to directly interact with the other hardware to make
> > the vote. The only responsibility that the CPU has is to construct that magic
> > number (once, at init) and send it across when asked.
> > 
> > Looking at the sdhc code from the testing tree it makes perfect sense
> > that you have a device that needs to eventually do a RPMh vote from the CPU and
> > so the 'required-opp' and performance state come together to do the right thing.
> > This is good code.
> > 
> > None of this is true for the GPU. The CPU never votes for the GPU so there 
> > isn't any need to connect it to the power domain drivers. Even if you did
> > there isn't any current mechanism for the rpmpd driver to pass the right magic
> > number to the GPU driver which is what it really needs.
> > 
> > I suppose that instead of using 'qcom,arc-level' we could use 'qcom-corner' but
> > then thats just a naming dispute. We still need a way for the GPU to query the
> > magic values.
> > 
> 
> Agreed. Forcing genpd and set_performance_state APIs on GMU doesn't make
> any sense if the GMU is doing it all itself.
> 
> The binding should be the same between sdhc and GMU if they're actually
> talking about the same thing though. I think they are, because I suspect
> GMU needs to translate the human reasonable value of qcom,corner into
> the hardware qcom,arc-level value to write into it's hardware so the GMU
> can auto-transition performance states. If I'm wrong, then
> qcom,arc-level needs to be created alongside of qcom,corner because
> they're different number spaces.

I think the value that we're discussing here is the RPMh voltage level, which we
translate into the qcom,arc-level(s). I'm not sure how that matches up with
qcom,corner.

> BTW, it's qcom,corner and not qcom-corner right?

http://git.linaro.org/people/viresh.kumar/mylinux.git/commit/?h=opp/genpd/qcom&id=7586600b3bf3f8e79ce9198922fad7d4aa5b3f8d

+					qcom-corner = <1>;

shrug?

Jordan
Viresh Kumar March 12, 2018, 5:52 a.m. UTC | #13
On 09-03-18, 09:03, Jordan Crouse wrote:
> I don't think we are understanding each other. The GMU is a separate
> microcontroller. It is given a magic number (actually a combination of magic
> numbers) that it then uses to directly interact with the other hardware to make
> the vote. The only responsibility that the CPU has is to construct that magic
> number (once, at init) and send it across when asked.
> 
> Looking at the sdhc code from the testing tree it makes perfect sense
> that you have a device that needs to eventually do a RPMh vote from the CPU and
> so the 'required-opp' and performance state come together to do the right thing.
> This is good code.
> 
> None of this is true for the GPU. The CPU never votes for the GPU so there 
> isn't any need to connect it to the power domain drivers. Even if you did
> there isn't any current mechanism for the rpmpd driver to pass the right magic
> number to the GPU driver which is what it really needs.
> 
> I suppose that instead of using 'qcom,arc-level' we could use 'qcom-corner' but
> then thats just a naming dispute. We still need a way for the GPU to query the
> magic values.

Okay, I get it. You can't (shouldn't) use genpd stuff here. I would still like
you guys (you/Rajendra/Stephen) to conclude if qcom-corner and qcom,arc-level
are eventually the same values and whether we should use the same property for them.
Viresh Kumar March 12, 2018, 5:54 a.m. UTC | #14
On 09-03-18, 10:42, Jordan Crouse wrote:
> On Fri, Mar 09, 2018 at 09:18:41AM -0800, Stephen Boyd wrote:
> > BTW, it's qcom,corner and not qcom-corner right?
> 
> http://git.linaro.org/people/viresh.kumar/mylinux.git/commit/?h=opp/genpd/qcom&id=7586600b3bf3f8e79ce9198922fad7d4aa5b3f8d
> 
> +					qcom-corner = <1>;
> 
> shrug?

My branch isn't final and this naming has to come from Qcom (Rajendra). So it
can be qcom,corner eventually :)
Stephen Boyd March 13, 2018, 6:23 p.m. UTC | #15
On Sun, Mar 11, 2018 at 10:52 PM, Viresh Kumar <viresh.kumar@linaro.org> wrote:
> On 09-03-18, 09:03, Jordan Crouse wrote:
>> I don't think we are understanding each other. The GMU is a separate
>> microcontroller. It is given a magic number (actually a combination of magic
>> numbers) that it then uses to directly interact with the other hardware to make
>> the vote. The only responsibility that the CPU has is to construct that magic
>> number (once, at init) and send it across when asked.
>>
>> Looking at the sdhc code from the testing tree it makes perfect sense
>> that you have a device that needs to eventually do a RPMh vote from the CPU and
>> so the 'required-opp' and performance state come together to do the right thing.
>> This is good code.
>>
>> None of this is true for the GPU. The CPU never votes for the GPU so there
>> isn't any need to connect it to the power domain drivers. Even if you did
>> there isn't any current mechanism for the rpmpd driver to pass the right magic
>> number to the GPU driver which is what it really needs.
>>
>> I suppose that instead of using 'qcom,arc-level' we could use 'qcom-corner' but
>> then thats just a naming dispute. We still need a way for the GPU to query the
>> magic values.
>
> Okay, I get it. You can't (shouldn't) use genpd stuff here. I would still like
> you guys (You/Rajendra/Stephen) to conclude if qcom-corner and qcom,arc-level
> are eventually same values and we should use same property for them.
>

It sounds like it's qcom,{corner,level} from Jordan's description. In
my mind 'level' and 'corner' are the same but they got a name change
with the new RPM interface. Then another number space was introduced
with the new RPM interface, 'arc-level', which represents the numbers
that the hardware deals with. It may be that DT doesn't ever care to
express the 'arc-level', because cmd db hides the number space by
requiring software to translate the software 'level' to the hardware
'arc-level'. So the whole thing may be a moot point and we can decide
to use qcom,level everywhere because it's the future.

Patch

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index 7b5c16eb63b7..cc6d367ee55e 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -312,5 +312,125 @@ 
 				status = "disabled";
 			};
 		};
+
+		adreno_smmu: arm,smmu-adreno@5040000 {
+			compatible = "qcom,msm8996-smmu-v2";
+			reg = <0x5040000 0x10000>;
+			#iommu-cells = <1>;
+			#global-interrupts = <2>;
+			interrupts = <GIC_SPI 229 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 231 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 364 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 365 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 366 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 367 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 368 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 369 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 370 IRQ_TYPE_EDGE_RISING>,
+				     <GIC_SPI 371 IRQ_TYPE_EDGE_RISING>;
+			clocks = <&clock_gcc GCC_GPU_MEMNOC_GFX_CLK>,
+				 <&clock_gcc GCC_GPU_CFG_AHB_CLK>;
+			clock-names = "bus", "iface";
+
+			power-domains = <&clock_gpucc GPU_CX_GDSC>;
+		};
+
+		gpu_opp_table: adreno-opp-table {
+			compatible = "operating-points-v2";
+
+			opp-710000000 {
+				opp-hz = /bits/ 64 <710000000>;
+				qcom,arc-level = <416>;
+			};
+
+			opp-675000000 {
+				opp-hz = /bits/ 64 <675000000>;
+				qcom,arc-level = <384>;
+			};
+
+			opp-596000000 {
+				opp-hz = /bits/ 64 <596000000>;
+				qcom,arc-level = <320>;
+			};
+
+			opp-520000000 {
+				opp-hz = /bits/ 64 <520000000>;
+				qcom,arc-level = <256>;
+			};
+
+			opp-414000000 {
+				opp-hz = /bits/ 64 <414000000>;
+				qcom,arc-level = <192>;
+			};
+
+			opp-342000000 {
+				opp-hz = /bits/ 64 <342000000>;
+				qcom,arc-level = <128>;
+			};
+
+			opp-257000000 {
+				opp-hz = /bits/ 64 <257000000>;
+				qcom,arc-level = <64>;
+			};
+		};
+
+		gpu@5000000 {
+			compatible = "qcom,adreno-630.2", "qcom,adreno";
+			#stream-id-cells = <16>;
+
+			reg = <0x5000000 0x40000>;
+			reg-names = "kgsl_3d0_reg_memory";
+
+			/*
+			 * Look ma, no clocks! The GPU clocks and power are
+			 * controlled entirely by the GMU
+			 */
+
+			interrupts = <GIC_SPI 300 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "kgsl_3d0_irq";
+
+			iommus = <&adreno_smmu 0>;
+
+			operating-points-v2 = <&gpu_opp_table>;
+
+			gmu = <&gmu>;
+		};
+
+		gmu_opp_table: adreno-gmu-opp-table {
+			compatible = "operating-points-v2";
+
+			opp-400000000 {
+				opp-hz = /bits/ 64 <400000000>;
+				qcom,arc-level = <128>;
+			};
+
+			opp-200000000 {
+				opp-hz = /bits/ 64 <200000000>;
+				qcom,arc-level = <48>;
+			};
+		};
+
+		gmu: gmu@506a000 {
+			compatible = "qcom,adreno-gmu";
+
+			reg = <0x506a000 0x30000>,
+			      <0xb200000 0x300000>;
+			reg-names = "gmu", "gmu_pdc";
+
+			interrupts = <GIC_SPI 304 IRQ_TYPE_LEVEL_HIGH>,
+				     <GIC_SPI 305 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-names = "hfi", "gmu";
+
+			clocks = <&clock_gpucc GPU_CC_CX_GMU_CLK>,
+				 <&clock_gpucc GPU_CC_CXO_CLK>,
+				 <&clock_gcc GCC_DDRSS_GPU_AXI_CLK>,
+				 <&clock_gcc GCC_GPU_MEMNOC_GFX_CLK>;
+			clock-names = "gmu", "cxo", "axi", "memnoc";
+
+			power-domains = <&clock_gpucc GPU_CX_GDSC>;
+			iommus = <&adreno_smmu 5>;
+
+			operating-points-v2 = <&gmu_opp_table>;
+		};
 	};
 };