
[v2,1/5] PM / OPP: extend DT binding to specify phandle of another node for OPP

Message ID 1380634382-15609-2-git-send-email-Sudeep.KarkadaNagesha@arm.com (mailing list archive)
State RFC, archived

Commit Message

Sudeep KarkadaNagesha Oct. 1, 2013, 1:32 p.m. UTC
From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>

If more than one similar device shares the same operating points (OPPs)
by being in the same clock domain, we currently need to replicate the
OPP entries in all of their nodes.

This patch extends the existing binding by adding a new property named
'operating-points-phandle', which any device node can use to point to
another node that contains the actual OPP tuples.
This helps to avoid replication when multiple devices share the OPPs.

Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Nishanth Menon <nm@ti.com>
Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
---
 Documentation/devicetree/bindings/power/opp.txt | 161 ++++++++++++++++++++++--
 1 file changed, 149 insertions(+), 12 deletions(-)
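
For clarity, the lookup rules the new property implies — use 'operating-points'
directly if present, otherwise follow 'operating-points-phandle' to a shared
node, and never allow both — can be modelled with a small sketch. This is
illustrative Python, not the kernel OPP library; the dict-based tree and the
node paths are assumptions:

```python
# Illustrative model of the binding's lookup rules (not kernel code).
# The device tree is modelled as a dict of node path -> properties;
# a phandle is modelled as the target node's path string.

def find_opp_node(tree, dev_path):
    """Return the path of the node whose 'operating-points' apply to dev_path."""
    props = tree[dev_path]
    has_direct = "operating-points" in props
    has_phandle = "operating-points-phandle" in props
    if has_direct and has_phandle:
        # The binding declares the two properties mutually exclusive.
        raise ValueError("operating-points and operating-points-phandle "
                         "are mutually exclusive")
    if has_direct:
        return dev_path
    if has_phandle:
        return props["operating-points-phandle"]
    return None

tree = {
    "/cpus/cpu@0": {"operating-points-phandle": "/opps-table/cpu_opp"},
    "/cpus/cpu@1": {"operating-points-phandle": "/opps-table/cpu_opp"},
    "/opps-table/cpu_opp": {
        "operating-points": [(792000, 1100000), (396000, 950000), (198000, 850000)],
    },
}

# Both CPUs resolve to the same shared OPP node.
print(find_opp_node(tree, "/cpus/cpu@0"))  # -> /opps-table/cpu_opp
```

An OS implementation would perform the same fallback on the real flattened
tree, so that existing DTs using a direct 'operating-points' property keep
working unchanged.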

Comments

Nishanth Menon Oct. 3, 2013, 12:40 p.m. UTC | #1
On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
> 
> If more than one similar devices share the same operating points(OPPs)
> being in the same clock domain, currently we need to replicate the
> OPP entries in all the nodes.
> 
> This patch extends existing binding by adding a new property named
> 'operating-points-phandle' to specify the phandle in any device node
> pointing to another node which contains the actual OPP tuples.
> This helps to avoid replication if multiple devices share the OPPs.
> 
> Cc: Rob Herring <rob.herring@calxeda.com>
> Cc: Pawel Moll <pawel.moll@arm.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Stephen Warren <swarren@wwwdotorg.org>
> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
> Cc: Nishanth Menon <nm@ti.com>
> Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
> ---
>  Documentation/devicetree/bindings/power/opp.txt | 161 ++++++++++++++++++++++--
>  1 file changed, 149 insertions(+), 12 deletions(-)
> 
> diff --git a/Documentation/devicetree/bindings/power/opp.txt b/Documentation/devicetree/bindings/power/opp.txt
> index 74499e5..f59b878 100644
> --- a/Documentation/devicetree/bindings/power/opp.txt
> +++ b/Documentation/devicetree/bindings/power/opp.txt
> @@ -4,22 +4,159 @@ SoCs have a standard set of tuples consisting of frequency and
>  voltage pairs that the device will support per voltage domain. These
>  are called Operating Performance Points or OPPs.
>  
> -Properties:
> +Required Properties:
>  - operating-points: An array of 2-tuples items, and each item consists
>    of frequency and voltage like <freq-kHz vol-uV>.
>  	freq: clock frequency in kHz
>  	vol: voltage in microvolt
>  
> +- operating-points-phandle: phandle to the device tree node which contains
> +	the operating point tuples (recommended when multiple devices are in
> +	the same clock domain and hence share OPPs, as it avoids replication
> +	of the OPPs)
> +
> +  operating-points and operating-points-phandle are mutually exclusive; only
> +  one of them may be present in any device node.
> +
>  Examples:
>  
> -cpu@0 {
> -	compatible = "arm,cortex-a9";
> -	reg = <0>;
> -	next-level-cache = <&L2>;
> -	operating-points = <
> -		/* kHz    uV */
> -		792000  1100000
> -		396000  950000
> -		198000  850000
> -	>;
> -};
> +1. A uniprocessor system (phandle not required)
> +
> +	cpu0: cpu@0 {
> +		compatible = "arm,cortex-a9";
> +		reg = <0>;
> +		operating-points = <
> +			/* kHz    uV */
> +			792000  1100000
> +			396000  950000
> +			198000  850000
> +		>;
> +	};
> +
> +2a. Consider an SMP system with 4 CPUs in the same clock domain (no phandle).
> +    Some existing DTs describe homogeneous SMP systems by only listing the
> +    OPPs in the cpu@0 node. For compatibility with existing DTs, an
> +    operating system may handle this case specially.
> +
> +	cpu0: cpu@0 {
> +		compatible = "arm,cortex-a9";
> +		reg = <0>;
> +		operating-points = <
> +			/* kHz    uV */
> +			792000  1100000
> +			396000  950000
> +			198000  850000
> +		>;
> +	};
> +
> +	cpu1: cpu@1 {
> +		compatible = "arm,cortex-a9";
> +		reg = <1>;
> +	};
> +
> +	cpu2: cpu@2 {
> +		compatible = "arm,cortex-a9";
> +		reg = <2>;
> +	};
> +
> +	cpu3: cpu@3 {
> +		compatible = "arm,cortex-a9";
> +		reg = <3>;
> +	};
> +
> +2b. Consider an SMP system with 4 CPUs in the same clock domain (with phandle).
> +    If more than one device of the same type shares the same OPPs, for example
> +    all the CPUs on a SoC or in a single cluster on a SoC, then we can avoid
> +    replicating the OPPs in all the nodes by specifying the phandle of the
> +    node which contains the OPP tuples instead.
> +
> +	cpu0: cpu@0 {
> +		compatible = "arm,cortex-a9";
> +		reg = <0>;
> +		operating-points-phandle = <&cpu_opp>;
> +	};
> +
> +	cpu1: cpu@1 {
> +		compatible = "arm,cortex-a9";
> +		reg = <1>;
> +		operating-points-phandle = <&cpu_opp>;
> +	};
> +
> +	cpu2: cpu@2 {
> +		compatible = "arm,cortex-a9";
> +		reg = <2>;
> +		operating-points-phandle = <&cpu_opp>;
> +	};
> +
> +	cpu3: cpu@3 {
> +		compatible = "arm,cortex-a9";
> +		reg = <3>;
> +		operating-points-phandle = <&cpu_opp>;
> +	};
> +
> +	opps-table {
> +		cpu_opp: cpu_opp {
> +			operating-points = <
> +				/* kHz    uV */
> +				792000  1100000
> +				396000  950000
> +				198000  850000
> +			>;
> +		};
> +		... /* other device OPP nodes */
> +	};
> +
> +3. Consider an AMP (asymmetric multi-processor) system with 2 clusters of
> +   CPUs. Each cluster has 2 CPUs, and all the CPUs within a cluster share
> +   the same clock domain.
> +
> +	cpu0: cpu@0 {
> +		compatible = "arm,cortex-a15";
> +		reg = <0>;
> +		operating-points-phandle = <&cluster0_opp>;
> +	};
> +
> +	cpu1: cpu@1 {
> +		compatible = "arm,cortex-a15";
> +		reg = <1>;
> +		operating-points-phandle = <&cluster0_opp>;
> +	};
> +
> +	cpu2: cpu@100 {
> +		compatible = "arm,cortex-a7";
> +		reg = <100>;
> +		operating-points-phandle = <&cluster1_opp>;
> +	};
> +
> +	cpu3: cpu@101 {
> +		compatible = "arm,cortex-a7";
> +		reg = <101>;
> +		operating-points-phandle = <&cluster1_opp>;
> +	};
> +
> +	opps-table {
> +		cluster0_opp: cluster0_opp {
> +			operating-points = <
> +				/* kHz    uV */
> +				792000  1100000
> +				396000  950000
> +				198000  850000
> +			>;
> +		};
Style comment - add an EOL
> +		cluster1_opp: cluster1_opp {
> +			operating-points = <
> +				/* kHz    uV */
> +				792000  950000
> +				396000  750000
> +				198000  450000
> +			>;
> +		};
> +		... /* other device OPP nodes */
> +	};
> +
> +Container Node
> +--------------
> +	- It's highly recommended to place all the shared OPPs under a single
> +	  node for consistency and better readability
> +	- It's quite similar to clocks or pinmux container nodes
> +	- In the above examples, "opps-table" is the container node
> 

in short, I love this - thanks for doing this.

However, could you squash this into patch #2? Having the implementation
and binding together is better for git log history.
Sudeep KarkadaNagesha Oct. 3, 2013, 1:05 p.m. UTC | #2
Hi Nishanth,

Thanks for reviewing it.

On 03/10/13 13:40, Nishanth Menon wrote:
> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
>> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
[...]
>> +4. Consider an AMP(asymmetric multi-processor) sytem with 2 clusters of
>> +   CPUs. Each cluster has 2 CPUs and all the CPUs within the cluster share
>> +   the clock domain.
>> +
>> +	cpu0: cpu@0 {
>> +		compatible = "arm,cortex-a15";
>> +		reg = <0>;
>> +		operating-points-phandle = <&cluster0_opp>;
>> +	};
>> +
>> +	cpu1: cpu@1 {
>> +		compatible = "arm,cortex-a15";
>> +		reg = <1>;
>> +		operating-points-phandle = <&cluster0_opp>;
>> +	};
>> +
>> +	cpu2: cpu@100 {
>> +		compatible = "arm,cortex-a7";
>> +		reg = <100>;
>> +		operating-points-phandle = <&cluster1_opp>;
>> +	};
>> +
>> +	cpu3: cpu@101 {
>> +		compatible = "arm,cortex-a7";
>> +		reg = <101>;
>> +		operating-points-phandle = <&cluster1_opp>;
>> +	};
>> +
>> +	opps-table {
>> +		cluster0_opp: cluster0_opp {
>> +			operating-points = <
>> +				/* kHz    uV */
>> +				792000  1100000
>> +				396000  950000
>> +				198000  850000
>> +			>;
>> +		};
> Style comment - add an EOL
Ok will fix up.

>> +		cluster1_opp: cluster1_opp {
>> +			operating-points = <
>> +				/* kHz    uV */
>> +				792000  950000
>> +				396000  750000
>> +				198000  450000
>> +			>;
>> +		};
>> +		... /* other device OPP nodes */
>> +	}
>> +
>> +Container Node
>> +--------------
>> +	- It's highly recommended to place all the shared OPPs under single
>> +	  node for consistency and better readability
>> +	- It's quite similar to clocks or pinmux container nodes
>> +	- In the above examples, "opps-table" is the container node
>>
> 
> in short, I love this - thanks for doing this.
> 
> However, could you squash this to patch #2 -> having implementation
> and binding together is better for git log history.
> 
Based on some arguments in other threads[1] on the devicetree list, I thought
having separate patches for the binding and driver changes was preferred. Hence
the split; I am OK either way.

Can I add your ACK/Reviewed-by otherwise?

Regards,
Sudeep

[1] http://www.spinics.net/lists/devicetree/msg04855.html


--
To unsubscribe from this list: send the line "unsubscribe linux-pm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Nishanth Menon Oct. 3, 2013, 2:29 p.m. UTC | #3
On 10/03/2013 08:05 AM, Sudeep KarkadaNagesha wrote:
[...]
>> However, could you squash this to patch #2 -> having implementation
>> and binding together is better for git log history.
>>
> Based on some arguments on the other threads[1] on devicetree list, I thought
> having separate patches for binding and driver changes is preferred. Hence the
> split, I am OK either way.
Thanks for pointing the discussion out.

/me might rant about this ;) -> if someone has a strong opinion about
this, they should probably propose a change to submitting patches
guideline.. Grr..

I leave this to Rafael as to how he'd like this to be squashed/split

> 
> Can I add your ACK/Reviewed-by otherwise ?
> 
> Regards,
> Sudeep
> 
> [1] http://www.spinics.net/lists/devicetree/msg04855.html
>
Sudeep KarkadaNagesha Oct. 7, 2013, 12:40 p.m. UTC | #4
Hi Mark, Stephen,

On 01/10/13 14:32, Sudeep KarkadaNagesha wrote:
> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
> 
> If more than one similar devices share the same operating points(OPPs)
> being in the same clock domain, currently we need to replicate the
> OPP entries in all the nodes.
> 
> This patch extends existing binding by adding a new property named
> 'operating-points-phandle' to specify the phandle in any device node
> pointing to another node which contains the actual OPP tuples.
> This helps to avoid replication if multiple devices share the OPPs.
> 

Can you review this version?

Regards,
Sudeep

Rob Herring Oct. 7, 2013, 4:01 p.m. UTC | #5
On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
> 
> If more than one similar devices share the same operating points(OPPs)
> being in the same clock domain, currently we need to replicate the
> OPP entries in all the nodes.
> 
> This patch extends existing binding by adding a new property named
> 'operating-points-phandle' to specify the phandle in any device node
> pointing to another node which contains the actual OPP tuples.
> This helps to avoid replication if multiple devices share the OPPs.
> 
[...]
> +	opps-table {
> +		cpu_opp: cpu_opp {
> +			operating-points = <
> +				/* kHz    uV */
> +				792000  1100000
> +				396000  950000
> +				198000  850000
> +			>;
> +		};
> +		... /* other device OPP nodes */

But this is a subnode of /cpus. IMO, OPPs should be located near what
they control.


> +	}
> +
> +4. Consider an AMP(asymmetric multi-processor) sytem with 2 clusters of
> +   CPUs. Each cluster has 2 CPUs and all the CPUs within the cluster share
> +   the clock domain.
> +
> +	cpu0: cpu@0 {
> +		compatible = "arm,cortex-a15";
> +		reg = <0>;
> +		operating-points-phandle = <&cluster0_opp>;
> +	};
> +
> +	cpu1: cpu@1 {
> +		compatible = "arm,cortex-a15";
> +		reg = <1>;
> +		operating-points-phandle = <&cluster0_opp>;
> +	};
> +
> +	cpu2: cpu@100 {
> +		compatible = "arm,cortex-a7";
> +		reg = <100>;
> +		operating-points-phandle = <&cluster1_opp>;
> +	};
> +
> +	cpu3: cpu@101 {
> +		compatible = "arm,cortex-a7";
> +		reg = <101>;
> +		operating-points-phandle = <&cluster1_opp>;
> +	};
> +
> +	opps-table {
> +		cluster0_opp: cluster0_opp {

Why not use the cpu topology? Then the operating point can apply to
cores based on the position in the topology. You don't even need a
phandle in that case. You can look for OPPs in either a cpu node or in
the topology.


> +			operating-points = <
> +				/* kHz    uV */
> +				792000  1100000
> +				396000  950000
> +				198000  850000
> +			>;
> +		};
> +		cluster1_opp: cluster1_opp {
> +			operating-points = <
> +				/* kHz    uV */
> +				792000  950000
> +				396000  750000
> +				198000  450000
> +			>;
> +		};
> +		... /* other device OPP nodes */
> +	}
> +
> +Container Node
> +--------------
> +	- It's highly recommended to place all the shared OPPs under single
> +	  node for consistency and better readability
> +	- It's quite similar to clocks or pinmux container nodes
> +	- In the above examples, "opps-table" is the container node
> 

Sudeep KarkadaNagesha Oct. 8, 2013, 12:55 p.m. UTC | #6
On 07/10/13 17:01, Rob Herring wrote:
> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
>> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
[...]
>> +	opps-table {
>> +		cpu_opp: cpu_opp {
>> +			operating-points = <
>> +				/* kHz    uV */
>> +				792000  1100000
>> +				396000  950000
>> +				198000  850000
>> +			>;
>> +		};
>> +		... /* other device OPP nodes */
> 
> But this is a subnode of /cpus. IMO, OPPs should be located near what
> they control.
> 
No, the idea was to group all the shared OPPs in a container node, like clocks
or pinmux. So the opps-table in the above example need not be a subnode of /cpus.

> 
>> +	}
>> +
>> +4. Consider an AMP(asymmetric multi-processor) sytem with 2 clusters of
>> +   CPUs. Each cluster has 2 CPUs and all the CPUs within the cluster share
>> +   the clock domain.
>> +
>> +	cpu0: cpu@0 {
>> +		compatible = "arm,cortex-a15";
>> +		reg = <0>;
>> +		operating-points-phandle = <&cluster0_opp>;
>> +	};
>> +
>> +	cpu1: cpu@1 {
>> +		compatible = "arm,cortex-a15";
>> +		reg = <1>;
>> +		operating-points-phandle = <&cluster0_opp>;
>> +	};
>> +
>> +	cpu2: cpu@100 {
>> +		compatible = "arm,cortex-a7";
>> +		reg = <100>;
>> +		operating-points-phandle = <&cluster1_opp>;
>> +	};
>> +
>> +	cpu3: cpu@101 {
>> +		compatible = "arm,cortex-a7";
>> +		reg = <101>;
>> +		operating-points-phandle = <&cluster1_opp>;
>> +	};
>> +
>> +	opps-table {
>> +		cluster0_opp: cluster0_opp {
> 
> Why not use the cpu topology? Then the operating point can apply to
> cores based on the position in the topology. You don't even need a
> phandle in that case. You can look for OPPs in either a cpu node or in
> the topology.
> 
Agreed, but a few thoughts behind this binding:

1. OPPs are not just CPU specific:
   How do we share OPPs between 2 devices in the same clock domain?
   Also, moving the OPPs into the CPU topology makes the parsing CPU
   specific. Currently the OPP library fetches the of_node from the
   device struct, which is applicable to any device.

2. Should the cpu topology (i.e. cpu-map) contain just the topology info,
   with phandles to these nodes used to set up any affinity?

3. As part of the RFC[1][2], it was also discussed that some SoCs provide
   multiple OPP profiles/options, of which only one can be used based on
   the board design. A phandle to the OPPs will also help resolve that
   issue.

Let me know your thoughts.

Regards,
Sudeep

[1] http://www.spinics.net/lists/cpufreq/msg06556.html
[2] http://www.spinics.net/lists/cpufreq/msg06685.html
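
Point 3 above can be sketched concretely: the SoC dtsi would provide several
OPP profile nodes, and each board's dts points the device's
operating-points-phandle at the one its design supports. A small illustrative
model in Python (the profile names, paths, and values are hypothetical, not
taken from the patch):

```python
# Illustrative model (not kernel code): an SoC provides multiple OPP
# profiles, and a board selects exactly one via operating-points-phandle.

soc_opp_profiles = {
    "/opps-table/profile_economy": [(396000, 850000), (198000, 750000)],
    "/opps-table/profile_performance": [(792000, 1100000), (396000, 950000)],
}

def select_profile(profiles, device_props):
    """Resolve the OPP profile a board chose via operating-points-phandle."""
    target = device_props["operating-points-phandle"]
    if target not in profiles:
        raise KeyError("phandle target %s has no OPPs" % target)
    return profiles[target]

# A board designed for the higher-voltage rail picks the performance
# profile; another board could point the same phandle at economy instead.
board_cpu0 = {"operating-points-phandle": "/opps-table/profile_performance"}
opps = select_profile(soc_opp_profiles, board_cpu0)
print(opps[0])  # -> (792000, 1100000)
```

The unused profiles simply remain unreferenced nodes in the tree, so one SoC
dtsi can serve several board designs without duplicating OPP data.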

Sudeep KarkadaNagesha Oct. 17, 2013, 11:15 a.m. UTC | #7
Hi DT-folks,

On 08/10/13 13:55, Sudeep KarkadaNagesha wrote:
> On 07/10/13 17:01, Rob Herring wrote:
>> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
>>> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
[...]
>>
>> But this is a subnode of /cpus. IMO, OPPs should be located near what
>> they control.
>>
> No, the idea was to group all the shared OPPs in a container node like clocks or
> pinmux. So opps-table in above example need not be subnode of /cpus.
> 
>>
>>> +	}
>>> +
>>> +4. Consider an AMP(asymmetric multi-processor) sytem with 2 clusters of
>>> +   CPUs. Each cluster has 2 CPUs and all the CPUs within the cluster share
>>> +   the clock domain.
>>> +
>>> +	cpu0: cpu@0 {
>>> +		compatible = "arm,cortex-a15";
>>> +		reg = <0>;
>>> +		operating-points-phandle = <&cluster0_opp>;
>>> +	};
>>> +
>>> +	cpu1: cpu@1 {
>>> +		compatible = "arm,cortex-a15";
>>> +		reg = <1>;
>>> +		operating-points-phandle = <&cluster0_opp>;
>>> +	};
>>> +
>>> +	cpu2: cpu@100 {
>>> +		compatible = "arm,cortex-a7";
>>> +		reg = <100>;
>>> +		operating-points-phandle = <&cluster1_opp>;
>>> +	};
>>> +
>>> +	cpu3: cpu@101 {
>>> +		compatible = "arm,cortex-a7";
>>> +		reg = <101>;
>>> +		operating-points-phandle = <&cluster1_opp>;
>>> +	};
>>> +
>>> +	opps-table {
>>> +		cluster0_opp: cluster0_opp {
>>
>> Why not use the cpu topology? Then the operating point can apply to
>> cores based on the position in the topology. You don't even need a
>> phandle in that case. You can look for OPPs in either a cpu node or in
>> the topology.
>>
> Agreed but few thoughts behind this binding:
> 
> 1. OPPs are not just cpu specific:
>    How do we share OPPs for 2 devices in the same clock domain ?
>    Also moving the OPP into cpu-topo makes parsing specific to cpu.
>    Currently the OPP library fetches the of_node from the device struct
>    which is applicable to any devices.
> 
> 2. Should cpu topology(i.e. cpu-map) just contain the topology info ? and
>    phandles to these nodes should be used to setup any affinity ?
> 
> 3. As part of RFC[1][2], it was also discussed that on some SoCs we need
>    multiple OPP profiles/options which it provides but only one of them can
>    be used based on the board design. phandle to OPP will also help resolve
>    that issue.
> 
> Let me know your thoughts.
> 

(pinging to get this discussion alive again)
Is this extension to the binding to support shared OPPs acceptable?
If not, any ideas on how to progress would be helpful.

Regards,
Sudeep

> 
> [1] http://www.spinics.net/lists/cpufreq/msg06556.html
> [2] http://www.spinics.net/lists/cpufreq/msg06685.html
> 
> --
> To unsubscribe from this list: send the line "unsubscribe devicetree" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 


Rob Herring Oct. 17, 2013, 1:22 p.m. UTC | #8
On Tue, Oct 8, 2013 at 7:55 AM, Sudeep KarkadaNagesha
<Sudeep.KarkadaNagesha@arm.com> wrote:
> On 07/10/13 17:01, Rob Herring wrote:
>> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
>>> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
>>>
>>> If more than one similar devices share the same operating points(OPPs)
>>> being in the same clock domain, currently we need to replicate the
>>> OPP entries in all the nodes.
>>>
>>> This patch extends existing binding by adding a new property named
>>> 'operating-points-phandle' to specify the phandle in any device node
>>> pointing to another node which contains the actual OPP tuples.
>>> This helps to avoid replication if multiple devices share the OPPs.
>>>

[snip]

>>> +    opps-table {
>>> +            cpu_opp: cpu_opp {
>>> +                    operating-points = <
>>> +                            /* kHz    uV */
>>> +                            792000  1100000
>>> +                            396000  950000
>>> +                            198000  850000
>>> +                    >;
>>> +            };
>>> +            ... /* other device OPP nodes */
>>
>> But this is a subnode of /cpus. IMO, OPPs should be located near what
>> they control.
>>
> No, the idea was to group all the shared OPPs in a container node like clocks or
> pinmux. So opps-table in above example need not be subnode of /cpus.

Clocks are typically defined in a clock controller node. pinmux
doesn't fit anywhere well, so it is an exception, and we don't want to
expand on that. OPPs, at least for CPUs, would typically follow the
topology, so put them there.

>>> +    cpu3: cpu@101 {
>>> +            compatible = "arm,cortex-a7";
>>> +            reg = <101>;
>>> +            operating-points-phandle = <&cluster1_opp>;
>>> +    };
>>> +
>>> +    opps-table {
>>> +            cluster0_opp: cluster0_opp {
>>
>> Why not use the cpu topology? Then the operating point can apply to
>> cores based on the position in the topology. You don't even need a
>> phandle in that case. You can look for OPPs in either a cpu node or in
>> the topology.
>>
> Agreed but few thoughts behind this binding:
>
> 1. OPPs are not just cpu specific:
>    How do we share OPPs for 2 devices in the same clock domain ?
>    Also moving the OPP into cpu-topo makes parsing specific to cpu.
>    Currently the OPP library fetches the of_node from the device struct
>    which is applicable to any devices.

But an OPP for a cpu is cpu specific, so put it there. For devices, we
may need something different. Perhaps it should be part of the clock
binding in some way.

> 2. Should cpu topology(i.e. cpu-map) just contain the topology info ? and
>    phandles to these nodes should be used to setup any affinity ?

Can't the OPPs just be in the topology nodes at the appropriate level?
Then the kernel can look for the OPPs in either place. Perhaps Lorenzo
has thoughts on this.

> 3. As part of RFC[1][2], it was also discussed that on some SoCs we need
>    multiple OPP profiles/options which it provides but only one of them can
>    be used based on the board design. phandle to OPP will also help resolve
>    that issue.

Then include the OPP required for that board design. You could have
dtsi files of OPPs that can be included based on the board.

Rob
Sudeep KarkadaNagesha Oct. 17, 2013, 5:22 p.m. UTC | #9
On 17/10/13 14:22, Rob Herring wrote:
> On Tue, Oct 8, 2013 at 7:55 AM, Sudeep KarkadaNagesha
> <Sudeep.KarkadaNagesha@arm.com> wrote:
>> On 07/10/13 17:01, Rob Herring wrote:
>>> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
>>>> From: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com>
>>>>
>>>> If more than one similar devices share the same operating points(OPPs)
>>>> being in the same clock domain, currently we need to replicate the
>>>> OPP entries in all the nodes.
>>>>
>>>> This patch extends existing binding by adding a new property named
>>>> 'operating-points-phandle' to specify the phandle in any device node
>>>> pointing to another node which contains the actual OPP tuples.
>>>> This helps to avoid replication if multiple devices share the OPPs.
>>>>
> 
> [snip]
> 
>>>> +    opps-table {
>>>> +            cpu_opp: cpu_opp {
>>>> +                    operating-points = <
>>>> +                            /* kHz    uV */
>>>> +                            792000  1100000
>>>> +                            396000  950000
>>>> +                            198000  850000
>>>> +                    >;
>>>> +            };
>>>> +            ... /* other device OPP nodes */
>>>
>>> But this is a subnode of /cpus. IMO, OPPs should be located near what
>>> they control.
>>>
>> No, the idea was to group all the shared OPPs in a container node like clocks or
>> pinmux. So opps-table in above example need not be subnode of /cpus.
> 
> Clocks are typically defined in a clock controller node. pinmux
> doesn't fit anywhere well so it is an exception. We don't want to
> expand on that. OPP's at least for cpu's would typically follow the
> topology, so put them there.
> 
I agree we don't want OPP to be another exception like pin-mux.

>>>> +    cpu3: cpu@101 {
>>>> +            compatible = "arm,cortex-a7";
>>>> +            reg = <101>;
>>>> +            operating-points-phandle = <&cluster1_opp>;
>>>> +    };
>>>> +
>>>> +    opps-table {
>>>> +            cluster0_opp: cluster0_opp {
>>>
>>> Why not use the cpu topology? Then the operating point can apply to
>>> cores based on the position in the topology. You don't even need a
>>> phandle in that case. You can look for OPPs in either a cpu node or in
>>> the topology.
>>>
>> Agreed but few thoughts behind this binding:
>>
>> 1. OPPs are not just cpu specific:
>>    How do we share OPPs for 2 devices in the same clock domain ?
>>    Also moving the OPP into cpu-topo makes parsing specific to cpu.
>>    Currently the OPP library fetches the of_node from the device struct
>>    which is applicable to any devices.
> 
> But an OPP for a cpu is cpu specific, so put it there. For devices, we
> may need something different. Perhaps it should be part of the clock
> binding in some way.
> 

I still don't understand why we need to handle CPUs and other devices
differently. If you think that devices sharing OPPs can be handled through
the clock binding in some way, shouldn't that be equally applicable to CPUs
too? In fact, when I started this thread I had a similar thought and asked
about it on the list[1].

>> 2. Should cpu topology(i.e. cpu-map) just contain the topology info ? and
>>    phandles to these nodes should be used to setup any affinity ?
> 
> Can't the OPP just be in the topology nodes at the appropriate level.
> Then the kernel can look for the OPPs in either place. Perhaps Lorenzo
> has thoughts on this.
> 

OK, I will check with Lorenzo. I agree that we can extend the topology
binding so that we add OPPs there, but I would like to avoid that if we
think we can solve it through clock bindings as mentioned above.

>> 3. As part of RFC[1][2], it was also discussed that on some SoCs we need
>>    multiple OPP profiles/options which it provides but only one of them can
>>    be used based on the board design. phandle to OPP will also help resolve
>>    that issue.
> 
> Then include the OPP required for that board design. You could have
> dtsi files of OPPs that can be included based on the board.
> 

I will let Nishanth comment on this.

Regards,
Sudeep

[1] http://marc.info/?l=linux-pm&m=137449807414601&w=4

Nishanth Menon Oct. 17, 2013, 6:36 p.m. UTC | #10
On 10/17/2013 12:22 PM, Sudeep KarkadaNagesha wrote:
> On 17/10/13 14:22, Rob Herring wrote:
>> On Tue, Oct 8, 2013 at 7:55 AM, Sudeep KarkadaNagesha
>> <Sudeep.KarkadaNagesha@arm.com> wrote:
>>> On 07/10/13 17:01, Rob Herring wrote:
>>>> On 10/01/2013 08:32 AM, Sudeep KarkadaNagesha wrote:
[...]
>>> 3. As part of RFC[1][2], it was also discussed that on some SoCs we need
>>>    multiple OPP profiles/options which it provides but only one of them can
>>>    be used based on the board design. phandle to OPP will also help resolve
>>>    that issue.
>>
>> Then include the OPP required for that board design. You could have
>> dtsi files of OPPs that can be included based on the board.
>>
> 
> I will let Nishanth comment on this.

there are a couple of angles to this[1] that Sudeep already pointed at:

A) On SoCs like OMAP3430, 3630, 3530, 3730, OMAP4430, OMAP4460, OMAP4470,
OMAP5430 and OMAP5432, there will be at least 2 dtsi per OPP set
per chip - and we have OPP sets per device inside the SoC - one for
CPU, one for GPU, one for IVA (accelerator), one for L3 (interconnect).
Further, when we go to newer variants like AM335x, AM437x and DRA7, the
OPP sets can increase to around 4/5 per domain per chip. Most of these
are deterministic based on efused bit fields.
-> On this angle, arch/arm/mach-imx/mach-imx6q.c is a good example of
a similar challenge - here OPP enable/disable of hardcoded frequencies
is kept in the kernel, but the data is picked up from the SoC dts - not a
good story when we want to maintain old dtb compatibility - if a frequency
changes in either the dts or the kernel, we won't function.
-> We are working internally on a potentially generic solution to
address this - not yet at a stage even for RFC. If that works out,
then we don't need profiles to handle things; instead an
alternative constraint description is provided in the dts. Otherwise we
could force folks to use specific dtsis, OR allow generic handling based
on profiles.
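
A rough model of angle (A)'s efuse-keyed selection, where the full OPP table
is fixed per chip family and an efuse bit field determines which subset a
given part may use (the grade encoding and all values here are hypothetical,
not TI's actual fuse layout):

```python
# Hypothetical efuse speed grades: grade -> highest permitted CPU
# frequency in kHz.  Real devices encode this differently.
EFUSE_MAX_FREQ = {0: 396000, 1: 792000}

# Full OPP table for the chip family (kHz, uV), as a dts would list it.
ALL_OPPS = [(792000, 1100000), (396000, 950000), (198000, 850000)]

def available_opps(efuse_grade):
    """OPPs a given part may actually use, per its efuse speed grade."""
    max_khz = EFUSE_MAX_FREQ[efuse_grade]
    return [(freq, uv) for (freq, uv) in ALL_OPPS if freq <= max_khz]

assert available_opps(1) == ALL_OPPS        # fully enabled part
assert available_opps(0) == ALL_OPPS[1:]    # slower speed grade
```

This keeps the full table in one place while the runtime subset is derived,
which is the alternative to maintaining one dtsi per OPP set.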

B) Now we can have a chip which can support a high frequency (the efuse
will say so), but board characteristics (power distribution network
requirements) may not allow the chip to perform. We can argue "why
would anyone put a higher-performing chip on a non-capable board?" -
but we all know the reality of marketing folks and the cost-of-chip
story. Still, this is a real constraint when we try to support our SoCs
on as many platforms as possible while enabling Linux on all of them.
-> We could use profiles to deal with this.
-> We could use a dtsi override with OPPs to deal with this.
OR
-> We could use custom properties (something like
ti,non-optimal-board-design ;) ).

My personal preference is to state hardware capability solely in the dts,
with the kernel just operating on the dts data. OPP profiles did seem to
me the intuitive way to do that job, as they describe exactly what the
hardware capabilities are. Creating n different dtsis per device is
possible, but it essentially describes the same information.

[1] http://www.spinics.net/lists/cpufreq/msg06685.html
Lorenzo Pieralisi Oct. 18, 2013, 8:40 a.m. UTC | #11
On Thu, Oct 17, 2013 at 02:22:44PM +0100, Rob Herring wrote:

[...]

> >>> +    cpu3: cpu@101 {
> >>> +            compatible = "arm,cortex-a7";
> >>> +            reg = <101>;
> >>> +            operating-points-phandle = <&cluster1_opp>;
> >>> +    };
> >>> +
> >>> +    opps-table {
> >>> +            cluster0_opp: cluster0_opp {
> >>
> >> Why not use the cpu topology? Then the operating point can apply to
> >> cores based on the position in the topology. You don't even need a
> >> phandle in that case. You can look for OPPs in either a cpu node or in
> >> the topology.
> >>
> > Agreed but few thoughts behind this binding:
> >
> > 1. OPPs are not just cpu specific:
> >    How do we share OPPs for 2 devices in the same clock domain ?
> >    Also moving the OPP into cpu-topo makes parsing specific to cpu.
> >    Currently the OPP library fetches the of_node from the device struct
> >    which is applicable to any devices.
> 
> But an OPP for a cpu is cpu specific, so put it there. For devices, we
> may need something different. Perhaps it should be part of the clock
> binding in some way.
> 
> > 2. Should cpu topology(i.e. cpu-map) just contain the topology info ? and
> >    phandles to these nodes should be used to setup any affinity ?
> 
> Can't the OPP just be in the topology nodes at the appropriate level.
> Then the kernel can look for the OPPs in either place. Perhaps Lorenzo
> has thoughts on this.

The reason we introduced the topology nodes (ie cpu-map) was to provide
a topology description of the system with leaf nodes linked to cpu
nodes; nodes in the topology do not map to HW entities.
Put it differently: IMHO we should attach no HW meaning to nodes in the
cpu-map, so we should not add, e.g., OPPs under cluster nodes, because
those nodes do not represent HW entities. I would like to keep cpu-map a
pure DT representation of the topology, no more, but the debate is open.

Lorenzo


Patch

diff --git a/Documentation/devicetree/bindings/power/opp.txt b/Documentation/devicetree/bindings/power/opp.txt
index 74499e5..f59b878 100644
--- a/Documentation/devicetree/bindings/power/opp.txt
+++ b/Documentation/devicetree/bindings/power/opp.txt
@@ -4,22 +4,159 @@  SoCs have a standard set of tuples consisting of frequency and
 voltage pairs that the device will support per voltage domain. These
 are called Operating Performance Points or OPPs.
 
-Properties:
+Required Properties:
 - operating-points: An array of 2-tuples items, and each item consists
   of frequency and voltage like <freq-kHz vol-uV>.
 	freq: clock frequency in kHz
 	vol: voltage in microvolt
 
+- operating-points-phandle: phandle to the device tree node which contains
+	the operating points tuples (recommended to be used if multiple
+	devices are in the same clock domain and hence share OPPs, as it
+	avoids replication of OPPs)
+
+  operating-points and operating-points-phandle are mutually exclusive, only
+  one of them can be present in any device node.
+
 Examples:
 
-cpu@0 {
-	compatible = "arm,cortex-a9";
-	reg = <0>;
-	next-level-cache = <&L2>;
-	operating-points = <
-		/* kHz    uV */
-		792000  1100000
-		396000  950000
-		198000  850000
-	>;
-};
+1. A uniprocessor system (phandle not required)
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points = <
+			/* kHz    uV */
+			792000  1100000
+			396000  950000
+			198000  850000
+		>;
+	};
+
+2a. Consider an SMP system with 4 CPUs in the same clock domain (no phandle)
+    Some existing DTs describe homogeneous SMP systems by only listing the
+    OPPs in the cpu@0 node. For compatibility with existing DTs, an
+    operating system may handle this case specially.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points = <
+			/* kHz    uV */
+			792000  1100000
+			396000  950000
+			198000  850000
+		>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a9";
+		reg = <1>;
+	};
+
+	cpu2: cpu@2 {
+		compatible = "arm,cortex-a9";
+		reg = <2>;
+	};
+
+	cpu3: cpu@3 {
+		compatible = "arm,cortex-a9";
+		reg = <3>;
+	};
+
+2b. Consider an SMP system with 4 CPUs in the same clock domain (with phandle)
+    If more than one device of the same type shares the same OPPs, for example
+    all the CPUs on a SoC or in a single cluster on a SoC, then we can avoid
+    replicating the OPPs in all the nodes. We can specify the phandle of
+    the node which contains the OPP tuples instead.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a9";
+		reg = <1>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu2: cpu@2 {
+		compatible = "arm,cortex-a9";
+		reg = <2>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu3: cpu@3 {
+		compatible = "arm,cortex-a9";
+		reg = <3>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	opps-table {
+		cpu_opp: cpu_opp {
+			operating-points = <
+				/* kHz    uV */
+				792000  1100000
+				396000  950000
+				198000  850000
+			>;
+		};
+		... /* other device OPP nodes */
+	};
+
+3. Consider an AMP (asymmetric multi-processor) system with 2 clusters of
+   CPUs. Each cluster has 2 CPUs and all the CPUs within a cluster share
+   the clock domain.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a15";
+		reg = <0>;
+		operating-points-phandle = <&cluster0_opp>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a15";
+		reg = <1>;
+		operating-points-phandle = <&cluster0_opp>;
+	};
+
+	cpu2: cpu@100 {
+		compatible = "arm,cortex-a7";
+		reg = <100>;
+		operating-points-phandle = <&cluster1_opp>;
+	};
+
+	cpu3: cpu@101 {
+		compatible = "arm,cortex-a7";
+		reg = <101>;
+		operating-points-phandle = <&cluster1_opp>;
+	};
+
+	opps-table {
+		cluster0_opp: cluster0_opp {
+			operating-points = <
+				/* kHz    uV */
+				792000  1100000
+				396000  950000
+				198000  850000
+			>;
+		};
+		cluster1_opp: cluster1_opp {
+			operating-points = <
+				/* kHz    uV */
+				792000  950000
+				396000  750000
+				198000  450000
+			>;
+		};
+		... /* other device OPP nodes */
+	};
+
+Container Node
+--------------
+	- It's highly recommended to place all the shared OPPs under a single
+	  node for consistency and better readability
+	- It's quite similar to clocks or pinmux container nodes
+	- In the above examples, "opps-table" is the container node