From patchwork Tue Oct  1 13:32:58 2013
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sudeep KarkadaNagesha
X-Patchwork-Id: 2970061
From: Sudeep KarkadaNagesha
To: cpufreq@vger.kernel.org, linux-pm@vger.kernel.org,
    devicetree@vger.kernel.org
Cc: Sudeep.KarkadaNagesha@arm.com, Sudeep KarkadaNagesha, Rob Herring,
    Pawel Moll, Mark Rutland, Stephen Warren, "Rafael J. Wysocki",
    Nishanth Menon
Subject: [PATCH v2 1/5] PM / OPP: extend DT binding to specify phandle of
 another node for OPP
Date: Tue, 1 Oct 2013 14:32:58 +0100
Message-Id: <1380634382-15609-2-git-send-email-Sudeep.KarkadaNagesha@arm.com>
X-Mailer: git-send-email 1.8.1.2
In-Reply-To: <1380634382-15609-1-git-send-email-Sudeep.KarkadaNagesha@arm.com>
References: <1380634382-15609-1-git-send-email-Sudeep.KarkadaNagesha@arm.com>
X-Mailing-List: linux-pm@vger.kernel.org

From: Sudeep KarkadaNagesha

If two or more similar devices share the same operating points (OPPs)
because they are in the same clock domain, the OPP entries currently
have to be replicated in each of their nodes.

This patch extends the existing binding with a new property named
'operating-points-phandle', which allows any device node to point, via
a phandle, to another node containing the actual OPP tuples. This
avoids replicating the OPPs when multiple devices share them.

Cc: Rob Herring
Cc: Pawel Moll
Cc: Mark Rutland
Cc: Stephen Warren
Cc: "Rafael J. Wysocki"
Cc: Nishanth Menon
Signed-off-by: Sudeep KarkadaNagesha
---
 Documentation/devicetree/bindings/power/opp.txt | 161 ++++++++++++++++++++++--
 1 file changed, 149 insertions(+), 12 deletions(-)

diff --git a/Documentation/devicetree/bindings/power/opp.txt b/Documentation/devicetree/bindings/power/opp.txt
index 74499e5..f59b878 100644
--- a/Documentation/devicetree/bindings/power/opp.txt
+++ b/Documentation/devicetree/bindings/power/opp.txt
@@ -4,22 +4,159 @@
 SoCs have a standard set of tuples consisting of frequency and voltage
 pairs that the device will support per voltage domain.
 These are called Operating Performance Points or OPPs.
 
-Properties:
+Required Properties:
 - operating-points: An array of 2-tuples items, and each item consists
   of frequency and voltage like <freq-kHz vol-uV>.
 	freq: clock frequency in kHz
 	vol: voltage in microvolt
+- operating-points-phandle: phandle to the device tree node which contains
+	the operating points tuples (recommended if multiple devices are in
+	the same clock domain and hence share OPPs, as it avoids replication
+	of the OPPs)
+
+  operating-points and operating-points-phandle are mutually exclusive;
+  only one of them may be present in any device node.
+
 Examples:
-cpu@0 {
-	compatible = "arm,cortex-a9";
-	reg = <0>;
-	next-level-cache = <&L2>;
-	operating-points = <
-		/* kHz	  uV */
-		792000	1100000
-		396000	 950000
-		198000	 850000
-	>;
-};
+
+1. A uniprocessor system (phandle not required)
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points = <
+			/* kHz	  uV */
+			792000	1100000
+			396000	 950000
+			198000	 850000
+		>;
+	};
+
+2a. An SMP system with 4 CPUs in the same clock domain (no phandle).
+	Some existing DTs describe homogeneous SMP systems by listing the
+	OPPs only in the cpu@0 node. For compatibility with such existing
+	DTs, an operating system may handle this case specially.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points = <
+			/* kHz	  uV */
+			792000	1100000
+			396000	 950000
+			198000	 850000
+		>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a9";
+		reg = <1>;
+	};
+
+	cpu2: cpu@2 {
+		compatible = "arm,cortex-a9";
+		reg = <2>;
+	};
+
+	cpu3: cpu@3 {
+		compatible = "arm,cortex-a9";
+		reg = <3>;
+	};
+
+2b. An SMP system with 4 CPUs in the same clock domain (with phandle).
+	If more than one device of the same type shares the same OPPs, for
+	example all the CPUs on a SoC or in a single cluster on a SoC, the
+	OPPs need not be replicated in all the nodes; the nodes can instead
+	carry the phandle of the node which contains the OPP tuples.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a9";
+		reg = <0>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a9";
+		reg = <1>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu2: cpu@2 {
+		compatible = "arm,cortex-a9";
+		reg = <2>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	cpu3: cpu@3 {
+		compatible = "arm,cortex-a9";
+		reg = <3>;
+		operating-points-phandle = <&cpu_opp>;
+	};
+
+	opps-table {
+		cpu_opp: cpu_opp {
+			operating-points = <
+				/* kHz	  uV */
+				792000	1100000
+				396000	 950000
+				198000	 850000
+			>;
+		};
+		... /* other device OPP nodes */
+	};
+
+3. An AMP (asymmetric multi-processor) system with 2 clusters of CPUs.
+	Each cluster has 2 CPUs, and all the CPUs within a cluster share
+	the clock domain.
+
+	cpu0: cpu@0 {
+		compatible = "arm,cortex-a15";
+		reg = <0>;
+		operating-points-phandle = <&cluster0_opp>;
+	};
+
+	cpu1: cpu@1 {
+		compatible = "arm,cortex-a15";
+		reg = <1>;
+		operating-points-phandle = <&cluster0_opp>;
+	};
+
+	cpu2: cpu@100 {
+		compatible = "arm,cortex-a7";
+		reg = <100>;
+		operating-points-phandle = <&cluster1_opp>;
+	};
+
+	cpu3: cpu@101 {
+		compatible = "arm,cortex-a7";
+		reg = <101>;
+		operating-points-phandle = <&cluster1_opp>;
+	};
+
+	opps-table {
+		cluster0_opp: cluster0_opp {
+			operating-points = <
+				/* kHz	  uV */
+				792000	1100000
+				396000	 950000
+				198000	 850000
+			>;
+		};
+		cluster1_opp: cluster1_opp {
+			operating-points = <
+				/* kHz	  uV */
+				792000	 950000
+				396000	 750000
+				198000	 450000
+			>;
+		};
+		... /* other device OPP nodes */
+	};
+
+Container Node
+--------------
+ - It is highly recommended to place all the shared OPPs under a single
+   node, for consistency and better readability.
+ - This is quite similar to the clocks or pinmux container nodes.
+ - In the examples above, "opps-table" is the container node.
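
[Editor's note, not part of the patch: purely as an illustration, the
consumer-side lookup order the new property implies can be sketched in
plain C. All names here (resolve_opp_node, struct node, etc.) are
hypothetical stand-ins; a real kernel implementation would use
of_parse_phandle() and the OPP library rather than this mock tree.]

```c
/*
 * Illustrative sketch only -- not kernel code.  Models the lookup the
 * binding implies: a device node carries either "operating-points" or
 * "operating-points-phandle" (never both); consumers follow the phandle,
 * when present, to the node holding the shared OPP tuples.
 */
#include <assert.h>
#include <stddef.h>

struct opp {
	unsigned long khz;	/* clock frequency in kHz */
	unsigned long uv;	/* voltage in microvolt   */
};

/* Hypothetical, simplified stand-in for a device tree node. */
struct node {
	const struct opp *opps;		/* "operating-points", or NULL         */
	size_t nr_opps;
	const struct node *opp_phandle;	/* "operating-points-phandle", or NULL */
};

/*
 * Resolve the node that actually holds the OPP tuples for a device.
 * Returns NULL if neither property is present, or if both are
 * (the binding makes them mutually exclusive).
 */
static const struct node *resolve_opp_node(const struct node *dev)
{
	if (dev->opps && dev->opp_phandle)
		return NULL;			/* invalid: properties are exclusive */
	if (dev->opp_phandle)
		return dev->opp_phandle;	/* shared OPPs live in another node  */
	return dev->opps ? dev : NULL;		/* local OPPs, or none at all        */
}

/* Shared table and two CPUs referencing it, mirroring example 2b. */
static const struct opp cpu_opp_tuples[] = {
	{ 792000, 1100000 },
	{ 396000,  950000 },
	{ 198000,  850000 },
};
static const struct node cpu_opp   = { cpu_opp_tuples, 3, NULL };
static const struct node cpu0      = { NULL, 0, &cpu_opp };
static const struct node cpu1      = { NULL, 0, &cpu_opp };
static const struct node bad_node  = { cpu_opp_tuples, 3, &cpu_opp };
```

Both CPUs resolve to the same shared node, so the tuples exist exactly
once, and a node that carries both properties is rejected.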